
MacArthur Foundation Research Network on Opening Governance formed to gather evidence and develop new designs for governing 

NEW YORK, NY, March 4, 2014 – The Governance Lab (The GovLab) at New York University today announced the formation of a Research Network on Opening Governance, which will seek to develop blueprints for more effective and legitimate democratic institutions to help improve people’s lives.
Convened and organized by the GovLab, the MacArthur Foundation Research Network on Opening Governance is made possible by a three-year grant of $5 million from the John D. and Catherine T. MacArthur Foundation as well as a gift from Google.org, which will allow the Network to tap the latest technological advances to further its work.
Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people. Network members include twelve experts (see below) in computer science, political science, policy informatics, social psychology and philosophy, law, and communications. This core group is supported by an advisory network of academics, technologists, and current and former government officials. Together, they will assess existing innovations in governing and experiment with new practices in how institutions make decisions at the local, national, and international levels.
Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.
The Network’s research will be complemented by theoretical writing and compelling storytelling designed to articulate and demonstrate clearly and concretely how governing agencies might work better than they do today. “We want to arm policymakers and practitioners with evidence of what works and what does not,” says Professor Beth Simone Noveck, Network Chair and author of Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, “which is vital to drive innovation, re-establish legitimacy and more effectively target scarce resources to solve today’s problems.”
“From prize-backed challenges to spur creative thinking to the use of expert networks to get the smartest people focused on a problem no matter where they work, this shift from top-down, closed, and professional government to decentralized, open, and smarter governance may be the major social innovation of the 21st century,” says Noveck. “The MacArthur Research Network on Opening Governance is the ideal crucible for helping the transition from closed and centralized to open and collaborative institutions of governance in a way that is scientifically sound and yields new insights to inform future efforts, always with an eye toward real-world impacts.”
MacArthur Foundation President Robert Gallucci added, “Recognizing that we cannot solve today’s challenges with yesterday’s tools, this interdisciplinary group will bring fresh thinking to questions about how our governing institutions operate, and how they can develop better ways to help address seemingly intractable social problems for the common good.”
Members
The MacArthur Research Network on Opening Governance comprises:
Chair: Beth Simone Noveck
Network Coordinator: Andrew Young
Chief of Research: Stefaan Verhulst
Faculty Members:

  • Sir Tim Berners-Lee (Massachusetts Institute of Technology (MIT)/University of Southampton, UK)
  • Deborah Estrin (Cornell Tech/Weill Cornell Medical College)
  • Erik Johnston (Arizona State University)
  • Henry Farrell (George Washington University)
  • Sheena S. Iyengar (Columbia Business School/Jerome A. Chazen Institute of International Business)
  • Karim Lakhani (Harvard Business School)
  • Anita McGahan (University of Toronto)
  • Cosma Shalizi (Carnegie Mellon/Santa Fe Institute)

Institutional Members:

  • Christian Bason and Jesper Christiansen (MindLab, Denmark)
  • Geoff Mulgan (National Endowment for Science, Technology and the Arts – NESTA, United Kingdom)
  • Lee Rainie (Pew Research Center)

The Network is eager to hear from and engage with the public as it undertakes its work. Please contact Stefaan Verhulst to share your ideas or identify opportunities to collaborate.

New Research Network to Study and Design Innovative Ways of Solving Public Problems
Paper by Alan W. Brown, Jerry Fishenden, and Mark Thompson: “For public sector organizations across the world, the pressures for improved efficiency during the past decades are now accompanied by an equally strong need to revolutionise service delivery to create solutions that better meet citizens’ needs; to develop channels that offer efficiency and increase inclusion to all citizens being served; and to re-invent supply chains to deliver services faster, cheaper, and more effectively. But how do government organisations ensure investment in digital transformation delivers the intended outcomes after earlier “online government” and “e-government” initiatives produced little in terms of significant, sustainable benefits? Here we focus on how digitisation, built on open standards, is transforming the public sector’s relationship with its citizens. This paper provides a perspective of digital change efforts across the UK government as an illustration of the improvements taking place more broadly in the public sector. It provides a vision for the future of our digital world, revealing the symbiotic relationship between organisational change and digitisation, and offering insights into public service delivery in the digital economy.”
Revolutionising Digital Public Service Delivery: A UK Government Perspective

Emerging Technology from the arXiv: “Do you know who can see the items you’ve posted on Facebook? This, of course, depends on the privacy settings you’ve used for each picture, text or link that you’ve shared throughout your Facebook history.
You might be extremely careful in deciding who can see these things. But as time goes on, the number of items people share increases. And the number of contacts they share them with increases too. So it’s easy to lose track of who can see what.
What’s more, an item that you may have been happy to share three years ago when you were at university, you may not be quite so happy to share now that you are looking for employment.
So how best to increase people’s awareness of their privacy settings? Today, Alexandra Cetto and pals from the University of Regensburg in Germany say they’ve developed a serious game called Friend Inspector that allows users to increase their privacy awareness on Facebook.
And they say that within five months of its launch, the game had been requested over 100,000 times.
In recent years, serious games have become an increasingly important learning medium through digital simulations and virtual environments. So Cetto and co set about developing a game that could increase people’s awareness of privacy on Facebook.
Designing serious games is something of a black art. At the very least, there needs to be motivation to play and some kind of feedback or score to beat. And at the same time, the game has to achieve some kind of learning objective, in this case an enhanced awareness of privacy.
Aimed at 16-25 year olds, the game these guys came up with is deceptively simple. When potential players land on the home page, they’re asked a simple question: “Do you know who can see your Facebook profile?” This is followed by the teaser: “Playfully discover who can see your shared items and get advice to improve your privacy.”
When players sign up, the game retrieves the player’s contacts, shared items and their privacy settings from Facebook. It then presents the player with a pair of these shared items asking which is more personal….
Finally, the game assesses the player’s score and makes a set of personalised recommendations about how to improve privacy, such as how to create friend lists, how to share personal items in a targeted manner and how the term friendship on a social network site differs from friendship in the real world…. Try it at http://www.friend-inspector.org/.
Ref: arxiv.org/abs/1402.5878: Friend Inspector: A Serious Game to Enhance Privacy Awareness in Social Networks”
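
The pairwise-comparison mechanic described above lends itself to a compact sketch. The snippet below is a simplified, hypothetical illustration of that idea, not the authors’ implementation; the item fields, the audience threshold and the scoring rule are assumptions made purely for the example.

    import random
    from dataclasses import dataclass

    @dataclass
    class SharedItem:
        description: str
        audience_size: int            # how many people can currently see the item
        perceived_personal: bool = False

    def play_round(item_a: SharedItem, item_b: SharedItem) -> SharedItem:
        # Ask the player which of two shared items is more personal.
        answer = input(f"Which is more personal?\n 1) {item_a.description}\n 2) {item_b.description}\n> ")
        chosen = item_a if answer.strip() == "1" else item_b
        chosen.perceived_personal = True
        return chosen

    def privacy_score(items, audience_threshold=50):
        # Penalise items the player marked as personal but that are widely visible.
        exposed = [i for i in items if i.perceived_personal and i.audience_size > audience_threshold]
        return max(0, 100 - 20 * len(exposed)), exposed

    if __name__ == "__main__":
        items = [
            SharedItem("Holiday photo album", audience_size=340),
            SharedItem("Link to a news article", audience_size=340),
            SharedItem("Status update about a job interview", audience_size=12),
        ]
        play_round(*random.sample(items, 2))
        score, exposed = privacy_score(items)
        print(f"Privacy score: {score}/100")
        for item in exposed:
            print(f"Consider restricting: {item.description}")

The real game pulls the contacts, shared items and privacy settings from Facebook itself; here they are hard-coded so the sketch runs on its own.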

Can A Serious Game Improve Privacy Awareness on Facebook?

Dissertation by Jonathan T. Morgan: “The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia’s history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia editors—the volunteers who contribute most of the encyclopedia’s content and ensure that articles are correct and consistent — has been gradually shrinking since 2007, in part because Wikipedia’s social climate has become increasingly inhospitable for newcomers, female editors, and editors from other underrepresented demographics. Previous research on change over time within other work contexts, such as corporations, suggests that incremental processes such as bureaucratic formalization can make organizations more rule-bound and less adaptable — in effect, less open — as they grow and age. There has been little research on how open collaborations like Wikipedia change over time, and on the impact of those changes on the social dynamics of the collaborating community and the way community members prioritize and perform work. Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains — such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism.

In this dissertation, I examine the role of openness, and the potential antecedents and consequences of formalization, within Wikipedia through an analysis of three distinct but interrelated social structures: community-created rules within the Wikipedia policy environment, coordination work and group dynamics within self-organized open teams called WikiProjects, and the socialization mechanisms that Wikipedia editors use to teach new community members how to participate. To inquire further, I have designed a new editor peer support space, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia …”
Coordinating the Commons: Diversity & Dynamics in Open Collaborations

New social media platform called “State”: The simplest way to get your opinions heard. Just state about whatever matters to you, get counted and instantly see where you stand. When everyone’s opinion counts, the full picture emerges. This could make good things happen…
We set up State, because at the moment, most people never get heard. So we’re levelling the playing field for everyone by allowing them to express their opinions quickly and delivering them to the people who most need to hear them.
State lets people communicate in a lucid, non-competitive way. It’s a place where you don’t need hashtags, followers, or fame, just an opinion. The solution we lit upon was at the convergence of design simplicity and semantic intelligence. It allows people to express opinions in a quick and fun way that also provides enough information to interpret, count, and connect them.
For those in positions of leadership or influence, State offers the first many-to-one capability that can precisely map the prevailing sentiment on key issues. These are opinions shared spontaneously, not extracted from a survey.
We believe that everyone deserves a powerful voice online, no one should be left out, and when everyone’s opinions count, a more complete picture emerges. We firmly believe that this could make good things happen.

State

Paper by Geoff Mulgan in Philosophy & Technology: “Collective intelligence is much talked about but remains very underdeveloped as a field. There are small pockets in computer science and psychology and fragments in other fields, ranging from economics to biology. New networks and social media also provide a rich source of emerging evidence. However, there are surprisingly few useable theories, and many of the fashionable claims have not stood up to scrutiny. The field of analysis should be how intelligence is organised at large scale—in organisations, cities, nations and networks. The paper sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to the possible intellectual barriers to progress.”

True Collective Intelligence? A Sketch of a Possible New Field

Article by Mariano Mosquera at Edmond J. Safra Research Lab: “There has been an important development in the study of the right of access to public information and the so-called economics of information: by combining these two premises, it is possible to outline an economics theory of access to public information.


Moral Hazard
The legal development of the right of access to public information has been remarkable. Many international conventions, laws and national regulations have been passed on this matter. In this regard, access to information has consolidated within the framework of international human rights law.
The Inter-American Court of Human Rights was the first international court to acknowledge that access to information is a human right that is part of the right to freedom of speech. The Court recognized this right in two parts, as the individual right of any person to search for information and as a positive obligation of the state to ensure the individual’s right to receive the requested information.
This right and obligation can also be seen as the demand and supply of information.
The so-called economics of information has focused on the issue of information asymmetry between the principal and the agent. The principal (society) and the agent (state) enter into a contract. This contract is based on the idea that the agent’s specialization and professionalism (or the politician’s, according to Weber) enables him to attend to the principal’s affairs, such as public affairs in this case. This representation contract does not provide for a complete delegation, but rather it involves the principal’s commitment to monitoring the agent.
When we study corruption, it is important to note that monitoring aims to ensure that the agent adjusts its behavior to comply with the contract, in order to pursue public goals, and not to serve private interests. Stiglitz describes moral hazard as a situation arising from information asymmetry between the principal and the agent. The principal takes a risk when acting without comprehensive information about the agent’s actions. Moral hazard means that the handling of closed, privileged information by the agent could bring about negative consequences for the principal.
In this case, it is a risk related to corrupt practices, since a public official could use the state’s power and information to achieve private benefits, and not to resolve public issues in accordance with the principal-agent contract. This creates negative social consequences.
In this model, there are a number of safeguards against moral hazard, such as monitoring institutions (with members of the opposition) and rewards for efficient and effective administration, among others. Access to public information could also serve as an effective means of monitoring the agent, so that the agent adjusts its behavior to comply with the contract.
The Economic Principle of Public Information
According to this principal-agent model, public information should be defined as:
information whose social interpretation enables the state to act in the best interests of society. This definition is based on the idea of information for monitoring purposes and uses a systematic approach to feedback. This definition also implies that the state is not entirely effective at adjusting its behavior by itself.
Technically, as an economic principle of public information, public information is:
information whose interpretation by the principal is useful for the agent, so that the latter adjusts its behavior to comply with the principal-agent contract. It should be noted that this is very different from the legal definition of public information, such as “any information produced or held by the state.” This type of legal definition is focused only on supply, but not on demand.
In this principal-agent model, public information stems from two different rationales: the principal’s interpretation and the usefulness for the agent. The measure of the principal’s interpretation is the likelihood of being useful for the agent. The measure of usefulness for the agent is the likelihood of adjusting the principal-agent contract.
Another totally different situation is the development of institutions that ensure the application of this principle. For example, the channels of supplied and demanded information, and the channels of feedback, could be strengthened so that the social interpretation that is useful for the state actually reaches the public authorities that are able to adjust policies….”
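
One possible formalization of the two measures described in the excerpt — added here for illustration only, not taken from the article — is:

    Let $I$ be a piece of information, $P$ the principal (society) and $A$ the agent (state).
    Write $\mathrm{adjust}_A(I)$ for the event that $A$ adjusts its behavior toward the
    principal--agent contract after $P$ interprets $I$. Then
    \[
      I \text{ is public information} \iff \Pr\bigl(\mathrm{adjust}_A(I) \mid P \text{ interprets } I\bigr) > 0,
    \]
    and the same conditional probability measures both the value of the principal's
    interpretation and the usefulness of $I$ for the agent.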

The Economics of Access to Information

The Guardian: In our livechat on 28 February the experts discussed how to connect up government and citizens online. Digital public services are not just for ‘techno wizzy people’, so government should make them easier for everyone… Read the livechat in full
Michael Sanders, head of research for the Behavioural Insights Team, @mike_t_sanders
It’s important that government is a part of people’s lives: when people interact with government it shouldn’t be a weird and alienating experience, but one that feels part of their everyday lives.
Online services are still too often difficult to use: most people who use the HMRC website will do so infrequently, and will forget its many nuances between visits. This is getting better but there’s a long way to go.
Digital by default keeps things simple: one of our main findings from our research on improving public services is that we should do all we can to “make it easy”.
There is always a risk of exclusion: we should avoid “digital by default” becoming “digital only”.
Ben Matthews, head of communications at FutureGov, @benrmatthews
We prefer digital by design to digital by default: sometimes people can use technology badly, under the guise of ‘digital by default’. We should take a more thoughtful approach to technology, using it as a means to an end – to help us be open, accountable and human.
Leadership is important: you can get enthusiasm from the frontline or younger workers who are comfortable with digital tools, but until they’re empowered by the top of the organisation to use them actively and effectively, we’ll see little progress.
Jargon scares people off: ‘big data’ or ‘open data’, for example….”

How government can engage with citizens online – expert views

Article by Sharad Goel and Daniel Goldstein (Microsoft Research): “With the availability of social network data, it has become possible to relate the behavior of individuals to that of their acquaintances on a large scale. Although the similarity of connected individuals is well established, it is unclear whether behavioral predictions based on social data are more accurate than those arising from current marketing practices. We employ a communications network of over 100 million people to forecast highly diverse behaviors, from patronizing an off-line department store to responding to advertising to joining a recreational league. Across all domains, we find that social data are informative in identifying individuals who are most likely to undertake various actions, and moreover, such data improve on both demographic and behavioral models. There are, however, limits to the utility of social data. In particular, when rich transactional data were available, social data did little to improve prediction.”

Predicting Individual Behavior with Social Networks

Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices and, in the process, can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.

Where models work well

Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
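
To make concrete how small such a model can be, here is a sketch of a three-variable regression of the kind Ashenfelter built. Only the choice of predictors comes from the article; the vintage data and the resulting coefficients below are invented for illustration.

    import numpy as np

    # Hypothetical vintages: winter rainfall (mm), harvest rainfall (mm),
    # average growing-season temperature (C), and a quality index for each year.
    X = np.array([
        [600.0,  80.0, 17.1],
        [690.0, 120.0, 16.7],
        [502.0,  45.0, 17.6],
        [640.0, 150.0, 16.3],
        [580.0,  60.0, 17.3],
    ])
    y = np.array([92.0, 85.0, 96.0, 80.0, 90.0])

    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept, winter rain, harvest rain, temperature:", coef.round(3))

    # Predict the quality of a new vintage from its three weather variables.
    new_vintage = np.array([1.0, 610.0, 70.0, 17.2])
    print("predicted quality:", round(float(new_vintage @ coef), 1))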
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.

Can we control outcomes?

With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.

Improving models over time

Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
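
The observe-predict-update cycle the author describes can be sketched as follows. Everything in this toy example (the warm bias in the raw forecast, the learning rate, the noise levels) is invented; the point is only to show a model correcting itself against feedback that arrives quickly.

    import random

    # Toy feedback loop: each day we publish a corrected maximum-temperature
    # forecast, observe the actual value, and use the error to update the model.
    random.seed(0)

    bias_correction = 0.0     # what the model has learned so far
    learning_rate = 0.2
    true_bias = 3.0           # hypothetical: the raw forecast runs 3 degrees too warm

    for day in range(30):
        raw_forecast = 25.0 + random.gauss(0, 2)                 # model output
        published = raw_forecast - bias_correction               # corrected forecast
        actual = raw_forecast - true_bias + random.gauss(0, 1)   # observed maximum

        error = published - actual                               # feedback signal
        bias_correction += learning_rate * error                 # update the model

    print(f"Learned bias correction after 30 days: {bias_correction:.2f} (true bias 3.0)")

When the two conditions in the paragraph above fail — observation changes the outcome, or feedback takes months to arrive — a loop like this has nothing reliable to learn from, which is the article’s caution about applying models to decisions executives themselves bring about.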
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”

The benefits—and limits—of decision models
