The era of development mutants


Giulio Quaggiotto at Nesta: “If you were looking for the cutting edge of the development sector, where would you go these days? You would probably look at startups like Premise who have predicted food trends in Brazil 25 days faster than national statistics, or GiveDirectly who are pushing the boundaries on evidence – from RCTs to new ways of mapping poverty – to fast-track the adoption of cash transfers.

Or perhaps you might turn your attention to PetaJakarta who are experimenting with new responses to crises by harnessing human sensor networks. You might be tempted to consider Airbnb’s Disaster Response programme as an indicator of an emerging alternative infrastructure for disaster response (and perhaps raising questions about the political economy of this all).

And could Bitnation’s Refugee Emergency programme in response to the European refugee crisis be the possible precursor of future solutions for transnational issues – among the development sector’s hardest challenges? Are the business models of One Acre Fund, which provides services for smallholder farmers, or Floodtags, which analyses citizen data during floods for water and disaster managers, an indicator of future pathways to scale – that elusive development unicorn?

If you want to look at the future of procuring solutions for the development sector, should you be looking at initiatives like Citymart, which works with municipalities across the world to rethink traditional procurement and unleash the expertise and innovation capabilities of their citizens? By the same token, projects like Pathogen Box, Poverty Stoplight or Patient Innovation point to a brave new world where lead-user innovation and harnessing ‘sticky’ local knowledge becomes the norm, rather than the exception. You would also be forgiven for thinking that social movements across the world are the place to look for signs of future mechanisms for harnessing collective intelligence – Kawal Pemilu’s “citizen experts” self-organising around the Indonesian elections in 2014 is a textbook case study in this department.

The list could go on and on: welcome to the era of development mutants. While established players in the development sector are engrossed in soul-searching and their fitness for purpose is being scrutinised from all quarters, a whole new set of players is emerging, unfettered by legacy and borrowing from a variety of different disciplines. They point to a potentially different future – indeed, many potentially different futures – for the sector….

But what if we wanted to invert this paradigm? How could we move from denial to fruitful collaboration with the ‘edgeryders’ of the development sector and accelerate its transformation?

Adopting new programming principles

Based on our experience working with development organisations, we believe that partnering with the mutants involves two types of shifts for traditional players: at the programmatic and the operational level. At the programmatic level, our work on the ground led us to articulate the following emerging principles:

  1. Mapping what people have, not what they need: even though approaches like jugaad and positive deviance have been around for a long time, unfortunately the default starting point for many development projects is still mapping needs, not assets. Inverting this paradigm allows for potentially disruptive project design and partnerships to emerge. (Signs of the future: Patient Innovation, Edgeryders, Community Mirror, Premise)

  2. Getting ready for multiple futures: When distributed across an organisation and not limited to a centralised function, the discipline of scanning the horizon for emergent solutions that contradict the dominant paradigm can help move beyond the denial phase and develop new interfaces to collaborate with the mutants. Here the link between analysis (to understand not only what is probable, but also what is possible) and action is critical – otherwise this remains purely an academic exercise. (Signs of the future: OpenCare, Improstuctures, Seeds of Good Anthropocene, Museum of the Future)

  3. Running multiple parallel experiments: According to Dave Snowden, in order to intervene in a complex system “you need multiple parallel experiments and they should be based on different and competing theories/hypotheses”. Unfortunately, many development projects are still based on linear narratives and assumptions such as “if only we run an awareness raising campaign citizens will change their behaviour”. Turning linear narratives into hypotheses to be tested (without becoming religious on a specific approach) opens up the possibility to explore the solution landscape and collaborate with non-obvious partners that bring new approaches to the table. (Signs of the future: Chukua Hatua, GiveDirectly, Finnish PM’s Office of Experiments, Ideas42, Cognitive Edge)

  4. Embracing obliquity: A deep, granular understanding of local assets and dynamics along with system mapping (see point 5 below) and pairing behavioural experts with development practitioners can help identify entry points for exploring new types of intervention based on obliquity principles. Mutants are often faster in adopting this approach and partnering with them is a way to bypass organisational inertia and explore nonlinear interventions. (Signs of the future: Sardex, social prescriptions, forensic architecture)

  5. From projects to systems: development organisations genuinely interested in developing new partnerships need to make the shift from the project logic to system investments. This involves, among other things, shifting the focus from providing solutions to helping every actor in the system to develop a higher level of consciousness about the issues they are facing and to take better decisions over time. It also entails partnering with mutants to explore entirely new financial mechanisms. (Signs of the future: Lankelly Chase, Indonesia waste banks, Dark Matter Labs)

Adopting new interfaces for working with the mutants

Harvard Business School professor Carliss Baldwin argued that most bureaucracies these days have a ‘non-contractible’ problem: they don’t know where smart people are, or how to evaluate how good they are. Most importantly, most smart people don’t want to work for them because they find them too callous, unrewarding or slow (or a combination of all three)….(More)”

Friended, but not Friends: Federal Ethics Authorities Address Role of Social Media in Politics


CRS Reports & Analysis: “Since the rise of social media over the past decade, new platforms of technology have reinforced the adage that the law lags behind developments in technology. Government agencies, officials, and employees regularly use a number of social media options – e.g., Twitter, Facebook, etc. – that have led agencies to update existing ethics rules to reflect the unique issues that they may present. Two areas of ethics regulation affected by the increased role of social media are the ethical standards governing gifts to federal employees and the restrictions on employees’ political activities. These rules apply to employees in the executive branch, though separate ethics rules and guidance on similar topics apply to the House and Senate….(More)”

What Should We Do About Big Data Leaks?


Paul Ford at the New Republic: “I have a great fondness for government data, and the government has a great fondness for making more of it. Federal elections financial data, for example, with every contribution identified, connected to a name and address. Or the results of the census. I don’t know if you’ve ever had the experience of downloading census data but it’s pretty exciting. You can hold America on your hard drive! Meditate on the miracles of zip codes, the way the country is held together and addressable by arbitrary sets of digits.

You can download whole books, in PDF format, about the foreign policy of the Reagan Administration as it related to Russia. Negotiations over which door the Soviet ambassador would use to enter a building. Gigabytes and gigabytes of pure joy for the ephemeralist. The government is the greatest creator of ephemera ever.

Consider the Financial Crisis Inquiry Commission, or FCIC, created in 2009 to figure out exactly how the global economic pooch was screwed. The FCIC has made so much data, and has done an admirable job (caveats noted below) of arranging it. So much stuff. There are reams of treasure on a single FCIC web site, hosted at Stanford Law School: Hundreds of MP3 files, for example, of interviews with Jamie Dimon of JPMorgan Chase and Lloyd Blankfein of Goldman Sachs. I am desperate to find time to write some code that automatically extracts random audio snippets from each and puts them on top of a slow ambient drone with plenty of reverb, so that I can relax to the dulcet tones of the financial industry explaining away its failings. (There’s a Paul Krugman interview that I assume is more critical.)

The recordings are just the beginning. They’ve released so many documents, and with the documents, a finding aid that you can download in handy PDF format, which will tell you where to, well, find things, pointing to thousands of documents. That aid alone is 1,439 pages.

Look, it is excellent that this exists, in public, on the web. But it also presents a very contemporary problem: What is transparency in the age of massive database drops? The data is available, but locked in MP3s and PDFs and other documents; it’s not searchable in the way a web page is searchable, not easy to comment on or share.

Consider the WikiLeaks release of State Department cables. They were exhausting, there were so many of them, they were in all caps. Or the trove of data Edward Snowden gathered on a USB drive, or Chelsea Manning on CD. And the Ashley Madison leak, spread across database files and logs of credit card receipts. The massive and sprawling Sony leak, complete with whole email inboxes. And with the just-released Panama Papers, we see two exciting new developments: First, the consortium of media organizations that managed the leak actually came together and collectively, well, branded the papers, down to a hashtag (#panamapapers), informational website, etc. Second, the size of the leak itself—2.5 terabytes!—became a talking point, even though exactly what was contained within those terabytes was harder to understand. This, said the consortium of journalists that notably did not include The New York Times, The Washington Post, etc., is the big one. Stay tuned. And we are. But the fact remains: These artifacts are not accessible to any but the most assiduous amateur conspiracist; they’re the domain of professionals with the time and money to deal with them. Who else could be bothered?

If you watched the movie Spotlight, you saw journalists at work, pawing through reams of documents, going through, essentially, phone books. I am an inveterate downloader of such things. I love what they represent. And I’m also comfortable with many-gigabyte corpora spread across web sites. I know how to fetch data, how to consolidate it, and how to search it. I share this skill set with many data journalists, and these capacities have, in some ways, become the sole province of the media. Organs of journalism are among the only remaining cultural institutions that can fund investigations of this size and tease the data apart, identifying linkages and thus constructing informational webs that can, with great effort, be turned into narratives, yielding something like what we call “a story” or “the truth.” 
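The “fetch, consolidate, search” workflow described here can be sketched in miniature as a tiny inverted index. The document names and contents below are invented for illustration, not drawn from the actual FCIC corpus:

```python
# A toy version of the data-journalism workflow: consolidate a pile of
# documents into one searchable structure, then query it by keyword.
# The corpus below is invented for illustration.

def build_index(corpus):
    """Map each lowercase word to the set of document names containing it."""
    index = {}
    for name, text in corpus.items():
        for word in text.lower().split():
            index.setdefault(word.strip(".,"), set()).add(name)
    return index

def search(index, term):
    """Return the documents that mention the term, in sorted order."""
    return sorted(index.get(term.lower(), set()))

corpus = {
    "interview_dimon.txt": "Discussion of mortgage risk and leverage.",
    "interview_blankfein.txt": "Remarks on leverage and derivatives.",
    "finding_aid.txt": "Index of hearings, exhibits, and interviews.",
}

index = build_index(corpus)
print(search(index, "leverage"))  # both interview files mention it
```

Real leak corpora would first require extracting text from PDFs and transcribing audio, but the indexing step is the same idea at larger scale.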

Spotlight was set around 2001, and it features a lot of people looking at things on paper. The problem has changed greatly since then: The data is everywhere. The media has been forced into a new cultural role, that of the arbiter of the giant and semi-legal database. ProPublica, a nonprofit that does a great deal of data gathering and data journalism and then shares its findings with other media outlets, is one example; it funded a project called DocumentCloud with other media organizations that simplifies the process of searching through giant piles of PDFs (e.g., court records, or the results of Freedom of Information Act requests).

At some level the sheer boredom and drudgery of managing these large data leaks make them immune to casual interest; even the Ashley Madison leak, which I downloaded, was basically an opaque pile of data and really quite boring unless you had some motive to poke around.

If this is the age of the citizen journalist, or at least the citizen opinion columnist, it’s also the age of the data journalist, with the news media acting as product managers of data leaks, making the information usable, browsable, attractive. There is an uneasy partnership between leakers and the media, just as there is an uneasy partnership between the press and the government, which would like some credit for its efforts, thank you very much, and wouldn’t mind if you gave it some points for transparency while you’re at it.

Pause for a second. There’s a glut of data, but most of it comes to us in ugly formats. What would happen if the things released in the interest of transparency were released in actual transparent formats?…(More)”

Can Data Literacy Protect Us from Misleading Political Ads?


Walter Frick at Harvard Business Review: “It’s campaign season in the U.S., and politicians have no compunction about twisting facts and figures, as a quick skim of the fact-checking website Politifact illustrates.

Can data literacy guard against the worst of these offenses? Maybe, according to research.

There is substantial evidence that numeracy can aid critical thinking, and some reason to think it can help in the political realm, within limits. But there is also evidence that numbers can mislead even data-savvy people when it’s in service of those people’s politics.

In a study published at the end of last year, Vittorio Merola of Ohio State University and Matthew Hitt of Louisiana State examined how numeracy might guard against partisan messaging. They showed participants information comparing the costs of probation and prison, and then asked whether participants agreed with the statement, “Probation should be used as an alternative form of punishment, instead of prison, for felons.”

Some of the participants were shown highly relevant numeric information arguing for the benefits of probation: that it costs less and has a better cost-benefit ratio, and that the cost of U.S. prisons has been rising. Another group was shown weaker, less-relevant numeric information. This message didn’t contain anything about the costs or benefits of parole, and instead compared prison costs to transportation spending, with no mention of why these might be at all related. The experiment also varied whether the information was supposedly from a study commissioned by Democrats or Republicans.

The researchers scored participants’ numeracy by asking questions like, “The chance of getting a viral infection is 0.0005. Out of 10,000 people, about how many of them are expected to get infected?”

For participants who scored low in numeracy, their support depended more on the political party making the argument than on the strength of the data. When the information came from those participants’ own party, they were more likely to agree with it, no matter whether it was weak or strong.

By contrast, participants who scored higher in numeracy were persuaded by the stronger numeric information, even when it came from the other party. The results held up even after accounting for participants’ education, among other variables….

In 2013, Dan Kahan of Yale and several colleagues conducted a study in which they asked participants to draw conclusions from data. In one group, the data was about a treatment for skin rashes, a nonpolitical topic. Another group was asked to evaluate data on gun control, comparing crime rates for cities that have banned concealed weapons to cities that haven’t.

Additionally, in the skin rash group some participants were shown data indicating that the use of skin cream correlated with rashes getting better, while some were shown the opposite. Similarly, some in the gun control group were shown less crime in cities that have banned concealed weapons, while some were shown the reverse…. They found that highly numerate people did better than less-numerate ones in drawing the correct inference in the skin rash case. But comfort with numbers didn’t seem to help when it came to gun control. In fact, highly numerate participants were more polarized over the gun control data than less-numerate ones. The reason seemed to be that the numerate participants used their skill with data selectively, employing it only when doing so helped them reach a conclusion that fit with their political ideology.
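The inference task in that study boils down to comparing outcome *rates* in a 2×2 table rather than raw counts, which is where low-numeracy respondents typically go wrong. A minimal sketch with invented counts (not the study’s actual data):

```python
def outcome_rate(improved, worsened):
    """Fraction of cases in one condition that improved."""
    return improved / (improved + worsened)

# Invented counts: the treated group is larger, so its raw count of
# improvements is higher, yet its *rate* of improvement is lower.
treated_improved, treated_worsened = 223, 75
control_improved, control_worsened = 107, 21

treated_rate = outcome_rate(treated_improved, treated_worsened)   # ~0.748
control_rate = outcome_rate(control_improved, control_worsened)   # ~0.836

print(treated_improved > control_improved)  # raw counts favor treatment
print(treated_rate > control_rate)          # rates favor control
```

A reader who compares only the raw improvement counts reaches the opposite conclusion from one who computes the rates, which is the trap the experiment exploits.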

Two other lines of research are relevant here.

First, work by Philip Tetlock and Barbara Mellers of the University of Pennsylvania suggests that numerate people tend to make better forecasts, including about geopolitical events. They’ve also documented that even very basic training in probabilistic thinking can improve one’s forecasting accuracy. And this approach works best, Tetlock argues, when it’s part of a whole style of thinking that emphasizes multiple points of view.

Second, two papers, one from the University of Texas at Austin and one from Princeton, found that partisan bias can be diminished with incentives: People are more likely to report factually correct beliefs about the economy when money is on the line….(More)”

Automating power: Social bot interference in global politics


Samuel C. Woolley at First Monday: “Over the last several years political actors worldwide have begun harnessing the digital power of social bots — software programs designed to mimic human social media users on platforms like Facebook, Twitter, and Reddit. Increasingly, politicians, militaries, and government-contracted firms use these automated actors in online attempts to manipulate public opinion and disrupt organizational communication. Politicized social bots — here ‘political bots’ — are used to massively boost politicians’ follower levels on social media sites in attempts to generate false impressions of popularity. They are programmed to actively and automatically flood news streams with spam during political crises, elections, and conflicts in order to interrupt the efforts of activists and political dissidents who publicize and organize online. They are used by regimes to send out sophisticated computational propaganda. This paper conducts a content analysis of available media articles on political bots in order to build an event dataset of global political bot deployment that codes for usage, capability, and history. This information is then analyzed, generating a global outline of this phenomenon. This outline seeks to explain the variety of political bot-oriented strategies and presents details crucial to building understandings of these automated software actors in the humanities, social and computer sciences….(More)”

Selected Readings on Data and Humanitarian Response


By Prianka Srinivasan and Stefaan G. Verhulst *

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data and humanitarian response was originally published in 2016.

Data, when used well in a trusted manner, allows humanitarian organizations to innovate how they respond to emergency events, including better coordination of post-disaster relief efforts, the ability to harness local knowledge to create more targeted relief strategies, and tools to predict and monitor disasters in real time. Consequently, in recent years both multinational groups and community-based advocates have begun to integrate data collection and evaluation strategies into their humanitarian operations, to better and more quickly respond to emergencies. However, this movement poses a number of challenges. Compared to the private sector, humanitarian organizations are often less equipped to successfully analyze and manage big data, which poses a number of risks related to the security of victims’ data. Furthermore, complex power dynamics that exist within humanitarian spaces may be further exacerbated through the introduction of new technologies and big data collection mechanisms. Below we share:

  • Selected Reading List (summaries and hyperlinks)
  • Annotated Selected Reading List
  • Additional Readings

Selected Reading List  (summaries in alphabetical order)

Data and Humanitarian Response

Risks of Using Big Data in Humanitarian Context

Annotated Selected Reading List (in alphabetical order)

Karlsrud, John. “Peacekeeping 4.0: Harnessing the Potential of Big Data, Social Media, and Cyber Technologies.” Cyberspace and International Relations, 2013. http://bit.ly/235Qb3e

  • This chapter from the book “Cyberspace and International Relations” suggests that advances in big data give humanitarian organizations unprecedented opportunities to prevent and mitigate natural disasters and humanitarian crises. However, the sheer amount of unstructured data necessitates effective “data mining” strategies for multinational organizations to make the most use of this data.
  • By profiling some civil-society organizations who use big data in their peacekeeping efforts, Karlsrud suggests that these community-focused initiatives are leading the movement toward analyzing and using big data in countries vulnerable to crisis.
  • The chapter concludes by offering ten recommendations to UN peacekeeping forces to best realize the potential of big data and new technology in supporting their operations.

Mancini, Francesco. “New Technology and the Prevention of Violence and Conflict.” International Peace Institute, 2013. http://bit.ly/1ltLfNV

  • This report from the International Peace Institute looks at five case studies to assess how information and communications technologies (ICTs) can help prevent humanitarian conflicts and violence. Their findings suggest that context has a significant impact on the ability of these ICTs to prevent conflict, and any strategies must take into account the specific contingencies of the region to be successful.
  • The report suggests seven lessons gleaned from the five case studies:
    • New technologies are just one in a variety of tools to combat violence. Consequently, organizations must investigate a variety of complementary strategies to prevent conflicts, and not simply rely on ICTs.
    • Not every community or social group will have the same relationship to technology, and their ability to adopt new technologies are similarly influenced by their context. Therefore, a detailed needs assessment must take place before any new technologies are implemented.
    • New technologies may be co-opted by violent groups seeking to maintain conflict in the region. Consequently, humanitarian groups must be sensitive to existing political actors and be aware of possible negative consequences these new technologies may spark.
    • Local input is integral to support conflict prevention measures, and there exists need for collaboration and awareness-raising with communities to ensure new technologies are sustainable and effective.
    • Information shared among civil-society groups has more potential to develop early-warning systems. This horizontal distribution of information can also allow communities to maintain the accountability of local leaders.

Meier, Patrick. “Digital humanitarians: how big data is changing the face of humanitarian response.” CRC Press, 2015. http://amzn.to/1RQ4ozc

  • This book traces the emergence of “Digital Humanitarians”—people who harness new digital tools and technologies to support humanitarian action. Meier suggests that this has created a “nervous system” to connect people from disparate parts of the world, revolutionizing the way we respond to humanitarian crises.
  • Meier argues that such technology is reconfiguring the structure of the humanitarian space, where victims are not simply passive recipients of aid but can contribute with other global citizens. This in turn makes us more humane and engaged people.

Robertson, Andrew and Olson, Steve. “Using Data Sharing to Improve Coordination in Peacebuilding.” United States Institute of Peace, 2012. http://bit.ly/235QuLm

  • This report functions as an overview of a roundtable workshop on Technology, Science and Peace Building held at the United States Institute of Peace. The workshop aimed to investigate how data-sharing techniques can be developed for use in peace building or conflict management.
  • Four main themes emerged from discussions during the workshop:
    • “Data sharing requires working across a technology-culture divide”—Data sharing needs the foundation of a strong relationship, which can depend on sociocultural, rather than technological, factors.
    • “Information sharing requires building and maintaining trust”—These relationships are often built on trust, which can include both technological and social perspectives.
    • “Information sharing requires linking civilian-military policy discussions to technology”—Even when sophisticated data-sharing technologies exist, continuous engagement between different stakeholders is necessary. Therefore, procedures used to maintain civil-military engagement should be broadened to include technology.
    • “Collaboration software needs to be aligned with user needs”—technology providers need to keep in mind the needs of its users, in this case peacebuilders, in order to ensure sustainability.

United Nations Independent Expert Advisory Group on a Data Revolution for Sustainable Development. “A World That Counts, Mobilizing the Data Revolution.” 2014. https://bit.ly/2Cb3lXq

  • This report focuses on the potential benefits and risks data holds for sustainable development. Included in this is a strategic framework for using and managing data for humanitarian purposes. It describes a need for a multinational consensus to be developed to ensure data is shared effectively and efficiently.
  • It suggests that “people who are counted”—i.e., those who are included in data collection processes—have better development outcomes and a better chance for humanitarian response in emergency or conflict situations.

Whipkey, Katie and Verity, Andrej. “Guidance for Incorporating Big Data into Humanitarian Operations.” Digital Humanitarian Network, 2015. http://bit.ly/1Y2BMkQ

  • This report produced by the Digital Humanitarian Network provides an overview of big data, and how humanitarian organizations can integrate this technology into their humanitarian response. It primarily functions as a guide for organizations, and provides concise, brief outlines of what big data is, and how it can benefit humanitarian groups.
  • The report puts forward four main benefits acquired through the use of big data by humanitarian organizations: 1) the ability to leverage real-time information; 2) the ability to make more informed decisions; 3) the ability to learn new insights; 4) the ability for organizations to be more prepared.
  • It goes on to assess six challenges big data poses for humanitarian organizations: 1) geography, and the unequal access to technology across regions; 2) the potential for user error when processing data; 3) limited technology; 4) questionable validity of data; 5) underdeveloped policies and ethics relating to data management; 6) limitations relating to staff knowledge.

Risks of Using Big Data in Humanitarian Context
Crawford, Kate, and Megan Finn. “The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters.” GeoJournal 80.4, 2015. http://bit.ly/1X0F7AI

  • Crawford & Finn present a critical analysis of the use of big data in disaster management, taking a more skeptical tone to the data revolution facing humanitarian response.
  • They argue that though social and mobile data analysis can yield important insights and tools in crisis events, it also presents a number of limitations which can lead to oversights being made by researchers or humanitarian response teams.
  • Crawford & Finn explore the ethical concerns the use of big data in disaster events introduces, including issues of power, privacy, and consent.
  • The paper concludes by recommending that critical data studies, such as those presented in the paper, be integrated into crisis event research in order to analyze some of the assumptions which underlie mobile and social data.

Jacobsen, Katja Lindskov. “Making design safe for citizens: A hidden history of humanitarian experimentation.” Citizenship Studies 14.1: 89-103, 2010. http://bit.ly/1YaRTwG

  • This paper explores the phenomenon of “humanitarian experimentation,” where victims of disaster or conflict are the subjects of experiments to test the application of technologies before they are administered in greater civilian populations.
  • By analyzing the particular use of iris recognition technology during the repatriation of Afghan refugees from Pakistan from 2002 to 2007, Jacobsen suggests that this “humanitarian experimentation” compromises the security of already vulnerable refugees in order to better deliver biometric products to the rest of the world.

Responsible Data Forum. “Responsible Data Reflection Stories: An Overview.” http://bit.ly/1Rszrz1

  • This piece from the Responsible Data forum is primarily a compilation of “war stories” which follow some of the challenges in using big data for social good. By drawing on these crowdsourced cases, the Forum also presents an overview which makes key recommendations to overcome some of the challenges associated with big data in humanitarian organizations.
  • It finds that most of these challenges occur when organizations are ill-equipped to manage data and new technologies, or are unaware about how different groups interact in digital spaces in different ways.

Sandvik, Kristin Bergtora. “The humanitarian cyberspace: shrinking space or an expanding frontier?” Third World Quarterly 37:1, 17-32, 2016. http://bit.ly/1PIiACK

  • This paper analyzes the shift toward more technology-driven humanitarian work, where humanitarian work increasingly takes place online in cyberspace, reshaping the definition and application of aid. This has occurred along with what many suggest is a shrinking of the humanitarian space.
  • Sandvik provides three interpretations of this phenomena:
    • First, traditional threats remain in the humanitarian space, which are both modified and reinforced by technology.
    • Second, new threats are introduced by the increasing use of technology in humanitarianism, and consequently the humanitarian space may be broadening, not shrinking.
    • Finally, if the shrinking humanitarian space theory holds, cyberspace offers one example of this, where the increasing use of digital technology to manage disasters leads to a contraction of space through the proliferation of remote services.

Additional Readings on Data and Humanitarian Response

* Thanks to: Kristen B. Sandvik; Zara Rahman; Jennifer Schulte; Sean McDonald; Paul Currion; Dinorah Cantú-Pedraza and the Responsible Data Listserve for valuable input.

Innovating for pro-poor services: Why politics matter


Nathaniel Mason, Clare Cummings and Julian Doczi for ODI insights: “To solve sustainable development challenges, such as the provision of universal access to basic services, we need new ideas, as well as old ideas applied in new ways and new places. The pace of global innovation, particularly digital innovation, is generating optimism, positioning the world at the start of the ‘Fourth Industrial Revolution’. Innovation can make basic services cheaper, more accessible, more relevant and more desirable for poor people. However, we also know few innovations lead to sustainable, systemic change. The barriers to this are often political – including problems related to motivation, power and collective action. Yet, just as political factors can prevent innovations from being widely adopted, politically smart approaches can help in navigating and mitigating these challenges. And, because innovations can alter the balance of power in societies and markets, they can both provoke new and challenging politics themselves and also help unlock systemic political change. When and why does politics affect innovation? What does this mean for donors, foundations and impact investors backing innovations for development?…(More)”

Technology and politics: The signal and the noise


Special Issue of The Economist: “…The way these candidates are fighting their campaigns, each in his own way, is proof that politics as usual is no longer an option. The internet and the availability of huge piles of data on everyone and everything are transforming the democratic process, just as they are upending many industries. They are becoming a force in all kinds of things, from running election campaigns and organising protest movements to improving public policy and the delivery of services. This special report will argue that, as a result, the relationship between citizens and those who govern them is changing fundamentally.

Incongruous though it may seem, the forces that are now powering the campaign of Mr Trump—as well as that of Bernie Sanders, the surprise candidate on the Democratic side (Hillary Clinton is less of a success online)—were first seen in full cry during the Arab spring in 2011. The revolution in Egypt and other Arab countries was not instigated by Twitter, Facebook and other social-media services, but they certainly helped it gain momentum. “The internet is an intensifier,” says Marc Lynch of George Washington University, a noted scholar of the protest movements in the region…

However, this special report will argue that, in the longer term, online crusading and organising will turn out to matter less to politics in the digital age than harnessing those ever-growing piles of data. The internet and related technologies, such as smart phones and cloud computing, make it cheap and easy not only to communicate but also to collect, store and analyse immense quantities of information. This is becoming ever more important in influencing political outcomes.

America’s elections are a case in point. Mr Cruz with his data savvy is merely following in the footsteps of Barack Obama, who won his first presidential term with the clever application of digital know-how. Campaigners are hoovering up more and more digital information about every voting-age citizen and stashing it away in enormous databases. With the aid of complex algorithms, these data allow campaigners to decide, say, who needs to be reminded to make the trip to the polling station and who may be persuaded to vote for a particular candidate.

No hiding place

In the case of protest movements, the waves of collective action leave a big digital footprint. Using ever more sophisticated algorithms, governments can mine these data. That is changing the balance of power. In the event of another Arab spring, autocrats would not be caught off guard again because they are now able to monitor protests and intervene when they consider it necessary. They can also identify and neutralise the most influential activists. Governments that were digitally blind when the internet first took off in the mid-1990s now have both a telescope and a microscope.

But data are not just changing campaigns and political movements; they affect how policy is made and public services are offered. This is most visible at local-government level. Cities have begun to use them for everything from smoothing traffic flows to identifying fire hazards. Having all this information at their fingertips is bound to change the way these bureaucracies work, and how they interact with citizens. This will not only make cities more efficient, but provide them with data and tools that could help them involve their citizens more.

This report will look at electoral campaigns, protest movements and local government in turn. Readers will note that most of the examples quoted are American and that most of the people quoted are academics. That is because the study of the interrelationship between data and politics is relatively new and most developed in America. But it is beginning to spill out from the ivory towers, and is gradually spreading to other countries.

The growing role of technology in politics raises many questions. How much of a difference, for instance, do digitally enabled protest surges really make? Many seem to emerge from nowhere, then crash almost as suddenly, defeated by hard political realities and entrenched institutions. The Arab spring uprising in Egypt is one example. Once the incumbent president, Hosni Mubarak, was toppled, the coalition that brought him down fell apart, leaving the stage to the old powers, first the Muslim Brotherhood and then the armed forces.

In party politics, some worry that the digital targeting of voters might end up reducing the democratic process to a marketing exercise. Ever more data and better algorithms, they fret, could lead politicians to ignore those unlikely to vote for them. And in cities it is not clear that more data will ensure that citizens become more engaged….(More)

See also:

Nudging voters


John Hasnas and Annette Hasnas at the Hill: “A perennial complaint about our democracy is that too large a portion of the electorate is poorly informed about important political issues. This is the problem of the ignorant voter. Especially this year, with its multiplicity of candidates, keeping track of the candidates’ various, and often shifting, policy positions can be extraordinarily difficult. As a result, many of those voting in the presidential primaries will cast their ballots with little idea of where the candidates stand on several important issues.

Isn’t there some way to nudge the voters into making more informed choices? Well, actually, yes, there is. But in making this claim, we use the word nudge advisedly.

Among contemporary policy analysts, “nudge” is a term of art. It refers to creating a context within which people make choices–a “choice architecture”–that makes it more likely that people will select one option rather than another. The typical example of a nudge is a school cafeteria in which fruits and vegetables are placed in front in easy-to-reach locations and less healthy fare is placed in less visible and harder-to-reach locations. No one is forced to select the fruit or vegetables, but the choice architecture makes it more likely that people will.

The key feature of a nudge is that it is not coercive. It is an effort to influence choice, not to impose it. People are always able to “opt out” of the nudge. Thus, to nudge is to design the context in which individuals make decisions so as to influence their choice without eliminating any options.

We think that nudging can be employed to help voters make more informed decisions in the voting booth.

Imagine the following scenario. A bipartisan good government group creates a list of the most significant contemporary policy issues. It then invites all candidates to state their positions on the issues. In the current campaign, candidates could be invited to state where they stand on gay marriage, immigration, intervention in Syria, climate change, tax reform, the minimum wage, gun control, income inequality, etc. This information would be collected and fed into the relevant election commission computer. When voters enter the voting booth, they would have the option of electronically recording their policy preferences on the same form that the candidates completed. The computer would display a ranking of the candidates on the basis of how closely their positions aligned with the voter’s. After receiving this information, voters would cast their ballots.

Our proposal is a nudge. It is completely non-coercive. No candidate would be required to complete the list of his or her policy positions, although refusing to do so might be viewed negatively by voters. No voter would be required to utilize the option. All would remain free to simply walk into the booth and cast their vote. Even those who utilize the option remain free to disregard its results and vote for whomever they please. The proposal simply alters the choice architecture of voting to build in access to a source of information about the candidates. Yet, it makes it more likely that citizens will cast more informed votes than they do at present….(More)”
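The ranking step the Hasnas proposal describes is mechanically simple, and a minimal sketch makes that concrete. The encoding below is an assumption of ours, not part of the proposal: each candidate and the voter answer the same issue questionnaire with a position in {-1, 0, +1} (oppose, no stated position, support), and candidates are ranked by the fraction of issues on which they match the voter. The issue list and candidate names are purely illustrative.

```python
def alignment(voter, candidate, issues):
    """Fraction of issues on which the candidate's stated position matches the voter's."""
    matches = sum(1 for issue in issues if voter[issue] == candidate[issue])
    return matches / len(issues)

def rank_candidates(voter, candidates, issues):
    """Return (name, score) pairs, best-aligned candidate first."""
    scores = {name: alignment(voter, positions, issues)
              for name, positions in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative questionnaire: positions encoded as -1 (oppose), 0 (none), +1 (support).
issues = ["immigration", "climate", "minimum_wage", "gun_control"]
candidates = {
    "Candidate A": {"immigration": 1, "climate": 1, "minimum_wage": -1, "gun_control": 0},
    "Candidate B": {"immigration": -1, "climate": 1, "minimum_wage": 1, "gun_control": 1},
}
voter = {"immigration": -1, "climate": 1, "minimum_wage": 1, "gun_control": 0}

print(rank_candidates(voter, candidates, issues))
# → [('Candidate B', 0.75), ('Candidate A', 0.5)]
```

A real implementation would need weighted issues and graded positions rather than simple matching, but the nudge property is independent of the scoring rule: the display is informational only, and the voter remains free to ignore it.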

The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections


Robert Epstein and Ronald E. Robertson at PNAS: “Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company…(More)”
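The link Epstein and Robertson draw between the 20% shift and small winning margins is back-of-envelope arithmetic, sketched below. The electorate figures are illustrative assumptions of ours, not numbers from the paper: only the 20% lower bound on the shift among undecided voters comes from the abstract above.

```python
def induced_swing(undecided_share, shift_among_undecided):
    """Percentage-point swing in the overall vote produced by shifting
    a given fraction of the undecided voters toward one candidate."""
    return undecided_share * shift_among_undecided

# Illustrative assumption: 10% of the electorate is undecided, and biased
# rankings shift 20% of them (the paper's lower bound) toward one candidate.
swing = induced_swing(0.10, 0.20)
print(f"{swing:.0%} of the total vote")  # → 2% of the total vote
```

A two-point swing exceeds the winning margin in many close elections, which is the sense in which the authors argue a dominant search engine could decide outcomes without detection.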