Digital Dividends – World Development Report 2016


World Bank Report: “Digital technologies have spread rapidly in much of the world. Digital dividends—the broader development benefits from using these technologies—have lagged behind. In many instances digital technologies have boosted growth, expanded opportunities, and improved service delivery. Yet their aggregate impact has fallen short and is unevenly distributed. For digital technologies to benefit everyone everywhere requires closing the remaining digital divide, especially in internet access. But greater digital adoption will not be enough. To get the most out of the digital revolution, countries also need to work on the “analog complements”—by strengthening regulations that ensure competition among businesses, by adapting workers’ skills to the demands of the new economy, and by ensuring that institutions are accountable….(More)”

Engendering control: The gap between institutions and technology

The internet was expected to usher in a new era of accountability and political empowerment, with citizens participating in policy making and forming self-organized virtual communities to hold government to account. These hopes have been largely unmet. While the internet has made many government functions more efficient and convenient, it has generally had limited impact on the most protracted problems—how to improve service provider accountability (principal-agent problems) and how to broaden public involvement and give greater voice to the poor and disadvantaged (collective action problems).

Whether citizens can successfully use the internet to raise the accountability of service providers depends on the context. Most important is the strength of existing accountability relationships between policy makers and providers, as discussed in the 2004 World Development Report, Making Services Work for Poor People. An examination of seventeen digital engagement initiatives for this Report finds that of nine cases in which citizen engagement involved a partnership between civil society organizations (CSOs) and government, three were successful (table O.2). Of eight cases that did not involve a partnership, most failed. This suggests that, although collaboration with government is not a sufficient condition for success, it may well be a necessary one.

Another ingredient for success is effective offline mobilization, particularly because citizen uptake of the digital channels was low in most of the cases. For example, Maji Matone, which facilitates SMS-based feedback about rural water supply problems in Tanzania, received only 53 SMS messages during its first six months of operation, far fewer than the initial target of 3,000, and was then abandoned. Political participation and engagement of the poor have remained rare, while in many countries the internet has disproportionately benefited political elites and increased governments’ capacity to influence social and political discourse. Digital technologies have sometimes increased voting overall, but this has not necessarily resulted in more informed or more representative voting. In the Brazilian state of Rio Grande do Sul, online voting increased voter turnout by 8 percentage points, but online voters were disproportionately wealthier and more educated (figure O.19). Even in developed countries, engaging citizens continues to be a challenge. Only a small, unrepresentative subset of the population participates, and it is often difficult to sustain citizen engagement.

There is no agreement among social scientists on whether the internet disproportionately empowers citizens or political elites, whether it increases polarization, or whether it deepens or weakens social capital, in some cases even facilitating organized violence. The use of technology in governments tends to be successful when it addresses fairly straightforward information and monitoring problems. For more demanding challenges, such as better management of providers or giving citizens greater voice, technology helps only when governments are already responsive. The internet will thus often reinforce rather than replace existing accountability relationships between governments and citizens, including giving governments more capacity for surveillance and control (box O.6). Closing the gap between changing technology and unchanging institutions will require initiatives that strengthen the transparency and accountability of governments….(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create those earlier versions, researchers paid subject-matter experts such as political scientists to read and hand code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except the results come in faster and with significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions about them that help evaluate the military events. Next, the articles and questions are loaded onto Amazon Mechanical Turk, a marketplace for crowdsourcing. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit the answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More important, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and process the majority of the news items in near-real time and at limited cost….(more)”
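The combination step described above (multiple workers per article, with low-agreement items escalated to experts) can be sketched as a simple majority vote with an agreement threshold. The function name, the 0.6 threshold, and the sample answers below are illustrative assumptions, not the project's actual algorithm:

```python
from collections import Counter

def aggregate(answers, min_agreement=0.6):
    """Combine several workers' answers to one question via majority vote.

    Returns (winning_answer, agreement), where agreement is the share of
    workers who gave the winning answer. Items below `min_agreement` are
    flagged for expert review by returning None as the answer.
    """
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    if agreement < min_agreement:
        return None, agreement  # low consensus: route to an expert coder
    return winner, agreement

# Hypothetical responses from five crowd workers to one question
print(aggregate(["yes", "yes", "yes", "no", "yes"]))    # strong consensus
print(aggregate(["yes", "no", "unsure", "no", "yes"]))  # flagged for review
```

In this sketch, the agreement score plays the role the Report describes: answers shared by many readers are trusted, while split votes identify the minority of articles needing deeper expert involvement.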

Humanity 360: World Humanitarian Data and Trends 2015


OCHA: “WORLD HUMANITARIAN DATA AND TRENDS

Highlights major trends, challenges and opportunities in the nature of humanitarian crises, showing how the humanitarian landscape is evolving in a rapidly changing world.

EXPLORE...

LEAVING NO ONE BEHIND: HUMANITARIAN EFFECTIVENESS IN THE AGE OF THE SUSTAINABLE DEVELOPMENT GOALS

Exploring what humanitarian effectiveness means in today’s world ‐ better meeting the needs of people in crisis, better moving people out of crisis.

EXPLORE

TOOLS FOR DATA COORDINATION AND COLLECTION

 

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first properly examining the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses or legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But they also include choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…..

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars….(More)”
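The three steps above can be sketched by simulation: quantify the uncertainty, compare the expected payoff of the best single action chosen now against the expected payoff if the uncertainty could be resolved before acting, and take the difference as the expected value of perfect information (EVPI), an upper bound on what further data is worth. The fertilizer scenario and all numbers below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo scenarios

# Hypothetical farmer decision: apply fertilizer or not.
cost = 120.0                           # cost of fertilizer per hectare
price = 250.0                          # price per tonne of produce
yield_gain = rng.normal(0.6, 0.4, n)   # uncertain yield gain, tonnes/ha

profit_apply = price * yield_gain - cost
profit_skip = np.zeros(n)

# Value with current information: commit to the single best action.
ev_current = max(profit_apply.mean(), profit_skip.mean())

# Value with perfect information: pick the best action in each scenario.
ev_perfect = np.maximum(profit_apply, profit_skip).mean()

# EVPI: the most a rational decision-maker should pay for more data.
evpi = ev_perfect - ev_current
print(round(ev_current, 1), round(ev_perfect, 1), round(evpi, 1))
```

If the EVPI comes out near zero, no further information is needed to make a sound decision; a large EVPI signals where acquiring further data could be worth a substantial spend.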

Initial Conditions Matter: Social Capital and Participatory Development


Paper by Lisa A. Cameron et al: “Billions of dollars have been spent on participatory development programs in the developing world. These programs give community members an active decision-making role. Given the emphasis on community involvement, one might expect that the effectiveness of this approach would depend on communities’ pre-existing social capital stocks. Using data from a large randomised field experiment of Community-Led Total Sanitation in Indonesia, we find that villages with high initial social capital built toilets and reduced open defecation, resulting in substantial health benefits. In villages with low initial stocks of social capital, the approach was counterproductive – fewer toilets were built than in control communities and social capital suffered….(More)”

Privacy, security and data protection in smart cities: a critical EU law perspective


CREATe Working Paper by Lilian Edwards: “Smart cities” are a buzzword of the moment. Although legal interest is growing, most academic responses, at least in the EU, are still from the technological, urban studies, environmental and sociological sectors rather than the legal sector, and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities, rather than their challenges, in an often rather uncritical fashion. However, a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities. A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the “privatisation” of ownership of both infrastructure and data, the repurposing of “big data” drawn from the IoT in smart cities, and the storage of that data in the Cloud.

This paper, drawing on author engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively: the Internet of Things (IoT) or “ubiquitous computing”; “Big Data”; and the Cloud. While these three phenomena have been examined extensively in much privacy literature (particularly the last two), both in the US and the EU, the combination is under-explored. Furthermore, US legal literature and solutions (if any) are not simply transferable to the EU because of the US’s lack of an omnibus data protection (DP) law. I will discuss how and if EU DP law controls possible threats to personal privacy from smart cities and suggest further research on two possible solutions: first, a mandatory holistic privacy impact assessment (PIA) exercise for smart cities; and second, code solutions for flagging the need for, and consequences of, giving consent to collection of data in ambient environments….(More)

Social Media for Government Services


Book edited by Surya Nepal, Cécile Paris and Dimitrios Georgakopoulos: “This book highlights state-of-the-art research, development and implementation efforts concerning social media in government services, bringing together researchers and practitioners in a number of case studies. It elucidates a number of significant challenges associated with social media specific to government services, such as:  benefits and methods of assessing; usability and suitability of tools, technologies and platforms; governance policies and frameworks; opportunities for new services; integrating social media with organisational business processes; and specific case studies. The book also highlights the range of uses and applications of social media in the government domain, at both local and federal levels. As such, it offers a valuable resource for a broad readership including academic researchers, practitioners in the IT industry, developers, and government policy- and decision-makers….(More)

Living Labs: Concepts, Tools and Cases


Introduction: “This special issue on “Living labs: concepts, tools and cases” comes 10 years after the first scientific publications that defined the notion of living labs, but more than 15 years after the appearance of the first living lab projects (Ballon et al., 2005; Eriksson et al., 2005). This five-year gap demonstrates the extent to which living labs have been a practice-driven phenomenon. Right up to this day, they represent a pragmatic approach to innovation (of information and communication technologies [ICTs] and other artefacts), characterised by, among other things, experimentation in real life and the active involvement of users.

While there is now a certain body of literature that attempts to clarify and analyse the concept (Følstad, 2008; Almirall et al., 2012; Leminen et al., 2012), living lab practices are still under-researched, and a theoretical and methodological gap continues to exist in terms of the restricted amount and visibility of living lab literature vis-à-vis the rather large community of practice (Schuurman, 2015). The present special issue aims to assist in filling that gap.

This does not mean that the development of living labs has not been informed by scholarly literature previously (Ballon, 2015). Cornerstones include von Hippel’s (1988) work on user-driven innovation because of its emphasis on the ability of so-called lead users, rather than manufacturers, to create (mainly ICT) innovations. Another cornerstone is Silverstone’s (1993) theory on the domestication of ICTs that frames technology adoption as an ongoing struggle between users and technology where the user attempts to take control of the technological artefact and the technology comes to be fitted to users’ daily routines. It has been said that, in living labs, von Hippel’s concept of user-driven design and Silverstone’s insights into the appropriation of technologies are coupled dynamically through experimentation (Frissen and Van Lieshout, 2006).

The concept of stigmergy, which refers to addressing complex problems by collective, yet uncoordinated, actions and interactions of communities of individuals, has gradually become the third foundational element, as social media have provided online platforms for stigmergic behaviour, which has subsequently been linked to the “spontaneous” emergence of innovations (Pallot et al., 2010; Kiemen and Ballon, 2012). A fourth cornerstone is the literature on open and business model innovation, which argues that today’s fast-paced innovation landscape requires collaboration between multiple business and institutional stakeholders, and that the business should use these joint innovation endeavours to find the right “business architecture” (Chesbrough, 2003; Mitchell and Coles, 2003)….(More)

Playing ‘serious games,’ adults learn to solve thorny real-world problems


Lawrence Susskind and Ella Kim in The Conversation: “…We have been testing the use of role-playing games to promote collaborative decision-making by nations, states and communities. Unlike online computer games, players in role-playing games interact face-to-face in small groups of six to eight. The games place them in a hypothetical setting that simulates a real-life problem-solving situation. People are often assigned roles that are very different from their real-life roles. This helps them appreciate how their political adversaries view the problem.

Players receive briefing materials to read ahead of time so they can perform their assigned roles realistically. The idea is to reenact the tensions that actual stakeholders will feel when they are making real-life decisions. In the game itself, participants are asked to reach agreement in their roles in 60-90 minutes. (Other games, like the Mercury Game or the Chlorine Game, take longer to play.) If multiple small groups play the game at the same time, the entire room – which may include 100 tables of game players or more – can discuss the results together. In these debriefings, the most potent learning often occurs when players hear about creative moves that others have used to reach agreement.

It can take up to several months to design a game. Designers start by interviewing real-life decision makers to understand how they view the problem. Game designers must also synthesize a great deal of scientific and technical information to present it in the game in a form that anyone can understand. After the design phase, games have to be tested and refined before they are ready for play.

Research shows that this immersive approach to learning is particularly effective for adults. Our own research shows that elected and appointed officials, citizen advocates and corporate leaders can absorb a surprising amount of new scientific information when it is embedded in a carefully crafted role-playing game. In one study of more than 500 people in four New England coastal communities, we found that a significant portion of game players (1) changed their minds about how urgent a threat climate change is; (2) became more optimistic about their local government’s ability to reduce climate change risks; and (3) became more confident that conflicting groups would be able to reach agreement on how to proceed with climate adaptation….

Our conclusion is that “serious games” can prepare citizens and officials to participate successfully in science-based problem-solving. In related research in Ghana and Vietnam, we found that role-playing games had similarly valuable effects. While the agreements reached in games do not necessarily indicate what actual agreements may be reached, they can help officials and stakeholder representatives get a much clearer sense of what might be possible.

We believe that role-playing games can be used in a wide range of situations. We have designed games that have been used in different parts of the world to help all kinds of interest groups work together to draft new environmental regulations. We have brought together adversaries in energy facility siting and waste cleanup disputes to play a game prior to facing off against each other in real life. This approach has also facilitated decisions in regional economic development disputes, water allocation disputes in an international river basin and disputes among aboriginal communities, national governments and private industry….(More)”

Toward WSIS 3.0: Adopting Next-Gen Governance Solutions for Tomorrow’s Information Society


Lea Kaspar  & Stefaan G. Verhulst at CircleID: “… Collectively, this process has been known as the “World Summit on the Information Society” (WSIS). During December 2015 in New York, twelve years after that first meeting in Geneva and with more than 3 billion people now online, member states of the United Nations unanimously adopted the final outcome document of the WSIS ten-year Review process.

The document (known as the WSIS+10 document) reflects on the progress made over the past decade and outlines a set of recommendations for shaping the information society in coming years. Among other things, it acknowledges the role of different stakeholders in achieving the WSIS vision, reaffirms the centrality of human rights, and calls for a number of measures to ensure effective follow-up.

For many, these represent significant achievements, leading observers to proclaim the outcome a diplomatic victory. However, as is the case with most non-binding international agreements, the WSIS+10 document will remain little more than a hollow guidepost until it is translated into practice. Ultimately, it is up to the national policy-makers, relevant international agencies, and the WSIS community as a whole to deliver meaningful progress towards achieving the WSIS vision.

Unfortunately, the WSIS+10 document provides little actual guidance for practitioners. Even more strikingly, it reveals little progress in its understanding of emerging governance trends and methods since Geneva and Tunis, or in how these could be leveraged in our efforts to harness the benefits of information and communication technologies (ICT).

As such, the WSIS remains a 20th-century approach to 21st-century challenges. In particular, the document fails to seek ways to make WSIS post 2015:

  • evidence-based in how to make decisions;
  • collaborative in how to measure progress; and
  • innovative in how to solve challenges.

Three approaches toward WSIS 3.0

Drawing on lessons in the field of governance innovation, we suggest in what follows three approaches, accompanied by practical recommendations, that could allow the WSIS to address the challenges raised by the information society in a more evidence-based, innovative and participatory way:

1. Adopt an evidence-based approach to WSIS policy making and implementation.

Since 2003, we have had massive experimentation in both developed and developing countries in a number of efforts to increase access to the Internet. We have seen some failures and some successes; above all, we have gained insight into what works, what doesn’t, and why. Unfortunately, much of the evidence remains scattered and ad-hoc, poorly translated into actionable guidance that would be effective across regions; nor is there any reflection on what we don’t know, and how we can galvanize the research and funding community to address information gaps. A few practical steps we could take to address this:….

2. Measure progress towards WSIS goals in a more open, collaborative way, founded on metrics and data developed through a bottom-up approach

The current WSIS+10 document has many lofty goals, many of which will remain effectively meaningless unless we are able to measure progress in concrete and specific terms. This requires the development of clear metrics, a process which is inevitably subjective and value-laden. Metrics and indicators must therefore be chosen with great care, particularly as they become points of reference for important decisions and policies. Having legitimate, widely-accepted indicators is critical. The best way to do this is to develop a participatory process that engages those actors who will be affected by WSIS-related actions and decisions. …These could include:…

3. Experiment with governance innovations to achieve WSIS objectives.

Over the last few years, we have seen a variety of innovations in governance that have provided new and often improved ways to solve problems and make decisions. They include, for instance:

  • The use of open and big data to generate new insights in both the problem and the solution space. We live in the age of abundant data — why aren’t we using it to inform our decision making? Data on the current landscape and the potential implications of policies could make our predictions and correlations more accurate.
  • The adoption of design thinking, agile development and user-focused research in developing more targeted and effective interventions. A linear approach to policy making with a fixed set of objectives and milestones allows little room for dealing with unforeseen or changing circumstances, making it difficult to adapt and change course. Applying lessons from software engineering — including the importance of feedback loops, continuous learning, and agile approach to project design — would allow policies to become more flexible and solutions more robust.
  • The application of behavioral sciences — for example, the concept of ‘nudging’ individuals to act in their own best interest or adopt behaviors that benefit society. How choices (e.g. to use new technologies) are presented and designed can be more powerful in informing adoption than laws, rules or technical standards.
  • The use of prizes and challenges to tap into the wisdom of the crowd to solve complex problems and identify new ideas. Resource constraints can be addressed by creating avenues for people and volunteers to act as a resource in creating solutions, rather than being only their passive beneficiaries….(More)