The Emergence of a Post-Fact World


Francis Fukuyama in Project Syndicate: “One of the more striking developments of 2016 and its highly unusual politics was the emergence of a ‘post-fact’ world, in which virtually all authoritative information sources were called into question and challenged by contrary facts of dubious quality and provenance.

The emergence of the Internet and the World Wide Web in the 1990s was greeted as a moment of liberation and a boon for democracy worldwide. Information constitutes a form of power, and to the extent that information was becoming cheaper and more accessible, democratic publics would be able to participate in domains from which they had been hitherto excluded.

The development of social media in the early 2000s appeared to accelerate this trend, permitting the mass mobilization that fueled various democratic “color revolutions” around the world, from Ukraine to Burma (Myanmar) to Egypt. In a world of peer-to-peer communication, the old gatekeepers of information, largely seen to be oppressive authoritarian states, could now be bypassed.

While there was some truth to this positive narrative, another, darker one was also taking shape. Those old authoritarian forces were responding in dialectical fashion, learning to control the Internet, as in China, with its tens of thousands of censors, or, as in Russia, by recruiting legions of trolls and unleashing bots to flood social media with bad information. These trends all came together in a hugely visible way during 2016, in ways that bridged foreign and domestic politics….

The traditional remedy for bad information, according to freedom-of-information advocates, is simply to put out good information, which in a marketplace of ideas will rise to the top. This solution, unfortunately, works much less well in a social-media world of trolls and bots. There are estimates that as many as a quarter to a third of Twitter users fall into this category. The Internet was supposed to liberate us from gatekeepers; and, indeed, information now comes at us from all possible sources, all with equal credibility. There is no reason to think that good information will win out over bad information….

The inability to agree on the most basic facts is the direct product of an across-the-board assault on democratic institutions – in the US, in Britain, and around the world. And this is where the democracies are headed for trouble. In the US, there has in fact been real institutional decay, whereby powerful interest groups have been able to protect themselves through a system of unlimited campaign finance. The primary locus of this decay is Congress, and the bad behavior is for the most part as legal as it is widespread. So ordinary people are right to be upset.

And yet, the US election campaign has shifted the ground to a general belief that everything has been rigged or politicized, and that outright bribery is rampant. If the election authorities certify that your favored candidate is not the victor, or if the other candidate seemed to perform better in a debate, it must be the result of an elaborate conspiracy by the other side to corrupt the outcome. The belief in the corruptibility of all institutions leads to a dead end of universal distrust. American democracy, all democracy, will not survive a lack of belief in the possibility of impartial institutions; instead, partisan political combat will come to pervade every aspect of life….(More)”

Governing with Collective Intelligence


Tom Saunders and Geoff Mulgan at Nesta: “This paper provides an introduction to collective intelligence in government. It aims to be useful and relevant to governments of countries at very different levels of development. It highlights the ways in which governments are better understanding the world around them, drawing on ideas and expertise from their citizens, and encouraging greater scrutiny of their actions.

Collective intelligence is a new term to describe something which is in some respects old, but in other respects changing dramatically thanks to advances in digital technologies. It refers to the ability of large groups – a community, region, city or nation – to think and act intelligently in a way that amounts to more than the sum of their parts.

Key findings

Our analysis of government use of collective intelligence initiatives around the world finds that activities fall into four broad categories:

1. Better understanding facts and experiences: using new digital tools to gather data from many more sources.

2. Better development of options and ideas: tapping into the collective brainpower of citizens to come up with better ideas and options for action.

3. Better, more inclusive decision-making: involving citizens in decision making, from policymaking to planning and budgeting.

4. Better oversight of what is done: encouraging broader involvement in the oversight of government activity, from monitoring corruption to scrutinising budgets, helping to increase accountability and transparency….(More)”

Data Collaboratives as a New Frontier of Cross-Sector Partnerships in the Age of Open Data: Taxonomy Development


Paper by Iryna Susha, Marijn Janssen and Stefaan Verhulst: “Data collaboratives present a new form of cross-sector and public-private partnership to leverage (often corporate) data for addressing a societal challenge. They can be seen as the latest attempt to make data accessible to solve public problems. Although an increasing number of initiatives can be found, there is hardly any analysis of these emerging practices. This paper seeks to develop a taxonomy of forms of data collaboratives. The taxonomy consists of six dimensions related to data sharing and eight dimensions related to data use. Our analysis shows that data collaboratives exist in a variety of models. The taxonomy can help organizations to find a suitable form when shaping their efforts to create public value from corporate and other data. The use of data is not only dependent on the organizational arrangement, but also on aspects like the type of policy problem, incentives for use, and the expected outcome of the data collaborative….(More)”

Developing transparency through digital means? Examining institutional responses to civic technology in Latin America


Rebecca Rumbul at Journal of eDemocracy and Open Government: A number of NGOs across the world currently develop digital tools to increase citizen interaction with official information. The successful operation of such tools depends on the expertise and efficiency of the NGO, and the willingness of institutions to disclose suitable information and data. It is this institutional interaction with civic technology that this study examines. The research explores empirical interview data gathered from government officials, public servants, campaigners and NGOs involved in the development and implementation of civic technologies in Chile, Argentina and Mexico. The findings identify the impact these technologies have had upon government bureaucracy, and the existing barriers to openness created by institutionalised behaviours and norms. Institutionalised attitudes to information rights and conventions are shown to inform the approach that government bureaucracy takes in the provision of information, and institutionalised procedural behaviour is shown to be a factor in frustrating NGOs attempting to implement civic technology….(More)”.

Making Citizen-Generated Data Work


Danny Lämmerhirt at Open Knowledge: “We are pleased to announce a new research series investigating how citizens and civil society create data to drive sustainable development. The series follows on from earlier papers on Democratising The Data Revolution and how citizen-generated data can change what public institutions measure. The first report “Making Citizen-Generated Data Work” asks what makes citizens and others want to produce and use citizen-generated data. It was written by me, Shazade Jameson, and Eko Prasetyo.

“The goal of Citizen-Generated Data is to monitor, advocate for, or drive change around an issue important to citizens”

The report demonstrates that citizen-generated data projects are rarely the work of individual citizens. Instead, they often depend on partnerships to thrive and are supported by civil society organisations, community-based organisations, governments, or businesses. These partners play a necessary role in providing resources, support, and knowledge to citizens. In return, they can harness data created by citizens to support their own mission. Thus, citizens and their partners often gain mutual benefits from citizen-generated data.

But if CGD projects rely on partnerships, who has to be engaged, and through which incentives, to enable them to achieve their goals? How are such multi-stakeholder projects organised, and which resources and expertise do partners bring into a project? What can other projects do to support and benefit their own citizen-generated data initiatives? This report offers recommendations to citizens, civil society organisations, policy-makers, donors, and others on how to foster stronger collaborations….(Read the full report here).”

The Open Science Prize


The Open Science Prize is a new initiative from the Wellcome Trust, US National Institutes of Health and Howard Hughes Medical Institute to encourage and support the prototyping and development of services, tools and/or platforms that enable open content – including publications, datasets, code and other research outputs – to be discovered, accessed and re-used in ways that will advance research, spark innovation and generate new societal benefits….
The volume of digital objects for research available to researchers and the wider public is greater now than ever before, and so are the opportunities to mine and extract value from existing open content and to generate new discoveries and other societal benefits. A key obstacle in realizing these benefits is the discoverability of open content, and the ability to access and utilize it.
The goal of this Prize is to stimulate the development of novel and ground-breaking tools and platforms to enable the reuse and repurposing of open digital research objects relevant to biomedical or health applications. A Prize model is necessary to help accelerate the field of open biomedical research beyond what current funding mechanisms can achieve. We also hope to demonstrate the huge potential value of Open Science approaches, and to generate excitement, momentum and further investment in the field….(More)”.

Global Standards in National Contexts: The Role of Transnational Multi-Stakeholder Initiatives in Public Sector Governance Reform


Paper by Brandon Brockmyer: “Multi-stakeholder initiatives (i.e., partnerships between governments, civil society, and the private sector) are an increasingly prevalent strategy promoted by multilateral, bilateral, and nongovernmental development organizations for addressing weaknesses in public sector governance. Global public sector governance MSIs seek to make national governments more transparent and accountable by setting shared standards for information disclosure and multi- stakeholder collaboration. However, research on similar interventions implemented at the national or subnational level suggests that the effectiveness of these initiatives is likely to be mediated by a variety of socio-political factors.

This dissertation examines the transnational evidence base for three global public sector governance MSIs — the Extractive Industries Transparency Initiative, the Construction Sector Transparency Initiative, and the Open Government Partnership — and investigates their implementation within and across three shared national contexts — Guatemala, the Philippines, and Tanzania. It asks whether and how these initiatives lead to improvements in proactive transparency (i.e., discretionary release of government data), demand-driven transparency (i.e., reforms that increase access to government information upon request), and accountability (i.e., the extent to which government officials are compelled to publicly explain their actions and/or face penalties or sanction for them), as well as the extent to which they provide participating governments with an opportunity to project a public image of transparency and accountability while maintaining questionable practices in these areas (i.e., openwashing).

The evidence suggests that global public sector governance MSIs often facilitate gains in proactive transparency by national governments, but that improvements in demand-driven transparency and accountability remain relatively rare. Qualitative comparative analysis reveals that a combination of multi-stakeholder power sharing and civil society capacity is sufficient to drive improvements in proactive transparency, while the absence of visible, high-level political support is sufficient to impede such reforms. The lack of demand-driven transparency or accountability gains suggests that national-level coalitions forged by global MSIs are often too narrow to successfully advocate for broader improvements to public sector governance. Moreover, evidence for openwashing was found in one-third of cases, suggesting that national governments sometimes use global MSIs to deliberately mislead international observers and domestic stakeholders about their commitment to reform….(More)”

How Artificial Intelligence Will Usher in the Next Stage of E-Government


Daniel Castro at GovTech: “Since the earliest days of the Internet, most government agencies have eagerly explored how to use technology to better deliver services to citizens, businesses and other public-sector organizations. Early on, observers recognized that these efforts often varied widely in their implementation, and so researchers developed various frameworks to describe the different stages of growth and development of e-government. While the models differ, they all identify the same general progression from the informational, for example websites that make government facts available online, to the interactive, such as two-way communication between government officials and users, to the transactional, like applications that allow users to access government services completely online.

However, we will soon see a new stage of e-government: the perceptive.

The defining feature of the perceptive stage will be that the work involved in interacting with government will be significantly reduced and automated for all parties involved. This will come about principally from the integration of artificial intelligence (AI) — computer systems that can learn, reason and decide at levels similar to those of a human — into government services to make them more insightful and intelligent.

Consider the evolution of the Department of Motor Vehicles. The informational stage made it possible for users to find the hours for the local office; the interactive stage made it possible to ask the agency a question by email; and the transactional stage made it possible to renew a driver’s license online.

In the perceptive stage, the user will simply say, “Siri, I need a driver’s license,” and the individual’s virtual assistant will take over — collecting any additional information from the user, coordinating with the government’s system and scheduling any in-person meetings automatically. That’s right: AI might finally end your wait at the DMV.
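
To make this flow concrete, here is a minimal sketch of the hand-off Castro describes. It is a sketch only: the intent label, the RenewalRequest fields, and the commented-out dmv_api call are all hypothetical stand-ins for whatever a real agency system would expose.

```python
# A minimal sketch of the "perceptive" hand-off described above. All names
# here are hypothetical: there is no real dmv_api, and a production system
# would need authentication, error handling, and a real scheduling service.

from dataclasses import dataclass

@dataclass
class RenewalRequest:
    license_number: str
    mailing_address: str
    needs_vision_test: bool  # decides whether an in-person visit is required

def handle_intent(intent: str, profile: dict) -> str:
    """Route a spoken request like 'I need a driver's license' end to end."""
    if intent != "renew_drivers_license":
        return "Sorry, I can't help with that yet."
    # 1. Collect any information the agency needs that the assistant
    #    does not already hold on the user's behalf.
    request = RenewalRequest(
        license_number=profile["license_number"],
        mailing_address=profile["mailing_address"],
        needs_vision_test=profile.get("years_since_vision_test", 99) > 8,
    )
    # 2. Coordinate with the government's system (hypothetical endpoint):
    # confirmation = dmv_api.submit_renewal(request)
    # 3. Schedule an in-person visit only when one is actually required.
    if request.needs_vision_test:
        return "Renewal submitted; I booked your vision test for Tuesday at 9am."
    return "Renewal submitted; your new license will arrive by mail."

print(handle_intent("renew_drivers_license",
                    {"license_number": "D123-456", "mailing_address": "12 Elm St"}))
```

The point of the design is that the assistant, not the citizen, does the collecting, coordinating and scheduling.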

In general, there are at least three ways that AI will impact government agencies. First, it will enable government workers to be more productive since the technology can be used to automate many tasks. …

Second, AI will create a faster, more responsive government. AI enables the creation of autonomous, intelligent agents — think online chatbots that answer citizens’ questions, real-time fraud detection systems that constantly monitor government expenditures, and virtual legislative assistants that quickly synthesize feedback from citizens to lawmakers.
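
Of these, the fraud-detection agent is the easiest to illustrate. A toy baseline, assuming nothing about any real government system, is to flag payments that fall far outside an agency's historical spending pattern:

```python
# A toy version of the real-time expenditure monitoring mentioned above.
# The payment history and threshold are invented for illustration; a real
# system would use far richer features than a single z-score.

import statistics

def is_anomalous(history: list[float], payment: float, z_threshold: float = 3.0) -> bool:
    """Flag a payment far outside the historical pattern for this agency."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(payment - mean) / stdev > z_threshold

past_payments = [1200.0, 980.0, 1105.0, 1250.0, 990.0, 1010.0]  # illustrative history
print(is_anomalous(past_payments, 1150.0))   # False: consistent with past spending
print(is_anomalous(past_payments, 47500.0))  # True: route to a human reviewer
```

Production systems would use far richer models, but the always-on, screen-everything character is what distinguishes this kind of agent from periodic audits.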

Third, AI will allow people to interact more naturally with digital government services…(More)”

Artificial Intelligence Could Help Colleges Better Plan What Courses They Should Offer


Jeffrey R. Young at EdSurge: Big data could help community colleges better predict how industries are changing so they can tailor their IT courses and other programs. After all, if Amazon can forecast what consumers will buy and prestock items in their warehouses to meet the expected demand, why can’t colleges do the same thing when planning their curricula, using predictive analytics to make sure new degree or certificate programs are started just in time for expanding job opportunities?

That’s the argument made by Gordon Freedman, president of the nonprofit National Laboratory for Education Transformation. He’s part of a new center that will do just that, by building a data warehouse that brings together up-to-date information on what skills employers need and what colleges currently offer—and then applying artificial intelligence to attempt to predict when sectors or certain employment needs might be expanding.
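
The article stays high-level, but the core mechanic can be sketched in a few lines: fit a trend to historical job-posting counts for a skill and extrapolate it forward. The figures below are invented, and a straight-line fit is a deliberately naive stand-in for the richer models such a center would actually use:

```python
# A back-of-the-envelope illustration of forecasting employer demand from
# historical job-posting counts. The figures are invented; a real pipeline
# would draw on the center's data warehouse and more sophisticated models.

from statistics import linear_regression  # Python 3.10+

quarters = [1, 2, 3, 4, 5, 6, 7, 8]                   # two years of quarterly data
postings = [310, 335, 360, 410, 455, 520, 600, 690]   # openings for one skill area

slope, intercept = linear_regression(quarters, postings)

# Extrapolate the fitted trend one year ahead.
for future_q in (9, 10, 11, 12):
    print(f"Quarter {future_q}: ~{slope * future_q + intercept:.0f} expected postings")
```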

He calls the approach “opportunity engineering,” and the center boasts some heavy-hitting players to assist in the efforts, including the University of Chicago, the San Diego Supercomputer Center and Argonne National Laboratory. It’s called the National Center for Opportunity Engineering & Analysis.

Ian Roark, vice president of workforce development at Pima Community College in Arizona, is among those eager for this kind of “opportunity engineering” to emerge.

He explains that when colleges want to start new programs, they face a long haul—it takes time to develop a new curriculum, put it through an internal review, and then send it through an accreditor….

Other players are already trying to translate the job market into a giant data set to spot trends. LinkedIn sits on one of the biggest troves of data, with hundreds of millions of job profiles, and has ambitions to create what it calls the “economic graph” of the economy. But not everyone is on LinkedIn, which attracts mainly those in white-collar jobs. And companies such as Burning Glass Technologies scan hundreds of thousands of job listings and attempt to provide real-time intelligence on what employers say they’re looking for. Even so, Freedman argues, those sources still don’t paint the full picture; they miss, for example, what new jobs are forming at companies.

“We need better information from the employer, better information from the job seeker and better information from the college, and that’s what we’re going after,” Freedman says…(More)”.

Rethinking how we collect, share, and use development results data


Development Gateway: “The international development community spends a great deal of time, effort, and money gathering data on thousands of indicators embedded in various levels of Results Frameworks. These data comprise outputs (school enrollment, immunization figures), program outcomes (educational attainment, disease prevalence), and, in some cases, impacts (changes in key outcomes over time).

Ostensibly, we use results data to allocate resources to the places, partners, and programs most likely to achieve lasting success. But is this data good enough – and is it used well enough – to genuinely increase development impact in priority areas?

Experience suggests that decision-makers at all levels may often face inadequate, incorrect, late, or incomplete results data. At the same time, a figurative “Tower of Babel” of both project-level M&E and program-level outcome data can make it difficult for agencies and organizations to share and use data effectively. Further, potential users may not have the skills, resources, or enabling environment to meaningfully analyze and apply results data to decisions. With these challenges in mind, the development community needs to re-think its investments in results data, making sure that the right users are able to collect, share, and use this information to maximum effect.

Our Initiative

To this end, Development Gateway (DG), with the support of the Bill & Melinda Gates Foundation, aims to “diagnose” the results data ecosystem in three countries, identifying ways to improve data quality, sharing, and use in the health and agriculture sectors. Some of our important questions include:

  • Quality: Who collects data and how? Is data quality adequate? Does the data meet actual needs? How much time does data collection demand? How can data collection, quality, and reporting be improved?
  • Sharing: How can we compare results data from different donors, governments, and implementers? Is there demand for comparability? Should data be shared more freely? If so, how?
  • Use: How is results data analyzed and used to inform actual policies and plans? Does (or can) access to results data improve decision-making? Do the right people have the right data? How else can (or should) we promote data use?…(More)”