The great ‘unnewsed’ struggle to participate fully in democracy


Polly Curtis in the Financial Times: “…We once believed in utopian dreams about how a digital world would challenge power structures, democratise information and put power into the hands of the audience. Twenty years ago, I even wrote a university dissertation on how the internet was going to re-democratise society.

Two decades on, power structures have certainly been disrupted, but that utopianism has now crashed into a different reality: a growing and largely unrecognised crisis of the “unnewsed” population. The idea of the unnewsed stems from the concept of the “unbanked”, people who are dispossessed of the structures of society that depend on having a bank account.

Not having news does the same for you in a democratic system. It is a global problem. In parts of the developing world the digital divide is defined by the cost of data, often splitting between rural and urban, and in some places male control of mobile phones exacerbates the disenfranchisement of women. Even in the affluent west, where data is cheap and there are more sim cards than people, that digital divide exists. In the US the concept of “news deserts”, communities with no daily local news outlet, is well established.

Last week, the Reuters Digital News Report, an annual survey of the digital news habits of 75,000 people in 38 countries, reported that 32 per cent now actively avoid the news — avoidance is up 6 percentage points overall and 11 points in the UK. When I dug into other data on news consumption, from the UK communications regulator Ofcom, I found that those who claim not to follow any news are younger, less educated, have lower incomes and are less likely to be in work than those who do. We don’t like to talk about it, but news habits are closely aligned to something that looks very like class. How people get their news explains some of this — and demonstrates the class divide in access to information.

Research by Oxford university’s Reuters Institute last year found that there is greater social inequality in news consumption online than offline. Whereas on average we all use the same number of news sources offline, those on the lower end of the socio-economic scale use significantly fewer sources online. Even the popular tabloids, with their tradition of campaigning news for mass audiences, now have higher social class readers online than in print. Instead of democratising information, there is a risk that the digital revolution is exacerbating gaps in news habits….(More)”.

Open Data Retrospective


Laura Bacon at Luminate: “Our global philanthropic organisation – previously the Government & Citizen Engagement (GCE) initiative at Omidyar Network, now Luminate – has been active in the open data space for over a decade. In that time, we have invested more than $50m in organisations and platforms that are working to advance open data’s potential, including Open Data Institute, IMCO, Open Knowledge, ITS Rio, Sunlight, GovLab, Web Foundation, Open Data Charter, and Open Government Partnership.

Ahead of our transition from GCE to Luminate last year, we wanted to take a step back and assess the field in order to cultivate a richer understanding of the evolution of open data—including its critical developments, drivers of change, and influential actors. This research would help inform our own strategy and provide valuable insight that we can share with the broader open data ecosystem.

First, what is open data? Open data is data that can be freely used, shared, and built upon by anyone, anywhere, for any purpose. At its best, open government data can empower citizens, improve governments, create opportunities, and help solve public problems. Have you used a transport app to find out when the next bus will arrive? Or a weather app to look up a forecast? When using a real estate website to buy or rent a home, have you also reviewed its proximity to health, education, and recreational facilities or checked out neighborhood crime rates? If so, your life has been impacted by open data.

The Open Data Retrospective

We commissioned Dalberg, a global strategic advisory firm, to conduct an Open Data Retrospective to explore: ‘how and why did the open data field evolve globally over the past decade?’ as well as ‘where is the field today?’ With the concurrent release of the report “The State of Open Data” – led by IDRC and the Open Data for Development initiative – we thought this would be a great time to make public the report we’d commissioned.

You can see Dalberg’s open data report here, and its affiliated data here. Please note, this presentation is a modification of the report. Several sections and slides have been removed for brevity and/or confidentiality. Therefore, some details about particular organisations and strategies are not included in this deck.

Evolution and impact

Dalberg’s report covers the trajectory of the open data field, characterising it as: inception (pre-2008), systematisation (2009-2010), expansion (2011-2015), and reevaluation (2016-2018). This characterisation varies by region and sector, but generally captures the evolution of the open data movement….(More)”.

Political Corruption in a World in Transition


Book edited by Jonathan Mendilow and Éric Phélippeau: “This book argues that the mainstream definitions of corruption, and the key expectations they embed concerning the relationship between corruption, democracy, and the process of democratization, require reexamination. Even critics who did not regard the stable institutions and legal clarity of veteran democracies as a cure-all assumed that the process of widening influence over government decision making and implementation allows non-elites to defend their interests, define the acceptable sources and uses of wealth, and demand government accountability. This has proved correct, especially insofar as ‘petty corruption’ is involved. But the assumption that corruption necessarily involves the evasion of democratic principles and a ‘market approach’ in which the corrupt seek to maximize profit does not exhaust the possible incentives for corruption, the types of behaviors involved (for obvious reasons, the tendency in the literature is to focus on bribery), or the range of situations that ‘permit’ corruption in democracies. In an effort to identify some of the problems that require recognition, and to offer a more exhaustive alternative, the chapters in this book focus on corruption in democratic settings (including NGOs and the United Nations, which have so far been largely ignored), concentrating mainly on behaviors other than bribery….(More)”.

The Age of Digital Interdependence


Report of the High-level Panel on Digital Cooperation: “The immense power and value of data in the modern economy can and must be harnessed to meet the SDGs, but this will require new models of collaboration. The Panel discussed potential pooling of data in areas such as health, agriculture and the environment to enable scientists and thought leaders to use data and artificial intelligence to better understand issues and find new ways to make progress on the SDGs. Such data commons would require criteria for establishing relevance to the SDGs, standards for interoperability, rules on access and safeguards to ensure privacy and security.

Anonymised data – information that is rendered anonymous in such a way that the data subject is not or no longer identifiable – about progress toward the SDGs is generally less sensitive and controversial than the use of personal data of the kind companies such as Facebook, Twitter or Google may collect to drive their business models, or facial and gait data that could be used for surveillance. However, personal data can also serve development goals, if handled with proper oversight to ensure its security and privacy.

For example, individual health data is extremely sensitive – but many people’s health data, taken together, can allow researchers to map disease outbreaks, compare the effectiveness of treatments and improve understanding of conditions. Aggregated data from individual patient cases was crucial to containing the Ebola outbreak in West Africa. Private and public sector healthcare providers around the world are now using various forms of electronic medical records. These help individual patients by making it easier to personalise health services, but the public health benefits require these records to be interoperable.
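The aggregation step described above is easy to picture in code. The sketch below is purely illustrative and not drawn from the Panel’s report: it rolls hypothetical individual case records up into regional weekly counts (the kind of summary that can map an outbreak) and suppresses very small counts, since those could still point back to identifiable individuals.

```python
# Illustrative sketch only (not from the Panel's report): turning sensitive
# individual-level health records into coarser, less sensitive aggregates.
import pandas as pd

# Hypothetical individual-level case records (one row per patient).
cases = pd.DataFrame({
    "region":    ["North", "North", "South", "South", "South", "East"],
    "week":      ["2019-W20", "2019-W20", "2019-W20",
                  "2019-W21", "2019-W21", "2019-W21"],
    "diagnosis": ["measles"] * 6,
})

# Aggregate to counts per region, week and diagnosis -- the kind of summary
# that can map an outbreak without exposing any single patient's record.
counts = (
    cases.groupby(["region", "week", "diagnosis"])
         .size()
         .reset_index(name="cases")
)

# Simple disclosure control: suppress very small cells, which could otherwise
# make individuals re-identifiable.
MIN_CELL = 3
counts["cases"] = counts["cases"].where(counts["cases"] >= MIN_CELL)

print(counts)
```

Real disclosure-control pipelines are considerably more involved, but the basic move is the same: publish the aggregate, withhold the individual record.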

There is scope to launch collaborative projects to test the interoperability of data, standards and safeguards across the globe. The World Health Assembly’s consideration of a global strategy for digital health in 2020 presents an opportunity to launch such projects, which could initially be aimed at global health challenges such as Alzheimer’s and hypertension.

Improved digital cooperation on a data-driven approach to public health has the potential to lower costs, build new partnerships among hospitals, technology companies, insurance providers and research institutes and support the shift from treating diseases to improving wellness. Appropriate safeguards are needed to ensure the focus remains on improving health care outcomes. With testing, experience and necessary protective measures as well as guidelines for the responsible use of data, similar cooperation could emerge in many other fields related to the SDGs, from education to urban planning to agriculture…(More)”.

Study finds that a GPS outage would cost $1 billion per day


Eric Berger at Ars Technica: “….one of the most comprehensive studies on the subject has assessed the value of this GPS technology to the US economy and examined what effect a 30-day outage would have—whether it’s due to a severe space weather event or “nefarious activity by a bad actor.” The study was sponsored by the US government’s National Institute of Standards and Technology and performed by a North Carolina-based research organization named RTI International.

Economic effect

As part of the analysis, researchers spoke to more than 200 experts in the use of GPS technology for various services, from agriculture to the positioning of offshore drilling rigs to location services for delivery drivers. (If they’d spoken to me, I’d have said the value of using GPS to navigate Los Angeles freeways and side streets was incalculable). The study covered a period from 1984, when the nascent GPS network was first opened to commercial use, through 2017. It found that GPS has generated an estimated $1.4 trillion in economic benefits during that time period.

The researchers found that the largest benefit, valued at $685.9 billion, came in the “telecommunications” category, including improved reliability and bandwidth utilization for wireless networks. Telematics (efficiency gains, cost reductions, and environmental benefits through improved vehicle dispatch and navigation) ranked as the second most valuable category at $325 billion. Location-based services on smartphones ranked third, valued at $215 billion.

Notably, the value of GPS technology to the US economy is growing. According to the study, 90 percent of the technology’s financial impact has come since just 2010, or just 20 percent of the study period. Some sectors of the economy are only beginning to realize the value of GPS technology, or are identifying new uses for it, the report says, indicating that its value as a platform for innovation will continue to grow.

Outage impact

In the case of some adverse event leading to a widespread outage, the study estimates that the loss of GPS service would have a $1 billion per-day impact, although the authors acknowledge this is at best a rough estimate. It would likely be higher during the planting season of April and May, when farmers are highly reliant on GPS technology for information about their fields.

To assess the effect of an outage, the study looked at several different variables. Among them was “precision timing” that enables a number of wireless services, including the synchronization of traffic between carrier networks, wireless handoff between base stations, and billing management. Moreover, higher levels of precision timing enable higher bandwidth and provide access to more devices. (For example, the implementation of 4G LTE technology would have been impossible without GPS technology)….(More)”

The New York Times has a course to teach its reporters data skills, and now they’ve open-sourced it


Joshua Benton at Nieman Labs: “The New York Times wants more of its journalists to have those basic data skills, and now it’s releasing the curriculum they’ve built in-house out into the world, where it can be of use to reporters, newsrooms, and lots of other people too.

Here’s Lindsey Rogers Cook, an editor for digital storytelling and training at the Times, and the sort of person who is willing to have “spreadsheets make my heart sing” appear under her byline:

Even with some of the best data and graphics journalists in the business, we identified a challenge: data knowledge wasn’t spread widely among desks in our newsroom and wasn’t filtering into news desks’ daily reporting.

Yet fluency with numbers and data has become more important than ever. While journalists once were fond of joking that they got into the field because of an aversion to math, numbers now comprise the foundation for beats as wide-ranging as education, the stock market, the Census, and criminal justice. More data is released than ever before — there are nearly 250,000 datasets on data.gov alone — and increasingly, government, politicians, and companies try to twist those numbers to back their own agendas…

We wanted to help our reporters better understand the numbers they get from sources and government, and give them the tools to analyze those numbers. We wanted to increase collaboration between traditional and non-traditional journalists… And with more competition than ever, we wanted to empower our reporters to find stories lurking in the hundreds of thousands of databases maintained by governments, academics, and think tanks. We wanted to give our reporters the tools and support necessary to incorporate data into their everyday beat reporting, not just in big and ambitious projects.

….You can access the Times’ training materials here. Some of what you’ll find:

  • An outline of the data skills the course aims to teach. It’s all run on Google Docs and Google Sheets; class starts with the uber-basics (mean! median! sum!), crosses the bridge of pivot tables, and then heads into data cleaning and more advanced formulas.
  • The full day-by-day outline of the Times’ three-week course, which of course you’re free to use or reshape to your newsroom’s needs.
  • It’s not just about cells, columns, and rows — the course also includes more journalism-based information around ethical questions, how to use data effectively inside a story’s narrative, and how best to work with colleagues in the graphics department.
  • Cheat sheets! If you don’t have time to dig too deeply, they’ll give a quick hit of information: one, two, three, four, five.
  • Data sets that you can use to work through the beginner, intermediate, and advanced stages of the training, including such journalism classics as census data, campaign finance data, and BLS data. But don’t be a dummy and try to write real news stories off these spreadsheets; the Times cautions in bold: “NOTE: We have altered many of these datasets for instructional purposes, so please download the data from the original source if you want to use it in your reporting.”
  • “How Not To Be Wrong,” which seems like a useful thing….(More)”
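For newsrooms that work in code rather than spreadsheets, the beginner steps in the outline above translate almost directly into pandas. The sketch below is a hypothetical illustration using made-up numbers, not the Times’ training data (which, as the Times cautions, should be downloaded from its original sources): summary statistics first, then the spreadsheet-style pivot table.

```python
# A minimal, made-up example of the spreadsheet basics the course starts with:
# sum, mean, median, and a pivot table. Not the Times' training data.
import pandas as pd

grants = pd.DataFrame({
    "state":  ["NY", "NY", "CA", "CA", "TX"],
    "year":   [2017, 2018, 2017, 2018, 2018],
    "amount": [120_000, 95_000, 240_000, 180_000, 60_000],
})

# The uber-basics.
print("total:",  grants["amount"].sum())
print("mean:",   grants["amount"].mean())
print("median:", grants["amount"].median())

# The spreadsheet "pivot table": total amount by state and year.
pivot = grants.pivot_table(
    index="state", columns="year", values="amount", aggfunc="sum", fill_value=0
)
print(pivot)
```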

Data & Policy: A new venue to study and explore policy–data interaction


Opening editorial by Stefaan G. Verhulst, Zeynep Engin and Jon Crowcroft: “…Policy–data interactions or governance initiatives that use data have been the exception rather than the norm, isolated prototypes and trials rather than an indication of real, systemic change. There are various reasons for the generally slow uptake of data in policymaking, and several factors will have to change if the situation is to improve. ….

  • Despite the number of successful prototypes and small-scale initiatives, policy makers’ understanding of data’s potential and its value proposition generally remains limited (Lutes, 2015). There is also limited appreciation of the advances data science has made over the last few years. This is a major limiting factor; we cannot expect policy makers to use data if they do not recognize what data and data science can do.
  • The recent (and justifiable) backlash against how certain private companies handle consumer data has had something of a reverse halo effect: There is a growing lack of trust in the way data is collected, analyzed, and used, and this often leads to a certain reluctance (or simply risk-aversion) on the part of officials and others (Engin, 2018).
  • Despite several high-profile open data projects around the world, much (probably the majority) of data that could be helpful in governance remains either privately held or otherwise hidden in silos (Verhulst and Young, 2017b). There remains a shortage not only of data but, more specifically, of high-quality and relevant data.
  • With few exceptions, the technical capacities of officials remain limited, and this has obviously negative ramifications for the potential use of data in governance (Giest, 2017).
  • It’s not just a question of limited technical capacities. There is often a vast conceptual and values gap between the policy and technical communities (Thompson et al., 2015; Uzochukwu et al., 2016); sometimes it seems as if they speak different languages. Compounding this difference in world views is the fact that the two communities rarely interact.
  • Yet data about the use of data, and evidence of its impact, remain sparse. The impetus to use more data in policy making is stymied by limited scholarship and a weak evidential basis showing whether and how data can be helpful. Without such evidence, data advocates are limited in their ability to make the case for more data initiatives in governance.
  • Data are not only changing the way policy is developed, but they have also reopened the debate around theory- versus data-driven methods in generating scientific knowledge (Lee, 1973; Kitchin, 2014; Chivers, 2018; Dreyfuss, 2017), thus directly questioning the evidence base for the utilization and implementation of data within policy making. A number of associated challenges are being discussed, such as: (i) traceability and reproducibility of research outcomes (due to “black box processing”); (ii) the use of correlation instead of causation as the basis of analysis, and the biases and uncertainties present in large historical datasets, which cause the replication and, in some cases, amplification of human cognitive biases and imperfections; and (iii) the incorporation of existing human knowledge and domain expertise into the scientific knowledge generation processes—among many other topics (Castelvecchi, 2016; Miller and Goodchild, 2015; Obermeyer and Emanuel, 2016; Provost and Fawcett, 2013).
  • Finally, we believe that there should be a sound underpinning for a new theory of what we call Policy–Data Interactions. To date, in reaction to the proliferation of data in the commercial world, theories of data management, privacy, and fairness have emerged. From the Human–Computer Interaction world, a manifesto of principles of Human–Data Interaction (Mortier et al., 2014) has found traction, which intends to reduce the asymmetry of power present in current design considerations of systems of data about people. However, we need a consistent, symmetric approach to the consideration of systems of policy and data, and how they interact with one another.

All these challenges are real, and they are sticky. We are under no illusions that they will be overcome easily or quickly….

During the past four conferences, we have hosted an incredibly diverse range of dialogues and examinations by key global thought leaders, opinion leaders, practitioners, and the scientific community (Data for Policy, 2015, 2016, 2017, 2019). What became increasingly obvious was the need for a dedicated venue to deepen and sustain the conversations and deliberations beyond the limitations of an annual conference. This leads us to today and the launch of Data & Policy, which aims to confront and mitigate the barriers to greater use of data in policy making and governance.

Data & Policy is a venue for peer-reviewed research and discussion about the potential for and impact of data science on policy. Our aim is to provide a nuanced and multistranded assessment of the potential and challenges involved in using data for policy and to bridge the “two cultures” of science and humanism—as CP Snow famously described in his lecture on “Two Cultures and the Scientific Revolution” (Snow, 1959). By doing so, we also seek to bridge the two other dichotomies that limit an examination of datafication and its interaction with policy from various angles: the divide between practice and scholarship; and between private and public…

So these are our principles: scholarly, pragmatic, open-minded, interdisciplinary, focused on actionable intelligence, and, most of all, innovative in how we will share insight and push at the boundaries of what we already know and what already exists. We are excited to launch Data & Policy with the support of Cambridge University Press and University College London, and we’re looking for partners to help us build it as a resource for the community. If you’re reading this manifesto, it means you have at least a passing interest in the subject; we hope you will be part of the conversation….(More)”.

From Planning to Prototypes: New Ways of Seeing Like a State


Fleur Johns at Modern Law Review: “All states have pursued what James C. Scott characterised as modernist projects of legibility and simplification: maps, censuses, national economic plans and related legislative programs. Many, including Scott, have pointed out blindspots embedded in these tools. As such criticism persists, however, the synoptic style of law and development has changed. Governments, NGOs and international agencies now aspire to draw upon immense repositories of digital data. Modes of analysis too have changed. No longer is legibility a precondition for action. Law‐ and policy‐making are being informed by business development methods that prefer prototypes over plans. States and international institutions continue to plan, but also seek insight from the release of minimally viable policy mock‐ups. Familiar critiques of law and development work, and arguments for its reform, have limited purchase on these practices, Scott’s included. Effective critical intervention in this field today requires careful attention to be paid to these emergent patterns of practice…(More)”.

Introducing ‘AI Commons’: A framework for collaboration to achieve global impact


Press Release: “Last week’s 3rd annual AI for Good Global Summit once again showcased the growing number of Artificial Intelligence (AI) projects with promise to advance the United Nations Sustainable Development Goals (SDGs).

Now, using the Summit’s momentum, AI innovators and humanitarian leaders are prepared to take the ‘AI for Good’ movement to the next level.

They are working together to launch an ‘AI Commons’ that aims to scale AI for Good projects and maximize their impact across the world.

The AI Commons will enable AI adopters to connect with AI specialists and data owners to align incentives for innovation and develop AI solutions to precisely defined problems.

“The concept of AI Commons has developed over three editions of the Summit and is now motivating implementation,” said ITU Secretary-General Houlin Zhao in closing remarks to the summit. “AI and data need to be a shared resource if we are serious about scaling AI for good. The community supporting the Summit is creating infrastructure to scale up their collaboration – to convert the principles underlying the Summit into global impact.”…

The AI Commons will provide an open framework for collaboration, a decentralized system to democratize problem solving with AI.

It aims to be a “knowledge space”, says Banifatemi, answering a key question: “How can problem solving with AI become common knowledge?”

“The goal is to be an open initiative, like a Linux effort, like an open-source network, where everyone can participate and we jointly share and we create an abundance of knowledge, knowledge of how we can solve problems with AI,” said Banifatemi.

AI development and application will build on the state of the art, enabling AI solutions to scale with the help of shared datasets, testing and simulation environments, AI models and associated software, and storage and computing resources….(More)”.

Privacy Enhancing Technologies


The Royal Society: “How can technologies help organisations and individuals protect data in practice and, at the same time, unlock opportunities for data access and use?

The Royal Society’s Privacy Enhancing Technologies project has been investigating this question and has launched a report (PDF) setting out the current use, development and limits of privacy enhancing technologies (PETs) in data analysis. 

The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our British Academy and Royal Society report Data management and use: Governance in the 21st century. For example, how can organisations best use data for public good whilst protecting sensitive information about individuals? Under other circumstances, how can they share data with groups with competing interests whilst protecting commercially or otherwise sensitive information?

Realising the full potential of large-scale data analysis may be constrained by important legal, reputational, political, business and competition concerns.  Certain risks can potentially be mitigated and managed with a set of emerging technologies and approaches often collectively referred to as ‘Privacy Enhancing Technologies’ (PETs). 

This disruptive set of technologies, combined with changes in wider policy and business frameworks, could enable the sharing and use of data in a privacy-preserving manner. These technologies also have the potential to reshape the data economy and to change the trust relationships between citizens, governments and companies.

This report provides a high-level overview of five current and promising PETs of a diverse nature, with their respective readiness levels and illustrative case studies from a range of sectors, with a view to informing, in particular, applied data science research and the digital strategies of government departments and businesses. The report also includes recommendations on how the UK could fully realise the potential of PETs and allow their use on a greater scale.
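The excerpt does not name the five technologies the report surveys, but one technique commonly grouped under the PET label is differential privacy, in which calibrated noise is added to query results so that no single individual’s data can be inferred from the output. The sketch below is a generic illustration under that assumption, not an example taken from the report.

```python
# Generic illustration of differential privacy on a count query -- one technique
# often discussed under the PET umbrella; not an example from the report.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism for a counting query: a count has sensitivity 1,
    so noise with scale 1/epsilon gives epsilon-differential privacy."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1_203  # e.g. patients with a given condition in a registry
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: reported count ~ {dp_count(true_count, eps):,.1f}")
```

Smaller values of epsilon give stronger privacy but noisier answers — the basic utility/privacy trade-off such techniques have to manage.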

The project was informed by a series of conversations and evidence gathering events, involving a range of stakeholders across academia, government and the private sector (also see the project terms of reference and Working Group)….(More)”.