National SDG Review: data challenges and opportunities


Press Release: “…the Partnership in Statistics for Development in the 21st Century (PARIS21) and Partners for Review launched a landmark new paper that identifies the factors preventing countries from fully exploiting their data ecosystem and proposes solutions to strengthen statistical capacities to achieve the 2030 Agenda for Sustainable Development.

Ninety percent of the data in the world has been created in the past two years, yet many countries with low statistical capacity struggle to produce, analyse and communicate the data necessary to advance sustainable development. At the same time, demand for more and better data and statistics is increasing massively, with international agreements like the 2030 Agenda placing unprecedented demands on countries to report on more than 230 indicators.

Using PARIS21’s Capacity Development 4.0 (CD 4.0) approach, the paper shows that leveraging data available in the data ecosystem for official reporting requires new capacity in terms of skills and knowledge, management, politics and power. The paper also shows that these capacities need to be developed at both the organisational and systemic level, which involves the various channels and interactions that connect different organisations.

Aimed at national statistics offices, development professionals and others involved in the national data ecosystem, the paper provides a roadmap that can help national statistical systems develop and strengthen the capacities of traditional and new actors in the data ecosystem to improve both the follow-up and review process of the 2030 Agenda and the data architecture for sustainable development at the national level…(More)”.

European E-Democracy in Practice


Book by Leonhard Hennen et al.: “This open access book explores how digital tools and social media technologies can contribute to better participation and involvement of EU citizens in European politics. By analyzing selected representative e-participation projects at the local, national and European governmental levels, it identifies the preconditions, best practices and shortcomings of e-participation practices in connection with EU decision-making procedures and institutions. The book features case studies on parliamentary monitoring, e-voting practices, and e-publics, and offers recommendations for improving the integration of e-democracy in European politics and governance. Accordingly, it will appeal to scholars as well as practitioners interested in identifying suitable e-participation tools for European institutions, thus helping to reduce the EU’s current democratic deficit….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.
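In symbols (an illustrative sketch; the notation below is ours and does not appear in the post), the law of one price pins every unit of a fungible commodity to a single market price, while a dataset’s value depends on its particular attributes:

% Illustrative notation only; these symbols are assumed, not taken from the post.
\[ P_i = P_j = P(g) \quad \text{for any two barrels } i, j \text{ of the same grade } g, \]
\[ V(d) = f(a_1, a_2, \dots, a_n, c), \]

where the hypothetical valuation \(f\) maps a dataset \(d\)’s attributes \(a_1, \dots, a_n\) and its commercialization potential \(c\) to a value that varies from one dataset to the next.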

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (which is why data is traded for free services and oil still requires cold, hard cash)….(More)”.
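To spell out the pricing logic (a back-of-the-envelope sketch using the post’s own figures; the symbols are assumed, not the author’s): under perfect competition the market-clearing price \(P^*\) falls to marginal cost \(MC\), so

% Illustrative derivation; the notation is assumed, not from the post.
\[ P^* = MC \;\Longrightarrow\; P^*_{\text{oil}} \in [\$5.49,\ \$21.66] \text{ per barrel}, \qquad P^*_{\text{data}} \approx MC_{\text{data}} \approx 0, \]

which is the sense in which a near-zero marginal cost leaves no cash price to charge for data, while each barrel of oil still commands at least its production cost.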

Principles alone cannot guarantee ethical AI


Paper by Brent Mittelstadt: “Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.

Surveillance giants: how the business model of Google and Facebook threatens human rights


Report by Amnesty International: “Google and Facebook help connect the world and provide crucial services to billions. To participate meaningfully in today’s economy and society, and to realize their human rights, people rely on access to the internet—and to the tools Google and Facebook offer. But Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model is inherently incompatible with the right to privacy and poses a threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination….(More)”.

A Republic of Equals: A Manifesto for a Just Society


Book by Jonathan Rothwell: “Political equality is the most basic tenet of democracy. Yet in America and other democratic nations, those with political power have special access to markets and public services. A Republic of Equals traces the massive income inequality observed in the United States and other rich democracies to politicized markets and avoidable gaps in opportunity—and explains why they are the root cause of what ails democracy today.

In this provocative book, economist Jonathan Rothwell draws on the latest empirical evidence from across the social sciences to demonstrate how rich democracies have allowed racial politics and the interests of those at the top to subordinate justice. He looks at the rise of nationalism in Europe and the United States, revealing how this trend overlaps with racial prejudice and is related to mounting frustration with a political status quo that thrives on income inequality and inefficient markets. But economic differences are by no means inevitable. Differences in group status by race and ethnicity are dynamic and have reversed themselves across continents and within countries. Inequalities persist between races in the United States because Black Americans are denied equal access to markets and public services. Meanwhile, elite professional associations carve out privileged market status for their members, leading to compensation in excess of their skills.

A Republic of Equals provides a bold new perspective on how to foster greater political and social equality, while moving societies closer to what a true republic should be….(More)”.

Responsible Data for Children


New Site and Report by UNICEF and The GovLab: “RD4C seeks to build awareness regarding the need for special attention to data issues affecting children—especially in this age of changing technology and data linkage; and to engage with governments, communities, and development actors to put the best interests of children and a child rights approach at the center of our data activities. The right data in the right hands at the right time can significantly improve outcomes for children. The challenge is to understand the potential risks and ensure that the collection, analysis and use of data on children does not undermine these benefits.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support best practice data responsibility; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management….(More)”.

Uses and Reuses of Scientific Data: The Data Creators’ Advantage


Paper by Irene V. Pasquetto, Christine L. Borgman, and Morgan F. Wofford: “Open access to data, as a core principle of open science, is predicated on assumptions that scientific data can be reused by other researchers. We test those assumptions by asking where scientists find reusable data, how they reuse those data, and how they interpret data they did not collect themselves. By conducting a qualitative meta-analysis of evidence on two long-term, distributed, interdisciplinary consortia, we found that scientists frequently sought data from public collections and from other researchers for comparative purposes such as “ground-truthing” and calibration. When they sought others’ data for reanalysis or for combining with their own data, which was relatively rare, most preferred to collaborate with the data creators.

We propose a typology of data reuses ranging from comparative to integrative. Comparative data reuse requires interactional expertise, which involves knowing enough about the data to assess their quality and value for a specific comparison such as calibrating an instrument in a lab experiment. Integrative reuse requires contributory expertise, which involves the ability to perform the action, such as reusing data in a new experiment. Data integration requires more specialized scientific knowledge and deeper levels of epistemic trust in the knowledge products. Metadata, ontologies, and other forms of curation benefit interpretation for any kind of data reuse. Based on these findings, we theorize the data creators’ advantage, that those who create data have intimate and tacit knowledge that can be used as barter to form collaborations for mutual advantage. Data reuse is a process that occurs within knowledge infrastructures that evolve over time, encompassing expertise, trust, communities, technologies, policies, resources, and institutions….(More)”.

A Belgian experiment that Aristotle would have approved of


The Economist: “In a sleepy corner of Belgium, a democratic experiment is under way. On September 16th, 24 randomly chosen Germanophones from the country’s eastern fringe took their seats in a Citizens’ Council. They will have the power to tell elected officials which issues matter and, for each such issue, to task a Citizens’ Assembly (also chosen at random) with brainstorming ideas on how to solve it. It’s an engaged citizen’s dream come true.

Belgium’s German-speakers are an often-overlooked minority next to their Francophone and Flemish countrymen. They are few in number—just 76,000 people out of a population of 11m—yet have a distinct identity, shaped by their proximity to Germany, the Netherlands and Luxembourg. Thanks to Belgium’s federal system, the community is thought to be the smallest region of the EU with its own legislative powers: a parliament of 25 representatives and a government of four decide on policies related to issues including education, sport, training and child benefits.

This new system takes democracy one step further. Based on selection by lottery—which Aristotle regarded as real democracy, in contrast to election, which he described as “oligarchy”—it was trialled in 2017 and won enthusiastic reviews from participants, officials and locals.

Under the “Ostbelgien Model”, the Citizens’ Council and the assemblies it convenes will run in parallel to the existing parliament and will set its legislative agenda. Parliamentarians must consider every proposal that wins support from 80% of the council, and must publicly defend any decision to take a different path.

Some see the project as a tool that could counter political discontent by involving ordinary folk in decision-making. But for Alexander Miesen, a Belgian senator who initiated the project, the motivation is cosier. “People would like to share their ideas, and they also have a lot of experience in their lives which you can import into parliament. It’s a win-win,” he says.

Selecting decision-makers by lottery is unusual these days, but not unknown: Ireland randomly selected the members of the Citizens’ Assembly that succeeded in breaking the deadlock on abortion laws. Referendums are a common way of settling important matters in several countries. But in Eupen, the largest town in the German-speaking region, citizens themselves will come up with the topics and policies which parliamentarians then review, rather than expressing consent to ideas proposed by politicians. Traditional decision-makers still have the final say, but “citizens can be sure that their ideas are part of the process,” says Mr Miesen….(More)”.

The Impact of Open Data on Public Procurement


Paper by Raphael Duguay, Thomas Rauter and Delphine Samuels: “We examine how the increased accessibility of public purchasing data affects competition, prices, contract allocations, and contract performance in government procurement. The European Union recently made its already public but difficult-to-access information about the process and outcomes of procurement awards available for bulk download in a user-friendly format.

Comparing government contracts above EU publication thresholds with contracts below them, we find that increasing the public accessibility of procurement data raises the likelihood of having competitive bidding processes, increases the number of bids per contract, and facilitates market entry by new vendors. Following the open data initiative, procurement prices decrease and EU government agencies are more likely to award contracts to the lowest bidder.

However, the increased competition comes at a cost—firms execute government contracts with more delays and ex-post price renegotiations. These effects are stronger for new vendors, complex procurement projects, and contracts awarded solely based on price. Overall, our results suggest that open procurement data facilitates competition and lowers ex-ante procurement prices but does not necessarily increase allocative efficiency in government contracting….(More)”.