Decoding Data Use: What evidence do world leaders want to achieve their goals?


Paper by Samantha Custer, Takaaki Masaki, and Carolyn Iwicki: “Information is ‘never the hero’, but it plays a supporting role in how leaders allocate scarce resources and accelerate development in their communities. Even in low- and middle-income countries, decision-makers have ample choices in sourcing evidence from a growing field of domestic and international data providers. However, more information is not necessarily better if it misses the mark for what leaders need to monitor their country’s progress. Claims that information is the ‘world’s most valuable resource’ and calls for a ‘data revolution’ will ring hollow if we can’t decode what leaders actually use — and why.

In a new report, Decoding Data Use: How leaders source data and use it to accelerate development, AidData reveals what 3,500 leaders from 126 countries have to say about the types of data or analysis they use, from what sources, and for which purposes in the context of their work. We analyze responses to AidData’s 2017 Listening to Leaders (LTL) Survey to offer insights to help funders, producers, advocates, and infomediaries of development data understand how to position themselves for greater impact….(More)”.

Data for Development


The 2017 volume of the Development Co-operation Report by the OECD focuses on Data for Development: “Big Data” and “the Internet of Things” are more than buzzwords: the data revolution is transforming the way that economies and societies are functioning across the planet. The Sustainable Development Goals along with the data revolution are opportunities that should not be missed: more and better data can help boost inclusive growth, fight inequalities and combat climate change. These data are also essential to measure and monitor progress against the Sustainable Development Goals.

The value of data in enabling development is uncontested. Yet, there continue to be worrying gaps in basic data about people and the planet and weak capacity in developing countries to produce the data that policy makers need to deliver reforms and policies that achieve real, visible and long-lasting development results. At the same time, investing in building statistical capacity – which represented about 0.30% of ODA in 2015 – is not a priority for most providers of development assistance.

There is a need for stronger political leadership, greater investment and more collective action to bridge the data divide for development. With the unfolding data revolution, developing countries and donors have a unique chance to act now to boost data production and use for the benefit of citizens. This report sets out priority actions and good practices that will help policy makers and providers of development assistance to bridge the global data divide, notably by strengthening statistical systems in developing countries to produce better data for better policies and better lives…(More)”.

Evidence-Based Policy Mistakes


Kaushik Basu at Project Syndicate: “… it is important to recognize that data alone are not enough to determine future expectations or policies. While there is certainly value in collecting data (via, for example, randomized controlled trials), there is also a need for deductive and inductive reasoning, guided by common sense – and not just on the part of experts. By dismissing the views and opinions of ordinary people, economists may miss out on crucial insights.

People’s everyday experiences provide huge amounts of potentially useful information. While a common-sense approach based on individual experience is not the most “scientific,” it should not be dismissed out of hand. A meteorologist might detect a coming storm by plugging data from myriad sources – atmospheric sensors, weather balloons, radar, and satellites – into complex computer models. But that doesn’t mean that the sight of gathering clouds in the sky is not also a legitimate sign that one might need an umbrella – even if the weather forecast promises sunshine.

Intuition and common sense have been critical to our evolution. After all, had humans not been able to draw reasonably accurate conclusions about the world through experience or observation, we wouldn’t have survived as a species.

The development of more systematic approaches to scientific inquiry has not diminished the need for such intuitive reasoning. In fact, there are important, non-obvious truths that are best deduced using pure reason.

Consider the Pythagorean Theorem, which establishes the relation among the three sides of a right triangle. If all conclusions had to be reached by combing through large data sets, Pythagoras, who is believed to have devised the theorem’s first proof, would have had to measure a huge number of right triangles. In any case, critics would likely argue that he had looked at a biased sample, because all of the triangles examined were collected from the Mediterranean region.
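To see the contrast concretely: for a right triangle with legs $a$ and $b$ and hypotenuse $c$, the theorem states

$$a^2 + b^2 = c^2.$$

A single deduction covers every right triangle at once; the 3-4-5 triangle ($9 + 16 = 25$) is one instance of the result, not a data point needed to establish it.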

Inductive reasoning, too, is vital to reach certain kinds of knowledge. We “know” that an apple will not remain suspended in mid-air, because we have seen so many objects fall. But such reasoning is not foolproof. As Bertrand Russell pointed out, “The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.”

Of course, many policymakers – not just the likes of Erdoğan and Trump – make bad decisions not because of a misunderstanding of the evidence, but because they prefer to pursue politically expedient measures that benefit their benefactors or themselves. In such cases, exposing the inappropriateness of their supposed evidence may be the only option.

But, for the rest, the imperative must be to advocate for a more comprehensive approach, in which leaders use “reasoned intuition” to draw effective conclusions based on hard data. Only then will the age of effective evidence-based policymaking really begin….(More)”.

GovEx Launches First International Open Data Standards Directory


GT Magazine: “…A nonprofit gov tech group has created an international open data standards directory, aspiring to give cities a singular resource for guidance on formatting data they release to the public…The nature of municipal data is nuanced and diverse, and the format in which it is released often varies depending on subject matter. In other words, a format that works well for public safety data is not necessarily the same that works for info about building permits, transit or budgets. Not having a coordinated and agreed-upon resource to identify the best standards for these different types of info, Nicklin said, creates problems.

One such problem is that it can be time-consuming and challenging for city government data workers to research and identify ideal formats for data. Another is that the lack of info leads to discord between different jurisdictions, meaning one city might format a data set about economic development in an entirely different way than another, making collaboration and comparisons problematic.

What the directory does is provide a list of standards that are in use within municipal governments, as well as an evaluation based on how frequent that use is, whether the format is machine-readable, and whether users have to pay to license it, among other factors.
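To make those criteria concrete, a single directory entry might capture fields like the following. This is a hypothetical sketch for illustration only – the field names are not GovEx’s actual schema:

```python
# Hypothetical sketch of one open data standards directory entry.
# Field names are illustrative, not GovEx's actual schema.
from dataclasses import dataclass
from typing import List

@dataclass
class StandardEntry:
    name: str                    # e.g., "GTFS" for transit schedules
    domain: str                  # subject area: transit, permits, budgets...
    adoption: str                # how widely governments are known to use it
    machine_readable: bool       # can software parse the format directly?
    license_fee_required: bool   # must users pay to license the standard?
    languages: List[str]         # e.g., ["en", "es"]

# GTFS is a widely adopted, machine-readable, freely licensed standard
# for transit schedules -- the kind of entry such a directory evaluates.
gtfs = StandardEntry(
    name="GTFS",
    domain="transit",
    adoption="widespread",
    machine_readable=True,
    license_fee_required=False,
    languages=["en"],
)
```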

The directory currently contains 60 standards, some of which are in Spanish, and those involved with the project say they hope to expand their efforts to include more languages. There is also a crowdsourcing component to the directory, in that users are encouraged to make additions and updates….(More)”.

The frontiers of data interoperability for sustainable development


Report from the Joined-Up Data Standards (JUDS) project: “…explores where progress has been made, what challenges still remain, and how the new Collaborative on SDG Data Interoperability will play a critical role in moving forward the agenda for interoperability policy.

Driven by global development agendas such as the 2030 Agenda and the Open Data movement, there is an ever-growing need for a more holistic picture of development processes worldwide, and for interoperability solutions that can be scaled. This requires the ability to join up data across multiple data sources and standards to create actionable information.
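What “joining up” means in practice can be shown with a toy example – all names and figures below are invented for illustration. Two sources combine trivially only when they share a common identifier, which is exactly what interoperability standards are meant to guarantee:

```python
# Minimal sketch: joining two hypothetical data sources that share a
# district code. All names and numbers are invented for illustration.
import pandas as pd

clinic_visits = pd.DataFrame({
    "district_code": ["D01", "D02", "D03"],  # shared identifier
    "visits_2017": [1200, 830, 950],
})
health_budget = pd.DataFrame({
    "district_code": ["D01", "D02", "D03"],
    "budget_usd": [50_000, 41_000, 47_500],
})

# The merge is only this easy because both sources use the same code
# list; without a shared standard, reconciling identifiers is the hard part.
joined = clinic_visits.merge(health_budget, on="district_code")
print(joined)
```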

Solutions that create value for front-line decision makers — health centre managers, local school authorities or water and sanitation committees, for example, and those engaged in government accountability — will be crucial to meet the data needs of the SDGs, and do so in an internationally comparable way. While progress has been made at both a national and international level, moving from principle to practice by embedding interoperability into day-to-day work continues to present challenges.

Based on research and learning generated by the JUDS project team at Development Initiatives and Publish What You Fund, as well as inputs from interviews with key stakeholders, this report aims to provide an overview of the different definitions and components of interoperability and why it is important, and an outline of the current policy landscape.

We offer a set of guiding principles that we consider essential to implementing interoperability, and contextualise the five frontiers of interoperability for sustainable development that we have identified. The report also offers recommendations on what the role of the Collaborative could be in this fast-evolving landscape….(More)”.

Leveraging the disruptive power of artificial intelligence for fairer opportunities


Makada Henry-Nickie at Brookings: “According to President Obama’s Council of Economic Advisers (CEA), approximately 3.1 million jobs will be rendered obsolete or permanently altered as a consequence of artificial intelligence technologies. Artificial intelligence (AI) will, for the foreseeable future, have a significant disruptive impact on jobs. That said, this disruption can create new opportunities if policymakers choose to harness them—including some with the potential to help address long-standing social inequities. Investing in quality training programs that deliver premium skills, such as computational analysis and cognitive thinking, provides a real opportunity to leverage AI’s disruptive power.

AI’s disruption presents a clear challenge: traditional skilled workers face competition from data scientists and code engineers, whose skills carry across industries and who can adapt quickly to new contexts. Data analytics has become an indispensable feature of successful companies across all industries….

Investing in high-quality education and training programs is one way that policymakers proactively attempt to address the workforce challenges presented by artificial intelligence. It is essential that we make affirmative, inclusive choices to ensure that marginalized communities participate equitably in these opportunities.

Policymakers should prioritize understanding the demographics of those most likely to lose jobs in the short-run. As opposed to obsessively assembling case studies, we need to proactively identify policy entrepreneurs who can conceive of training policies that equip workers with technical skills of “long-game” relevance. As IBM points out, “[d]ata democratization impacts every career path, so academia must strive to make data literacy an option, if not a requirement, for every student in any field of study.”

Machines are an equal opportunity displacer, blind to color and socioeconomic status. Effective policy responses require collaborative data collection and coordination among key stakeholders—policymakers, employers, and educational institutions—to identify at-risk worker groups and to inform workforce development strategies. Machine substitution is purely an efficiency game in which workers overwhelmingly lose. Nevertheless, we can blunt these effects by identifying critical leverage points….

Policymakers can choose to harness AI’s disruptive power to address workforce challenges and redesign fair access to opportunity simultaneously. We should train our collective energies on identifying practical policies that update our current agrarian-based education model, which unfairly disadvantages children from economically segregated neighborhoods…(More)”.

Open Data in Developing Economies: Toward Building an Evidence Base on What Works and How


New book by Stefaan Verhulst and Andrew Young: “Recent years have witnessed considerable speculation about the potential of open data to bring about wide-scale transformation. The bulk of existing evidence about the impact of open data, however, focuses on high-income countries. Much less is known about open data’s role and value in low- and middle-income countries, and more generally about its possible contributions to economic and social development.

Open Data in Developing Economies features in-depth case studies on how open data is having an impact across the developing world – from an agriculture initiative in Colombia to data-driven healthcare projects in Uganda and South Africa to crisis response in Nepal. The analysis built on these case studies aims to create actionable intelligence regarding:

(a) the conditions under which open data is most (and least) effective in development, presented in the form of a Periodic Table of Open Data;

(b) strategies to maximize the positive contributions of open data to development; and

(c) the means for limiting open data’s harms on developing countries.

Endorsements:

“An empirically grounded assessment that helps us move beyond the hype that greater access to information can improve the lives of people and outlines the enabling factors for open data to be leveraged for development.” – Ania Calderon, Executive Director, International Open Data Charter

“This book is compulsory reading for practitioners, researchers and decision-makers exploring how to harness open data for achieving development outcomes. In an intuitive and compelling way, it provides valuable recommendations and critical reflections to anyone working to share the benefits of an increasingly networked and data-driven society.” – Fernando Perini, Coordinator of the Open Data for Development (OD4D) Network, International Development Research Centre, Canada

Download full-text PDF – See also: http://odimpact.org/

Transatlantic Data Privacy


Paul M. Schwartz and Karl-Nikolaus Peifer in Georgetown Law Journal: “International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country—including the United States—that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance.

In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems’ images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. In the United States, in contrast, the focus is on a “marketplace discourse” about personal information and the safeguarding of “privacy consumers.” In the United States, data privacy law focuses on protecting consumers in a data marketplace.

This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation, an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU–U.S. treaty signed in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts. The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures….(More)”.

Measuring Tomorrow: Accounting for Well-Being, Resilience, and Sustainability in the Twenty-First Century


Book by Éloi Laurent on “How moving beyond GDP will improve well-being and sustainability…Never before in human history have we produced so much data, and this empirical revolution has shaped economic research and policy profoundly. But are we measuring, and thus managing, the right things—those that will help us solve the real social, economic, political, and environmental challenges of the twenty-first century? In Measuring Tomorrow, Éloi Laurent argues that we need to move away from narrowly useful metrics such as gross domestic product and instead use broader ones that aim at well-being, resilience, and sustainability. By doing so, countries will be able to shift their focus away from infinite and unrealistic growth and toward social justice and quality of life for their citizens.

The time has come for these broader metrics to become more than just descriptive, Laurent argues; applied carefully by private and public decision makers, they can foster genuine progress. He begins by taking stock of the booming field of well-being and sustainability indicators, and explains the insights that the best of these can offer. He then shows how these indicators can be used to develop new policies, from the local to the global….(More)”.

Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers


Leslie Harris at the Future of Privacy Forum: “Data has become the currency of the modern economy. A recent study projects the global volume of data to grow from about 0.8 zettabytes (ZB) in 2009 to more than 35 ZB in 2020, most of it generated within the last two years and held by the corporate sector.
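For scale, that projection implies the global stock of data multiplying roughly 44-fold over eleven years – a compound annual growth rate of about 41%:

$$\left(\frac{35}{0.8}\right)^{1/11} = 43.75^{1/11} \approx 1.41.$$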

As the cost of data collection and storage becomes cheaper and computing power increases, so does the value of data to the corporate bottom line. Powerful data science techniques, including machine learning and deep learning, make it possible to search, extract and analyze enormous sets of data from many sources in order to uncover novel insights and engage in predictive analysis. Breakthrough computational techniques allow complex analysis of encrypted data, making it possible for researchers to protect individual privacy, while extracting valuable insights.
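One family of such techniques is homomorphic encryption, which allows arithmetic to be performed directly on ciphertexts so an analyst never sees the underlying values. The sketch below is a toy, deliberately insecure illustration of the additively homomorphic Paillier scheme (tiny primes, hand-picked randomness); it shows only the principle, not any production system referenced in the report:

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts. Insecure demo parameters.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433            # toy primes; real deployments use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                  # standard generator choice
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)       # valid because g = n + 1 (requires Python 3.8+)

def encrypt(m, r):
    # r must be coprime with n; fixed here for reproducibility
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(20, 5117), encrypt(22, 9091)
assert decrypt((c1 * c2) % n2) == 42   # 20 + 22 computed on encrypted data
print("sum recovered from ciphertexts:", decrypt((c1 * c2) % n2))
```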

At the same time, these newfound data sources hold significant promise for advancing scholarship, supporting evidence-based policymaking and more robust government statistics, and shaping more impactful social policies and interventions. But because most of this data is held by the private sector, it is rarely available for these purposes, posing what many have argued is a serious impediment to scientific progress.

A variety of reasons have been posited for the reluctance of the corporate sector to share data for academic research. Some have suggested that the private sector doesn’t realize the value of their data for broader social and scientific advancement. Others suggest that companies have no “chief mission” or public obligation to share. But most observers describe the challenge as complex and multifaceted. Companies face a variety of commercial, legal, ethical, and reputational risks that serve as disincentives to sharing data for academic research, with privacy – particularly the risk of reidentification – an intractable concern. For companies, striking the right balance between the commercial and societal value of their data, the privacy interests of their customers, and the interests of academics presents a formidable dilemma.

To be sure, there is evidence that some companies are beginning to share for academic research. For example, a number of pharmaceutical companies are now sharing clinical trial data with researchers, and a number of individual companies have taken steps to make data available as well. What is more, companies are also increasingly providing open or shared data for other important “public good” activities, including international development, humanitarian assistance and better public decision-making. Some are contributing to data collaboratives that pool data from different sources to address societal concerns. Yet, it is still not clear whether and to what extent this “new era of data openness” will accelerate data sharing for academic research.

Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. In this report, we aim to contribute to the literature by seeking the “ground truth” from the corporate sector about the challenges they encounter when they consider making data available for academic research. We hope that the impressions and insights gained from this first look at the issue will help formulate further research questions, inform the dialogue between key stakeholders, and identify constructive next steps and areas for further action and investment….(More)”.