National SDG Review: data challenges and opportunities


Press Release: “…the Partnership in Statistics for Development in the 21st Century (PARIS21) and Partners for Review launched a landmark new paper that identifies the factors preventing countries from fully exploiting their data ecosystem and proposes solutions for strengthening statistical capacities to achieve the 2030 Agenda for Sustainable Development.

Ninety percent of the data in the world has been created in the past two years, yet many countries with low statistical capacity struggle to produce, analyse and communicate the data necessary to advance sustainable development. At the same time, demand for more and better data and statistics is increasing massively, with international agreements like the 2030 Agenda placing unprecedented demand on countries to report on more than 230 indicators.

Using PARIS21’s Capacity Development 4.0 (CD 4.0) approach, the paper shows that leveraging data available in the data ecosystem for official reporting requires new capacity in terms of skills and knowledge, management, politics and power. The paper also shows that these capacities need to be developed at both the organisational and systemic level, which involves the various channels and interactions that connect different organisations.

Aimed at national statistics offices, development professionals and others involved in the national data ecosystem, the paper provides a roadmap that can help national statistical systems develop and strengthen the capacities of traditional and new actors in the data ecosystem to improve both the follow-up and review process of the 2030 Agenda and the data architecture for sustainable development at the national level…(More)”.

Responsible Data for Children


New Site and Report by UNICEF and The GovLab: “RD4C seeks to build awareness regarding the need for special attention to data issues affecting children—especially in this age of changing technology and data linkage; and to engage with governments, communities, and development actors to put the best interests of children and a child rights approach at the center of our data activities. The right data in the right hands at the right time can significantly improve outcomes for children. The challenge is to understand the potential risks and ensure that the collection, analysis and use of data on children does not undermine these benefits.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support best practice data responsibility; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management…(More)”.

Google’s ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans


Rob Copeland at Wall Street Journal: “Google is engaged with one of the U.S.’s largest health-care systems on a project to collect and crunch the detailed personal-health information of millions of people across 21 states.

The initiative, code-named “Project Nightingale,” appears to be the biggest effort yet by a Silicon Valley giant to gain a toehold in the health-care industry through the handling of patients’ medical data. Amazon.com Inc., Apple Inc. and Microsoft Corp. are also aggressively pushing into health care, though they haven’t yet struck deals of this scope.

Google began Project Nightingale in secret last year with St. Louis-based Ascension, a Catholic chain of 2,600 hospitals, doctors’ offices and other facilities, with the data sharing accelerating since summer, according to internal documents.

The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth….

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

In a news release issued after The Wall Street Journal reported on Project Nightingale on Monday, the companies said the initiative is compliant with federal health law and includes robust protections for patient data….(More)”.

Finland’s model in utilising forest data


Report by Matti Valonen et al: “The aim of this study is to depict the Finnish Forest Centre’s Metsään.fi-website’s background, objectives and implementation and to assess its needs for development and future prospects. The Metsään.fi-service included in the Metsään.fi-website is a free e-service for forest owners and corporate actors (companies, associations and service providers) in the forest sector, whose aim is to support active decision-making among forest owners by offering forest resource data and maps on forest properties, by making contact with the authorities easier through online services, and by acting as a platform for offering forest services, among other things.

In addition to the Metsään.fi-service, the website includes open forest data services that offer the users national forest resource data that is not linked with personal information.

Private forests are in a key position as raw material sources for the traditional and new forest-based bioeconomy. In addition to wood, the forests produce non-timber forest products (for example, berries and mushrooms), opportunities for recreation and other ecosystem services.

Private forests cover roughly 60 percent of forest land but supply about 80 percent of the domestic wood used by the forest industry. In 2017 the value of forest industry production was 21 billion euros, a fifth of the entire industrial production value in Finland. Forest industry exports in 2017 were worth about 12 billion euros, covering a fifth of the entire export of goods. The forest sector is therefore important for Finland’s national economy…(More)”.

OMB rethinks ‘protected’ or ‘open’ data binary with upcoming Evidence Act guidance


Jory Heckman at Federal News Network: “The Foundations for Evidence-Based Policymaking Act has ordered agencies to share their datasets internally and with other government partners — unless, of course, doing so would break the law.

Nearly a year after President Donald Trump signed the bill into law, agencies still have only a murky idea of what data they can share, and with whom. But soon, they’ll have more nuanced options for ranking the sensitivity of their datasets before sharing them with others.

Chief Statistician Nancy Potok said the Office of Management and Budget will soon release proposed guidelines for agencies to provide “tiered” access to their data, based on the sensitivity of that information….

OMB, as part of its Evidence Act rollout, will also rethink how agencies ensure protected access to data for research. Potok said agency officials expect to pilot a single application governmentwide for people seeking access to sensitive data not available to the public.

The pilot resembles plans for a National Secure Data Service envisioned by the Commission on Evidence-Based Policymaking, an advisory group whose recommendations laid the groundwork for the Evidence Act.

“As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects,” the commission wrote in its 2017 final report.

In an effort to strike a balance between access and privacy, Potok said OMB has also asked agencies to provide a list of the statutes that prohibit them from sharing data amongst themselves….(More)”.

Geolocation Data for Pattern of Life Analysis in Lower-Income Countries


Report by Eduardo Laguna-Muggenburg, Shreyan Sen and Eric Lewandowski: “Urbanization processes in the developing world are often associated with the creation of informal settlements. These areas frequently have few or no public services, exacerbating inequality even in the context of substantial economic growth.

In the past, the high costs of gathering data through traditional surveying methods made it challenging to study how these under-served areas evolve through time and in relation to the metropolitan area to which they belong. However, the advent of mobile phones, and smartphones in particular, presents an opportunity to generate new insights into these old questions.

In June 2019, Orbital Insight and the United Nations Development Programme (UNDP) Arab States Human Development Report team launched a collaborative pilot program assessing the feasibility of using geolocation data to understand patterns of life among the urban poor in Cairo, Egypt.

The objectives of this collaboration were to assess the feasibility of using geolocation data (and conditionally pursue preliminary analysis) to create near-real-time population density maps, to understand where residents of informal settlements tend to work during the day, and to classify universities by the percentage of students living in informal settlements.

The report is organized as follows. In Section 2 we describe the data and its limitations. In Section 3 we briefly explain the methodological background. Section 4 summarizes the insights derived from the data for the Egyptian context. Section 5 concludes….(More)”.

The Value of Data: Towards a Framework to Redistribute It


Paper by Maria Savona: “This note attempts a systematisation of different pieces of literature that underpin the recent policy and academic debate on the value of data. It mainly poses foundational questions around the definition, economic nature and measurement of data value, and discusses the opportunity to redistribute it. It then articulates a framework to compare ways of implementing redistribution, distinguishing between data as capital, data as labour or data as intellectual property. Each of these raises challenges, revolving around the notions of data property and data rights, which are also briefly discussed. The note concludes by indicating areas for policy considerations and a research agenda to shape the future structure of data governance at large….(More)”.

Algorithmic futures: The life and death of Google Flu Trends


Vincent Duclos in Medicine Anthropology Theory: “In the last few years, tracking systems that harvest web data to identify trends, calculate predictions, and warn about potential epidemic outbreaks have proliferated. These systems integrate crowdsourced data and digital traces, collecting information from a variety of online sources, and they promise to change the way governments, institutions, and individuals understand and respond to health concerns. This article examines some of the conceptual and practical challenges raised by the online algorithmic tracking of disease by focusing on the case of Google Flu Trends (GFT). Launched in 2008, GFT was Google’s flagship syndromic surveillance system, specializing in ‘real-time’ tracking of outbreaks of influenza. GFT mined massive amounts of data about online search behavior to extract patterns and anticipate the future of viral activity. But it did a poor job, and Google shut the system down in 2015. This paper focuses on GFT’s shortcomings, which were particularly severe during flu epidemics, when GFT struggled to make sense of the unexpected surges in the number of search queries. I suggest two reasons for GFT’s difficulties. First, it failed to keep track of the dynamics of contagion, at once biological and digital, as it affected what I call here the ‘googling crowds’. Search behavior during epidemics in part stems from a sort of viral anxiety not easily amenable to algorithmic anticipation, to the extent that the algorithm’s predictive capacity remains dependent on past data and patterns. Second, I suggest that GFT’s troubles were the result of how it collected data and performed what I call ‘epidemic reality’. GFT’s data became severed from the processes Google aimed to track, and the data took on a life of their own: a trackable life, in which there was little flu left. The story of GFT, I suggest, offers insight into contemporary tensions between the indomitable intensity of collective life and stubborn attempts at its algorithmic formalization….(More)”.
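For readers curious about the mechanics behind the abstract, the sketch below is a deliberately simplified illustration, not Google's actual (proprietary) model: published descriptions of GFT relate the frequency of flu-related search queries to officially reported influenza-like-illness (ILI) rates through a regression fitted on historical data. The weekly figures used here are invented for illustration. The sketch also makes the abstract's point concrete: because the fitted relationship is learned entirely from past data, any surge in searching driven by anxiety rather than illness inflates the estimate.

```python
import numpy as np

# Hypothetical weekly figures (invented for illustration): the fraction of
# all searches matching flu-related queries, and the reported
# influenza-like-illness (ILI) rate for the same weeks.
query_fraction = np.array([0.0012, 0.0018, 0.0031, 0.0044, 0.0052, 0.0039])
ili_rate = np.array([0.011, 0.016, 0.027, 0.038, 0.045, 0.034])

def logit(p):
    """Log-odds transform; GFT-style models related the two series on this scale."""
    return np.log(p / (1.0 - p))

# Fit intercept and slope by ordinary least squares on the historical weeks.
X = np.column_stack([np.ones_like(query_fraction), logit(query_fraction)])
beta, *_ = np.linalg.lstsq(X, logit(ili_rate), rcond=None)

def estimate_ili(new_query_fraction):
    """Map a new week's query fraction to an estimated ILI rate."""
    z = beta[0] + beta[1] * logit(new_query_fraction)
    return 1.0 / (1.0 + np.exp(-z))  # inverse logit

# A spike in anxious searching (for example during intense media coverage)
# raises the query fraction, and therefore the estimate, whether or not
# more people are actually ill.
print(round(estimate_ili(0.0060), 3))
```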

Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices


New report by Stefaan Verhulst, Andrew Young, Michelle Winowatan, and Andrew J. Zahuranec: “To address the challenges of our times, we need both new solutions and new ways to develop those solutions. The responsible use of data will be key toward that end. Since pioneering the concept of “data collaboratives” in 2015, The GovLab has studied and experimented with innovative ways to leverage private-sector data to tackle various societal challenges, such as urban mobility, public health, and climate change.

While we have seen an uptake in normative discussions on how data should be shared, little analysis exists of the actual practice. This paper seeks to address that gap by answering the following question: What are the variables and models that determine functional access to private sector data for public good? In Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices, we describe the emerging universe of data collaboratives and develop a typology of six practice areas. Our goal is to provide insight into current applications to accelerate the creation of new data collaboratives. The report outlines dozens of examples, as well as a set of recommendations to enable more systematic, sustainable, and responsible data collaboration….(More)”.

Internet of Water


About: “Water is the essence of life and vital to the well-being of every person, economy, and ecosystem on the planet. But around the globe and here in the United States, water challenges are mounting as climate change, population growth, and other drivers of water stress increase. Many of these challenges are regional in scope and larger than any one organization (or even state), such as the depletion of multi-state aquifers, basin-scale flooding, or the widespread accumulation of nutrients leading to dead zones. Much of the infrastructure built to address these problems decades ago, including our data infrastructure, is struggling to meet these challenges. Much of our water data exists in paper formats unique to the organization collecting the data. Often, these organizations existed long before the personal computer was created (1975) or the internet became mainstream (mid-1990s). As organizations adopted data infrastructure in the late 1990s, it was with the mindset of "normal infrastructure" at the time. It was built to last for decades, rather than to adapt to rapid technological change.

New water data infrastructure, with new technologies that enable data to flow seamlessly between users and generate information for real-time management, is needed to meet our growing water challenges. Decision-makers need accurate, timely data to understand current conditions, identify sustainability problems, illuminate possible solutions, track progress, and adapt along the way. Stakeholders need easy-to-understand metrics of water conditions so they can make sure managers and policymakers protect the environment and the public’s water supplies. The water community needs to continually improve how it manages this complex resource by using data and communicating information to support decision-making. In short, a sustained effort is required to accelerate the development of open data and information systems to support sustainable water resources management. The Internet of Water (IoW) is designed to be just such an effort….(More)”.