System-wide Roadmap for Innovating UN Data and Statistics


Roadmap by the United Nations System: “Since 2018, the Secretary-General has pursued an ambitious agenda to prepare the UN System for the challenges of the 21st century. In lockstep with other structural UN reforms, he has launched a portfolio of initiatives through the CEB to help transform system-wide approaches to new technologies, innovation and data. Driven by the urgency and ambition of the “Decade of Action”, these initiatives are designed to nurture cross-cutting capabilities the UN System will need to deliver better “for people and planet”. Unlocking data and harnessing the potential of statistics will be critical to the success of UN reform.

Recognizing that data are a strategic asset for the UN System, the UN Secretary-General’s overarching Data Strategy sets out a vision for a “data ecosystem that maximizes the value of our data assets for our organizations and the stakeholders we serve”, including high-level objectives, principles, core workstreams and concrete system-wide data initiatives. The strategy signals that improving how we collect, manage, use and share data should be a crosscutting strategic concern: across all pillars of the UN System, across programmes and operations, and across all levels of our organizations.

The System-wide Roadmap for Innovating UN Data and Statistics contributes to the overall objectives of the Secretary-General’s Data Strategy, which constitutes a framework supporting the Roadmap as a priority initiative. The two strategic plans converge around a vision that recognizes the power of data and stimulates the United Nations to embrace a more coherent and modern approach to data…(More)”.

Removing the pump handle: Stewarding data at times of public health emergency


Reema Patel at Significance: “There is a saying, incorrectly attributed to Mark Twain, that states: “History never repeats itself, but it rhymes”. Seeking to understand the implications of the current crisis for the effective use of data, I’ve drawn on the nineteenth-century cholera outbreak in London’s Soho to identify some “rhyming patterns” that might inform our approaches to data use and governance at this time of public health crisis.

Where better to begin than with the work of Victorian pioneer John Snow? In 1854, Snow’s use of a dot map to illustrate clusters of cholera cases around public water pumps, and of statistics to establish the connection between the quality of water sources and cholera outbreaks, led to a breakthrough in public health interventions – and, famously, the removal of the handle of a water pump in Broad Street.

Data is vital

We owe a lot to Snow, especially now. His example teaches us that data has a central role to play in saving lives, and that the effective use of (and access to) data is critical for enabling timely responses to public health emergencies.

Take, for instance, transport app CityMapper’s rapid redeployment of its aggregated transport data. In the early days of the Covid-19 pandemic, this formed part of an analysis of compliance with social distancing restrictions across a range of European cities. There is also the US-based health weather map, which uses anonymised and aggregated data to visualise fever, specifically influenza-like illnesses. This data helped model early indications of where, and how quickly, Covid-19 was spreading….
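
As a loose illustration of what “anonymised and aggregated” can mean in practice, here is a minimal sketch; the field names and the suppression threshold are assumptions for illustration, not CityMapper’s or the health weather map’s actual pipeline:

```python
from collections import Counter

# Hypothetical sketch: individual symptom reports are reduced to coarse
# regional counts before sharing, and small cells are suppressed so that
# no individual record can be singled out.
MIN_CELL_SIZE = 50  # assumed suppression threshold

def aggregate_reports(reports):
    """reports: iterable of dicts like {"region": "Soho", "fever": True}."""
    counts = Counter(r["region"] for r in reports if r.get("fever"))
    # Drop regions with too few reports to protect privacy.
    return {region: n for region, n in counts.items() if n >= MIN_CELL_SIZE}

sample = [{"region": "Soho", "fever": True} for _ in range(120)]
sample += [{"region": "Mayfair", "fever": True} for _ in range(7)]
print(aggregate_reports(sample))  # {'Soho': 120}; the small Mayfair cell is suppressed
```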

Ethics and human rights still matter

As the current crisis evolves, many have expressed concern that the pandemic will be used to justify the rapid roll out of surveillance technologies that do not meet ethical and human rights standards, and that this will be done in the name of the “public good”. Examples of these technologies include symptom- and contact-tracing applications. Privacy experts are also increasingly concerned that governments will be trading off more personal data than is necessary or proportionate to respond to the public health crisis.

Many ethical and human rights considerations (including those listed at the bottom of this piece) are at risk of being overlooked at this time of emergency, and governments would be wise not to press ahead regardless, ignoring legitimate concerns about rights and standards. Instead, policymakers should begin to address these concerns by asking how we can prepare (now and in future) to establish clear and trusted boundaries for the use of data (personal and non-personal) in such crises.

Democratic states in Europe and the US have not, in recent memory, prioritised infrastructures and systems for a crisis of this scale – and this has contributed to our current predicament. Contrast this with Singapore, which suffered outbreaks of SARS and H1N1, and channelled this experience into implementing pandemic preparedness measures.

We cannot undo the past, but we can begin planning and preparing constructively for the future, and that means strengthening global coordination and finding mechanisms to share learning internationally. Getting the right data infrastructure in place has a central role to play in addressing ethical and human rights concerns around the use of data….(More)”.

Open science: after the COVID-19 pandemic there can be no return to closed working


Article by Virginia Barbour and Martin Borchert: “In the few months since the first case of COVID-19 was identified, the underlying cause has been isolated, its symptoms agreed on, its genome sequenced, diagnostic tests developed, and potential treatments and vaccines are on the horizon. The astonishingly short time frame of these discoveries has only happened through a global open science effort.

The principles and practices of open science are what underpin good research—research that is reliable, reproducible, and has the broadest possible impact. Open science specifically requires the application of principles and practices that make research FAIR (Findable, Accessible, Interoperable, Reusable): researchers make their data and preliminary publications openly accessible, and publishers then make the peer-reviewed research immediately and freely available to all. The rapid dissemination of research—through preprints in particular, as well as journal articles—stands in contrast to what happened in the 2003 SARS outbreak, when the majority of research on the disease was published well after the outbreak had ended.
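
As a loose illustration of the FAIR principles, a minimal, hypothetical dataset record might pair a persistent identifier, a standard format, and an open licence; the field names and values below are illustrative, not any repository’s actual schema:

```python
# Hypothetical metadata record illustrating the FAIR principles;
# all identifiers, URLs, and field names are illustrative only.
dataset = {
    "identifier": "doi:10.xxxx/example",    # Findable: persistent identifier
    "access_url": "https://repo.example.org/sarscov2-genomes",  # Accessible: open endpoint
    "format": "FASTA",                      # Interoperable: standard, open format
    "license": "CC-BY-4.0",                 # Reusable: clear terms of use
    "description": "Assembled SARS-CoV-2 genome sequences with collection dates",
}
```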

Many outside observers might reasonably assume, given the digital world we all now inhabit, that science usually works like this. Yet this is very far from the norm for most research. Science is not something that just happens in response to emergencies or specific events—it is an ongoing, largely publicly funded, national and international enterprise….

Sharing of the underlying data that journal articles are based on is not yet a universal requirement for publication, nor are researchers usually recognised for data sharing.

[Figure: There are many benefits associated with an open science model. Image adapted from Gaelen Pinnock/UCT; CC-BY-SA 4.0.]

Once published, even access to research is not seamless. The majority of academic journals still require a subscription for access. Subscriptions are expensive; Australian universities alone currently spend more than $300 million per year on subscriptions to academic journals. Access to academic journals also varies between universities, depending on their library budgets. The main markets for subscriptions to the commercial journal literature are higher education and health, with some access in government and the commercial sector….(More)”.

The Big Failure of Small Government


Mariana Mazzucato and Giulio Quaggiotto at Project Syndicate: “Decades of privatization, outsourcing, and budget cuts in the name of “efficiency” have significantly hampered many governments’ responses to the COVID-19 crisis. At the same time, successful responses by other governments have shown that investments in core public-sector capabilities make all the difference in times of emergency. The countries that have handled the crisis well are those where the state maintains a productive relationship with value creators in society, by investing in critical capacities and designing private-sector contracts to serve the public interest.

From the United States and the United Kingdom to Europe, Japan, and South Africa, governments are investing billions – and, in some cases, trillions – of dollars to shore up national economies. Yet, if there is one thing we learned from the 2008 financial crisis, it is that quality matters at least as much as quantity. If the money falls on empty, weak, or poorly managed structures, it will have little effect, and may simply be sucked into the financial sector. Too many lives are at stake to repeat past errors.

Unfortunately, for the last half-century, the prevailing political message in many countries has been that governments cannot – and therefore should not – actually govern. Politicians, business leaders, and pundits have long relied on a management creed that focuses obsessively on static measures of efficiency to justify spending cuts, privatization, and outsourcing.

As a result, governments now have fewer options for responding to the crisis, which may be why some are now desperately clinging to the unrealistic hope of technological panaceas such as artificial intelligence or contact-tracing apps. With less investment in public capacity has come a loss of institutional memory (as the UK’s government has discovered) and increased dependence on private consulting firms, which have raked in billions. Not surprisingly, morale among public-sector employees has plunged in recent years.

Consider two core government responsibilities during the COVID-19 crisis: public health and the digital realm. In 2018 alone, the UK government outsourced health contracts worth £9.2 billion ($11.2 billion), putting 84% of beds in care homes in the hands of private-sector operators (including private equity firms). Making matters worse, since 2015, the UK’s National Health Service has endured £1 billion in budget cuts.

Outsourcing by itself is not the problem. But the outsourcing of critical state capacities clearly is, especially when the resulting public-private “partnerships” are not designed to serve the public interest. Ironically, some governments have outsourced so eagerly that they have undermined their own ability to structure outsourcing contracts. After a 12-year effort to spur the private sector to develop low-cost ventilators, the US government is now learning that outsourcing is not a reliable way to ensure emergency access to medical equipment….(More)”.

Big data, privacy and COVID-19 – learning from humanitarian expertise in data protection


Andrej Zwitter & Oskar J. Gstrein at the Journal of International Humanitarian Action: “The use of location data to control the coronavirus pandemic can be fruitful and might improve the ability of governments and research institutions to combat the threat more quickly. It is important to note that location data is not the only useful data that can be used to curb the current crisis. Genetic data can be relevant for AI-enhanced searches for vaccines, and monitoring online communication on social media might be helpful to keep an eye on peace and security (Taulli n.d.). However, the use of such large amounts of data comes at a price for individual freedom and collective autonomy. The risks of the use of such data should ideally be mitigated through dedicated legal frameworks which describe the purpose and objectives of data use, its collection, analysis, storage and sharing, as well as the erasure of ‘raw’ data once insights have been extracted. In the absence of such clear and democratically legitimized norms, one can only resort to fundamental rights provisions such as Article 8 paragraph 2 of the ECHR, which reminds us that any infringement of rights such as privacy needs to be in accordance with law, necessary in a democratic society, in pursuit of a legitimate objective, and proportionate in its application.

However, as shown above, legal frameworks, including human rights standards, are currently not capable of effectively ensuring data protection, since they focus too much on the individual as the point of departure. Hence, we submit that currently applicable guidelines and standards for responsible data use in the humanitarian sector should also be fully applicable to corporate, academic and state efforts which are currently enacted to curb the COVID-19 crisis globally. Instead of ‘re-calibrating’ the expectations of individuals on their own privacy and collective autonomy, the requirements for the use of data should be broader and more comprehensive. Applicable principles and standards as developed by OCHA, the 510 project of the Dutch Red Cross, or by academic initiatives such as the Signal Code are valid minimum standards during a humanitarian crisis. Hence, they are also applicable minimum standards during the current pandemic.

Core findings that can be extracted from these guidelines and standards for practical implementation in data-driven responses to COVID-19 are:

  • data sensitivity is highly contextual; the same data can be sensitive in one context and innocuous in another. Location data during the current pandemic might be very useful for epidemiological analysis. However, if (ab-)used to re-calibrate political power relations, the same data are open to misuse. Hence, any party supplying data or data analysis needs to check whether the data and insights could be misused in the context in which they are presented.
  • privacy and data protection are important values; they do not disappear during a crisis. Nevertheless, they have to be weighed against respective benefits and risks.
  • data breaches are inevitable; with time (t) approaching infinity, the chance of any system being hacked or becoming insecure approaches 100%. Hence, it is not a question of whether, but when. Therefore, organisations have to prepare sound data retention and deletion policies (a minimal sketch of such a policy follows this list).
  • data ethics is an obligation to provide high quality analysis; using machine learning and big data might be appealing for the moment, but the quality of source data might be low, and results might be unreliable, or even harmful. Biases in incomplete datasets, algorithms and human users are abundant and widely discussed. We must not forget that in times of crisis, the risk of bias is more pronounced, and more problematic due to the vulnerability of data subjects and groups. Therefore, working to the highest standards of data processing and analysis is an ethical obligation.
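
A minimal sketch of the retention-and-deletion point above, assuming a simple in-memory record store and an illustrative 30-day window:

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch, assuming a simple record store: raw records are purged once
# their retention window has passed, on the premise that any system will
# eventually be breached, and data that no longer exists cannot leak.
RETENTION = timedelta(days=30)  # assumed policy window for raw location data

def purge_expired(records, now=None):
    """records: list of dicts like {"id": 1, "collected_at": aware datetime}."""
    now = now or datetime.now(timezone.utc)
    kept = [r for r in records if now - r["collected_at"] < RETENTION]
    return kept, len(records) - len(kept)

records = [
    {"id": 1, "collected_at": datetime.now(timezone.utc) - timedelta(days=45)},
    {"id": 2, "collected_at": datetime.now(timezone.utc) - timedelta(days=3)},
]
kept, deleted = purge_expired(records)
print(f"kept {len(kept)}, deleted {deleted}")  # kept 1, deleted 1
```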

The adherence to these principles is particularly relevant in times of crisis such as now, when they mark the difference between societies that focus on control and repression on the one hand, and those that believe in freedom and autonomy on the other. Eventually, we will need to think about including data policies in legal frameworks for state-of-emergency regulations, and coordinate with corporate stakeholders as well as private organisations on how best to deal with such crises. Data-driven practices have to be used in a responsible manner. Furthermore, it will be important to observe whether the data practices and surveillance assemblages introduced under current circumstances will be rolled back to the status quo ante when we return to normalcy. If not, our rights will be hollowed out, waiting only for the next crisis to render them irrelevant….(More)”.

Testing Transparency


Paper by Brigham Daniels, Mark Buntaine and Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing the effects of transparency would prove particularly helpful in developing democracies where transparency norms have not taken hold, or have done so only slowly. In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.

Surprisingly, we found transparency had no significant effect in almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency. A few rays of transparency shining light on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to successfully act as a disinfectant….(More)”.

MEPs chart path for a European approach to Artificial Intelligence


Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to prevent member states from adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that bodies already exist whose remits could be extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions of Artificial Intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

The Coronavirus Is Rewriting Our Imaginations


Kim Stanley Robinson at the New Yorker: “…We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view. And now, when those of us who are sheltering in place venture out and see everyone in masks, sharing looks with strangers is a different thing. It’s eye to eye, this knowledge that, although we are practicing social distancing as we need to, we want to be social—we not only want to be social, we’ve got to be social, if we are to survive. It’s a new feeling, this alienation and solidarity at once. It’s the reality of the social; it’s seeing the tangible existence of a society of strangers, all of whom depend on one another to survive. It’s as if the reality of citizenship has smacked us in the face.

As for government: it’s government that listens to science and responds by taking action to save us. Stop to ponder what is now obstructing the performance of that government. Who opposes it?…

There will be enormous pressure to forget this spring and go back to the old ways of experiencing life. And yet forgetting something this big never works. We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?

A structure of feeling is not a free-floating thing. It’s tightly coupled with its corresponding political economy. How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.

It will be hard to make these values durable. Valuing the right things and wanting to keep on valuing them—maybe that’s also part of our new structure of feeling. As is knowing how much work there is to be done. But the spring of 2020 is suggestive of how much, and how quickly, we can change. It’s like a bell ringing to start a race. Off we go—into a new time….(More)”.

Viruses Cross Borders. To Fight Them, Countries Must Let Medical Data Flow, Too


Nigel Cory at ITIF: “If nations could regulate viruses the way many regulate data, there would be no global pandemics. But the sad reality is that, in the midst of the worst global pandemic in living memory, many nations make it unnecessarily complicated and costly, if not illegal, for health data to cross their borders. In so doing, they are hindering critically needed medical progress.

In the COVID-19 crisis, data analytics powered by artificial intelligence (AI) is critical to identifying the exact nature of the pandemic and developing effective treatments. The technology can produce powerful insights and innovations, but only if researchers can aggregate and analyze data from populations around the globe. And that requires data to move across borders as part of international research efforts by private firms, universities, and other research institutions. Yet, some countries, most notably China, are stopping health and genomic data at their borders.

Indeed, despite the significant benefits to companies, citizens, and economies that arise from the ability to easily share data across borders, dozens of countries—across every stage of development—have erected barriers to cross-border data flows. These data-residency requirements strictly confine data within a country’s borders, a concept known as “data localization,” and many countries have especially strict requirements for health data.

China is a noteworthy offender, having created a new digital iron curtain that requires data localization for a range of data types, including health data, as part of its so-called “cyber sovereignty” strategy. A May 2019 State Council regulation requires genomic data to be stored and processed locally by Chinese firms and prohibits foreign organizations from doing so. This is in service of China’s mercantilist strategy to advance its domestic life sciences industry. While there has been collaboration between U.S. and Chinese medical researchers on COVID-19, including on clinical trials for potential treatments, these restrictions mean that it won’t involve the transfer, aggregation, and analysis of Chinese personal data, which otherwise might help find a treatment or vaccine. If China truly wanted to make amends for blocking critical information during the early stages of the outbreak in Wuhan, then it should abolish this restriction and allow genomic and other health data to cross its borders.

But China is not alone in limiting data flows. Russia requires all personal data, health-related or not, to be stored locally. India’s draft data protection bill permits the government to classify any sensitive personal data as critical personal data and mandate that it be stored and processed only within the country. This would be consistent with recent debates and decisions to require localization for payments data and other types of data. And despite its leading role in pushing for the free flow of data as part of new digital trade agreements, Australia requires genomic and other data attached to personal electronic health records to be stored and processed only within its borders.

Countries also enact de facto barriers to health and genomic data transfers by making it harder and more expensive, if not impractical, for firms to transfer it overseas than to store it locally. For example, South Korea and Turkey require firms to get explicit consent from people to transfer sensitive data like genomic data overseas. Doing this for hundreds or thousands of people adds considerable costs and complexity.

And the European Union’s General Data Protection Regulation encourages data localization as firms feel pressured to store and process personal data within the EU given the restrictions it places on data transfers to many countries. This is in addition to the renewed push for local data storage and processing under the EU’s new data strategy.

Countries rationalize these steps on the basis that health data, particularly genomic data, is sensitive. But requiring health data to be stored locally does little to increase privacy or data security. The confidentiality of data does not depend on which country the information is stored in, only on the measures used to store it securely, such as via encryption, and the policies and procedures the firms follow in storing or analyzing the data. For example, if a nation has limits on the use of genomics data, then domestic organizations using that data face the same restrictions, whether they store the data in the country or outside of it. And if they share the data with other organizations, they must require those organizations, regardless of where they are located, to abide by the home government’s rules.
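
To make the point concrete, here is a minimal sketch using Python’s third-party cryptography library (the choice of this particular library is an assumption for illustration): the ciphertext is equally unreadable in any data center, so confidentiality follows from the encryption and key management rather than from where the bytes sit.

```python
# pip install cryptography  (third-party library, assumed for illustration)
from cryptography.fernet import Fernet

# The same ciphertext is equally opaque whether stored domestically or
# abroad; only the holder of the key can recover the plaintext.
key = Fernet.generate_key()      # in practice, kept in a key-management system
cipher = Fernet(key)

record = b'{"sample_id": "G-0042", "note": "hypothetical genomic record"}'
token = cipher.encrypt(record)   # safe to store or transfer anywhere
print(cipher.decrypt(token) == record)  # True: only the key holder can read it
```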

As such, policymakers need to stop treating health data differently when it comes to cross-border movement, and instead build technical, legal, and ethical protections into both domestic and international data-governance mechanisms, which together allow the responsible sharing and transfer of health and genomic data.

This is clearly possible—and needed. In February 2020, leading health researchers called for an international code of conduct for genomic data following the end of their first-of-its-kind international data-driven research project. The project used a purpose-built cloud service that stored 800 terabytes of genomic data on 2,658 cancer genomes across 13 data centers on three continents. The collaboration and use of cloud computing were transformational in enabling large-scale genomic analysis….(More)”.

Models v. Evidence


Jonathan Fuller at the Boston Review: “COVID-19 has revealed a contest between two competing philosophies of scientific knowledge. To manage the crisis, we must draw on both….The lasting icon of the COVID-19 pandemic will likely be the graphic associated with “flattening the curve.” The image is now familiar: a skewed bell curve measuring coronavirus cases that towers above a horizontal line—the health system’s capacity—only to be flattened by an invisible force representing “non-pharmaceutical interventions” such as school closures, social distancing, and full-on lockdowns.

How do the coronavirus models generating these hypothetical curves square with the evidence? What roles do models and evidence play in a pandemic? Answering these questions requires reconciling two competing philosophies in the science of COVID-19.

To some extent, public health epidemiology and clinical epidemiology are distinct traditions in health care, competing philosophies of scientific knowledge.

In one camp are infectious disease epidemiologists, who work very closely with institutions of public health. They have used a multitude of models to create virtual worlds in which sim viruses wash over sim populations—sometimes unabated, sometimes held back by a virtual dam of social interventions. This deluge of simulated outcomes played a significant role in leading government actors to shut borders as well as doors to schools and businesses. But the hypothetical curves are smooth, while real-world data are rough. Some detractors have questioned whether we have good evidence for the assumptions the models rely on, and even the necessity of the dramatic steps taken to curb the pandemic. Among this camp are several clinical epidemiologists, who typically provide guidance for clinical practice—regarding, for example, the effectiveness of medical interventions—rather than public health.
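
To make the modelers’ approach concrete, here is a minimal, illustrative SIR (susceptible-infected-recovered) sketch with assumed parameters; it is not any group’s actual model, but it shows how an intervention that reduces transmission flattens the hypothetical curve:

```python
# Minimal discrete-time SIR sketch with assumed, illustrative parameters.
def peak_infected(beta, gamma=0.1, days=300, n=1_000_000, i0=100):
    """Run a daily-step SIR model and return the peak number infected."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

unmitigated = peak_infected(beta=0.30)  # basic reproduction number ~3.0
distanced = peak_infected(beta=0.15)    # transmission halved by interventions
print(f"peak infected, unmitigated: {unmitigated:,.0f}")
print(f"peak infected, with distancing: {distanced:,.0f}")
```

The same virus, run twice: the second run peaks later and far lower, which is the whole logic of “flattening the curve”.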

The latter camp has won significant media attention in recent weeks. Bill Gates—whose foundation funds the research behind the most visible outbreak model in the United States, developed by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington—worries that COVID-19 might be a “once-in-a-century pandemic.” A notable detractor from this view is Stanford’s John Ioannidis, a clinical epidemiologist, meta-researcher, and reliable skeptic who has openly wondered whether the coronavirus pandemic might rather be a “once-in-a-century evidence fiasco.” He argues that better data are needed to justify the drastic measures undertaken to contain the pandemic in the United States and elsewhere.

Ioannidis claims, in particular, that our data about the pandemic are unreliable, leading to exaggerated estimates of risk. He also points to a systematic review published in 2011 of the evidence regarding physical interventions that aim to reduce the spread of respiratory viruses, worrying that the available evidence is nonrandomized and prone to bias. (A systematic review specific to COVID-19 has now been published; it concurs that the quality of evidence is “low” to “very low” but nonetheless supports the use of quarantine and other public health measures.) According to Ioannidis, the current steps we are taking are “non-evidence-based.”…(More)”.