Testing Transparency


Paper by Brigham Daniels, Mark Buntaine and Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing the effects of transparency would prove particularly helpful in developing democracies, where transparency norms have not taken hold or have done so only slowly. In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.

Surprisingly, we found that transparency had no significant effect on almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency. A few rays of transparency shining light on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to successfully act as a disinfectant….(More)”.

MEPs chart path for a European approach to Artificial Intelligence


Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to prevent member states from adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm up to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that there already exist bodies that could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for Artificial Intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

The Coronavirus Is Rewriting Our Imaginations


Kim Stanley Robinson at the New Yorker: “…We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view. And now, when those of us who are sheltering in place venture out and see everyone in masks, sharing looks with strangers is a different thing. It’s eye to eye, this knowledge that, although we are practicing social distancing as we need to, we want to be social—we not only want to be social, we’ve got to be social, if we are to survive. It’s a new feeling, this alienation and solidarity at once. It’s the reality of the social; it’s seeing the tangible existence of a society of strangers, all of whom depend on one another to survive. It’s as if the reality of citizenship has smacked us in the face.

As for government: it’s government that listens to science and responds by taking action to save us. Stop to ponder what is now obstructing the performance of that government. Who opposes it?…

There will be enormous pressure to forget this spring and go back to the old ways of experiencing life. And yet forgetting something this big never works. We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?

A structure of feeling is not a free-floating thing. It’s tightly coupled with its corresponding political economy. How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.

It will be hard to make these values durable. Valuing the right things and wanting to keep on valuing them—maybe that’s also part of our new structure of feeling. As is knowing how much work there is to be done. But the spring of 2020 is suggestive of how much, and how quickly, we can change. It’s like a bell ringing to start a race. Off we go—into a new time….(More)”.

Viruses Cross Borders. To Fight Them, Countries Must Let Medical Data Flow, Too


Nigel Cory at ITIF: “If nations could regulate viruses the way many regulate data, there would be no global pandemics. But the sad reality is that, in the midst of the worst global pandemic in living memory, many nations make it unnecessarily complicated and costly, if not illegal, for health data to cross their borders. In so doing, they are hindering critically needed medical progress.

In the COVID-19 crisis, data analytics powered by artificial intelligence (AI) is critical to identifying the exact nature of the pandemic and developing effective treatments. The technology can produce powerful insights and innovations, but only if researchers can aggregate and analyze data from populations around the globe. And that requires data to move across borders as part of international research efforts by private firms, universities, and other research institutions. Yet, some countries, most notably China, are stopping health and genomic data at their borders.

Indeed, despite the significant benefits to companies, citizens, and economies that arise from the ability to easily share data across borders, dozens of countries—across every stage of development—have erected barriers to cross-border data flows. These data-residency requirements strictly confine data within a country’s borders, a concept known as “data localization,” and many countries have especially strict requirements for health data.

China is a noteworthy offender, having created a new digital iron curtain that requires data localization for a range of data types, including health data, as part of its so-called “cyber sovereignty” strategy. A May 2019 State Council regulation requires genomic data to be stored and processed locally by Chinese firms and prohibits foreign organizations from doing so. This is in service of China’s mercantilist strategy to advance its domestic life sciences industry. While there has been collaboration between U.S. and Chinese medical researchers on COVID-19, including on clinical trials for potential treatments, these restrictions mean that such collaboration won’t involve the transfer, aggregation, and analysis of Chinese personal data, which otherwise might help find a treatment or vaccine. If China truly wanted to make amends for blocking critical information during the early stages of the outbreak in Wuhan, then it should abolish this restriction and allow genomic and other health data to cross its borders.

But China is not alone in limiting data flows. Russia requires all personal data, health-related or not, to be stored locally. India’s draft data protection bill permits the government to classify any sensitive personal data as critical personal data and mandate that it be stored and processed only within the country. This would be consistent with recent debates and decisions to require localization for payments data and other types of data. And despite its leading role in pushing for the free flow of data as part of new digital trade agreements, Australia requires genomic and other data attached to personal electronic health records to be stored and processed only within its borders.

Countries also enact de facto barriers to health and genomic data transfers by making it harder and more expensive, if not impractical, for firms to transfer it overseas than to store it locally. For example, South Korea and Turkey require firms to get explicit consent from people to transfer sensitive data like genomic data overseas. Doing this for hundreds or thousands of people adds considerable costs and complexity.

And the European Union’s General Data Protection Regulation encourages data localization as firms feel pressured to store and process personal data within the EU given the restrictions it places on data transfers to many countries. This is in addition to the renewed push for local data storage and processing under the EU’s new data strategy.

Countries rationalize these steps on the basis that health data, particularly genomic data, is sensitive. But requiring health data to be stored locally does little to increase privacy or data security. The confidentiality of data does not depend on which country the information is stored in, only on the measures used to store it securely, such as via encryption, and the policies and procedures the firms follow in storing or analyzing the data. For example, if a nation has limits on the use of genomics data, then domestic organizations using that data face the same restrictions, whether they store the data in the country or outside of it. And if they share the data with other organizations, they must require those organizations, regardless of where they are located, to abide by the home government’s rules.

As such, policymakers need to stop treating health data differently when it comes to cross-border movement, and instead build technical, legal, and ethical protections into both domestic and international data-governance mechanisms, which together allow the responsible sharing and transfer of health and genomic data.

This is clearly possible—and needed. In February 2020, leading health researchers called for an international code of conduct for genomic data following the end of their first-of-its-kind international data-driven research project. The project used a purpose-built cloud service that stored 800 terabytes of genomic data on 2,658 cancer genomes across 13 data centers on three continents. The collaboration and use of cloud computing were transformational in enabling large-scale genomic analysis….(More)”.

Models v. Evidence


Jonathan Fuller at the Boston Review: “COVID-19 has revealed a contest between two competing philosophies of scientific knowledge. To manage the crisis, we must draw on both….The lasting icon of the COVID-19 pandemic will likely be the graphic associated with “flattening the curve.” The image is now familiar: a skewed bell curve measuring coronavirus cases that towers above a horizontal line—the health system’s capacity—only to be flattened by an invisible force representing “non-pharmaceutical interventions” such as school closures, social distancing, and full-on lockdowns.

How do the coronavirus models generating these hypothetical curves square with the evidence? What roles do models and evidence play in a pandemic? Answering these questions requires reconciling two competing philosophies in the science of COVID-19.

To some extent, public health epidemiology and clinical epidemiology are distinct traditions in health care, competing philosophies of scientific knowledge.

In one camp are infectious disease epidemiologists, who work very closely with institutions of public health. They have used a multitude of models to create virtual worlds in which sim viruses wash over sim populations—sometimes unabated, sometimes held back by a virtual dam of social interventions. This deluge of simulated outcomes played a significant role in leading government actors to shut borders as well as doors to schools and businesses. But the hypothetical curves are smooth, while real-world data are rough. Some detractors have questioned whether we have good evidence for the assumptions the models rely on, and even the necessity of the dramatic steps taken to curb the pandemic. Among this camp are several clinical epidemiologists, who typically provide guidance for clinical practice—regarding, for example, the effectiveness of medical interventions—rather than public health.
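To make the modeling camp’s approach concrete, the sketch below shows a minimal compartmental (SIR) simulation of the kind that produces hypothetical epidemic curves, with a contact-rate parameter standing in for non-pharmaceutical interventions. This is an illustration only: the function name `simulate_sir` and every parameter value are hypothetical, and the sketch does not reproduce the IHME model or any model cited in the article.

```python
# Minimal SIR ("susceptible-infected-recovered") simulation, illustrating how
# modelers generate hypothetical epidemic curves under different intervention
# strengths. All parameter values are illustrative, not taken from any real model.

def simulate_sir(beta, gamma=0.1, population=1_000_000, initial_infected=10, days=300):
    """Run a discrete-time SIR model and return daily counts of active infections."""
    s, i, r = population - initial_infected, initial_infected, 0
    curve = []
    for _ in range(days):
        new_infections = beta * s * i / population   # contacts that transmit
        new_recoveries = gamma * i                   # infections that resolve
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        curve.append(i)
    return curve

# "Unabated" spread vs. spread "held back" by interventions that cut the contact rate.
unmitigated = simulate_sir(beta=0.30)
distanced = simulate_sir(beta=0.15)

print(f"Peak active infections, unmitigated: {max(unmitigated):,.0f}")
print(f"Peak active infections, with distancing: {max(distanced):,.0f}")
```

Comparing the two runs reproduces the familiar picture: lowering the contact rate flattens and delays the peak, which is exactly the kind of smooth hypothetical curve the article contrasts with rough real-world data.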

The latter camp has won significant media attention in recent weeks. Bill Gates—whose foundation funds the research behind the most visible outbreak model in the United States, developed by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington—worries that COVID-19 might be a “once-in-a-century pandemic.” A notable detractor from this view is Stanford’s John Ioannidis, a clinical epidemiologist, meta-researcher, and reliable skeptic who has openly wondered whether the coronavirus pandemic might rather be a “once-in-a-century evidence fiasco.” He argues that better data are needed to justify the drastic measures undertaken to contain the pandemic in the United States and elsewhere.

Ioannidis claims, in particular, that our data about the pandemic are unreliable, leading to exaggerated estimates of risk. He also points to a systematic review published in 2011 of the evidence regarding physical interventions that aim to reduce the spread of respiratory viruses, worrying that the available evidence is nonrandomized and prone to bias. (A systematic review specific to COVID-19 has now been published; it concurs that the quality of evidence is “low” to “very low” but nonetheless supports the use of quarantine and other public health measures.) According to Ioannidis, the current steps we are taking are “non-evidence-based.”…(More)”.

10 transformative data questions related to gender


Press Release: “As part of efforts to identify priorities across sectors in which data and data science could make a difference, The Governance Lab (The GovLab) at the New York University Tandon School of Engineering has partnered with Data2X, the gender data alliance housed at the United Nations Foundation, to release ten pressing questions on gender that experts have determined can be answered using data. Members of the public are invited to share their views and vote to help develop a data agenda on gender.

The questions are part of the 100 Questions Initiative, an effort to identify the most important societal questions that can be answered by data. The project relies on an innovative process of sourcing “bilinguals,” individuals with both subject-matter and data expertise, who in this instance provided questions related to gender they considered to be urgent and answerable. The results span issues of labor, health, climate change, and gender-based violence.

Through the initiative’s new online platform, anyone can now vote on what they consider to be the most pressing, data-related questions about gender that researchers and institutions should prioritize. Through voting, the public can steer the conversation and determine which topics should be the subject of data collaboratives, an emerging form of collaboration that allows organizations from different sectors to exchange data to create public value.

The GovLab has conducted significant research on the value and practice of data collaboratives, and its research shows that inter-sectoral collaboration can both increase access to data and unleash the potential of that data to serve the public good.

Data2X supported the 100 Questions Initiative by providing expertise and connecting The GovLab with relevant communities, events, and resources. The initiative helped inform Data2X’s “Big Data, Big Impact? Towards Gender-Sensitive Data Systems” report, which identifies gaps of information on gender equality across key policy domains.

“Asking the right questions is a critical first step in fostering data production and encouraging data use to truly meet the unique experiences and needs of women and girls,” said Emily Courey Pryor, executive director of Data2X. “Obtaining public feedback is a crucial way to identify the most urgent questions — and to ultimately incentivize investment in gender data collection and use to find the answers.”

Said Stefaan Verhulst, co-founder and chief research and development officer at The GovLab, “Sourcing and prioritizing questions related to gender can inform resource and funding allocation to address gender data gaps and support projects with the greatest potential impact. This way, we can be confident about solutions that address the challenges facing women and girls.”…(More)”.

The institutionalization of digital public health: lessons learned from the COVID-19 app


Paper by Ciro Cattuto and Alessandro Spina: “Amid the outbreak of the SARS-CoV-2 pandemic, there has been a call to use innovative digital tools for the purpose of protecting public health. There are a number of proposals to embed digital solutions into the regulatory strategies adopted by public authorities to control the spread of the coronavirus more effectively. They range from algorithms to detect population movements by using telecommunication data to the use of artificial intelligence and high-performance computing power to detect patterns in the spread of the virus. However, the use of a mobile phone application for contact tracing is certainly the most popular.
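As a rough illustration of the decentralized, proximity-based design that many contact-tracing proposals follow, the sketch below shows how a phone might match locally stored ephemeral identifiers against identifiers later published by infected users. It is a simplified assumption-laden sketch: the `Phone` class, method names, and matching logic are hypothetical and do not represent any real app, protocol, or API discussed in the paper.

```python
# Illustrative sketch of decentralized proximity-based contact tracing:
# each phone keeps the rotating ephemeral IDs it has observed nearby and
# periodically checks them against IDs voluntarily published by users who
# tested positive. All names and details here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Phone:
    observed_ids: set[str] = field(default_factory=set)  # ephemeral IDs heard over Bluetooth

    def record_encounter(self, ephemeral_id: str) -> None:
        """Store an ephemeral ID broadcast by a nearby device."""
        self.observed_ids.add(ephemeral_id)

    def check_exposure(self, published_positive_ids: set[str]) -> bool:
        """Return True if any observed ID matches an ID published by an infected user."""
        return not self.observed_ids.isdisjoint(published_positive_ids)

# Example: the phone heard IDs "a1" and "b2"; "b2" later appears on the positive list.
phone = Phone()
phone.record_encounter("a1")
phone.record_encounter("b2")
print(phone.check_exposure({"b2", "c3"}))  # True -> user is notified of possible exposure
```

The design choice at stake in the privacy debate is visible even in this toy version: matching happens on the device against published identifiers, rather than by uploading a user’s full contact history to a central authority.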

These proposals carry a very powerful persuasive force and have apparently contributed to the success of public health responses in a few Asian countries. But they also raise questions and criticisms, in particular with regard to the risks that these novel digital surveillance systems pose for privacy and, in the long term, for our democracies.

With this short paper, we would like to describe the pattern that has led to the institutionalization of digital tools for public health purposes. By tracing their origins to “digital epidemiology”, an approach that originated in the early 2010s, we show that, while there is limited experimental knowledge on the use of digital tools for tracking disease, this is the first time that policy-makers have introduced them into the set of non-clinical emergency strategies for responding to a major public health crisis….(More)”

Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives


Book by Philip N. Howard: “Artificially intelligent “bot” accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. With massive amounts of social media and public polling data, and in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these “lie machines” apart.
 
Lie Machines is full of riveting behind-the-scenes stories from the world’s biggest and most damagingly successful misinformation initiatives—including those used in Brexit and U.S. elections. Howard not only shows how these campaigns evolved from older propaganda operations but also exposes their new powers, gives us insight into their effectiveness, and explains how to shut them down…(More)”.

Using crowdsourcing for a safer society: When the crowd rules


Paper by Enrique Estellés-Arolas: “Neighbours sharing information about robberies in their district through social networking platforms, citizens and volunteers posting about the irregularities of political elections on the Internet, and internauts trying to identify a suspect of a crime: in all these situations, people who share different degrees of relationship collaborate through the Internet and other technologies to try to help with or solve an offence.

The crowd, which is sometimes seen as a threat, in these cases becomes an invaluable resource that can complement law enforcement through collective intelligence. Owing to the increasing number of such initiatives, this article conducts a systematic review of the literature to identify the elements that characterize these initiatives and the conditions that make them work successfully….(More)”.

Responsible Data Toolkit


Andrew Young at The GovLab: “The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to share a set of user-friendly tools to support organizations and practitioners seeking to operationalize the RD4C Principles. These principles—Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle—are especially important in the current moment, as actors around the world are taking a data-driven approach to the fight against COVID-19.

The initial components of the RD4C Toolkit are:

The RD4C Data Ecosystem Mapping Tool is intended to help users identify the systems generating data about children and the key components of those systems. After using this tool, users will be positioned to understand the breadth of data they generate and hold about children; assess data systems’ redundancies or gaps; identify opportunities for responsible data use; and achieve other insights.

The RD4C Decision Provenance Mapping methodology provides a way for actors designing or assessing data investments for children to identify key decision points and determine which internal and external parties influence those decision points. This distillation can help users to pinpoint any gaps and develop strategies for improving decision-making processes and advancing more professionally accountable data practices.

The RD4C Opportunity and Risk Diagnostic provides organizations with a way to take stock of the RD4C principles and how they might be realized as an organization reviews a data project or system. Its high-level questions and prompts are intended to help users identify areas in need of attention and to strategize next steps for ensuring more responsible handling of data for and about children across their organization.

Finally, the Data for Children Collaborative with UNICEF developed an Ethical Assessment that “forms part of [their] safe data ecosystem, alongside data management and data protection policies and practices.” The tool reflects the RD4C Principles and aims to “provide an opportunity for project teams to reflect on the material consequences of their actions, and how their work will have real impacts on children’s lives.”

RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles. Last month we published the RD4C Case Studies, which analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. The case studies are: Romania’s The Aurora Project, Childline Kenya, and Afghanistan’s Nutrition Online Database.

To learn more about Responsible Data for Children, visit rd4c.org or contact rd4c [at] thegovlab.org. To join the RD4C conversation and be alerted to future releases, subscribe at this link.”