Privacy Is Power: How Tech Policy Can Bolster Democracy


Essay by Andrew Imbrie, Daniel Baer, Andrew Trask, Anna Puglisi, Erik Brattberg, and Helen Toner: “…History is rarely forgiving, but as we adopt the next phase of digital tools, policymakers can avoid the errors of the past. Privacy-enhancing technologies, or PETs, are a collection of technologies with applications ranging from improved medical diagnostics to secure voting systems and messaging platforms. PETs allow researchers to harness big data to solve problems affecting billions of people while also protecting privacy. …

PETs are ripe for coordination among democratic allies and partners, offering a way for them to jointly develop standards and practical applications that benefit the public good. At an AI summit last July, U.S. Secretary of State Antony Blinken noted the United States’ interest in “increasing access to shared public data sets for AI training and testing, while still preserving privacy,” and National Security Adviser Jake Sullivan pointed to PETs as a promising area “to overcome data privacy challenges while still delivering the value of big data.” Given China’s advantages in scale, the United States and like-minded partners should foster emerging technologies that play to their strengths in medical research and discovery, energy innovation, trade facilitation, and reform around money laundering. Driving innovation and collaboration within and across democracies is important not only because it will help ensure those societies’ success but also because there will be a first-mover advantage in the adoption of PETs for governing the world’s private data–sharing networks.

Accelerating the development of PETs for the public good will require an international approach. Democratic governments will not be the trendsetters on PETs; instead, policymakers in these governments should focus on nurturing the ecosystems these technologies need to flourish. The role for policymakers is not to decide the fate of specific protocols or techniques but rather to foster a conducive environment in which researchers can experiment widely and innovate responsibly.

First, democracies should identify shared priorities and promote basic research to mature the technological foundations of PETs. The underlying technologies require greater investment in algorithmic development and in hardware to optimize chips and mitigate the costs of network overhead. To support the computational requirements of PETs, for example, the National Science Foundation could create an interface through CloudBank and provide cloud compute credits to researchers who lack access to these resources. The United States could also help incubate an international network of research universities collaborating on these technologies.

Second, science-funding agencies in democracies should host competitions to incentivize new PET protocols and standards—the collaboration between the United States and the United Kingdom announced in early December is a good example. The goal should be to create free, open-source protocols and avoid the fragmentation of the market and the proliferation of proprietary standards. The National Institute of Standards and Technology and similar bodies should develop standards and measurement tools for PETs; governments and companies should form public-private partnerships to fund open-source protocols over the long term. Open-source protocols are especially important in the early days of PET development, because closed-source PET implementations by profit-seeking actors can be leveraged to build data monopolies. Imagine, for example, a scenario in which all U.S. cancer data is controlled by a single company because all the hospitals run its proprietary software, and providers have to become customers to join the network…(More)”.
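The essay does not single out any particular technique, but a small, hedged sketch may help make the idea concrete. The example below is purely illustrative and is not something the authors propose: it applies the Laplace mechanism from differential privacy, one member of the PET family, to release an approximate count from a sensitive dataset. The records, query, and epsilon value are invented for the example.

```python
# Illustrative sketch of one common PET technique (differential privacy).
# The records, query, and epsilon value are invented for this example.
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Release a noisy count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon limits what the released number reveals about any
    single individual in `records`.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: share an approximate count of older patients in a hospital
# dataset without exposing whether any particular person is in it.
patients = [{"age": 71}, {"age": 42}, {"age": 67}, {"age": 58}]
print(dp_count(patients, lambda r: r["age"] >= 65, epsilon=0.5))
```

Other PETs, such as federated learning, secure multiparty computation, and homomorphic encryption, pursue the same goal at larger scale: useful aggregates leave the network while the underlying personal data does not.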

The Attack of Zombie Science


Article by Natalia Pasternak, Carlos Orsi, Aaron F. Mertz, & Stuart Firestein: “When we think about how science is distorted, we usually think about concepts that have ample currency in public discourse, such as pseudoscience and junk science. Practices like astrology and homeopathy come wrapped in scientific concepts and jargon that can’t meet the methodological requirements of actual sciences. During the COVID-19 pandemic, pseudoscience has had a field day. Bleach, anyone? Bear bile? Yet the pandemic has brought a newer, more subtle form of distortion to light. To the philosophy of science, we humbly submit a new concept: “zombie science.”

We think of zombie science as mindless science. It goes through the motions of scientific research without a real research question to answer; it follows all the correct methodology, but it doesn’t aspire to advance knowledge in the field. Practically all the information about hydroxychloroquine during the pandemic falls into that category, including not just the living dead found in preprint repositories but also papers published in journals that ought to have been caught by a more discerning eye. Journals, after all, invest their reputation in every piece they choose to publish. And every investment in useless science is a net loss.

From a social and historical standpoint, it seems almost inevitable that the penchant for productivism in the academic and scientific world would end up encouraging zombie science. If those who do not publish perish, then publishing—even nonsense or irrelevancies—is a matter of life or death. The peer-review process and the criteria for editorial importance are filters, for sure, but they are limited. Not only do they get clogged and overwhelmed by excess submissions, but they also have to contend with the weaknesses of the human condition, including feelings of personal loyalty, prejudice, and vanity. Additionally, these filters fail, as the proliferation of predatory journals shows us all too well…(More)”.

Making data for good better


Article by Caroline Buckee, Satchit Balsari, and Andrew Schroeder: “…Despite the long-standing excitement about the potential of digital tools, Big Data, and AI to transform our lives, these innovations, with some exceptions, have so far had little impact on the greatest public health emergency of our time.

Attempts to use digital data streams to rapidly produce public health insights that were not only relevant for local contexts in cities and countries around the world but also available to the decision makers who needed them exposed enormous gaps across the translational pipeline. The insights from novel data streams that could help drive precise, impactful health programs and bring effective aid to communities found limited use among public health and emergency response systems. We share here our experience from the COVID-19 Mobility Data Network (CMDN), now Crisis Ready (crisisready.io), a global collaboration of researchers, mostly infectious disease epidemiologists and data scientists, who served as trusted intermediaries between technology companies willing to share vast amounts of digital data and policymakers struggling to incorporate insights from these novel data streams into their decision making. Through our experience with the Network, and using human mobility data as an illustrative example, we identify three sets of barriers to the successful application of large digital datasets for the public good.

First, in the absence of pre-established working relationships with technology companies and data brokers, the data remain primarily confined within private circuits of ownership and control. During the pandemic, data-sharing agreements between large technology companies and researchers were hastily cobbled together, often without the right kind of domain expertise in the mix. Second, the lack of standardization, interoperability, and information on the uncertainty and biases associated with these data necessitated complex analytical processing by highly specialized domain experts. And finally, local public health departments, understandably unfamiliar with these novel data streams, had neither the bandwidth nor the expertise to sift signal from noise. Ultimately, most efforts did not yield consistently useful information for decision making, particularly in low-resource settings, where capacity limitations in the public sector are most acute…(More)”.
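As a concrete, hypothetical illustration of the “complex analytical processing” the authors describe, the sketch below converts raw mobility counts into a baseline-relative index, a common first step before such data can inform public health decisions. The column names, baseline window, and method are assumptions made for this example and do not describe CMDN’s actual pipeline.

```python
# Hypothetical illustration: turning raw mobility counts into a
# baseline-relative index. Column names and the baseline window are
# assumptions for this example only.
import pandas as pd

def mobility_index(df: pd.DataFrame, baseline_end: str = "2020-03-01") -> pd.DataFrame:
    """Return percent change in daily trips relative to a pre-crisis baseline.

    Expects columns: 'region', 'date' (datetime64), 'trips' (raw counts).
    The baseline is the median trip count per region and day-of-week
    observed before `baseline_end`.
    """
    df = df.copy()
    df["dow"] = df["date"].dt.dayofweek

    # Median trips per region and day-of-week in the pre-crisis period.
    baseline = (
        df[df["date"] < baseline_end]
        .groupby(["region", "dow"])["trips"]
        .median()
        .rename("baseline_trips")
    )

    # Attach the baseline and express each day as a deviation from it.
    df = df.join(baseline, on=["region", "dow"])
    df["pct_change_from_baseline"] = 100 * (df["trips"] - df["baseline_trips"]) / df["baseline_trips"]
    return df
```

Even this simple step presumes clean, well-documented inputs; the barriers described above arise precisely because those assumptions rarely hold in practice.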

From Poisons to Antidotes: Algorithms as Democracy Boosters


Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything similar to what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making….(More)”.

What Works? Developing a global evidence base for public engagement


Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.

The key themes that emerged from stakeholder interviews and workshops were that, in order for evidence about public engagement to help inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences; broaden out how ‘evidence’ is understood and generated; think strategically about how evidence affects and informs practice; and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates….(More)”.

Trove of unique health data sets could help AI predict medical conditions earlier


Madhumita Murgia at the Financial Times: “…Ziad Obermeyer, a physician and machine learning scientist at the University of California, Berkeley, launched Nightingale Open Science last month — a treasure trove of unique medical data sets, each curated around an unsolved medical mystery that artificial intelligence could help to solve.

The data sets, released after the project received $2m of funding from former Google chief executive Eric Schmidt, could help to train computer algorithms to predict medical conditions earlier, triage better and save lives.

The data include 40 terabytes of medical imagery, such as X-rays, electrocardiogram waveforms and pathology specimens, from patients with a range of conditions, including high-risk breast cancer, sudden cardiac arrest, fractures and Covid-19. Each image is labelled with the patient’s medical outcomes, such as the stage of breast cancer and whether it resulted in death, or whether a Covid patient needed a ventilator.

Obermeyer has made the data sets free to use and mainly worked with hospitals in the US and Taiwan to build them over two years. He plans to expand this to Kenya and Lebanon in the coming months to reflect as much medical diversity as possible.

“Nothing exists like it,” said Obermeyer, who announced the new project in December alongside colleagues at NeurIPS, the global academic conference for artificial intelligence. “What sets this apart from anything available online is the data sets are labelled with the ‘ground truth’, which means with what really happened to a patient and not just a doctor’s opinion.”…
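To make concrete what being labelled with the “ground truth” might look like in a dataset, here is a minimal, hypothetical sketch; the field names and example values are assumptions, not the actual Nightingale schema.

```python
# Hypothetical sketch of a ground-truth-labelled record; the field names
# and values are assumptions, not the actual Nightingale schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelledImage:
    image_path: str                        # e.g. an X-ray, ECG waveform, or pathology slide
    modality: str                          # "xray", "ecg", "pathology", ...
    outcome: str                           # what actually happened to the patient
    days_to_outcome: Optional[int] = None  # time from imaging to outcome, if recorded

records = [
    # Labels record observed outcomes, not a clinician's reading of the image.
    LabelledImage("images/0001.dcm", "xray", outcome="required_ventilator", days_to_outcome=3),
    LabelledImage("images/0002.dcm", "pathology", outcome="breast_cancer_stage_II"),
]
```

Training on outcome labels like these, rather than on clinicians’ interpretations, is what the “ground truth” framing refers to.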

The Nightingale data sets were among dozens proposed this year at NeurIPS.

Other projects included a speech data set of Mandarin and eight subdialects recorded by 27,000 speakers in 34 cities in China; the largest audio data set of Covid respiratory sounds, such as breathing, coughing and voice recordings, from more than 36,000 participants to help screen for the disease; and a data set of satellite images covering the entire country of South Africa from 2006 to 2017, divided and labelled by neighbourhood, to study the social effects of spatial apartheid.

Elaine Nsoesie, a computational epidemiologist at the Boston University School of Public Health, said new types of data could also help with studying the spread of diseases in diverse locations, as people from different cultures react differently to illnesses.

She said her grandmother in Cameroon, for example, might think differently than Americans do about health. “If someone had an influenza-like illness in Cameroon, they may be looking for traditional, herbal treatments or home remedies, compared to drugs or different home remedies in the US.”

Computer scientists Serena Yeung and Joaquin Vanschoren, who proposed that research to build new data sets should be exchanged at NeurIPS, pointed out that the vast majority of the AI community still cannot find good data sets to evaluate their algorithms. This meant that AI researchers were still turning to data that were potentially “plagued with bias”, they said. “There are no good models without good data.”…(More)”.

The new machinery of government: using machine technology in administrative decision-making


Report by New South Wales Ombudsman: “There are many situations in which government agencies could use appropriately designed machine technologies to assist in the exercise of their functions, which would be compatible with lawful and appropriate conduct. Indeed, in some instances machine technology may improve aspects of good administrative conduct – such as accuracy and consistency in decision-making, as well as mitigating the risk of individual human bias.

However, if machine technology is designed and used in a way that does not accord with administrative law and associated principles of good administrative practice, then its use could constitute or involve maladministration. It could also result in legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid.

The New South Wales Ombudsman was prompted to prepare this report after becoming aware of one agency (Revenue NSW) using machine technology for the performance of a discretionary statutory function (the garnisheeing of unpaid fine debts from individuals’ bank accounts), in a way that was having a significant impact on individuals, many of whom were already in situations of financial vulnerability.

The Ombudsman’s experience with Revenue NSW, and a scan of the government’s published policies on the use of artificial intelligence and other digital technologies, suggests that there may be inadequate attention being given to fundamental aspects of public law that are relevant to machine technology adoption….(More)”

The Government of Emergency: Vital Systems, Expertise, and the Politics of Security


Book by Stephen J. Collier and Andrew Lakoff: “From pandemic disease, to the disasters associated with global warming, to cyberattacks, today we face an increasing array of catastrophic threats. It is striking that, despite the diversity of these threats, experts and officials approach them in common terms: as future events that threaten to disrupt the vital, vulnerable systems upon which modern life depends.

The Government of Emergency tells the story of how this now taken-for-granted way of understanding and managing emergencies arose. Amid the Great Depression, World War II, and the Cold War, an array of experts and officials working in obscure government offices developed a new understanding of the nation as a complex of vital, vulnerable systems. They invented technical and administrative devices to mitigate the nation’s vulnerability, and organized a distinctive form of emergency government that would make it possible to prepare for and manage potentially catastrophic events.

Through these conceptual and technical inventions, Stephen Collier and Andrew Lakoff argue, vulnerability was defined as a particular kind of problem, one that continues to structure the approach of experts, officials, and policymakers to future emergencies…(More)”.

Navigating Trust in Society


Report by Coeuraj: “This report provides empirical evidence of existing levels of trust among the US population with regard to institutions and philanthropy, all shaped during a time of deep polarization and a global pandemic.

The source of the data is twofold: first, a year-over-year analysis of institutional trust, as measured by Global Web Index USA from more than 20,000 respondents; and second, an ad hoc nationally representative survey conducted by one of Coeuraj’s data partners, AudienceNet, in the two weeks immediately preceding the 2021 United Nations General Assembly. This report presents the core findings that emerged from both research initiatives….(More)”.

Data trust and data privacy in the COVID-19 period


Paper by Nicholas Biddle et al.: “In this article, we focus on data trust and data privacy, and how attitudes may be changing during the COVID-19 period. On balance, it appears that Australians are more trusting of organizations with regard to data privacy, and less concerned about their own personal information and data, than they were prior to the spread of COVID-19. The major determinant of this change in trust with regard to data was changes in general confidence in government institutions. Despite this improvement in trust with regard to data privacy, trust levels are still low….(More)”.