It’s complicated: what the public thinks about COVID-19 technologies


Imogen Parker at Ada Lovelace Institute: “…Tools of this societal importance need to be shaped by the public. Given the technicality and complexity, that means going beyond surface-level opinions captured through polling and focus groups and creating structures to deliberate with groups of informed citizens. That’s hard to do well, and at the pace needed to keep up with policy and technology, but difficult problems are the ones that most need to be solved.

To help bring much-needed public voices into this debate at pace, we have drawn out emergent themes from three recent in-depth public deliberation projects that can bring insight to bear on the questions of health apps and public health identity systems.

While there are no green lights, red lines – or indeed silver bullets – there are important nuances and strongly held views about the conditions that COVID-19 technologies would need to meet. The report goes into detailed lessons from the public, and I would like to add to those by drawing out here aspects that are consistently under-addressed in discussions I’ve heard about these tools in technology and policy circles.

  1. Trust isn’t just about data or privacy. The technology must be effective – and be seen to be effective. Too often, debates about public acceptability lapse into flawed and tired arguments about privacy vs public health, or citizens’ trust in a technology is confused with reassurances about data protection or security frameworks against malicious actors. First and foremost, people need to trust that the technology works – that it can solve a problem, that it won’t fail, and that it can be relied on. The public discussion must be about the outcome of the technology – not just its function. This is particularly vital in the context of public health, which affects everyone in society.
  2. Any application linked to identity is seen as high-stakes. Identity matters and is complex – and there is anxiety about the creation of technological systems that put people in pre-defined boxes or establish static categories as the primary mechanisms by which they are known, recognised and seen. Proportionality (while not expressed as such) runs deep in public consciousness, and any intrusion will require justification, not simply a rallying call for people to do their duty.
  3. Tools must proactively protect against harm. Mechanisms for challenge or redress need to be built around the app – and indeed be seen as part of the technology. This means that legitimate fears that discrimination or prejudice will arise must be addressed head on, and lower uptake from potentially disadvantaged groups that may legitimately mistrust surveillance systems must be acknowledged and mitigated.
  4. Apps will be judged as part of the system they are embedded into. The whole system must be trustworthy, not just the app or technology – and that encompasses those who develop and deploy it and those who will use it out in the world. An app – however technically perfect – can still be misused by rogue employers, or mistrusted through fear of government overreach or scope creep.
  5. Tools are seen by the public as political and social. Technology developers need to understand that they are shifting the social-political fabric of society during a crisis, and potentially beyond. Tech cannot be decoupled or isolated from questions of the nature of the society it will shape – solidaristic or individualistic; divisive or inclusive….(More)”.

Medical data has a silo problem. These models could help fix it.


Scott Khan at the WEF: “Every day, more and more data about our health is generated. If analyzed, this data could hold the key to unlocking cures for rare diseases, help us manage our health risk factors and provide evidence for public policy decisions. However, due to the highly sensitive nature of health data, much of it is out of reach to researchers, halting discovery and innovation. The problem is amplified further in the international context, when governments naturally want to protect their citizens’ privacy and therefore restrict the movement of health data across international borders. To address this challenge, governments will need to pursue a special approach to policymaking that acknowledges new technology capabilities.

Understanding data silos

Data becomes siloed for a range of well-considered reasons, including restrictions on terms of use (e.g., commercial, non-commercial, disease-specific), regulations imposed by governments (e.g., Safe Harbor, privacy), and an inability to obtain informed consent from historically marginalized populations.

Siloed data, however, also creates a range of problems for researchers looking to make that data useful to the general population. Silos, for example, block researchers from accessing the most up-to-date information or the most diverse, comprehensive datasets. They can slow the development of new treatments and, in turn, curtail key findings that could lead to much-needed treatments or cures.

Even when these challenges are overcome, incidents of data misuse – where health data is used to explore non-health-related topics or is used without an individual’s consent – continue to erode public trust in the same research institutions that depend on such data to advance medical knowledge.

Solving this problem through technology

Technology designed to better protect and decentralize data is being developed to address many of these challenges. Techniques such as homomorphic encryption (a cryptosystem that allows computation to be carried out directly on encrypted data) and differential privacy (a system for learning about a group without revealing details about individuals) both provide means to protect and centralize data while distributing control over its use to the parties that steward the respective data sets.
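
To make the differential-privacy idea concrete, here is a minimal sketch in Python (ours, not the article's) of the Laplace mechanism: a count over a hypothetical patient silo is released with noise scaled to the query's sensitivity and a privacy budget epsilon. The dataset, function names and epsilon value are illustrative assumptions only.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical silo: how many patients carry a given diagnosis?
patients = [{"id": i, "diagnosis": "asthma" if i % 7 == 0 else "other"}
            for i in range(1000)]
print(private_count(patients, lambda p: p["diagnosis"] == "asthma", epsilon=0.5))
```

A smaller epsilon adds more noise and gives stronger privacy; the steward of each data set can choose the budget it is willing to spend.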

Federated data architectures leverage a special type of distributed database management system that offers an alternative to centralizing encoded data: the data sets are analyzed in place, without moving them across jurisdictions or between institutions. Such an approach can help connect data sources while accounting for privacy. To further forge trust in the system, a federated model can be implemented so that only encoded results are returned, preventing unauthorized distribution of the data and of the learnings produced by the research activity.
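
A rough sketch of that federated pattern, under the assumption that each institution runs the query locally and returns only summary statistics: no row-level record leaves its silo, and a coordinator combines the per-site aggregates. The class and function names and the data are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Silo:
    """One institution's data, which never leaves the institution."""
    name: str
    records: List[dict]

    def local_aggregate(self, value_of: Callable[[dict], float]) -> dict:
        """Run the query locally and return only summary statistics."""
        values = [value_of(r) for r in self.records]
        return {"site": self.name, "n": len(values), "sum": sum(values)}

def federated_mean(silos: List[Silo], value_of: Callable[[dict], float]) -> float:
    """Combine per-site aggregates; no individual record crosses a border."""
    aggregates = [s.local_aggregate(value_of) for s in silos]
    total_n = sum(a["n"] for a in aggregates)
    total_sum = sum(a["sum"] for a in aggregates)
    return total_sum / total_n

# Hypothetical query: mean patient age across three hospital silos.
silos = [
    Silo("hospital_a", [{"age": 34}, {"age": 58}]),
    Silo("hospital_b", [{"age": 71}, {"age": 45}, {"age": 60}]),
    Silo("hospital_c", [{"age": 29}]),
]
print(federated_mean(silos, lambda r: r["age"]))
```

The same pattern extends to regression or model training (as in federated learning), with each site returning parameter updates rather than sums.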

To be sure, within every discussion of the analysis of aggregated data lie challenges of data fusion – between data sets, between different studies, between data silos, between institutions. Despite there being several data standards that could be used, most data exist within bespoke data models built for a single purpose rather than for the facilitation of data sharing and data fusion. Furthermore, even when data has been captured in a standardized data model (e.g., the Global Alliance for Genomics and Health offers some models for standardizing sensitive health data), many data sets are still narrowly defined. They often lack any shared identifiers for combining data from different sources into a coherent aggregate data source useful for research. Within a model of data centralization, data fusion can be addressed through curation of each data set, whereas within a federated model, data fusion is much more vexing….(More)”.

The National Cancer Institute Cancer Moonshot Public Access and Data Sharing Policy—Initial assessment and implications


Paper by Tammy M. Frisby and Jorge L. Contreras: “Since 2013, federal research-funding agencies have been required to develop and implement broad data sharing policies. Yet agencies today continue to grapple with the mechanisms necessary to enable the sharing of a wide range of data types, from genomic and other -omics data to clinical and pharmacological data to survey and qualitative data. In 2016, the National Cancer Institute (NCI) launched the ambitious $1.8 billion Cancer Moonshot Program, which included a new Public Access and Data Sharing (PADS) Policy applicable to funding applications submitted on or after October 1, 2017. The PADS Policy encourages the immediate public release of published research results and data and requires all Cancer Moonshot grant applicants to submit a PADS plan describing how they will meet these goals. We reviewed the PADS plans submitted with approximately half of all funded Cancer Moonshot grant applications in fiscal year 2018, and found that a majority did not address one or more elements required by the PADS Policy. Many such plans made no reference to the PADS Policy at all, and several referenced obsolete or outdated National Institutes of Health (NIH) policies instead. We believe that these omissions arose from a combination of insufficient education and outreach by NCI concerning its PADS Policy, both to potential grant applicants and among NCI’s program staff and external grant reviewers. We recommend that other research funding agencies heed these findings as they develop and roll out new data sharing policies….(More)”.

Data4Covid19


The GovLab: “Three months ago, COVID-19 brought much of the world to a halt. Faced with the unprecedented challenges brought by the virus, The GovLab put forth a Call for Action to develop the responsible data infrastructure needed to address the pandemic and other dynamic threats. With our partners, we initiated several projects to achieve the goals outlined in the call.

Today we are launching a new hub for The GovLab’s #Data4COVID19 efforts at data4covid19.org. This site brings together our efforts to implement the Call for Action, including developing a governance framework, building capacity, establishing data stewardship and a network of data stewards, and engaging people.

You can also use the site to share your updates and efforts with The GovLab team or subscribe to our newsletter to stay informed….(More)”.

The Shape of Epidemics


Essay by David S. Jones and Stefan Helmreich: “…Is the most recent rise in new cases—the sharp increase in case counts and hospitalizations reported this week in several states—a second wave, or rather a second peak of a first wave? Will the world see a devastating second wave in the fall?

Such imagery of waves has pervaded talk about the propagation of the infection from the beginning. On January 29, just under a month after the first instances of COVID-19 were reported in Wuhan, Chinese health officials published a clinical report about their first 425 cases, describing them as “the first wave of the epidemic.” On March 4 the French epidemiologist Antoine Flahault asked, “Has China just experienced a herald wave, to use terminology borrowed from those who study tsunamis, and is the big wave still to come?” The Asia Times warned shortly thereafter that “a second deadly wave of COVID-19 could crash over China like a tsunami.” A tsunami, however, struck elsewhere, with the epidemic surging in Iran, Italy, France, and then the United States. By the end of April, with the United States having passed one million cases, the wave forecasts had become bleaker. Prominent epidemiologists predicted three possible future “wave scenarios”—described by one Boston reporter as “seascapes,” characterized by oscillating outbreaks, the arrival of a “monster wave,” or a persistent and rolling crisis.


From Kristine Moore et al., “The Future of the COVID-19 Pandemic” (April 30, 2020). Used with permission from the Center for Infectious Disease Research and Policy, University of Minnesota.

While this language may be new to much of the public, the figure of the wave has long been employed to describe, analyze, and predict the behavior of epidemics. Understanding this history can help us better appreciate the conceptual inheritances of a scientific discipline suddenly at the center of public discussion. It can also help us judge the utility as well as limitations of those representations of epidemiological waves now in play in thinking about the science and policy of COVID-19. As the statistician Edward Tufte writes in his classic work The Visual Display of Quantitative Information (1983), “At their best, graphics are instruments for reasoning about quantitative information.” The wave, operating as a hybrid of the diagrammatic, mathematical, and pictorial, certainly does help to visualize and think about COVID-19 data, but it also does much more. The wave image has become an instrument for public health management and prediction—even prophecy—offering a synoptic, schematic view of the dynamics it describes.

This essay sketches the backstory of epidemic waves, which falls roughly into three eras: waves emerge first as a device of data visualization, then evolve into an object of mathematical modeling and causal investigation, and finally morph into a tool of persuasion, intervention, and governance. Accounts of the wave-like rise and fall of rates of illness and death in populations first appeared in the mid-nineteenth century, with England a key player in developments that saw government officials collect data permitting the graphical tabulation of disease trends over time. During this period the wave image was primarily metaphorical, a heuristic way of talking about patterns in data. Using curving numerical plots, epidemiologists offered analogies between the spread of infection and the travel of waves, sometimes transposing the temporal tracing of epidemic data onto maps of geographical space. Exactly what mix of forces—natural or social—generated these “epidemic waves” remained a source of speculation….(More)”.

Five ways to ensure that models serve society: a manifesto


Andrea Saltelli et al at Nature: “The COVID-19 pandemic illustrates perfectly how the operation of science changes when questions of urgency, stakes, values and uncertainty collide — in the ‘post-normal’ regime.

Well before the coronavirus pandemic, statisticians were debating how to prevent malpractice such as p-hacking, particularly when it could influence policy [1]. Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by ‘science’ [2]. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence, fatality rate and reproduction rate of the virus in populations. There are few estimates of the number of asymptomatic infections, and those that exist are highly variable. We know even less about the seasonality of infections and how immunity works, not to mention the impact of social-distancing interventions in diverse, complex societies.

Mathematical models produce highly uncertain numbers that predict future infections, hospitalizations and deaths under various scenarios. Rather than using models to inform their understanding, political rivals often brandish them to support predetermined agendas. To make sure predictions do not become adjuncts to a political cause, modellers, decision makers and citizens need to establish new social norms. Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing [2,3].

This is important because, when used appropriately, models serve society extremely well: perhaps the best known are those used in weather forecasting. These models have been honed by testing millions of forecasts against reality. So, too, have ways to communicate results to diverse users, from the Digital Marine Weather Dissemination System for ocean-going vessels to the hourly forecasts accumulated by weather.com. Picnickers, airline executives and fishers alike understand both that the modelling outputs are fundamentally uncertain, and how to factor the predictions into decisions.

Here we present a manifesto for best practices for responsible mathematical modelling. Many groups before us have described the best ways to apply modelling insights to policies, including for diseases [4] (see also Supplementary information). We distil five simple principles to help society demand the quality it needs from modelling….(More)”.
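
The manifesto's central worry, that single model runs project more certainty than they deserve, is easy to illustrate. The sketch below is ours, not the authors': a toy discrete-time SIR model is run repeatedly with transmission and recovery rates sampled from assumed ranges, and the spread of predicted epidemic peaks shows how wide honest uncertainty bands can be.

```python
import random

def sir_trajectory(beta: float, gamma: float, days: int = 120,
                   n: int = 1_000_000, i0: int = 100) -> list:
    """Simulate a discrete-time SIR model; return the daily infected counts."""
    s, i, r = n - i0, i0, 0
    infected = []
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected.append(i)
    return infected

# Propagate parameter uncertainty: sample plausible ranges instead of
# committing to single "true" values for transmission and recovery rates.
random.seed(1)
peaks = []
for _ in range(500):
    beta = random.uniform(0.15, 0.35)   # assumed range, illustration only
    gamma = random.uniform(0.05, 0.15)  # assumed range, illustration only
    peaks.append(max(sir_trajectory(beta, gamma)))

peaks.sort()
print(f"Peak infections, roughly 5th to 95th percentile: "
      f"{peaks[25]:,.0f} to {peaks[475]:,.0f}")
```

Reporting the full range, rather than the single most dramatic or most reassuring run, is one concrete way to avoid projecting more certainty than a model deserves.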

Government Capacity, Societal Trust or Party Preferences? What Accounts for the Variety of National Policy Responses to the COVID-19 Pandemic in Europe?


Paper by Dimiter Toshkov, Kutsal Yesilkagit and Brendan Carroll: “European states responded to the rapid spread of the COVID-19 pandemic in 2020 with a variety of public policy measures. Governments across the continent acted more or less swiftly to close down schools, restrict arrival into their countries and travel within their territories, ban public meetings, impose local and national lockdowns, declare states of emergency and pass other emergency measures. Importantly, both the mix of policy tools as well as the speed with which they were enacted differed significantly even within the member states of the European Union.

In this article we ask what can account for this variation in policy responses, and we identify a number of factors related to institutions, general governance and specific health-sector capacities, societal trust, government type, and party preferences as possible determinants. Using multivariate regression and survival analysis, we model the speed with which school closures, national lockdowns and states of emergency were announced. The models suggest a number of significant and often counterintuitive relationships: we find that more centralized countries with lower government effectiveness, freedom and societal trust, but with separate ministries of health and health ministers with a medical background, acted faster and more decisively. These results are important in light of the large positive effects that early policy responses likely had on managing the impact of the pandemic….(More)”.
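
For readers unfamiliar with the survival-analysis approach the authors use, here is a minimal sketch in Python using the lifelines package (assuming it is installed). It fits a Cox proportional hazards model to the number of days until a national lockdown is announced, with country-level covariates of the kind the paper discusses. The data, values and variable names are invented for illustration; they are not the paper's.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical country-level data: days until a lockdown was announced
# (duration), whether one was announced in the observation window (event),
# and two illustrative covariates.
data = pd.DataFrame({
    "days_to_lockdown":         [14, 21, 9, 30, 12, 25, 18, 40, 11, 33, 16, 27],
    "lockdown_announced":       [1,  1,  1, 0,  1,  1,  1,  0,  1,  1,  1,  1],
    "government_effectiveness": [1.2, 0.8, 0.5, 1.6, 0.7, 1.1, 0.9, 1.5, 0.6, 1.3, 1.0, 1.4],
    "societal_trust":           [0.3, 0.6, 0.5, 0.4, 0.7, 0.3, 0.8, 0.5, 0.6, 0.4, 0.7, 0.5],
})

# Cox proportional hazards model: a hazard ratio above 1 means the covariate
# is associated with announcing a lockdown sooner.
cph = CoxPHFitter()
cph.fit(data, duration_col="days_to_lockdown", event_col="lockdown_announced")
cph.print_summary()
```

Survival models of this kind handle the fact that some countries had not yet acted by the end of the observation window (censored observations), which ordinary regression on the announcement date cannot do cleanly.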

Social Distancing and Social Capital: Why U.S. Counties Respond Differently to Covid-19


NBER Paper by Wenzhi Ding et al: “Since social distancing is the primary strategy for slowing the spread of many diseases, understanding why U.S. counties respond differently to COVID-19 is critical for designing effective public policies. Using daily data from about 45 million mobile phones to measure social distancing, we examine how counties responded to both local COVID-19 cases and statewide shelter-in-place orders. We find that social distancing increases more in response to cases and official orders in counties where individuals historically (1) engaged less in community activities and (2) demonstrated greater willingness to incur individual costs to contribute to social objectives. Our work highlights the importance of these two features of social capital—community engagement and individual commitment to societal institutions—in formulating public health policies….(More)”.

Secondhand Smoke, Moral Sanctions, and How We Should Respond to COVID-19


Article by Barry Schwartz: “How did we get from that day to this one, with widespread smoking bans in public places? The answer, I believe, was the discovery of the effects of secondhand smoke. When I smoked, it harmed innocent bystanders. It harmed children, including my own. The research on secondhand smoke began in the 1960s, showing negative effects on lab animals. As the work continued, it left no doubt that secondhand smoke contributes to asthma, cardiovascular disease, many types of cancer, stroke, cognitive impairment, and countless other maladies. These sorts of findings empowered people to demand, not request, that others put out their cigarettes. The secondhand smoke research led eventually to all the regulation that we now take for granted.

Why did this research change public attitudes and change them so fast—in a single generation? The answer, I think, is that research on secondhand smoke took an individual (perhaps foolish) choice and moralized it, by emphasizing its effects on others. It was no longer simply dumb to smoke; it was immoral. And that changed everything.

Psychologist Paul Rozin has studied the process of moralization. When activities get moralized, they move from being matters of individual discretion to being matters of obligation. Smoking went from being an individual consumer decision to being a transgression. And the process of moralization can go in the other direction, as we have seen, for most people, in the case of sexuality. In recent years, homosexuality has been “demoralized,” and moral sanctions against it have slowly been melting away….(More)”.

Normalizing Health-Positive Technology


Article by Sara J. Singer, Stephen Downs, Grace Ann Joseph, Neha Chaudhary, Christopher Gardner, Nina Hersher, Kelsey P. Mellard, Norma Padrón & Yennie Solheim: “….Aligning the technology sector with a societal goal of greater health and well-being entails a number of shifts in thinking. The most fundamental is understanding health not as a vertical market segment but as a horizontal value: in addition to developing a line of health products or services, a company should express health across its full portfolio of products and services. Rather than pushing behaviors on people through information and feedback, technology companies should also pull behaviors from people by changing the environment and products they are offered; in addition to developing technology that helps people overcome the challenge of being healthy, we need to envision technology that reduces the challenges to being healthy. And in addition to holding individuals responsible for the choices they make, we also need to recognize the collective responsibility that society bears for the choices it makes available.

How to catalyze these shifts?

To find out, we convened a “tech-enabled health” gathering, in which 50 entrepreneurs, leaders from large technology companies, investors, policymakers, clinicians, and public health experts designed a hands-on, interactive, and substantively focused agenda. Participants brainstormed ways that consumer-facing technologies could help people move more, eat better, sleep well, stay socially connected, and reduce stress. In groups and collectively, participants also considered ways in which ideas related and might be synergistic, potential barriers and contextual conditions that might impede or support transformation, and strategies for catalyzing the desired shift. Participants were mixed in terms of sector, discipline, and gender (though the attendees were not as diverse in terms of race/ethnicity or economic strata as the users we potentially wanted to impact—a limitation noted by participants). We intentionally maintained a positive tone, emphasizing potential benefits of shifting toward a health-positive approach, rather than bemoaning the negative role that technology can play….(More)”.