Big Tech platforms in health research: Re-purposing big data governance in light of the GDPR's research exemption
Paper by Luca Marelli, Giuseppe Testa, and Ine van Hoyweghen: “The emergence of a global industry of digital health platforms operated by Big Tech corporations, and its growing entanglements with academic and pharmaceutical research networks, raise pressing questions on the capacity of current data governance models, regulatory and legal frameworks to safeguard the sustainability of the health research ecosystem. In this article, we direct our attention toward the challenges faced by the European General Data Protection Regulation in regulating the potentially disruptive engagement of Big Tech platforms in health research. The General Data Protection Regulation upholds a rather flexible regime for scientific research through a number of derogations to otherwise stricter data protection requirements, while providing a very broad interpretation of the notion of “scientific research”. Precisely the breadth of these exemptions combined with the ample scope of this notion could provide unintended leeway to the health data processing activities of Big Tech platforms, which have not been immune from carrying out privacy-infringing and socially disruptive practices in the health domain. We thus discuss further finer-grained demarcations to be traced within the broadly construed notion of scientific research, geared to implementing use-based data governance frameworks that distinguish health research activities that should benefit from a facilitated data protection regime from those that should not. We conclude that a “re-purposing” of big data governance approaches in health research is needed if European nations are to promote research activities within a framework of high safeguards for both individual citizens and society….(More)”.
How a largely untested AI algorithm crept into hundreds of hospitals
Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.
In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.
The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.
Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those at high risk of being transferred to an ICU, being placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
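“Discriminating” between low- and high-risk patients is conventionally quantified with the area under the ROC curve (AUC). The sketch below is a hypothetical illustration of that metric only; the scores and outcomes are invented and do not represent Epic’s model or the Michigan Medicine data:

```python
# Toy illustration of discrimination via AUC: the probability that a
# randomly chosen patient who deteriorated received a higher risk score
# than a randomly chosen patient who did not. All data below is made up.

def auc(scores, labels):
    """Rank-based AUC; labels are 1 (deteriorated) or 0 (did not)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical deterioration scores and outcomes
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]

print(auc(scores, labels))  # 0.75: moderate discrimination
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so values in the 0.7–0.8 range correspond to the “moderately successful” discrimination described above.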
For Whose Benefit? Transparency in the development and procurement of COVID-19 vaccines
Report by Transparency International Global Health: “The COVID-19 pandemic has required an unprecedented public health response, with governments dedicating massive amounts of resources to their health systems at extraordinary speed. Governments have had to respond quickly to fast-changing contexts, with many competing interests, and little in the way of historical precedent to guide them.
Transparency here is paramount; publicly available information is critical to reducing the inherent risks of such a situation by ensuring governmental decisions are accountable and by enabling non-governmental expert input into the global vaccination process.
This report analyses the transparency of two key stages of the vaccine process, in chronological order: the development and the subsequent purchase of vaccines.
Given the scope, rapid progression and complexity of the global vaccination process, this is not an exhaustive analysis. First, the analysis is limited to the 20 leading COVID-19 vaccines that were in, or had completed, phase 3 clinical trials as of 11 January 2021. Second, we have concentrated on transparency of two of the initial stages of the process: clinical trial transparency and the public contracting for the supply of vaccines. The report provides concrete recommendations on how to overcome current opacity in order to contribute to achieving the commitment of world leaders to ensure equal, fair and affordable access to COVID-19 vaccines for all countries….(More)”.
Improving hand hygiene in hospitals: comparing the effect of a nudge and a boost on protocol compliance
Paper by Henrico van Roekel, Joanne Reinhard and Stephan Grimmelikhuijsen: “Nudging has become a well-known policy practice. Recently, ‘boosting’ has been suggested as an alternative to nudging. In contrast to nudges, boosts aim to empower individuals to exert their own agency to make decisions. This article is one of the first to compare a nudging and a boosting intervention, and it does so in a critical field setting: hand hygiene compliance of hospital nurses. During a 4-week quasi-experiment, we tested the effect of a reframing nudge and a risk literacy boost on hand hygiene compliance in three hospital wards. The results show that nudging and boosting were both effective interventions to improve hand hygiene compliance. A tentative finding is that, while the nudge had a stronger immediate effect, the boost effect remained stable for a week, even after the removal of the intervention. We conclude that, besides nudging, researchers and policymakers may consider boosting when they seek to implement or test behavioral interventions in domains such as healthcare….(More)”.
Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization
NBER Paper by Abhijit Banerjee et al: “We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools—reminders and incentives—as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as “ambassadors” receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or “dosages”) of each intervention, we obtain 75 unique policy combinations.
We develop a new statistical technique—a smart pooling and pruning procedure—for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner’s curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%….(More)”.
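The pool-then-prune idea can be sketched in drastically simplified form. The paper’s actual procedure uses a LASSO step and corrects for the winner’s curse; the version below (with invented effect sizes and standard errors) merely pools adjacent dosage arms whose estimated effects are statistically indistinguishable, then prunes pooled policies indistinguishable from a zero effect:

```python
# Hypothetical, simplified sketch of "pool dosages we cannot tell apart,
# prune policies we cannot distinguish from zero". Not the paper's code.

def indistinguishable(e1, se1, e2, se2, z=1.96):
    """Two-sided z-test: can the two effect estimates be told apart?"""
    return abs(e1 - e2) / (se1 ** 2 + se2 ** 2) ** 0.5 < z

def pool_and_prune(dosages):
    """dosages: list of (name, effect, stderr), ordered by dosage level."""
    pooled = [dosages[0]]
    for name, eff, se in dosages[1:]:
        pname, peff, pse = pooled[-1]
        if indistinguishable(eff, se, peff, pse):
            # Pool: inverse-variance weighted average of the two estimates.
            w1, w2 = 1 / pse ** 2, 1 / se ** 2
            pooled[-1] = (pname + "+" + name,
                          (w1 * peff + w2 * eff) / (w1 + w2),
                          (1 / (w1 + w2)) ** 0.5)
        else:
            pooled.append((name, eff, se))
    # Prune: drop pooled policies indistinguishable from zero effect.
    return [(n, e, s) for n, e, s in pooled if not indistinguishable(e, s, 0.0, 0.0)]

# Invented dosage arms of an SMS-reminder treatment: (name, effect, stderr)
arms = [("sms_low", 0.02, 0.02), ("sms_mid", 0.03, 0.02), ("sms_high", 0.12, 0.03)]
print(pool_and_prune(arms))  # only sms_high survives pooling and pruning
```

Here the low and mid dosages are pooled (their estimates cannot be told apart), and the pooled arm is then pruned because its combined effect is not distinguishable from zero, leaving only the high-dosage arm as an effective policy.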
How COVID broke the evidence pipeline
Article by Helen Pearson: “It wasn’t long into the pandemic before Simon Carley realized we had an evidence problem. It was early 2020, and COVID-19 infections were starting to lap at the shores of the United Kingdom, where Carley is an emergency-medicine doctor at hospitals in Manchester. Carley is also a specialist in evidence-based medicine — the transformative idea that physicians should decide how to treat people by referring to rigorous evidence, such as clinical trials.
As cases of COVID-19 climbed in February, Carley thought that clinicians were suddenly abandoning evidence and reaching for drugs just because they sounded biologically plausible. Early studies Carley saw being published often lacked control groups or enrolled too few people to draw firm conclusions. “We were starting to treat patients with these drugs initially just on what seemed like a good idea,” he says. He understood the desire to do whatever is possible for someone gravely ill, but he also knew how dangerous it is to assume a drug works when so many promising treatments prove to be ineffective — or even harmful — in trials. “The COVID-19 pandemic has arguably been one of the greatest challenges to evidence-based medicine since the term was coined in the last century,” Carley and his colleagues wrote of the problems they were seeing [1].
Other medical experts echo these concerns. With the pandemic now deep into its second year, it’s clear the crisis has exposed major weaknesses in the production and use of research-based evidence — failures that have inevitably cost lives. Researchers have registered more than 2,900 clinical trials related to COVID-19, but the majority are too small or poorly designed to be of much use (see ‘Small samples’). Organizations worldwide have scrambled to synthesize the available evidence on drugs, masks and other key issues, but can’t keep up with the outpouring of new research, and often repeat others’ work. There’s been “research waste at an unprecedented scale”, says Huseyin Naci, who studies health policy at the London School of Economics….(More)”.
Public participation in crisis policymaking. How 30,000 Dutch citizens advised their government on relaxing COVID-19 lockdown measures
Paper by Niek Mouter et al: “Following the outbreak of COVID-19, governments took unprecedented measures to curb the spread of the virus. Public participation in decisions regarding (the relaxation of) these measures has been notably absent, despite being recommended in the literature. Here, as one of the exceptions, we report the results of 30,000 citizens advising the government on eight different possibilities for relaxing lockdown measures in the Netherlands. Using the novel method Participatory Value Evaluation (PVE), we asked participants to recommend which of the eight options they would prefer to be relaxed. Participants received information regarding the societal impacts of each relaxation option, such as the impact of the option on the healthcare system.
The results of the PVE informed policymakers about people’s preferences regarding (the impacts of) the relaxation options. For instance, we established that participants assign an equal value to a reduction of 100 deaths among citizens younger than 70 years and a reduction of 168 deaths among citizens older than 70 years. We show how these preferences can be used to rank options in terms of desirability. Citizens advised to relax lockdown measures, but not to the point at which the healthcare system becomes heavily overloaded. We found wide support for prioritising the re-opening of contact professions. Conversely, participants disfavoured options to relax restrictions for specific groups of citizens as they found it important that decisions lead to “unity” and not to “division”. 80% of the participants state that PVE is a good method to let citizens participate in government decision-making on relaxing lockdown measures. Participants felt that they could express a nuanced opinion, communicate arguments, and appreciated the opportunity to evaluate relaxation options in comparison to each other while being informed about the consequences of each option. This increased their awareness of the dilemmas the government faces….(More)”.
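The reported equivalence (a reduction of 100 deaths under 70 valued equally to 168 deaths over 70) implies a relative preference weight of 168/100 = 1.68, which can then be used to rank options by preference-weighted impact. In the sketch below, only that ratio comes from the paper; the option names and impact numbers are invented for illustration:

```python
# Implied trade-off from the PVE finding: participants value one avoided
# under-70 death the same as 1.68 avoided over-70 deaths.
weight_under70 = 168 / 100
print(weight_under70)  # 1.68

# Hypothetical relaxation options and invented death impacts, ranked by
# preference-weighted cost (lower is more desirable).
options = {
    "reopen_contact_professions": {"deaths_u70": 10, "deaths_o70": 40},
    "relax_for_specific_groups":  {"deaths_u70": 25, "deaths_o70": 30},
}

def weighted_cost(impact):
    return weight_under70 * impact["deaths_u70"] + impact["deaths_o70"]

ranked = sorted(options, key=lambda o: weighted_cost(options[o]))
print(ranked)
```

This kind of preference-weighted ranking is what allows elicited trade-offs, rather than raw death counts alone, to order relaxation options by desirability.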
People Count: Contact-Tracing Apps and Public Health
Book by Susan Landau: “An introduction to the technology of contact tracing and its usefulness for public health, considering questions of efficacy, equity, and privacy.
How do you stop a pandemic before a vaccine arrives? Contact tracing is key, the first step in a process that has proven effective: trace, test, and isolate. Smartphones can collect some of the information required by contact tracers—not just where you’ve been but also who’s been near you. Can we repurpose the tracking technology that we carry with us—devices with GPS, Wi-Fi, Bluetooth, and social media connectivity—to serve public health in a pandemic? In People Count, cybersecurity expert Susan Landau looks at some of the apps developed for contact tracing during the COVID-19 pandemic, finding that issues of effectiveness and equity intersect.
Landau explains the effectiveness (or ineffectiveness) of a range of technological interventions, including dongles in Singapore that collect proximity information; India’s biometric national identity system; Harvard University’s experiment, TraceFi; and China’s surveillance network. Other nations rejected China-style surveillance in favor of systems based on Bluetooth, GPS, and cell towers, but Landau explains the limitations of these technologies. She also reports that many current apps appear to be premised on a model of middle-class income and a job that can be done remotely. How can they be effective when low-income communities and front-line workers are the ones who are hit hardest by the virus? COVID-19 will not be our last pandemic; we need to get this essential method of infection control right….(More)”.
WHO, Germany launch new global hub for pandemic and epidemic intelligence
Press Release: “The World Health Organization (WHO) and the Federal Republic of Germany will establish a new global hub for pandemic and epidemic intelligence, data, surveillance and analytics innovation. The Hub, based in Berlin and working with partners around the world, will lead innovations in data analytics across the largest network of global data to predict, prevent, detect, prepare for and respond to pandemic and epidemic risks worldwide.
H.E. German Federal Chancellor Dr Angela Merkel said: “The current COVID-19 pandemic has taught us that we can only fight pandemics and epidemics together. The new WHO Hub will be a global platform for pandemic prevention, bringing together various governmental, academic and private sector institutions. I am delighted that WHO chose Berlin as its location and invite partners from all around the world to contribute to the WHO Hub.”
The WHO Hub for Pandemic and Epidemic Intelligence is part of WHO’s Health Emergencies Programme and will be a new collaboration of countries and partners worldwide, driving innovations to increase availability and linkage of diverse data; develop tools and predictive models for risk analysis; and to monitor disease control measures, community acceptance and infodemics. Critically, the WHO Hub will support the work of public health experts and policy-makers in all countries with insights so they can take rapid decisions to prevent and respond to future public health emergencies.
“We need to identify pandemic and epidemic risks as quickly as possible, wherever they occur in the world. For that aim, we need to strengthen the global early warning surveillance system with improved collection of health-related data and inter-disciplinary risk analysis,” said Jens Spahn, German Minister of Health. “Germany has consistently been committed to support WHO’s work in preparing for and responding to health emergencies, and the WHO Hub is a concrete initiative that will make the world safer.”
Working with partners globally, the WHO Hub will drive a scale-up in innovation for existing forecasting and early warning capacities in WHO and Member States. At the same time, the WHO Hub will accelerate global collaborations across public and private sector organizations, academia, and international partner networks. It will help them to collaborate and co-create the necessary tools for managing and analyzing data for early warning surveillance. It will also promote greater access to data and information….(More)”.
In scramble to respond to Covid-19, hospitals turned to models with high risk of bias
Article by Elise Reuter: “…Michigan Medicine is one of 80 hospitals contacted by MedCity News between January and April in a survey of decision-support systems implemented during the pandemic. Of the 26 respondents, 12 used machine learning tools or automated decision systems as part of their pandemic response. Larger hospitals and academic medical centers used them more frequently.
Faced with scarcities in testing, masks, hospital beds and vaccines, several of the hospitals turned to models as they prepared for difficult decisions. The deterioration index created by Epic was one of the most widely implemented — more than 100 hospitals are currently using it — but in many cases, hospitals also formulated their own algorithms.
They built models to predict which patients were most likely to test positive when shortages of swabs and reagents backlogged tests early in the pandemic. Others developed risk-scoring tools to help determine who should be contacted first for monoclonal antibody treatment, or which Covid patients should be enrolled in at-home monitoring programs.
MedCity News also interviewed hospitals on their processes for evaluating software tools to ensure they are accurate and unbiased. Currently, the FDA does not require some clinical decision-support systems to be cleared as medical devices, leaving the developers of these tools and the hospitals that implement them responsible for vetting them.
Among the hospitals that published efficacy data, some of the models were only evaluated through retrospective studies. This can pose a challenge in figuring out how clinicians actually use them in practice, and how well they work in real time. And while some of the hospitals tested whether the models were accurate across different groups of patients — such as people of a certain race, gender or location — this practice wasn’t universal.
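A subgroup check of the kind described above can be sketched minimally: compute the model’s sensitivity (true-positive rate) at a fixed alert threshold within each patient group and compare. All scores, outcomes and group labels below are synthetic and for illustration only:

```python
# Minimal, hypothetical audit of a risk model across patient subgroups:
# does a fixed alert threshold catch deteriorations equally well in each group?
from collections import defaultdict

THRESHOLD = 0.5  # hypothetical alert cutoff

# (group, model risk score, 1 = patient actually deteriorated) — synthetic
records = [
    ("group_a", 0.9, 1), ("group_a", 0.7, 1), ("group_a", 0.6, 1), ("group_a", 0.3, 0),
    ("group_b", 0.8, 1), ("group_b", 0.4, 1), ("group_b", 0.3, 1), ("group_b", 0.2, 0),
]

def sensitivity_by_group(records, threshold):
    """Fraction of true deteriorations the model flags, per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged positives, all positives]
    for group, score, outcome in records:
        if outcome == 1:
            counts[group][1] += 1
            if score >= threshold:
                counts[group][0] += 1
    return {g: tp / pos for g, (tp, pos) in counts.items()}

for group, sens in sorted(sensitivity_by_group(records, THRESHOLD).items()):
    print(group, round(sens, 2))  # group_a 1.0, group_b 0.33
```

In this synthetic example the alert catches every deterioration in one group but only a third in the other, which is exactly the kind of disparity a retrospective, overall-accuracy evaluation can miss.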
As more companies spin up these models, researchers cautioned that they need to be designed and implemented carefully, to ensure they don’t yield biased results.
An ongoing review of more than 200 Covid-19 risk-prediction models found that the majority had a high risk of bias, meaning the data they were trained on might not represent the real world….(More)”.