Governance mechanisms for sharing of health data: An approach towards selecting attributes for complex discrete choice experiment studies

Paper by Jennifer Viberg Johansson: “Discrete Choice Experiment (DCE) is a well-established technique to elicit individual preferences, but it has rarely been used to elicit governance preferences for health data sharing.

The aim of this article was to describe the process of identifying attributes for a DCE study aiming to elicit preferences of citizens in Sweden, Iceland and the UK for governance mechanisms for digitally sharing different kinds of health data in different contexts.

A three-step approach was utilised to inform the attribute and level selection: 1) Attribute identification, 2) Attribute development and 3) Attribute refinement. First, we developed an initial set of potential attributes from a literature review and a workshop with experts. To further develop attributes, focus group discussions with citizens (n = 13), ranking exercises among focus group participants (n = 48) and expert interviews (n = 18) were performed. Thereafter, attributes were refined using group discussion (n = 3) with experts as well as cognitive interviews with citizens (n = 11).

The results led to the selection of seven attributes for further development: 1) level of identification, 2) the purpose of data use, 3) type of information, 4) consent, 5) new data user, 6) collector and 7) the oversight of data sharing. Differences were found between countries in the ordering of the top three attributes. The process also revealed how participants conceptualised the chosen attributes, and what we learned informed our attribute development phase.

This study demonstrates a process for selecting attributes for a (multi-country) DCE involving three stages: Attribute identification, Attribute development and Attribute refinement. This study can contribute to improving the ethical aspects and good practice of this phase in DCE studies. Specifically, it can contribute to the development of governance mechanisms in the digital world, where people’s health data are shared for multiple purposes….(More)”.

How Low and Middle-Income Countries Are Innovating to Combat Covid

Article by Ben Ramalingam, Benjamin Kumpf, Rahul Malhotra and Merrick Schaefer: “Since the Covid-19 pandemic hit, innovators around the world have developed thousands of novel solutions and practical approaches to this unprecedented global health challenge. About one-fifth of those innovations have come from low- and middle-income countries across sub-Saharan Africa, South Asia, and Latin America, according to our analysis of WHO data, and they work to address the needs of poor, marginalized, or excluded communities at the so-called bottom of the pyramid.

Over the past year we’ve been able to learn from and support some of those inspiring innovators. Their approaches are diverse in scope and scale and cover a vast range of pandemic response needs — from infection prevention and control to community engagement, contact tracing, social protection, business continuity, and more.

Here we share seven lessons from those innovators that offer promising insights not only for the ongoing Covid response but also for how we think about, manage, and enable innovation.

1. Ensure that your solutions are sensitive to social and cultural dynamics. 

Successful innovations are relevant to the lived realities of the people they’re intended to help. Socially and culturally sensitive design approaches see greater uptake and use. This is true in both resource-constrained and resource-rich environments.

Take contact tracing in Kenya. In a context where more than half of all residents use public transportation every day, the provider of a ticketing app for Nairobi’s bus fleets adapted its software to collect real-time passenger data. The app has been used across one of the world’s most mobile populations to trace Covid-19 cases, identify future clusters, trigger automated warnings to exposed passengers, and monitor the maximum number of people that could safely be allowed in each vehicle….(More)”.

The pandemic showed that big tech isn’t a public health savior

Nicole Wetsman at Verge: “…It seemed like Big Tech, with its analytic firepower and new focus on health, could help with these very real problems. “We saw all over the papers: Facebook is gonna save the world, and Google’s going to save the world,” says Katerini Storeng, a medical anthropologist who studies public-private partnerships in global public health at the University of Oslo. Politicians were eager to welcome Silicon Valley to the table and to discuss the best ways to manage the pandemic. “It was remarkable, and indicative of a blurring of the boundaries between the public domain and the private domain,” Storeng says.

Over a year later, many of the promised tech innovations never materialized. There are areas where tech companies have made significant contributions — like collecting mobility data that helped officials understand the effects of social distancing policies. But Google wasn’t actually building a nationwide testing website. The program that eventually appeared, a testing program for California run by Google’s sibling company Verily, was quietly phased out after it created more problems than it solved.

Now, after a year, we’re starting to get a clear picture of what worked, what didn’t, and what the relationship between Big Tech and public health might look like in the future.

Tech companies were interested in health before the pandemic, and COVID-19 accelerated those initiatives. There may be things that tech companies are better equipped to handle than traditional public health agencies and other public institutions, and the past year showed some of those strengths. But it also showed their weaknesses and underscored the risks of putting health responsibilities in the hands of private companies — which have goals outside of the public good.

When the pandemic started, Storeng was already studying how private companies participated in public health preparedness efforts. Over the past two decades, consumers and health officials have become more and more confident that tech hacks can be shortcuts to healthy communities. These digital hacks can take many forms and include everything from a smartphone app nudging people toward exercise to a data model analyzing how an illness spreads, she says.

“What they have in common, I think, is this hope and optimism that it’ll help bypass some more systemic, intrinsic problems,” Storeng says.

But healthcare and public health present hard problems. Parachuting in with a new approach that isn’t based on a detailed understanding of the existing system doesn’t always work. “I think we tend to believe in our culture that higher tech, private sector is necessarily better,” says Melissa McPheeters, co-director of the Center for Improving the Public’s Health through Informatics at Vanderbilt University. “Sometimes that’s true. And sometimes it’s not.”

McPheeters spent three years as the director of the Office of Informatics and Analytics at the Tennessee Department of Health. While in that role, she got calls from technology companies all the time, promising quick fixes to any data issues the department was facing. But they were more interested in delivering a product than a collaboration, she says. “It never began with, ‘Help me understand your problem.’”…(More)”

Collective data rights can stop big tech from obliterating privacy

Article by Martin Tisne: “…There are two parallel approaches that should be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European Parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill, one of the few modern laws focused on automated decision-making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch for what future laws could look like. It says that the source code behind such systems must be made available to the public. Anyone can request that code.

Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”

The Coronavirus Pandemic Creative Responses Archive

National Academies of Sciences: “Creativity often flourishes in stressful times because innovation evolves out of need. During the coronavirus pandemic, we are witnessing a range of creative responses from individuals, communities, organizations, and industries. Some are intensely personal, others expansively global—mirroring the many ways the pandemic has affected us. What do these responses to the pandemic tell us about our society, our level of resilience, and how we might imagine the future? Explore the Coronavirus Pandemic Creative Responses Archive…

What Robots Can — And Can’t — Do For the Old and Lonely

Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….

So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.

In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.

Big Tech platforms in health research: Re-purposing big data governance in light of the General Data Protection Regulation’s research exemption

Paper by Luca Marelli, Giuseppe Testa, and Ine van Hoyweghen: “The emergence of a global industry of digital health platforms operated by Big Tech corporations, and its growing entanglements with academic and pharmaceutical research networks, raise pressing questions on the capacity of current data governance models, regulatory and legal frameworks to safeguard the sustainability of the health research ecosystem. In this article, we direct our attention toward the challenges faced by the European General Data Protection Regulation in regulating the potentially disruptive engagement of Big Tech platforms in health research. The General Data Protection Regulation upholds a rather flexible regime for scientific research through a number of derogations to otherwise stricter data protection requirements, while providing a very broad interpretation of the notion of “scientific research”. Precisely the breadth of these exemptions combined with the ample scope of this notion could provide unintended leeway to the health data processing activities of Big Tech platforms, which have not been immune from carrying out privacy-infringing and socially disruptive practices in the health domain. We thus discuss further finer-grained demarcations to be traced within the broadly construed notion of scientific research, geared to implementing use-based data governance frameworks that distinguish health research activities that should benefit from a facilitated data protection regime from those that should not. We conclude that a “re-purposing” of big data governance approaches in health research is needed if European nations are to promote research activities within a framework of high safeguards for both individual citizens and society….(More)”.

How a largely untested AI algorithm crept into hundreds of hospitals

Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
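The excerpt describes the Deterioration Index only at a high level. As a purely illustrative sketch of how a clinical risk score of this general shape can work — Epic’s actual model, features, weights, and thresholds are proprietary, and everything below (names, coefficients, cutoffs) is invented for demonstration — a handful of vitals and lab values might be combined in a logistic model and mapped to the kind of coarse triage bands a clinician would see on a chart:

```python
# Illustrative only: a toy "deterioration score" in the spirit of the tool
# described above. The real Epic Deterioration Index is proprietary; the
# features, weights, and thresholds here are invented for demonstration.

import math

# Hypothetical logistic-regression weights (NOT Epic's actual model).
WEIGHTS = {
    "respiratory_rate": 0.12,   # breaths per minute
    "potassium": 0.45,          # mmol/L
    "heart_rate": 0.03,         # beats per minute
}
INTERCEPT = -8.0


def deterioration_score(vitals: dict) -> float:
    """Return a 0-100 risk score: linear combination passed through a sigmoid."""
    z = INTERCEPT + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 100 / (1 + math.exp(-z))


def triage_band(score: float) -> str:
    """Map the continuous score to coarse bands shown on a patient's chart."""
    if score < 30:
        return "low risk"
    if score < 60:
        return "intermediate risk"
    return "high risk"


# A hypothetical patient with mildly elevated respiratory and heart rates.
patient = {"respiratory_rate": 22, "potassium": 4.1, "heart_rate": 95}
score = deterioration_score(patient)
```

The triage concern raised in the article is visible even in this toy: the bands compress a clinician’s nuanced judgment into a single number and a threshold, and any bias baked into the training data or weights is propagated silently to every chart on which the score is displayed.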

For Whose Benefit? Transparency in the development and procurement of COVID-19 vaccines

Report by Transparency International Global Health: “The COVID-19 pandemic has required an unprecedented public health response, with governments dedicating massive amounts of resources to their health systems at extraordinary speed. Governments have had to respond quickly to fast-changing contexts, with many competing interests, and little in the way of historical precedent to guide them.

Transparency here is paramount; publicly available information is critical to reducing the inherent risks of such a situation by ensuring governmental decisions are accountable and by enabling non-governmental expert input into the global vaccination process.

This report analyses the transparency of two key stages of the vaccine process, in chronological order: the development of vaccines and their subsequent purchase.

Given the scope, rapid progression and complexity of the global vaccination process, this is not an exhaustive analysis. First, all the following analysis is limited to 20 leading COVID-19 vaccines that were in, or had completed, phase 3 clinical trials as of 11th January 2021. Second, we have concentrated on transparency of two of the initial stages of the process: clinical trial transparency and the public contracting for the supply of vaccines. The report provides concrete recommendations on how to overcome current opacity in order to contribute to achieving the commitment of world leaders to ensure equal, fair and affordable access to COVID-19 vaccines for all countries….(More)”.

Improving hand hygiene in hospitals: comparing the effect of a nudge and a boost on protocol compliance

Paper by Henrico van Roekel, Joanne Reinhard and Stephan Grimmelikhuijsen: “Nudging has become a well-known policy practice. Recently, ‘boosting’ has been suggested as an alternative to nudging. In contrast to nudges, boosts aim to empower individuals to exert their own agency to make decisions. This article is one of the first to compare a nudging and a boosting intervention, and it does so in a critical field setting: hand hygiene compliance of hospital nurses. During a 4-week quasi-experiment, we tested the effect of a reframing nudge and a risk literacy boost on hand hygiene compliance in three hospital wards. The results show that nudging and boosting were both effective interventions to improve hand hygiene compliance. A tentative finding is that, while the nudge had a stronger immediate effect, the boost effect remained stable for a week, even after the removal of the intervention. We conclude that, besides nudging, researchers and policymakers may consider boosting when they seek to implement or test behavioral interventions in domains such as healthcare….(More)”.