Wanted: rules for pandemic data access that everyone can trust


Editorial at Nature: “The need for better pandemic preparedness before the world faces another outbreak is rising up the global agenda. Last week, the World Health Organization’s member states met virtually for the World Health Assembly and decided to reconvene for a special three-day session in November to discuss a pandemic treaty. If agreed, the treaty would be an international law that would bind its signatories to take swift, collective and evidence-based action in the event of an outbreak of an infectious disease with pandemic potential.

As Nature has previously reported, the jury is out on whether such a treaty is necessary. It is still not clear whether the idea has the support of a majority of nations, and it is being debated whether now is the time to be discussing a future pandemic, when so much remains to be done to end the current one. However, if there is to be such a treaty, it must include internationally agreed rules on accessing data in a pandemic — or any global emergency that has the potential to cause large-scale loss of life.

Discussions on pandemic data access are already taking place, and at some pace. The science academies of the G7 group of the world’s seven biggest economies — known as the S7 — have published a statement emphasizing the need for emergency data-access rules, including questions of governance (go.nature.com/2sjqj2v).

These will be discussed at this month’s G7 meeting in Cornwall, UK. Discussions have also been taking place among the G20 science academies and at the World Health Organization. These separate conversations need to converge. A pandemic treaty involving researchers and policymakers from every country could be where this happens. Once the rules are set — and a treaty will make them legally binding — countries can then enact them into national laws.

Researchers and policymakers have needed a range of types of data during the pandemic. These data, often reported daily, include updates on: COVID-19 test results and assessments of their accuracy; the number of people who have died; and how many people have been vaccinated. Some countries are tracking data on viral genome sequences, in part to unlock the identity of variants.

Mobile-phone data offer considerable power to understand in real time how a disease is spreading, as do data from Internet search results, and data from mapping applications, which allow researchers to see the movement of people. Payment data held by banks and credit-card companies can help to provide an accurate understanding of the impact of lockdowns on economies.

But access to such data is spotty, to say the least. Researchers in some countries have used these data to good effect, according to a November report on data readiness from an initiative called DELVE: Data Evaluation and Learning for Viral Epidemics, convened by the Royal Society in London (go.nature.com/3fymrcd). But there is no agreed, trusted mechanism for access….(More)”

AI helps scour video archives for evidence of human-rights abuses


The Economist: “Thanks especially to ubiquitous camera-phones, today’s wars have been filmed more than any in history. Consider the growing archives of Mnemonic, a Berlin charity that preserves video that purports to document war crimes and other violations of human rights. If played nonstop, Mnemonic’s collection of video from Syria’s decade-long war would run until 2061. Mnemonic also holds seemingly bottomless archives of video from conflicts in Sudan and Yemen. Even greater amounts of potentially relevant additional footage await review online.

Outfits that, like Mnemonic, scan video for evidence of rights abuses note that the task is a slog. Some trim costs by recruiting volunteer reviewers. Not everyone, however, is cut out for the tedium and, especially, periodic dreadfulness involved. That is true even for paid staff. Karim Khan, who leads a United Nations team in Baghdad investigating Islamic State (IS) atrocities, says viewing the graphic cruelty causes enough “secondary trauma” for turnover to be high. The UN project, called UNITAD, is sifting through documentation that includes more than a year’s worth of video, most of it found online or on the phones and computers of captured or killed IS members.

Now, however, reviewing such video is becoming much easier. Technologists are developing a type of artificial-intelligence (AI) software that uses “machine vision” to rapidly scour video for imagery that suggests an abuse of human rights has been recorded. It’s early days, but the software is promising. A number of organisations, including Mnemonic and UNITAD, have begun to operate such programs.

This year UNITAD began to run one dubbed Zeteo. It performs well, says David Hasman, one of its operators. Zeteo can be instructed to find—and, if the image resolution is decent, typically does find—bits of video showing things like explosions, beheadings, firing into a crowd and grave-digging. Zeteo can also spot footage of a known person’s face, as well as scenes as precise as a woman walking in uniform, a boy holding a gun in twilight, and people sitting on a rug with an IS flag in view. Searches can encompass metadata that reveals when, where and on what devices clips were filmed….(More)”.
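The workflow The Economist describes, searching classifier-generated tags over an archive and filtering by filming metadata, can be sketched in miniature. Everything below (the `Clip` fields, the label tags, and the `search` helper) is invented for illustration; Zeteo's actual pipeline and interfaces are not public.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start_s: int    # offset into the archive, in seconds
    labels: set     # tags a vision model might emit for this clip
    recorded: str   # metadata: capture date (ISO format)
    device: str     # metadata: capture device

# Toy archive standing in for classifier output over millions of clips.
ARCHIVE = [
    Clip(0,   {"crowd", "street"},       "2015-03-02", "phone-A"),
    Clip(120, {"explosion", "smoke"},    "2015-03-02", "phone-A"),
    Clip(600, {"rug", "flag", "people"}, "2016-07-19", "camera-B"),
    Clip(900, {"grave-digging"},         "2016-07-20", "camera-B"),
]

def search(archive, required_labels, after=None):
    """Return clips whose tags contain every required label,
    optionally restricted by recording-date metadata."""
    hits = []
    for clip in archive:
        if required_labels <= clip.labels:          # subset test
            if after is None or clip.recorded >= after:
                hits.append(clip)
    return hits

# Find footage tagged as showing an explosion:
print([c.start_s for c in search(ARCHIVE, {"explosion"})])  # [120]
```

The point of such a tool is triage, not judgment: a reviewer still has to watch each hit, but skips the hours of footage the model tagged as irrelevant.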

Social-Tech Entrepreneurs: Building Blocks of a New Social Economy


Article by Mario Calderini, Veronica Chiodo, Francesco Gerli & Giulio Pasi: “Is it possible to create a sustainable, human-centric, resilient economy that achieves diverse objectives—including growth, inclusion, and equity? Could industry provide prosperity beyond jobs and economic growth, by adopting societal well-being as a compass to inform the production of goods and services?

The policy brief “Industry 5.0,” recently released by the European Commission, seems to reply positively. It makes the case for conceiving economic growth as a means to inclusive prosperity. It is also an invitation to rethink the role of industry in society, and to reprioritize policy targets and tools.

The following reflection, based on insights gathered from empirical research, is a first attempt to elaborate on how we might achieve this rethinking, and aims to contribute to the social economy debate in Europe and beyond.

A New Entrepreneurial Genre

A new entrepreneurial genre forged by the values of social entrepreneurship and fueled by technological opportunities is emerging, and it is well-poised to mend the economic and social wounds inflicted by both COVID-19 and the unexpected consequences of the early knowledge economy—an economy built around ideas and intellectual capital, and driven by diffused creativity, technology, and innovation.

We believe this genre, which we call social-tech entrepreneurship, can help inaugurate a new generation of place-based, innovation-driven development policies inspired by a more inclusive idea of growth, provided that industrial and innovation policies include it in their frame of reference.

This is partly because social innovation has undergone a complex transformation in recent years. It has seen a hybridization of social and commercial objectives and, as a direct consequence, new forms of management that support organizational missions that blend the two. Today, a more recent trend, reinforced by the pandemic, might push this transformation further: the idea that technologies—particularly those commoditized in the digital and software domains—offer a unique opportunity to solve societal challenges at scale.

Social-tech entrepreneurship differs from the work of high-tech companies in that, as researchers Geoffrey Desa and Suresh Kotha explain, it specifically aims to “develop and deploy technology-driven solutions to address social needs.” A social-tech entrepreneur also leverages technology not just to make parts of their operations more efficient, but to prompt a disruptive change in the way a specific social problem is addressed—and in a way that safeguards economic sustainability. In other words, they attempt to satisfy a social need through technological innovation in a financially sustainable manner. …(More)”.

The pandemic showed that big tech isn’t a public health savior


Nicole Wetsman at Verge: “…It seemed like Big Tech, with its analytic firepower and new focus on health, could help with these very real problems. “We saw all over the papers: Facebook is gonna save the world, and Google’s going to save the world,” says Katerini Storeng, a medical anthropologist who studies public-private partnerships in global public health at the University of Oslo. Politicians were eager to welcome Silicon Valley to the table and to discuss the best ways to manage the pandemic. “It was remarkable, and indicative of a blurring of the boundaries between the public domain and the private domain,” Storeng says.

Over a year later, many of the promised tech innovations never materialized. There are areas where tech companies have made significant contributions — like collecting mobility data that helped officials understand the effects of social distancing policies. But Google wasn’t actually building a nationwide testing website. The initiative that eventually appeared, a testing program for California run by Google’s sibling company Verily, was quietly phased out after it created more problems than it solved.

Now, after a year, we’re starting to get a clear picture of what worked, what didn’t, and what the relationship between Big Tech and public health might look like in the future.

Tech companies were interested in health before the pandemic, and COVID-19 accelerated those initiatives. There may be things that tech companies are better equipped to handle than traditional public health agencies and other public institutions, and the past year showed some of those strengths. But it also showed their weaknesses and underscored the risks of putting health responsibilities in the hands of private companies — which have goals outside of the public good.

When the pandemic started, Storeng was already studying how private companies participated in public health preparedness efforts. Over the past two decades, consumers and health officials have become more and more confident that tech hacks can be shortcuts to healthy communities. These digital hacks can take many forms and include everything from a smartphone app nudging people toward exercise to a data model analyzing how an illness spreads, she says.

“What they have in common, I think, is this hope and optimism that it’ll help bypass some more systemic, intrinsic problems,” Storeng says.

But healthcare and public health present hard problems. Parachuting in with a new approach that isn’t based on a detailed understanding of the existing system doesn’t always work. “I think we tend to believe in our culture that higher tech, private sector is necessarily better,” says Melissa McPheeters, co-director of the Center for Improving the Public’s Health through Informatics at Vanderbilt University. “Sometimes that’s true. And sometimes it’s not.”

McPheeters spent three years as the director of the Office of Informatics and Analytics at the Tennessee Department of Health. While in that role, she got calls from technology companies all the time, promising quick fixes to any data issues the department was facing. But they were more interested in delivering a product than a collaboration, she says. “It never began with, ‘Help me understand your problem.’”…(More)”

Collective data rights can stop big tech from obliterating privacy


Article by Martin Tisne: “…There are two parallel approaches that should be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill. It is one of the few modern laws focused on automated decision making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems. But it provides a sketch for what future laws could look like. It says that the source code behind such systems must be made available to the public. Anyone can request that code.

Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”

A growing number of governments hope to clone America’s DARPA


The Economist: “Using messenger RNA to make vaccines was an unproven idea. But if it worked, the technique would revolutionise medicine, not least by providing protection against infectious diseases and biological weapons. So in 2013 America’s Defence Advanced Research Projects Agency (DARPA) gambled. It awarded a small, new firm called Moderna $25m to develop the idea. Eight years, and more than 175m doses later, Moderna’s covid-19 vaccine sits on the list of innovations for which DARPA can claim at least partial credit, alongside weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet.

It is the agency that shaped the modern world, and this success has spurred imitators. In America there are ARPAs for homeland security, intelligence and energy, as well as the original defence one. President Joe Biden has asked Congress for $6.5bn to set up a health version, which will, the president vows, “end cancer as we know it”. His administration also has plans for another, to tackle climate change. Germany has recently established two such agencies: one civilian (the Federal Agency for Disruptive Innovation, or SPRIN-D) and another military (the Cybersecurity Innovation Agency). Japan’s version is called Moonshot R&D. In Britain, a bill for an Advanced Research and Invention Agency—often referred to as UK ARPA—is making its way through parliament….(More)”.

Investing in Data Saves Lives


Mark Lowcock and Raj Shah at Project Syndicate: “…Our experience of building a predictive model, and its use by public-health officials in these countries, showed that this approach could lead to better humanitarian outcomes. But it was also a reminder that significant data challenges, regarding both gaps and quality, limit the viability and accuracy of such models for the world’s most vulnerable countries. For example, data on the prevalence of cardiovascular diseases were 4-7 years old in several poorer countries, and not available at all for Sudan and South Sudan.

Globally, we are still missing about 50% of the data needed to respond effectively in countries experiencing humanitarian emergencies. OCHA and The Rockefeller Foundation are cooperating to provide early insight into crises, during and beyond the COVID-19 pandemic. But realizing the full potential of our approach depends on the contributions of others.

So, as governments, development banks, and major humanitarian and development agencies reflect on the first year of the pandemic response, as well as on discussions at the recent World Bank Spring Meetings, they must recognize the crucial role data will play in recovering from this crisis and preventing future ones. Filling gaps in critical data should be a top priority for all humanitarian and development actors.

Governments, humanitarian organizations, and regional development banks thus need to invest in data collection, data-sharing infrastructure, and the people who manage these processes. Likewise, these stakeholders must become more adept at responsibly sharing their data through open data platforms that maintain rigorous interoperability standards.

Where data are not available, the private sector should develop new sources of information through innovative methods such as using anonymized social-media data or call records to understand population movement patterns….(More)”.

We Need to Reimagine the Modern Think Tank


Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendation—need to change their approach as well.

Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people with those degrees, lower than the rates among white people (10.5 percent) or Asian/Pacific Islanders (17 percent). And in specific fields like economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.

Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.

Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve will make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it’s common understanding now that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”

What Robots Can — And Can’t — Do For the Old and Lonely


Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….

So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.
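A six-point scale of the kind the researchers used can be tallied very simply: each loneliness-indicating answer adds a point. The item wording and scoring below are illustrative stand-ins, not the validated instrument the New York study used.

```python
# Illustrative items only; validated scales (e.g. the De Jong Gierveld
# six-item scale) have their own wording and scoring rules.
ITEMS = [
    ("I experience a general sense of emptiness", True),
    ("There are plenty of people I can rely on when I have problems", False),
    ("There are many people I can trust completely", False),
    ("There are enough people I feel close to", False),
    ("I miss having people around", True),
    ("I often feel rejected", True),
]
# The second field marks whether AGREEING with the item signals loneliness.

def loneliness_score(responses):
    """responses: one boolean per item (True = agree).
    Returns a 0-6 count of loneliness-indicating answers."""
    return sum(
        agreed == agree_is_lonely
        for (_, agree_is_lonely), agreed in zip(ITEMS, responses)
    )

# A respondent who feels some emptiness but is otherwise well connected:
print(loneliness_score([True, True, True, True, False, False]))  # 1
```

Tracking this count per participant over a year is enough to produce the study's headline figure: the share of people whose score fell.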

In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.

How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
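The two ideas in play here, a risk score computed from vitals and "discrimination" between patient groups, can be made concrete with a toy sketch. The weights, inputs, and thresholds below are invented for illustration; Epic's actual model is proprietary, and the usual measure of discrimination (the area under the ROC curve, or AUC) is shown on fabricated data.

```python
import math

def deterioration_score(resp_rate, potassium):
    """Toy logistic risk score from two vitals.
    The coefficients are made up for illustration only."""
    z = 0.25 * (resp_rate - 16) + 1.2 * abs(potassium - 4.2) - 2.0
    return 1 / (1 + math.exp(-z))   # maps risk onto 0..1

def auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen deteriorating patient
    outscores a randomly chosen stable one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

stable = [deterioration_score(14, 4.1), deterioration_score(16, 4.3)]
deteriorating = [deterioration_score(30, 5.6), deterioration_score(26, 3.1)]
print(round(auc(deteriorating, stable), 2))  # 1.0 on this toy data
```

An AUC of 0.5 is coin-flipping and 1.0 is perfect separation; "moderately successful" in the Michigan study corresponds to something well short of the perfect score this four-patient toy produces.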