COVID vaccination studies: plan now to pool data, or be bogged down in confusion


Natalie Dean at Nature: “More and more COVID-19 vaccines are rolling out safely around the world; just last month, the United States authorized one produced by Johnson & Johnson. But there is still much to be learnt. How long does protection last? How much does it vary by age? How well do vaccines work against various circulating variants, and how well will they work against future ones? Do vaccinated people transmit less of the virus?

Answers to these questions will help regulators to set the best policies. Now is the time to make sure that those answers are as reliable as possible, and I worry that we are not laying the essential groundwork. Our current trajectory has us on course for confusion: we must plan ahead to pool data.

Many questions remain after vaccines are approved. Randomized trials generate the best evidence to answer targeted questions, such as how effective booster doses are. But for others, randomized trials will become too difficult as more and more people are vaccinated. To fill in our knowledge gaps, observational studies of the millions of vaccinated people worldwide will be essential….

Perhaps most importantly, we must coordinate now on plans to combine data. We must take measures to counter the long-standing siloed approach to research. Investigators should be discouraged from setting up single-site studies and encouraged to contribute to a larger effort. Funding agencies should favour studies with plans for collaborating or for sharing de-identified individual-level data.

Even when studies do not officially pool data, they should make their designs compatible with others. That means up-front discussions about standardization and data-quality thresholds. Ideally, this will lead to a minimum common set of variables to be collected, which the WHO has already hammered out for COVID-19 clinical outcomes. Categories include clinical severity (such as all infections, symptomatic disease or critical/fatal disease) and patient characteristics, such as comorbidities. This will help researchers to conduct meta-analyses of even narrow subgroups. Efforts are under way to develop reporting guidelines for test-negative studies, but these will be most successful when there is broad engagement.

There are many important questions that will be addressed only by observational studies, and data that can be combined are much more powerful than lone results. We need to plan these studies with as much care and intentionality as we would for randomized trials….(More)”.
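
To make the idea of a "minimum common set of variables" concrete, the sketch below shows one way a harmonized record for pooled vaccine-effectiveness studies might be structured. The field names and category values are illustrative assumptions that loosely echo the clinical-severity categories and patient characteristics mentioned above; they are not the WHO's actual specification.

```python
# Illustrative sketch of a "minimum common set of variables" that pooled
# COVID-19 vaccine-effectiveness studies could agree to collect.
# Field names and category values are assumptions for illustration only;
# they loosely follow the severity categories quoted above and are not
# the WHO's actual specification.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Severity(Enum):
    INFECTION = "any infection"
    SYMPTOMATIC = "symptomatic disease"
    CRITICAL_FATAL = "critical/fatal disease"


@dataclass
class HarmonizedRecord:
    study_site: str                  # de-identified site or study code
    age_group: str                   # e.g. "18-49", "50-64", "65+"
    sex: str
    vaccine_product: str
    doses_received: int
    days_since_last_dose: int
    outcome: Severity
    comorbidities: List[str] = field(default_factory=list)
    variant: Optional[str] = None    # sequenced variant, if available


# One de-identified record that any participating site could contribute to a
# pooled analysis or to a meta-analysis of a narrow subgroup.
record = HarmonizedRecord(
    study_site="site-042",
    age_group="65+",
    sex="F",
    vaccine_product="mRNA-A",        # hypothetical product label
    doses_received=2,
    days_since_last_dose=30,
    outcome=Severity.SYMPTOMATIC,
    comorbidities=["diabetes"],
)
print(record)
```

With every participating site emitting records in a shared shape like this, subgroup meta-analyses (for example, by age group or comorbidity) become a matter of filtering rather than of reconciling incompatible codebooks.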

Covid-19 Data Cards: Building a Data Taxonomy for Pandemic Preparedness


Open Data Charter: “…We want to initiate the repair of the public’s trust through the building of a Pandemic Data Taxonomy with you — a network of data users and practitioners.

Building on feedback we got from our call to identify high value Open COVID-19 Data, we have structured a set of data cards, including key data types related to health issues, legal and socioeconomic impacts and fiscal transparency, within which are well-defined data models and dictionaries. Our target audience for this data taxonomy are governments. We are hoping this framework is a starting point towards building greater consistency around pandemic data release, and flag areas for better cooperation and standardisation within and between our governments and communities around the world.

We hope that together, with the input and feedback from a diverse group of data users and practitioners, we can have, at the end of this public consultation and open call, a document produced by a global collective, one that we can present to governments and public servants for their buy-in to reform our data infrastructures to be better prepared for future outbreaks.

In order to analyze the variables necessary to manage and investigate the different aspects of a pandemic, as exemplified by COVID-19, and based on a review of the type of data being released by 25 countries, we categorised the data into four major categories:

  • General — Contains the general concepts common to all the files, such as the METADATA, the global sections on RISKS and their MITIGATION, and the general STANDARDS required for the use, management and publication of the data. It also links to a spreadsheet where the precision, update frequency, publication methods and specific standards of each data set are defined in more detail.
  • Health Data — Describes how to manage and potentially publish the follow-up information on COVID-19 cases, considering data with temporal, geographical and demographic distribution along with the details for the study of the evolution of the disease.
  • Legal and Socioeconomic Impact Data — Contains the regulations, actions, measures, restrictions, protocols, documents and all the information regarding quarantine and the socio-economic impact as well as medical, labor or economic regulations for each data publisher.
  • Fiscal Data — Contains all budget allocations in accordance with the overall approved pandemic budget, as well as the adjustments implemented. It also identifies specific allocations for the prevention, detection, control, treatment and containment of the virus, as well as possible budget reallocations from other sectors or items prompted by the actions mentioned above or by the resulting economic constraints. It is based on the recommendations made by GIFT and Open Contracting….(More)”
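
To illustrate how the four categories above might translate into a concrete, machine-readable data card, here is a minimal sketch for a hypothetical Health Data card. The structure, field names and values are assumptions for illustration; they are not the Open Data Charter's actual data-card format.

```python
# Illustrative sketch of a pandemic "data card" expressed as a machine-readable
# record. The structure, field names and values are assumptions for illustration;
# they are not the Open Data Charter's actual data-card format.
health_cases_card = {
    "category": "Health Data",
    "dataset": "confirmed_cases",
    "description": "Follow-up information on COVID-19 cases with temporal, "
                   "geographical and demographic distribution.",
    "metadata": {
        "publisher": "national health ministry",
        "update_frequency": "daily",
        "geographic_granularity": "municipality",
        "format": "CSV",
        "standards": ["ISO 8601 dates", "ISO 3166-2 region codes"],
    },
    "data_dictionary": {
        "report_date": "date the case was reported (YYYY-MM-DD)",
        "region_code": "ISO 3166-2 code of the reporting region",
        "age_group": "ten-year age band of the case",
        "sex": "sex of the case",
        "outcome": "active / recovered / deceased",
    },
    "risks": ["re-identification of individuals in sparsely populated regions"],
    "mitigation": ["suppress counts below a minimum cell size"],
}

# A publisher could check each release against the agreed dictionary, e.g.:
for column, definition in health_cases_card["data_dictionary"].items():
    print(f"{column}: {definition}")
```

Expressing each card in a structured form like this would let publishers validate releases against the agreed data dictionary and keep publication consistent across agencies and over time.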

E-mail Is Making Us Miserable


Cal Newport at The New Yorker: “In early 2017, a French labor law went into effect that attempted to preserve the so-called right to disconnect. Companies with fifty or more employees were required to negotiate specific policies about the use of e-mail after work hours, with the goal of reducing the time that workers spent in their in-boxes during the evening or over the weekend. Myriam El Khomri, the minister of labor at the time, justified the new law, in part, as a necessary step to reduce burnout. The law is unwieldy, but it points toward a universal problem, one that’s become harder to avoid during the recent shift toward a more frenetic and improvisational approach to work: e-mail is making us miserable.

To study the effects of e-mail, a team led by researchers from the University of California, Irvine, hooked up forty office workers to wireless heart-rate monitors for around twelve days. They recorded the subjects’ heart-rate variability, a common technique for measuring mental stress. They also monitored the employees’ computer use, which allowed them to correlate e-mail checks with stress levels. What they found would not surprise the French. “The longer one spends on email in [a given] hour the higher is one’s stress for that hour,” the authors noted. In another study, researchers placed thermal cameras below each subject’s computer monitor, allowing them to measure the tell-tale “heat blooms” on a person’s face that indicate psychological distress. They discovered that batching in-box checks—a commonly suggested “solution” to improving one’s experience with e-mail—is not necessarily a panacea. For those people who scored highly in the trait of neuroticism, batching e-mails actually made them more stressed, perhaps because of worry about all of the urgent messages they were ignoring. The researchers also found that people answered e-mails more quickly when under stress but with less care—a text-analysis program called Linguistic Inquiry and Word Count revealed that these anxious e-mails were more likely to contain words that expressed anger. “While email use certainly saves people time and effort in communicating, it also comes at a cost,” the authors of the two studies concluded. Their recommendation? To “suggest that organizations make a concerted effort to cut down on email traffic.”

Other researchers have found similar connections between e-mail and unhappiness. A study, published in 2019, looked at long-term trends in the health of a group of nearly five thousand Swedish workers. They found that repeated exposure to “high information and communication technology demands” (translation: a need to be constantly connected) was associated with “suboptimal” health outcomes. This trend persisted even after they adjusted the statistics for potential complicating factors such as age, sex, socioeconomic status, health behavior, body-mass index, job strain, and social support. Of course, we don’t really need data to capture something that so many of us feel intuitively. I recently surveyed the readers of my blog about e-mail. “It’s slow and very frustrating. . . . I often feel like email is impersonal and a waste of time,” one respondent said. “I’m frazzled—just keeping up,” another admitted. Some went further. “I feel an almost uncontrollable need to stop what I’m doing to check email,” one person reported. “It makes me very depressed, anxious and frustrated.”…(More)”

Dialogues about Data: Building trust and unlocking the value of citizens’ health and care data


Nesta Report by Sinead Mac Manus and Alice Clay: “The last decade has seen exponential growth in the amount of data generated, collected and analysed to provide insights across all aspects of industry. Healthcare is no exception. We are increasingly seeing the value of using health and care data to prevent ill health, improve health outcomes for people and provide new insights into disease and treatments.

Bringing together common themes across the existing research, this report sets out two interlinked challenges to building a data-driven health and care system. This is interspersed with best practice examples of the potential of data to improve health and care, as well as cautionary tales of what can happen when this is done badly.

The first challenge we explore is how to increase citizens’ trust and transparency in data sharing. The second challenge is how to unlock the value of health and care data.

We are excited about the role for participatory futures – a set of techniques that systematically engage people to imagine and create more sustainable, inclusive futures – in helping governments and other organisations work with citizens to engage them in debate about their health and care data to build a data-driven health and care system for the benefit of all….(More)”.

How can stakeholder engagement and mini-publics better inform the use of data for pandemic response?


Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst at the OECD Participo Blog Series:

“What does the public expect from data-driven responses to the COVID-19 pandemic? And under what conditions?” These are the motivating questions behind The Data Assembly, a recent initiative by The GovLab at New York University Tandon School of Engineering — an action research center that aims to help institutions work more openly, collaboratively, effectively, and legitimately.

Launched with support from The Henry Luce Foundation, The Data Assembly solicited diverse, actionable public input on data re-use for crisis response in the United States. In particular, we sought to engage the public on how to facilitate, if deemed acceptable, the re-use of data collected for other purposes to inform the COVID-19 response. One additional objective was to inform the broader emergence of data collaboration — through formal and ad hoc arrangements between the public sector, civil society, and those in the private sector — by evaluating public expectations and concerns regarding the current institutional, contractual, and technical structures and instruments that may underpin these partnerships.

The Data Assembly used a new methodology that re-imagines how organisations can engage with society to better understand local expectations regarding data re-use and related issues. This work goes beyond soliciting input from just the “usual suspects”. Instead, data assemblies provide a forum for a much more diverse set of participants to share their insights and voice their concerns.

This article is informed by our experience piloting The Data Assembly in New York City in summer 2020. It provides an overview of The Data Assembly’s methodology and outcomes and describes major elements of the effort to support organisations working on similar issues in other cities, regions, and countries….(More)”.

Surveillance and the ‘New Normal’ of Covid-19: Public Health, Data, and Justice


Report by the Social Science Research Council: “The Covid-19 pandemic has dramatically altered the way nations around the world use technology in public health. As the virus spread globally, some nations responded by closing businesses, shuttering schools, limiting gatherings, and banning travel. Many also deployed varied technological tools and systems to track virus exposure, monitor outbreaks, and aggregate hospital data.

Some regions are still grappling with crisis-level conditions, and others are struggling to navigate the complexities of vaccine rollouts. Amid the upheavals, communities are adjusting to a new normal, in which mask-wearing has become as commonplace as seatbelt use and digital temperature checks are a routine part of entering public buildings.

Even as the frenzy of emergency responses begins to subside, the emergent forms of surveillance that have accompanied this new normal persist. As a consequence, societies face new questions about how to manage the monitoring systems created in response to the virus, what processes are required in order to immunize populations, and what new norms the systems have generated. How they answer these questions will have long-term impacts on civil liberties, governance, and the role of technology in society. The systems implemented amid the public health emergency could jeopardize individual freedoms and exacerbate harms to already vulnerable groups, particularly if they are adapted to operate as permanent social management tools. At the same time, growing public awareness about the impact of public health technologies could also provide a catalyst for strengthening democratic engagement and demonstrating the urgency of improving governance systems. As the world transitions in and out of pandemic crisis modes, there is an opportunity to think broadly about strengthening public health systems, policymaking, and the underlying structure of our social compacts.

The stakes are high: an enduring lesson from history is that moments of crisis often recast the roles of governments and the rights of individuals. In this moment of flux, the Social Science Research Council calls on policymakers, technologists, data scientists, health experts, academics, activists, and communities around the world to assess the implications of this transformation and seize opportunities for positive social change. The Council seeks to facilitate a shift from reactive modes of crisis response to more strategic forms of deliberation among varied stakeholders. As such, it has convened discussions and directed research in order to better understand the intersection of governance and technologically enabled surveillance in conditions of public health emergencies. Through these activities, the Council aims to provide analysis that can help foster societies that are more resilient, democratic, and inclusive and can, therefore, better withstand future crises.

With these goals in mind, the Council convened a cross-disciplinary, multinational group of experts in the summer of 2020 to survey the landscape of human rights and social justice with regard to technologically driven public health practices. The resulting group—the Public Health, Surveillance, and Human Rights (PHSHR) Network—raised a broad range of questions about governance, social inequalities, data protection, medical systems, and community norms: What rules should govern the sharing of personal health data? How should the efficacy of public health interventions be weighed against the emergence and expansion of new forms of surveillance? How much control should multinational corporations have in designing and implementing nations’ public health technology systems? These are among the questions that pushed members to think beyond traditional professional, geographic, and intellectual boundaries….(More)”.

My Data, My Choice? – German Patient Organizations’ Attitudes towards Big Data-Driven Approaches in Personalized Medicine. An Empirical-Ethical Study


Paper by Carolin Martina Rauter, Sabine Wöhlke & Silke Schicktanz: “Personalized medicine (PM) operates with biological data to optimize therapy or prevention and to achieve cost reduction. Associated data may consist of large variations of informational subtypes, e.g. genetic characteristics and their epigenetic modifications, biomarkers or even individual lifestyle factors. Present innovations in the field of information technology have already enabled the processing of increasingly large amounts of such data (‘volume’) from various sources (‘variety’) and of varying quality in terms of data accuracy (‘veracity’), facilitating the generation and analysis of messy data sets within a short and highly efficient time period (‘velocity’) to provide insights into previously unknown connections and correlations between different items (‘value’). As such developments are characteristic of Big Data approaches, Big Data itself has become an important catchphrase that is closely linked to the emerging foundations and approaches of PM. However, as ethical concerns have already been pointed out by experts in the debate, moral concerns raised by stakeholders such as patient organizations (POs) need to be reflected in this context as well. We used an empirical-ethical approach including a website analysis and 27 telephone interviews to gain in-depth insight into German POs’ perspectives on PM and Big Data. Our results show that not all POs are stakeholders in the same way. Comparing the perspectives and political engagement of the minority of POs currently actively involved in PM and Big Data-driven research led to four stakeholder sub-classifications: ‘mediators’ support research projects by facilitating researchers’ access to the patient community while selecting which projects they prefer to support, whereas ‘cooperators’ tend to contribute more directly to research projects by providing and implementing patient perspectives. ‘Financers’ provide financial resources. ‘Independents’ keep control over their collected samples and associated patient-related information, with a strong interest in making autonomous decisions about their scientific use. A more detailed terminology for the involvement of POs as stakeholders makes it easier to address their aims and goals. Based on our results, the ‘independents’ subgroup is a promising candidate for future collaborations in scientific research. Additionally, we identified gaps in POs’ knowledge about PM and Big Data. Based on these findings, approaches can be developed to increase data and statistical literacy. This way, the full potential of stakeholder involvement of POs can be made accessible in discourses around PM and Big Data….(More)”.

Public-Private Partnerships: Compound and Data Sharing in Drug Discovery and Development


Paper by Andrew M. Davis et al: “Collaborative efforts between public and private entities such as academic institutions, governments, and pharmaceutical companies form an integral part of scientific research, and notable instances of such initiatives have been created within the life science community. Several examples of alliances exist with the broad goal of collaborating toward scientific advancement and improved public welfare. Such collaborations can be essential in catalyzing breaking areas of science within high-risk or global public health strategies that may have otherwise not progressed. A common term used to describe these alliances is public-private partnership (PPP). This review discusses different aspects of such partnerships in drug discovery/development and provides example applications as well as successful case studies. Specific areas that are covered include PPPs for sharing compounds at various phases of the drug discovery process—from compound collections for hit identification to sharing clinical candidates. Instances of PPPs to support better data integration and build better machine learning models are also discussed. The review also provides examples of PPPs that address the gap in knowledge or resources among involved parties and advance drug discovery, especially in disease areas with unfulfilled and/or social needs, like neurological disorders, cancer, and neglected and rare diseases….(More)”.

Time to evaluate COVID-19 contact-tracing apps


Letter to the Editor of Nature by Vittoria Colizza et al: “Digital contact tracing is a public-health intervention. Real-time monitoring and evaluation of the effectiveness of app-based contact tracing is key for improvement and public trust.

SARS-CoV-2 is likely to become endemic in many parts of the world, and there is still no certainty about how quickly vaccination will become available or how long its protection will last. For the foreseeable future, most countries will rely on a combination of various measures, including vaccination, social distancing, mask wearing and contact tracing.

Digital contact tracing via smartphone apps was established as a new public-health intervention in many countries in 2020. Most of these apps are now at a stage at which they need to be evaluated as public-health tools. We present here five key epidemiological and public-health requirements for COVID-19 contact-tracing apps and their evaluation.

1. Integration with local health policy. App notifications should be consistent with local health policies. The app should be integrated into access to testing, medical care and advice on isolation, and should work in conjunction with conventional contact tracing where available [1]. Apps should be interoperable across countries, as envisaged by the European Commission’s eHealth Network.

2. High user uptake and adherence. Contact-tracing apps can reduce transmission at low levels of uptake, including for those without smartphones [2]. However, large numbers of users increase effectiveness [3,4]. An effective communication strategy that explains the apps’ role and addresses privacy concerns is essential for increasing adoption [5]. Design, implementation and deployment should make the apps accessible to harder-to-reach communities. Adherence to quarantine should be encouraged and supported.

3. Quarantine infectious people as accurately as possible. The purpose of contact tracing is to quarantine as many potentially infectious people as possible, but to minimize the time spent in quarantine by uninfected people. To achieve optimal performance, apps’ algorithms must be ‘tunable’, to adjust to the epidemic as it evolves [6].

4. Rapid notification. The time between the onset of symptoms in an index case and the quarantine of their contacts is of key importance in COVID-19 contact tracing [7,8]. Where a design feature introduces a delay, it needs to be outweighed by gains in, for example, specificity, uptake or adherence. If the delays exceed the period during which most contacts transmit the disease, the app will fail to reduce transmission.

5. Ability to evaluate effectiveness transparently. The public must be provided with evidence that notifications are based on the best available data. The tracing algorithm should therefore be transparent, auditable, under oversight and subject to review. Aggregated data (not linked to individual people) are essential for evaluation of and improvement in the performance of the app. Data on local uptake at a sufficiently coarse-grained spatial resolution are equally key. As apps in Europe do not ‘geolocate’ people, this additional information can be provided by the user or through surveys. Real-time monitoring should be performed whenever possible….(More)”.
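
Point 3 of the letter calls for algorithms that are ‘tunable’ as the epidemic evolves, and point 4 stresses rapid notification. The sketch below illustrates what a tunable notification rule could look like: each exposure is scored from duration and proximity, and the notification threshold is the dial that health authorities adjust. The weights, threshold and field names are invented for illustration and do not correspond to the algorithm of any deployed contact-tracing app.

```python
# Minimal sketch of a "tunable" exposure-notification rule (point 3 above).
# The weights, threshold and field names are invented for illustration; they do
# not correspond to the algorithm of any deployed contact-tracing app.
from dataclasses import dataclass


@dataclass
class Exposure:
    duration_min: float            # total time near the index case, in minutes
    mean_attenuation_db: float     # Bluetooth attenuation as a proximity proxy
    days_from_symptom_onset: int   # timing of the contact relative to onset


def risk_score(e: Exposure) -> float:
    """Longer, closer contacts near the index case's symptom onset score higher."""
    proximity = max(0.0, 1.0 - e.mean_attenuation_db / 80.0)
    infectiousness = 1.0 if abs(e.days_from_symptom_onset) <= 2 else 0.5
    return e.duration_min * proximity * infectiousness


def should_notify(e: Exposure, threshold: float) -> bool:
    """The threshold is the tunable dial: lower it when transmission is high
    (more sensitive, more quarantine), raise it when transmission is low."""
    return risk_score(e) >= threshold


contact = Exposure(duration_min=20, mean_attenuation_db=55, days_from_symptom_onset=1)
print(risk_score(contact), should_notify(contact, threshold=5.0))
```

Lowering the threshold quarantines more potentially infectious contacts at the cost of more unnecessary quarantine; raising it does the reverse, which is exactly the trade-off the letter asks authorities to manage and evaluate explicitly.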

N.Y.’s Vaccine Websites Weren’t Working. He Built a New One for $50.


Sharon Otterman at New York Times: “Huge Ma, a 31-year-old software engineer for Airbnb, was stunned when he tried to make a coronavirus vaccine appointment for his mother in early January and saw that there were dozens of websites to check, each with its own sign-up protocol. The city and state appointment systems were completely distinct.

“There has to be a better way,” he said he remembered thinking.

So, he developed one. In less than two weeks, he launched TurboVax, a free website that compiles availability from the three main city and state New York vaccine systems and sends the information in real time to Twitter. It cost Mr. Ma less than $50 to build, yet it offers an easier way to spot appointments than the city and state’s official systems do.

“It’s sort of become a challenge to myself, to prove what one person with time and a little motivation can do,” he said last week. “This wasn’t a priority for governments, which was unfortunate. But everyone has a role to play in the pandemic, and I’m just doing the very little that I can to make it a little bit easier.”

Supply shortages and problems with access to vaccination appointments have been some of the barriers to the equitable distribution of the vaccine in New York City and across the United States, officials have acknowledged….(More)”.
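
The article describes TurboVax as a thin aggregation layer: it polls the separate city and state scheduling systems, normalizes what they return, and pushes newly available appointments to Twitter in real time. The sketch below shows that general pattern; the endpoint URLs, response fields and the posting stub are hypothetical placeholders, not TurboVax's actual data sources or code.

```python
# Minimal sketch of the aggregation pattern the article describes: poll several
# scheduling systems, normalize the results, and announce newly seen availability.
# The URLs, response fields and the post_update stub are hypothetical
# placeholders, not TurboVax's actual data sources or code.
import time

import requests

SOURCES = {
    "city": "https://example.org/city/appointments.json",          # placeholder URL
    "state": "https://example.org/state/appointments.json",        # placeholder URL
    "hospitals": "https://example.org/hospitals/appointments.json" # placeholder URL
}

announced = set()  # (source, site, earliest_slot) tuples already posted


def post_update(message: str) -> None:
    """Stand-in for posting the update to Twitter via its API."""
    print(message)


def poll_once() -> None:
    for source, url in SOURCES.items():
        try:
            payload = requests.get(url, timeout=10).json()
        except (requests.RequestException, ValueError):
            continue  # skip a source that is down or malformed instead of crashing
        # Assume each source returns {"locations": [{"site": ..., "earliest_slot": ...}]}.
        for entry in payload.get("locations", []):
            key = (source, entry["site"], entry["earliest_slot"])
            if key not in announced:
                announced.add(key)
                post_update(f"[{source}] {entry['site']}: slots from {entry['earliest_slot']}")


if __name__ == "__main__":
    while True:        # poll each system once a minute and post anything new
        poll_once()
        time.sleep(60)
```

The design choice worth noting is that the heavy lifting stays with the existing scheduling systems; the aggregator only normalizes and deduplicates their output, which is why one motivated person could build and run such a tool for almost nothing.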