The Rise of the Pandemic Dashboard


Article by Marie Patino: “…All of these dashboards were launched very early in the pandemic,” said Damir Ivankovic, a PhD student at the University of Amsterdam. “Some of them were developed literally overnight, or over three sleepless nights in certain countries.” With PhD researcher Erica Barbazza, Ivankovic has been leading a set of studies about Covid-19 dashboards with a network of researchers. For a forthcoming paper, the pair have talked to more than 30 government dashboard teams across Europe and Asia to better understand their dynamics and the political decisions at stake in their creation. 

The dashboard craze can be traced back to Jan. 22, 2020, when graduate student Ensheng Dong and Lauren Gardner, co-director of Johns Hopkins University’s Center for Systems Science and Engineering, launched the JHU interactive Covid dashboard. It would quickly achieve international fame, and screenshots of it started popping up in newspapers and on TV. The dashboard now racks up billions of daily hits. Soon after, mapping software company ESRI, whose software was used to build the tool, spun off a variety of Covid resources and example dashboards, easy to customize and publish for those with a license. ESRI has provided about 5,000 organizations with a free license since the beginning of Covid.

That’s generated unprecedented traffic: The most-viewed public dashboards made using ESRI are all Covid-related, according to the company. The Johns Hopkins dash is number one. It made its data feed available for free, and now multiple other dashboards built by governments and even news outlets, including Bloomberg, rely on Johns Hopkins to update their numbers. 
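A minimal sketch of how a downstream dashboard might poll that free feed, assuming the public JHU CSSE GitHub repository layout used during the pandemic (the file and column names below are the repository’s; the example itself is only an illustration):

```python
# Pull the latest global confirmed-case counts from the public
# JHU CSSE repository, the feed many downstream dashboards rely on.
import pandas as pd

JHU_CSV = (
    "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
    "csse_covid_19_data/csse_covid_19_time_series/"
    "time_series_covid19_confirmed_global.csv"
)

df = pd.read_csv(JHU_CSV)
latest = df.columns[-1]  # date columns are appended daily, e.g. "9/1/21"
totals = (
    df.groupby("Country/Region")[latest]
    .sum()
    .sort_values(ascending=False)
)
print(totals.head(10))  # ten highest cumulative counts in the feed
```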

Public Health England’s dashboard is designed and hand-coded from scratch. But because of the pandemic’s urgency, many government agencies that lacked expertise in data analysis and visualization turned to off-the-shelf business analytics software to build their dashboards. Among those are ESRI’s tools, as well as Tableau and Microsoft Power BI.

The pros? They provide ready-to-use templates and modules, don’t require programming knowledge, are fast and easy to publish and provide users with a technical lifeline. The cons? They offer little design flexibility, can look clunky and cluttered, provide little wiggle room in terms of explaining the data and are rarely mobile-friendly. Also, many don’t provide multi-language support or accessibility features, and some don’t let users access the raw data that powers the tool. 

Dashboards everywhere
A compilation of government dashboards….(More)”.

EU Health data centre and a common data strategy for public health


Report by the European Parliament Think Tank: “With regard to health data and their availability and comparability, the Covid-19 pandemic revealed that the EU has no clear health data architecture. The lack of harmonisation in these practices and the absence of an EU-level centre for data analysis and use to support a better response to public health crises is the focus of this study. Through extensive desk review, interviews with key actors, and enquiry into experiences from outside the EU/EEA area, this study highlights that the EU must have the capacity to use data very effectively in order to make data-supported public health policy proposals and inform political decisions. The possible functions and characteristics of an EU health data centre are outlined. The centre can only fulfil its mandate if it has the power and competence to influence Member States’ public-health-relevant data ecosystems and institutionally link with their national-level actors. The institutional structure, its possible activities and in particular its use of advanced technologies such as AI are examined in detail….(More)”.

Public health and expert failure


Paper by Roger Koppl: “In a modern democracy, a public health system includes mechanisms for the provision of expert scientific advice to elected officials. The decisions of elected officials generally will be degraded by expert failure, that is, the provision of bad advice. The theory of expert failure suggests that competition among experts generally is the best safeguard against expert failure. Monopoly power of experts increases the chance of expert failure. The risk of expert failure also is greater when scientific advice is provided by only one or a few disciplines. A national government can simulate a competitive market for expert advice by structuring the scientific advice it receives to ensure the production of multiple perspectives from multiple disciplines. I apply these general principles to the United Kingdom’s Scientific Advisory Group for Emergencies (SAGE)….(More)”.

Contemplating the COVID crisis: what kind of inquiry do we need to learn the right lessons?


Essay by Geoff Mulgan: “Boris Johnson has announced a UK inquiry into COVID-19 to start in 2022; a parallel one is being planned in Scotland, and many more will emerge all over the world. But how should such inquiries be designed and run? What kind of inquiry can do most to mitigate or address the harms caused by the pandemic?

We’re beginning to look at this question at IPPO (the International Public Policy Observatory), including a global scan with our partners, INGSA and the Blavatnik School of Government, on how inquiries are being developed around the world, plus engagement with governments and parliaments across the UK.

It’s highly likely that the most traditional models of inquiries will be adopted – just because that’s what people at the top are used to, or because they look politically expedient. But we think it would be good to look at the options and to encourage some creativity.

The pandemic has prompted extraordinary innovation; there is no reason why inquiries should lack it. Moreover, the pandemic affected every sector of life – and was far more ‘systemic’ than the kinds of issue or event addressed by typical inquiries in the past. That should be reflected in how lessons are learned.

So here are some initial thoughts on what the defaults look like, why they are likely to be inadequate, and what some alternatives might be. This article proposes the idea of a ‘whole of society’ inquiry model which has a distributed rather than centralised structure, which focuses on learning more than blame, and which can connect the thousands of organisations that have had to make so many difficult decisions throughout the crisis, as well as the lived experiences of the public and frontline staff. We hope that it will prompt responses, including better ideas about what kinds of inquiry will serve us best…

There are many different options for inquiries, and this is a good moment to consider them. They range from ‘truth and reconciliation’ inquiries to no-fault compensation processes to the ways industries such as airlines deal with crashes, through to academic analyses of events like the 2007/08 financial crash. They can involve representative or random samples of the public (e.g. citizens’ assemblies and juries) or just experts and officials…

The idea of a distributed inquiry is not entirely new. Colombia, for example, attempted something along these lines as part of its peace process. Many health systems use methods such as ‘collaboratives’ to organise accelerated learning. Doubtless there is much to be learned from these and other examples. For the UK in particular, it is vital there are contextually appropriate designs for the four nations as well as individual cities and regions.

As already indicated, a key is to combine sensible inquiries focused on particular sectors (e.g. what did universities do, what worked…) and to make connections between them. As IPPO’s work on COVID inequalities has highlighted, the patterns are very complex but involve a huge amount of harm – captured in our ‘inequalities matrix’, below.

So, while the inquiries need to dig deep on multiple fronts and to look more like a matrix than a single question, what might connect them all would be a commitment to some shared elements:

  • Facts: In each case, a precondition for learning is establishing the facts, as well as the evidence on what did or didn’t work well. This is a process closer to what evidence intermediary organisations – such as the UK’s What Works Network – do than a judicial process designed for binary judgments (guilty/not guilty). This would be helped by some systematic curation and organisation of the evidence in easily accessible forms, of the kind that IPPO is doing….(More)”

Afyanet


About: “AfyaNet is a voluntary, non-profit network of National Health Institutes and Research Centers seeking to leverage crowdsourced health data for disease surveillance and forecasting. Participation in AfyaNet is free for countries.

We aim to use technology and digital solutions to radically enhance how traditional disease surveillance systems function and the ways we can model epidemics.

Our vision is to create a common framework to collect standardized real-time data from the general population, allowing countries to leapfrog existing hurdles in disease surveillance and information sharing.

Our solution is an Early Warning System for Health based on participatory data gathering. A common, real-time framework for disease data collection will help countries identify and forecast outbreaks faster and more effectively.

Crowdsourced data is gathered directly from citizens, then aggregated, anonymized, and processed in a cloud-based data lake. Our high-performance computing architecture analyzes the data and creates valuable disease spread models, which in turn provide alerts and notifications to participating countries and help public health authorities make evidence-based decisions….(More)”
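To make the pipeline concrete, here is a minimal sketch of how such a participatory early-warning loop might look in code. Every name, field and threshold below is an illustrative assumption for demonstration, not AfyaNet’s actual system or API:

```python
# Hypothetical participatory-surveillance pipeline: citizen symptom reports
# are aggregated per region, stripped of identifiers, and compared against
# a historical baseline to raise early-warning alerts.
from collections import defaultdict
from statistics import mean

def aggregate_reports(reports):
    """Count symptomatic reports per region, keeping no citizen identifiers."""
    counts = defaultdict(int)
    for report in reports:
        if report["symptomatic"]:          # only the boolean flag is kept;
            counts[report["region"]] += 1  # names/IDs are never stored
    return dict(counts)

def check_alerts(current, baseline, factor=2.0):
    """Flag regions whose counts exceed `factor` times their historical mean."""
    alerts = []
    for region, count in current.items():
        expected = mean(baseline.get(region, [0]))
        if expected and count > factor * expected:
            alerts.append((region, count, expected))
    return alerts

# Illustrative weekly batch of citizen reports
reports = [
    {"region": "Nairobi", "symptomatic": True},
    {"region": "Nairobi", "symptomatic": True},
    {"region": "Kisumu", "symptomatic": False},
]
weekly = aggregate_reports(reports)
print(check_alerts(weekly, baseline={"Nairobi": [0.5, 1.0]}))
```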

Data in Crisis — Rethinking Disaster Preparedness in the United States


Paper by Satchit Balsari, Mathew V. Kiang, and Caroline O. Buckee: “…In recent years, large-scale streams of digital data on medical needs, population vulnerabilities, physical and medical infrastructure, human mobility, and environmental conditions have become available in near-real time. Sophisticated analytic methods for combining them meaningfully are being developed and are rapidly evolving. However, the translation of these data and methods into improved disaster response faces substantial challenges. The data exist but are not readily accessible to hospitals and response agencies. The analytic pipelines to rapidly translate them into policy-relevant insights are lacking, and there is no clear designation of responsibility or mandate to integrate them into disaster-mitigation or disaster-response strategies. Building these integrated translational pipelines that use data rapidly and effectively to address the health effects of natural disasters will require substantial investments, and these investments will, in turn, rely on clear evidence of which approaches actually improve outcomes. Public health institutions face some ongoing barriers to achieving this goal, but promising solutions are available….(More)”

WHO, Germany open Hub for Pandemic and Epidemic Intelligence in Berlin


Press Release: “To better prepare and protect the world from global disease threats, H.E. German Federal Chancellor Dr Angela Merkel and Dr Tedros Adhanom Ghebreyesus, World Health Organization Director-General, will today inaugurate the new WHO Hub for Pandemic and Epidemic Intelligence, based in Berlin. 

“The world needs to be able to detect new events with pandemic potential and to monitor disease control measures on a real-time basis to create effective pandemic and epidemic risk management,” said Dr Tedros. “This Hub will be key to that effort, leveraging innovations in data science for public health surveillance and response, and creating systems whereby we can share and expand expertise in this area globally.” 

The WHO Hub, which is receiving an initial investment of US$ 100 million from the Federal Republic of Germany, will harness broad and diverse partnerships across many professional disciplines, and the latest technology, to link the data, tools and communities of practice so that actionable data and intelligence are shared for the common good.

The WHO Hub is part of WHO’s Health Emergencies Programme and will be a new collaboration of countries and partners worldwide, driving innovations to increase availability of key data; develop state-of-the-art analytic tools and predictive models for risk analysis; and link communities of practice around the world. Critically, the WHO Hub will support the work of public health experts and policy-makers in all countries with the tools needed to forecast, detect and assess epidemic and pandemic risks so they can take rapid decisions to prevent and respond to future public health emergencies.

“Despite decades of investment, COVID-19 has revealed the great gaps that exist in the world’s ability to forecast, detect, assess and respond to outbreaks that threaten people worldwide,” said Dr Michael Ryan, Executive Director of WHO’s Health Emergencies Programme. “The WHO Hub for Pandemic and Epidemic Intelligence is designed to develop the data access, analytic tools and communities of practice to fill these very gaps, promote collaboration and sharing, and protect the world from such crises in the future.” 

The Hub will work to:

  • Enhance methods for access to multiple data sources vital to generating signals and insights on disease emergence, evolution and impact;
  • Develop state-of-the-art tools to process, analyze and model data for detection, assessment and response;
  • Provide WHO, our Member States, and partners with these tools to underpin better, faster decisions on how to address outbreak signals and events; and
  • Connect and catalyze institutions and networks developing disease outbreak solutions for the present and future.

Dr Chikwe Ihekweazu, currently Director-General of the Nigeria Centre for Disease Control, has been appointed to lead the WHO Hub….(More)” 

The Open-Source Movement Comes to Medical Datasets


Blog by Edmund L. Andrews: “In a move to democratize research on artificial intelligence and medicine, Stanford’s Center for Artificial Intelligence in Medicine and Imaging (AIMI) is dramatically expanding what is already the world’s largest free repository of AI-ready annotated medical imaging datasets.

Artificial intelligence has become an increasingly pervasive tool for interpreting medical images, from detecting tumors in mammograms and brain scans to analyzing ultrasound videos of a person’s pumping heart.

Many AI-powered devices now rival the accuracy of human doctors. Beyond simply spotting a likely tumor or bone fracture, some systems predict the course of a patient’s illness and make recommendations.

But AI tools have to be trained on expensive datasets of images that have been meticulously annotated by human experts. Because those datasets can cost millions of dollars to acquire or create, much of the research is being funded by big corporations that don’t necessarily share their data with the public.

“What drives this technology, whether you’re a surgeon or an obstetrician, is data,” says Matthew Lungren, co-director of AIMI and an assistant professor of radiology at Stanford. “We want to double down on the idea that medical data is a public good, and that it should be open to the talents of researchers anywhere in the world.”

Launched two years ago, AIMI has already acquired annotated datasets for more than 1 million images, many of them from the Stanford University Medical Center. Researchers can download those datasets at no cost and use them to train AI models that recommend certain kinds of action.
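As a rough illustration of that workflow, the sketch below fine-tunes an off-the-shelf image classifier on a locally downloaded, annotated dataset. The folder layout and label scheme are assumptions made for the example, not AIMI’s actual data format:

```python
# Illustrative fine-tuning of a pretrained classifier on a downloaded,
# annotated imaging dataset (assumed layout: data/train/<label>/*.png).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

# Replace the final layer of a pretrained backbone with one sized
# to the dataset's annotation classes, then fine-tune end to end.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass; real training runs many epochs
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```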

Now, AIMI has teamed up with Microsoft’s AI for Health program to launch a new platform that will be more automated, accessible, and visible. It will be capable of hosting and organizing scores of additional images from institutions around the world. Part of the idea is to create an open and global repository. The platform will also provide a hub for sharing research, making it easier to refine different models and identify differences between population groups. The platform can even offer cloud-based computing power so researchers don’t have to worry about building local, resource-intensive clinical machine-learning infrastructure….(More)”.

The Illusion of Inclusion — The “All of Us” Research Program and Indigenous Peoples’ DNA


Article by Keolu Fox: “Raw data, including digital sequence information derived from human genomes, have in recent years emerged as a top global commodity. This shift is so new that experts are still evaluating what such information is worth in a global market. In 2018, the direct-to-consumer genetic-testing company 23andMe sold access to its database containing digital sequence information from approximately 5 million people to GlaxoSmithKline for $300 million. Earlier this year, 23andMe partnered with Almirall, a Spanish drug company that is using the information to develop a new anti-inflammatory drug for autoimmune disorders. This move marks the first time that 23andMe has signed a deal to license a drug for development.

Eighty-eight percent of people included in large-scale studies of human genetic variation are of European ancestry, as are the majority of participants in clinical trials. Corporations such as Geisinger Health System, Regeneron Pharmaceuticals, AncestryDNA, and 23andMe have already mined genomic databases for the strongest genotype–phenotype associations. For the field to advance, a new approach is needed. There are many potential ways to improve existing databases, including “deep phenotyping,” which involves collecting precise measurements from blood panels, questionnaires, cognitive surveys, and other tests administered to research participants. But this approach is costly and physiologically and mentally burdensome for participants. Another approach is to expand existing biobanks by adding genetic information from populations whose genomes have not yet been sequenced — information that may offer opportunities for discovering globally rare but locally common population-specific variants, which could be useful for identifying new potential drug targets.

Many Indigenous populations have been geographically isolated for tens of thousands of years. Over time, these populations have developed adaptations to their environments that have left specific variant signatures in their genomes. As a result, the genomes of Indigenous peoples are a treasure trove of unexplored variation. Some of this variation will inevitably be identified by programs like the National Institutes of Health (NIH) “All of Us” research program. NIH leaders have committed to the idea that at least 50% of this program’s participants should be members of underrepresented minority populations, including U.S. Indigenous communities (Native Americans, Alaskan Natives, and Native Hawaiians), a decision that explicitly connects diversity with the program’s goal of promoting equal enjoyment of the future benefits of precision medicine.

But there are reasons to believe that this promise may be an illusion….(More)”.

Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions


Paper by Joseph Donia and Jay Shaw: “Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and although multi-purpose, are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design – the involvement of end users in design. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design, and, informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design, and created entirely new challenges. We outline five co-design ‘myths and misconceptions’ related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a ‘new era’ of co-design capable of addressing these challenges….(More)”.