How Facebook and Google are helping the CDC forecast coronavirus


Karen Hao at MIT Technology Review: “When it comes to predicting the spread of an infectious disease, it’s crucial to understand what Ryan Tibshirani, an associate professor at Carnegie Mellon University, calls ‘the pyramid of severity.’ The bottom of the pyramid is asymptomatic carriers (those who have the infection but feel fine); the next level is symptomatic carriers (those who feel ill); then come hospitalizations, critical hospitalizations, and finally deaths.

Every level of the pyramid has a clear relationship to the next: “For example, sadly, it’s pretty predictable how many people will die once you know how many people are under critical care,” says Tibshirani, who is part of CMU’s Delphi research group, one of the best flu-forecasting teams in the US. The goal, therefore, is to have a clear measure of the lower levels of the pyramid, as the foundation for forecasting the higher ones.

But in the US, building such a model is a Herculean task. A lack of testing makes it impossible to assess the number of asymptomatic carriers. Nor do test results accurately reflect how many symptomatic carriers there are: different counties have different testing requirements, with some choosing to test only patients who require hospitalization, and results often take upwards of a week to return.

The remaining option is to measure symptomatic carriers through a large-scale, self-reported survey. But such an initiative won’t work unless it covers a big enough cross section of the entire population. Now the Delphi group, which has been working with the Centers for Disease Control and Prevention to help it coordinate the national pandemic response, has turned to the largest platforms in the US: Facebook and Google.

Facebook will help the CMU Delphi research group gather data about Covid symptoms

In a new partnership with Delphi, both tech giants have agreed to help gather data from those who voluntarily choose to report whether they’re experiencing covid-like symptoms. Facebook will target a fraction of its US users with a CMU-run survey, while Google has thus far been using its Opinion Rewards app, which lets users respond to questions for app store credit. The hope is that this new information will allow the lab to produce county-by-county projections that will help policymakers allocate resources more effectively.

Neither company will ever actually see the survey results; they’re merely pointing users to the questions administered and processed by the lab. The lab will also never share any of the raw data back to either company. Still, the agreements represent a major deviation from typical data-sharing practices, which could raise privacy concerns. “If this wasn’t a pandemic, I don’t know that companies would want to take the risk of being associated with or asking directly for such a personal piece of information as health,” Tibshirani says.

Without such cooperation, the researchers would have been hard-pressed to find the data anywhere else. Several other apps allow users to self-report symptoms, including a popular one in the UK known as the Covid Symptom Tracker, which has been downloaded over 1.5 million times. But none of them offer the same systematic and expansive coverage as a Facebook- or Google-administered survey, says Tibshirani. He hopes the project will collect millions of responses each week….(More)”.

Tracking coronavirus: big data and the challenge to privacy


Nic Fildes and Javier Espinoza at the Financial Times: “When the World Health Organization launched a 2007 initiative to eliminate malaria on Zanzibar, it turned to an unusual source to track the spread of the disease between the island and mainland Africa: mobile phones sold by Tanzania’s telecoms groups including Vodafone, the UK mobile operator.

Working together with researchers at Southampton university, Vodafone began compiling sets of location data from mobile phones in the areas where cases of the disease had been recorded. 

Mapping how populations move between locations has proved invaluable in tracking and responding to epidemics. The Zanzibar project has been replicated by academics across the continent to monitor other deadly diseases, including Ebola in west Africa….

With much of Europe at a standstill as a result of the coronavirus pandemic, politicians want the telecoms operators to provide similar data from smartphones. Thierry Breton, the former chief executive of France Telecom who is now the European commissioner for the internal market, has called on operators to hand over aggregated location data to track how the virus is spreading and to identify spots where help is most needed.

Both politicians and the industry insist that the data sets will be “anonymised”, meaning that customers’ individual identities will be scrubbed out. Mr Breton told the Financial Times: “In no way are we going to track individuals. That’s absolutely not the case. We are talking about fully anonymised, aggregated data to anticipate the development of the pandemic.”

But the use of such data to track the virus has triggered fears of growing surveillance, including questions about how the data might be used once the crisis is over and whether such data sets are ever truly anonymous….(More)”.

New Tool to Establish Responsible Data Collaboratives in the Time of COVID-19


Announcement: “To address the COVID-19 pandemic and other dynamic threats, The GovLab has called for the development of a new data infrastructure and ecosystem. Establishing data collaboratives in a responsible manner often necessitates the creation of data sharing agreements and other legal documentation — a strain on time and capacity both for data holders and those who could use data in the public interest.

Today, to support the development of data collaboratives in a responsible and agile way, we are sharing a new tool that addresses the complexity of preparing a Data Sharing Agreement, from Contracts for Data Collaboration (a joint initiative of SDSN-TReNDS, the World Economic Forum, The GovLab, and the University of Washington’s Information Risk Research Initiative). The tool provides a checklist to support organizations in reviewing, negotiating and preparing Data Sharing Arrangements; the intent is to strengthen stakeholder trust and help accelerate responsible data sharing arrangements given the urgency of the global pandemic.

(Please note that the checklist is a tool for formulating and understanding legal issues; we are not offering it as legal advice.)

CLICK HERE TO DOWNLOAD THE TOOL (More)”.

The Responsible Data for Children (RD4C) Case Studies


Andrew Young at Datastewards.net: “This week, as part of the Responsible Data for Children initiative (RD4C), the GovLab and UNICEF launched a new case study series to provide insights on promising practice as well as barriers to realizing responsible data for children.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support responsible handling of data for and about children; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management.

RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles: Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle.

The RD4C Case Studies analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. This week’s release includes case studies arising from field missions to Romania, Kenya, and Afghanistan in 2019. The data systems examined are:

Coronavirus: country comparisons are pointless unless we account for these biases in testing


Norman Fenton, Magda Osman, Martin Neil, and Scott McLachlan at The Conversation: “Suppose we wanted to estimate how many car owners there are in the UK and how many of those own a Ford Fiesta, but we only have data on those people who visited Ford car showrooms in the last year. If 10% of the showroom visitors owned a Fiesta, then, because of the bias in the sample, this would certainly overestimate the proportion of Ford Fiesta owners in the country.

Estimating death rates for people with COVID-19 is currently undertaken largely along the same lines. In the UK, for example, almost all testing for COVID-19 is performed on people already hospitalised with COVID-19 symptoms. At the time of writing, there are 29,474 confirmed COVID-19 cases (analogous to car owners visiting a showroom), of whom 2,352 have died (Ford Fiesta owners who visited a showroom). But this sample misses all the people with mild or no symptoms.

Concluding that the death rate from COVID-19 is on average 8% (2,352 out of 29,474) ignores the many people with COVID-19 who are not hospitalised and have not died (analogous to car owners who did not visit a Ford showroom and who do not own a Ford Fiesta). It is therefore equivalent to making the mistake of concluding that 10% of all car owners own a Fiesta.
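The arithmetic of this bias can be made concrete with a short sketch. The confirmed-case and death counts below are the article’s figures; the ten-to-one ratio of total infections to confirmed cases is an invented assumption purely for illustration, not an estimate.

```python
# Illustrative sketch of the selection-bias problem described above.
# Confirmed-case figures are from the article; the multiplier for
# undetected infections is a made-up assumption for illustration only.

confirmed_cases = 29_474   # mostly hospitalised, symptomatic patients
deaths = 2_352

naive_cfr = deaths / confirmed_cases
print(f"Naive case fatality rate: {naive_cfr:.1%}")  # ~8.0%

# Suppose (hypothetically) that for every confirmed case there were
# nine undetected mild or asymptomatic infections.
undetected_multiplier = 10
estimated_infections = confirmed_cases * undetected_multiplier

adjusted_ifr = deaths / estimated_infections
print(f"Adjusted infection fatality rate: {adjusted_ifr:.1%}")  # ~0.8%
```

The point is not the particular multiplier, which nobody knows without wider testing, but that the headline “death rate” moves by an order of magnitude depending on who gets tested.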

There are many prominent examples of this sort of conclusion. The Oxford COVID-19 Evidence Service have undertaken a thorough statistical analysis. They acknowledge potential selection bias, and add confidence intervals showing how big the error may be for the (potentially highly misleading) proportion of deaths among confirmed COVID-19 patients.

They note various factors that can result in wide national differences – for example the UK’s 8% (mean) “death rate” is very high compared to Germany’s 0.74%. These factors include different demographics, for example the number of elderly in a population, as well as how deaths are reported. For example, in some countries everybody who dies after having been diagnosed with COVID-19 is recorded as a COVID-19 death, even if the disease was not the actual cause, while other people may die from the virus without actually having been diagnosed with COVID-19.

However, the models fail to incorporate explicit causal explanations that might enable us to make more meaningful inferences from the available data, including data on virus testing.

[Figure: What a causal model would look like. Author provided]

We have developed an initial prototype “causal model” whose structure is shown in the figure above. The links between the named variables in a model like this show how they are dependent on each other. These links, along with other unknown variables, are captured as probabilities. As data are entered for specific, known variables, all of the unknown variable probabilities are updated using a method called Bayesian inference. The model shows that the COVID-19 death rate is as much a function of sampling methods, testing and reporting, as it is determined by the underlying rate of infection in a vulnerable population….(More)”
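The authors’ model itself is not reproduced here, but the Bayesian updating it relies on can be illustrated with a minimal sketch. All the probabilities below (prevalence, test sensitivity, false-positive rate) are invented for illustration; the real model links many more variables, including sampling and reporting policies.

```python
# A minimal Bayesian-updating sketch, loosely in the spirit of the causal
# model described above. All probabilities are invented for illustration.

# Prior belief about the infection rate in the population.
p_infected = 0.01

# Hypothetical test characteristics.
p_pos_given_infected = 0.90      # sensitivity
p_pos_given_not_infected = 0.05  # false-positive rate

# Bayes' rule: P(infected | positive test).
p_positive = (p_pos_given_infected * p_infected
              + p_pos_given_not_infected * (1 - p_infected))
p_infected_given_positive = p_pos_given_infected * p_infected / p_positive

print(f"P(infected | positive) = {p_infected_given_positive:.2%}")
```

Even with a fairly accurate test, a low prior infection rate leaves most positive results coming from the uninfected majority, which is why the model treats testing policy as a variable in its own right rather than taking raw counts at face value.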

The potential of Data Collaboratives for COVID19


Blog post by Stefaan Verhulst: “We live in almost unimaginable times. The spread of COVID-19 is a human tragedy and global crisis that will impact our communities for many years to come. The social and economic costs are huge and mounting, and they are already contributing to a global slowdown. Every day, the emerging pandemic reveals new vulnerabilities in various aspects of our economic, political and social lives. These include our vastly overstretched public health services, our dysfunctional political climate, and our fragile global supply chains and financial markets.

The unfolding crisis is also making shortcomings clear in another area: the way we re-use data responsibly. Although this aspect of the crisis has been less remarked upon than other, more obvious failures, those who work with data—and who have seen its potential to impact the public good—understand that we have failed to create the necessary governance and institutional structures that would allow us to harness data responsibly to halt or at least limit this pandemic. A recent article in Stat, an online journal dedicated to health news, characterized the COVID-19 outbreak as “a once-in-a-century evidence fiasco.” The article continues: 

“At a time when everyone needs better information, […] we lack reliable evidence on how many people have been infected with SARS-CoV-2 or who continue to become infected. Better information is needed to guide decisions and actions of monumental significance and to monitor their impact.” 

It doesn’t have to be this way, and these data challenges are not an excuse for inaction. As we explain in what follows, there is ample evidence that the re-use of data can help mitigate health pandemics. A robust (if somewhat unsystematized) body of knowledge could direct policymakers and others in their efforts. In the second part of this article, we outline eight steps that key stakeholders can and should take to better re-use data in the fight against COVID-19. In particular, we argue that more responsible data stewardship and increased use of data collaboratives are critical….(More)”. 

Mobile phone data and COVID-19: Missing an opportunity?


Paper by Nuria Oliver, et al: “This paper describes how mobile phone data can guide government and public health authorities in determining the best course of action to control the COVID-19 pandemic and in assessing the effectiveness of control measures such as physical distancing. It identifies key gaps and reasons why this kind of data is still only scarcely used, although its value in similar epidemics has been proven in a number of use cases. It presents ways to overcome these gaps and key recommendations for urgent action, most notably the establishment of mixed expert groups at national and regional levels, and the inclusion and support of governments and public authorities early on. It is authored by a group of experienced data scientists, epidemiologists, demographers and representatives of mobile network operators who have jointly put their work at the service of the global effort to combat the COVID-19 pandemic….(More)”.

Why isn’t the government publishing more data about coronavirus deaths?


Article by Jeni Tennison: “Studying the past is futile in an unprecedented crisis. Science is the answer – and open-source information is paramount…Data is a necessary ingredient in day-to-day decision-making – but in this rapidly evolving situation, it’s especially vital. Everything has changed, almost overnight. Demands for food, transport, and energy have been overhauled as more people stop travelling and work from home. Jobs have been lost in some sectors, and workers are desperately needed in others. Historic experience can no longer tell us how our society or economy is working. Past models hold little predictive power in an unprecedented situation. To know what is happening right now, we need up-to-date information….

This data is also crucial for scientists, who can use it to replicate and build upon each other’s work. Yet no open data has been published alongside the evidence for the UK government’s coronavirus response. While a model that informed the US government’s response is freely available as a Google spreadsheet, the Imperial College London model that prompted the current lockdown has still not been published as open-source code. Making data open – publishing it on the web, in spreadsheets, without restrictions on access – is the best way to ensure it can be used by the people who need it most.

There is currently no open data available on UK hospitalisation rates, and no regional, age or gender breakdown of daily deaths. The more granular breakdown of registered deaths provided by the Office for National Statistics is only published on a weekly basis, and with a delay. It is hard to tell whether this data does not exist or whether the NHS has prioritised creating dashboards for government decision makers over informing the rest of the country. But the UK is making progress with regard to data: potential Covid-19 cases identified through online and call-centre triage are now being published daily by NHS Digital.

Of course, not all data should be open. Singapore has been publishing detailed data about every infected person, including their age, gender, workplace, where they have visited and whether they had contact with other infected people. This can both harm the people who are documented and incentivise others to lie to authorities, undermining the quality of data.

When people are concerned about how data about them is handled, they demand transparency. To retain our trust, governments need to be open about how data is collected and used, how it’s being shared, with whom, and for what purpose. Openness about the use of personal data to help tackle the Covid-19 crisis will become more pressing as governments seek to develop contact tracing apps and immunity passports….(More)”.

Urgently Needed for Policy Guidance: An Operational Tool for Monitoring the COVID-19 Pandemic


Paper by Stephane Luchini et al: “The radical uncertainty around the current COVID-19 pandemic requires that governments around the world be able to track in real time not only how the virus spreads but, most importantly, which policies are effective in keeping the spread of the disease in check. To improve the quality of health decision-making, we argue that it is necessary to monitor and compare the acceleration/deceleration of confirmed cases over health policy responses, across countries. To do so, we provide a simple mathematical tool to estimate the convexity/concavity of trends in epidemiological surveillance data. Had it been applied at the onset of the crisis, it would have offered more opportunities to measure the impact of the policies undertaken in different Asian countries, and to allow European and North American governments to draw quicker lessons from these Asian experiences when making policy decisions. Our tool can be especially useful as the epidemic is currently extending to lower-income African and South American countries, some of which have weaker health systems….(More)”.
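The core of such a tool is estimating whether a cumulative case curve is convex (accelerating) or concave (decelerating). A minimal sketch of that idea, using discrete second differences on an invented cumulative case series (this is an illustration of the general technique, not the paper’s actual estimator):

```python
# Sketch of acceleration/deceleration monitoring via discrete second
# differences of cumulative confirmed cases. A positive second difference
# indicates convexity (accelerating spread); negative indicates concavity
# (decelerating). The case series below is invented for illustration.

def second_differences(series):
    """Discrete second differences: series[i+2] - 2*series[i+1] + series[i]."""
    return [series[i + 2] - 2 * series[i + 1] + series[i]
            for i in range(len(series) - 2)]

# Hypothetical cumulative confirmed cases over consecutive days.
cases = [100, 150, 230, 350, 500, 640, 750, 830]

accel = second_differences(cases)
for day, a in enumerate(accel, start=2):
    trend = "accelerating (convex)" if a > 0 else "decelerating (concave)"
    print(f"day {day}: second difference {a:+d} -> {trend}")
```

In this invented series the sign flips from positive to negative around day five, the kind of turning point that would signal a policy beginning to bend the curve.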

Human migration: the big data perspective


Alina Sîrbu et al at the International Journal of Data Science and Analytics: “How can big data help to understand the migration phenomenon? In this paper, we try to answer this question through an analysis of various phases of migration, comparing traditional and novel data sources and models at each phase. We concentrate on three phases of migration, at each phase describing the state of the art and recent developments and ideas. The first phase includes the journey, and we study migration flows and stocks, providing examples where big data can have an impact. The second phase discusses the stay, i.e. migrant integration in the destination country. We explore various data sets and models that can be used to quantify and understand migrant integration, with the final aim of providing the basis for the construction of a novel multi-level integration index. The last phase is related to the effects of migration on the source countries and the return of migrants….(More)”.