A call for a new generation of COVID-19 models


Blog post by Alex Engler: “Existing models have been valuable, but they were not designed to support these types of critical decisions. A new generation of models that estimate the risk of COVID-19 spread for precise geographies—at the county or even more localized level—would be much more informative for these questions. Rather than produce long-term predictions of deaths or hospital utilization, these models could estimate near-term relative risk to inform local policymaking. Going forward, governors and mayors need local, current, and actionable numbers.

Broadly speaking, better models would substantially aid the “adaptive response” approach to re-opening the economy. In this strategy, policymakers cyclically loosen and re-tighten restrictions, attempting to work back towards a healthy economy without moving so fast that infections take off again. In an ideal process, restrictions would be eased at a pace that balances a swift return to normalcy with minimizing total COVID-19 infections. Of course, this is impossible in practice, and so some continued adjustments (the flipping of various controls off and on again) will be necessary. More precise models can improve this process by providing another lens into when it will be safe to relax restrictions, making it easier to avoid a disruptive back-and-forth. A more-or-less continuous easing of restrictions is especially valuable, since it is unlikely that second or third rounds of interventions (such as social distancing) would achieve the same high rates of compliance as the first round.
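
To make the adaptive-response loop concrete, a minimal sketch of one possible decision rule follows, written in Python; the relative-risk score, both thresholds, and the action labels are hypothetical illustrations, not something specified in the post.

```python
# A minimal, hypothetical sketch of an "adaptive response" decision rule.
# The thresholds and the risk score itself are illustrative assumptions.

TIGHTEN_ABOVE = 1.2  # re-tighten restrictions when near-term relative risk exceeds this
LOOSEN_BELOW = 0.8   # ease further only when relative risk falls below this

def next_step(relative_risk: float) -> str:
    """Map a near-term relative-risk estimate to a policy action.

    The gap between the two thresholds adds hysteresis, so small
    fluctuations in the estimate do not flip restrictions off and on.
    """
    if relative_risk > TIGHTEN_ABOVE:
        return "tighten"
    if relative_risk < LOOSEN_BELOW:
        return "loosen"
    return "hold"

if __name__ == "__main__":
    for risk in (1.4, 1.0, 0.6):
        print(risk, "->", next_step(risk))
```

The gap between the two thresholds is the point of the sketch: it is what keeps small estimation noise from triggering the disruptive back-and-forth described above.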

The proliferation of COVID-19 data

These models can incorporate cases, test-positive rates, hospitalization information, deaths, excess deaths, and other known COVID-19 data. While all of these data sources are incomplete, an expanding body of research on COVID-19 is making the data more interpretable. This research will become progressively more valuable as more data emerge on the spread of COVID-19 in the U.S., rather than relying on data from other countries or past pandemics.

Further, a broad range of non-COVID-19 data can also inform risk estimates: population density, age distributions, poverty and uninsured rates, the number of essential frontline workers, and co-morbidity factors can all be included. Community mobility reports from Google and Unacast’s social distancing scorecard can identify how easing restrictions is changing behavior. Small-area estimates also allow the models to account for the risk of spread from other nearby geographies. Geospatial statistics cannot account for infectious spread between two large neighboring states, but they would add value for adjacent zip codes. Lastly, many more data sources are in the works, such as open patient data registries, the National Institutes of Health’s (NIH) study of asymptomatic persons, self-reported symptoms data from Facebook, and (potentially) new randomized surveys. In fact, there are so many diverse and relevant data streams that models can add value simply by consolidating daily information into just a few top-line numbers that are comparable across the nation.
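
As a purely illustrative sketch of what consolidating these daily data streams into a few top-line numbers might look like, the Python below combines several such signals into a single county-level index and crudely blends in risk from neighboring counties; the signal names, weights, and smoothing rule are all assumptions invented for the example, not a published methodology.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical example: consolidate several data streams into one
# county-level relative-risk index. All inputs, weights, and the
# neighbor-smoothing step are illustrative assumptions.

@dataclass
class CountySignals:
    case_trend: float       # week-over-week growth in reported cases
    test_positivity: float  # share of tests returning positive
    mobility_change: float  # change vs. baseline, e.g., from mobility reports
    pct_over_65: float      # demographic vulnerability proxy

WEIGHTS = {
    "case_trend": 0.4,
    "test_positivity": 0.3,
    "mobility_change": 0.2,
    "pct_over_65": 0.1,
}

def local_risk(s: CountySignals) -> float:
    """Combine a county's own signals into a single raw risk score."""
    return (WEIGHTS["case_trend"] * s.case_trend
            + WEIGHTS["test_positivity"] * s.test_positivity
            + WEIGHTS["mobility_change"] * s.mobility_change
            + WEIGHTS["pct_over_65"] * s.pct_over_65)

def smoothed_risk(county: str,
                  signals: Dict[str, CountySignals],
                  neighbors: Dict[str, List[str]],
                  alpha: float = 0.7) -> float:
    """Blend a county's own risk with the mean risk of adjacent counties,
    a crude stand-in for small-area spatial smoothing."""
    own = local_risk(signals[county])
    nearby = [local_risk(signals[n]) for n in neighbors.get(county, []) if n in signals]
    if not nearby:
        return own
    return alpha * own + (1 - alpha) * sum(nearby) / len(nearby)
```

Published daily for every county, a single number like this would give local officials one comparable figure per jurisdiction, which is exactly the consolidation value described above.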

FiveThirtyEight has effectively explained that making these models is tremendously difficult due to incomplete data, especially since the U.S. is not testing enough or in statistically valuable ways. These challenges are real, but decision-makers are currently using this same highly flawed data to make inferences and policy choices. Despite the many known problems, elected officials and public health services have no choice. Frequently, they are evaluating the data without the time and expertise to make reasoned statistical interpretations based on epidemiological research, leaving significant opportunity for modeling to help….(More)”.