Blog by David Osimo: “…So how, then, did these new tools perform when confronted with the once-in-a-lifetime crisis of a vast global pandemic?
It turns out, some things worked. Others didn’t. And the question of how these new policymaking tools functioned in the heat of battle is already generating valuable ammunition for future crises.
So what worked?
Policy modelling – an analytical framework designed to anticipate the impact of decisions by simulating the interaction of multiple agents in a system, rather than just the independent actions of atomised and rational humans – took centre stage in the pandemic and emerged with its importance in policymaking reinforced. Notably, it helped governments predict how and when to introduce lockdowns or open up. But even there, uptake was limited. A recent survey of 28 models used in different countries to fight the pandemic showed that they were mostly traditional, not the modern “agent-based models” or “system dynamics” supposed to deal best with uncertainty. Meanwhile, the concepts of systems science were becoming prominent and widely communicated. It quickly became clear in the course of the crisis that social distancing was more a method to reduce the systemic pressure on the health services than a way to avoid individual contagion (the so-called “flatten the curve” approach).
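To make the distinction concrete, here is a minimal agent-based sketch in Python – a toy illustration, not any model actually used by governments, with all parameters invented. Each agent meets a handful of random others per day, and cutting the contact rate lowers the peak number of simultaneous infections: precisely the “flatten the curve” logic of reducing systemic pressure rather than individual risk.

```python
# Toy agent-based SIR sketch (illustrative only; all parameters are invented).
# Reducing contacts_per_day lowers the *peak* of simultaneous infections,
# i.e. it "flattens the curve" even if many people are eventually infected.
import random

def simulate(n_agents=2000, contacts_per_day=8, p_transmit=0.05,
             days_infectious=7, n_days=120, seed=1):
    rng = random.Random(seed)
    # state: 0 = susceptible, >0 = days of infectiousness left, -1 = recovered
    state = [0] * n_agents
    state[0] = days_infectious  # patient zero
    peak = 0
    for _ in range(n_days):
        infected = [i for i, s in enumerate(state) if s > 0]
        peak = max(peak, len(infected))
        for i in infected:
            for _ in range(contacts_per_day):
                j = rng.randrange(n_agents)
                if state[j] == 0 and rng.random() < p_transmit:
                    state[j] = days_infectious  # new infection
        for i in infected:
            state[i] = -1 if state[i] == 1 else state[i] - 1
    return peak

if __name__ == "__main__":
    for contacts in (8, 4, 2):  # progressively stronger social distancing
        print(f"contacts/day={contacts}: peak simultaneous infections "
              f"= {simulate(contacts_per_day=contacts)}")
```

Even in this crude sketch, halving contacts sharply reduces the peak load that would hit hospitals at once – the systemic effect that proved so hard to convey through individual-risk framing.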
Open government data has long promised to let citizens and businesses build new services at scale and make government accountable. The pandemic largely confirmed how important this data could be in allowing citizens to analyse things independently. Hundreds of analysts from all walks of life and disciplines used social media to discuss their analyses and predictions, many becoming household names and go-to people in their countries and regions. Yes, this led to noise and a so-called “infodemic,” but overall it served as a fundamental tool to increase confidence and consensus behind the policy measures and to hold governments accountable for their actions. For instance, one Catalan analyst demonstrated that vaccines were not being administered during weekends and forced the government to change its stance. Yet it is also clear that not all went well, most notably on the supply side. Governments published low-quality data: in PDF format, with delays, or with records missing due to spreadsheet abuse.
In most cases there was little demand for sophisticated data publishing solutions such as “linked” or “FAIR” data, although uptake of these solutions was particularly significant when it came time to share crucial research data. Experts argue that the trend towards open science has accelerated dramatically and irreversibly in the last year, as shown by the portal https://www.covid19dataportal.org/, which allowed the sharing of high-quality data for scientific research….
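For readers unfamiliar with the jargon: “linked” data essentially means publishing machine-readable metadata built from shared, resolvable vocabularies, so that datasets can be found and combined automatically. A minimal sketch of a DCAT-style JSON-LD record generated in Python (the dataset title and download URLs are invented for illustration; the vocabulary namespaces are the standard W3C and Dublin Core ones):

```python
import json

# Hypothetical catalogue entry for a COVID-19 dataset, expressed as JSON-LD
# using the (real) DCAT and Dublin Core vocabularies.
record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Daily COVID-19 cases by region",  # invented example
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "https://example.org/data/cases.csv",  # invented
        "dcat:mediaType": "text/csv",
    },
}
print(json.dumps(record, indent=2))
```

The contrast with a scanned PDF table is the whole point: metadata like this can be harvested, validated and queried by machines without a human re-keying the numbers.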
But other new policy tools proved less easy to use and ultimately ineffective. Collaborative governance, for one, promised to leverage the knowledge of thousands of citizens to improve public policies and services. In practice, methodologies aimed at involving citizens in decision making and service design were of little use. Decisions related to locking down and opening up were taken in closed committees, in top-down mode. Individual exceptions certainly exist: Milan, one of the cities worst hit by the pandemic, launched a co-created strategy for opening up after the lockdown, receiving almost 3,000 contributions to the consultation. But overall, such initiatives had limited impact and visibility. As for the co-design of public services, in times of emergency there was no time for prototyping or focus groups. Services such as emergency financial relief had to be launched in a hurry and “just work.”…
Citizen science promised to make every citizen a consensual data source for monitoring complex phenomena in real time through apps and Internet-of-Things sensors. In the pandemic, there were initially great expectations for digital contact-tracing apps, which were meant to allow real-time monitoring of contagion, most notably through Bluetooth connections on the phone. However, they were mostly a disappointment. Citizens were reluctant to install them. And contact tracing soon turned out to be much more complicated – and human-intensive – than originally thought. The heated debate over technology versus privacy was followed by very limited impact. Much ado about nothing.
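The underlying idea was elegant, which partly explains the expectations. Here is a toy Python sketch of the decentralised approach, in the spirit of DP-3T and the Google-Apple framework but heavily simplified (the class names and rotation logic are illustrative, not the actual protocol): phones broadcast short-lived random identifiers, remember the ones they hear, and later check locally against identifiers published by diagnosed users, so no central server ever sees the contact graph.

```python
# Toy sketch of decentralised Bluetooth contact tracing (illustrative only).
import secrets

class Phone:
    def __init__(self):
        self.own_ids = []       # ephemeral IDs this phone has broadcast
        self.heard_ids = set()  # ephemeral IDs heard from nearby phones

    def rotate_id(self):
        # Real protocols rotate identifiers every ~15 minutes and derive
        # them from a daily key; here we just draw fresh random bytes.
        eph = secrets.token_hex(8)
        self.own_ids.append(eph)
        return eph

def encounter(a, b):
    # Two phones within Bluetooth range exchange their current identifiers.
    a.heard_ids.add(b.rotate_id())
    b.heard_ids.add(a.rotate_id())

alice, bob, carol = Phone(), Phone(), Phone()
encounter(alice, bob)         # Alice and Bob meet; Carol meets nobody
published = set(bob.own_ids)  # Bob tests positive and uploads his IDs

for name, phone in [("alice", alice), ("carol", carol)]:
    exposed = bool(phone.heard_ids & published)
    print(f"{name}: exposure detected = {exposed}")
```

The privacy design was sound; the failure was sociotechnical – low installation rates, and the fact that real contact tracing involves far more than detecting proximity.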
Behavioural economics (commonly known as nudge theory) is probably the most visible failure of the pandemic. It promised to move beyond the traditional carrots (public funding) and sticks (regulation) of policy delivery by adopting an experimental method to influence or “nudge” human behaviour towards desired outcomes. The reality is that soft nudges proved an ineffective alternative to hard lockdown choices. What makes this failure uniquely damaging is that such methods took centre stage in the initial phase of the pandemic, and in particular informed the United Kingdom’s lax approach in the first months on the basis of a hypothetical and unproven “behavioural fatigue.” This attracted heavy criticism of the UK government’s excessive reliance on nudges, a legacy of Prime Minister David Cameron’s administration. The origin of such criticism seems to lie not in the shortcomings of the method per se, which had previously enjoyed success in more narrowly defined cases, but in the backlash from excessive expectations and promises, epitomised in the words of a prominent behavioural economist: “It’s no longer a matter of supposition as it was in 2010 […] we can now say with a high degree of confidence these models give you best policy.”
Three factors emerge as the key determinants behind success and failure: maturity, institutions and leadership….(More)”.