Policy 2.0 in the Pandemic World: What Worked, What Didn’t, and Why


Blog by David Osimo: “…So how, then, did these new tools perform when confronted with the once-in-a-lifetime crisis of a vast global pandemic?

It turns out, some things worked. Others didn’t. And the question of how these new policymaking tools functioned in the heat of battle is already generating valuable ammunition for future crises.

So what worked?

Policy modelling – an analytical framework designed to anticipate the impact of decisions by simulating the interaction of multiple agents in a system, rather than just the independent actions of atomised and rational humans – took centre stage in the pandemic and emerged with reinforced importance in policymaking. Notably, it helped governments predict how and when to introduce lockdowns or open up. But even there uptake was limited. A recent survey showed that most of the 28 models used in different countries to fight the pandemic were traditional, and not the modern “agent-based models” or “system dynamics” approaches supposed to deal best with uncertainty. Meanwhile, the concepts of systems science were becoming prominent and widely communicated. It quickly became clear in the course of the crisis that social distancing was more a method to reduce the systemic pressure on health services than a way to avoid individual contagion (the so-called “flatten the curve” approach).
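The systems logic behind “flatten the curve” can be illustrated with a minimal compartmental (SIR) simulation. This is a toy sketch with purely illustrative parameters, not one of the models referenced in the survey: halving the contact rate barely changes who eventually gets infected, but it dramatically lowers how many people are sick at once, which is what health services actually care about.

```python
# Minimal SIR ("flatten the curve") sketch. Lowering the contact rate beta
# does not necessarily spare everyone from infection, but it lowers the
# peak of simultaneous infections -- the systemic pressure on hospitals.
# All parameter values are illustrative assumptions.

def sir_peak(beta, gamma=0.1, i0=0.001, days=365):
    """Run a discrete-time SIR model; return the peak infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i   # new infections this step
        new_rec = gamma * i      # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

no_distancing = sir_peak(beta=0.3)    # basic reproduction number R0 = 3.0
distancing    = sir_peak(beta=0.15)   # contacts halved, R0 = 1.5

print(f"peak infected, no distancing:   {no_distancing:.1%}")
print(f"peak infected, with distancing: {distancing:.1%}")
```

Running the sketch shows the distancing scenario peaking at a small fraction of the no-distancing peak, which is the whole systemic argument in one number.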

Open government data has long promised to allow citizens and businesses to build new services at scale and make government accountable. The pandemic largely confirmed how important this data could be in allowing citizens to analyse things independently. Hundreds of analysts from all walks of life and disciplines used social media to discuss their analyses and predictions, many becoming household names and go-to people in their countries and regions. Yes, this led to noise and a so-called “infodemic,” but overall it served as a fundamental tool to increase confidence and consensus behind the policy measures and to make governments accountable for their actions. For instance, one Catalan analyst demonstrated that vaccines were not being provided during weekends and forced the government to change its stance. Yet it is also clear that not all went well, most notably on the supply side. Governments published low-quality data: locked in PDFs, released with delays, or riddled with gaps due to spreadsheet misuse.

In most cases, there was little demand for sophisticated data publishing solutions such as “linked” or “FAIR” data, although uptake of these solutions was particularly significant when it came time to share crucial research data. Experts argue that the trend towards open science has accelerated dramatically and irreversibly in the last year, as shown by the portal https://www.covid19dataportal.org/ which allowed sharing of high-quality data for scientific research….

But other new policy tools proved less easy to use and ultimately ineffective. Collaborative governance, for one, promised to leverage the knowledge of thousands of citizens to improve public policies and services. In practice, methodologies aimed at involving citizens in decision making and service design were of little use. Decisions related to lockdown and opening up were taken in closed committees in top-down mode. Individual exceptions certainly exist: Milan, one of the cities worst hit by the pandemic, launched a co-created strategy for opening up after the lockdown, receiving almost 3,000 contributions to the consultation. But overall, such initiatives had limited impact and visibility. With regard to co-design of public services, in times of emergency there was no time for prototyping or focus groups. Services such as emergency financial relief had to be launched in a hurry and “just work.”

Citizen science promised to make every citizen a consensual data source for monitoring complex phenomena in real time through apps and Internet-of-Things sensors. In the pandemic, there were initially great expectations that digital contact-tracing apps would allow real-time monitoring of contagion, most notably through Bluetooth connections on the phone. However, they were mostly a disappointment. Citizens were reluctant to install them. And contact tracing soon appeared to be much more complicated – and human-intensive – than originally thought. The huge debate over technology versus privacy was followed by very limited impact. Much ado about nothing.

Behavioural economics (commonly known as nudge theory) is probably the most visible failure of the pandemic. It promised to move beyond traditional carrots (public funding) and sticks (regulation) in delivering policy objectives by adopting an experimental method to influence or “nudge” human behaviour towards desired outcomes. The reality is that soft nudges proved an ineffective alternative to hard lockdown choices. What makes it uniquely negative is that such methods took centre stage in the initial phase of the pandemic and particularly informed the United Kingdom’s lax approach in the first months on the basis of a hypothetical and unproven “behavioural fatigue.” This attracted heavy criticism of the excessive reliance on nudges by the United Kingdom government, a legacy of Prime Minister David Cameron’s administration. The origin of such criticisms seems to lie not in the method’s shortcomings per se, as it had previously enjoyed success in more specific cases, but in the backlash from excessive expectations and promises, epitomised in the quote of a prominent behavioural economist: “It’s no longer a matter of supposition as it was in 2010 […] we can now say with a high degree of confidence these models give you best policy.”

Three factors emerge as the key determinants behind success and failure: maturity, institutions and leadership….(More)”.

2030 Compass CoLab


About: “2030 Compass CoLab invites a group of experts, using an online platform, to contribute their perspectives on potential interactions between the goals in the UN’s 2030 Agenda for Sustainable Development.

By combining the insight of participants who possess broad and diverse knowledge, we hope to develop a richer understanding of how the Sustainable Development Goals (SDGs) may be complementary or conflicting.

2030 Compass CoLab is part of a larger project, The Agenda 2030 Compass Methodology and toolbox for strategic decision making, funded by Vinnova, Sweden’s government agency for innovation.

Other elements of the larger project include:

  • Deliberations by a panel of experts who will convene in a series of live meetings to undertake in-depth analysis on interactions between the goals. 
  • Quantitative analysis of SDG indicator time-series data, which will examine historical correlations between progress on the SDGs.
  • Development of a knowledge repository, residing in a new software tool under development as part of the project. This tool will be made available as a resource to guide the decisions of corporate executives, policy makers, and leaders of NGOs.

The overall project was inspired by the work of researchers at the Stockholm Environment Institute, described in Towards systemic and contextual priority setting for implementing the 2030 Agenda, a 2018 paper in Sustainability Science by Nina Weitz, Henrik Carlsen, Måns Nilsson, and Kristian Skånberg….(More)”.

Intellectual Property and Artificial Intelligence


A literature review by the Joint Research Centre: “Artificial intelligence has entered the sphere of creativity and ingenuity. Recent headlines refer to paintings produced by machines, music performed or composed by algorithms, or drugs discovered by computer programs. This paper discusses the possible implications of the development and adoption of this new technology for the intellectual property framework and presents the opinions expressed by practitioners and legal scholars in recent publications. The literature review, although not intended to be exhaustive, reveals a series of questions that call for further reflection. These concern the protection of artificial intelligence by intellectual property, the use of data to feed algorithms, the protection of the results generated by intelligent machines, as well as the relationship between ethical requirements of transparency and explainability and the interests of rights holders….(More)”.

How Digital Trust Varies Around the World


Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi at Harvard Business Review: “As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics…(More)”.

Far-right news sources on Facebook more engaging


Study by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy: “Facebook has become a major way people find news and information in an increasingly politically polarized nation. We analyzed how users interacted with different types of posts promoted as news in the lead-up to and aftermath of the U.S. 2020 elections. We found that politically extreme sources tend to generate more interactions from users. In particular, content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group. Additionally, frequent purveyors of far-right misinformation had on average 65% more engagement per follower than other far-right pages. We found:

  • Sources of news and information rated as far-right generate the highest average number of interactions per follower with their posts, followed by sources from the far-left, and then news sources closer to the center of the political spectrum.
  • Looking at the far-right, misinformation sources far outperform non-misinformation sources. Far-right sources designated as spreaders of misinformation had an average of 426 interactions per thousand followers per week, while non-misinformation sources had an average of 259 weekly interactions per thousand followers.
  • Engagement with posts from far-right and far-left news sources peaked around Election Day and again on January 6, the day of the certification of the electoral count and the U.S. Capitol riot. For posts from all other political leanings of news sources, the increase in engagement was much less intense.
  • Center and left partisan categories incur a misinformation penalty, while right-leaning sources do not. Center sources of misinformation, for example, performed about 70% worse than their non-misinformation counterparts. (Note: center sources of misinformation tend to be sites presenting as health news that have no obvious ideological orientation.)…(More)”.
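The roughly 65% misinformation uplift quoted above can be sanity-checked against the two aggregate rates the study reports (426 vs. 259 weekly interactions per thousand followers); this back-of-the-envelope sketch uses only those published numbers:

```python
# Back-of-the-envelope check of the far-right engagement figures quoted
# in the study: weekly interactions per thousand followers.
misinfo_rate = 426   # far-right misinformation sources
other_rate = 259     # far-right non-misinformation sources

uplift = (misinfo_rate - other_rate) / other_rate
print(f"misinformation uplift: {uplift:.1%}")
```

The two aggregates imply an uplift of about 64–65%, consistent with the study's headline "65% more engagement per follower" figure (which is an average across pages rather than a ratio of these two aggregates).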

Europe’s Digital Decade: Commission sets the course towards a digitally empowered Europe by 2030


European Commission Press Release: “…The Commission proposes a Digital Compass to translate the EU’s digital ambitions for 2030 into concrete terms. These revolve around four cardinal points:

1) Digitally skilled citizens and highly skilled digital professionals; By 2030, at least 80% of all adults should have basic digital skills, and there should be 20 million employed ICT specialists in the EU – while more women should take up such jobs;

2) Secure, performant and sustainable digital infrastructures; By 2030, all EU households should have gigabit connectivity and all populated areas should be covered by 5G; the production of cutting-edge and sustainable semiconductors in Europe should be 20% of world production; 10,000 climate neutral highly secure edge nodes should be deployed in the EU; and Europe should have its first quantum computer;

3) Digital transformation of businesses; By 2030, three out of four companies should use cloud computing services, big data and Artificial Intelligence; more than 90% of SMEs should reach at least a basic level of digital intensity; and the number of EU unicorns should double;

4) Digitalisation of public services; By 2030, all key public services should be available online; all citizens will have access to their e-medical records; and 80% of citizens should use an eID solution.

The Compass sets out a robust joint governance structure with Member States based on a monitoring system with annual reporting in the form of traffic lights. The targets will be enshrined in a Policy Programme to be agreed with the European Parliament and the Council….(More)”.

Runaway Technology: Can Law Keep Up?


Book by Joshua A. T. Fairfield: “In an era of corporate surveillance, artificial intelligence, deep fakes, genetic modification, automation, and more, law often seems to take a back seat to rampant technological change. To listen to Silicon Valley barons, there’s nothing any of us can do about it. In this riveting work, Joshua A. T. Fairfield calls their bluff. He provides a fresh look at law, at what it actually is, how it works, and how we can create the kind of laws that help humans thrive in the face of technological change. He shows that law can keep up with technology because law is a kind of technology – a social technology built by humans out of cooperative fictions like firms, nations, and money. However, to secure the benefits of changing technology for all of us, we need a new kind of law, one that reflects our evolving understanding of how humans use language to cooperate….(More)”.

Improving Governance by Asking Questions that Matter


Fiona Cece, Nicola Nixon and Stefaan Verhulst at the Open Government Partnership:

“You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions” – Naguib Mahfouz

Data is at the heart of every dimension of the COVID-19 challenge. It’s been vital in the monitoring of daily rates, track-and-trace technologies, doctors’ appointments, and the vaccine roll-out. Yet our daily diet of brightly-coloured graphed global trends masks the maelstrom of inaccuracies, gaps and guesswork that underlies the ramshackle numbers on which they are so often based. Governments are unable to address their citizens’ needs in an informed way when the data itself is partial, incomplete or simply biased. And citizens, in turn, are unable to contribute to collective decision-making that impacts their lives when the channels for doing so in meaningful ways are largely non-existent.

There is an irony here. We live in an era in which there are an unprecedented number of methods for collecting data. Even in the poorest countries with weak or largely non-existent government systems, anyone with a mobile phone or access to the internet is using and producing data. Yet a chasm exists between the potential of data to contribute to better governance and what data is actually collected and used for.

Even where data accuracy can be relied upon, the practice of effective, efficient and equitable data governance requires much more than its collection and dissemination.

And although governments will play a vital role, combatting the pandemic and its associated socio-economic challenges will require the combined efforts of non-government organizations (NGOs), civil society organizations (CSOs), citizens’ associations, healthcare companies and providers, universities, think tanks and so many others. Collaboration is key.

There is a need to collectively move beyond solution-driven thinking. One initiative working toward this end is The 100 Questions Initiative by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering. In partnership with The Asia Foundation, the Centre for Strategic and International Studies in Indonesia, and the BRAC Institute of Governance and Development, the Initiative is launching a Governance domain. Collectively we will draw on the expertise of over 100 “bilinguals” – experts in both data science and governance – to identify the 10 most pressing questions on a variety of issues that can be addressed using data and data science. The cohort for this domain is multi-sectoral and geographically varied, and will provide diverse input on these governance challenges.

Once the questions have been identified and prioritized, and we have engaged with a broader public through a voting campaign, the ultimate goal is to establish one or more data collaboratives that can generate answers to the questions at hand. Data collaboratives are an emerging structure that allow pooling of data and expertise across sectors, often resulting in new insights and public sector innovations.  Data collaboratives are fundamentally about sharing and cross-sectoral engagement. They have been deployed across countries and sectoral contexts, and their relative success shows that in the twenty-first century no single actor can solve vexing public problems. The route to success lies through broad-based collaboration. 

Multi-sectoral and geographically diverse insight is needed to address the governance challenges we are living through, especially during the time of COVID-19. The pandemic has exposed weak governance practices globally, and collectively we need to craft a better response. As an open governance and data-for-development community, we have not yet leveraged the best insight available to inform an effective, evidence-based response to the pandemic. It is time we leverage more data and technology to enable citizen-centrism in our service delivery and decision-making processes, to contribute to overcoming the pandemic and to building our governance systems, institutions and structures back better. Together with over 130 ‘Bilinguals’ – experts in both governance and data – we have set about identifying the priority questions that data can answer to improve governance. Join us on this journey. Stay tuned for our public voting campaign in a couple of months’ time, when we will crowdsource your views on which of these questions really matter….(More)”.

Theories of Choice: The Social Science and the Law of Decision Making


Book by Stefan Grundmann and Philipp Hacker: “Choice is a key concept of our time. It is a foundational mechanism for every legal order in societies that are, politically, constituted as democracies and, economically, built on the market mechanism. Thus, choice can be understood as an atomic structure that grounds core societal processes. In recent years, however, the debate over the right way to theorise choice—for example, as a rational or a behavioural type of decision making—has intensified. This collection therefore provides an in-depth discussion of the promises and perils of specific types of theories of choice. It shows how the selection of a specific theory of choice can make a difference for concrete legal questions, in particularly in the regulation of the digital economy or in choosing between market, firm, or network.

In its first part, the volume provides an accessible overview of the current debates about rational versus behavioural approaches to theories of choice. The remainder of the book structures the vast landscape of theories of choice along three main types: individual, collective, and organisational decision making. As theories of choice proliferate and become ever more sophisticated, however, the process of choosing an adequate theory of choice becomes increasingly intricate, too. This volume addresses this selection problem for the various legal arenas in which individual, organisational, and collective decisions matter. By drawing on economic, technological, political, and legal points of view, the volume shows which theories of choice are at the disposal of the legally relevant decision maker, and how they can be implemented for the solution of concrete legal problems….(More)”.

How Humans Judge Machines


Open Access Book by César A. Hidalgo et al.: “How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance? How Humans Judge Machines compares people’s reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions. Are there conditions in which we judge machines unfairly?

Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender? César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI…(More)”.