Trust and Compliance to Public Health Policies in Times of Covid-19


Paper by Olivier Bargain and Ulugbek Aminjonov: “While degraded trust and cohesion within a country are often shown to have large socioeconomic impacts, they can also have dramatic consequences when compliance is required for collective survival. We illustrate this point in the context of the COVID-19 crisis. Policy responses all over the world aim to reduce social interaction and limit contagion.

Using data on human mobility and political trust at the regional level in Europe, we examine whether compliance with these containment policies depends on the level of trust in policy makers prior to the crisis. Using a double difference approach around the time of lockdown announcements, we find that high-trust regions decrease their mobility related to non-necessary activities significantly more than low-trust regions. We also exploit country and time variation in treatment using the daily strictness of national policies. The efficiency of policy stringency in terms of mobility reduction significantly increases with trust. The trust effect is nonlinear and increases with the degree of stringency. We assess how the impact of trust on mobility potentially translates in terms of mortality growth rates….(More)”.
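The double difference (difference-in-differences) design mentioned in the abstract can be illustrated with a toy calculation. This is a minimal sketch, not the paper's data: the group labels and mobility numbers below are invented assumptions purely for demonstration.

```python
# Illustrative double difference (difference-in-differences) sketch on
# invented mobility data. Numbers and groups are hypothetical, not the
# paper's estimates.

def did_estimate(treated_post, treated_pre, control_post, control_pre):
    """Double difference: (treated post - pre) minus (control post - pre)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Mean non-essential mobility (index, 100 = pre-lockdown baseline);
# hypothetical values for high-trust ("treated") vs low-trust regions:
high_trust_pre, high_trust_post = 100.0, 55.0
low_trust_pre, low_trust_post = 100.0, 70.0

effect = did_estimate(high_trust_post, high_trust_pre,
                      low_trust_post, low_trust_pre)
print(effect)  # negative: high-trust regions cut mobility by more
```

The double difference nets out the common post-lockdown drop shared by both groups, isolating the extra reduction associated with higher trust.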

Big data, privacy and COVID-19 – learning from humanitarian expertise in data protection


Andrej Zwitter & Oskar J. Gstrein at the Journal of International Humanitarian Action: “The use of location data to control the coronavirus pandemic can be fruitful and might improve the ability of governments and research institutions to combat the threat more quickly. It is important to note that location data is not the only useful data that can be used to curb the current crisis. Genetic data can be relevant for AI-enhanced searches for vaccines, and monitoring online communication on social media might be helpful to keep an eye on peace and security (Taulli n.d.). However, the use of such large amounts of data comes at a price for individual freedom and collective autonomy. The risks of using such data should ideally be mitigated through dedicated legal frameworks which describe the purpose and objectives of data use, its collection, analysis, storage and sharing, as well as the erasure of ‘raw’ data once insights have been extracted. In the absence of such clear and democratically legitimized norms, one can only resort to fundamental rights provisions such as Article 8 paragraph 2 of the ECHR, which reminds us that any infringement of rights such as privacy must be in accordance with the law, necessary in a democratic society, in pursuit of a legitimate objective and proportionate in its application.

However, as shown above, legal frameworks including human rights standards are currently not capable of effectively ensuring data protection, since they focus too much on the individual as the point of departure. Hence, we submit that the guidelines and standards for responsible data use currently applicable in the humanitarian sector should also be fully applicable to the corporate, academic and state efforts now being undertaken to curb the COVID-19 crisis globally. Instead of ‘re-calibrating’ the expectations of individuals regarding their own privacy and collective autonomy, the requirements for the use of data should be broader and more comprehensive. Applicable principles and standards as developed by OCHA, the 510 project of the Dutch Red Cross, or by academic initiatives such as the Signal Code are valid minimum standards during a humanitarian crisis. Hence, they are also applicable minimum standards during the current pandemic.

Core findings that can be extracted from these guidelines and standards for practical implementation in data-driven responses to COVID-19 are:

  • data sensitivity is highly contextual; the same data can be sensitive in different contexts. Location data during the current pandemic might be very useful for epidemiological analysis. However, if (ab-)used to re-calibrate political power relations, the same data is open to misuse. Hence, any party supplying data or data analysis needs to check whether the data and insights could be misused in the context in which they are presented.
  • privacy and data protection are important values; they do not disappear during a crisis. Nevertheless, they have to be weighed against respective benefits and risks.
  • data-breaches are inevitable; with time (t) approaching infinity, the chance of any system being hacked or becoming insecure approaches 100%. Hence, it is not a question of whether, but when. Therefore, organisations have to prepare sound data retention and deletion policies.
  • data ethics is an obligation to provide high-quality analysis; using machine learning and big data might be appealing at the moment, but the quality of the source data might be low, and the results might be unreliable or even harmful. Biases in incomplete datasets, algorithms and human users are abundant and widely discussed. We must not forget that in times of crisis the risk of bias is more pronounced, and more problematic, due to the vulnerability of data subjects and groups. Therefore, working to the highest standards of data processing and analysis is an ethical obligation.
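The "not whether, but when" claim in the data-breach bullet can be made concrete with a toy probability model: if each year carries an independent breach probability p, the chance of at least one breach within t years is 1 − (1 − p)^t, which tends to 1 as t grows. The 5% annual rate below is purely an illustrative assumption, not an empirical figure.

```python
# Toy model behind "not whether, but when": with an independent breach
# probability p in each year, P(at least one breach within t years)
# equals 1 - (1 - p)**t, which approaches 1 as t grows.
# The 5% annual rate is an illustrative assumption.

def breach_probability(p_per_year: float, years: int) -> float:
    """Probability of at least one breach within the given number of years."""
    return 1.0 - (1.0 - p_per_year) ** years

for t in (1, 10, 50, 100):
    print(f"within {t:3d} years: {breach_probability(0.05, t):.3f}")
```

Even a modest annual risk compounds toward near-certainty over a long horizon, which is the point of preparing retention and deletion policies in advance.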

The adherence to these principles is particularly relevant in times of crisis such as now, when they mark the difference between societies that focus on control and repression, on the one hand, and those that believe in freedom and autonomy, on the other. Eventually, we will need to think about including data policies in legal frameworks for state-of-emergency regulations, and to coordinate with corporate stakeholders as well as private organisations on how best to deal with such crises. Data-driven practices have to be used in a responsible manner. Furthermore, it will be important to observe whether the data practices and surveillance assemblages introduced under current circumstances will be rolled back to the status quo ante when we return to normalcy. If not, our rights will be hollowed out, waiting only for the next crisis to become irrelevant….(More)”.

Sovereigns, Viruses, and the Law: The Normative Challenges of Pandemic in Today’s Information Societies


Paper by Ugo Pagallo: “The paper examines the legal and political impact of the Covid-19 crisis, drawing attention to fundamental questions on authority and political legitimacy, coercion and obligation, power and cooperation. National states and sovereign governments have had, and will continue to have, a crucial role in re-establishing the public health sector and addressing the colossal challenges of economic reconstruction. Scholars have accordingly discussed the set of legal means deployed during this crisis: emergency decrees, lockdowns, travel bans and, generally speaking, the powers of the state of exception.

The aim of this paper is to stress the limits of such perspectives on the powers of national governments and sovereigns, in order to illustrate what lies beyond such powers. Focus should be on the ontological, epistemic and normative constraints that affect today’s rights and duties of national states. Such constraints correspond to a class of problems that is complex, often transnational, and increasingly data-driven. In addition, we should not overlook the lessons learnt from such fields as environmental law and internet governance, anti-terrorism and transnational business law, up to the regulation of Artificial Intelligence (AI). Such fields show that legal co-regulation and mechanisms of coordination and cooperation complement the traditional powers of national governments even in times of the mother of all pandemics. The Covid-19 crisis has often been interpreted as if it were the last chapter of an on-going history about the Leviathan and its bio-powers. It is not. The crisis marks the end of the first chapter in the history of today’s information societies….(More)”.

Apparent Algorithmic Bias and Algorithmic Learning


Paper by Anja Lambrecht and Catherine E. Tucker: “It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, such concerns have focused on the idea that the algorithm’s code could reflect bias, or the data that feeds the algorithm might lead the algorithm to produce uneven outcomes.

In this paper, we highlight another reason why algorithms might appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses those data at different speeds, it will produce differential outcomes, potentially disadvantaging minority groups.

Specifically, we revisit a classic study which documents that searches on Google for black names were more likely to return ads highlighting the need for a criminal background check than searches for white names. We show that at least a partial explanation for this finding is that if consumer demand for a piece of information is low, an algorithm accumulates information more slowly and thus takes longer to learn about consumer preferences. Since black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad is more likely to persist in searches next to black names even if the algorithm judges the ad to be of low quality. Therefore, the algorithm may be likely to show an ad — including an undesirable ad — in the context of searches for a disadvantaged group for a longer period of time.

We replicate this result in the context of religious affiliations and present evidence that ads targeted at searches for religious groups persist longer for groups that are searched for less. This suggests that the process of algorithmic learning can lead to differential outcomes between those whose characteristics are more common and those who are rarer in society….(More)”.
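The mechanism described in this abstract — identical ad quality but unequal learning speed — can be sketched with a small simulation. This is an illustration under assumed parameters (the click rate, daily impression volumes, and confidence target are all invented), not the authors' model or data: the algorithm needs roughly the same number of impressions for either group, so the group that is searched for less often takes proportionally longer in calendar time to be judged.

```python
import random

# Sketch of the learning-speed mechanism: estimating an ad's click-through
# rate requires roughly the same NUMBER of impressions for any group, so a
# group with fewer daily searches takes proportionally more DAYS to reach
# the same estimate quality. All parameters below are illustrative
# assumptions, not the paper's data.

random.seed(0)

TRUE_CTR = 0.02                     # same underlying ad quality for both groups
RATES = {"common_names": 1000,      # hypothetical impressions per day
         "rare_names": 100}

def days_to_confidence(daily_impressions, target_se=0.005):
    """Days until the standard error of the CTR estimate drops below target_se."""
    clicks = shows = day = 0
    while True:
        day += 1
        for _ in range(daily_impressions):
            shows += 1
            clicks += random.random() < TRUE_CTR
        p = clicks / shows
        se = (p * (1 - p) / shows) ** 0.5
        if p > 0 and se < target_se:
            return day

for group, rate in RATES.items():
    print(group, days_to_confidence(rate), "days")
```

Because the precision of the estimate shrinks with the square root of the number of impressions, a tenfold gap in search volume translates into roughly a tenfold gap in the days needed to learn the ad's quality — so a low-quality ad can persist much longer next to rarer searches.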

Testing Transparency


Paper by Brigham Daniels, Mark Buntaine and Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing the effects of transparency would prove particularly helpful in developing democracies where transparency norms have not taken hold, or have done so only slowly. In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.

Surprisingly, we found that transparency had no significant effect on almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency. A few rays of transparency shining light on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to successfully act as a disinfectant….(More)”.

Governing Privacy in the Datafied City


Paper by Ira Rubinstein and Bilyana Petkova: “Privacy — understood in terms of freedom from identification, surveillance and profiling — is a precondition of the diversity and tolerance that define the urban experience. But with “smart” technologies eroding the anonymity of city sidewalks and streets, and turning them into surveilled spaces, are cities the first to get caught in the line of fire? Alternatively, are cities the final bastions of privacy? Will the interaction of tech companies and city governments lead cities worldwide to converge around the privatization of public spaces and the monetization of data with little to no privacy protection? Or will we see different city identities take root, based on local resistance and legal action?

This Article delves into these questions from a federalist and localist angle. In contrast to other fields in which American cities lack the formal authority to govern, we show that cities still enjoy ample powers when it comes to privacy regulation. Fiscal concerns, rather than state or federal preemption, play a role in privacy regulation, and the question becomes one of how cities make use of their existing powers. Populous cosmopolitan cities, with a sizeable market share and significant political and cultural clout, are particularly well positioned to take advantage of agglomeration effects and drive hard bargains when interacting with private firms. Nevertheless, there are currently no privacy front runners or privacy laggards; instead, cities engage in “privacy activism” and “data stewardship.”

First, as privacy activists, U.S. cities use public interest litigation to defend their citizens’ personal information in high profile political participation and consumer protection cases. Examples include legal challenges to the citizenship question in the 2020 Census, and to instances of data breach including Facebook third-party data sharing practices and the Equifax data breach. We link the Census 2020 data wars to sanctuary cities’ battles with the federal administration to demonstrate that political dissent and cities’ social capital — diversity — are intrinsically linked to privacy. Regarding the string of data breach cases, cities expand their experimentation zone by litigating privacy interests against private parties.

Second, cities as data stewards use data to regulate their urban environment. As providers of municipal services, they collect, analyze and act on a broad range of data about local citizens or cut deals with tech companies to enhance transit, housing, utility, telecom, and environmental services by making them smart while requiring firms like Uber and Airbnb to share data with city officials. This has proven contentious at times but in both North American and European cities, open data and more cooperative forms of data sharing between the city, commercial actors, and the public have emerged, spearheaded by a transportation data trust in Seattle. This Article contrasts the Seattle approach with the governance and privacy deficiencies accompanying the privately-led Quayside smart city project in Toronto. Finally, this Article finds the data trust model of data sharing to hold promise, not least since the European rhetoric of exclusively city-owned data presented by Barcelona might prove difficult to realize in practice….(More)”.

Citizen participation in food systems policy making: A case study of a citizens’ assembly


Paper by Bob Doherty et al: “In this article, we offer a contribution to the emerging debate on the role of citizen participation in food system policy making. A key driver is the recognition that solutions to complex challenges in the food system need the active participation of citizens to drive positive change. To achieve this, it is crucial to give citizens agency in the process of designing policy interventions. This requires authentic and reflective engagement with the citizens who are affected by collective decisions. One such participatory approach is the citizens’ assembly, which has been used to deliberate a number of key issues, including climate change by the UK Parliament’s House of Commons (House of Commons, 2019). Here, we have undertaken an analysis of a citizens’ food assembly organized in the City of York (United Kingdom). This assembly was a way of hearing about a range of local food initiatives in Yorkshire whose aim is both to relocalise food supply and production and to tackle food waste.

These innovative community-based business models, known as ‘food hubs’, are increasing the diversity of food supply, particularly in disadvantaged communities. Among other things, the assembly found that the design and sortition of an assembly are aided by the involvement of local stakeholders in its planning. It also identified the potential for public procurement at the city level to drive more sustainable sourcing of food provision in the region. Furthermore, this citizens’ assembly has galvanized individual agency, with participants proactively seeking opportunities to create prosocial and environmental change in the food system….(More)”.

Polycentric governance and policy advice: lessons from Whitehall policy advisory systems


Paper by Patrick Diamond: “In countries worldwide, the provision of policy advice to central governments has been transformed by the deinstitutionalisation of policymaking, which has engaged a diverse range of actors in the policy process. Scholarship should therefore address the impact of deinstitutionalisation in terms of the scope and scale of policy advisory systems, as well as in terms of the influence of policy advisors. This article addresses this gap, presenting a programme of research on policy advice in Whitehall. Building on Craft and Halligan’s conceptualisation of a ‘policy advisory system’, it argues that in an era of polycentric governance, policy advice is shaped by ‘interlocking actors’ beyond government bureaucracy, and that the pluralisation of advisory bodies marginalises the civil service. The implications of such alterations are considered against the backdrop of governance changes, particularly the hybridisation of institutions, which has made policymaking processes complex, prone to unpredictability and at risk of policy blunders….(More)”.

Behavioural Insights Teams (BITs) and Policy Change: An Exploration of Impact, Location, and Temporality of Policy Advice


Paper by Ishani Mukherjee and Sarah Giest: “Behavioural Insights Teams (BITs) have gained prominence in government as policy advisors and are increasingly linked to the way policy instruments are designed. Despite the rise of BITs as unique knowledge brokers mediating the use of behavioral insights for policymaking, they remain underexplored in the growing literature on policy advice and advisory systems. The article emphasizes that the visible impact that BITs have on the content of policy instruments, the level of political support they garner and their structural diversity in different political departments all set them apart from typical policy brokers in policy advisory systems connecting the science-policy divide….(More)”.

The institutionalization of digital public health: lessons learned from the COVID-19 app


Paper by Ciro Cattuto and Alessandro Spina: “Amid the outbreak of the SARS-CoV-2 pandemic, there have been calls to use innovative digital tools for the purpose of protecting public health. There are a number of proposals to embed digital solutions into the regulatory strategies adopted by public authorities to control the spread of the coronavirus more effectively. They range from algorithms that detect population movements using telecommunication data to the use of artificial intelligence and high-performance computing power to detect patterns in the spread of the virus. However, the use of a mobile phone application for contact tracing is certainly the most popular.

These proposals, which have a very powerful persuasive force and have apparently contributed to the success of the public health response in a few Asian countries, also raise questions and criticisms, in particular with regard to the risks that these novel digital surveillance systems pose for privacy and, in the long term, for our democracies.

With this short paper, we would like to describe the pattern that has led to the institutionalization of digital tools for public health purposes. By tracing their origins to “digital epidemiology”, an approach that originated in the early 2010s, we show that, whilst there is limited experimental knowledge on the use of digital tools for tracking disease, this is the first time they are being introduced by policy-makers into the set of non-clinical emergency strategies for a major public health crisis….(More)”.