Privacy in Pandemic: Law, Technology, and Public Health in the COVID-19 Crisis


Paper by Tiffany C. Li: “The COVID-19 pandemic has caused millions of deaths and disastrous consequences around the world, with lasting repercussions for every field of law, including privacy and technology. The unique characteristics of this pandemic have precipitated an increase in the use of new technologies, including remote communications platforms, healthcare robots, and medical AI. Public and private actors alike are responding with new technologies, like heat sensing, and technologically influenced programs, like contact tracing, leading to a rise in government and corporate surveillance in sectors like healthcare, employment, education, and commerce. Advocates have raised the alarm over privacy and civil liberties violations, but the emergency nature of the pandemic has drowned out many of these concerns.

This Article is the first comprehensive account of privacy impacts related to technology and public health responses to the COVID-19 crisis. Many have written on the general need for better health privacy protections, education privacy protections, consumer privacy protections, and protections against government and corporate surveillance. However, this Article is the first to examine these problems of privacy and technology comprehensively and specifically in light of the pandemic, arguing that the lens of the pandemic exposes the need for both large-scale and small-scale reform of privacy law. This Article approaches these problems with a focus on technical realities and social salience, and with a critical awareness of digital and political inequities, crafting normative recommendations with these concepts in mind.

Understanding privacy in this time of pandemic is critical for law and policymaking in the near future and for the long-term goals of creating a future society that protects both civil liberties and public health. It is also important to create a contemporary scholarly understanding of privacy in pandemic at this moment in time, as a matter of historical record. By examining privacy in pandemic, in the midst of pandemic, this Article seeks to create a holistic scholarly foundation for future work on privacy, technology, public health, and legal responses to global crises….(More)”
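
The contact-tracing programs mentioned above differ sharply in how much data they centralize. As a point of reference, here is a minimal Python sketch of the decentralized exposure-notification pattern that many apps adopted during the pandemic; it is an illustration of the general design only, not the Article's analysis or any specific app's protocol, and the class and token scheme are simplified assumptions.

```python
import secrets

def new_token() -> str:
    """Generate an ephemeral, unlinkable broadcast token."""
    return secrets.token_hex(16)

class Phone:
    """Toy model of a device in a decentralized exposure-notification scheme."""

    def __init__(self):
        self.sent = set()    # tokens this device has broadcast
        self.heard = set()   # tokens overheard from nearby devices

    def broadcast(self) -> str:
        token = new_token()
        self.sent.add(token)
        return token

    def listen(self, token: str) -> None:
        self.heard.add(token)

    def check_exposure(self, published_infected_tokens: set) -> bool:
        # Matching happens on-device; only users who test positive upload
        # their own tokens, so no central server learns the contact graph.
        return bool(self.heard & published_infected_tokens)

alice, bob = Phone(), Phone()
bob.listen(alice.broadcast())           # Alice and Bob were near each other
print(bob.check_exposure(alice.sent))   # True: Bob learns of a possible exposure
print(alice.check_exposure(bob.sent))   # False: Bob broadcast nothing to Alice
```

Centralized designs, by contrast, upload the contact data itself, which is one reason contact tracing features so prominently in the surveillance concerns the Article documents.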

Blockchain as a Confidence Machine: The Problem of Trust & Challenges of Governance


Paper by Primavera De Filippi, Morshed Mannan and Wessel Reijers: “Blockchain technology was created as a response to the trust crisis that swept the world in the wake of the 2008 financial crisis. Bitcoin and other blockchain-based systems were presented as a “trustless” alternative to existing financial institutions and even governments. Yet, while the trustless nature of blockchain technology has been heavily questioned, little research has been done as to what blockchain technologies actually bring to the table in place of trust. This article draws from the extensive academic discussion on the concepts of “trust” and “confidence” to argue that blockchain technology is not a ‘trustless technology’ but rather a ‘confidence machine’. First, the article provides a review of the multifaceted conceptualisations of trust and confidence, and the relationship between these two concepts. Second, the claim is made that blockchain technology relies on cryptographic rules, mathematics, and game-theoretical incentives in order to increase confidence in the operations of a computational system. Yet, such an increase in confidence ultimately relies on the proper operation and governance of the underlying blockchain-based network, which requires trusting a variety of actors. Third, the article turns to legal, constitutional and polycentric governance theory to explore the governance challenges of blockchain-based systems, in light of the tension between procedural confidence and trust….(More)”
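
To make the “confidence machine” claim concrete, here is a minimal Python sketch of a hash-chained ledger. It is a toy illustration under simplified assumptions (no consensus, no signatures), not the authors' model: each block commits to the hash of its predecessor, so tampering with any past record is detectable without trusting a counterparty.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Confidence check: every block must reference its predecessor's true hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "alice pays bob 5")
append_block(ledger, "bob pays carol 2")
print(verify(ledger))                       # True
ledger[0]["data"] = "alice pays bob 500"    # tamper with recorded history
print(verify(ledger))                       # False: the change is detectable
```

As the article stresses, this procedural confidence still presupposes trust in the people who write, run, and govern the protocol itself.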

Deliberation Against Participation? Yellow Vests and Grand Débat


Paper by Tamara Ehs and Monika Mokre: “The yellow vest movement started in November 2018 and has become the longest protest movement in France since 1945. The movement provoked different reactions from the French government—on the one hand, violence and repression; on the other hand, concessions. One of these concessions was to open a channel for citizen participation by organizing the so-called “Grand Débat.” It was clear to all observers that this was less an attempt to further democracy in France than to calm down the protests of the yellow vests. Thus, it seemed doubtful from the beginning whether this form of participatory democracy could be understood as a real form of citizens’ deliberation, and in fact, several shortcomings with regard to procedure and participation were pointed out by theorists of deliberative democracy. The aim of this article is to analyze the Grand Débat with regard to its deliberative qualities and shortcomings….(More)”.

Innovation Policy, Structural Inequality, and COVID-19


Paper by Shobita Parthasarathy: “COVID-19 has shown the world that public policies tend to benefit the most privileged among us, and innovation policy is no exception. While the US government’s approach to innovation—research funding and patent policies and programs that value scientists’ and private sector freedoms—has been copied around the world due to its apparent success, I argue that it has hurt poor and marginalized communities. It has limited our understanding of health disparities and how to address them, and hampered access to essential technologies due to both lack of coordination and high cost. Fair and equal treatment of vulnerable citizens requires sensitive and dedicated policies that attend explicitly to the fact that the benefits of innovation do not simply trickle down….(More)”.

Monitoring global digital gender inequality using the online populations of Facebook and Google


Paper by Ridhi Kashyap, Masoomali Fatehkia, Reham Al Tamime, and Ingmar Weber: “Background: In recognition of the empowering potential of digital technologies, gender equality in internet access and digital skills is an important target in the United Nations (UN) Sustainable Development Goals (SDGs). Gender-disaggregated data on internet use are limited, particularly in less developed countries.

Objective: We leverage anonymous, aggregate data on the online populations of Google and Facebook users available from their advertising platforms to fill existing data gaps and measure global digital gender inequality.

Methods: We generate indicators of country-level gender gaps on Google and Facebook. Using these online indicators independently and in combination with offline development indicators, we build regression models to predict gender gaps in internet use and digital skills computed using available survey data from the International Telecommunication Union (ITU).

Results: We find that women are significantly underrepresented in the online populations of Google and Facebook in South Asia and sub-Saharan Africa. These platform-specific gender gaps are strong predictors of gender gaps in internet access and basic digital skills in these populations. Comparing platforms, we find that Facebook gender gap indicators perform better than Google indicators at predicting ITU gender gaps in internet use and low-level digital skills. Models using these online indicators outperform those using only offline development indicators. The best-performing models, however, are those that combine Facebook and Google online indicators with a country’s development indicators such as the Human Development Index….(More)”.
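
A rough Python sketch of the indicator-and-regression pipeline described above: the country values below are invented placeholders, not the study's data, and the two-feature linear model merely stands in for the authors' fuller specifications.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Country-level inputs (invented placeholders, not the study's data):
# ad-audience estimates by gender plus an offline development indicator.
fb_female = np.array([4.0e6, 1.2e6, 0.9e6, 2.5e6])   # Facebook ad audience, women
fb_male   = np.array([4.2e6, 2.9e6, 2.1e6, 2.6e6])   # Facebook ad audience, men
hdi       = np.array([0.80, 0.55, 0.50, 0.75])        # Human Development Index

# Platform gender-gap indicator: female-to-male ratio of the online population.
fb_gap = fb_female / fb_male

# Quantity to predict: an ITU-style female-to-male ratio of internet use
# (placeholder values; the real analysis uses survey-based ITU estimates).
itu_gap = np.array([0.95, 0.55, 0.48, 0.90])

# Combine the online indicator with the offline development indicator,
# as in the best-performing models described above.
X = np.column_stack([fb_gap, hdi])
model = LinearRegression().fit(X, itu_gap)
print(model.predict(X))   # in-sample predictions of the internet-use gender gap
```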

Data to the rescue: how humanitarian aid NGOs should collect information based on the GDPR


Paper by Theodora Gazi: “Data collection is valuable before, during and after interventions in order to increase the effectiveness of humanitarian projects. Although the General Data Protection Regulation (GDPR) sets forth rules for the processing of personal data, its implementation by humanitarian aid actors is crucial and presents challenges. Failure to comply triggers severe risks for both data subjects and the reputation of the actor. This article provides insights into the implementation of the guiding principles of the GDPR, the legal bases for data processing, data subjects’ rights and data sharing during the provision of humanitarian assistance…(More)”

Computational social science: Obstacles and opportunities


Paper by David M. J. Lazer et al: “The field of computational social science (CSS) has exploded in prominence over the past decade, with thousands of papers published using observational data, experimental designs, and large-scale simulations that were once unfeasible or unavailable to researchers. These studies have greatly improved our understanding of important phenomena, ranging from social inequality to the spread of infectious diseases. The institutions supporting CSS in the academy have also grown substantially, as evidenced by the proliferation of conferences, workshops, and summer schools across the globe, across disciplines, and across sources of data. But the field has also fallen short in important ways. Many institutional structures around the field—including research ethics, pedagogy, and data infrastructure—are still nascent. We suggest opportunities to address these issues, especially in improving the alignment between the organization of the 20th-century university and the intellectual requirements of the field….(More)”.

Prioritizing COVID-19 tests based on participatory surveillance and spatial scanning


Paper by O.B. Leal-Neto et al: “Participatory surveillance has shown promising results from its conception to its application in several public health events. The use of a collaborative information pathway provides a rapid way to collect data on symptomatic individuals in the territory, complementing traditional health surveillance systems. In Brazil, this methodology has been used at the national level since 2014 during mass gathering events, which are of great importance for monitoring public health emergencies.

With the occurrence of the COVID-19 pandemic, the limited availability of the main non-pharmaceutical interventions for epidemic control – in this case, testing and social isolation – added to the existing challenges of underreporting of cases and delayed notifications, creating a demand for alternative sources of up-to-date information to complement the current system for disease surveillance. Several studies have demonstrated the benefits of participatory surveillance in coping with COVID-19, reinforcing the opportunity to modernize the way health surveillance has been carried out. Additionally, spatial scanning techniques have been used to understand syndromic scenarios, investigate outbreaks, and analyze epidemiological risk, constituting relevant tools for health management. While there are limitations in the quality of traditional health systems, the data generated by participatory surveillance can be combined with traditional techniques to clarify epidemiological risks that demand urgent decision-making. Moreover, with limited testing available, identifying priority areas for intervention is an important activity in the early response to public health emergencies. This study aimed to describe and analyze priority areas for COVID-19 testing by combining data from participatory surveillance and traditional surveillance of respiratory syndromes….(More)”.
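
As a simplified illustration of the spatial-scanning step, the Python sketch below ranks areas by a Kulldorff-style Poisson log-likelihood ratio comparing observed symptomatic reports against expected counts. The area names and counts are invented placeholders, and a per-area statistic stands in for the full scan over candidate windows used in practice; this is not the study's code.

```python
import math

# Illustrative inputs (invented placeholders): symptomatic reports from
# participatory surveillance and the population at risk in each area.
areas = {
    "district_1": {"cases": 40, "population": 10_000},
    "district_2": {"cases": 15, "population": 12_000},
    "district_3": {"cases": 60, "population": 9_000},
}

total_cases = sum(a["cases"] for a in areas.values())
total_pop = sum(a["population"] for a in areas.values())

def poisson_llr(cases: int, expected: float, total: int) -> float:
    """Kulldorff-style log-likelihood ratio for a single candidate zone."""
    if cases <= expected:            # only elevated-risk zones are of interest
        return 0.0
    out_cases = total - cases
    out_expected = total - expected
    inside = cases * math.log(cases / expected)
    outside = out_cases * math.log(out_cases / out_expected) if out_cases > 0 else 0.0
    return inside + outside

ranking = []
for name, area in areas.items():
    expected = total_cases * area["population"] / total_pop
    ranking.append((poisson_llr(area["cases"], expected, total_cases), name))

# Areas with the highest statistic are candidate priority zones for testing.
for score, name in sorted(ranking, reverse=True):
    print(f"{name}: LLR = {score:.2f}")
```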

Why hypothesis testers should spend less time testing hypotheses


Paper by Scheel, Anne M., Leonid Tiokhin, Peder M. Isager, and Daniel Lakens: “For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable (Meehl, 1990). In response to the replication crisis, reforms in psychology have focused on formalising procedures for testing hypotheses. These reforms were necessary and impactful. However, as an unexpected consequence, psychologists have begun to realise that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound ‘derivation chain’ between test and theory is counterproductive. Instead, various non-confirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative.

Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and their functional form, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognised and incentivised as a crucial goal in and of itself.

In this article, we discuss how shifting the focus to non-confirmatory research can tie together many loose ends of psychology’s reform movement and help us lay the foundation to develop strong, testable theories, as Paul Meehl urged us to….(More)”

Business-to-Business Data Sharing: An Economic and Legal Analysis


Paper by Bertin Martens et al: “The European Commission announced in its Data Strategy (2020) its intentions to propose an enabling legislative framework for the governance of common European data spaces, to review and operationalize data portability, to prioritize standardization activities and foster data interoperability and to clarify usage rights for co-generated IoT data. This Strategy starts from the premise that there is not enough data sharing and that much data remain locked up and are not available for innovative re-use. The Commission will also consider the adoption of a New Competition Tool as well as the adoption of ex ante regulation for large online gate-keeping platforms as part of the announced Digital Services Act Package. In this context, the goal of this report is to examine the obstacles to Business-to-Business (B2B) data sharing: what keeps businesses from sharing or trading more of their data with other businesses and what can be done about it? For this purpose, this report uses the well-known tools of legal and economic thinking about market failures. It starts from the economic characteristics of data and explores to what extent private B2B data markets result in a socially optimal degree of data sharing, or whether there are market failures in data markets that might justify public policy intervention.

It examines the conditions under which monopolistic data market failures may occur. It contrasts these welfare losses with the welfare gains from economies of scope in data aggregation in large pools. It also discusses other potential sources of B2B data market failures arising from negative externalities, risks, transaction costs, and asymmetric information. In a next step, the paper explores solutions to overcome these market failures. Private third-party data intermediaries may be in a position to overcome market failures due to high transaction costs and risks. They can aggregate data in large pools to harvest the benefits of economies of scale and scope in data. Where third-party intervention fails, regulators can step in, with ex-post competition instruments and with ex-ante regulation. The latter includes data portability rights for personal data and mandatory data access rights….(More)”.