Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data


Paper by Daniel J. Solove: “Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation (GDPR), sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, biometric data, and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use non-sensitive data as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most non-sensitive data can give rise to inferences about sensitive data. In many privacy laws, data that can give rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.
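The proxy claim can be illustrated with a minimal, purely synthetic sketch. Everything below is invented for illustration: the attribute names, the two hypothetical "non-sensitive" proxies (a neighbourhood indicator and a purchase-pattern flag), and the correlation strengths. A simple majority-rule model trained only on the proxies recovers the sensitive attribute well above the 50% chance level:

```python
import random

def simulate_population(n, seed=0):
    """Generate hypothetical records: two non-sensitive proxies plus a sensitive attribute."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        s = rng.random() < 0.5                      # hypothetical sensitive attribute
        zip_a = rng.random() < (0.8 if s else 0.3)  # neighbourhood proxy, correlated with s
        store = rng.random() < (0.7 if s else 0.4)  # purchase-pattern proxy, correlated with s
        rows.append((zip_a, store, s))
    return rows

def fit_majority_rule(train):
    """For each proxy combination, predict the majority value of the sensitive attribute."""
    counts = {}
    for zip_a, store, s in train:
        pos, tot = counts.get((zip_a, store), (0, 0))
        counts[(zip_a, store)] = (pos + s, tot + 1)
    return {key: pos * 2 > tot for key, (pos, tot) in counts.items()}

def accuracy(rule, test):
    hits = sum(rule[(zip_a, store)] == s for zip_a, store, s in test)
    return hits / len(test)

data = simulate_population(5000)
rule = fit_majority_rule(data[:4000])
acc = accuracy(rule, data[4000:])
print(f"inference accuracy from non-sensitive proxies: {acc:.2f}")  # well above the 0.5 chance level
```

Nothing in the model ever sees the sensitive attribute at prediction time; the correlations alone are enough, which is the sense in which "nearly all personal data can be sensitive."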

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive — as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake — they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm.

To be effective, privacy law must focus on use, harm, and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be based on the use of personal data and proportionate to the harm and risk involved with those uses…(More)”.

Why Do Innovations Fail? Lessons Learned from a Digital Democratic Innovation


Paper by Jenny Lindholm and Janne Berg: “Democratic innovations are brought forward by political scientists as a response to worrying democratic deficits. This paper aims to evaluate the design, process, and outcome of digital democratic innovations. We study a mobile application for following local politics. Data is collected using three online surveys with different groups, and a workshop with young citizens. The results show that the app did not fully meet the democratic ideal of inclusiveness at the process stage, especially in reaching young people. However, the user groups that had used the app reported positive democratic effects…(More)”.

Who owns the map? Data sovereignty and government spatial data collection, use, and dissemination


Paper by Peter A. Johnson and Teresa Scassa: “Maps, created through the collection, assembly, and analysis of spatial data, are used to support government planning and decision-making. Traditionally, spatial data used to create maps are collected, controlled, and disseminated by government, although over time this role has shifted. This shift has been driven by the availability of alternate sources of data collected by private sector companies, and data contributed by volunteers to open mapping platforms, such as OpenStreetMap. In theorizing this shift, we provide examples of how governments use data sovereignty as a tool to shape spatial data collection, use, and sharing. We frame four models of how governments may navigate shifting spatial data sovereignty regimes: first, with government retaining complete control over data collection; second, with government contracting a third party to provide specific data collection services, but with data ownership and dissemination responsibilities resting with government; third, with government purchasing data under terms of access set by third-party data collectors, who disseminate data to several parties; and finally, with government retreating from or relinquishing data sovereignty altogether. Within this rapidly changing landscape of data providers, we propose that governments must consider how to address data sovereignty concerns to retain their ability to control data use in the public interest…(More)”.

Nudge and Nudging in Public Policy


Paper by Sanchayan Banerjee and Peter John: “Nudging has been widely used to make public policy, in fields such as personal finance, health, education, environment/climate, privacy, law, and human well-being. Nonetheless, as applications of nudging have increased, the toolkit of nudges has also expanded massively, ultimately leading to multiple different conceptualisations and definitions of the nudge. In this entry, we review developments in nudge and nudging in public policy. First, we briefly discuss the political philosophy and psychological paradigm behind the conventional nudge, and examples of economic modelling of nudge applications. Then, we highlight the role of nudges in behavioural public policy, an emerging subdiscipline of public policy which uses insights from the behavioural sciences to develop new policies. We review the many definitions of nudge and introduce alternative toolkits of behaviour change, such as thinks, boosts, and nudge+. We conclude with a discussion of the limitations of nudging in public policy and of future research in behavioural public policy…(More)”.

Experiments of Living Constitutionalism


Paper by Cass R. Sunstein: “Experiments of Living Constitutionalism urges that the Constitution should be interpreted so as to allow both individuals and groups to experiment with different ways of living, whether we are speaking of religious practices, family arrangements, political associations, civic associations, child-rearing, schooling, romance, or work. Experiments of Living Constitutionalism prizes diversity and plurality; it gives pride of place to freedom of speech, freedom of association, and free exercise of religion (which it would protect against the imposition of secular values); it cherishes federalism; it opposes authoritarianism in all its forms. While Experiments of Living Constitutionalism has considerable appeal, my purpose in naming it is not to endorse or defend it, but to use it as a thought experiment and to contrast it with Common Good Constitutionalism, with the aim of specifying the criteria on which one might embrace or defend any approach to constitutional law. My central conclusion is that we cannot know whether to accept or reject Experiments of Living Constitutionalism, Common Good Constitutionalism, Common Law Constitutionalism, democracy-reinforcing approaches, moral readings, originalism, or any other proposed approach without a concrete sense of what it entails – of what kind of constitutional order it would likely bring about or produce. No approach to constitutional interpretation can be evaluated without asking how it fits with the evaluator’s “fixed points,” which operate at multiple levels of generality. The search for reflective equilibrium is essential in deciding whether to accept a theory of constitutional interpretation…(More)”.

Studying open government data: Acknowledging practices and politics


Paper by Gijs van Maanen: “Open government and open data are often presented as the Asterix and Obelix of modern government—one cannot discuss the one without involving the other. Modern government, in this narrative, should open itself up, be more transparent, and allow the governed to have a say in their governance. The use of technologies, and especially the communication of governmental data, is then thought to be one of the crucial instruments helping governments achieve these goals. Much open government data research, hence, focuses on the publication of open government data, their reuse, and re-users. Recent research trends, by contrast, diverge from this focus on data and emphasize the importance of studying open government data in practice, in interaction with practitioners, while simultaneously paying attention to their political character. This commentary looks more closely at the implications of emphasizing the practical and political dimensions of open government data. It argues that researchers should explicate how, and in what way, open government data policies present solutions, and to what kinds of problems. Such explications should be based on a detailed empirical analysis of how different actors do or do not do open data. The key question to be continuously asked and answered when studying and implementing open government data is how the solutions openness presents latch onto the problems they aim to solve…(More)”.

Addressing the Global Data Divide through Digital Trade Law


Paper by Binit Agrawal and Neha Mishra: “The global data divide has emerged as a major policy challenge threatening equitable development, poverty alleviation, and access to information. Further, it has polarised countries on either side of the data schism, who have often reacted by implementing conflicting and sub-optimal measures. This paper surveys such policy measures, the politics behind them, and the footprints they have left on the digital trade or electronic commerce rules contained in free trade agreements (FTAs). First, this paper details an understanding of what constitutes the global data divide, focusing on three components, namely access, regulation, and use. Second, the paper surveys electronic commerce or digital trade rules in FTAs to understand whether existing rules deal with the widening data divide in a comprehensive manner and, if so, how. Our primary argument is that the existing FTA disciplines are deficient in addressing the global data divide. Key problems include insufficient participation by developing countries in framing digital trade rules, non-recognition of the data divide affecting developing countries, and lack of robust and implementable mechanisms to bridge the data divide. Finally, we present a proposal to reform digital trade rules in line with best practices emerging in FTA practice and the main areas where gaps must be bridged. Our proposals include enhancing technical assistance and capacity-building support, developing a tailored special and differential treatment (SDT) mechanism, incentivising the removal of data-related barriers by designing appropriate bargains in negotiations, and boosting international regulatory cooperation through innovative and creative mechanisms…(More)”.

A Comparative Study of Citizen Crowdsourcing Platforms and the Use of Natural Language Processing (NLP) for Effective Participatory Democracy


Paper by Carina Antonia Hallin: “The use of crowdsourcing platforms to harness citizen insights for policymaking has gained increasing importance in regional and national policy planning. Participatory democracy using crowdsourcing platforms includes various initiatives, such as generating ideas for new law reforms (Aitamurto and Landemore, 2015), economic development, and solving challenges related to creating inclusive social actions and interventions for better, healthier, and more prosperous local communities (Bentley and Pugalis, 2014). Such case observations, coupled with the increasing prevalence of internet-based communication, point to the real benefits of implementing participatory democracies on a mass scale in which citizens are invited to contribute their ideas, opinions, and deliberations (Salganik and Levy, 2015). By adopting collective intelligence platforms, public authorities can harness local knowledge from citizens to find the right ‘policy mix’ and collaborate with citizens and relevant actors in the policymaking processes. This comparative study aims to validate the adoption of collective intelligence and artificial intelligence/natural language processing (NLP) on crowdsourcing platforms for effective participatory democracy and policymaking in local governments. The study compares 15 citizen crowdsourcing platforms, including their use of NLP, for policymaking across Europe and the United States. The study offers a framework for working with citizen crowdsourcing platforms and explores the usefulness of NLP on these platforms for effective participatory democracy…(More)”.

The ethical and legal landscape of brain data governance


Paper by Paschal Ochang, Bernd Carsten Stahl, and Damian Eke: “Neuroscience research is producing big brain data, which both informs advances in neuroscience research and drives the development of datasets that support advanced medical solutions. These brain data are produced under different jurisdictions in different formats and are governed under different regulations. The governance of data has become essential and critical, resulting in the development of various governance structures to ensure that the quality, availability, findability, accessibility, usability, and utility of data are maintained. Furthermore, data governance is influenced by various ethical and legal principles. However, it is still not clear which ethical and legal principles should be used as a standard or baseline when managing brain data, due to varying practices and evolving concepts. Therefore, this study asks: what ethical and legal principles shape the current brain data governance landscape? A systematic scoping review and thematic analysis of articles focused on biomedical, neuro, and brain data governance was carried out to identify the ethical and legal principles which shape the current brain data governance landscape. The results revealed that there is currently large variation in how the principles are presented, and discussions around the terms are very multidimensional. Some of the principles are still in their infancy and are barely visible. A range of principles emerged during the thematic analysis, providing a potential list of principles which can offer a more comprehensive framework for brain data governance and a conceptual expansion of neuroethics…(More)”.

Liquid Democracy. Two Experiments on Delegation in Voting


Paper by Joseph Campbell, Alessandra Casella, Lucas de Lara, Victoria R. Mooers & Dilip Ravindran: “Under Liquid Democracy (LD), decisions are taken by referendum, but voters are allowed to delegate their votes to other voters. Theory shows that in common interest problems where experts are correctly identified, the outcome can be superior to simple majority voting. However, even when experts are correctly identified, delegation must be used sparingly because it reduces the variety of independent information sources. We report the results of two experiments, each studying two treatments: in one treatment, participants have the option of delegating to better informed individuals; in the second, participants can choose to abstain. The first experiment follows a tightly controlled design planned for the lab; the second is a perceptual task run online where information about signals’ precision is ambiguous. The two designs are very different, but the experiments reach the same result: in both, delegation rates are unexpectedly high and higher than abstention rates, and LD underperforms relative to both universal voting and abstention…(More)”.
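The trade-off the abstract describes (delegation concentrates votes and discards independent information sources) can be checked with a small Condorcet-style calculation. The electorate size and competence levels below are illustrative assumptions, not the paper's experimental parameters:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters, each correct
    with probability p, picks the right outcome (Condorcet jury theorem)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

def mixed_majority_correct(n_exp, p_exp, n_nov, p_nov):
    """Same probability for an electorate mixing n_exp experts with n_nov
    less-informed voters, all voting independently."""
    need = (n_exp + n_nov) // 2 + 1
    prob = 0.0
    for ke in range(n_exp + 1):           # correct experts
        for kn in range(n_nov + 1):       # correct novices
            if ke + kn >= need:
                prob += (comb(n_exp, ke) * p_exp**ke * (1 - p_exp)**(n_exp - ke)
                         * comb(n_nov, kn) * p_nov**kn * (1 - p_nov)**(n_nov - kn))
    return prob

# Universal voting: 5 experts (p=0.7) and 20 novices (p=0.6) all vote independently.
universal = mixed_majority_correct(5, 0.7, 20, 0.6)
# Full delegation: the 20 novices hand their votes to the experts, so the
# referendum is decided by the 5 expert signals alone.
delegated = majority_correct(5, 0.7)
print(f"universal: {universal:.3f}  delegated: {delegated:.3f}")
```

With these numbers the universal electorate wins even though each expert is individually more competent, because it aggregates 25 independent signals rather than 5 — the sense in which delegation "must be used sparingly."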