Contracting for Personal Data


Paper by Kevin E. Davis and Florencia Marotta-Wurgler: “Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car, or licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers may not know, or may misperceive, the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims.

We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We examine the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects….(More)”.
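To make the empirical design concrete, here is a minimal sketch, with entirely hypothetical policies and scores rather than the authors' actual coding scheme, of how privacy-policy terms might be compared across markets before and after the GDPR:

```python
# A minimal sketch (not the authors' actual coding scheme): compare
# hypothetical privacy-policy "term favorability" scores across markets
# before and after the GDPR took effect.
import pandas as pd

# One row per hand-coded policy: market segment, period, and a
# made-up favorability score on a 0-10 scale.
policies = pd.DataFrame({
    "market": ["cloud", "cloud", "dating", "dating", "gaming", "gaming"],
    "period": ["pre", "post", "pre", "post", "pre", "post"],
    "score":  [4.2,    6.8,    3.1,   5.0,    3.8,    4.1],
})

# Mean score per market before and after the GDPR, plus the change.
by_market = policies.pivot_table(index="market", columns="period",
                                 values="score", aggfunc="mean")
by_market["change"] = by_market["post"] - by_market["pre"]
print(by_market)
```

The authors' hand-collected data set and coding are of course far richer; the point is only that the market-by-market, before-and-after comparison they describe reduces to this kind of grouped contrast.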

Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum


Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 that they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either complained that such measures were only a drop in the bucket with regard to toxic and harmful speech online, or accused Facebook & Co. of penalizing only right-wing speakers, thereby censoring political opinions and acting as some kind of anti-conservative media conglomerate. This anecdote raised, above all, the question: should someone like Alex Jones be excluded from Facebook? And the question of “should” includes that of “may Facebook exclude users for publishing political opinions?”.

As social media platforms take up more and more space in our daily lives, enabling not only individual and mass communication but also offering payment and other services, there is still a need for a common understanding of the social and communicative space they create in cyberspace. By common I mean on a global scale, since this is how most social media platforms operate or aspire to operate (see Facebook’s mission statement: “bring the world closer together”). While social science has proclaimed a new digital sphere and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free space between the state and society, a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion. Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of public space in cyberspace is central, and it constantly evolves as platforms broaden their services; hence it needs to be examined more closely. When looking at social media platforms, we need to take into account how they moderate speech and, subsequently, how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the room for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And should they, accordingly, create a new type of public forum?

The goal of this article is to provide answers to the questions mentioned….(More)”.

GROW Citizens’ Observatory: Leveraging the power of citizens, open data and technology to generate engagement and action on soil policy and soil moisture monitoring


Paper by M. Woods et al: “Citizens’ Observatories (COs) seek to extend conventional citizen science activities to scale up the potential of citizen sensing for environmental monitoring and the creation of open datasets, knowledge and action around environmental issues, both local and global. The GROW CO has connected the planetary dimension of satellites with the hyperlocal context of farmers and their soil. GROW has faced three main interrelated challenges, one associated with each of the observatory’s three core audiences (citizens, scientists and policy makers): sustaining citizen engagement, assuring the quality of citizen-generated data, and moving from data to action in practice and policy. We discuss how each of these challenges was overcome and gave way to the following related project outputs: 1) contributing to satellite validation and enhancing the collective intelligence of GEOSS; 2) dynamic maps and visualisations for growers, scientists and policy makers; and 3) social-technical innovations and data art…(More)”.
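The first output, contributing to satellite validation, ultimately rests on comparing citizen sensor readings of soil moisture with co-located satellite estimates. A minimal sketch of that comparison, using made-up readings and standard agreement metrics rather than GROW's actual validation pipeline, might look like this:

```python
# A minimal sketch of ground-vs-satellite soil-moisture validation
# (made-up readings; GROW's actual validation involves careful matching
# of sensors and satellite footprints in time and space).
import numpy as np

# Volumetric soil moisture (m^3/m^3) from citizen sensors and the
# co-located satellite product for the same dates and locations.
ground    = np.array([0.21, 0.25, 0.30, 0.18, 0.27, 0.33])
satellite = np.array([0.19, 0.27, 0.28, 0.20, 0.24, 0.31])

bias = np.mean(satellite - ground)                  # systematic offset
rmse = np.sqrt(np.mean((satellite - ground) ** 2))  # overall error
corr = np.corrcoef(ground, satellite)[0, 1]         # agreement in dynamics

print(f"bias={bias:.3f}, RMSE={rmse:.3f}, r={corr:.2f}")
```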

Supporting priority setting in science using research funding landscapes


Report by the Research on Research Institute: “In this working paper, we describe how to map research funding landscapes in order to support research funders in setting priorities. Based on data on scientific publications, a funding landscape highlights the research fields that are supported by different funders. The funding landscape described here has been created using data from the Dimensions database. It is presented using a freely available web-based tool that provides an interactive visualization of the landscape. We demonstrate the use of the tool through a case study in which we analyze funding of mental health research…(More)”.
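At its core, such a funding landscape is an aggregation of publication records by funder and research field. A toy version of that aggregation, with invented records standing in for the Dimensions data and field classification, might look like this:

```python
# A toy funding landscape: count publications by funder and research field,
# then normalise each funder's row into a portfolio share. The records are
# invented and stand in for data extracted from the Dimensions database.
import pandas as pd

pubs = pd.DataFrame({
    "funder": ["Funder A", "Funder A", "Funder B", "Funder B",
               "Funder B", "Funder C"],
    "field":  ["depression", "psychosis", "depression", "dementia",
               "psychosis", "dementia"],
})

counts = pd.crosstab(pubs["funder"], pubs["field"])   # publications per cell
shares = counts.div(counts.sum(axis=1), axis=0)       # each funder's mix
print(shares.round(2))
```

The web-based tool described in the report layers an interactive visualization on top of this kind of aggregation, which is what makes it usable for priority setting.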

Secure Shouldn’t Mean Secret: A Call for Public Policy Schools to Share, Support, and Teach Data Stewardship


Paper by Maggie Reeves and Robert McMillan: “The public has long benefitted from researchers using individual-level administrative data (microdata) to answer questions on a gamut of issues related to the efficiency, effectiveness, and causality of programs and policies. However, these benefits have not been pervasive because few researchers have had access to microdata, and their tools, security practices, and technology have rarely been shared. With a clear push to expand access to microdata for purposes of rigorous analysis (Abraham et al., 2017; ADRF Network Working Group Participants, 2018), public policy schools must grapple with imperfect options and decide how to support secure data facilities for their faculty and students. They also must take the lead in educating students as data stewards who can navigate the challenges of microdata access for public policy research.

This white paper outlines the essential components of any secure facility, the pros and cons of four types of secure microdata facilities used for public policy research, the benefits of sharing tools and resources, and the importance of training. It closes with a call on public policy schools to include data stewardship as part of the standard curriculum…(More)”.

Individualism and Governance of the Commons


Paper by Meina Cai et al: “Individualistic cultures are associated with economic growth and development. Do they also improve governance of the commons? According to the property rights literature, conservation is more likely when the institutions of property arise from a spontaneous process in response to local problems. We argue that individualistic cultures contribute to conservation by encouraging property rights entrepreneurship: efforts by individuals and communities to resolve commons dilemmas, including their investment of resources in securing political recognition of spontaneously arising property rights. We use the theory to explain cross-country rates of change in forest cover. Using both subjective measures of individualistic values and the historical prevalence of disease as instruments for individualism, we find that individualistic societies have higher reforestation rates than collectivist ones, consistent with our theory…(More)”.
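The identification strategy, which instruments individualism with (among other things) the historical prevalence of disease and then relates it to forest-cover change, is a standard two-stage least squares design. The following sketch runs that logic on simulated country-level data; the variable names and numbers are illustrative, not the authors' data:

```python
# A sketch of the two-stage least squares logic behind the paper's design,
# on simulated country-level data (not the authors' actual data).
import numpy as np

rng = np.random.default_rng(0)
n = 120                                                    # "countries"
disease = rng.normal(size=n)                               # instrument: historical pathogen prevalence
individualism = -0.8 * disease + rng.normal(size=n)        # endogenous regressor
reforestation = 0.5 * individualism + rng.normal(size=n)   # outcome: rate of change in forest cover

def ols(y, X):
    """Return OLS coefficients for y on X (X includes a constant column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
# First stage: predict individualism from the instrument.
X1 = np.column_stack([const, disease])
individualism_hat = X1 @ ols(individualism, X1)
# Second stage: regress reforestation on the predicted (exogenous) part.
X2 = np.column_stack([const, individualism_hat])
beta = ols(reforestation, X2)
print(f"2SLS estimate of individualism's effect: {beta[1]:.2f}")
```

The hand-rolled OLS keeps the sketch dependency-free; a real analysis would use a dedicated IV estimator with appropriate standard errors and country-level controls.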

Urban Slums in a Datafying Milieu: Challenges for Data-Driven Research Practice


Paper by Bijal Brahmbhatt et al: “With the ongoing trend of urban datafication and growing use of data/evidence to shape developmental initiatives by state as well as non-state actors, this exploratory case study engages with the complex and often contested domains of data use. This study uses on-the-ground experience of working with informal settlements in Indian cities to examine how information value chains work in practice and the contours of their power to intervene in building an agenda of social justice into governance regimes. Using illustrative examples from ongoing action-oriented projects of Mahila Housing Trust in India such as the Energy Audit Project, Slum Mapping Exercise and women-led climate resilience building under the Global Resilience Partnership, it raises questions about challenges of making effective linkages between data, knowledge and action in and for slum communities in the global South by focussing on two issues.

First, it reveals dilemmas of achieving data accuracy when working with slum communities in developing cities where populations are dynamically changing, and where digitisation and the use of ICT have limited operational currency. The second issue focuses on data ownership. It foregrounds the need for complementary inputs and the heavy requirement for support systems in informal settlements in order to translate data-driven knowledge into actionable forms. Absence of these will blunt the edge of data-driven community participation in local politics. Through these intersecting streams, the study attempts to address how entanglements between southern urbanism, datafication, governance and social justice diversify the discourse on data justice. It highlights existing hurdles and structural hierarchies within a data-heavy developmental register emergent across multiple cities in the global South, where data-driven governmental regimes interact with convoluted urban forms and realities….(More)”.

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations


Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights….(More)”.

Data Fiduciary in Order to Alleviate Principal-Agent Problems in the Artificial Big Data Age


Paper by Julia M. Puaschunder: “The classic principal-agent problem in political science and economics describes agency dilemmas that arise when one person, the agent, is put in a situation to make decisions on behalf of another entity, the principal. A dilemma occurs when the individual profit-maximization interests of principal and agent are pitted against each other. This so-called moral hazard is nowadays emerging in the artificial big data age, when big-data-reaping entities act on behalf of agents who provide their data trusting in the principal’s integrity and responsible big data conduct. Yet to this day, no data fiduciary has been clearly described and established to protect the agent from misuse of data. This article introduces the agent’s predicament between the utility derived from information sharing and dignity in privacy, as well as hyper-hyperbolic discounting fallibilities that prevent agents from clearly foreseeing what consequences information sharing can have over time and in groups. The principal’s predicament between secrecy on the one hand and selling big data insights or using big data for manipulative purposes on the other will be outlined. Finally, the article draws a clear distinction between manipulation and nudging in relation to the potential social class division between those who nudge and those who are nudged…(More)”.

Nudging the Nudger: Toward a Choice Architecture for Regulators


Working Paper by Susan E. Dudley and Zhoudan Xie: “Behavioral research has shown that individuals do not always behave in ways that match textbook definitions of rationality. Recognizing that “bounded rationality” also occurs in the regulatory process and building on public choice insights that focus on how institutional incentives affect behavior, this article explores the interaction between the institutions in which regulators operate and their cognitive biases. It attempts to understand the extent to which the “choice architecture” regulators face reinforces or counteracts predictable cognitive biases. Just as behavioral insights are increasingly used to design choice architecture to frame individual decisions in ways that encourage welfare-enhancing choices, consciously designing the institutions that influence regulators’ policy decisions with behavioral insights in mind could lead to more public-welfare-enhancing policies. The article concludes with some modest ideas for improving regulators’ choice architecture and suggestions for further research….(More)”.