Contracting for Personal Data


Paper by Kevin E. Davis and Florencia Marotta-Wurgler: “Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car or licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers may not know or misperceive the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims.

We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We explore the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects….(More)”.

From smart to rebel city? Worlding, provincialising and the Barcelona Model


Paper by Greig Charnock, Hug March, Ramon Ribera-Fumaz: “This article examines the evolution of the ‘Barcelona Model’ of urban transformation through the lenses of worlding and provincialising urbanism. We trace this evolution from an especially dogmatic worlding vision of the smart city, under a centre-right city council, to its radical repurposing under the auspices of a municipal government led, after May 2015, by the citizens’ platform Barcelona en Comú. We pay particular attention to the new council’s objectives to harness digital platform technologies to enhance participative democracy, and its agenda to secure technological sovereignty and digital rights for its citizens. While stressing the progressive intent of these aims, we also acknowledge the challenge of going beyond the repurposing of smart technologies so as to engender new and radical forms of subjectivity among citizens themselves; a necessary basis for any urban revolution….(More)”.

How to ensure that your data science is inclusive


Blog by Samhir Vasdev: “As a new generation of data scientists emerges in Africa, they will encounter relatively little trusted, accurate, and accessible data upon which to apply their skills. It’s time to acknowledge the limitations of the data sources upon which data science relies, particularly in lower-income countries.

The potential of data science to support, measure, and amplify sustainable development is undeniable. As public, private, and civic institutions around the world recognize the role that data science can play in advancing their growth, an increasingly robust array of efforts has emerged to foster data science in lower-income countries.

This phenomenon is particularly salient in Sub-Saharan Africa. There, foundations are investing millions into building data literacy and data science skills across the continent. Multilaterals and national governments are pioneering new investments into data science, artificial intelligence, and smart cities. Private and public donors are building data science centers to cultivate cohorts of local, indigenous data science talent. Local universities are launching graduate-level data science courses.

Despite this progress, amid the hype surrounding data science rests an unpopular and inconvenient truth: As a new generation of data scientists emerges in Africa, they will encounter relatively little trusted, accurate, and accessible data that they can use for data science.

We hear promises of how data science can help teachers tailor curricula according to students’ performances, but many school systems don’t collect or track that performance data with enough accuracy and timeliness to perform those data science–enabled tweaks. We believe that data science can help us catch disease outbreaks early, but health care facilities often lack the specific data, like patient origin or digitized information, that is needed to discern those insights.

These fundamental data gaps invite the question: Precisely what data would we perform data science on to achieve sustainable development?…(More)”.

Human Rights in the Age of Platforms


Book edited by Rikke Frank Jørgensen: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.

Supporting priority setting in science using research funding landscapes


Report by the Research on Research Institute: “In this working paper, we describe how to map research funding landscapes in order to support research funders in setting priorities. Based on data on scientific publications, a funding landscape highlights the research fields that are supported by different funders. The funding landscape described here has been created using data from the Dimensions database. It is presented using a freely available web-based tool that provides an interactive visualization of the landscape. We demonstrate the use of the tool through a case study in which we analyze funding of mental health research…(More)”.

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, last April it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect sets IEEE apart from the other organizations behind such guidelines. With over 420,000 members, it is the world’s largest engineers’ association, with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are all but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing their newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”

The weather data gap: How can mobile technology make smallholder farmers climate resilient?


Rishi Raithatha at GSMA: “In the new GSMA AgriTech report, Mobile Technology for Climate Resilience: The role of mobile operators in bridging the data gap, we explore how mobile network operators (MNOs) can play a bigger role in developing and delivering services to strengthen the climate resilience of smallholder farmers. By harnessing their own assets and data, MNOs can improve a broad suite of weather products that are especially relevant for farming communities. These include a variety of weather forecasts (daily, weekly, sub-seasonal and seasonal) and nowcasts, as real-time monitoring and one- to two-hour predictions are often used for Early Warning Systems (EWS) to prevent weather-related disasters. MNOs can also help strengthen the value proposition of other climate products, such as weather index insurance and decision agriculture.

Why do we need more weather data?

Agriculture is highly dependent on regional climates, especially in developing countries where farming is largely rain-fed. Smallholder farmers, who are responsible for the bulk of agricultural production in developing countries, are particularly vulnerable to changing weather patterns – especially given their reliance on natural resources and exclusion from social protection schemes. However, the use of climate adaptation approaches, such as localised weather forecasts and weather index insurance, can enhance smallholder farmers’ ability to withstand the risks posed by climate change and maintain agricultural productivity.

Ground-level measurements are an essential component of climate resilience products; the creation of weather forecasts and nowcasts starts with the analysis of ground, spatial and aerial observations. This involves the use of algorithms, weather models and current and historical observational weather data. Observational instruments, such as radar, weather stations and satellites, are necessary for measuring ground-level weather. However, National Hydrological and Meteorological Services (NHMSs) in developing countries often lack the capacity to generate accurate ground-level measurements beyond a few areas, resulting in gaps in local weather data.

While satellite data offers better resolution than before, and is more affordable and available to NHMSs, there is a need to complement it with ground-level measurements. This is especially true in the tropical and sub-tropical regions where most smallholder farmers live, where variable local weather patterns can lead to skewed averages from satellite data….(More).”

Secure Shouldn’t Mean Secret: A Call for Public Policy Schools to Share, Support, and Teach Data Stewardship


Paper by Maggie Reeves and Robert McMillan: “The public has long benefitted from researchers using individual-level administrative data (microdata) to answer questions on a gamut of issues related to the efficiency, effectiveness, and causality of programs and policies. However, these benefits have not been pervasive because few researchers have had access to microdata, and their tools, security practices, and technology have rarely been shared. With a clear push to expand access to microdata for purposes of rigorous analysis (Abraham et al., 2017; ADRF Network Working Group Participants, 2018), public policy schools must grapple with imperfect options and decide how to support secure data facilities for their faculty and students. They also must take the lead in educating students as data stewards who can navigate the challenges of microdata access for public policy research.

This white paper outlines the essential components of any secure facility, the pros and cons of four types of secure microdata facilities used for public policy research, the benefits of sharing tools and resources, and the importance of training. It closes with a call on public policy schools to include data stewardship as part of the standard curriculum…(More)”.

Urban Slums in a Datafying Milieu: Challenges for Data-Driven Research Practice


Paper by Bijal Brahmbhatt et al: “With the ongoing trend of urban datafication and growing use of data/evidence to shape developmental initiatives by state as well as non-state actors, this exploratory case study engages with the complex and often contested domains of data use. This study uses on-the-ground experience of working with informal settlements in Indian cities to examine how information value chains work in practice and the contours of their power to intervene in building an agenda of social justice into governance regimes. Using illustrative examples from ongoing action-oriented projects of Mahila Housing Trust in India such as the Energy Audit Project, Slum Mapping Exercise and women-led climate resilience building under the Global Resilience Partnership, it raises questions about challenges of making effective linkages between data, knowledge and action in and for slum communities in the global South by focussing on two issues.

First, it reveals dilemmas of achieving data accuracy when working with slum communities in developing cities where populations are dynamically changing, and where digitisation and use of ICT has limited operational currency. The second issue focuses on data ownership. It foregrounds the need for complementary inputs and the heavy requirement for support systems in informal settlements in order to translate data-driven knowledge into actionable forms. Absence of these will blunt the edge of data-driven community participation in local politics. Through these intersecting streams, the study attempts to address how entanglements between southern urbanism, datafication, governance and social justice diversify the discourse on data justice. It highlights existing hurdles and structural hierarchies within a data-heavy developmental register emergent across multiple cities in the global South where data-driven governmental regimes interact with convoluted urban forms and realities….(More)”.

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations


Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, use public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights….(More)”.