Data Ownership: Exploring Implications for Data Privacy Rights and Data Valuation


Hearing by the Senate Committee on Banking, Housing, and Urban Affairs: “…As a result of an increasingly digital economy, more personal information is available to companies than ever before.
Private companies are collecting, processing, analyzing and sharing considerable data on individuals for all kinds of purposes.

There have been many questions about what personal data is being collected, how it is being collected, with whom it is being shared and how it is being used, including in ways that affect individuals’ financial lives.

Given the vast amount of personal information flowing through the economy, individuals need real control over their personal data. This Committee has held a series of data privacy hearings exploring possible frameworks for providing privacy rights to consumers. Nearly all have included references to data as a new currency or commodity.

The next question, then, is who owns it? There has been much debate about the concept of data ownership, the monetary value of personal information and its potential role in data privacy… The witnesses will be:

  1. Mr. Jeffrey Ritter, Founding Chair, American Bar Association Committee on Cyberspace Law; External Lecturer
  2. Mr. Chad Marlow, Senior Advocacy and Policy Counsel, American Civil Liberties Union
  3. Mr. Will Rinehart, Director of Technology and Innovation Policy, American Action Forum
  4. Ms. Michelle Dennedy, Chief Executive Officer, DrumWave Inc.

Should Consumers Be Able to Sell Their Own Personal Data?


The Wall Street Journal: “People around the world are confused and concerned about what companies do with the data they collect from their interactions with consumers.

A global survey conducted last fall by the research firm Ipsos gives a sense of the scale of people’s worries and uncertainty. Roughly two-thirds of those surveyed said they knew little or nothing about how much data companies held about them or what companies did with that data. And only about a third of respondents on average said they had at least a fair amount of trust that a variety of corporate and government organizations would use the information they had about them in the right way….

Christopher Tonetti, an associate professor of economics at Stanford Graduate School of Business, says consumers should own and be able to sell their personal data. Cameron F. Kerry, a visiting fellow at the Brookings Institution and former general counsel and acting secretary of the U.S. Department of Commerce, opposes the idea….

YES: It Would Encourage Sharing of Data—a Plus for Consumers and Society… Data isn’t like other commodities in one fundamental way—it doesn’t diminish with use. And that difference is the key to why consumers should own the data that’s created when they interact with companies, and have the right to sell it.

NO: It Would Do Little to Help Consumers, and Could Leave Them Worse Off Than Now…

But owning data will do little to help consumers’ privacy—and may well leave them worse off. Meanwhile, consumer property rights would create enormous friction for valid business uses of personal information and for the free flow of information we value as a society.

In our current system, consumers reflexively click away rights to data in exchange for convenience, free services, connection, endorphins or other motivations. In a market where consumers could sell or license personal information they generate from web browsing, ride-sharing apps and other digital activities, is there any reason to expect that they would be less motivated to share their information? …(More)”.

Contracting for Personal Data


Paper by Kevin E. Davis and Florencia Marotta-Wurgler: “Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car or licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers may not know, or may misperceive, the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims.

We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We explore the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects….(More)”.
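
The comparison at the heart of the paper lends itself to a simple tabulation. Below is a minimal sketch of the idea, assuming a per-policy compliance score (the fraction of GDPR-required disclosures a policy actually makes); the markets, scores, and column names are invented for illustration, since the authors’ hand-collected dataset is not reproduced here:

```python
# Hypothetical sketch: average a compliance score across privacy policies,
# split by market and by whether the policy postdates the GDPR.
import pandas as pd

policies = pd.DataFrame({
    "market":     ["health", "health", "gaming", "gaming", "news", "news"],
    "post_gdpr":  [False, True, False, True, False, True],
    # Invented scores: fraction of GDPR-required disclosures each policy makes.
    "compliance": [0.40, 0.85, 0.35, 0.60, 0.50, 0.75],
})

# Mean compliance per market, before vs. after the GDPR took effect.
print(policies.pivot_table(index="market", columns="post_gdpr",
                           values="compliance", aggfunc="mean"))
```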

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released last April the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance), and the basis of the decisions governing a user’s newsfeed is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.
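
Neither principle prescribes a format, so what follows is only a hypothetical sketch of what a machine-readable, “discoverable” rationale record could look like; every field name and weight is invented, not drawn from any platform’s actual systems:

```python
# Illustrative sketch of a decision-rationale record; all fields are
# hypothetical, since the IEEE guidelines state the principle without
# prescribing a data format.
from dataclasses import dataclass, field


@dataclass
class DecisionRationale:
    """Machine-readable record of why an automated system showed an item."""
    item_id: str
    decision: str  # e.g. "shown", "ranked_3rd"
    factors: dict[str, float] = field(default_factory=dict)  # factor -> weight


rationale = DecisionRationale(
    item_id="ad-42",
    decision="shown",
    factors={
        "matched_interest:travel": 0.55,
        "publisher_context": 0.30,
        "time_of_day": 0.15,
    },
)

# Enumerating every factor and its weight makes the basis of the decision
# discoverable, rather than hidden behind "many factors".
for factor, weight in rationale.factors.items():
    print(f"{factor}: {weight:.2f}")
```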

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. That was in June. After a great many emails, phone calls and personal meetings, only Twitter provided a substantive answer: Google gave a vague comment and Facebook promised an answer that never came…(More)”

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations


Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance”, that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights….(More)”.

Data Fiduciary in Order to Alleviate Principal-Agent Problems in the Artificial Big Data Age


Paper by Julia M. Puaschunder: “The classic principal-agent problem in political science and economics describes agency dilemmas or problems when one person, the agent, is put in a situation to make decisions on behalf of another entity, the principal. A dilemma occurs in situations when individual profit maximization or principal and agent are pitted against each other. This so-called moral hazard is nowadays emerging in the artificial big data age, when big data reaping entities have to act on behalf of agents, who provide their data with trust in the principal’s integrity and responsible big data conduct. Yet to this day, no data fiduciary has been clearly described and established to protect the agent from misuse of data. This article introduces the agent’s predicament between utility derived from information sharing and dignity in privacy, as well as hyper-hyperbolic discounting fallibilities, the failure to clearly foresee what consequences information sharing can have over time and in groups. The principal’s predicament between secrecy and selling big data insights or using big data for manipulative purposes will be outlined. Finally, the article draws a clear distinction between manipulation and nudging in relation to the potential social class division of those who nudge and those who are nudged…(More)”.

Andrew Yang proposes that your digital data be considered personal property


Michael Grothaus at Fast Company: “2020 Democratic presidential candidate Andrew Yang may not be at the top of the race when it comes to polling (Politico currently has him ranked as the 7th most-popular Democratic contender), but his policies, including support for universal basic income, have made him popular among a subset of young, liberal-leaning, tech-savvy voters. Yang’s latest proposal, too, is sure to strike a chord with them.

The presidential candidate published his latest policy proposal today: to treat data as a property right. Announcing the proposal on his website, Yang lamented how our data is collected, used, and abused by companies, often with little awareness or consent from us. “This needs to stop,” Yang says. “Data generated by each individual needs to be owned by them, with certain rights conveyed that will allow them to know how it’s used and protect it.”

The rights Yang is proposing:

  • The right to be informed as to what data will be collected, and how it will be used
  • The right to opt out of data collection or sharing
  • The right to be told if a website has data on you, and what that data is
  • The right to be forgotten; to have all data related to you deleted upon request
  • The right to be informed if ownership of your data changes hands
  • The right to be informed of any data breaches involving your information in a timely manner
  • The right to download all data in a standardized format to port to another platform…(More)”.

A fairer way forward for AI in health care


Linda Nordling at Nature: “When data scientists in Chicago, Illinois, set out to test whether a machine-learning algorithm could predict how long people would stay in hospital, they thought that they were doing everyone a favour. Keeping people in hospital is expensive, and if managers knew which patients were most likely to be eligible for discharge, they could move them to the top of doctors’ priority lists to avoid unnecessary delays. It would be a win–win situation: the hospital would save money and people could leave as soon as possible.

Starting their work at the end of 2017, the scientists trained their algorithm on patient data from the University of Chicago academic hospital system. Taking data from the previous three years, they crunched the numbers to see what combination of factors best predicted length of stay. At first they only looked at clinical data. But when they expanded their analysis to other patient information, they discovered that one of the best predictors for length of stay was the person’s postal code. This was puzzling. What did the duration of a person’s stay in hospital have to do with where they lived?

As the researchers dug deeper, they became increasingly concerned. The postal codes that correlated to longer hospital stays were in poor and predominantly African American neighbourhoods. People from these areas stayed in hospitals longer than did those from more affluent, predominantly white areas. The reason for this disparity evaded the team. Perhaps people from the poorer areas were admitted with more severe conditions. Or perhaps they were less likely to be prescribed the drugs they needed.

The finding threw up an ethical conundrum. If optimizing hospital resources was the sole aim of their programme, people’s postal codes would clearly be a powerful predictor for length of hospital stay. But using them would, in practice, divert hospital resources away from poor, black people towards wealthy white people, exacerbating existing biases in the system.

“The initial goal was efficiency, which in isolation is a worthy goal,” says Marshall Chin, who studies health-care ethics at University of Chicago Medicine and was one of the scientists who worked on the project. But fairness is also important, he says, and this was not explicitly considered in the algorithm’s design….(More)”.
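
The Chicago team’s actual pipeline has not been published, but the kind of audit that surfaces a proxy variable like a postal code can be sketched with standard tools. In the hypothetical example below, synthetic patient records are generated so that certain postal-code groups have systematically longer stays; a permutation-importance check then flags the postal-code feature as a dominant predictor:

```python
# Illustrative only: synthetic data and a simple importance audit, not the
# University of Chicago team's actual model or data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical patient records; all effect sizes are invented.
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "severity_score": rng.uniform(0, 10, n),
    "postal_code_group": rng.integers(0, 20, n),  # stand-in for a ZIP code
})
# Make some postal-code groups correlate with longer stays, mimicking the
# socioeconomic disparity the researchers observed.
df["length_of_stay"] = (
    0.3 * df["severity_score"]
    + 0.02 * df["age"]
    + np.where(df["postal_code_group"] < 5, 2.5, 0.0)
    + rng.normal(0, 1, n)
)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="length_of_stay"), df["length_of_stay"], random_state=0
)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance shows how much the model leans on each feature;
# a high score for postal_code_group flags it as a likely proxy variable.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X_test.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:20s} {score:.3f}")
```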

How cities can leverage citizen data while protecting privacy


MIT News: “India is on a path with dual — and potentially conflicting — goals related to the use of citizen data.

To improve the efficiency of their municipal services, many Indian cities have started enabling government-service requests, which involves collecting and sharing citizen data with government officials and, potentially, the public. But there’s also a national push to protect citizen privacy, potentially restricting data usage. Cities are now beginning to question how much citizen data, if any, they can use to track government operations.

In a new study, MIT researchers find that there is, in fact, a way for Indian cities to preserve citizen privacy while using their data to improve efficiency.

The researchers obtained and analyzed data from more than 380,000 government service requests by citizens across 112 cities in one Indian state for an entire year. They used the dataset to measure each city government’s efficiency based on how quickly it completed each service request. Based on field research in three of these cities, they also identified the citizen data that’s necessary, useful (but not critical), or unnecessary for improving efficiency when delivering the requested service.

In doing so, they identified “model” cities that performed very well in both categories, meaning they maximized privacy and efficiency. Cities worldwide could use similar methodologies to evaluate their own government services, the researchers say. …(More)”.
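
The study’s methodology is richer than this, but the core efficiency measure can be sketched in a few lines: with only a city identifier and open/close timestamps (no names, addresses, or request text), cities can be ranked by how quickly they complete requests. All values below are invented for illustration:

```python
# Minimal sketch, not the MIT team's actual methodology: rank cities by
# median time to close a citizen service request.
import pandas as pd

# Hypothetical request log; the real dataset held 380,000+ requests.
requests = pd.DataFrame({
    "city": ["A", "A", "B", "B", "C"],
    "opened": pd.to_datetime(
        ["2019-01-02", "2019-01-05", "2019-01-03", "2019-01-04", "2019-01-06"]
    ),
    "closed": pd.to_datetime(
        ["2019-01-04", "2019-01-12", "2019-01-05", "2019-01-20", "2019-01-07"]
    ),
})

requests["days_to_complete"] = (requests["closed"] - requests["opened"]).dt.days

# Median completion time per city as a simple efficiency score (lower is
# better); no personally identifying fields are needed for this measure.
efficiency = requests.groupby("city")["days_to_complete"].median().sort_values()
print(efficiency)
```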

Big Data, Political Campaigning and the Law


Book edited by Normann Witzleb, Moira Paterson, and Janice Richardson on “Democracy and Privacy in the Age of Micro-Targeting”…: “In this multidisciplinary book, experts from around the globe examine how data-driven political campaigning works, what challenges it poses for personal privacy and democracy, and how emerging practices should be regulated.

The rise of big data analytics in the political process has triggered official investigations in many countries around the world, and become the subject of broad and intense debate. Political parties increasingly rely on data analytics to profile the electorate and to target specific voter groups with individualised messages based on their demographic attributes. Political micro-targeting has become a major factor in modern campaigning, because of its potential to influence opinions, to mobilise supporters and to get out the vote. The book explores the legal, philosophical and political dimensions of big data analytics in the electoral process. It demonstrates that the unregulated use of big personal data for political purposes not only infringes voters’ privacy rights, but also has the potential to jeopardise the future of the democratic process, and proposes reforms to address the key regulatory and ethical questions arising from the mining, use and storage of massive amounts of voter data.

Providing an interdisciplinary assessment of the use and regulation of big data in the political process, this book will appeal to scholars from law, political science, political philosophy, and media studies, policy makers and anyone who cares about democracy in the age of data-driven political campaigning….(More)”.