Kenya passes data protection law crucial for tech investments


George Obulutsa and Duncan Miriri at Reuters: “Kenyan President Uhuru Kenyatta on Friday approved a data protection law which complies with European Union legal standards as it looks to bolster investment in its information technology sector.

The East African nation has attracted foreign firms with innovations such as Safaricom’s M-Pesa mobile money services, but the lack of safeguards in handling personal data has held it back from its full potential, officials say.

“Kenya has joined the global community in terms of data protection standards,” Joe Mucheru, minister for information, technology and communication, told Reuters.

The new law sets out restrictions on how personally identifiable data obtained by firms and government entities can be handled, stored and shared, the government said.

Mucheru said it complies with the EU’s General Data Protection Regulation, which came into effect in May 2018, and said an independent office will investigate data infringements….

A lack of data protection legislation has also hampered the government’s efforts to digitize identity records for citizens.

The registration, which the government said would boost its provision of services, suffered a setback this year when the exercise was challenged in court.

“The lack of a data privacy law has been an enormous lacuna in Kenya’s digital rights landscape,” said Nanjala Nyabola, author of a book on information technology and democracy in Kenya….(More)”.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design


Book by Michael Kearns and Aaron Roth: “Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. Algorithms have made our lives more efficient, more entertaining, and, sometimes, better informed. At the same time, complex algorithms are increasingly violating the basic rights of individual citizens. Allegedly anonymized datasets routinely leak our most sensitive personal information; statistical models for everything from mortgages to college admissions reflect racial and gender bias. Meanwhile, users manipulate algorithms to “game” search engines, spam filters, online reviewing services, and navigation apps.

Understanding and improving the science behind the algorithms that run our lives is rapidly becoming one of the most pressing issues of this century. Traditional fixes, such as laws, regulations and watchdog groups, have proven woefully inadequate. Reporting from the cutting edge of scientific research, The Ethical Algorithm offers a new approach: a set of principled solutions based on the emerging and exciting science of socially aware algorithm design. Michael Kearns and Aaron Roth explain how we can better embed human principles into machine code – without halting the advance of data-driven scientific exploration. Weaving together innovative research with stories of citizens, scientists, and activists on the front lines, The Ethical Algorithm offers a compelling vision for a future, one in which we can better protect humans from the unintended impacts of algorithms while continuing to inspire wondrous advances in technology….(More)”.

The Value of Data: Towards a Framework to Redistribute It


Paper by Maria Savona: “This note attempts a systematisation of different pieces of literature that underpin the recent policy and academic debate on the value of data. It mainly poses foundational questions around the definition, economic nature and measurement of data value, and discusses the opportunity to redistribute it. It then articulates a framework to compare ways of implementing redistribution, distinguishing between data as capital, data as labour or data as intellectual property. Each of these raises challenges revolving around the notions of data property and data rights, which are also briefly discussed. The note concludes by indicating areas for policy consideration and a research agenda to shape the future structure of data governance at large….(More)”.

Restrictions on Privacy and Exploitation in the Digital Economy: A Competition Law Perspective


Paper by Nicholas Economides and Ioannis Lianos: “The recent controversy on the intersection of competition law with the protection of privacy, following the emergence of big data and social media, is a major challenge for competition authorities worldwide. Recent technological progress in data analytics may greatly facilitate the prediction of personality traits and attributes from even a few digital records of human behaviour.


There are different perspectives globally as to the level of personal data protection and the role competition law may play in this context; hence, the discussion of integrating such concerns into competition law enforcement may be premature for some jurisdictions. However, a market failure approach may provide common intellectual foundations for the assessment of harms associated with the exploitation of personal data, even when the specific legal system does not formally recognize a fundamental right to privacy.


The paper presents a model of market failure based on a requirement provision in the acquisition of personal information from users of other products/services. We establish the economic harm arising from the market failure and the requirement using the traditional competition law toolbox, focusing on situations in which the restriction on privacy may be analysed as a form of exploitation. Eliminating the requirement and the market failure by creating a functioning market for the sale of personal information is imperative. This emphasis on exploitation does not mean that restrictions on privacy may not result from exclusionary practices; however, we analyse that issue in a separate study.


Besides the traditional analysis of the requirement and market failure, we note that there are typically informational asymmetries between the data controller and the data subject. The latter may not be aware that his data was harvested in the first place, or that the data will be processed by the data controller for a different purpose or shared and sold to third parties. The exploitation of personal data may also result from economic coercion, on the basis of resource dependence or lock-in of the user: particularly in the presence of dominance, the user has no choice, if he is to enjoy a specific service provided by the data controller or its ecosystem, but to consent to the harvesting and use of his data. A behavioural approach would also emphasise the possible internalities (demand-side market failures) arising from bounded rationality, that is, the fact that people do not internalise all the consequences of their actions and face limits in their cognitive capacities.
The paper also addresses the way competition law could engage with exploitative conduct leading to privacy harm, in both ex ante and ex post enforcement.


With regard to ex ante enforcement, the paper explores how privacy concerns may be integrated into merger control as part of the definition of product quality. The harm in question is merely exploitative (the possibility that data aggregation gives the merged entity the ability to exploit personal data in ways that directly harm consumers), rather than exclusionary (harming consumers by enabling the merged entity to marginalise a rival with better privacy policies); the exclusionary dimension is examined in a separate paper.


With regard to ex post enforcement, the paper explores different theories of harm that may give rise to competition law concerns and suggests specific tests for their assessment. In particular, we analyse old and new exploitative theories of harm relating to excessive data extraction, personalised pricing, unfair commercial practices and trading conditions, exploitative requirement contracts, and behavioural manipulation.
We are in favour of collective action to restore the conditions of a well-functioning data market, and the paper makes several policy recommendations….(More)”.

User Data as Public Resource: Implications for Social Media Regulation


Paper by Philip Napoli: “Revelations about the misuse and insecurity of user data gathered by social media platforms have renewed discussions about how best to characterize property rights in user data. At the same time, revelations about the use of social media platforms to disseminate disinformation and hate speech have prompted debates over the need for government regulation to assure that these platforms serve the public interest. These debates often hinge on whether any of the established rationales for media regulation apply to social media. This article argues that the public resource rationale that has been utilized in traditional media regulation in the United States applies to social media.

The public resource rationale contends that, when a media outlet utilizes a public resource—such as the broadcast spectrum, or public rights of way—the outlet must abide by certain public interest obligations that may infringe upon its First Amendment rights. This article argues that aggregate user data can be conceptualized as a public resource that triggers the application of a public interest regulatory framework to social media sites and other digital platforms that derive their revenue from the gathering, sharing, and monetization of massive aggregations of user data….(More)”.

Data Ownership: Exploring Implications for Data Privacy Rights and Data Valuation


Hearing by the Senate Committee on Banking, Housing and Urban Affairs: “…As a result of an increasingly digital economy, more personal information is available to companies than ever before. Private companies are collecting, processing, analyzing and sharing considerable data on individuals for all kinds of purposes.

There have been many questions about what personal data is being collected, how it is being collected, with whom it is being shared and how it is being used, including in ways that affect individuals’ financial lives.

Given the vast amount of personal information flowing through the economy, individuals need real control over their personal data. This Committee has held a series of data privacy hearings exploring possible frameworks for facilitating privacy rights to consumers. Nearly all have included references to data as a new currency or commodity.

The next question, then, is who owns it? There has been much debate about the concept of data ownership, the monetary value of personal information and its potential role in data privacy…. The witnesses will be:

  1. Mr. Jeffrey Ritter, Founding Chair, American Bar Association Committee on Cyberspace Law; External Lecturer
  2. Mr. Chad Marlow, Senior Advocacy and Policy Counsel, American Civil Liberties Union
  3. Mr. Will Rinehart, Director of Technology and Innovation Policy, American Action Forum
  4. Ms. Michelle Dennedy, Chief Executive Officer, DrumWave Inc.

Should Consumers Be Able to Sell Their Own Personal Data?


The Wall Street Journal: “People around the world are confused and concerned about what companies do with the data they collect from their interactions with consumers.

A global survey conducted last fall by the research firm Ipsos gives a sense of the scale of people’s worries and uncertainty. Roughly two-thirds of those surveyed said they knew little or nothing about how much data companies held about them or what companies did with that data. And only about a third of respondents on average said they had at least a fair amount of trust that a variety of corporate and government organizations would use the information they had about them in the right way….

Christopher Tonetti, an associate professor of economics at Stanford Graduate School of Business, says consumers should own and be able to sell their personal data. Cameron F. Kerry, a visiting fellow at the Brookings Institution and former general counsel and acting secretary of the U.S. Department of Commerce, opposes the idea….

YES: It Would Encourage Sharing of Data—a Plus for Consumers and Society…Data isn’t like other commodities in one fundamental way—it doesn’t diminish with use. And that difference is the key to why consumers should own the data that’s created when they interact with companies, and have the right to sell it.

NO: It Would Do Little to Help Consumers, and Could Leave Them Worse Off Than Now…

But owning data will do little to help consumers’ privacy—and may well leave them worse off. Meanwhile, consumer property rights would create enormous friction for valid business uses of personal information and for the free flow of information we value as a society.

In our current system, consumers reflexively click away rights to data in exchange for convenience, free services, connection, endorphins or other motivations. In a market where consumers could sell or license personal information they generate from web browsing, ride-sharing apps and other digital activities, is there any reason to expect that they would be less motivated to share their information? …(More)”.

Contracting for Personal Data


Paper by Kevin E. Davis and Florencia Marotta-Wurgler: “Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car or licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers may not know or misperceive the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims.

We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We explore the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects….(More)”.

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, last April it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association, with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance), and the basis of the decisions governing their newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. Clicking on “Why This Ad” reveals that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations


Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic, as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights….(More)”.