Data Governance Regimes in the Digital Economy: The Example of Connected Cars


Paper by Wolfgang Kerber and Jonas Severin Frank: “The Internet of Things raises a number of so far unsolved legal and regulatory questions. Particularly important are the issues of privacy, data ownership, and data access. One particularly interesting example is connected cars, with the huge amount of data they produce. Drawing also on the recent discussion about data ownership and data access in the context of the EU Communication “Building a European data economy”, this paper has two objectives:

(1) It intends to provide a general economic-theoretical framework for the analysis of data governance regimes for data in Internet of Things contexts, in which two levels of data governance are distinguished (private data governance based upon contracts, and the legal and regulatory framework for markets). This framework focuses on potential market failures that can emerge in regard to data and privacy.

(2) It applies this analytical framework to the complex problem of data governance in connected cars (with its different stakeholders: car manufacturers, car owners, car component suppliers, repair service providers, insurance companies, and other service providers), and identifies several potential market failure problems in regard to this specific data governance problem (esp. competition problems, information/behavioral problems, and privacy problems).

These results can be an important input for future research that focuses more on the specific policy implications for data governance in connected cars. Although the paper is primarily an economic paper, it tries to take into account important aspects of the legal discussion….(More)”.

Nobody reads privacy policies – here’s how to fix that


at The Conversation: “…The key to turning privacy notices into something useful for consumers is to rethink their purpose. A company’s policy might show compliance with the regulations the firm is bound to follow, but remains impenetrable to a regular reader.

The starting point for developing consumer-friendly privacy notices is to make them relevant to the user’s activity, understandable and actionable. As part of the Usable Privacy Policy Project, my colleagues and I developed a way to make privacy notices more effective.

The first principle is to break up the documents into smaller chunks and deliver them at times that are appropriate for users. Right now, a single multi-page policy might have many sections and paragraphs, each relevant to different services and activities. Yet people who are just casually browsing a website need only a little bit of information about how the site handles their IP addresses, whether what they look at is shared with advertisers and whether they can opt out of interest-based ads. Those people don’t need to know about many other things listed in all-encompassing policies, like the rules associated with subscribing to the site’s email newsletter, or how the site handles personal or financial information belonging to people who make purchases or donations on the site.

When a person does decide to sign up for email updates or pay for a service through the site, then an additional short privacy notice could tell her the additional information she needs to know. These shorter documents should also offer users meaningful choices about what they want a company to do – or not do – with their data. For instance, a new subscriber might be allowed to choose whether the company can share his email address or other contact information with outside marketing companies by clicking a check box.

Understanding users’ expectations

Notices can be made even simpler if they focus particularly on unexpected or surprising types of data collection or sharing. For instance, in another study, we learned that most people know their fitness tracker counts steps – so they didn’t really need a privacy notice to tell them that. But they did not expect their data to be collected, aggregated and shared with third parties. Customers should be asked for permission to do this, and allowed to restrict sharing or opt out entirely.

Most importantly, companies should test new privacy notices with users, to ensure final versions are understandable and not misleading, and that offered choices are meaningful….(More)”

Selected Readings on Blockchain and Identity


By Hannah Pierce and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of blockchain and identity was originally published in 2017.

The potential of blockchain and other distributed ledger technologies to create positive social change has inspired enthusiasm, broad experimentation, and some skepticism. In this edition of the Selected Readings series, we explore and curate the literature on blockchain and how it impacts identity as a means to access services and rights. (In a previous edition we considered the Potential of Blockchain for Transforming Governance).

Introduction

In 2008, an unknown author writing under the name Satoshi Nakamoto released a paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System,” which introduced blockchain. Blockchain is a novel technology that uses a distributed ledger to record transactions and verify them across a network of participants, without relying on a central authority. Blockchain and other distributed ledger technologies (DLTs) derive their potential from their ability to act as a vast, transparent, and secure public database.
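To make the underlying mechanism concrete, here is a minimal Python sketch of a hash-chained ledger (an illustrative toy, not Bitcoin’s actual implementation): each block commits to the hash of its predecessor, so tampering with any recorded transaction invalidates the rest of the chain.

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def make_block(transactions: list, prev_hash: str) -> dict:
    """Create a block that commits to the previous block's hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }


def verify_chain(chain: list) -> bool:
    """Check that every block correctly references its predecessor."""
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True


# Build a tiny three-block ledger.
genesis = make_block(["genesis"], prev_hash="0" * 64)
b1 = make_block(["alice pays bob 5"], prev_hash=block_hash(genesis))
b2 = make_block(["bob pays carol 2"], prev_hash=block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))                    # True
b1["transactions"] = ["alice pays bob 500"]   # tamper with history
print(verify_chain(chain))                    # False: the chain no longer validates
```

In a real DLT this chain is replicated across many independent nodes, which is what makes the record transparent and difficult for any single party to alter.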

Distributed ledger technologies (DLTs) have disruptive potential beyond innovation in products, services, revenue streams and operating systems within industry. By providing transparency and accountability in new and distributed ways, DLTs have the potential to positively empower underserved populations in myriad ways, including providing a means for establishing a trusted digital identity.

Consider the potential of DLTs for 2.4 billion people worldwide, about 1.5 billion of whom are over the age of 14, who are unable to prove identity to the satisfaction of authorities and other organizations – often excluding them from property ownership, free movement, and social protection as a result. At the same time, the transition to a DLT-led system of ID management involves various risks that, if not understood and mitigated properly, could harm potential beneficiaries.

Annotated Selected Reading List

Governance

Cuomo, Jerry, Richard Nash, Veena Pureswaran, Alan Thurlow, Dave Zaharchuk. “Building trust in government: Exploring the potential of blockchains.” IBM Institute for Business Value. January 2017.

This paper from the IBM Institute for Business Value culls findings from surveys conducted with over 200 government leaders in 16 countries regarding their experiences and expectations for blockchain technology. The report also identifies “Trailblazers”, or governments that expect to have blockchain technology in place by the end of the year, and details the views and approaches that these early adopters are taking to ensure the success of blockchain in governance. These Trailblazers also believe that there will be high yields from utilizing blockchain in identity management and that citizen services, such as voting, tax collection and land registration, will become increasingly dependent upon decentralized and secure identity management systems. Additionally, some of the Trailblazers are exploring blockchain application in borderless services, like cross-province or state tax collection, because the technology removes the need for intermediaries like notaries or lawyers to verify identities and the authenticity of transactions.

Mattila, Juri. “The Blockchain Phenomenon: The Disruptive Potential of Distributed Consensus Architectures.” Berkeley Roundtable on the International Economy. May 2016.

This working paper gives a clear introduction to blockchain terminology, architecture, challenges, applications (including use cases), and implications for digital trust, disintermediation, democratizing the supply chain, an automated economy, and the reconfiguration of regulatory capacity. As far as identification management is concerned, Mattila argues that blockchain can remove the need to go through a trusted third party (such as a bank) to verify identity online. This could strengthen the security of personal data, as the move from a centralized intermediary to a decentralized network lowers the risk of a mass data security breach. In addition, using blockchain technology for identity verification allows for a more standardized documentation of identity which can be used across platforms and services. In light of these potential capabilities, Mattila addresses the disruptive power of blockchain technology on intermediary businesses and regulating bodies.

Identity Management Applications

Allen, Christopher.  “The Path to Self-Sovereign Identity.” Coindesk. April 27, 2016.

In this Coindesk article, author Christopher Allen lays out the history of digital identities, then explains a concept of a “self-sovereign” identity, where trust is enabled without compromising individual privacy. His ten principles for self-sovereign identity (Existence, Control, Access, Transparency, Persistence, Portability, Interoperability, Consent, Minimization, and Protection) lend themselves to blockchain technology for administration. Although there are actors making moves toward the establishment of self-sovereign identity, there are a few challenges that face the widespread implementation of these tenets, including legal risks, confidentiality issues, immature technology, and a reluctance to change established processes.

Jacobovitz, Ori. “Blockchain for Identity Management.” Department of Computer Science, Ben-Gurion University. December 11, 2016.

This technical report discusses advantages of blockchain technology in managing and authenticating identities online, such as the ability for individuals to create and manage their own online identities, which offers greater control over access to personal data. Using blockchain for identity verification could also enable “digital watermarks” to be assigned to each of an individual’s transactions, and eliminate the need to create separate usernames and passwords for each online service. After arguing that this decentralized model will allow individuals to manage data on their own terms, Jacobovitz provides a list of companies, projects, and movements that are using blockchain for identity management.

Mainelli, Michael. “Blockchain Will Help Us Prove Our Identities in a Digital World.” Harvard Business Review. March 16, 2017.

In this Harvard Business Review article, author Michael Mainelli highlights a solution to identity problems for rich and poor alike: mutual distributed ledgers (MDLs), or blockchain technology. These multi-organizational databases with unalterable ledgers and a “super audit trail” involve three parties in digital document exchanges: subjects are individuals or assets, certifiers are organizations that verify identity, and inquisitors are entities that conduct know-your-customer (KYC) checks on the subject. This system will allow for a low-cost, secure, and global method of proving identity. After outlining some of the other benefits that this technology may have in creating secure and easily auditable digital documents, such as the greater tolerance that comes from viewing widely public ledgers, Mainelli questions whether these capabilities will turn out to be a boon or a burden to bureaucracy and societal behavior.
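As a rough illustration of the three roles, the sketch below (written in Python against the third-party `cryptography` package; the record format and the single-list “ledger” are simplifying assumptions, not Mainelli’s design) shows a certifier signing an identity claim about a subject, and an inquisitor running a KYC check by verifying that claim against the certifier’s public key rather than re-examining the underlying documents.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Certifier: an organization that vouches for an attribute of a subject.
certifier_key = Ed25519PrivateKey.generate()
certifier_pub = certifier_key.public_key()

# The certifier signs a claim about the subject; the signed record is
# appended to the shared ledger (here simply a Python list).
claim = json.dumps(
    {"subject": "person-or-asset-123", "attribute": "passport_verified", "value": True},
    sort_keys=True,
).encode("utf-8")
ledger = [{"claim": claim, "signature": certifier_key.sign(claim)}]


def kyc_check(entry: dict, certifier_public_key) -> bool:
    """Inquisitor: verify a ledger entry against the certifier's public key."""
    try:
        certifier_public_key.verify(entry["signature"], entry["claim"])
        return True
    except InvalidSignature:
        return False


print(kyc_check(ledger[0], certifier_pub))  # True: the claim is authentic
```

The point of the model is that the inquisitor never needs to hold the subject’s source documents; it only needs to trust the certifier’s published key.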

Personal Data Security Applications

Banafa, Ahmed. “How to Secure the Internet of Things (IoT) with Blockchain.” Datafloq. August 15, 2016.

This article details the data security risks that are emerging as the Internet of Things continues to expand, and how using blockchain technology can protect the personal data and identity information exchanged between devices. Banafa argues that, as the creation and collection of data is central to the functions of Internet of Things devices, there is an increasing need to better secure data that is largely confidential and often personally identifiable. Decentralizing IoT networks and then securing their communications with blockchain can allow them to remain scalable, private, and reliable. Blockchain’s peer-to-peer, trustless communication may also enable smart devices to initiate personal data exchanges like financial transactions, as centralized authorities or intermediaries will not be necessary.

Shrier, David, Weige Wu and Alex Pentland. “Blockchain & Infrastructure (Identity, Data Security).” Massachusetts Institute of Technology. May 17, 2016.

This paper, the third of a four-part series on potential blockchain applications, covers the potential of blockchains to change the status quo of identity authentication systems, privacy protection, transaction monitoring, ownership rights, and data security. The paper also posits that, as personal data becomes more and more valuable, we should move towards a “New Deal on Data” which provides individuals with data protection (through blockchain technology) and the option to contribute their data to aggregates that work towards the common good. In order to achieve this New Deal on Data, robust regulatory standards and financial incentives must be provided to entice individuals to share their data to benefit society.

A Better Way to Trace Scattered Refugees


Tina Rosenberg in The New York Times: “…No one knew where his family had gone. Then an African refugee in Ottawa told him about Refunite. He went on its website and opened an account. He gave his name, phone number and place of origin, and listed family members he was searching for.

Three-quarters of a century ago, while World War II still raged, the Allies created the International Tracing Service to help the millions who had fled their homes. Its central name index grew to 50 million cards, with information on 17.5 million individuals. The index still exists — and still gets queries — today.

Index cards have become digital databases, of course. And some agencies have brought tracing into the digital age in other ways. Unicef, for example, equips staff during humanitarian emergencies with software called Primero, which helps them get children food, medical care and other help — and register information about unaccompanied children. A parent searching for a child can register as well. An algorithm makes the connection — “like a date-finder or matchmaker,” said Robert MacTavish, who leads the Primero project.
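Neither Primero’s nor Refunite’s matching logic is described in the article, so the sketch below is only a hypothetical illustration of the general idea: registrations from searchers are compared against registered individuals on fuzzy name and place-of-origin similarity, and close matches are surfaced for human follow-up. The field names, weights and threshold are assumptions.

```python
from difflib import SequenceMatcher

# A searcher's query and the records people have registered about themselves.
query = {"name": "Amina Hassan", "origin": "Mogadishu"}
registered = [
    {"name": "Aamina Hasan", "origin": "Mogadishu"},
    {"name": "John Okello", "origin": "Gulu"},
]


def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def candidate_matches(query: dict, records: list, threshold: float = 0.8) -> list:
    """Return records whose name and origin are close to the query, best first."""
    matches = []
    for rec in records:
        score = 0.7 * similarity(query["name"], rec["name"]) + 0.3 * similarity(
            query["origin"], rec["origin"]
        )
        if score >= threshold:
            matches.append((score, rec))
    return sorted(matches, key=lambda m: m[0], reverse=True)


print(candidate_matches(query, registered))
# [(0.94..., {'name': 'Aamina Hasan', 'origin': 'Mogadishu'})]
```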

Most United Nations agencies rely for family tracing on the International Committee of the Red Cross, the global network of national Red Cross and Red Crescent societies. Florence Anselmo, who directs the I.C.R.C.’s Central Tracing Agency, said that the I.C.R.C. and United Nations agencies can’t look in one another’s databases. That’s necessary for privacy reasons, but it’s an obstacle to family tracing.

Another problem: Online databases allow the displaced to do their own searches. But the I.C.R.C. has these for only a few emergency situations. Anselmo said that most tracing is done by the staff of national Red Cross societies, who respond to requests from other countries. But there is no global database, so people looking for loved ones must guess which countries to search.

The organization is working on developing an algorithm for matching, but for now, the search engines are human. “When we talk about tracing, it’s not only about data matching,” Anselmo said. “There’s a whole part about accompanying families: the human aspect, professionals as well as volunteers who are able to look for people — even go house to house if needed.”

This is the mom-and-pop general store model of tracing: The customer makes a request at the counter, then a shopkeeper with knowledge of her goods and a kind smile goes to the back and brings it out, throwing in a lollipop. But the world has 65 million forcibly displaced people, a record number. Personalized help to choose from limited stock is appropriate in many cases. But it cannot possibly be enough.

Refunite seeks to become the eBay of family tracing….(More)”

Are countries with a poor democratic record more likely to mandate an Aadhaar-like ID?


 at the Centre for Communication Governance: “Can a country’s democratic record indicate whether it is likely to mandate a national biometric identity? Research by scholars at the National Law University, Delhi suggests there may be some correlation, at least to indicate that robust democracies have been more cautious about adopting biometric identity systems.

The Supreme Court’s decision last month upholding a fundamental Right to Privacy for all Indians has put a renewed focus on Aadhaar, India’s 12-digit biometric identity programme that has been criticised for not only violating privacy but also lacking sufficient data protection safeguards. Challenges to the Aadhaar project, in fact, prompted the Supreme Court to take up the question of a Right to Privacy, and the apex court will hear petitions against the unique identity initiative later this year.

Ahead of those hearings, researchers from the Centre for Communication Governance at the National Law University, Delhi sought to look at the adoption of biometric identity systems by countries across the world. While examining whether countries were instituting these Aadhaar-like systems, researchers from the Centre noticed a trend wherein nations with strong biometric identity systems were less likely to have robust democratic governments.

“As we gathered and analysed the data, we noticed an interesting trend where many countries that had strong biometric ID systems, also did not have strong democratic governments,” the researchers said.

So they sought to map out their research, based on data collected primarily from countries within the Commonwealth, measured against their positions on Freedom House’s Freedom in the World index and the Economist Intelligence Unit’s Democracy Index. The results show a cluster of nations with fewer freedoms also instituting biometric systems, while others higher up the democracy index do not have similar identity programmes….(More)”.
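A minimal sketch of the kind of comparison described might look like the following, assuming a hand-coded table of democracy-index scores and biometric-ID mandates (all country rows below are placeholders, not the Centre’s data); a negative correlation would be consistent with the reported trend.

```python
from statistics import correlation  # available in Python 3.10+

# Placeholder rows: (country, EIU Democracy Index score 0-10,
# 1 if a mandatory biometric ID system exists, else 0).
rows = [
    ("Country A", 9.1, 0),
    ("Country B", 8.5, 0),
    ("Country C", 7.2, 1),
    ("Country D", 6.0, 1),
    ("Country E", 4.3, 1),
    ("Country F", 3.1, 1),
]

democracy_scores = [score for _, score, _ in rows]
has_biometric_id = [flag for _, _, flag in rows]

# Point-biserial correlation between democracy score and the ID-mandate flag.
print(correlation(democracy_scores, has_biometric_id))  # negative for this toy data
```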

Cape Town as a Smart and Safe City: Implications for Governance and Data Privacy


Nora Ni Loideain at the Journal of International Data Privacy Law: “Promises abound that ‘smart city’ technologies could play a major role in developing safer, more sustainable, and equitable cities, creating paragons of democracy. However, there are concerns that governance led by ‘Big Data’ processes marks the beginning of a trend of encroachment on the individual’s liberty and privacy, even if such technologies are employed legitimately for the public’s safety and security. There are many ways in which personal data processing for law enforcement and public safety purposes may pose a threat to the privacy and data protection rights of individuals. Furthermore, the risk of such powers being misused is increased by the covert nature of the processing, and the ever-increasing capacity, and pervasiveness, of the retention, sharing, and monitoring of personal data by public authorities and business. The focus of this article concerns the use of these smart city technologies for the purposes of countering crime and ensuring public safety. Specifically, this research examines these policy-making developments, and the key initiatives to date, undertaken by the municipal authorities within the city of Cape Town. Subsequently, the examination then explores the implications of these policies and initiatives for governance, and compliance with the right to data privacy, as guaranteed under international human rights law, the Constitution of South Africa, and the national statutory framework governing data protection. In conclusion, the discussion provides reflections on the findings from this analysis, including some policy recommendations….(More)”.

Privacy and Outrage


Paper by Jordan M. Blanke: “Technology has dramatically altered virtually every aspect of our life in recent years. While technology has always driven change, it seems that these changes are occurring more rapidly and more extensively than ever before. Society and its laws will evolve; but it is not always an easy process. Privacy has changed dramatically in our data-driven world – and continues to change daily. It has always been difficult to define exactly what privacy is, and therefore, it is even more difficult to propose what it should become. As the meaning of privacy often varies from person to person, it is difficult to establish a one-size-fits-all concept. This paper explores some of the historical, legal and ethical development of privacy, discusses how some of the normative values of privacy may survive or change, and examines how outrage has been – and will continue to be – a driver of such change….(More)”.

Reclaiming personal data for the common good


Theo Bass at Nesta: “…The argument of our new report for DECODE is that more of the social value of personal data can be discovered by tools and platforms that give people the power to decide how their data is used. We need to flip the current model on its head, giving people back full control and respecting our data protection and fundamental rights framework.

The report describes how this might pave the way for a fairer distribution of the value generated by data, while opening up new use-cases that are valuable to government, society and individuals themselves. In order to achieve this vision, the DECODE project will develop and test the following:

Flexible rules to give people full control: There is currently a lack of technical and legal norms that would allow people to control and share data on their own terms. If this were possible, then people might be able to share their data for the public good, or publish it as anonymised open data under specific conditions, or for specific use-cases (say, non-commercial purposes). DECODE is working with the Making Sense project and Barcelona City Council to assist local communities with new forms of citizen sensing. The pilots will tackle the challenges of collating, storing and sharing data anonymously to influence policy on the city’s digital democracy platform Decidim (part of the D-CENT toolkit).

Trusted platforms to realise the collective value of data: Much of the opportunity will only be realised where individuals are able to pool their data together to leverage its potential economic and social value. Platform cooperatives offer a feasible model, highlighting the potential of digital technologies to help members collectively govern themselves. Effective data sharing has to be underpinned by high levels of user trust, and platform co-ops achieve this by embedding openness, respect for individual users’ privacy, and democratic participation over how decisions are made. DECODE is working with two platform co-ops – a neighbourhood social networking site called Gebied Online; and a democratic alternative to Airbnb in Amsterdam called FairBnB – to test new privacy-preserving features and granular data sharing options….(More)”

Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation


Paper by Jack Balkin: “We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.

The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.

The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses who are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.

The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.

In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation….(More)”.

The Promise of Evidence-Based Policymaking


Final Report by the Commission on Evidence-Based Policymaking: “…There are many barriers to the effective use of government data to generate evidence. Better access to these data holds the potential for substantial gains for society. The Commission’s recommendations recognize that the country’s laws and practices are not currently optimized to support the use of data for evidence building, nor in a manner that best protects privacy. To correct these problems, the Commission makes the following recommendations:

  • Establish a National Secure Data Service to facilitate access to data for evidence building while ensuring privacy and transparency in how those data are used. As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects. The National Secure Data Service will do this without creating a data clearinghouse or warehouse.
  • Require stringent privacy qualifications for acquiring and combining data for statistical purposes at the National Secure Data Service to ensure that data continue to be effectively protected while improving the government’s ability to understand the impacts of programs on a wider range of outcomes. At the same time, consider additional statutory changes to enable ongoing statistical production that, under the same stringent privacy qualifications, may make use of combined data.
  • Review and, where needed, revise laws authorizing Federal data collection and use to ensure that limited access to administrative and survey data is possible to return benefits to the public through improved programs and policies, but only under strict privacy controls.
  • Ensure state-collected quarterly earnings data are available for statistical purposes, including to support the many evidence-building activities for which earnings are an important outcome.
  • Make additional state-collected data about Federal programs available for evidence building. Where appropriate, states that administer programs with substantial Federal investment should in return provide the data necessary for evidence building.
  • Develop a uniform process for external researchers to apply and qualify for secure access to confidential government data for evidence-building purposes while protecting privacy by carefully restricting data access to qualified and approved researchers…(More)”