The EU General Data Protection Regulation: A Commentary/Update of Selected Articles


Open Access Book edited by C. Kuner, L.A. Bygrave, C. Docksey et al.: “…provides an update for selected articles of the GDPR Commentary published in 2020 by Oxford University Press. It covers developments between the last date of coverage of the Commentary (1 August 2019) and 1 January 2021 (with a few exceptions when later developments are taken into account). Edited by Christopher Kuner, Lee A. Bygrave, Chris Docksey, Laura Drechsler, and Luca Tosoni, it covers 49 articles of the GDPR, and is being made freely accessible with the kind permission of Oxford University Press. It also includes two appendices that cover the same period as the rest of this update: the first deals with judgments of the European courts and some selected judgments of particular importance from national courts, and the second with EDPB papers…(More)”

Responsible Data Science


Book by Peter Bruce and Grant Fleming: “The increasing popularity of data science has resulted in numerous well-publicized cases of bias, injustice, and discrimination. The widespread deployment of ‘black box’ algorithms that are difficult or impossible to understand and explain, even for their developers, is a primary source of these unanticipated harms, making modern techniques and methods for manipulating large data sets seem sinister, even dangerous. When put in the hands of authoritarian governments, these algorithms have enabled suppression of political dissent and persecution of minorities. To prevent these harms, data scientists everywhere must come to understand how the algorithms that they build and deploy may harm certain groups or be unfair.

Responsible Data Science delivers a comprehensive, practical treatment of how to implement data science solutions in an even-handed and ethical manner that minimizes the risk of undue harm to vulnerable members of society. Both data science practitioners and managers of analytics teams will learn how to:

  • Improve model transparency, even for black box models
  • Diagnose bias and unfairness within models using multiple metrics
  • Audit projects to ensure fairness and minimize the possibility of unintended harm…(More)”
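The second bullet, diagnosing bias and unfairness with multiple metrics, is the item that lends itself to a concrete example. As a purely illustrative sketch (not code from the book), the snippet below computes two common group-fairness metrics for a binary classifier and a binary protected attribute: a demographic-parity gap (difference in selection rates) and an equal-opportunity gap (difference in true-positive rates). The data, metric choices, and function names are assumptions made for illustration.

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare two simple group-fairness metrics across a binary protected group.

    y_true, y_pred: binary labels and model predictions
    group: binary array marking membership in the protected group
    (Illustrative only; metric choices and names are assumptions.)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in (0, 1):
        mask = group == g
        selection_rate = y_pred[mask].mean()          # share of the group predicted positive
        positives = mask & (y_true == 1)
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "tpr": tpr}
    report["demographic_parity_gap"] = abs(
        report[0]["selection_rate"] - report[1]["selection_rate"])
    report["equal_opportunity_gap"] = abs(report[0]["tpr"] - report[1]["tpr"])
    return report

# Toy example: a classifier that flags members of group 1 more often than group 0
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(fairness_report(y_true, y_pred, group))
```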

Governing Privacy in Knowledge Commons


Open Access Book edited by Madelyn Rose Sanfilippo et al: “…explores how privacy impacts knowledge production, community formation, and collaborative governance in diverse contexts, ranging from academia and IoT to social media and mental health. Using nine new case studies and a meta-analysis of previous knowledge commons literature, the book integrates the Governing Knowledge Commons framework with Helen Nissenbaum’s Contextual Integrity framework. The multidisciplinary case studies show that personal information is often a key component of the resources created by knowledge commons. Moreover, even when it is not the focus of the commons, personal information governance may require community participation and boundaries. Taken together, the chapters illustrate the importance of exit and voice in constructing and sustaining knowledge commons through appropriate personal information flows. They also shed light on the shortcomings of current notice-and-consent style regulation of social media platforms….(More)”.

Regulating Personal Data: Data Models and Digital Services Trade


Report by Martina Francesca Ferracane and Erik van der Marel: “While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second model based on conditional transfers and processing, and a third model based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data.

The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model they adhere to for each of the two components of data regulation (i.e. cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade compared to countries with different regulatory data models. The results show that sharing the open data model for cross-border data transfers is positively associated with trade in digital services, while sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country-pairs sharing the limited model, instead, exhibit a double whammy: they show negative trade correlations throughout the two components of data regulation. Robustness checks control for restrictions in digital services, the quality of digital infrastructure, as well as for the use of alternative data sources….(More)”.
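To make the empirical strategy concrete: a gravity analysis of this kind typically regresses (log) bilateral digital services trade on measures of economic size and distance, plus indicator variables for whether a country pair shares a given data model. The sketch below, built on synthetic data with the statsmodels formula API, shows the general shape of such a specification; every variable name, value, and coefficient here is an assumption for illustration and does not reproduce the study’s data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy country-pair data; names and values are illustrative assumptions,
# not the study's actual dataset or specification.
rng = np.random.default_rng(0)
n = 300
pair_model = rng.choice(["open", "conditional", "limited", "different"], size=n)
df = pd.DataFrame({
    "log_digital_trade": rng.normal(5, 1, n),   # log bilateral digital services trade
    "log_gdp_origin": rng.normal(12, 1, n),
    "log_gdp_dest": rng.normal(12, 1, n),
    "log_distance": rng.normal(8, 0.5, n),
    "shared_open": (pair_model == "open").astype(int),
    "shared_conditional": (pair_model == "conditional").astype(int),
    "shared_limited": (pair_model == "limited").astype(int),
})

# Stripped-down gravity specification: trade as a function of economic size,
# distance, and dummies for the pair sharing a given data model.
gravity = smf.ols(
    "log_digital_trade ~ log_gdp_origin + log_gdp_dest + log_distance"
    " + shared_open + shared_conditional + shared_limited",
    data=df,
).fit()
print(gravity.summary().tables[1])
```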

More than a number: The telephone and the history of digital identification


Article by Jennifer Holt and Michael Palm: “This article examines the telephone’s entangled history within contemporary infrastructural systems of ‘big data’, identity and, ultimately, surveillance. It explores the use of telephone numbers, keypads and wires to offer new perspective on the imbrication of telephonic information, interface and infrastructure within contemporary surveillance regimes. The article explores telephone exchanges as arbiters of cultural identities, keypads as the foundation of digital transactions and wireline networks as enacting the transformation of citizens and consumers into digital subjects ripe for commodification and surveillance. Ultimately, this article argues that telephone history – specifically the histories of telephone numbers and keypads as well as infrastructure and policy in the United States – continues to inform contemporary practices of social and economic exchange as they relate to consumer identity, as well as to current discourses about surveillance and privacy in a digital age…(More)”.

Towards an accountable Internet of Things: A call for ‘reviewability’


Paper by Chris Norval, Jennifer Cobbe and Jatinder Singh: “As the IoT becomes increasingly ubiquitous, concerns are being raised about how IoT systems are being built and deployed. Connected devices will generate vast quantities of data, which drive algorithmic systems and result in real-world consequences. Things will go wrong, and when they do, how do we identify what happened, why it happened, and who is responsible? Given the complexity of such systems, where do we even begin?
This chapter outlines aspects of accountability as they relate to IoT, in the context of the increasingly interconnected and data-driven nature of such systems. Specifically, we argue the urgent need for mechanisms – legal, technical, and organisational – that facilitate the review of IoT systems. Such mechanisms work to support accountability, by enabling the relevant stakeholders to better understand, assess, interrogate and challenge the connected environments that increasingly pervade our world….(More)”

Pretty Good Phone Privacy


Paper by Paul Schmitt and Barath Raghavan: “To receive service in today’s cellular architecture, phones uniquely identify themselves to towers and thus to operators. This is now a cause of major privacy violations, as operators sell and leak identity and location data of hundreds of millions of mobile users. In this paper, we take an end-to-end perspective on the cellular architecture and find key points of decoupling that enable us to protect user identity and location privacy with no changes to physical infrastructure, no added latency, and no requirement of direct cooperation from existing operators. We describe Pretty Good Phone Privacy (PGPP) and demonstrate how our modified backend stack (NGC) works with real phones to provide ordinary yet privacy-preserving connectivity. We explore inherent privacy and efficiency tradeoffs in a simulation of a large metropolitan region. We show how PGPP maintains today’s control overheads while significantly improving user identity and location privacy…(More)”.
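The core move, decoupling ‘may this device connect?’ from ‘which subscriber is this?’, can be illustrated with a toy admission check. In the sketch below, a billing provider issues single-use access tokens ahead of time and the network validates a token without learning an identity. This is a conceptual illustration only, not the paper’s actual protocol: the real design runs inside a modified cellular core, and it would additionally need unlinkable token issuance (e.g. blind signatures), which this toy omits.

```python
import hmac, hashlib, secrets

# Toy illustration of decoupling "allowed to connect" from "who is connecting".
# Not the PGPP paper's protocol; token issuance here is NOT unlinkable.

PROVIDER_KEY = secrets.token_bytes(32)   # held by the billing provider / core
spent = set()                            # nonces already redeemed

def issue_tokens(count):
    """Sold to a paying subscriber ahead of time; the tokens carry no user ID."""
    tokens = []
    for _ in range(count):
        nonce = secrets.token_hex(16)
        tag = hmac.new(PROVIDER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
        tokens.append((nonce, tag))
    return tokens

def admit(token):
    """Network-side check: a valid, unspent token grants connectivity."""
    nonce, tag = token
    expected = hmac.new(PROVIDER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(tag, expected) and nonce not in spent:
        spent.add(nonce)
        return True
    return False

# Every phone can present the same shared network identifier to the tower,
# proving that service was paid for with a token instead of a unique IMSI/SUPI.
my_tokens = issue_tokens(3)
print(admit(my_tokens[0]))   # True:  connectivity granted, no identity revealed
print(admit(my_tokens[0]))   # False: replayed token rejected
```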

How a Google Street View image of your house predicts your risk of a car accident


MIT Technology Review: “Google Street View has become a surprisingly useful way to learn about the world without stepping into it. People use it to plan journeys, to explore holiday destinations, and to virtually stalk friends and enemies alike.

But researchers have found more insidious uses. In 2017 a team of researchers used the images to study the distribution of car types in the US and then used that data to determine the demographic makeup of the country. It turns out that the car you drive is a surprisingly reliable proxy for your income level, your education, your occupation, and even the way you vote in elections.

[Image: Street View of houses in Poland]

Now a different group has gone even further. Łukasz Kidziński at Stanford University in California and Kinga Kita-Wojciechowska at the University of Warsaw in Poland have used Street View images of people’s houses to determine how likely they are to be involved in a car accident. That’s valuable information that an insurance company could use to set premiums.

The result raises important questions about the way personal information can leak from seemingly innocent data sets and whether organizations should be able to use it for commercial purposes.

Insurance data

The researchers’ method is straightforward. They began with a data set of 20,000 records of people who had taken out car insurance in Poland between 2013 and 2015. These were randomly selected from the database of an undisclosed insurance company.

Each record included the address of the policyholder and the number of damage claims he or she made during the 2013–2015 period. The insurer also shared its own prediction of future claims, calculated using its state-of-the-art risk model that takes into account the policyholder’s zip code and the driver’s age, sex, claim history, and so on.

The question that Kidziński and Kita-Wojciechowska investigated is whether they could make a more accurate prediction using a Google Street View image of the policyholder’s house….(More)”.
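In spirit, the test is whether features derived from the house image add predictive power on top of the insurer’s existing risk score. The sketch below mimics that comparison on synthetic data with scikit-learn, fitting one logistic regression on the insurer score alone and one augmented with image-derived features, then comparing AUC. The features, model choice, and numbers are assumptions for illustration and do not reproduce the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the real study's features, models and results differ.
rng = np.random.default_rng(1)
n = 5000
insurer_score = rng.normal(0, 1, n)          # insurer's existing risk prediction
house_features = rng.normal(0, 1, (n, 3))    # e.g. house type / age / condition from the image
logit = 0.8 * insurer_score + 0.3 * house_features[:, 0] - 2.0
claims = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # simulated claim outcomes

X_base = insurer_score.reshape(-1, 1)          # insurer score only
X_aug = np.hstack([X_base, house_features])    # score + image-derived features
Xb_tr, Xb_te, Xa_tr, Xa_te, y_tr, y_te = train_test_split(
    X_base, X_aug, claims, test_size=0.3, random_state=0
)

base = LogisticRegression().fit(Xb_tr, y_tr)
aug = LogisticRegression().fit(Xa_tr, y_tr)
print("baseline AUC:           ", roc_auc_score(y_te, base.predict_proba(Xb_te)[:, 1]))
print("with image features AUC:", roc_auc_score(y_te, aug.predict_proba(Xa_te)[:, 1]))
```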

Democratizing data in a 5G world


Blog by Dimitrios Dosis at Mastercard: “The next generation of mobile technology has arrived, and it’s more powerful than anything we’ve experienced before. 5G can move data faster, with little delay — in fact, with 5G, you could’ve downloaded a movie in the time you’ve read this far. 5G will also create a vast network of connected machines. The Internet of Things will finally deliver on its promise to fuse all our smart products — vehicles, appliances, personal devices — into a single streamlined ecosystem.

My smartwatch could monitor my blood pressure and schedule a doctor’s appointment, while my car could collect data on how I drive and how much gas I use while behind the wheel. In some cities, petrol trucks already act as roving gas stations, receiving pings when cars are low on gas and refueling them as needed, wherever they are.

This amounts to an incredible proliferation of data. By 2025, every connected person will conduct nearly 5,000 data interactions every day — one every 18 seconds — whether they know it or not. 

Enticing and convenient as new 5G-powered developments may be, they also raise complex questions about data. Namely, who is privy to our personal information? As your smart refrigerator records the foods you buy, will the refrigerator’s manufacturer be able to see your eating habits? Could it sell that information to a consumer food product company for market research without your knowledge? And where would the information go from there?

People are already asking critical questions about data privacy. In fact, 72% of them say they are paying attention to how companies collect and use their data, according to a global survey released last year by the Harvard Business Review Analytic Services. The survey, sponsored by Mastercard, also found that while 60% of executives believed consumers think the value they get in exchange for sharing their data is worthwhile, only 44% of consumers actually felt that way.

There are many reasons for this data disconnect, including the lack of transparency that currently exists in data sharing and the tension between an individual’s need for privacy and his or her desire for personalization.

This paradox can be solved by putting data in the hands of the people who create it — giving consumers the ability to manage, control and share their own personal information when they want to, with whom they want to, and in a way that benefits them.

That’s the basis of Mastercard’s core set of principles regarding data responsibility – and in this 5G world, it’s more important than ever. We will be able to gain from these new technologies, but this change must come with trust and user control at its core. The data ecosystem needs to evolve from schemes dominated by third parties, where some data brokers collect inferred, often unreliable and inaccurate data, then share it without the consumer’s knowledge….(More)”.

Give more data, awareness and control to individual citizens, and they will help COVID-19 containment


Paper by Mirco Nanni et al: “The rapid dynamics of COVID-19 calls for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in “phase 2” of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens’ privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens’ “personal data stores”, to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering for infected people in a privacy-preserving fashion; and, in turn, this enables both contact tracing and the early detection of outbreak hotspots at a more finely grained geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is twofold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates—if and when they want and for specific aims—with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good to the extent they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society….(More)”.
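A minimal sketch of the personal-data-store idea the authors advocate (illustrative only, not the paper’s architecture or any deployed app): location records stay on the device, and only coarse spatio-temporal aggregates are released, and only if the user tests positive and chooses to share them.

```python
from collections import Counter
from datetime import datetime

class PersonalDataStore:
    """Toy on-device store: raw visits never leave the device by default."""

    def __init__(self):
        self._visits = []   # (timestamp, latitude, longitude), kept locally

    def record_visit(self, ts, lat, lon):
        self._visits.append((ts, lat, lon))

    def share_aggregates(self, cell_size=0.01):
        """Called only if the user tests positive and consents: release coarse
        space-time counts (rounded grid cell per hour), never the raw trajectory."""
        counts = Counter()
        for ts, lat, lon in self._visits:
            cell = (round(lat / cell_size) * cell_size,
                    round(lon / cell_size) * cell_size)
            counts[(ts.strftime("%Y-%m-%d %H:00"), cell)] += 1
        return dict(counts)

store = PersonalDataStore()
store.record_visit(datetime(2020, 5, 1, 9, 15), 43.7096, 10.4036)
store.record_visit(datetime(2020, 5, 1, 9, 40), 43.7101, 10.4044)
# Nothing is transmitted unless the user opts in after a positive test:
print(store.share_aggregates())
```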