Regulating Personal Data: Data Models and Digital Services Trade


Report by Martina Francesca Ferracane and Erik van der Marel: “While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second based on conditional transfers and processing, and a third based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data.

The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model each adheres to for the two components of data regulation (i.e. cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade than countries with different regulatory data models. The results show that sharing the open model for cross-border data transfers is positively associated with trade in digital services, and that sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country pairs sharing the limited model, by contrast, face a double whammy: they show negative trade correlations across both components of data regulation. Robustness checks control for restrictions in digital services, the quality of digital infrastructure, and the use of alternative data sources…(More)”.
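The gravity estimation the study describes can be illustrated with a toy specification: bilateral digital-services trade regressed on standard gravity controls plus dummies for whether a country pair shares a data model. The sketch below uses synthetic data and illustrative variable names (not the authors' actual specification or dataset), simulating a positive "shared open model" effect and a negative "shared limited model" effect, in line with the findings quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 500

# Synthetic gravity covariates for country pairs (illustrative only)
log_gdp_i = rng.normal(10, 1, n_pairs)       # log GDP, exporter
log_gdp_j = rng.normal(10, 1, n_pairs)       # log GDP, importer
log_dist = rng.normal(8, 0.5, n_pairs)       # log bilateral distance
same_open = rng.integers(0, 2, n_pairs)      # 1 if both use the "open" model
same_limited = rng.integers(0, 2, n_pairs)   # 1 if both use the "limited" model

# Simulate log digital-services trade with a positive "open" effect
# and a negative "limited" effect, echoing the study's direction of results
log_trade = (1.0 * log_gdp_i + 1.0 * log_gdp_j - 1.5 * log_dist
             + 0.4 * same_open - 0.6 * same_limited
             + rng.normal(0, 0.3, n_pairs))

# OLS estimation of the gravity equation
X = np.column_stack([np.ones(n_pairs), log_gdp_i, log_gdp_j,
                     log_dist, same_open, same_limited])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print({"open_dummy": round(beta[4], 2), "limited_dummy": round(beta[5], 2)})
```

The actual study works with fixed effects and additional controls; the point here is only the shape of the exercise: shared-model dummies entering a gravity equation whose coefficients are then read as trade correlations.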

More than a number: The telephone and the history of digital identification


Article by Jennifer Holt and Michael Palm: “This article examines the telephone’s entangled history within contemporary infrastructural systems of ‘big data’, identity and, ultimately, surveillance. It explores the use of telephone numbers, keypads and wires to offer a new perspective on the imbrication of telephonic information, interface and infrastructure within contemporary surveillance regimes. The article explores telephone exchanges as arbiters of cultural identities, keypads as the foundation of digital transactions and wireline networks as enacting the transformation of citizens and consumers into digital subjects ripe for commodification and surveillance. Ultimately, this article argues that telephone history – specifically the histories of telephone numbers and keypads as well as infrastructure and policy in the United States – continues to inform contemporary practices of social and economic exchange as they relate to consumer identity, as well as current discourses about surveillance and privacy in a digital age…(More)”.

Towards an accountable Internet of Things: A call for ‘reviewability’


Paper by Chris Norval, Jennifer Cobbe and Jatinder Singh: “As the IoT becomes increasingly ubiquitous, concerns are being raised about how IoT systems are being built and deployed. Connected devices will generate vast quantities of data, which drive algorithmic systems and result in real-world consequences. Things will go wrong, and when they do, how do we identify what happened, why it happened, and who is responsible? Given the complexity of such systems, where do we even begin?
This chapter outlines aspects of accountability as they relate to IoT, in the context of the increasingly interconnected and data-driven nature of such systems. Specifically, we argue the urgent need for mechanisms – legal, technical, and organisational – that facilitate the review of IoT systems. Such mechanisms work to support accountability, by enabling the relevant stakeholders to better understand, assess, interrogate and challenge the connected environments that increasingly pervade our world….(More)”

Pretty Good Phone Privacy


Paper by Paul Schmitt and Barath Raghavan: “To receive service in today’s cellular architecture, phones uniquely identify themselves to towers and thus to operators. This is now a cause of major privacy violations, as operators sell and leak identity and location data of hundreds of millions of mobile users. In this paper, we take an end-to-end perspective on the cellular architecture and find key points of decoupling that enable us to protect user identity and location privacy with no changes to physical infrastructure, no added latency, and no requirement of direct cooperation from existing operators. We describe Pretty Good Phone Privacy (PGPP) and demonstrate how our modified backend stack (NGC) works with real phones to provide ordinary yet privacy-preserving connectivity. We explore inherent privacy and efficiency tradeoffs in a simulation of a large metropolitan region. We show how PGPP maintains today’s control overheads while significantly improving user identity and location privacy…(More)”.
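The decoupling PGPP relies on can be illustrated with a toy token scheme. This is not the paper's actual protocol (PGPP uses cryptographic techniques such as blind signatures so that even the issuer cannot link tokens to subscribers); the sketch below only shows the architectural idea: a billing entity authenticates the subscriber and issues single-use connectivity tokens that carry no identity, so the network core can verify entitlement to service without learning who is attaching. All class and method names are illustrative:

```python
import secrets

class BillingProvider:
    """Authenticates subscribers and issues anonymous, single-use tokens.
    Toy stand-in for PGPP-style token issuance."""
    def __init__(self):
        self.valid_tokens = set()

    def issue_tokens(self, subscriber_id: str, n: int) -> list:
        # Identity is checked here, but the tokens themselves are random:
        # nothing in a token reveals subscriber_id.
        assert subscriber_id  # placeholder for a real authentication step
        tokens = [secrets.token_hex(16) for _ in range(n)]
        self.valid_tokens.update(tokens)
        return tokens

class CellularCore:
    """Grants connectivity by redeeming tokens; never sees identities."""
    def __init__(self, billing: BillingProvider):
        self.billing = billing

    def attach(self, token: str) -> bool:
        if token in self.billing.valid_tokens:
            self.billing.valid_tokens.remove(token)  # single use, no replay
            return True
        return False

billing = BillingProvider()
core = CellularCore(billing)
tokens = billing.issue_tokens("alice", 2)
print(core.attach(tokens[0]))  # True: a valid token grants service
print(core.attach(tokens[0]))  # False: spent tokens cannot be replayed
```

In this toy version the issuer could still link tokens to "alice"; the paper's contribution is precisely to close that gap cryptographically while keeping the rest of the cellular stack unchanged.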

How a Google Street View image of your house predicts your risk of a car accident


MIT Technology Review: “Google Street View has become a surprisingly useful way to learn about the world without stepping into it. People use it to plan journeys, to explore holiday destinations, and to virtually stalk friends and enemies alike.

But researchers have found more insidious uses. In 2017 a team of researchers used the images to study the distribution of car types in the US and then used that data to determine the demographic makeup of the country. It turns out that the car you drive is a surprisingly reliable proxy for your income level, your education, your occupation, and even the way you vote in elections.

[Image: Street view of houses in Poland]

Now a different group has gone even further. Łukasz Kidziński at Stanford University in California and Kinga Kita-Wojciechowska at the University of Warsaw in Poland have used Street View images of people’s houses to determine how likely they are to be involved in a car accident. That’s valuable information that an insurance company could use to set premiums.

The result raises important questions about the way personal information can leak from seemingly innocent data sets and whether organizations should be able to use it for commercial purposes.

Insurance data

The researchers’ method is straightforward. They began with a data set of 20,000 records of people who had taken out car insurance in Poland between 2013 and 2015. These were randomly selected from the database of an undisclosed insurance company.

Each record included the address of the policyholder and the number of damage claims he or she made during the 2013–15 period. The insurer also shared its own prediction of future claims, calculated using its state-of-the-art risk model that takes into account the policyholder’s zip code and the driver’s age, sex, claim history, and so on.

The question that Kidziński and Kita-Wojciechowska investigated is whether they could make a more accurate prediction using a Google Street View image of the policyholder’s house….(More)”.
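The shape of that exercise can be sketched in miniature: take the insurer's tabular features, add a feature derived from the house image (in the study, annotated attributes such as house type, age and condition), and check whether the augmented model predicts claims better. The code below is a toy on synthetic data, not the authors' model; the "house" variable stands in for an image-derived feature that genuinely carries signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Insurer-style tabular features (synthetic): standardized driver age, prior claims
age_z = rng.normal(0, 1, n)
prior = rng.poisson(0.3, n)
# A feature annotated from the house image (e.g., building condition) -- illustrative
house = rng.normal(0, 1, n)

# Simulate claim occurrence: the house feature genuinely carries signal
logit = -2.0 - 0.3 * age_z + 0.5 * prior + 0.8 * house
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logloss(features, y, steps=3000, lr=0.1):
    """Fit logistic regression by gradient descent; return mean log-loss."""
    X = np.column_stack([np.ones(len(y)), features])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    p = np.clip(1 / (1 + np.exp(-X @ w)), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

base = fit_logloss(np.column_stack([age_z, prior]), y)
augmented = fit_logloss(np.column_stack([age_z, prior, house]), y)
print(f"log-loss without image feature: {base:.3f}, with: {augmented:.3f}")
```

When the image-derived feature is informative, the augmented model's loss drops below the baseline's, which is the pattern the researchers report against the insurer's own risk model.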

Democratizing data in a 5G world


Blog by Dimitrios Dosis at Mastercard: “The next generation of mobile technology has arrived, and it’s more powerful than anything we’ve experienced before. 5G can move data faster, with little delay — in fact, with 5G, you could’ve downloaded a movie in the time you’ve read this far. 5G will also create a vast network of connected machines. The Internet of Things will finally deliver on its promise to fuse all our smart products — vehicles, appliances, personal devices — into a single streamlined ecosystem.

My smartwatch could monitor my blood pressure and schedule a doctor’s appointment, while my car could collect data on how I drive and how much gas I use while behind the wheel. In some cities, petrol trucks already act as roving gas stations, receiving pings when cars are low on gas and refueling them as needed, wherever they are.

This amounts to an incredible proliferation of data. By 2025, every connected person will conduct nearly 5,000 data interactions every day — one every 18 seconds — whether they know it or not. 
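The per-day and per-interval figures above are consistent with each other, as a quick check shows (using the figure of roughly 4,900 daily interactions commonly attributed to IDC's 2025 forecast, which the "nearly 5,000" rounds up):

```python
# Sanity check: ~4,909 data interactions per person per day
# (the "nearly 5,000" above) implies roughly one every 18 seconds.
interactions_per_day = 4909  # figure commonly attributed to IDC's 2025 forecast
seconds_between = 24 * 60 * 60 / interactions_per_day
print(round(seconds_between))  # 18
```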

Enticing and convenient as new 5G-powered developments may be, they also raise complex questions about data. Namely, who is privy to our personal information? As your smart refrigerator records the foods you buy, will the refrigerator’s manufacturer be able to see your eating habits? Could it sell that information to a consumer food product company for market research without your knowledge? And where would the information go from there?

People are already asking critical questions about data privacy. In fact, 72% of them say they are paying attention to how companies collect and use their data, according to a global survey released last year by the Harvard Business Review Analytic Services. The survey, sponsored by Mastercard, also found that while 60% of executives believed consumers think the value they get in exchange for sharing their data is worthwhile, only 44% of consumers actually felt that way.

There are many reasons for this data disconnect, including the lack of transparency that currently exists in data sharing and the tension between an individual’s need for privacy and his or her desire for personalization.

This paradox can be solved by putting data in the hands of the people who create it — giving consumers the ability to manage, control and share their own personal information when they want to, with whom they want to, and in a way that benefits them.

That’s the basis of Mastercard’s core set of principles regarding data responsibility – and in this 5G world, it’s more important than ever. We will be able to gain from these new technologies, but this change must come with trust and user control at its core. The data ecosystem needs to evolve from schemes dominated by third parties, where some data brokers collect inferred, often unreliable and inaccurate data, then share it without the consumer’s knowledge….(More)”.

Give more data, awareness and control to individual citizens, and they will help COVID-19 containment


Paper by Mirco Nanni et al: “The rapid dynamics of COVID-19 call for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the “phase 2” of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens’ privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens’ “personal data stores”, to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering about infected people in a privacy-preserving fashion, which in turn enables both contact tracing and the early detection of outbreak hotspots at a finer geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates—if and when they want and for specific aims—with health authorities, for instance.
Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good to the extent they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society…(More)”.
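The first recommendation, in essence, is a data-flow rule: events stay on the device, and only coarse, consented aggregates ever leave it. A minimal sketch of that rule (illustrative only; not any specific app's protocol, and class, method, and parameter names are assumptions):

```python
import collections
import datetime

class PersonalDataStore:
    """On-device store: contact and location events never leave the phone
    unless the user opts in after a positive test (illustrative sketch)."""
    def __init__(self):
        self._events = []  # (timestamp, lat, lon) kept locally only

    def record(self, ts: datetime.datetime, lat: float, lon: float):
        self._events.append((ts, lat, lon))

    def share_aggregates(self, consented: bool, cell_deg: float = 0.01):
        """Release only coarse spatio-temporal counts, and only with consent."""
        if not consented:
            return None  # nothing leaves the device
        counts = collections.Counter(
            (ts.date().isoformat(),
             round(lat / cell_deg) * cell_deg,   # snap to a coarse grid cell
             round(lon / cell_deg) * cell_deg)
            for ts, lat, lon in self._events)
        return dict(counts)

store = PersonalDataStore()
t = datetime.datetime(2021, 3, 1, 9, 0)
store.record(t, 52.2297, 21.0122)
store.record(t, 52.2298, 21.0123)  # falls in the same coarse cell as above
print(store.share_aggregates(consented=False))  # None: no consent, no data
print(store.share_aggregates(consented=True))   # one coarse cell with count 2
```

The aggregation granularity (`cell_deg`, daily buckets) is the knob the paper's "privacy-preserving level of granularity" refers to: the backend sees enough to spot a hotspot, but not an individual trajectory.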

Privacy and digital ethics after the pandemic


Carissa Véliz at Nature: “The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet [1], and Facebook has shadow profiles on netizens even when they have never had an account on the platform [2].

Citizens have responded to the countless data abuses in the past few years with what has been described as a ‘techlash’ [3]. Tech companies whose business model is based on surveillance ceased to be perceived as good guys in hoodies who offered services to make our lives better. They were instead data predators jeopardizing not only users’ privacy and security but also democracy itself. During lockdown, communication apps became necessary for any and all social interaction beyond our homes. People have had to use online tools to work, get an education, receive medical attention, and enjoy much-needed entertainment. Gratefulness for having technology that allows us to stay in contact during such circumstances has thus watered down the general techlash. Big tech’s stocks have been consistently on the rise during the pandemic, in line with its accumulating power.

As a result of the pandemic, however, any lingering illusion of voluntariness in the use of technology has disappeared. It is not only citizens who rely on big tech to perform their jobs: businesses, universities, health services, and governments need the platforms to carry out their everyday functions. All over the world, governmental and diplomatic meetings are being carried out on platforms such as Zoom and Teams. Since governments do not have full control over the platforms they use, confidentiality is uncertain.

Enhanced power asymmetries have also worsened the vulnerability of ordinary citizens in areas that range from interacting with government to ordering food online, and almost everything in between. The pandemic has, for example, led to an increase in the surveillance of employees as they work from home [4]. Students are likewise being subjected to more scrutiny: by their schools and teachers and, above all, by the companies on which they depend [5]. Surveillance for public health purposes has likewise increased. Privacy losses disempower citizens and often lead to further abuses of power. In the UK, for example, companies collecting data for pubs and restaurants for contact-tracing purposes have sold on that information [6].

Such abuses are not isolated events. For the past two decades, we have allowed an unethical business model that depends on the systematic violation of the right to privacy to run amok. As long as we treat personal data as a commodity, there will be a high risk of it being misused — by being stolen in a hack or by being sold to the highest bidder (which often includes nefarious agents)….(More)”.

Sharing Student Health Data with Health Agencies: Considerations and Recommendations


Memo by the Center for Democracy and Technology: “As schools respond to COVID-19 on their campuses, some have shared student information with state and local health agencies, often to aid in contact tracing or to provide services to students. Federal and state student privacy laws, however, do not necessarily permit that sharing, and schools should seek to protect both student health and student privacy.

How Are Schools Sharing COVID-Related Student Data?

When it comes to sharing student data, schools’ practices vary widely. For example, the New York City Department of Education provides a consent form for sharing COVID-related student data. Other schools do not have consent forms, but instead, share COVID-related data as required by local or state health agencies. For instance, Orange County Public Schools in Florida assists the local health agency in contact tracing by collecting information such as students’ names and dates of birth. Some districts, such as the Dallas Independent School District in Texas, report positive cases to the county, but do not publicly specify what information is reported. Many schools, however, do not publicly disclose their collection and sharing of COVID-related student data….(More)”

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Mika Westerlund, Diane A. Isabelle, and Seppo Leminen: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public–private collaboration, such as the launch of contact-tracing apps to tackle COVID-19. Thus, such apps also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data and to the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, the government’s intelligence activities, and the security of the increasing biodata on their trust in and acceptance of the government’s use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of the government’s use of personal data, but not necessarily on trust in the government being respectful of privacy. Authorities should be more transparent about the collection and uses of data…(More)”