The Razor’s Edge: Liberalizing the Digital Surveillance Ecosystem


Report by CNAS: “The COVID-19 pandemic is accelerating global trends in digital surveillance. Public health imperatives, combined with opportunism by autocratic regimes and authoritarian-leaning leaders, are expanding personal data collection and surveillance. This tendency toward increased surveillance is taking shape differently in repressive regimes, open societies, and the nation-states in between.

China, run by the Chinese Communist Party (CCP), is leading the world in using technology to enforce social control, monitor populations, and influence behavior. Maximizing this control depends in part on data aggregation and a growing capacity to link the digital and physical worlds in real time, where online offenses result in swift repercussions. Further, China is increasing investments in surveillance technology and attempting to influence the patterns of technology’s global use through the export of authoritarian norms, values, and governance practices. For example, China champions its own technology standards to the rest of the world, while simultaneously peddling legislative models abroad that facilitate access to personal data by the state. Today, the COVID-19 pandemic offers China and other authoritarian nations the opportunity to test and expand their existing surveillance powers internally, as well as make these more extensive measures permanent.

Global swing states are already exhibiting troubling trends in their use of digital surveillance, including establishing centralized, government-held databases and trading surveillance practices with authoritarian regimes. Amid the pandemic, swing states like India seem to be taking cues from autocratic regimes by mandating the download of government-enabled contact-tracing applications. Yet, for now, these swing states appear responsive to their citizenry and sensitive to public agitation over privacy concerns.

Open societies and democracies can exhibit surveillance trends similar to those in authoritarian regimes and swing states, including the expansion of digital surveillance in the name of public safety and growing private sector capabilities to collect and analyze data on individuals. Yet these trends toward greater surveillance still occur within the context of pluralistic, open societies that feature ongoing debates about the limits of surveillance. However, the pandemic stands to shift the debate in these countries from skepticism over personal data collection to wider acceptance. Thus far, the spectrum of responses to public surveillance reflects the diversity of democracies’ citizenry and processes….(More)”.

The Pandemic Is No Excuse to Surveil Students


Zeynep Tufekci in the Atlantic: “In Michigan, a small liberal-arts college is requiring students to install an app called Aura, which tracks their location in real time, before they come to campus. Oakland University, also in Michigan, announced a mandatory wearable that would track symptoms, but, facing a student-led petition, then said it would be optional. The University of Missouri, too, has an app that tracks when students enter and exit classrooms. This practice is spreading: In an attempt to open during the pandemic, many universities and colleges around the country are forcing students to download location-tracking apps, sometimes as a condition of enrollment. Many of these apps function via Bluetooth sensors or Wi-Fi networks. When students enter a classroom, their phone informs a sensor that’s been installed in the room, or the app checks the Wi-Fi networks nearby to determine the phone’s location.
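
To make the tracking mechanism concrete, here is a minimal sketch (not taken from any of the apps named above) of how Wi-Fi based presence detection can work: the app compares the access points visible to the phone against a campus map of which access point sits in which room. All identifiers, room names, and the mapping below are hypothetical.

```python
# Hypothetical illustration of Wi-Fi-based presence detection; the BSSIDs,
# room names, and mapping are invented for the example.
CAMPUS_AP_MAP = {
    "aa:bb:cc:00:00:01": "Science Hall 101",
    "aa:bb:cc:00:00:02": "Science Hall 101",
    "aa:bb:cc:00:00:10": "Library, 2nd floor",
}

def infer_location(visible_bssids: list) -> str:
    """Return the room that most of the visible access points map to, or '' if unknown."""
    rooms = [CAMPUS_AP_MAP[b] for b in visible_bssids if b in CAMPUS_AP_MAP]
    if not rooms:
        return ""
    # The room seen via the most access points is taken as the phone's location.
    return max(set(rooms), key=rooms.count)

# A scan result the app might log alongside a student ID and timestamp:
print(infer_location(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "ff:ee:dd:00:00:99"]))
# -> "Science Hall 101"
```

Bluetooth-beacon variants work the same way, with classroom beacon identifiers standing in for access-point BSSIDs.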

As a university professor, I’ve seen surveillance like this before. Many of these apps replicate the tracking system sometimes installed on the phones of student athletes, for whom it is often mandatory. That system tells us a lot about what we can expect with these apps.

There is a widespread charade in the United States that university athletes, especially those who play high-profile sports such as football and basketball, are just students who happen to be playing sports as amateurs “in their free time.” The reality is that these college athletes in high-level sports, who are aggressively recruited by schools, bring prestige and financial resources to universities, under a regime that requires them to train like professional athletes despite their lack of salary. However, making the most of one’s college education and training at that level are virtually incompatible, simply because the day is 24 hours long and the body, even that of a young, healthy athlete, can only take so much when training so hard. Worse, many of these athletes are minority students, specifically Black men, who were underserved during their whole K–12 education and faced the same challenge then as they do now: Train hard in hopes of a scholarship and try to study with what little time is left, often despite being enrolled in schools with mediocre resources. Many of them arrive at college with an athletic scholarship but not enough academic preparation compared with their peers who went to better schools and could also concentrate on schooling….(More)”

How Competition Impacts Data Privacy


Paper by Aline Blankertz: “A small number of large digital platforms increasingly shape the space for most online interactions around the globe, and they often act with hardly any constraint from competing services. The lack of competition puts those platforms in a powerful position that may allow them to exploit consumers and offer them limited choice. Privacy is increasingly considered one area in which the lack of competition may create harm. Because of these concerns, governments and other institutions are developing proposals to expand the scope for competition authorities to intervene to limit the power of the large platforms and to revive competition.


The first case that has explicitly addressed anticompetitive harm to privacy is the German Bundeskartellamt’s case against Facebook, in which the authority argues that imposing bad privacy terms can amount to an abuse of dominance. Since that case started in 2016, more cases have emerged that deal with the link between competition and privacy. For example, the proposed Google/Fitbit merger has raised concerns about sensitive health data being merged with existing Google profiles, and Apple is under scrutiny for not sharing certain personal data while using it for its own services.

However, addressing bad privacy outcomes through competition policy is effective only if those outcomes are caused, at least partly, by a lack of competition. Six distinct mechanisms can be distinguished through which competition may affect privacy, as summarized in Table 1. These mechanisms constitute different hypotheses about how less competition may influence privacy outcomes, leading either to worse privacy in different ways (mechanisms 1-5) or even to better privacy (mechanism 6). The table also summarizes the available evidence on whether and to what extent the hypothesized effects are present in actual markets….(More)”.

Privacy-Preserving Record Linkage in the context of a National Statistics Institute


Guidance by Rainer Schnell: “Linking existing administrative data sets on the same units is used increasingly as a research strategy in many different fields. Depending on the academic field, this kind of operation has been given different names, but in application areas, this approach is mostly denoted as record linkage. Although linking data on organisations or economic entities is common, the most interesting applications of record linkage concern data on persons. Starting in medicine, this approach is now also being used in the social sciences and official statistics. Furthermore, the joint use of survey data with administrative data is now standard practice. For example, victimisation surveys are linked to police records, labour force surveys are linked to social security databases, and censuses are linked to surveys.

Merging different databases containing information on the same unit is technically trivial if all involved databases have a common identification number, such as a social security number or, as in the Scandinavian countries, a permanent personal identification number. Most of the modern identification numbers contain checksum mechanisms so that errors in these identifiers can be easily detected and corrected. Due to the many advantages of permanent personal identification numbers, similar systems have been introduced or discussed in some European countries outside Scandinavia.
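
To illustrate the kind of checksum mechanism referred to here, the sketch below implements a generic weighted modulus-11 check digit. It is illustrative only and does not reproduce any specific national identification scheme, which differ in their weights and alphabets.

```python
def mod11_check_digit(digits: str) -> str:
    """Weighted modulus-11 check digit: weights 2, 3, 4, ... from the rightmost digit."""
    weighted_sum = sum(int(d) * w for d, w in zip(reversed(digits), range(2, 2 + len(digits))))
    remainder = (11 - weighted_sum % 11) % 11
    return "X" if remainder == 10 else str(remainder)

def is_valid(identifier: str) -> bool:
    """Validate an identifier whose final character is its check digit."""
    body, check = identifier[:-1], identifier[-1]
    return mod11_check_digit(body) == check

number = "1234567"
full_id = number + mod11_check_digit(number)   # "12345679"
print(is_valid(full_id))                        # True: a well-formed identifier
print(is_valid("12395679"))                     # False: a single-digit typo is caught
```

Because 11 is prime and the weights are all smaller than 11, any single-digit substitution changes the weighted sum by an amount not divisible by 11, so such errors are always detected.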

In many jurisdictions, no permanent personal identification number is available for linkage. Examples are New Zealand, Australia, the UK, and Germany. Here, the linkage is most often based on alphanumeric identifiers such as surname, first name, address, and place of birth. In the literature, such identifiers are most often denoted as indirect or quasi-identifiers. Such identifiers are prone to error, for example, due to typographical errors, memory faults (previous addresses), different recordings of the same identifier (for example, swapping of substrings: reversal of first name and last name), deliberately false information (for example, year of birth) or changes of values over time (for example name changes due to marriages). Linking on exact matching information, therefore, yields only a non-randomly selected subset of records.
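
As a rough illustration of why exact matching fails on such quasi-identifiers, and how approximate comparison tolerates typographical variation, here is a minimal sketch using simple string similarity. The field weights and threshold are illustrative; production record linkage typically relies on probabilistic frameworks such as Fellegi-Sunter rather than this ad hoc scoring.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]; 1.0 means identical after cleaning."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def record_similarity(rec_a: dict, rec_b: dict, weights: dict) -> float:
    """Weighted average similarity over the quasi-identifier fields."""
    return sum(similarity(rec_a[f], rec_b[f]) * w for f, w in weights.items()) / sum(weights.values())

# Illustrative field weights and threshold; real linkage projects calibrate these.
weights = {"first_name": 0.3, "surname": 0.3, "birth_date": 0.25, "place_of_birth": 0.15}

a = {"first_name": "Jon",  "surname": "Smith", "birth_date": "1980-02-13", "place_of_birth": "Leeds"}
b = {"first_name": "John", "surname": "Smyth", "birth_date": "1980-02-13", "place_of_birth": "Leeds"}

score = record_similarity(a, b, weights)
print(round(score, 2), "link" if score >= 0.85 else "review / no link")
# Exact matching would reject this pair outright, despite it plausibly being the same person.
```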

Furthermore, the quality of identifying information in databases containing only indirect identifiers is much lower than usually expected. In practice, error rates in excess of 20% are encountered, with many records containing incomplete or erroneous identifiers….(More)”.

Personal data, public data, privacy & power: GDPR & company data


OpenCorporates: “…there are three other aspects that are relevant when talking about access to EU company data.

Cargo-culting GDPR

The first is a tendency to take this complex and subtle legislation that is GDPR and use a poorly understood version of it in other legislation and regulation, even if that regulation is already covered by GDPR. This undermines the GDPR regime, prevents it from working effectively, and should be strongly resisted. In the tech world, such approaches are called ‘cargo-culting’.

Similarly GDPR is often used as an excuse for not releasing company information as open data, even when the same data is being sold to third parties apparently without concerns — if one is covered by GDPR, the other certainly should be.

Widened power asymmetries

The second issue is the unintended consequences of GDPR, specifically the way it increases asymmetries of power and agency. For example, something like the so-called Right To Be Forgotten takes very significant resources to implement, and so actually strengthens the position of the giant tech companies — for such companies, investing millions in large teams to decide who should and should not be given the Right To Be Forgotten is just a relatively small cost of doing business.

Another issue is the growth of a whole new industry dedicated to removing traces of people’s past from the internet (2), which is also increasing the asymmetries of power. The vast majority of people are not directors of companies, or beneficial owners, and it is only the relatively rich and powerful (including politicians and criminals) who can afford lawyers to stifle free speech, or remove parts of their past they would rather not be there, from business failures to associations with criminals.

OpenCorporates, for example, was threatened with a lawsuit from a member of one of the wealthiest families in Europe for reproducing a gazette notice from the Luxembourg official gazette (a publication that contains public notices). We refused to back down, believing we had a good case in law and in the public interest, and the other side gave up. But such so-called SLAPP suits are becoming increasingly common, although, unlike in many US states, there are currently no defences in place in the EU to resist these, despite pressure from civil society to address this….

At the same time, the automatic assumption that all Personally Identifiable Information (PII), someone’s name for example, is private is highly problematic, confusing both citizens and policy makers, and further undermining democracies and fair societies. As an obvious case, it’s critical that we know the names of our elected representatives, and those in positions of power, otherwise we would have an opaque society where decisions are made by nameless individuals with opaque agendas and personal interests — such as a leader awarding a contract to their brother’s company, for example.

As the diagram in the original post illustrates, there is some personally identifiable information that it’s strongly in the public interest to know. Take the director or beneficial owner of a company, for example: of course their details are PII — clearly you need to know their name (and other information too), otherwise what do you actually know about them, or about the company (only that some unnamed individual has been given special protection under law to be shielded from the company’s debts and actions, and yet can benefit from its profits)?

On the other hand, much of the data which is truly about our privacy — the profiles, inferences and scores that companies store on us — is explicitly outside GDPR, if it doesn’t contain PII.

Hopefully, as awareness of these issues increases, we will develop a deeper, more nuanced understanding of privacy, such that case law around the GDPR, and the legislation that succeeds it, begins to rebalance matters and to bring clarity to the ambiguities of the GDPR….(More)”.

Health Data Privacy under the GDPR: Big Data Challenges and Regulatory Responses


Book edited by Maria Tzanou: “The growth of data-collecting goods and services, such as eHealth and mHealth apps, smart watches, mobile fitness and dieting apps, electronic skin and ingestible tech, combined with recent technological developments such as increased data storage capacity, artificial intelligence and smart algorithms, has spawned a big data revolution that has reshaped how we understand and approach health data. Recently the COVID-19 pandemic has foregrounded a variety of data privacy issues. The collection, storage, sharing and analysis of health-related data raises major legal and ethical questions relating to privacy, data protection, profiling, discrimination, surveillance, personal autonomy and dignity.

This book examines health privacy questions in light of the GDPR and the EU’s general data privacy legal framework. The GDPR is a complex and evolving body of law that aims to deal with several technological and societal health data privacy problems, while safeguarding public health interests and addressing its internal gaps and uncertainties. The book answers a diverse range of questions including: What role can the GDPR play in regulating health surveillance and big (health) data analytics? Can it catch up with Internet-age developments? Are the solutions to the challenges posed by big health data to be found in the law? Does the GDPR provide adequate tools and mechanisms to ensure public health objectives and the effective protection of privacy? How does the GDPR deal with data that concern children’s health and academic research?

By analysing a number of diverse questions concerning big health data under the GDPR from a range of perspectives, this book will appeal to those interested in privacy, data protection, big data, health sciences, information technology, the GDPR, EU and human rights law….(More)”.

Data Justice and COVID-19: Global Perspectives


Book edited by Linnet Taylor, Aaron Martin, Gargi Sharma and Shazade Jameson: “In early 2020, as the COVID-19 pandemic swept the world and states of emergency were declared by one country after another, the global technology sector—already equipped with unprecedented wealth, power, and influence—mobilised to seize the opportunity. This collection is an account of what happened next and captures the emergent conflicts and responses around the world. The essays provide a global perspective on the implications of these developments for justice: they make it possible to compare how the intersection of state and corporate power—and the way that power is targeted and exercised—confronts, and invites resistance from, civil society in countries worldwide.

This edited volume captures the technological response to the pandemic in 33 countries, accompanied by nine thematic reflections, and reflects the unfolding of the first wave of the pandemic.

This book can be read as a guide to the landscape of technologies deployed during the pandemic and also be used to compare individual country strategies. It will prove useful as a tool for teaching and learning in various disciplines and as a reference point for activists and analysts interested in issues of data justice.

The essays interrogate these technologies and the political, legal, and regulatory structures that determine how they are applied. In doing so, the book exposes the workings of state technological power to critical assessment and contestation….(More)”

Strengthening Privacy Protections in COVID-19 Mobile Phone–Enhanced Surveillance Programs


RAND report: “Dozens of countries, including the United States, have been using mobile phone tools and data sources for COVID-19 surveillance activities, such as tracking infections and community spread, identifying populated areas at risk, and enforcing quarantine orders. These tools can augment traditional epidemiological interventions, such as contact tracing with technology-based data collection (e.g., automated signaling and record-keeping on mobile phone apps). As the response progresses, other beneficial technologies could include tools that authenticate those with low risk of contagion or that build community trust as stay-at-home orders are lifted.

However, the potential benefits that COVID-19 mobile phone–enhanced public health (“mobile”) surveillance program tools could provide are also accompanied by potential for harm. There are significant risks to citizens from the collection of sensitive data, including personal health, location, and contact data. People whose personal information is being collected might worry about who will receive the data, how those recipients might use the data, how the data might be shared with other entities, and what measures will be taken to safeguard the data from theft or abuse.

The risk of privacy violations can also impact government accountability and public trust. The possibility that one’s privacy will be violated by government officials or technology companies might dissuade citizens from getting tested for COVID-19, downloading public health–oriented mobile phone apps, or sharing symptom or location data. More broadly, real or perceived privacy violations might discourage citizens from believing government messaging or complying with government orders regarding COVID-19.

As U.S. public health agencies consider COVID-19-related mobile surveillance programs, they will need to address privacy concerns to encourage broad uptake and protect against privacy harms. Otherwise, COVID-19 mobile surveillance programs likely will be ineffective and the data collected unrepresentative of the situation on the ground….(More)”.

What privacy preserving techniques make possible: for transport authorities


Blog by Georgina Bourke: “The Mayor of London listed cycling and walking as key population health indicators in the London Health Inequalities Strategy. The pandemic has only amplified the need for people to use cycling as a safer and healthier mode of transport. Yet as the majority of cyclists are white, Black communities are less likely to get the health benefits that cycling provides. Groups like Transport for London (TfL) should monitor how different communities cycle and who is excluded. Organisations like the London Office of Technology and Innovation (LOTI) could help boroughs procure privacy preserving technology to help their efforts.

But at the moment, it’s difficult for public organisations to access mobility data held by private companies. One reason is that mobility data is sensitive. Even if you remove identifiers like name and address, there’s still a risk you can reidentify someone by linking different data sets together. This means you could track how an individual moved around a city. I wrote more about the privacy risks with mobility data in a previous blog post. The industry’s awareness of privacy issues in using and sharing mobility data is rising. In the case of the Los Angeles Department of Transportation’s (LADOT) Mobility Data Specification, Uber is concerned about sharing anonymised data because of the privacy risk. Both organisations are now involved in a legal battle to see which has the rights to the data. This might have been avoided if Uber had applied privacy preserving techniques….

Privacy preserving techniques can help mobility providers share important insights with authorities without compromising peoples’ privacy.

Instead of requiring access to all customer trip data, authorities could ask specific questions like: where are the least popular places to cycle? If mobility providers apply techniques like randomised response, an individual’s identity is obscured by the noise added to the data. This means it’s highly unlikely that someone could be reidentified later on. And because this technique requires authorities to ask very specific questions – for randomised response to work, the answer has to be binary, i.e. yes or no – authorities will also be practicing data minimisation by default.
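
As a minimal sketch of the classic randomised response design alluded to here (the coin-flip parameter and figures below are illustrative, not from TfL or any provider): each respondent answers truthfully with some probability and at random otherwise, so no single answer is incriminating, yet the aggregate rate can still be recovered.

```python
import random

def randomised_response(true_answer: bool, p_truth: float = 0.5) -> bool:
    """With probability p_truth answer truthfully; otherwise answer yes/no at random."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(responses: list, p_truth: float = 0.5) -> float:
    """Invert observed = p_truth * true + (1 - p_truth) * 0.5 to recover the true rate."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
# Suppose 30% of 10,000 cyclists would truthfully answer "yes" to a binary question
# such as "Did you ride through area X this week?" (figures are invented).
truth = [random.random() < 0.30 for _ in range(10_000)]
noisy = [randomised_response(t) for t in truth]
print(round(estimate_true_rate(noisy), 3))  # close to 0.30, yet no single answer is trustworthy
```

The deniability comes at a cost in statistical precision, which is why the technique suits aggregate questions rather than individual-level analysis.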

It’s easy to imagine transport authorities like TfL combining privacy preserved mobility data from multiple mobility providers to compare insights and measure service provision. They could cross reference the privacy preserved bike trip data with demographic data in the local area to learn how different communities cycle. The first step to addressing inequality is being able to measure it….(More)”.

Public perceptions on data sharing: key insights from the UK and the USA


Paper by Saira Ghafur, Jackie Van Dael, Melanie Leis, Ara Darzi, and Aziz Sheikh: “Data science and artificial intelligence (AI) have the potential to transform the delivery of health care. Health care as a sector, with all of the longitudinal data it holds on patients across their lifetimes, is positioned to take advantage of what data science and AI have to offer. The current COVID-19 pandemic has shown the benefits of sharing data globally to permit a data-driven response through rapid data collection, analysis, modelling, and timely reporting.

Despite its obvious advantages, data sharing is a controversial subject, with researchers and members of the public justifiably concerned about how and why health data are shared. The most common concern is privacy; even when data are (pseudo-)anonymised, there remains a risk that a malicious hacker could, using only a few datapoints, re-identify individuals. For many, it is often unclear whether the risks of data sharing outweigh the benefits.
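
A toy illustration of this linkage risk (entirely hypothetical data, not from the paper): joining a pseudonymised health extract to a public register on just a handful of quasi-identifiers can be enough to put names back on records.

```python
# Entirely hypothetical records, invented for the illustration.
anonymised_health_extract = [
    {"postcode": "SW1A 1AA", "birth_year": 1956, "sex": "F", "diagnosis": "type 2 diabetes"},
    {"postcode": "M1 4BT",   "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]
public_register = [
    {"name": "Jane Example", "postcode": "SW1A 1AA", "birth_year": 1956, "sex": "F"},
    {"name": "John Sample",  "postcode": "M1 4BT",   "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(extract, register):
    """Yield (name, diagnosis) pairs where the quasi-identifiers pick out exactly one person."""
    for row in extract:
        matches = [r for r in register if all(r[k] == row[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:
            yield matches[0]["name"], row["diagnosis"]

print(list(reidentify(anonymised_health_extract, public_register)))
```

The fewer people who share a given combination of attributes, the more likely a unique, and therefore identifying, match becomes.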

A series of surveys over recent years indicate that the public holds a range of views about data sharing. Over the past few years, there have been several important data breaches and cyberattacks. This has resulted in patients and the public questioning the safety of their data, including the prospect or risk of their health data being shared with unauthorised third parties.

We surveyed people across the UK and the USA to examine public attitude towards data sharing, data access, and the use of AI in health care. These two countries were chosen as comparators as both are high-income countries that have had substantial national investments in health information technology (IT) with established track records of using data to support health-care planning, delivery, and research. The UK and USA, however, have sharply contrasting models of health-care delivery, making it interesting to observe if these differences affect public attitudes.

Willingness to share anonymised personal health information varied across receiving bodies (figure). The more commercial the purpose of the receiving institution (eg, for an insurance or tech company), the less often respondents were willing to share their anonymised personal health information in both the UK and the USA. Older respondents (≥35 years) in both countries were generally less likely to trust any organisation with their anonymised personal health information than younger respondents (<35 years)…

Despite the benefits of big data and technology in health care, our findings suggest that the rapid development of novel technologies has been received with concern. Growing commodification of patient data has increased awareness of the risks involved in data sharing. There is a need for public standards that secure regulation and transparency of data use and sharing and support patient understanding of how data are used and for what purposes….(More)”.