A Closer Look at Location Data: Privacy and Pandemics


Assessment by Stacey Gray: “In light of COVID-19, there is heightened global interest in harnessing location data held by major tech companies to track individuals affected by the virus, better understand the effectiveness of social distancing, or send alerts to individuals who might be affected based on their previous proximity to known cases. Governments around the world are considering whether and how to use mobile location data to help contain the virus: Israel’s government passed emergency regulations to address the crisis using cell phone location data; the European Commission requested that mobile carriers provide anonymized and aggregate mobile location data; and South Korea has created a publicly available map of location data from individuals who have tested positive. 

Public health agencies and epidemiologists have long been interested in analyzing device location data to track diseases. In general, the movement of devices effectively mirrors movement of people (with some exceptions discussed below). However, its use comes with a range of ethical and privacy concerns. 

In order to help policymakers address these concerns, we provide below a brief explainer guide of the basics: (1) what is location data, (2) who holds it, and (3) how is it collected? Finally, we discuss some preliminary ethical and privacy considerations for processing location data. Researchers and agencies should consider: how and in what context location data was collected; why location data is classified as legally “sensitive” in most jurisdictions; the challenges of effective “anonymization”; the representativeness of the location dataset (taking into account potential bias against, and exclusion of, low-income and elderly subpopulations who do not own phones); and the unique importance of purpose limitation, i.e. not re-using location data for other civil or law enforcement purposes after the pandemic is over….(More)”.

Why we need responsible data for children


Andrew Young and Stefaan Verhulst at The Conversation: “…Without question, the increased use of data poses unique risks for and responsibilities to children. While practitioners may have well-intended purposes to leverage data for and about children, the data systems used are often designed with (consenting) adults in mind without a focus on the unique needs and vulnerabilities of children. This can lead to the collection of inaccurate and unreliable data as well as the inappropriate and potentially harmful use of data for and about children….

Research undertaken in the context of the RD4C initiative uncovered the following trends and realities. These issues make clear why we need a dedicated data responsibility approach for children.

  • Today’s children are the first generation growing up at a time of rapid datafication, in which almost all aspects of their lives, both on- and offline, are turned into data points. An entire generation of young people is being datafied, often starting even before birth. Each year’s cohort of children will have more data collected about them over their lifetimes than any cohort born before. The potential uses of such large volumes of data, and their impact on children’s lives, are unpredictable, and the data could potentially be used against them.
  • Children typically do not have full agency to make decisions about their participation in programs or services which may generate and record personal data. Children may also lack the understanding to assess a decision’s purported risks and benefits. Privacy terms and conditions are often barely understood by educated adults, let alone children. As a result, there is a higher duty of care for children’s data.
  • Disaggregating data according to socio-demographic characteristics can improve service delivery and assist with policy development. However, it also creates risks for group privacy. Children can be identified, exposing them to possible harms. Disaggregated data for groups such as child-headed households and children experiencing gender-based violence can put vulnerable communities and children at risk. Data about children’s location itself can be risky, especially if they have some additional vulnerability that could expose them to harm.
  • Mishandling data can cause children and families to lose trust in the institutions that deliver essential services, including vaccines, medicine, and nutrition supplies. Distrust can cause families and children to refuse health, education, child protection, and other public services; for organizations dealing with child well-being, such withdrawals can have severe consequences. This privacy-protective behavior can follow children throughout their lifetimes and potentially exacerbate existing inequities and vulnerabilities.
  • As volumes of collected and stored data increase, obligations and protections traditionally put in place for children may be difficult or impossible to uphold. The interests of children are not always prioritized when organizations define their legitimate interest to access or share personal information of children. The immediate benefit of a service provided does not always justify the risk or harm that might be caused by it in the future. Data analysis may be undertaken by people who do not have expertise in the area of child rights, as opposed to traditional research where practitioners are specifically educated in child subject research. Similarly, service providers collecting children’s data are not always specially trained to handle it, as international standards recommend.
  • Recent events around the world reveal the promise and pitfalls of algorithmic decision-making. While it can expedite certain processes, algorithms and their inferences can possess biases that can have adverse effects on people, for example those seeking medical care and attempting to secure jobs. The danger posed by algorithmic bias is especially pronounced for children and other vulnerable populations. These groups often lack the awareness or resources necessary to respond to instances of bias or to rectify any misconceptions or inaccuracies in their data.
  • Many of the children served by child welfare organizations have suffered trauma. Whether that trauma is physical, social, or emotional in nature, repeatedly making children register for services or provide confidential personal information can amount to revictimization, re-exposing them to traumas or instigating unwarranted feelings of shame and guilt.

These trends and realities make clear the need for new approaches for maximizing the value of data to improve children’s lives, while mitigating the risks posed by our increasingly datafied society….(More)”.

Location Surveillance to Counter COVID-19: Efficacy Is What Matters


Susan Landau at Lawfare: “…Some government officials believe that the location information that phones can provide will be useful in the current crisis. After all, if cellphone location information can be used to track terrorists and discover who robbed a bank, perhaps it can be used to determine whether you rubbed shoulders yesterday with someone who today was diagnosed as having COVID-19, the respiratory disease that the novel coronavirus causes. But such thinking ignores the reality of how phone-tracking technology works.

Let’s look at the details of what we can glean from cellphone location information. Cell towers track which phones are in their locale—but that is a very rough measure, useful perhaps for tracking bank robbers, but not for the six-foot proximity one wants in order to determine who might have been infected by the coronavirus.

Finer precision comes from GPS signals, but these can only work outside. That means the location information supplied by your phone—if your phone and that of another person are both on—can tell you if you both went into the same subway stop around the same time. But it won’t tell you whether you rode the same subway car. And the location information from your phone isn’t fully precise. So not only can’t it reveal if, for example, you were in the same aisle in the supermarket as the ill person, but sometimes it will make errors about whether you made it into the store, as opposed to just sitting on a bench outside. What’s more, many people won’t have the location information available because GPS drains the battery, so they’ll shut it off when they’re not using it. Their phones don’t have the location information—and neither do the providers, at least not at the granularity to determine coronavirus exposure.
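As a back-of-the-envelope illustration of why GPS granularity falls short of the six-foot test, the standard haversine formula gives the ground distance between two reported fixes. The coordinates and the ~5-meter error figure below are illustrative assumptions, not measurements from any particular device:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Two reported fixes that are actually about 2 meters apart.
d = haversine_m(40.7128, -74.0060, 40.712818, -74.0060)

# A consumer GPS fix is often accurate only to roughly 5 m (~16 feet),
# so the error radius alone exceeds the 6-foot (~1.8 m) threshold:
# two phones "2 m apart" in the data may have been 10 m apart in reality.
GPS_ERROR_M = 5.0
SIX_FEET_M = 1.83
within_six_feet_is_decidable = GPS_ERROR_M <= SIX_FEET_M  # False
```

The point of the sketch is that even perfect arithmetic on the reported coordinates cannot resolve distances smaller than the positioning error itself.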

GPS is not the only way that cellphones can collect location information. Various other ways exist, including through the WiFi network to which a phone is connected. But while two individuals using the same WiFi network are likely to be close together inside a building, the WiFi data would typically not be able to determine whether they were in that important six-foot proximity range.

Other devices can also get within that range, including Bluetooth beacons. These are used within stores, seeking to determine precisely what people are—and aren’t—buying; they track people’s locations indoors within inches. But like WiFi, they’re not ubiquitous, so their ability to track exposure will be limited.
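For context on how beacons achieve that indoor precision, proximity is typically inferred from received signal strength (RSSI) using a log-distance path-loss model. The constants below are illustrative textbook defaults; real deployments calibrate them per device and per environment:

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance estimate from a Bluetooth beacon's signal strength.

    tx_power_dbm is the calibrated RSSI measured at 1 m from the beacon;
    path_loss_exponent is ~2.0 in free space and higher indoors.
    These defaults are illustrative assumptions, not calibrated values.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# At the 1 m reference power the estimate is 1 m; 20 dB weaker reads as ~10 m.
near = estimate_distance_m(-59)   # 1.0
far = estimate_distance_m(-79)    # 10.0
```

Because indoor reflections and body shielding distort RSSI, even this short-range method yields estimates, not guarantees, of six-foot proximity.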

If the apps lead to the government’s dogging people’s whereabouts at work, at school, in the supermarket, and at church, will people still be willing to download the tracking apps that get them discounts when they’re passing the beer aisle? China follows this kind of surveillance model, but such a surveillance-state solution is highly unlikely to be acceptable in the United States. Yet anything less is unlikely to pinpoint individuals exposed to the virus.

South Korea took a different route. To track coronavirus exposure precisely, the country used additional digital records, including documentation of medical and pharmacy visits, histories of credit card transactions, and CCTV videos, to determine where potentially exposed people had been, then followed up with interviews not just of infected people but also of their acquaintances, to determine where they had traveled.

Validating such records is labor intensive. And for the United States, it may not be the best use of resources at this time. There’s an even more critical reason that the Korean solution won’t work for the U.S.: South Korea was able to test exposed people. The U.S. can’t do this. Currently the country has a critical shortage of test kits; patients who are not sufficiently ill as to be hospitalized are not being tested. The shortage of test kits is sufficiently acute that in New York City, the current epicenter of the pandemic, the rule is, “unless you are hospitalized and a diagnosis will impact your care, you will not be tested.” With this in mind, moving to the South Korean model of tracking potentially exposed individuals won’t change the advice from federal and state governments that everyone should engage in social distancing—but employing such tracking would divert government resources and thus be counterproductive.

Currently, phone tracking in the United States is not efficacious. It cannot be unless all people are required to carry such location-tracking devices at all times; have location tracking on; and other forms of information tracking, including much wider use of CCTV cameras, Bluetooth beacons, and the like, are also in use. There are societies like this. But so far, even in the current crisis, no one is seriously contemplating the U.S. heading in that direction….(More)”.

Cellphone tracking could help stem the spread of coronavirus. Is privacy the price?


Kelly Servick at Science: “…At its simplest, digital contact tracing might work like this: Phones log their own locations; when the owner of a phone tests positive for COVID-19, a record of their recent movements is shared with health officials; owners of any other phones that recently came close to that phone get notified of their risk of infection and are advised to self-isolate. But designers of a tracking system will have to work out key details: how to determine the proximity among phones and the health status of users, where that information gets stored, who sees it, and in what format.
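A minimal sketch of the matching step described above might look as follows. The log format, cell identifiers, and 15-minute exposure window are hypothetical choices for illustration, not the design of any deployed system:

```python
EXPOSURE_WINDOW_S = 15 * 60  # co-location within 15 minutes counts as contact

def find_exposed(logs, case_user):
    """Return users whose pings share a location cell with the confirmed
    case within the exposure window.

    logs: iterable of (user, unix_time, cell_id) tuples.
    """
    case_pings = [(t, c) for u, t, c in logs if u == case_user]
    exposed = set()
    for u, t, c in logs:
        if u == case_user:
            continue
        for case_t, case_c in case_pings:
            if c == case_c and abs(t - case_t) <= EXPOSURE_WINDOW_S:
                exposed.add(u)
    return exposed

logs = [
    ("case",  1000, "subway-14th"),
    ("alice", 1300, "subway-14th"),  # same stop, 5 min later -> exposed
    ("bob",   5000, "subway-14th"),  # same stop, 66 min later -> not
    ("carol", 1100, "park-west"),    # different place -> not
]
find_exposed(logs, "case")  # {"alice"}
```

Every design question the article raises, how proximity is determined, where the logs live, and who may call a function like this, sits outside the sketch itself.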

Digital contact tracing systems are already running in several countries, but details are scarce and privacy concerns abound. Protests greeted Israeli Prime Minister Benjamin Netanyahu’s rollout this week of a surveillance program that uses the country’s domestic security agency to track the locations of people potentially infected with the virus. South Korea has released detailed information on infected individuals—including their recent movements—viewable through multiple private apps that send alerts to users in their vicinity. “They’re essentially texting people, saying, ‘Hey, there’s been a 60-year-old woman who’s positive for COVID. Click this for more information about her path,’” says Anne Liu, a global health expert at Columbia University. She warns that the South Korean approach risks unmasking and stigmatizing infected people and the businesses they frequent.

But digital tracking is probably “identifying more contacts than you would with traditional methods,” Liu says. A contact-tracing app might not have much impact in a city where a high volume of coronavirus cases and extensive community transmission has already shuttered businesses and forced citizens inside, she adds. But it could be powerful in areas, such as in sub-Saharan Africa, that are at an earlier stage of the outbreak, and where isolating potential cases could avert the need to shut down all schools and businesses. “If you can package this type of information in a way that protects individual privacy as best you can, it can be something positive,” she says.

Navigating privacy laws

In countries with strict data privacy laws, one option for collecting data is to ask telecommunications and other tech companies to share anonymous, aggregated information they’ve already gathered. Laws in the United States and the European Union are very specific about how app and device users must consent to the use of their data—and how much information companies must disclose about how those data will be used, stored, and shared. Working within those constraints, mobile carriers in Germany and Italy have started to share cellphone location data with health officials in an aggregated, anonymized format. Even though individual users aren’t identified, the data could reveal general trends about where and when people are congregating and risk spreading infection.

Google and Facebook are both in discussions with the U.S. government about sharing anonymized location data, The Washington Post reported this week. U.S. companies have to deal with a patchwork of state and federal privacy regulations, says Melissa Krasnow, a privacy and data security partner at VLP Law Group. App and devicemakers could face user lawsuits for sharing data in a way that wasn’t originally specified in their terms of service—unless federal or local officials pass legislation that would free them from liability. “Now you’ve got a global pandemic, so you would think that [you] would be able to use this information for the global good, but you can’t,” Krasnow says. “There’s expectations about privacy.”

Another option is to start fresh with a coronavirus-specific app that asks users to voluntarily share their location and health data. For example, a basic symptom-checking app could do more than just keep people who don’t need urgent care out of overstretched emergency rooms, says Samuel Scarpino, an epidemiologist at Northeastern University. Health researchers could also use location data from the app to estimate the size of an outbreak. “That could be done, I think, without risking being evil,” he says.

For Scarpino, the calculus changes if governments want to track the movements of a specific person who has coronavirus relative to the paths of other people, as China and South Korea have apparently done. That kind of tracking “could easily swing towards a privacy violation that isn’t justified by the potential public health benefit,” he says….(More)”.

Privacy and Pandemics


Emily Benson at the Bertelsmann Foundation: “In bucolic China, a child has braved cold temperatures for some fresh outdoor air. Overhead, a drone hovers. Its loudspeaker, a haunting combination of human direction in the machine age, chides him for being outdoors. “Hey kid! We’re in unusual times… The coronavirus is very serious… run!!” it barks. “Staying at home is contributing to society.”

The ferocious spread of COVID-19 in 2020 has revealed stark policy differences among governments. The type of actions and degrees of severity with which governments have responded varies widely, but one pressing issue the crisis raises is how COVID-19 will affect civil liberties in the digital age.

The Chinese Approach

Images of riot gear with heat-sensing cameras and temperature gun checks in metro stations have been plastered in the news since the beginning of 2020, when the Chinese government undertook drastic measures to contain the spread of COVID-19. The government quickly set about enacting strict restraints on society that dictated where people went and what they could do.

In China, Alipay, an Alibaba-affiliated payments platform roughly equivalent to PayPal, joined forces with Ant Financial to launch Alipay Health Code, a smartphone application. It rates individuals’ health in green, yellow, and red, ultimately determining where citizens can and cannot go. The government has since mandated that citizens use this software, despite inaccuracies in temperature-reading technology that have led to the confinement of otherwise healthy individuals. It also remains unclear how this data will be used going forward: whether it will be stored indefinitely or used to augment civilians’ social scores. As the New York Times noted, this Chinese gathering of data would be akin to the Centers for Disease Control and Prevention (CDC) using data from Amazon, Facebook, and Google to track citizens and then share that data with law enforcement, something that no longer seems so far-fetched.

An Evolving EU

The European Union is home to what is arguably the most progressive privacy regime in the world. In May 2018, the EU implemented the General Data Protection Regulation (GDPR). While processing personal data is generally permitted in cases in which individuals have provided explicit consent to the use of their data, several exceptions to these data-processing restrictions are proving problematic in the time of COVID-19. For example, GDPR Article 9 provides an exception for public interest, permitting the processing of personal data when it is necessary for reasons of substantial public interest, and on the basis of Union or Member State law which must be proportionate to the aim pursued…(More)”.

Statement of the EDPB Chair on the processing of personal data in the context of the COVID-19 outbreak


European Data Protection Board: “Governments, public and private organisations throughout Europe are taking measures to contain and mitigate COVID-19. This can involve the processing of different types of personal data.  

Andrea Jelinek, Chair of the European Data Protection Board (EDPB), said: “Data protection rules (such as GDPR) do not hinder measures taken in the fight against the coronavirus pandemic. However, I would like to underline that, even in these exceptional times, the data controller must ensure the protection of the personal data of the data subjects. Therefore, a number of considerations should be taken into account to guarantee the lawful processing of personal data.”

The GDPR is a broad piece of legislation and also provides for the rules that apply to the processing of personal data in a context such as the one relating to COVID-19. Indeed, the GDPR provides the legal grounds to enable employers and the competent public health authorities to process personal data in the context of epidemics, without the need to obtain the consent of the data subject. This applies, for instance, when the processing of personal data is necessary for employers for reasons of public interest in the area of public health, or to protect vital interests (Art. 6 and 9 of the GDPR), or to comply with another legal obligation.

For the processing of electronic communication data, such as mobile location data, additional rules apply. The national laws implementing the ePrivacy Directive provide for the principle that location data can only be used by the operator when it is made anonymous, or with the consent of the individuals. Public authorities should first aim to process location data in an anonymous way (i.e. processing data aggregated in a way that cannot be reversed to personal data). This could make it possible to generate reports on the concentration of mobile devices at a certain location (“cartography”).
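One way to sketch such aggregate “cartography” reporting is to count distinct devices per area and time slot and suppress small cells so counts cannot be traced back to individuals. The threshold and data layout below are illustrative assumptions, not the procedure any operator actually uses:

```python
from collections import Counter

K_THRESHOLD = 10  # suppress cells with fewer devices to resist re-identification

def cartography(pings, k=K_THRESHOLD):
    """Aggregate (device_id, area, hour) pings into per-area, per-hour
    device counts, dropping any cell with fewer than k devices."""
    seen = {(d, a, h) for d, a, h in pings}        # count each device once per cell
    counts = Counter((a, h) for _, a, h in seen)   # device ids discarded here
    return {cell: n for cell, n in counts.items() if n >= k}

pings = [(f"dev{i}", "old-town", 9) for i in range(12)] + [("dev99", "harbour", 9)]
cartography(pings)  # {("old-town", 9): 12} — the lone harbour device is suppressed
```

Suppressing small cells is only a first step; genuinely irreversible anonymization of location traces is harder than this sketch suggests, as the explainer at the top of this digest notes.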

When it is not possible to only process anonymous data, Art. 15 of the ePrivacy Directive enables the Member States to introduce legislative measures pursuing national security and public security. This emergency legislation is possible under the condition that it constitutes a necessary, appropriate and proportionate measure within a democratic society. If such measures are introduced, a Member State is obliged to put in place adequate safeguards, such as granting individuals the right to judicial remedy….(More)”.

CARE Principles for Indigenous Data Governance


The Global Indigenous Data Alliance: “The current movement toward open data and open science does not fully engage with Indigenous Peoples’ rights and interests. Existing principles within the open data movement (e.g. FAIR: findable, accessible, interoperable, reusable) primarily focus on characteristics of data that will facilitate increased data sharing among entities while ignoring power differentials and historical contexts. The emphasis on greater data sharing alone creates a tension for Indigenous Peoples who are also asserting greater control over the application and use of Indigenous data and Indigenous Knowledge for collective benefit.

This includes the right to create value from Indigenous data in ways that are grounded in Indigenous worldviews and realise opportunities within the knowledge economy. The CARE Principles for Indigenous Data Governance are people and purpose-oriented, reflecting the crucial role of data in advancing Indigenous innovation and self-determination. These principles complement the existing FAIR principles encouraging open and other data movements to consider both people and purpose in their advocacy and pursuits….(More)”.

Personal privacy matters during a pandemic — but less than it might at other times


Nicole Wetsman at the Verge: “…The balance between protecting individual privacy and collecting information that is critical to the public good changes over the course of a disease’s spread. The amount of data public health officials need to collect and disclose changes as well. Right now, the COVID-19 pandemic is accelerating, and there is still a lot doctors and scientists don’t know about the disease. Collecting detailed health information is, therefore, more useful and important. That could change as the outbreak progresses, Lee says.

For example, as the virus starts to circulate in the community, it might not be as important to know exactly where a sick person has been. If the virus is everywhere already, that information won’t have as much additional benefit to the community. “It depends a lot on the maturity of an epidemic,” she says.

Digital tracking information is ubiquitous today, and that can make data collection easier. In Singapore, where there’s extensive surveillance, publicly available data details where people with confirmed cases of COVID-19 are and have been. The Iranian government built an app for people to check their symptoms that also included a geo-tracking feature. When deciding to use those types of tools, Lee says, the same public health principles should still apply.

“Should a public health official know where a person has gone, should that be public information — it’s not different. It’s a lot easier to do that now, but it doesn’t make it any more right or less right,” she says. “Tracking where people go and who they interact with is something public health officials have been doing for centuries. It’s just easier with digital information.”

In addition, just because personal information about a person and their health is important to a public health official, it doesn’t mean that information is important for the general public. It’s why, despite questioning from reporters, public health officials only gave out a limited amount of information on the people who had the first few cases of COVID-19 in the US…

Health officials worry about the stigmatization of individuals or communities affected by diseases, which is why they aim to disclose only necessary information to the public. Anti-Asian racism in the US and other countries around the world spiked with the outbreak because the novel coronavirus originated in China. People who were on cruise ships with positive cases reported fielding angry phone calls from strangers when they returned home, and residents of New Rochelle, New York, the site of the first containment zone in the US, said that they’re worried about their hometown being forever associated with the virus.

“This kind of group-level harm is concerning,” Lee says. “That’s why we worry about group identity privacy, as well. I’m nervous and sad to see that starting to poke its head out.”

People can’t expect the same level of personal health privacy during public health emergencies involving infectious diseases as they can in other elements of their health. But the actions public health officials can take, like collecting information, aren’t designed to limit privacy, Fairchild says. “It’s to protect the broader population. The principle we embrace is the principle of reciprocity. We recognize that our liberty is limited, but we are doing that for others.”…(More)”.

COVID-19 response and data protection law in the EU and US


Article by Cathy Cosgrove: “Managing the COVID-19 outbreak and stopping its spread is now a global challenge. In addition to the significant health and medical responses underway around the world, governments and public health officials are focused on how to monitor, understand and prevent the spread of the virus. Data protection and privacy laws, including the EU General Data Protection Regulation and various U.S. laws, are informing these responses.

One major response to limiting the spread of infection is contact tracing, which is the practice of identifying and monitoring anyone who may have come into contact with an infected person. Employers and educational institutions are also imposing travel restrictions, instituting self-quarantine policies, limiting visitors, and considering whether to require medical examinations. These responses necessarily involve obtaining and potentially sharing personal information, including data about an individual’s health, travel, personal contacts, and employment. For example, in the U.S., the Centers for Disease Control and Prevention has asked airlines for the name, date of birth, address, phone number and email address for passengers on certain flights. 

As IAPP Editorial Director Jedidiah Bracy, CIPP, explored in his piece on balancing personal privacy with public interest last week, this collection and processing of personal data is creating substantial discussion about what data protection limitations may be required or appropriate. Even China — which is using AI and big data to manage the outbreak — has issued guidance recognizing the need to limit the collection of data and its use during this public health crisis….(More)”.

Is Your Data Being Collected? These Signs Will Tell You Where


Flavie Halais at Wired: “Alphabet’s Sidewalk Labs is testing icons that provide “digital transparency” when information is collected in public spaces….

As cities incorporate digital technologies into their landscapes, they face the challenge of informing people of the many sensors, cameras, and other smart technologies that surround them. Few people have the patience to read through the lengthy privacy notice on a website or smartphone app. So how can a city let them know how they’re being monitored?

Sidewalk Labs, the Google sister company that applies technology to urban problems, is taking a shot. Through a project called Digital Transparency in the Public Realm, or DTPR, the company is demonstrating a set of icons, to be displayed in public spaces, that shows where and what kinds of data are being collected. The icons are being tested as part of Sidewalk Labs’ flagship project in Toronto, where it plans to redevelop a 12-acre stretch of the city’s waterfront. The signs would be displayed at each location where data would be collected—streets, parks, businesses, and courtyards.

Data collection is a core feature of the project, called Sidewalk Toronto, and the source of much of the controversy surrounding it. In 2017, Waterfront Toronto, the organization in charge of administering the redevelopment of the city’s eastern waterfront, awarded Sidewalk Labs the contract to develop the waterfront site. The project has ambitious goals: It says it could create 44,000 direct jobs by 2040 and has the potential to be the largest “climate-positive” community—removing more CO2 from the atmosphere than it produces—in North America. It will make use of new urban technology like modular street pavers and underground freight delivery. Sensors, cameras, and Wi-Fi hotspots will monitor and control traffic flows, building temperature, and crosswalk signals.

All that monitoring raises inevitable concerns about privacy, which Sidewalk aims to address—at least partly—by posting signs in the places where data is being collected.

The signs display a set of icons in the form of stackable hexagons, derived in part from a set of design rules developed by Google in 2014. Some describe the purpose for collecting the data (mobility, energy efficiency, or waste management, for example). Others refer to the type of data that’s collected, such as photos, air quality, or sound. When the data is identifiable, meaning it can be associated with a person, the hexagon is yellow. When the information is stripped of personal identifiers, the hexagon is blue…(More)”.
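The color rule described above can be captured in a small data structure; the field names here are illustrative, not Sidewalk Labs’ actual DTPR schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataCollectionSign:
    purpose: str        # e.g. "mobility", "energy efficiency", "waste management"
    data_type: str      # e.g. "photos", "air quality", "sound"
    identifiable: bool  # can the data be associated with a person?

    @property
    def colour(self) -> str:
        # Yellow hexagon when data is identifiable, blue when de-identified.
        return "yellow" if self.identifiable else "blue"

crosswalk_camera = DataCollectionSign("mobility", "photos", identifiable=True)
crosswalk_camera.colour  # "yellow"
```

Encoding the signage as structured records like this is also what would let a companion app or registry list every collection point machine-readably, which is the broader transparency goal of DTPR.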