Are Requirements to Deposit Data in Research Repositories Compatible With the European Union’s General Data Protection Regulation?


Paper by Deborah Mascalzoni et al.: “To reproduce study findings and facilitate new discoveries, many funding bodies, publishers, and professional communities are encouraging—and increasingly requiring—investigators to deposit their data, including individual-level health information, in research repositories. For example, in some cases the National Institutes of Health (NIH) and the editors of some Springer Nature journals require investigators to deposit individual-level health data in a publicly accessible repository (12). However, this requirement may conflict with the core privacy principles of the European Union (EU) General Data Protection Regulation 2016/679 (GDPR), which focuses on the rights of individuals as well as researchers’ obligations regarding transparency and accountability.

The GDPR establishes legally binding rules for processing personal data in the EU, as well as outside the EU in some cases. Researchers in the EU, and often their global collaborators, must comply with the regulation. Health and genetic data are considered special categories of personal data and are subject to relatively stringent rules for processing….(More)”.

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The Census Bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
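The linkage risk Tett describes can be made concrete with a toy example (a minimal sketch; every record, field, and name below is invented for illustration): stripping names from a release is not enough when quasi-identifiers such as ZIP code and birth year also appear in a second, public dataset.

```python
# "Anonymized" health release: names stripped, but quasi-identifiers remain.
# (All records, fields, and names here are invented for illustration.)
anonymized_release = [
    {"zip": "02139", "birth_year": 1960, "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1985, "diagnosis": "asthma"},
]

# A second, publicly available dataset (think voter rolls or a marketing list).
public_records = [
    {"name": "J. Doe", "zip": "02139", "birth_year": 1960},
    {"name": "A. Smith", "zip": "02141", "birth_year": 1985},
]

def reidentify(release, public):
    """Join the two datasets on quasi-identifiers; a unique match
    attaches a name to a supposedly anonymous record."""
    hits = []
    for row in release:
        matches = [p for p in public
                   if p["zip"] == row["zip"]
                   and p["birth_year"] == row["birth_year"]]
        if len(matches) == 1:  # unique match pins the record to one person
            hits.append((matches[0]["name"], row["diagnosis"]))
    return hits
```

Household swapping, as the bureau has used since 1990, injects uncertainty into exactly these quasi-identifiers; the article's point is that the growing number of commercial datasets available for such joins has eroded that defence.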

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy”, which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high-stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
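The noise-injection idea can be sketched with the classic Laplace mechanism (a minimal illustration, not the Census Bureau's actual system; the epsilon value and the counting query are hypothetical):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5                  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)
```

The smaller epsilon is, the more noise is added: stronger privacy but fuzzier statistics, which is precisely the "statistical fog" trade-off the article describes.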

Tomorrow’s Data Heroes


Article by Florian Gröne, Pierre Péladeau, and Rawia Abdel Samad: “Telecom companies are struggling to find a profitable identity in today’s digital sphere. What about helping customers control their information?…

By 2025, Alex had had enough. There no longer seemed to be any distinction between her analog and digital lives. Everywhere she went, every purchase she completed, and just about every move she made, from exercising at the gym to idly surfing the Web, triggered a vast flow of data. That in turn meant she was bombarded with personalized advertising messages, targeted more and more eerily to her. As she walked down the street, messages appeared on her phone about the stores she was passing. Ads popped up on her all-purpose tablet–computer–phone pushing drugs for minor health problems she didn’t know she had — until the symptoms appeared the next day. Worse, she had recently learned that she was being reassigned at work. An AI machine had mastered her current job by analyzing her use of the firm’s productivity software.

It was as if the algorithms of global companies knew more about her than she knew herself — and they probably did. How was it that her every action and conversation, even her thoughts, added to the store of data held about her? After all, it was her data: her preferences, dislikes, interests, friendships, consumer choices, activities, and whereabouts — her very identity — that was being collected, analyzed, profited from, and even used to manage her. All these companies seemed to be making money buying and selling this information. Why shouldn’t she gain some control over the data she generated, and maybe earn some cash by selling it to the companies that had long collected it free of charge?

So Alex signed up for the “personal data manager,” a new service that promised to give her control over her privacy and identity. It was offered by her U.S.-based connectivity company (in this article, we’ll call it DigiLife, but it could be one of many former telephone companies providing Internet services in 2025). During the previous few years, DigiLife had transformed itself into a connectivity hub: a platform that made it easier for customers to join, manage, and track interactions with media and software entities across the online world. Thanks to recently passed laws regarding digital identity and data management, including the “right to be forgotten,” the DigiLife data manager was more than window dressing. It laid out easy-to-follow choices that all Web-based service providers were required by law to honor….

Today, in 2019, personal data management applications like the one Alex used exist only in nascent form, and consumers have yet to demonstrate that they trust these services. Nor can they yet profit by selling their data. But the need is great, and so is the opportunity for companies that fulfill it. By 2025, the total value of the data economy as currently structured will rise to more than US$400 billion, and by monetizing the vast amounts of data they produce, consumers can potentially recapture as much as a quarter of that total.

Given the critical role of telecom operating companies within the digital economy — the central position of their data networks, their networking capabilities, their customer relationships, and their experience in government affairs — they are in a good position to seize this business opportunity. They might not do it alone; they are likely to form consortia with software companies or other digital partners. Nonetheless, for legacy connectivity companies, providing this type of service may be the most sustainable business option. It may also be the best option for the rest of us, as we try to maintain control in a digital world flooded with our personal data….(More)”.

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice


Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.

In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism and mechanisms for the public to examine and reject such systems are imperative….(More)”.

Hundreds of Bounty Hunters Had Access to AT&T, T-Mobile, and Sprint Customer Location Data for Years


Joseph Cox at Motherboard: “In January, Motherboard revealed that AT&T, T-Mobile, and Sprint were selling their customers’ real-time location data, which trickled down through a complex network of companies until eventually ending up in the hands of at least one bounty hunter. Motherboard was also able to purchase the real-time location of a T-Mobile phone on the black market from a bounty hunter source for $300. In response, telecom companies said that this abuse was a fringe case.

In reality, it was far from an isolated incident.

Around 250 bounty hunters and related businesses had access to AT&T, T-Mobile, and Sprint customer location data, with one bail bond firm using the phone location service more than 18,000 times, and others using it thousands or tens of thousands of times, according to internal documents obtained by Motherboard from a company called CerCareOne, a now-defunct location data seller that operated until 2017. The documents list not only the companies that had access to the data, but specific phone numbers that were pinged by those companies.

In some cases, the data sold is more sensitive than that offered by the service used by Motherboard last month, which estimated a location based on the cell phone towers that a phone connected to. CerCareOne sold cell phone tower data, but also sold highly sensitive and accurate GPS data to bounty hunters; an unprecedented move that means users could locate someone so accurately as to see where they are inside a building. This company operated in near-total secrecy for over five years by making its customers agree to “keep the existence of CerCareOne.com confidential,” according to a terms of use document obtained by Motherboard.

Some of these bounty hunters then resold location data to those unauthorized to handle it, according to two independent sources familiar with CerCareOne’s operations.

The news shows how widely available Americans’ sensitive location data was to bounty hunters. This ease of access dramatically increased the risk of abuse….(More)”.

Using Personal Informatics Data in Collaboration among People with Different Expertise


Dissertation by Chia-Fang Chung: “Many people collect and analyze data about themselves to improve their health and wellbeing. With the prevalence of smartphones and wearable sensors, people are able to collect detailed and complex data about their everyday behaviors, such as diet, exercise, and sleep. This everyday behavioral data can support individual health goals, help manage health conditions, and complement traditional medical examinations conducted in clinical visits. However, people often need support to interpret this self-tracked data. For example, many people share their data with health experts, hoping to use this data to support more personalized diagnosis and recommendations as well as to receive emotional support. However, when attempting to use this data in collaborations, people and their health experts often struggle to make sense of the data. My dissertation examines how to support collaborations between individuals and health experts using personal informatics data.

My research builds an empirical understanding of individual and collaboration goals around using personal informatics data, current practices of using this data to support collaboration, and challenges and expectations for integrating the use of this data into clinical workflows. These understandings help designers and researchers advance the design of personal informatics systems as well as the theoretical understandings of patient-provider collaboration.

Based on my formative work, I propose design and theoretical considerations regarding interactions between individuals and health experts mediated by personal informatics data. System designers and personal informatics researchers need to consider collaborations that occur throughout the personal tracking process. Patient-provider collaboration might influence individual decisions to track and to review, and systems supporting this collaboration need to consider individual and collaborative goals as well as support communication around these goals. Designers and researchers should also attend to individual privacy needs when personal informatics data is shared and used across different healthcare contexts. With these design guidelines in mind, I design and develop Foodprint, a photo-based food diary and visualization system. I also conduct field evaluations to understand the use of lightweight data collection and integration to support collaboration around personal informatics data. Findings from these field deployments indicate that photo-based visualizations allow both participants and health experts to easily understand eating patterns relevant to individual health goals. Participants and health experts can then focus on individual health goals and questions, exchange knowledge to support individualized diagnoses and recommendations, and develop actionable and feasible plans to accommodate individual routines….(More)”.

Privacy concerns collide with the public interest in data


Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued this was designed to gather better figures for the public interest. However, it tipped the banks into a legal quandary. Under Canadian law (as in most western countries) companies are required to help StatsCan by supplying operating information. But data privacy laws in Canada also say that individual bank records are confidential. When the StatsCan request leaked out, it sparked an outcry — forcing the agency to freeze its plans. “It’s a mess,” a senior Canadian banker says, adding that the laws “seem contradictory”.

Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.

Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing statistics, because official statistics struggle to capture how the economy is changing. Take Canada. StatsCan has hitherto tracked household consumption by following retail sales statistics, supplemented by telephone surveys. But consumers are becoming less willing to answer their phones, which undermines the accuracy of surveys, and consumption of digital services cannot be easily tracked. ...

But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide anonymised data on education and employment to municipal and city bodies in America and beyond, to help them track local trends; the group says this is in the public interest for policy purposes, as “it offers a different perspective” than official data sources. But it is one thing for LinkedIn to offer anonymised data when customers have signed consent forms permitting the transfer of data; it is quite another for banks (or other companies) that have operated with strict privacy rules. If nothing else, the StatsCan saga shows there urgently needs to be more public debate, and more clarity, around these rules. Consumer privacy issues matter (a lot). But as corporate data mountains grow, we will need to ask whether we want to live in a world where Amazon and Google — and Mastercard and JPMorgan — know more about economic trends than central banks or finance ministries. Personally, I would say “no”. But sooner or later politicians will need to decide on their priorities in this brave new Big Data world; the issue cannot be simply left to the half-hidden statisticians….(More)”.

Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones


Ava Kofman at the Intercept: “Most of the data collected by urban planners is messy, complex, and difficult to represent. It looks nothing like the smooth graphs and clean charts of city life in urban simulator games like “SimCity.” A new initiative from Sidewalk Labs, the city-building subsidiary of Google’s parent company Alphabet, has set out to change that.

The program, known as Replica, offers planning agencies the ability to model an entire city’s patterns of movement. Like “SimCity,” Replica’s “user-friendly” tool deploys statistical simulations to give a comprehensive view of how, when, and where people travel in urban areas. It’s an appealing prospect for planners making critical decisions about transportation and land use. In recent months, transportation authorities in Kansas City, Portland, and the Chicago area have signed up to glean its insights. The only catch: They’re not completely sure where the data is coming from.

Typical urban planners rely on processes like surveys and trip counters that are often time-consuming, labor-intensive, and outdated. Replica, instead, uses real-time mobile location data. As Nick Bowden of Sidewalk Labs has explained, “Replica provides a full set of baseline travel measures that are very difficult to gather and maintain today, including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school).”

To make these measurements, the program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations — creating a synthetic population that faithfully replicates a city’s real-world patterns but that “obscures the real-world travel habits of individual people,” as Bowden told The Intercept.
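The synthetic-population idea can be sketched as simple resampling from aggregate shares (a toy illustration under invented mode shares; Replica's actual models are far more sophisticated and not publicly documented):

```python
import random

# Hypothetical observed trip modes for one district (invented shares:
# 60% car, 25% transit, 10% bike, 5% foot).
observed_modes = ["car"] * 60 + ["transit"] * 25 + ["bike"] * 10 + ["foot"] * 5

def synthetic_population(observed, n, seed=0):
    """Resample n synthetic trips from the empirical mode distribution.

    Aggregate shares are approximately preserved, but no synthetic trip
    corresponds to any real person's travel record.
    """
    rng = random.Random(seed)
    return [rng.choice(observed) for _ in range(n)]
```

A planner querying the synthetic trips sees roughly the same mode split as the real data, which is the property Bowden describes as faithfully replicating real-world patterns while obscuring individual travel habits.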

The program comes at a time of growing unease with how tech companies use and share our personal data — and raises new questions about Google’s encroachment on the physical world….(More)”.

Survey: Majority of Americans Willing to Share Their Most Sensitive Personal Data


Center for Data Innovation: “Most Americans (58 percent) are willing to allow third parties to collect at least some sensitive personal data, according to a new survey from the Center for Data Innovation.

While many surveys measure public opinions on privacy, few ask consumers about their willingness to make tradeoffs, such as sharing certain personal information in exchange for services or benefits they want. In this survey, the Center asked respondents whether they would allow a mobile app to collect their biometrics or location data for purposes such as making it easier to sign into an account or getting free navigational help, and it asked whether they would allow medical researchers to collect sensitive data about their health if it would lead to medical cures for their families or others. Only one-third of respondents (33 percent) were unwilling to let mobile apps collect either their biometrics or location data under any of the described scenarios. And overall, nearly 6 in 10 respondents (58 percent) were willing to let a third party collect at least one piece of sensitive personal data, such as biometric, location, or medical data, in exchange for a service or benefit….(More)”.

Research Handbook on Human Rights and Digital Technology


Book edited by Ben Wagner, Matthias C. Kettemann and Kilian Vieth: “In a digitally connected world, the question of how to respect, protect and implement human rights has become unavoidable. This contemporary Research Handbook offers new insights into well-established debates by framing them in terms of human rights. It examines the issues posed by the management of key Internet resources, the governance of its architecture, the role of different stakeholders, the legitimacy of rule making and rule-enforcement, and the exercise of international public authority over users. Highly interdisciplinary, its contributions draw on law, political science, international relations and even computer science and science and technology studies…(More)”.