On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, relying solely on the individual consent of data contributors to govern biorepository data is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting attention from singular data points to data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Murky Consent: An Approach to the Fictions of Consent in Privacy Law


Paper by Daniel J. Solove: “Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious….(More)”. See also: The Urgent Need to Reimagine Data Consent

The Secret Life of Data


Book by Aram Sinnreich and Jesse Gilbert: “…explore the many unpredictable, and often surprising, ways in which data surveillance, AI, and the constant presence of algorithms impact our culture and society in the age of global networks. The authors build on this basic premise: no matter what form data takes, and what purpose we think it’s being used for, data will always have a secret life. How this data will be used, by other people in other times and places, has profound implications for every aspect of our lives—from our intimate relationships to our professional lives to our political systems.

With the secret uses of data in mind, Sinnreich and Gilbert interview dozens of experts to explore a broad range of scenarios and contexts—from the playful to the profound to the problematic. Unlike most books about data and society that focus on the short-term effects of our immense data usage, The Secret Life of Data focuses primarily on the long-term consequences of humanity’s recent rush toward digitizing, storing, and analyzing every piece of data about ourselves and the world we live in. The authors advocate for “slow fixes” regarding our relationship to data, such as creating new laws and regulations, ethics and aesthetics, and models of production for our datafied society.

Cutting through the hype and hopelessness that so often inform discussions of data and society, The Secret Life of Data clearly and straightforwardly demonstrates how readers can play an active part in shaping how digital technology influences their lives and the world at large…(More)”.

The CFPB wants to rein in data brokers


Article by Gaby Del Valle: “The Consumer Financial Protection Bureau wants to propose new regulations that would require data brokers to comply with the Fair Credit Reporting Act. In a speech at the White House earlier this month, CFPB Director Rohit Chopra said the agency is looking into policies to “ensure greater accountability” for companies that buy and sell consumer data, in keeping with an executive order President Joe Biden issued in late February.

Chopra said the agency is considering proposals that would define data brokers that sell certain types of data as “consumer reporting agencies,” thereby requiring those companies to comply with the Fair Credit Reporting Act (FCRA). The statute bans sharing certain kinds of data (e.g., your credit report) unless the sharing serves a specific purpose outlined in the law (e.g., the report is used for employment purposes or to extend a line of credit to someone).
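Viewed as logic rather than law, the FCRA’s permissible-purpose rule is essentially an allowlist: a consumer report may be shared only when the stated use matches one of the statute’s enumerated purposes. Here is a minimal sketch of that gate in Python; the purpose names are simplified placeholders, not the statute’s actual categories.

```python
# Illustrative sketch of an FCRA-style "permissible purpose" gate.
# The purpose names are simplified placeholders for the statute's
# actual enumerated categories.

PERMISSIBLE_PURPOSES = {
    "credit_transaction",      # e.g., extending a line of credit
    "employment_screening",    # e.g., an authorized background check
    "insurance_underwriting",
    "court_order",
}

def may_share_consumer_report(stated_purpose: str) -> bool:
    """Allow sharing only if the stated purpose is on the enumerated list."""
    return stated_purpose in PERMISSIBLE_PURPOSES

# A data broker selling to anyone willing to pay would fail this gate:
assert may_share_consumer_report("employment_screening")
assert not may_share_consumer_report("marketing_list_resale")
```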

The CFPB views the buying and selling of consumer data as a national security issue, not just a matter of privacy. Chopra mentioned three massive data breaches — the 2015 Anthem leak, the 2017 Equifax hack, and the 2018 Marriott breach — as examples of foreign adversaries illicitly obtaining Americans’ personal data. “When Americans’ health information, financial information, and even their travel whereabouts can be assembled into detailed dossiers, it’s no surprise that this raises risks when it comes to safety and security,” Chopra said. But the focus on high-profile hacks obscures a more pervasive, totally legal phenomenon: data brokers’ ability to sell detailed personal information to anyone who’s willing to pay for it…(More)”.

AI-driven public services and the privacy paradox: do citizens really care about their privacy?


Paper: “Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact with an AI-driven public service. Despite general concerns about privacy, we find that citizens are not sensitive to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame within the literature on the government’s role in safeguarding ethical principles, including citizens’ privacy…(More)”.
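Privacy calculus theory treats disclosure as a weighing of perceived benefits against perceived risks. The sketch below renders that weighing as code; the weights and values are arbitrary illustrations, not estimates from the paper.

```python
# Minimal sketch of the privacy calculus: the decision to adopt a service is
# modeled as perceived usefulness weighed against perceived privacy risk.
# Weights and values are arbitrary illustrations, not estimates from the paper.

def would_download_app(usefulness: float, privacy_risk: float,
                       risk_weight: float = 1.0) -> bool:
    """Adopt when perceived benefit exceeds weighted perceived risk."""
    return usefulness > risk_weight * privacy_risk

# The "privacy paradox" in these terms: stated concern (a nonzero risk term)
# does not flip the decision so long as perceived usefulness dominates.
print(would_download_app(usefulness=0.8, privacy_risk=0.6))  # True
print(would_download_app(usefulness=0.3, privacy_risk=0.6))  # False
```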

Why data about people are so hard to govern


Paper by Wendy H. Wong, Jamie Duncan, and David A. Lake: “How data on individuals are gathered, analyzed, and stored remains largely ungoverned at both domestic and global levels. We address the unique governance problem posed by digital data to provide a framework for understanding why data governance remains elusive. Data are easily transferable and replicable, making them a useful tool. But this characteristic creates massive governance problems for all of us who want to have some agency and choice over how (or if) our data are collected and used. Moreover, data are co-created: individuals are the object from which data are culled by an interested party. Yet, any data point has a marginal value of close to zero and thus individuals have little bargaining power when it comes to negotiating with data collectors. Relatedly, data follow the rule of winner take all—the parties that have the most data can leverage it for greater accuracy and utility, leading to natural oligopolies. Finally, data’s value lies in combination with proprietary algorithms that analyze and predict patterns. Given these characteristics, private governance solutions are ineffective. Public solutions will also likely be insufficient. The imbalance in market power between platforms that collect data and individuals will be reproduced in the political sphere. We conclude that some form of collective data governance is required. We examine the challenges to data governance by looking at a public effort, the EU’s General Data Protection Regulation; a private effort, Apple’s “privacy nutrition labels” in its App Store; and a collective effort, the First Nations Information Governance Centre in Canada…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.
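The article does not describe LexisNexis’s proprietary model, but a purely hypothetical sketch shows how trip telemetry of this kind could be reduced to a single risk figure, using the Thursday-morning trip above as input.

```python
# Purely hypothetical toy scoring of connected-car telemetry. The broker's
# actual proprietary model is not public; this only illustrates the idea.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    minutes: float
    hard_brakes: int
    rapid_accels: int

def naive_risk_score(trips: list[Trip]) -> float:
    """Toy score: harsh events per 100 miles driven."""
    total_miles = sum(t.miles for t in trips)
    harsh_events = sum(t.hard_brakes + t.rapid_accels for t in trips)
    return 100 * harsh_events / total_miles if total_miles else 0.0

# The Thursday-morning trip from the report: 7.33 miles in 18 minutes,
# two rapid accelerations and two incidents of hard braking.
print(naive_risk_score([Trip(7.33, 18, 2, 2)]))  # ~54.6 events per 100 miles
```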

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.

Surveilling Alone


Essay by Christine Rosen: “When Jane Jacobs, author of the 1961 classic The Death and Life of Great American Cities, outlined the qualities of successful neighborhoods, she included “eyes on the street,” or, as she described this, the “eyes belonging to those we might call the natural proprietors of the street,” including shopkeepers and residents going about their daily routines. Not every neighborhood enjoyed the benefit of this informal sense of community, of course, but it was widely seen to be desirable. What Jacobs understood is that the combined impact of many local people practicing normal levels of awareness in their neighborhoods on any given day is surprisingly effective for community-building, with the added benefit of building trust and deterring crime.

Jacobs’s championing of these “natural proprietors of the street” was a response to a mid-century concern that aggressive city planning would eradicate the vibrant experience of neighborhoods like her own, the Village in New York City. Jacobs famously took on “master planner” Robert Moses after he proposed building an expressway through Lower Manhattan, a scheme that, had it succeeded, would have destroyed Washington Square Park and the Village, and turned neighborhoods around SoHo into highway underpasses. For Jacobs and her fellow citizen activists, the efficiency of the proposed highway was not enough to justify eliminating bustling sidewalks and streets, where people played a crucial role in maintaining the health and order of their communities.

Today, a different form of efficient design is eliminating “eyes on the street” — by replacing them with technological ones. The proliferation of neighborhood surveillance technologies such as Ring cameras and digital neighborhood-watch platforms and apps such as Nextdoor and Citizen has freed us from the constraints of having to be physically present to monitor our homes and streets. Jacobs’s “eyes on the street” are now cameras on many homes, and the everyday interactions between neighbors and strangers are now a network of cameras and platforms that promise to put “neighborhood security in your hands,” as the Ring Neighbors app puts it.

Inside our homes, we monitor ourselves and our family members with equal zeal, making use of video baby monitors, GPS-tracking software for children’s smartphones (or for covert surveillance by a suspicious spouse), and “smart” speakers that are always listening and often recording when they shouldn’t. A new generation of domestic robots, such as Amazon’s Astro, combines several of these features into a roving service-machine always at your beck and call around the house and ever watchful of its security when you are away…(More)”.

What Happens to Your Sensitive Data When a Data Broker Goes Bankrupt?


Article by Jon Keegan: “In 2021, Near, a company specializing in collecting and selling location data, bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and agreed to sell itself.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it by purchasing Near’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company. 

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented to young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies…(More)”.

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King and Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
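Each of the three suggestions above lends itself to a brief technical sketch. For the first, “privacy by default” means collection flags that start off and flip on only through an affirmative user action; the field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CollectionSettings:
    # Privacy by default: every flag starts False (opt-in), rather than
    # True with a buried opt-out. Field names are hypothetical examples.
    location: bool = False
    contacts: bool = False
    usage_analytics: bool = False

collected: list[tuple[str, dict]] = []

def record_event(settings: CollectionSettings, kind: str, payload: dict) -> None:
    """Collect an event only if the user has affirmatively switched it on."""
    if getattr(settings, kind, False):
        collected.append((kind, payload))
    # Otherwise the event is simply dropped: the default path collects nothing.

record_event(CollectionSettings(), "location", {"lat": 47.6})
print(collected)  # [] -- nothing collected without an explicit opt-in
```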
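For the second, dataset transparency across the life cycle implies a machine-readable provenance record that travels with the data and can be audited; the fields below are illustrative, not an existing standard.

```python
# Illustrative provenance record ("datasheet") that travels with a dataset.
# Field names are invented for illustration, not an existing standard.
dataset_record = {
    "name": "example-training-corpus",
    "collected_from": "public web crawl",
    "collection_date": "2023-06",
    "consent_basis": None,                  # missing: an auditor should flag this
    "downstream_uses": ["model pretraining"],
    "retention_expires": "2026-06",
}

def audit(record: dict) -> list[str]:
    """Return the accountability fields that are missing or empty."""
    required = ["collected_from", "consent_basis", "retention_expires"]
    return [field for field in required if not record.get(field)]

print(audit(dataset_record))  # ['consent_basis']
```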
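And for the third, a data intermediary with permissioning infrastructure could answer use-requests automatically from a person’s standing preferences, defaulting to denial for anything unspecified; all names here are invented.

```python
# Hypothetical data-permissioning layer: an intermediary stores each person's
# standing preferences and answers use-requests on their behalf, so individuals
# need not respond to every consent prompt themselves. All names are invented.

preferences: dict[str, dict[str, bool]] = {
    "user-123": {"medical_research": True, "targeted_advertising": False},
}

def request_use(user_id: str, purpose: str) -> bool:
    """Answer automatically from stored preferences; unknown users or
    purposes default to denial rather than to consent."""
    return preferences.get(user_id, {}).get(purpose, False)

print(request_use("user-123", "medical_research"))      # True
print(request_use("user-123", "targeted_advertising"))  # False
print(request_use("user-123", "resale_to_broker"))      # False (default deny)
```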