Beyond the Checkbox: Upgrading the Right to Opt Out


Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systematic. Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out toward all future sites visited or apps used, if a user so desires.

Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still being shared just as if they had not opted out. The only difference is that the data is accompanied by a privacy flag propagating the opt-out to the data recipient. However, if people opt out, their data should not be shared in the first place! The current system, which relies on the propagation of opt-out signals and on deletion of incoming data by the recipient, is complicated, error-prone, violates the principle of data minimization, and is an obstacle to effective privacy enforcement. Changing the ad ecosystem is particularly important because it is used not only on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!…(More)”.
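The browser-level opt-out Zimmeck describes already has one concrete incarnation in Global Privacy Control (GPC), a signal that participating browsers send as the HTTP request header `Sec-GPC: 1`. The sketch below is a minimal, hypothetical illustration of the article's argument that a site honoring such a signal would simply not share the data at all, rather than forwarding it with a propagated opt-out flag; the function names and the downstream call are invented for illustration.

```python
# Minimal sketch, assuming a hypothetical site backend; not any real
# implementation. Global Privacy Control is sent by participating browsers
# as the HTTP request header "Sec-GPC: 1".

def user_opted_out(headers: dict) -> bool:
    """Treat the visitor as opted out when the Sec-GPC header is set to "1"."""
    return headers.get("Sec-GPC", "").strip() == "1"

def share_with_ad_partners(profile: dict) -> None:
    """Hypothetical stand-in for passing data into the ad ecosystem."""
    print("sharing with partners:", profile)

def handle_request(headers: dict, profile: dict) -> None:
    if user_opted_out(headers):
        # Zimmeck's point: the data should simply not be shared, rather than
        # shared alongside a propagated opt-out flag for recipients to honor.
        return
    share_with_ad_partners(profile)

handle_request({"Sec-GPC": "1"}, {"interests": ["travel"]})  # nothing shared
handle_request({}, {"interests": ["travel"]})                # shared as before
```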

Children’s Voice Privacy: First Steps And Emerging Challenges


Paper by Ajinkya Kulkarni et al.: “Children are one of the most under-represented groups in speech technologies, as well as one of the most vulnerable in terms of privacy. Despite this, anonymization techniques targeting this population have received little attention. In this study, we seek to bridge this gap, and establish a baseline for the use of voice anonymization techniques designed for adult speech when applied to children’s voices. Such an evaluation is essential, as children’s speech presents a distinct set of challenges when compared to that of adults. This study comprises three children’s datasets, six anonymization methods, and objective and subjective utility metrics for evaluation. Our results show that existing systems for adults are still able to protect children’s voice privacy, but suffer from much higher utility degradation. In addition, our subjective study displays the challenges of automatic evaluation methods for speech quality in children’s speech, highlighting the need for further research…(More)”. See also: Responsible Data for Children.

Surveillance pricing: How your data determines what you pay


Article by Douglas Crawford: “Surveillance pricing, also known as personalized or algorithmic pricing, is a practice where companies use your personal data, such as your location, the device you’re using, your browsing history, and even your income, to determine what price to show you. It’s not just about supply and demand — it’s about you as a consumer and how much the system thinks you’re able (or willing) to pay.

Have you ever shopped online for a flight, only to find that the price mysteriously increased the second time you checked? Or have you and a friend searched for the same hotel room on your phones, only to find your friend sees a lower price? This isn’t a glitch — it’s surveillance pricing at work.

In the United States, surveillance pricing is becoming increasingly prevalent across various industries, including airlines, hotels, and e-commerce platforms. It exists elsewhere, but in other parts of the world, such as the European Union, there is a growing recognition of the danger this pricing model presents to citizens’ privacy, resulting in stricter data protection laws aimed at curbing it. The US appears to be moving in the opposite direction…(More)”.
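To make the mechanism Crawford describes concrete, here is a deliberately simplified toy model of how a handful of observed signals could nudge a quoted price. The signals, weights, and base price are entirely hypothetical and do not describe any real company's algorithm, which would be proprietary and far more complex.

```python
# Deliberately simplified toy model; the signals, weights, and base price are
# hypothetical and do not describe any real company's pricing algorithm.

BASE_PRICE = 100.00

def personalized_price(signals: dict) -> float:
    """Turn a few observed signals into a price multiplier."""
    multiplier = 1.0
    if signals.get("device") == "premium_phone":   # device model as a proxy for ability to pay
        multiplier += 0.10
    if signals.get("repeat_search"):               # returning visitor who searched this flight before
        multiplier += 0.05
    if signals.get("region_income") == "high":     # coarse location-based income estimate
        multiplier += 0.08
    return round(BASE_PRICE * multiplier, 2)

print(personalized_price({"device": "premium_phone", "repeat_search": True}))  # 115.0
print(personalized_price({"device": "basic_phone"}))                           # 100.0
```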

How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our mind to impact how we think. And now a new study reveals how being watched affects unconscious processing in our brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

How we think about protecting data


Article by Peter Dizikes: “How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.

The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Nature: Humanities and Social Sciences Communications. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is based around a card game with poker-type chips the researchers created to study the issue, called Data Slots. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three types of domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in-person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study…(More)”.

DOGE’s Growing Reach into Personal Data: What it Means for Human Rights


Article by Deborah Brown: “Expansive interagency sharing of personal data could fuel abuses against vulnerable people and communities who are already being targeted by Trump administration policies, like immigrants, lesbian, gay, bisexual, and transgender (LGBT) people, and student protesters. The personal data held by the government reveals deeply sensitive information, such as people’s immigration status, race, gender identity, sexual orientation, and economic status.

A massive centralized government database could easily be used for a range of abusive purposes, like to discriminate against current federal employees and future job applicants on the basis of their sexual orientation or gender identity, or to facilitate the deportation of immigrants. It could result in people forgoing public services out of fear that their data will be weaponized against them by another federal agency.

But the danger doesn’t stop with those already in the administration’s crosshairs. The removal of barriers keeping private data siloed could allow the government or DOGE to deny federal loans for education or Medicaid benefits based on unrelated or even inaccurate data. It could also facilitate the creation of profiles containing all of the information various agencies hold on every person in the country. Such profiles, combined with social media activity, could facilitate the identification and targeting of people for political reasons, including in the context of elections.

Information silos exist for a reason. Personal data should be collected for a determined, specific, and legitimate purpose, and not used for another purpose without notice or justification, according to the key internationally recognized data protection principle, “purpose limitation.” Sharing data seamlessly across federal or even state agencies in the name of an undefined and unmeasurable goal of efficiency is incompatible with this core data protection principle…(More)”.

Trump Wants to Merge Government Data. Here Are 314 Things It Might Know About You.


Article by Emily Badger and Sheera Frenkel: “The federal government knows your mother’s maiden name and your bank account number. The student debt you hold. Your disability status. The company that employs you and the wages you earn there. And that’s just a start. It may also know your … and at least 263 more categories of data. These intimate details about the personal lives of people who live in the United States are held in disconnected data systems across the federal government — some at the Treasury, some at the Social Security Administration and some at the Department of Education, among other agencies.

The Trump administration is now trying to connect the dots of that disparate information. Last month, President Trump signed an executive order calling for the “consolidation” of these segregated records, raising the prospect of creating a kind of data trove about Americans that the government has never had before, and that members of the president’s own party have historically opposed.

The effort is being driven by Elon Musk, the world’s richest man, and his lieutenants with the Department of Government Efficiency, who have sought access to dozens of databases as they have swept through agencies across the federal government. Along the way, they have elbowed past the objections of career staff, data security protocols, national security experts and legal privacy protections…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies in AI: Enabling Data Use and Operationalizing Privacy by Design and Default


Paper by the Centre for Information Policy Leadership at Hunton (“CIPL”): “provides an in-depth exploration of how privacy-enhancing technologies (“PETs”) are being deployed to address privacy within artificial intelligence (“AI”) systems. It aims to describe how these technologies can help operationalize privacy by design and default and serve as key business enablers, allowing companies and public sector organizations to access, share and use data that would otherwise be unavailable. It also seeks to demonstrate how PETs can address challenges and provide new opportunities across the AI life cycle, from data sourcing to model deployment, and includes real-world case studies…

As further detailed in the Paper, CIPL’s recommendations for boosting the adoption of PETs for AI are as follows:

Stakeholders should adopt a holistic view of the benefits of PETs in AI. PETs deliver value beyond addressing privacy and security concerns, such as fostering trust and enabling data sharing. It is crucial that stakeholders consider all these advantages when making decisions about their use.

Regulators should issue more clear and practical guidance to reduce regulatory uncertainty in the use of PETs in AI. While regulators increasingly recognize the value of PETs, clearer and more practical guidance is needed to help organizations implement these technologies effectively.

Regulators should adopt a risk-based approach to assess how PETs can meet standards for data anonymization, providing clear guidance to eliminate uncertainty. There is uncertainty around whether various PETs meet legal standards for data anonymization. A risk-based approach to defining anonymization standards could encourage wider adoption of PETs.

Deployers should take steps to provide contextually appropriate transparency to customers and data subjects. Given the complexity of PETs, deployers should ensure customers and data subjects understand how PETs function within AI models…(More)”.
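As one small, hedged illustration of the kind of techniques the CIPL paper surveys, the sketch below pseudonymizes a direct identifier with a keyed hash before a record is pooled for AI training. The field names and the key handling are hypothetical; real deployments would pair this with stronger PETs and proper key management across the AI life cycle.

```python
# Illustrative sketch of one simple PET: keyed pseudonymization of a direct
# identifier before a record is pooled for AI training. Field names and the
# key handling are hypothetical; real deployments combine this with stronger
# techniques and proper key management.
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-key-management-service"  # hypothetical key, kept apart from the data

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, hard-to-reverse token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "label": 1, "features": [0.2, 0.7]}
shared = {**record, "user_id": pseudonymize(record["user_id"])}
print(shared)  # the raw email address never leaves the data controller
```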

Europe’s GDPR privacy law is headed for red tape bonfire within ‘weeks’


Article by Ellen O’Regan: “Europe’s most famous technology law, the GDPR, is next on the hit list as the European Union pushes ahead with its regulatory killing spree to slash laws it reckons are weighing down its businesses.

The European Commission plans to present a proposal to cut back the General Data Protection Regulation, or GDPR for short, in the next couple of weeks. Slashing regulation is a key focus for Commission President Ursula von der Leyen, as part of an attempt to make businesses in Europe more competitive with rivals in the United States, China and elsewhere. 

The EU’s executive arm has already unveiled packages to simplify rules around sustainability reporting and accessing EU investment. The aim is for companies to waste less time and money on complying with complex legal and regulatory requirements imposed by EU laws…Seven years later, Brussels is taking out the scissors to give its (in)famous privacy law a trim.

There are “a lot of good things about GDPR, [and] privacy is completely necessary. But we don’t need to regulate in a stupid way. We need to make it easy for businesses and for companies to comply,” Danish Digital Minister Caroline Stage Olsen told reporters last week. Denmark will chair the work in the EU Council in the second half of 2025 as part of its rotating presidency.

The criticism of the GDPR echoes the views of former Italian Prime Minister Mario Draghi, who released a landmark economic report last September warning that Europe’s complex laws were preventing its economy from catching up with the United States and China. “The EU’s regulatory stance towards tech companies hampers innovation,” Draghi wrote, singling out the Artificial Intelligence Act and the GDPR…(More)”.

Differential Privacy


Open access book by Simson L. Garfinkel: “Differential privacy (DP) is an increasingly popular, though controversial, approach to protecting personal data. DP protects confidential data by introducing carefully calibrated random numbers, called statistical noise, when the data is used. Google, Apple, and Microsoft have all integrated the technology into their software, and the US Census Bureau used DP to protect data collected in the 2020 census. In this book, Simson Garfinkel presents the underlying ideas of DP, and helps explain why DP is needed in today’s information-rich environment, why it was used as the privacy protection mechanism for the 2020 census, and why it is so controversial in some communities.

When DP is used to protect confidential data, like an advertising profile based on the web pages you have viewed with a web browser, the noise makes it impossible for someone to take that profile and reverse engineer, with absolute certainty, the underlying confidential data on which the profile was computed. The book also chronicles the history of DP and describes the key participants and its limitations. Along the way, it also presents a short history of the US Census and other approaches for data protection such as de-identification and k-anonymity…(More)”.
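As a rough illustration of the "carefully calibrated random numbers" Garfinkel describes, the sketch below adds Laplace noise, one of the standard DP building blocks, to a single count before release. The epsilon value, count, and function names are chosen purely for illustration; this is a generic textbook construction, not the Census Bureau's or any vendor's implementation.

```python
# Toy sketch of the Laplace mechanism, one standard way to add the "carefully
# calibrated" statistical noise described above. Values and names are
# illustrative; this is not the Census Bureau's implementation.
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via Laplace noise."""
    return true_count + laplace_noise(sensitivity / epsilon)

# Publishing how many of 10,000 records share some attribute: the released
# value is close to the true 4,213, but the exact value cannot be recovered
# with certainty from the output.
print(dp_count(4213, epsilon=0.5))
```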