Accept All: Unacceptable? 


Report by Demos and Schillings: “…sought to investigate how our data footprints are being created and exploited online. It involved an exploratory investigation into how data sharing and data regulation practices are impacting citizens: looking into how individuals’ data footprints are created, what people experience when they want to exercise their data rights, and how they feel about how their data is being used. This was a novel approach, using live case studies as they embarked on a data odyssey in order to understand, in real time, the data challenges people face.

We then held a series of stakeholder roundtables with academics, lawyers, technologists, and people working in industry and civil society, which focused on diagnosing the problems and on what potential solutions already look like, or could look like in the future, across multiple stakeholder groups…(More)”. See also: the documentary produced alongside this report by the project partners (law firm Schillings, the independent consumer data action service Rightly, and TVN).

The Future of Consent: The Coming Revolution in Privacy and Consumer Trust


Report by Ogilvy: “The future of consent will be determined by how we – as individuals, nations, and a global species – evolve our understanding of what counts as meaningful consent. For consumers and users, the greatest challenge lies in connecting consent to a mechanism of relevant, personal control over their data. For businesses and other organizations, the task will be to recast consent as a driver of positive economic outcomes, rather than an obstacle.

In the coming years of digital privacy innovation, regulation, and increasing market maturity, everyone will need to think more deeply about their relationship with consent. As an initial step, we’ve assembled this snapshot on the current and future state of (meaningful) consent: what it means, what the obstacles are, and which critical changes we need to embrace to evolve…(More)”.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society, and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as on the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its AppTrackingTransparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.

Authoritarian Privacy


Paper by Mark Jia: “Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first turns to several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together they tell only a partial story.

More fundamental to China’s privacy turn is the party-state’s use of privacy law to shore up its legitimacy against a backdrop of digital abuse. China’s whiplashed transition into the digital age has given rise to significant vulnerabilities and dependencies for ordinary citizens. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, even state agencies and local governments. So framed, privacy law can enhance perceptions of state performance and potentially soften criticism of the center’s own intrusions. China did not enact privacy law in spite of its surveillance state; it embraced privacy law in order to maintain it. The Article adds to our understanding of privacy law, complicates the conceptual relationship between privacy and democracy, and points towards a general theory of authoritarian privacy…(More)”.

Suspicion Machines


Lighthouse Reports: “Governments all over the world are experimenting with predictive algorithms in ways that are largely invisible to the public. What limited reporting there has been on this topic has largely focused on predictive policing and risk assessments in criminal justice systems. But there is an area where even more far-reaching experiments are underway on vulnerable populations with almost no scrutiny.

Fraud detection systems are widely deployed in welfare states, ranging from complex machine learning models to crude spreadsheets. The scores they generate have potentially life-changing consequences for millions of people. Until now, public authorities have typically resisted calls for transparency, either by claiming that disclosure would increase the risk of fraud or by citing the need to protect proprietary technology.

The sales pitch for these systems promises that they will recover millions of euros defrauded from the public purse. The caricature of the benefit cheat is a modern take on the classic trope of the undeserving poor, and much of the public debate in Europe — which has the most generous welfare states — is intensely politically charged.

The true extent of welfare fraud is routinely exaggerated by consulting firms, who are often also the algorithm vendors, talking it up to nearly 5 percent of benefits spending, while some national auditors’ offices estimate it at between 0.2 and 0.4 percent of spending. Distinguishing between honest mistakes and deliberate fraud in complex public systems is messy and hard.

When opaque technologies are deployed in search of political scapegoats, the potential for harm among some of the poorest and most marginalised communities is significant.

Hundreds of thousands of people are being scored by these systems based on data mining operations where there has been scant public consultation. The consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out…(More)”.
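The gap between the vendors’ claimed fraud rates (near 5 percent) and auditors’ estimates (0.2–0.4 percent) matters because of base-rate arithmetic: when fraud is rare, even a reasonably accurate scoring model will mostly flag innocent people. A minimal sketch, using illustrative accuracy figures that are assumptions for this example rather than numbers from the reporting:

```python
# Base-rate arithmetic behind "suspicion machine" false flags.
# All figures below are illustrative assumptions, not from the report.
base_rate = 0.003            # audited fraud estimate: ~0.3% of claimants
sensitivity = 0.90           # assumed: model catches 90% of real fraud
false_positive_rate = 0.05   # assumed: 5% of honest claimants get flagged

flagged_fraud = base_rate * sensitivity
flagged_honest = (1 - base_rate) * false_positive_rate
precision = flagged_fraud / (flagged_fraud + flagged_honest)

print(f"Share of flagged claimants who actually committed fraud: {precision:.1%}")
# With these assumptions, roughly 19 of every 20 flagged claimants are innocent.
```

With these assumed figures, only about 5 percent of flagged claimants are actually fraudulent, which illustrates why scoring systems trained on a rare behaviour can subject large numbers of innocent people to investigation.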

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits, and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible.

The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind…(More)”.

‘Neurorights’ and the next flashpoint of medical privacy


Article by Alex LaCasse: “Around the world, leading neuroscientists, neuroethicists, privacy advocates and legal minds are taking greater interest in brain data and its potential.

Opinions vary widely on the long-term advancements in technology designed to measure brain activity and their impacts on society, as new products trickle out of clinical settings and gain traction for commercial applications.

Some say alarm bells should already be sounding and argue the technology could have corrosive effects on democratic society. Others counter that such claims are hyperbolic, given the uncertainty over whether the technology can even measure certain brain activity in the purported way.

Today, neurotechnology is primarily confined to medical and research settings, where various clinical-grade devices are used to monitor the brain activity of patients who may suffer from mental illness or paralysis, to gauge muscle movement, and to record electroencephalography (the measurement of electrical activity in the brain)…

“I intentionally don’t call this neurorights or brain rights. I call it cognitive liberty,” Duke University Law and Philosophy Professor Nita Farahany said during a LinkedIn Live session. “There is promise of this technology, not only for people who are struggling with a loss of speech and loss of motor activity, but for everyday people.”

The jumping-off point of the panel centered around Farahany’s new book, “The Battle for Your Brain: The Ability to Think Freely in the Age of Neurotechnology,” which examines the neurotechnology landscape and potential negative outcomes without regulatory oversight.

Farahany was motivated to write the book because she saw a “chasm” between what she thought neurotechnology was capable of and the reality of some companies working to one day decode people’s inner thoughts on some level…(More)” (Book).

Privacy Decisions are not Private: How the Notice and Choice Regime Induces us to Ignore Collective Privacy Risks and what Regulation should do about it


Paper by Christopher Jon Sprigman and Stephan Tontrup: “For many reasons, the current notice and choice privacy framework fails to empower individuals to effectively make their own privacy choices. In this Article we offer evidence from three novel experiments showing that at the core of this failure is a cognitive error. Notice and choice caters to a heuristic that people employ to make privacy decisions. This heuristic is meant to judge trustworthiness in face-to-face situations. In the online context, it distorts privacy decision-making and leaves potential disclosers vulnerable to exploitation.

From our experimental evidence exploring the heuristic’s effect, we conclude that privacy law must become more behaviorally aware. Specifically, privacy law must be redesigned to intervene in the cognitive mechanisms that keep individuals from making better privacy decisions. A behaviorally-aware privacy regime must centralize, standardize and simplify the framework for making privacy choices.

To achieve these goals, we propose a master privacy template which requires consumers to define their privacy preferences in advance—doing so avoids presenting the consumer with a concrete counterparty, and this, in turn, prevents them from applying the trust heuristic and reduces many other biases that affect privacy decision-making. Our data show that blocking the heuristic enables consumers to consider relevant privacy cues and be considerate of externalities their privacy decisions cause.

The master privacy template provides a much more effective platform for regulation. Through the master template the regulator can set the standard for automated communication between user clients and website interfaces, a facility which we expect to enhance enforcement and competition about privacy terms…(More)”.
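A master privacy template of the kind the authors propose would amount to a machine-readable preference profile that a user’s client applies automatically, without ever showing the user a concrete counterparty. The sketch below is purely illustrative: the field names and matching logic are invented for this example, not taken from the paper.

```python
# Hypothetical sketch of a "master privacy template": the user defines
# purpose-level preferences once, in advance; the client then answers
# each site's data request mechanically. Field names are invented for
# illustration; the paper does not specify a format.
USER_TEMPLATE = {
    "analytics": "allow",
    "personalised_ads": "deny",
    "third_party_sharing": "deny",
}

def answer_request(site_request, template):
    """Grant only purposes the user pre-approved; deny everything else."""
    return {purpose: template.get(purpose, "deny") == "allow"
            for purpose in site_request}

decisions = answer_request(
    ["analytics", "personalised_ads", "location_history"], USER_TEMPLATE)
print(decisions)
# {'analytics': True, 'personalised_ads': False, 'location_history': False}
```

Because the template is defined once and matched mechanically, the site’s identity and interface never enter the decision, which is precisely the property the authors argue blocks the face-to-face trust heuristic.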

Data and the Digital Self


Report by the ACS: “A series of essays by some of the leading minds on data sharing and privacy in Australia, this book takes a look at some of the critical data-related issues facing Australia today and tomorrow. It looks at digital identity and privacy in the 21st century; at privacy laws and what they need to look like to be effective in the era of big data; at how businesses and governments can work better to build trust in this new era; and at how we need to look beyond just privacy and personal information as we develop solutions over the coming decades…(More)”.

“How Dare They Peep into My Private Life”


Report by Human Rights Watch on “Children’s Rights Violations by Governments that Endorsed Online Learning During the Covid-19 Pandemic”: “The coronavirus pandemic upended the lives and learning of children around the world. Most countries pivoted to some form of online learning, replacing physical classrooms with EdTech websites and apps; this helped fill urgent gaps in delivering some form of education to many children.

But in their rush to connect children to virtual classrooms, few governments checked whether the EdTech they were rapidly endorsing or procuring for schools were safe for children. As a result, children whose families were able to afford access to the internet and connected devices, or who made hard sacrifices in order to do so, were exposed to the privacy practices of the EdTech products they were told or required to use during Covid-19 school closures.

Human Rights Watch conducted its technical analysis of the products between March and August 2021, and subsequently verified its findings as detailed in the methodology section. Each analysis essentially took a snapshot of the prevalence and frequency of tracking technologies embedded in each product on a given date in that window. That prevalence and frequency may fluctuate over time based on multiple factors, meaning that an analysis conducted on later dates might observe variations in the behavior of the products…(More)”.
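The snapshot analysis described above (counting which tracking technologies appear in a product, and how often) can be approximated at its simplest by tallying a product’s observed network requests against a list of known tracker domains. The domains below are invented placeholders; HRW’s actual methodology inspects live traffic and embedded SDKs in far more depth.

```python
from collections import Counter

# Invented example data; a real analysis captures live network traffic
# and SDK fingerprints rather than using a hard-coded request list.
KNOWN_TRACKERS = {"ads.example-tracker.com", "analytics.example-metrics.net"}

observed_requests = [
    "cdn.edtech-app.example",
    "ads.example-tracker.com",
    "analytics.example-metrics.net",
    "ads.example-tracker.com",
]

# Tally gives both prevalence (which trackers) and frequency (how often).
tally = Counter(d for d in observed_requests if d in KNOWN_TRACKERS)
print(tally)
```

Repeating such a tally on different dates would show the fluctuation over time that the report cautions about.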