Authoritarian Privacy


Paper by Mark Jia: “Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first turns to several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together they tell only a partial story.

More fundamental to China’s privacy turn is the party-state’s use of privacy law to shore up its legitimacy against a backdrop of digital abuse. China’s whiplashed transition into the digital age has given rise to significant vulnerabilities and dependencies for ordinary citizens. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, even state agencies and local governments. So framed, privacy law can enhance perceptions of state performance and potentially soften criticism of the center’s own intrusions. China did not enact privacy law in spite of its surveillance state; it embraced privacy law in order to maintain it. The Article adds to our understanding of privacy law, complicates the conceptual relationship between privacy and democracy, and points towards a general theory of authoritarian privacy…(More)”.

Suspicion Machines


Lighthouse Reports: “Governments all over the world are experimenting with predictive algorithms in ways that are largely invisible to the public. What limited reporting there has been on this topic has largely focused on predictive policing and risk assessments in criminal justice systems. But there is an area where even more far-reaching experiments are underway on vulnerable populations with almost no scrutiny.

Fraud detection systems, ranging from complex machine learning models to crude spreadsheets, are widely deployed in welfare states. The scores they generate have potentially life-changing consequences for millions of people. Until now, public authorities have typically resisted calls for transparency, either by claiming that disclosure would increase the risk of fraud or by citing the need to protect proprietary technology.

The sales pitch for these systems promises that they will recover millions of euros defrauded from the public purse. The caricature of the benefit cheat is a modern take on the classic trope of the undeserving poor, and much of the public debate in Europe — which has the most generous welfare states — is intensely politically charged.

The true extent of welfare fraud is routinely exaggerated by consulting firms, which are often also the algorithm vendors; they talk it up to nearly 5 percent of benefits spending, while some national auditors’ offices estimate it at between 0.2 and 0.4 percent of spending. Distinguishing between honest mistakes and deliberate fraud in complex public systems is messy and hard.

When opaque technologies are deployed in search of political scapegoats, the potential for harm among some of the poorest and most marginalised communities is significant.

Hundreds of thousands of people are being scored by these systems based on data mining operations where there has been scant public consultation. The consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out…(More)”.

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible… The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind…(More)”.

‘Neurorights’ and the next flashpoint of medical privacy


Article by Alex LaCasse: “Around the world, leading neuroscientists, neuroethicists, privacy advocates and legal minds are taking greater interest in brain data and its potential.

Opinions vary widely on the long-term advancements in technology designed to measure brain activity and their impacts on society, as new products trickle out of clinical settings and gain traction for commercial applications.

Some say alarm bells should already be sounding and argue the technology could have corrosive effects on democratic society. Others counter that such claims are hyperbolic, given the uncertainty over whether the technology can even measure certain brain activity in the purported ways.

Today, neurotechnology is primarily confined to medical and research settings, where various clinical-grade devices monitor the brain activity of patients who may suffer from mental illness or paralysis, gauging muscle movement and recording electroencephalography (the measurement of the brain’s electrical activity)…

“I intentionally don’t call this neurorights or brain rights. I call it cognitive liberty,” Duke University Law and Philosophy Professor Nita Farahany said during a LinkedIn Live session. “There is promise of this technology, not only for people who are struggling with a loss of speech and loss of motor activity, but for everyday people.”

The panel’s jumping-off point was Farahany’s new book, “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” which examines the neurotechnology landscape and the potential for negative outcomes in the absence of regulatory oversight.

Farahany was motivated to write the book because she saw a “chasm” between what she thought neurotechnology was capable of and the reality of some companies working to one day decode people’s inner thoughts on some level…(More)” (Book).

Privacy Decisions Are Not Private: How the Notice and Choice Regime Induces Us to Ignore Collective Privacy Risks and What Regulation Should Do About It


Paper by Christopher Jon Sprigman and Stephan Tontrup: “For many reasons the current notice and choice privacy framework fails to empower individuals in effectively making their own privacy choices. In this Article we offer evidence from three novel experiments showing that at the core of this failure is a cognitive error. Notice and choice caters to a heuristic that people employ to make privacy decisions. This heuristic is meant to judge trustworthiness in face-to-face situations. In the online context, it distorts privacy decision-making and leaves potential disclosers vulnerable to exploitation.

From our experimental evidence exploring the heuristic’s effect, we conclude that privacy law must become more behaviorally aware. Specifically, privacy law must be redesigned to intervene in the cognitive mechanisms that keep individuals from making better privacy decisions. A behaviorally-aware privacy regime must centralize, standardize and simplify the framework for making privacy choices.

To achieve these goals, we propose a master privacy template which requires consumers to define their privacy preferences in advance—doing so avoids presenting the consumer with a concrete counterparty, which in turn prevents them from applying the trust heuristic and reduces many other biases that affect privacy decision-making. Our data show that blocking the heuristic enables consumers to consider relevant privacy cues and to take account of the externalities their privacy decisions cause.

The master privacy template provides a much more effective platform for regulation. Through the master template, the regulator can set the standard for automated communication between user clients and website interfaces, a facility we expect to enhance both enforcement and competition over privacy terms…(More)”.

Data and the Digital Self


Report by the ACS: “A series of essays by some of the leading minds on data sharing and privacy in Australia, this book takes a look at some of the critical data-related issues facing Australia today and tomorrow. It looks at digital identity and privacy in the 21st century; at privacy laws and what they need to look like to be effective in the era of big data; at how businesses and governments can work better to build trust in this new era; and at how we need to look beyond just privacy and personal information as we develop solutions over the coming decades…(More)”.

“How Dare They Peep into My Private Life”


Report by Human Rights Watch on “Children’s Rights Violations by Governments that Endorsed Online Learning During the Covid-19 Pandemic”: “The coronavirus pandemic upended the lives and learning of children around the world. Most countries pivoted to some form of online learning, replacing physical classrooms with EdTech websites and apps; this helped fill urgent gaps in delivering some form of education to many children.

But in their rush to connect children to virtual classrooms, few governments checked whether the EdTech they were rapidly endorsing or procuring for schools were safe for children. As a result, children whose families were able to afford access to the internet and connected devices, or who made hard sacrifices in order to do so, were exposed to the privacy practices of the EdTech products they were told or required to use during Covid-19 school closures.

Human Rights Watch conducted its technical analysis of the products between March and August 2021, and subsequently verified its findings as detailed in the methodology section. Each analysis essentially took a snapshot of the prevalence and frequency of tracking technologies embedded in each product on a given date in that window. That prevalence and frequency may fluctuate over time based on multiple factors, meaning that an analysis conducted on later dates might observe variations in the behavior of the products…(More)”.

The Economics of Digital Privacy


Paper by Avi Goldfarb & Verina F. Que: “There has been increasing attention to privacy in the media and in regulatory discussions. This is a consequence of the increased usefulness of digital data. The literature has emphasized the benefits and costs of digital data flows to consumers and firms. The benefits arise in the form of data-driven innovation, higher quality products and services that match consumer needs, and increased profits. The costs relate to intrinsic and instrumental values of privacy. Under standard economic assumptions, this framing of a cost-benefit tradeoff might suggest little role for regulation beyond ensuring consumers are appropriately informed in a robust competitive environment. The empirical literature thus far has focused on this direct cost-benefit assessment, examining how privacy regulations have affected various market outcomes. However, an increasing body of theory work emphasizes externalities related to data flows. These externalities, both positive and negative, suggest benefits to the targeted regulation of digital privacy…(More)”.

UN Guide on Privacy-Enhancing Technologies for Official Statistics


UN Guide: “This document presents methodologies and approaches to mitigating privacy risks when using sensitive or confidential data, which are collectively referred to as privacy-enhancing technologies (PETs). National Statistics Offices (NSOs) are entrusted with data that has the potential to drive innovation and improve national services, research, and social benefit. Yet, there has been a rise in sustained cyber threats, complex networks of intermediaries motivated to procure sensitive data, and advances in methods to re-identify and link data to individuals and across multiple data sources. Data breaches erode public trust and can have serious negative consequences for individuals, groups, and communities. This document focuses on PETs that protect data during analysis and dissemination of sensitive information so that the benefits of using data for official statistics can be realized while minimizing privacy risks to those entrusting sensitive data to NSOs…(More)”.
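
PETs of the kind the guide describes include output-perturbation techniques such as differential privacy, which protect statistics at the point of dissemination. As a much-simplified illustration of that idea (not an example drawn from the guide itself; the function names, parameters, and data below are our own assumptions), the following sketch releases a count statistic with calibrated Laplace noise:

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0, rng=None):
    """Release a count under epsilon-differential privacy via the Laplace mechanism.

    Adding or removing a single record changes the true count by at most 1,
    so Laplace noise with scale 1/epsilon masks any individual's contribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative use: a noisy count of respondents below an income threshold.
incomes = [18_500, 22_000, 54_000, 31_000, 12_750, 76_000]
print(dp_count(incomes, lambda x: x < 25_000, epsilon=0.5))
```

A real statistical release would also require accounting for the cumulative privacy budget across every published query; this toy example protects only a single count.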

Privacy


Book edited by Carissa Veliz and Steven M. Cahn: “Companies collect and share much of your daily life, from your location and search history, to your likes, habits, and relationships. As more and more of our personal data is collected, analyzed, and distributed, we need to think carefully about what we might be losing when we give up our privacy.

Privacy is a thought-provoking collection of philosophical essays on privacy, offering deep insights into the nature of privacy, its value, and the consequences of its loss. Bringing together both classic and contemporary work, this timely volume explores the theories, issues, debates, and applications of the philosophical study of privacy. The essays address concealment and exposure, the liberal value of privacy, privacy in social media, privacy rights and public information, privacy and the limits of law, and more…(More)”.