Gaming Public Opinion


Article by Albert Zhang, Tilla Hoja & Jasmine Latimore: “The Chinese Communist Party’s (CCP’s) embrace of large-scale online influence operations and spreading of disinformation on Western social-media platforms has escalated since the first major attribution from Silicon Valley companies in 2019. While Chinese public diplomacy may have shifted to a softer tone in 2023 after many years of wolf-warrior online rhetoric, the Chinese Government continues to conduct global covert cyber-enabled influence operations. Those operations are now more frequent, increasingly sophisticated and increasingly effective in supporting the CCP’s strategic goals. They focus on disrupting the domestic, foreign, security and defence policies of foreign countries, and most of all they target democracies.

Currently—in targeted democracies—most political leaders, policymakers, businesses, civil society groups and publics have little understanding of how the CCP engages in clandestine activities online in their countries, even though this activity is escalating and evolving quickly. The stakes are high for democracies, given the indispensability of the internet and their reliance on open online spaces, free from interference. Despite years of monitoring covert CCP cyber-enabled influence operations by social-media platforms, governments, and research institutes such as ASPI, definitive public attribution of the actors driving these activities is rare. Covert online operations, by design, are difficult to detect and attribute to state actors.

Social-media platforms and governments struggle to devote adequate resources to identifying, preventing and deterring increasing levels of malicious activity, and sometimes they don’t want to name and shame the Chinese Government for political, economic and/or commercial reasons…(More)”.

Operationalizing digital self-determination


Paper by Stefaan G. Verhulst: “A proliferation of data-generating devices, sensors, and applications has led to unprecedented amounts of digital data. We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. The potential of data is evident in the possibilities offered by open data and data collaboratives—both instances of how wider access to data can lead to positive and often dramatic social transformation. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (such as open data or consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), are limited in their ability to adequately address the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required. The study and practice of DSD remain in their infancy. The characteristics we have outlined here are only exploratory, and much work remains to be done to better understand what works and what does not. We suggest the need for a new research framework or agenda to explore DSD and how it can address the asymmetries, imbalances, and inequalities—both in data and society more generally—that are emerging as key public policy challenges of our era…(More)”.

LGBTQ+ data availability


Report by Beyond Deng and Tara Watson: “LGBTQ+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning) identification has doubled over the past decade, yet data on the overall LGBTQ+ population remains limited in large, nationally representative surveys such as the American Community Survey. These surveys are consistently used to understand the economic wellbeing of individuals, but they fail to fully capture information related to one’s sexual orientation and gender identity (SOGI).[1]

Asking incomplete SOGI questions leaves a gap in research that, if left unaddressed, will continue to grow in importance with the increase of the LGBTQ+ population, particularly among younger cohorts. In this report, we provide an overview of four large, nationally representative, and publicly accessible datasets that include information relevant for economic analysis. These include the Behavioral Risk Factor Surveillance System (BRFSS), the National Health Interview Survey (NHIS), the American Community Survey (ACS), and the Census Household Pulse Survey. Each survey varies by sample size, sample unit, periodicity, geography, and the SOGI information it collects.[2]

The difference in how these datasets collect SOGI information affects estimates of LGBTQ+ prevalence. While we find considerable differences in measured LGBT prevalence across datasets, each survey documents a substantial increase in non-straight identity over time. Figure 1 shows that this is largely driven by young adults, who have become increasingly likely to identify as LGBT over the past decade. In the NHIS, around 4% of 18–24-year-olds identified as LGB in 2013, a share that rose to 9.5% in 2021. Because of the short time horizon in these surveys, it is unclear how the current young adult cohort will identify as they age. Despite this, an important takeaway is that younger age groups clearly represent a substantial portion of the LGB community and are important to incorporate in economic analyses…(More)”.
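
To make the report’s headline numbers concrete, the following is a minimal, hypothetical sketch (not drawn from the report itself) of the kind of calculation behind an estimate such as “4% of 18–24-year-olds identified as LGB.” The record type and field names are invented for illustration, and survey weighting is ignored:

```swift
// Hypothetical person-level survey record; field names are invented for illustration.
struct Respondent {
    let surveyYear: Int
    let age: Int
    let identifiesAsLGB: Bool
}

/// Unweighted share of respondents in a given year and age range who identify as LGB.
func lgbShare(of respondents: [Respondent], year: Int, ages: ClosedRange<Int>) -> Double? {
    let cohort = respondents.filter { $0.surveyYear == year && ages.contains($0.age) }
    guard !cohort.isEmpty else { return nil }
    let lgbCount = cohort.filter { $0.identifiesAsLGB }.count
    return Double(lgbCount) / Double(cohort.count)
}

// Example: an unweighted estimate comparable to the “4% of 18–24-year-olds in 2013” figure.
// let share2013 = lgbShare(of: records, year: 2013, ages: 18...24)
```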

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its AppTrackingTransparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.
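
The opt-in gate that ATT imposes, described in the excerpt above, is exposed to app developers through Apple’s AppTrackingTransparency framework. As a rough sketch of the consent flow (assuming an app targeting iOS 14 or later that declares the required NSUserTrackingUsageDescription string in its Info.plist), the request-and-check logic looks broadly like this:

```swift
import AppTrackingTransparency
import AdSupport

/// Request tracking permission and read the IDFA only if the user opts in.
/// (Call this once the app is active, e.g. from a scene or app delegate.)
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Opt-in granted: the real device identifier is available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized, IDFA: \(idfa.uuidString)")
        case .denied, .restricted, .notDetermined:
            // No opt-in: the IDFA comes back as all zeroes, so cross-app
            // tracking via this identifier is effectively blocked.
            print("Tracking not authorized; IDFA is zeroed out.")
        @unknown default:
            print("Unrecognized authorization status.")
        }
    }
}
```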

The Rule of Law


Paper by Cass R. Sunstein: “The concept of the rule of law is invoked for purposes that are both numerous and diverse, and that concept is often said to overlap with, or to require, an assortment of other practices and ideals, including democracy, free elections, free markets, property rights, and freedom of speech. It is best to understand the concept in a more specific way, with a commitment to seven principles: (1) clear, general, publicly accessible rules laid down in advance; (2) prospectivity rather than retroactivity; (3) conformity between law on the books and law in the world; (4) hearing rights; (5) some degree of separation between (a) law-making and law enforcement and (b) interpretation of law; (6) no unduly rapid changes in the law; and (7) no contradictions or palpable inconsistency in the law. This account of the rule of law conflicts with those offered by (among many others) Friedrich Hayek and Morton Horwitz, who conflate the idea with other, quite different ideas and practices. Of course it is true that the seven principles can be specified in different ways, broadly compatible with the goal of describing the rule of law as a distinct concept, and some of the seven principles might be understood to be more fundamental than others…(More)”.

Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘post office’ or ‘horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia and led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies…(More)”.

The Right To Be Free From Automation


Essay by Ziyaad Bhorat: “Is it possible to free ourselves from automation? The idea sounds fanciful, if not outright absurd. Industrial and technological development have reached a planetary level, and automation, as the general substitution or augmentation of human work with artificial tools capable of completing tasks on their own, is the bedrock of all the technologies designed to save, assist and connect us. 

From industrial lathes to OpenAI’s ChatGPT, automation is one of the most groundbreaking achievements in the history of humanity. As a consequence of the human ingenuity and imagination involved in automating our tools, the sky is quite literally no longer a limit. 

But in thinking about our relationship to automation in contemporary life, my unease has grown. And I’m not alone — America’s Blueprint for an AI Bill of Rights and the European Union’s GDPR both express skepticism of automated tools and systems: The “use of technology, data and automated systems in ways that threaten the rights of the American public”; the “right not to be subject to a decision based solely on automated processing.” 

If we look a little deeper, we find this uneasy language in other places where people have been guarding three important abilities against automated technologies. Historically, we have found these abilities so important that we now include them in various contemporary rights frameworks: the right to work, the right to know and understand the source of the things we consume, and the right to make our own decisions. Whether we like it or not, therefore, communities and individuals are already asserting the importance of protecting people from the ubiquity of automated tools and systems.

Consider the case of one of South Africa’s largest retailers, Pick n Pay, which in 2016 tried to introduce self-checkout technology in its retail stores. In post-Apartheid South Africa, trade unions are immensely powerful and unemployment persistently high, so any retail firm that wants to introduce technology that might affect the demand for labor faces huge challenges. After the country’s largest union federation threatened to boycott the new Pick n Pay machines, the company scrapped its pilot. 

As the sociologist Christopher Andrews writes in “The Overworked Consumer,” self-checkout technology is by no means a universally good thing. Firms that introduce it need to deal with new forms of theft, maintenance and bottlenecks, while customers end up doing more work themselves. These issues are in addition to the ill fortunes of displaced workers…(More)”.

The Law of AI for Good


Paper by Orly Lobel: “Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while paying short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.

A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design…(More)”

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible….The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind.”…(More)”.

‘Neurorights’ and the next flashpoint of medical privacy


Article by Alex LaCasse: “Around the world, leading neuroscientists, neuroethicists, privacy advocates and legal minds are taking greater interest in brain data and its potential.

Opinions vary widely on the long-term advancements in technology designed to measure brain activity and their impacts on society, as new products trickle out of clinical settings and gain traction for commercial applications.

Some say alarm bells should already be sounding and argue the technology could have corrosive effects on democratic society. Others counter such claims are hyperbolic, given the uncertainty that technology can even measure certain brain activities in the purported way.

Today, neurotechnology is primarily confined to medical and research settings, with the use of various clinical-grade devices to monitor the brain activity of patients who may suffer from mental illnesses or paralysis to gauge muscle movement and record electroencephalography (the measurement of electrical activity and motor function in the brain)….

“I intentionally don’t call this neurorights or brain rights. I call it cognitive liberty,” Duke University Law and Philosophy Professor Nita Farahany said during a LinkedIn Live session. “There is promise of this technology, not only for people who are struggling with a loss of speech and loss of motor activity, but for everyday people.”

The panel’s jumping-off point was Farahany’s new book, “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” which examines the neurotechnology landscape and the potential negative outcomes in the absence of regulatory oversight.

Farahany was motivated to write the book because she saw a “chasm” between what she thought neurotechnology was capable of and the reality of some companies working to one day decode people’s inner thoughts on some level…(More)” (Book).