Synthetic Politics: Preparing democracy for Generative AI


Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it’s also possible that they will become more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.

This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:

  • In Action Plan 1, we consider the actions that should be urgently put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes reducing the production and dissemination of harmful synthetic content and empowering users so that harmful impacts of synthetic content are reduced in the immediate term.
  • In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.

EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.

A diagrammatic representation of Evidential Pluralism (© Jon Williamson)
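As an illustration only (not drawn from the article), the two claims can be read as a simple decision rule: a causal claim is established just when both the correlational strand and the mechanistic strand of evidence hold up, each assessed against the relevant type of study. A minimal Python sketch, with an invented `Study` structure:

```python
# Toy rendering of Evidential Pluralism's two claims. The Study type and
# the all-studies-must-support rule are invented simplifications.
from dataclasses import dataclass

@dataclass
class Study:
    kind: str        # "association" or "mechanistic"
    supports: bool   # does this study support the causal link A -> B?

def causal_claim_established(studies):
    """Object pluralism: require evidence of correlation AND of a linking
    mechanism. Study pluralism: draw on both kinds of study."""
    association = [s for s in studies if s.kind == "association"]
    mechanistic = [s for s in studies if s.kind == "mechanistic"]
    correlation_ok = bool(association) and all(s.supports for s in association)
    mechanism_ok = bool(mechanistic) and all(s.supports for s in mechanistic)
    return correlation_ok and mechanism_ok

evidence = [Study("association", True), Study("mechanistic", True)]
print(causal_claim_established(evidence))  # True: both strands support A -> B
```

An association study alone, however strong, would not establish the claim under this rule; the mechanistic strand must independently hold up.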

What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar change to its political ad rules in 2022. And as precedent has shown, the platforms tend to provide even less coverage outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies, ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of personal, sensitive information that requires heightened safeguards and protection.

The number and nature of actors involved in the election process is also changing, and so are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

How AI could take over elections – and undermine democracy


Article by Archon Fung and Lawrence Lessig: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better, in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
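As a minimal, hedged sketch of the trial-and-error loop described here (not anything Clogger-specific, which remains hypothetical), an epsilon-greedy bandit learns from feedback alone which of several actions draws the best response. The "message variants" and their response rates below are invented for illustration.

```python
import random

# Epsilon-greedy bandit: mostly exploit the action with the best running
# average reward, occasionally explore a random one. The three hypothetical
# message variants have unknown response rates the learner must discover.
def epsilon_greedy(reward_probs, steps=10000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)   # running average reward per action
    for _ in range(steps):
        if rng.random() < epsilon:                       # explore
            a = rng.randrange(len(reward_probs))
        else:                                            # exploit best so far
            a = max(range(len(reward_probs)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]    # incremental mean
    return values

# Three invented variants with true response rates 2%, 5%, and 11%.
estimates = epsilon_greedy([0.02, 0.05, 0.11])
best = max(range(3), key=lambda i: estimates[i])
print(best)  # with enough trials, settles on the highest-response variant
```

The same loop scales to any feedback signal; the article's point is that a campaign system could treat each voter interaction as one such trial.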

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media…(More)”.

LocalView, a database of public meetings for the study of local politics and policy-making in the United States


Paper by Soubhik Barari and Tyler Simko: “Despite the fundamental importance of American local governments for service provision in areas like education and public health, local policy-making remains difficult and expensive to study at scale due to a lack of centralized data. This article introduces LocalView, the largest existing dataset of real-time local government public meetings – the central policy-making process in local government. In sum, the dataset currently covers 139,616 videos and their corresponding textual and audio transcripts of local government meetings publicly uploaded to YouTube – the world’s largest public video-sharing website – from 1,012 places and 2,861 distinct governments across the United States between 2006 and 2022. The data are downloaded, processed, cleaned, and publicly disseminated (at localview.net) for analysis across places and over time. We validate this dataset using a variety of methods and demonstrate how it can be used to map local governments’ attention to policy areas of interest. Finally, we discuss how LocalView may be used by journalists, academics, and other users for understanding how local communities deliberate crucial policy questions on topics including climate change, public health, and immigration…(More)”.
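As a rough illustration of the attention-mapping use case (the record fields below are hypothetical, not LocalView's actual schema), one could count mentions of policy keywords in meeting transcripts by year:

```python
# Sketch: measure local-government attention to a policy area by counting
# keyword mentions in meeting transcripts over time. Keywords, field names,
# and the toy transcripts are invented for illustration.
from collections import Counter

CLIMATE_TERMS = {"climate", "flooding", "emissions", "sea level"}

def attention_by_year(meetings, terms=CLIMATE_TERMS):
    """meetings: iterable of dicts with 'year' and 'transcript' keys."""
    mentions = Counter()
    for m in meetings:
        text = m["transcript"].lower()
        mentions[m["year"]] += sum(text.count(t) for t in terms)
    return dict(mentions)

toy = [
    {"year": 2019, "transcript": "Budget review. No climate items."},
    {"year": 2021, "transcript": "Flooding on Main St; emissions plan; climate resilience."},
]
print(attention_by_year(toy))  # {2019: 1, 2021: 3}
```

Real analyses would normalise by meeting length and use more careful term matching, but the shape of the computation is the same: transcripts in, per-place-per-period attention scores out.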

All Eyes on Them: A Field Experiment on Citizen Oversight and Electoral Integrity


Paper by Natalia Garbiras-Díaz and Mateo Montenegro: “Can information and communication technologies help citizens monitor their elections? We analyze a large-scale field experiment designed to answer this question in Colombia. We leveraged Facebook advertisements sent to over 4 million potential voters to encourage citizen reporting of electoral irregularities. We also cross-randomized whether candidates were informed about the campaign in a subset of municipalities. Total reports, and evidence-backed ones, experienced a large increase. Across a wide array of measures, electoral irregularities decreased. Finally, the reporting campaign reduced the vote share of candidates dependent on irregularities. This light-touch intervention is more cost-effective than monitoring efforts traditionally used by policymakers…(More)”.

Orbán used Hungarians’ COVID data to boost election campaign, report says


Article by Louis Westendarp: “Hungarian Prime Minister Viktor Orbán’s ruling party Fidesz used citizens’ data from COVID-19 vaccine signups to spread Fidesz campaign messages before Hungary’s election in April 2022, according to a report by Human Rights Watch.

Not only was data from vaccine jabs used to help Fidesz, but also data from tax benefits applications and association membership registrations. This violates privacy rights, said the report — and blurs the line between the ruling party and government resources in Hungary, which has repeatedly been warned by the EU to clean up its act regarding the rule of law.

“Using people’s personal data collected so they could access public services to bombard them with political campaign messages is a betrayal of trust and an abuse of power,” said Deborah Brown, senior technology researcher at Human Rights Watch…(More)”.

Philanthropy to Protect US Democracy


Essay by Lukas Haynes: “…Given the threat of election subversion, philanthropists who care about democracy across the political spectrum must now deploy donations as effectively as they can. In their seminal book, Money Well Spent: A Strategic Plan for Smart Philanthropy, Paul Brest and Hal Harvey argue that generating “alternative solutions” to hard problems “requires creativity or innovation akin to that of a scientist or engineer—creativity that is goal-oriented, that aims to come up with pragmatic solutions to a problem.”

In seeking the most effective solutions, Brest and Harvey do not find that nonpartisan, charitable efforts are the only legitimate form of strategic giving. Instead, they encourage donors to identify clear problem-solving goals, sound strategy, and clarity about risk tolerance.

Given the concerted attack on democratic norms by political candidates, there is no more effective alternative at hand than using political donations to defeat those candidates. If it is not already part of donors’ philanthropic toolkit to protect democracy, it needs to be and soon.

Once Big Lie-promoting candidates win and take power over elections, it will be too late to repeal their authority, especially in states where Republicans control the state legislatures. Should they successfully subvert a national presidential election in a deeply polarized nation, the United States will have crossed an undemocratic Rubicon no well-intentioned American wants to witness. So what are the most effective ways for political donors to respond to this perilous moment?…(More)”.

Citizens can effectively monitor the integrity of their elections: Evidence from Colombia


Paper by Natalia Garbiras-Díaz and Mateo Montenegro: “ICT-enabled monitoring tools effectively encourage citizens to oversee their elections and reduce fraud

Despite many efforts by governments and international organizations to guarantee free and fair elections, in many democracies, electoral integrity continues to be threatened. Irregularities including fraud, vote buying or voter intimidation reduce political accountability, which can distort the allocation of public goods and services (Hicken 2011, Khemani 2015). 

But why is it so hard to prevent and curb electoral irregularities? While traditional strategies such as the deployment of electoral observers and auditors have proven effective (Hyde 2010, Enikolopov et al. 2013, Leefers and Vicente 2019), these are difficult to scale up and involve large investments in the training, security and transportation of personnel to remote and developing areas.

In Garbiras-Díaz and Montenegro (2022), we designed and implemented a large-scale field experiment during the election period in Colombia to study an innovative and light-touch strategy that circumvents many of these costs. We examine whether citizens can effectively oversee elections through online platforms, and demonstrate that delegating monitoring to citizens can provide a cost-effective alternative to more traditional strategies. Moreover, with growing access to the internet in developing countries reducing the barriers to online monitoring, this strategy is scalable and can be particularly impactful. Our results show how citizens can be encouraged to monitor elections, and, more importantly, illustrate how this form of monitoring can prevent politicians from using electoral irregularities to undermine the integrity of elections…(More)”.