Generative AI and Democracy: Impacts and Interventions


Report by Demos (UK): “This week’s election announcement has set all political parties firmly into campaign mode and over the next 40 days the public will be weighing up who will get their vote on 4th July.

This democratic moment, however, will take place against the backdrop of a new and largely untested threat: generative AI. In the lead-up to the election, the strength of our electoral integrity is likely to be tested by the spread of AI-generated content and deepfakes – an issue that over 60% of the public are concerned about, according to recent Demos and Full Fact polling.

Our new paper looks at the near- and long-term solutions at our disposal for bolstering the resilience of our democratic institutions in the modern technological age. We explore the four most pressing mechanisms by which generative AI challenges the stability of democracy, and how to mitigate them…(More)”.

QuantGov


About: “QuantGov is an open-source policy analytics platform designed to help create greater understanding and analysis of the breadth of government actions through quantifying policy text. By using the platform, researchers can quickly and effectively retrieve unique data that lies embedded in large bodies of text – data on text complexity, part-of-speech metrics, topic modeling, and more. …

QuantGov is a tool designed to make policy text more accessible. Think of it as a hyper-powerful Google search that not only (1) finds specified content within massive quantities of text, but (2) also detects patterns and groupings and can even make predictions about what is in a document. Some recent use cases include the following:

  • Analyzing state regulatory codes and predicting which parts of those codes are related to occupational licensing….And predicting which occupation the regulation is talking about….And determining the cost to receive the license.
  • Analyzing Canadian provincial regulatory codes while grouping individual regulations by industry-topic….And determining which Ministers are responsible for those regulations….And determining the complexity of the text for those regulations.
  • Quantifying the number of tariff exclusions that exist due to the Trade Expansion Act of 1962 and recent tariff policies….And determining which products those exclusions target.
  • Comparing the regulatory codes and content of 46 US states, 11 Canadian provinces, and 7 Australian states….While using consistent metrics that can lead to insights that provide legitimate policy improvements…(More)”.
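
To illustrate the kind of text quantification QuantGov performs, here is a minimal, hypothetical Python sketch that counts binding regulatory terms and estimates reading complexity for a snippet of policy text. The restriction-term list and the complexity proxy are simplified assumptions of ours, not QuantGov's actual pipeline.

```python
import re

# Hypothetical mini-example of QuantGov-style policy text quantification.
# The restriction terms follow the convention of counting binding words;
# the complexity proxy is a simple words-per-sentence average. Both are
# simplifications of ours, not QuantGov's actual pipeline.

RESTRICTION_TERMS = ("shall", "must", "may not", "required", "prohibited")

def count_restrictions(text: str) -> int:
    """Count occurrences of binding regulatory terms in the text."""
    lowered = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
               for term in RESTRICTION_TERMS)

def avg_sentence_length(text: str) -> float:
    """Crude text-complexity proxy: mean words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / max(len(sentences), 1)

sample = ("A licensee shall complete 40 hours of training. "
          "Unlicensed practice is prohibited and violators may not reapply.")
print(count_restrictions(sample))   # -> 3
print(avg_sentence_length(sample))  # -> 8.5
```

QuantGov's real pipeline layers machine-learning classifiers on top of counts like these, for example to predict which occupation a licensing regulation targets.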

How do you accidentally run for President of Iceland?


Blog by Anna Andersen: “Content design can have real consequences — for democracy, even…

To run for President of Iceland, you need to be an Icelandic citizen, at least 35 years old, and have 1,500 endorsements.

For the first time in Icelandic history, this endorsement process is digital. Instead of collecting all their signatures on paper the old-fashioned way, candidates can now send people to https://island.is/forsetaframbod to submit their endorsement.

This change has, also for the first time in Icelandic history, given the nation a clear window into who is trying to run — and it’s a remarkably large number. To date, 82 people are collecting endorsements, including a comedian, a model, the world’s first double-arm transplant recipient, and my aunt Helga.

Many of these people are seriously vying for president (yep, my aunt Helga), some of them have undoubtedly signed up as a joke (nope, not the comedian), and at least 11 of them accidentally registered and had no idea that they were collecting endorsements for their candidacy.

“I’m definitely not about to run for president, this was just an accident,” one person told a reporter after having a good laugh about it.

“That’s hilarious!” another person said, thanking the reporter for letting them know that they were in the running.

As a content designer, I was intrigued. How could so many people accidentally start a campaign for President of Iceland?

It turns out, the answer largely has to do with content design. Presidential hopefuls were sending people a link to a page where they could be endorsed, but instead of endorsing the candidate, some people accidentally registered to be a candidate…(More)”.

Synthetic Politics: Preparing democracy for Generative AI


Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it’s also possible that they will become more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.

This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:

  • In Action Plan 1, we consider the actions that should be urgently put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes reducing the production and dissemination of harmful synthetic content, and empowering users so that its harmful impacts are reduced in the immediate term.
  • In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.

EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.
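
Rendered as a formal sketch (the notation below is ours for illustration, not the authors' own formalism), the two claims can be written as follows:

```latex
% Formal sketch of Evidential Pluralism's two claims.
% Notation is ours for illustration, not the authors' own formalism.
% Object pluralism: establishing causation requires establishing both
% correlation and a linking mechanism that accounts for it.
\[
  \mathrm{establish}(A \Rightarrow_{\mathrm{cause}} B)
  \;\text{ requires }\;
  \mathrm{establish}(\mathrm{Corr}(A,B)) \;\wedge\; \mathrm{establish}(\mathrm{Mech}(A,B))
\]
% Study pluralism: assessing causation requires assessing both kinds of study.
\[
  \mathrm{assess}(A \Rightarrow_{\mathrm{cause}} B)
  \;\text{ requires }\;
  \mathrm{assess}(\text{association studies}) \;\wedge\; \mathrm{assess}(\text{mechanistic studies})
\]
```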

[Figure: a diagrammatic representation of Evidential Pluralism (© Jon Williamson)]

What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022 – many of whom worked on trust and safety – while many YouTube employees working on misinformation policy were affected by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as precedent has shown, the platforms tend to have even less coverage outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies, ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of personal, sensitive information that require heightened safeguards and protection.

The number and nature of actors involved in the election process are also changing, and so are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

How AI could take over elections – and undermine democracy


Article by Archon Fung and Lawrence Lessig: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
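
To make that trial-and-error loop concrete, here is a minimal, hypothetical Python sketch of an epsilon-greedy bandit that learns which message variant draws the most engagement. The variants and the simulated feedback are illustrative assumptions of ours, not anything Clogger-specific.

```python
import random

# Hypothetical sketch of the reinforcement-learning loop described above:
# an epsilon-greedy bandit that learns which message variant earns the
# most engagement. The variants and the simulated feedback are
# illustrative assumptions, not any real campaign system.

VARIANTS = ["economy appeal", "security appeal", "community appeal"]
counts = {v: 0 for v in VARIANTS}     # times each variant was sent
rewards = {v: 0.0 for v in VARIANTS}  # cumulative engagement observed
EPSILON = 0.1                         # exploration rate

def simulated_engagement(variant: str) -> float:
    """Stand-in for real feedback (clicks, replies, shares)."""
    base = {"economy appeal": 0.30, "security appeal": 0.20,
            "community appeal": 0.25}[variant]
    return 1.0 if random.random() < base else 0.0

for step in range(10_000):
    if random.random() < EPSILON:   # explore: try a random variant
        choice = random.choice(VARIANTS)
    else:                           # exploit: send the best variant so far
        choice = max(VARIANTS,
                     key=lambda v: rewards[v] / counts[v] if counts[v] else 0.0)
    counts[choice] += 1
    rewards[choice] += simulated_engagement(choice)

# The bandit converges on the variant with the highest engagement rate.
print({v: round(rewards[v] / max(counts[v], 1), 3) for v in VARIANTS})
```

Scaled up from three canned variants to freely generated text, and from simulated clicks to real engagement data, this is the loop the authors warn could be pointed at voting behavior.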

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media…(More)”.

LocalView, a database of public meetings for the study of local politics and policy-making in the United States


Paper by Soubhik Barari and Tyler Simko: “Despite the fundamental importance of American local governments for service provision in areas like education and public health, local policy-making remains difficult and expensive to study at scale due to a lack of centralized data. This article introduces LocalView, the largest existing dataset of real-time local government public meetings – the central policy-making process in local government. In sum, the dataset currently covers 139,616 videos and their corresponding textual and audio transcripts of local government meetings publicly uploaded to YouTube – the world’s largest public video-sharing website – from 1,012 places and 2,861 distinct governments across the United States between 2006 and 2022. The data are processed, downloaded, cleaned, and publicly disseminated (at localview.net) for analysis across places and over time. We validate this dataset using a variety of methods and demonstrate how it can be used to map local governments’ attention to policy areas of interest. Finally, we discuss how LocalView may be used by journalists, academics, and other users for understanding how local communities deliberate crucial policy questions on topics including climate change, public health, and immigration…(More)”.
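
As a hedged illustration of the analysis the authors describe, mapping local governments' attention to policy areas, the following hypothetical Python sketch counts topic-keyword mentions across meeting transcripts. The keyword lists and sample records are assumptions of ours, not LocalView's actual schema.

```python
from collections import Counter

# Hypothetical sketch of mapping local governments' attention to policy
# areas using meeting transcripts. The keyword lists and inline sample
# records are illustrative assumptions, not LocalView's actual schema;
# in practice the transcripts would come from the dataset at localview.net.

TOPICS = {
    "climate change": ["climate", "emissions", "flooding", "renewable"],
    "public health": ["vaccine", "clinic", "outbreak", "sanitation"],
    "immigration": ["immigrant", "refugee", "asylum", "visa"],
}

def topic_mentions(transcript: str) -> Counter:
    """Count keyword hits per topic in one meeting transcript."""
    words = [w.strip(".,;:!?") for w in transcript.lower().split()]
    return Counter({
        topic: sum(w.startswith(k) for w in words for k in keywords)
        for topic, keywords in TOPICS.items()
    })

meetings = [  # stand-ins for LocalView transcript records
    {"place": "Springfield", "transcript":
        "The flooding near the river is a climate emergency."},
    {"place": "Riverton", "transcript":
        "The clinic will host a vaccine drive next month."},
]

totals = Counter()
for row in meetings:
    totals.update(topic_mentions(row["transcript"]))

print(totals.most_common())  # topics ranked by total mentions
```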

All Eyes on Them: A Field Experiment on Citizen Oversight and Electoral Integrity


Paper by Natalia Garbiras-Díaz and Mateo Montenegro: “Can information and communication technologies help citizens monitor their elections? We analyze a large-scale field experiment designed to answer this question in Colombia. We leveraged Facebook advertisements sent to over 4 million potential voters to encourage citizen reporting of electoral irregularities. We also cross-randomized whether candidates were informed about the campaign in a subset of municipalities. Total reports, and evidence-backed ones, experienced a large increase. Across a wide array of measures, electoral irregularities decreased. Finally, the reporting campaign reduced the vote share of candidates dependent on irregularities. This light-touch intervention is more cost-effective than monitoring efforts traditionally used by policymakers…(More)”.
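
As a rough sketch of the cross-randomized design described above, the following hypothetical Python snippet illustrates the assignment step: municipalities are randomized into a citizen-reporting ad campaign, and candidate notification is randomized within the treated subset. The arm labels and proportions are our assumptions, not the paper's exact protocol.

```python
import random
from collections import Counter

# Hypothetical sketch of a cross-randomized assignment like the one
# described above. Arm labels and proportions are illustrative
# assumptions, not the paper's exact protocol.

random.seed(42)
municipalities = [f"muni_{i:04d}" for i in range(1000)]

assignment = {}
for muni in municipalities:
    ads = random.random() < 0.5               # reporting-campaign ads shown?
    informed = ads and random.random() < 0.5  # cross-randomized: candidates told?
    if not ads:
        assignment[muni] = "control"
    elif informed:
        assignment[muni] = "ads + candidates informed"
    else:
        assignment[muni] = "ads only"

print(Counter(assignment.values()))  # arm sizes for balance checks
```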