When it comes to AI and democracy, we cannot be careful enough


Article by Marietje Schaake: “Next year is being labelled the ‘Year of Democracy’: a series of key elections is scheduled to take place, including in places with significant power and populations, such as the US, EU, India, Indonesia and Mexico. In many of these jurisdictions, democracy is under threat or in decline. It is certain that our volatile world will look different after 2024. The question is how — and why.

Artificial intelligence is one of the wild cards that may well play a decisive role in the upcoming elections. The technology already features in varied ways in the electoral process — yet many of these products have barely been tested before their release into society.

Generative AI, which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams. A controversial video showing a crumbling world should Joe Biden be re-elected was not created by a foreign intelligence service seeking to manipulate US elections, but by the Republican National Committee. 

Foreign intelligence services are also using generative AI to boost their influence operations. My colleague at Stanford, Alex Stamos, warns: “What once took a team of 20 to 40 people working out of [Russia or Iran] to produce 100,000 pieces can now be done by one person using open-source gen AI”.

AI also makes it easier to target messages so they reach specific audiences. This individualised targeting will make it harder to investigate whether internet users and voters are being fed disinformation.

While much of generative AI’s impact on elections is still being studied, what is known does not reassure. We know people find it hard to distinguish between synthetic media and authentic voices, making it easy to deceive them. We also know that AI repeats and entrenches bias against minorities. And we know that AI companies chasing profits are not also in the business of promoting democratic values.

Many members of the teams hired by social media companies, particularly since 2016, to deal with foreign manipulation and disinformation have been laid off. YouTube has explicitly said it will no longer remove “content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections”. It is, of course, highly likely that lies about past elections will play a role in 2024 campaigns.

Similarly, after Elon Musk took over X, formerly known as Twitter, he gutted trust and safety teams. Right when defence barriers are needed the most, they are being taken down…(More)”.