How AI could take over elections – and undermine democracy


Article by Archon Fung and Lawrence Lessig: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a trial-and-error machine-learning approach in which the computer takes actions and gets feedback about which work better, in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
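Clogger is hypothetical, but the trial-and-error loop described above can be sketched as a simple epsilon-greedy bandit: the program repeatedly chooses among a few message variants, records feedback on which ones "work," and shifts its effort toward the best performer. Everything below — the variant count, success rates, and parameters, and the bandit formulation itself — is an illustrative assumption, not a description of any real campaign system.

```python
import random

def epsilon_greedy_bandit(true_rates, steps=20000, epsilon=0.1, seed=0):
    """Trial-and-error learning over message variants: mostly exploit the
    best-known variant, occasionally explore the others at random."""
    rng = random.Random(seed)
    n = len(true_rates)
    counts = [0] * n      # how often each variant has been tried
    values = [0.0] * n    # running estimate of each variant's success rate
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda i: values[i])     # exploit
        reward = 1 if rng.random() < true_rates[arm] else 0  # simulated feedback
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Three hypothetical message variants with hidden persuasion rates.
values, counts = epsilon_greedy_bandit([0.02, 0.05, 0.11])
best = max(range(len(values)), key=lambda i: values[i])
print(best, counts)  # the learner concentrates its trials on the best variant
```

The same feedback-driven loop, run against millions of recipients with freshly generated messages rather than three fixed variants, is what would make the scenario above qualitatively different from static ad placement.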

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media…(More)”.

How Differential Privacy Will Affect Estimates of Air Pollution Exposure and Disparities in the United States


Article by Madalsa Singh: “Census data is crucial for understanding energy and environmental justice outcomes, such as the poor air quality that disproportionately affects people of color in the U.S. With the advent of sophisticated personal datasets and analysis, the Census Bureau is considering adding top-down noise (differential privacy) to the 2020 census data and post-processing it to reduce the risk of identifying individual respondents. Using the 2010 demonstration census and pollution data, I find that, compared to the original census, the differentially private (DP) census significantly changes estimates of ambient pollution exposure in areas with sparse populations. White Americans have the lowest variability, followed by Latino, Asian, and Black Americans. DP underestimates pollution disparities for SO2 and PM2.5 while overestimating them for PM10…(More)”.

Shallowfakes


Essay by James R. Ostrowski: “…This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.

The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.

Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of this time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality…(More)”.

Governing the Unknown


Article by Kaushik Basu: “Technology is changing the world faster than policymakers can devise new ways to cope with it. As a result, societies are becoming polarized, inequality is rising, and authoritarian regimes and corporations are doctoring reality and undermining democracy.

For ordinary people, there is ample reason to be “a little bit scared,” as OpenAI CEO Sam Altman recently put it. Major advances in artificial intelligence raise concerns about education, work, warfare, and other risks that could destabilize civilization long before climate change does. To his credit, Altman is urging lawmakers to regulate his industry.

In confronting this challenge, we must keep two concerns in mind. The first is the need for speed. If we take too long, we may find ourselves closing the barn door after the horse has bolted. That is what happened with the 1968 Nuclear Non-Proliferation Treaty: It came 23 years too late. If we had managed to establish some minimal rules after World War II, the NPT’s ultimate goal of nuclear disarmament might have been achievable.

The other concern involves deep uncertainty. This is such a new world that even those working on AI do not know where their inventions will ultimately take us. A law enacted with the best intentions can still backfire. When America’s founders drafted the Second Amendment conferring the “right to keep and bear arms,” they could not have known how firearms technology would change in the future, thereby changing the very meaning of the word “arms.” Nor did they foresee how their descendants would fail to realize this even after seeing the change.

But uncertainty does not justify fatalism. Policymakers can still effectively govern the unknown as long as they keep certain broad considerations in mind. For example, one idea that came up during a recent Senate hearing was to create a licensing system whereby only select corporations would be permitted to work on AI.

This approach comes with some obvious risks of its own. Licensing can often be a step toward cronyism, so we would also need new laws to deter politicians from abusing the system. Moreover, slowing your country’s AI development with additional checks does not mean that others will adopt similar measures. In the worst case, you may find yourself facing adversaries wielding precisely the kind of malevolent tools that you eschewed. That is why AI is best regulated multilaterally, even if that is a tall order in today’s world…(More)”.

Filling Africa’s Data Gap


Article by Jendayi Frazer and Peter Blair Henry: “Every few years, the U.S. government launches a new initiative to boost economic growth in Africa. In bold letters and with bolder promises, the White House announces that public-private partnerships hold the key to growth on the continent. It pledges to make these partnerships a cornerstone of its Africa policy, but time and again it fails to deliver.

A decade after U.S. President Barack Obama rolled out Power Africa—his attempt to solve Africa’s energy crisis by mobilizing private capital—half of the continent’s sub-Saharan population remains without access to electricity. In 2018, the Trump administration proclaimed that its Prosper Africa initiative would counter China’s debt-trap diplomacy and “expand African access to business finance.” Five years on, Chad, Ethiopia, Ghana, and Zambia are in financial distress and pleading for debt relief from Beijing and other creditors. Yet the Biden administration is once more touting the potential of public-private investment in Africa, organizing high-profile visits and holding leadership summits to prove that this time, the United States is “all in” on the continent.

There is a reason these efforts have yielded so little: goodwill tours, clever slogans, and a portfolio of G-7 pet projects in Africa do not amount to a sound investment pitch. Potential investors, public and private, need to know which projects in which countries are economically and financially worthwhile. Above all, that requires current and comprehensive data on the expected returns that investment in infrastructure in the developing world can yield. At present, investors lack this information, so they pass. If the United States wants to “build back better” in Africa—to expand access to business finance and encourage countries on the continent to choose sustainable and high-quality foreign investment over predatory lending from China and Russia—it needs to give investors access to better data…(More)”.

How Would You Defend the Planet From Asteroids? 


Article by Mahmud Farooque, Jason L. Kessler: “On September 26, 2022, NASA successfully smashed a spacecraft into a tiny asteroid named Dimorphos, altering its orbit. Although it was 6.8 million miles from Earth, the Double Asteroid Redirection Test (DART) was broadcast in real time, turning the impact into a rare pan-planetary moment accessible from smartphones around the world. 

For most people, the DART mission was the first glimmer—outside of the movies—that NASA was seriously exploring how to protect Earth from asteroids. Rightly famous for its technological prowess, NASA is less recognized for its social innovations. But nearly a decade before DART, the agency had launched the Asteroid Grand Challenge. In a pioneering approach to public engagement, the challenge brought citizens together to weigh in on how the taxpayer-funded agency might approach some technical decisions involving asteroids. 

The following account of how citizens came to engage with strategies for planetary defense—and the unexpected conclusions they reached—is based on the experiences of NASA employees, members of the Expert and Citizen Assessment of Science and Technology (ECAST) network, and forum participants…(More)”.

Technological Obsolescence


Essay by Jonathan Coopersmith: “In addition to killing over a million Americans, Covid-19 revealed embarrassing failures of local, state, and national public health systems to accurately and effectively collect, transmit, and process information. To some critics and reporters, the visible and easily understood face of those failures was the continued use of fax machines.

In reality, the critics were attacking the symptom, not the problem. Instead of “Why were people still using fax machines?”, the better question was “What factors made fax machines more attractive than more capable technologies?” Those answers provide a better window into the complex, evolving world of technological obsolescence, a key component of our modern world—and, on a smaller scale, a template for deciding whether the NAE and other organizations should retain their fax machines.

The marketing dictionary of Monash University Business School defines technological obsolescence as “when a technical product or service is no longer needed or wanted even though it could still be in working order.” Significantly, the source is a business school, which implies strong economic and social factors in decision making about technology.  

Determining technological obsolescence depends not just on creators and promoters of new technologies but also on users, providers, funders, accountants, managers, standards setters—and, most importantly, competing needs and options. In short, it’s complicated.  

Like most aspects of technology, perspectives on obsolescence depend on your position. If existing technology meets your needs, upgrading may not seem worth the resources needed (e.g., for purchase and training). If, on the other hand, your firm or organization depends on income from providing, installing, servicing, training, advising, or otherwise benefiting from a new technology, not upgrading could jeopardize your future, especially in a very competitive market. And if you cannot find the resources to upgrade, you—and your users—may incur both visible and invisible costs…(More)”.

Citizens’ juries can help fix democracy


Article by Martin Wolf: “…our democratic processes do not work very well. Adding referendums to elections does not solve the problem. But adding citizens’ assemblies might.

In his farewell address, George Washington warned against the spirit of faction. He argued that the “alternate domination of one faction over another . . . is itself a frightful despotism. But . . . The disorders and miseries which result gradually incline the minds of men to seek security and repose in the absolute power of an individual”. If one looks at the US today, that peril is evident. In current electoral politics, manipulation of the emotions of a rationally ill-informed electorate is the path to power. The outcome is likely to be rule by those with the greatest talent for demagogy.

Elections are necessary. But unbridled majoritarianism is a disaster. A successful liberal democracy requires constraining institutions: independent oversight over elections, an independent judiciary and an independent bureaucracy. But are they enough? No. In my book, The Crisis of Democratic Capitalism, I follow the Australian economist Nicholas Gruen in arguing for the addition of citizens’ assemblies or citizens’ juries. These would insert an important element of ancient Greek democracy into the parliamentary tradition.

There are two arguments for introducing sortition (lottery) into the political process. First, these assemblies would be more representative than professional politicians can ever be. Second, it would temper the impact of political campaigning, nowadays made more distorting by the arts of advertising and the algorithms of social media…(More)”.

The promise and pitfalls of the metaverse for science


Paper by Diego Gómez-Zará, Peter Schiffer & Dashun Wang: “The future of the metaverse remains uncertain and continues to evolve, as was the case for many technological advances of the past. Now is the time for scientists, policymakers and research institutions to start considering actions to capture the potential of the metaverse and take concrete steps to avoid its pitfalls. Proactive investments in the form of competitive grants, internal agency efforts and infrastructure building should be considered, supporting innovation and adaptation to the future in which the metaverse may be more pervasive in society.

Government agencies and other research funders could also have a critical role in funding and promoting interoperability and shared protocols among different metaverse technologies and environments. These aspects will help the scientific research community to ensure broad adoption and reproducibility. For example, government research agencies may create an open and publicly accessible metaverse platform with open-source code and standard protocols that can be translated to commercial platforms as needed. In the USA, an agency such as the National Institute of Standards and Technology could set standards for protocols that are suitable for the research enterprise or, alternatively, an international convention could set global standards. Similarly, an agency such as the National Institutes of Health could leverage its extensive portfolio of behavioural research and build and maintain a metaverse for human subjects studies. Within such an ecosystem, researchers could develop and implement their own research protocols with appropriate protections, standardized and reproducible conditions, and secure data management. A publicly sponsored research-focused metaverse — which could be cross-compatible with commercial platforms — may create and capture substantial value for science, from augmenting scientific productivity to protecting research integrity.

There are important precedents for this sort of action in that governments and universities have built open repositories for data in fields such as astronomy and crystallography, and both the US National Science Foundation and the US Department of Energy have built and maintained high-performance computing environments that are available to the broader research community. Such efforts could be replicated and adapted for emerging metaverse technologies, which would be especially beneficial for under-resourced institutions to access and leverage common resources. Critically, the encouragement of private sector innovation and the development of public–private alliances must be balanced with the need for interoperability, openness and accessibility to the broader research community…(More)”.

WHO Launches Global Infectious Disease Surveillance Network


Article by Shania Kennedy: “The World Health Organization (WHO) launched the International Pathogen Surveillance Network (IPSN), a public health network to prevent and detect infectious disease threats before they become epidemics or pandemics.

IPSN will rely on insights generated from pathogen genomics, which helps analyze the genetic material of viruses, bacteria, and other disease-causing micro-organisms to determine how they spread and how infectious or deadly they may be.

Using these data, researchers can identify and track diseases to improve outbreak prevention, response, and treatments.
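As a toy illustration of the comparison step in genomic surveillance, the snippet below counts substitutions between two short aligned sequence fragments using a Hamming distance; small distances suggest closely related strains. The fragments are made up, and real pipelines align whole genomes and build phylogenetic trees rather than comparing fixed-length strings.

```python
def hamming(a, b):
    """Count positions at which two equal-length, pre-aligned sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(x != y for x, y in zip(a, b))

# Hypothetical aligned fragments from a reference strain and a field sample.
reference = "ATGGTTGTTT"
sample    = "ATGGTCGTAT"
print(hamming(reference, sample))  # 2 substitutions: the sample is a close relative
```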

“The goal of this new network is ambitious, but it can also play a vital role in health security: to give every country access to pathogen genomic sequencing and analytics as part of its public health system,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD, in the press release.  “As was so clearly demonstrated to us during the COVID-19 pandemic, the world is stronger when it stands together to fight shared health threats.”

Genomics capacity worldwide was scaled up during the pandemic, but the press release indicates that many countries still lack effective tools and systems for public health data collection and analysis. This lack of resources and funding could slow the development of a strong global health surveillance infrastructure, which IPSN aims to help address.

The network will bring together experts in genomics and data analytics to optimize routine disease surveillance, including for COVID-19. According to the press release, pathogen genomics-based analyses of the SARS-CoV-2 virus helped speed the development of effective vaccines and the identification of more transmissible virus variants…(More)”.