How AI could take over elections – and undermine democracy


Article by Archon Fung and Lawrence Lessig: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages — texts, social media posts and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a trial-and-error machine-learning approach in which the computer takes actions and gets feedback about which ones work better, learning over time how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
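To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of reinforcement-learning loop such a system could run: an epsilon-greedy bandit that keeps sending message variants, observes which ones draw favorable responses, and gradually shifts toward what works. The variant names and simulated response rates below are invented for this sketch; a real system would be vastly more complex.

```python
import random

# Illustrative epsilon-greedy bandit: choose among message variants,
# reward = 1 when a (simulated) recipient responds favorably.
# Variant names and response rates are invented for this sketch.
VARIANTS = ["economy", "healthcare", "security"]
TRUE_RESPONSE_RATE = {"economy": 0.05, "healthcare": 0.12, "security": 0.08}

EPSILON = 0.1                         # fraction of sends used to explore
counts = {v: 0 for v in VARIANTS}     # times each variant was sent
values = {v: 0.0 for v in VARIANTS}   # running mean reward per variant

def choose_variant() -> str:
    if random.random() < EPSILON:
        return random.choice(VARIANTS)                # explore at random
    return max(VARIANTS, key=lambda v: values[v])     # exploit best so far

def update(variant: str, reward: float) -> None:
    counts[variant] += 1
    # incremental mean: learn from feedback one trial at a time
    values[variant] += (reward - values[variant]) / counts[variant]

for _ in range(10_000):
    v = choose_variant()
    reward = 1.0 if random.random() < TRUE_RESPONSE_RATE[v] else 0.0
    update(v, reward)

# After enough trials the estimates converge toward the true rates,
# and the best-performing variant is sent most often.
print({v: round(values[v], 3) for v in VARIANTS})
```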

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media…(More)”.

Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better


Book by Jennifer Pahlka: “Just when we most need our government to work—to decarbonize our infrastructure and economy, to help the vulnerable through a pandemic, to defend ourselves against global threats—it is faltering. Government at all levels has limped into the digital age, offering online services that can feel even more cumbersome than the paperwork that preceded them and widening the gap between the policy outcomes we intend and what we get.

But it’s not more money or more tech we need. Government is hamstrung by a rigid, industrial-era culture, in which elites dictate policy from on high, disconnected from and too often disdainful of the details of implementation. Lofty goals morph unrecognizably as they cascade through a complex hierarchy. But there is an approach taking hold that keeps pace with today’s world and reclaims government for the people it is supposed to serve. Jennifer Pahlka shows why we must stop trying to move the government we have today onto new technology and instead consider what it would mean to truly recode American government…(More)”.

How Differential Privacy Will Affect Estimates of Air Pollution Exposure and Disparities in the United States


Article by Madalsa Singh: “Census data is crucial to understanding energy and environmental justice outcomes, such as poor air quality, which disproportionately impact people of color in the U.S. With the advent of sophisticated personal datasets and analysis, the Census Bureau is considering adding top-down noise (differential privacy) to 2020 census data and post-processing it to reduce the risk of identification of individual respondents. Using 2010 demonstration census and pollution data, I find that, compared to the original census, the differentially private (DP) census significantly changes estimates of ambient pollution exposure in areas with sparse populations. White Americans have the lowest variability, followed by Latino, Asian, and Black Americans. DP underestimates pollution disparities for SO2 and PM2.5 while overestimating the disparities for PM10…(More)”.
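For readers unfamiliar with the mechanism, here is a minimal sketch of the core idea behind differentially private counts: adding random noise calibrated to a privacy parameter epsilon. The actual 2020 Census TopDown Algorithm is far more elaborate (it uses discrete noise and hierarchical post-processing), so this Laplace-mechanism example is only an illustration of why sparsely populated areas suffer larger relative error.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count perturbed with Laplace noise.

    A count query changes by at most 1 when one person is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy for that single query.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# The same absolute noise is a much larger *relative* error for a
# sparsely populated block than for a dense tract.
for true, eps in [(12, 0.5), (12_000, 0.5)]:
    noisy = laplace_count(true, eps)
    print(f"true={true:>6}  noisy={noisy:>10.1f}  "
          f"relative error={abs(noisy - true) / true:.1%}")
```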

How civic capacity gets urban social innovations started


Article by Christof Brandtner: “After President Trump withdrew from the Paris Climate Accords, several hundred mayors signed national and global treaties announcing their commitments to “step up and do more,” as a senior official of the City of New York told me in a poorly lit room in 2017. Cities were rushing to the forefront of adopting practices and policies to address contemporary social and environmental problems, such as climate change.

What the general enthusiasm masked is significant variation in the extent and speed at which cities adopt these innovations…My study of the geographic dispersion of green buildings certified with the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) rating system, published in the American Journal of Sociology, suggests that the organizational communities within cities play a significant role in adopting urban innovations. Cities with a robust civic capacity, where values-oriented organizations actively address social problems, are more likely to adopt new practices quickly and extensively. Civic capacity matters not only through structural channels, as a sign of ample resources and community social capital, but also through organizational channels. Values-oriented organizations are often early adopters of new practices, such as green construction, solar panels, electric vehicles, or equitable hiring practices. By creating proofs of concept, these early adopters can serve as catalysts of municipal policies and widespread adoption…(More)”.

How Would You Defend the Planet From Asteroids? 


Article by Mahmud Farooque and Jason L. Kessler: “On September 26, 2022, NASA successfully smashed a spacecraft into a tiny asteroid named Dimorphos, altering its orbit. Although it was 6.8 million miles from Earth, the Double Asteroid Redirection Test (DART) was broadcast in real time, turning the impact into a rare pan-planetary moment accessible from smartphones around the world.

For most people, the DART mission was the first glimmer—outside of the movies—that NASA was seriously exploring how to protect Earth from asteroids. Rightly famous for its technological prowess, NASA is less recognized for its social innovations. But nearly a decade before DART, the agency had launched the Asteroid Grand Challenge. In a pioneering approach to public engagement, the challenge brought citizens together to weigh in on how the taxpayer-funded agency might approach some technical decisions involving asteroids. 

The following account of how citizens came to engage with strategies for planetary defense—and the unexpected conclusions they reached—is based on the experiences of NASA employees, members of the Expert and Citizen Assessment of Science and Technology (ECAST) network, and forum participants…(More)”.

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step with the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with actual needs for various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses….(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—by learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.
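The annual bias audits at the heart of the law center on comparing selection rates across demographic groups. As a rough illustration, here is a minimal sketch of the kind of impact-ratio calculation the city’s rules describe, using invented toy data rather than the rules’ full specification of categories and reporting.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Selection rate per group, and its ratio to the top group's rate.

    `outcomes` is a list of (group, selected) pairs. Toy input only;
    the city's rules define the categories and reporting in detail.
    """
    totals, hires = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        hires[group] += int(selected)
    rates = {g: hires[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Invented data: group A selected 40 of 100 times, group B 25 of 100.
toy = [("A", True)] * 40 + [("A", False)] * 60 \
    + [("B", True)] * 25 + [("B", False)] * 75
for group, (rate, ratio) in sorted(impact_ratios(toy).items()):
    print(f"group {group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}")
```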

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide deployment of Google Workspace, so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.