How AI could take over elections – and undermine democracy


Article by Archon Fung and Lawrence Lessig: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
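The trial-and-error loop described above can be illustrated with a minimal epsilon-greedy bandit, one of the simplest reinforcement-learning setups: an agent repeatedly chooses among actions, observes a reward, and updates its value estimates from that feedback alone. This is purely an illustrative sketch; the three "actions" and their success probabilities are invented, and a system like the hypothetical Clogger would be vastly more complex.

```python
import random

def epsilon_greedy_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning: estimate each action's value from feedback.

    With probability epsilon the agent explores a random action; otherwise
    it exploits the action with the best estimated value so far.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n    # times each action was tried
    values = [0.0] * n  # running average reward per action
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)  # explore
        else:
            a = max(range(n), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return values

# Three hypothetical actions with different (hidden) success rates;
# the agent learns which works best purely from observed rewards.
estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

After enough trials, the estimated values approach the true success rates, and the agent concentrates on the highest-reward action, which is the core dynamic behind "messages that become increasingly likely" to achieve the objective.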

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media…(More)”.

Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better


Book by Jennifer Pahlka: “Just when we most need our government to work—to decarbonize our infrastructure and economy, to help the vulnerable through a pandemic, to defend ourselves against global threats—it is faltering. Government at all levels has limped into the digital age, offering online services that can feel even more cumbersome than the paperwork that preceded them and widening the gap between the policy outcomes we intend and what we get.

But it’s not more money or more tech we need. Government is hamstrung by a rigid, industrial-era culture, in which elites dictate policy from on high, disconnected from and too often disdainful of the details of implementation. Lofty goals morph unrecognizably as they cascade through a complex hierarchy. But there is an approach taking hold that keeps pace with today’s world and reclaims government for the people it is supposed to serve. Jennifer Pahlka shows why we must stop trying to move the government we have today onto new technology and instead consider what it would mean to truly recode American government…(More)”.

How Differential Privacy Will Affect Estimates of Air Pollution Exposure and Disparities in the United States


Article by Madalsa Singh: “Census data is crucial to understanding energy and environmental justice outcomes, such as poor air quality, which disproportionately impact people of color in the U.S. With the advent of sophisticated personal datasets and analysis, the Census Bureau is considering adding top-down noise (differential privacy) and post-processing to 2020 census data to reduce the risk of identification of individual respondents. Using 2010 demonstration census and pollution data, I find that compared to the original census, the differentially private (DP) census significantly changes ambient pollution exposure in areas with sparse populations. White Americans have the lowest variability, followed by Latinos, Asian, and Black Americans. DP underestimates pollution disparities for SO2 and PM2.5 while overestimating the pollution disparities for PM10…(More)”.
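The "top-down noise" described here works by adding calibrated random noise to published statistics. A minimal sketch of the classic Laplace mechanism gives the intuition (the tract count and epsilon value are invented for illustration; the Census Bureau's actual TopDown algorithm is far more elaborate, injecting noise hierarchically and post-processing results for consistency):

```python
import math
import random

def laplace_mechanism(true_count, epsilon, sensitivity=1.0, seed=None):
    """Release a count with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and noisier output. For a
    simple count query, adding or removing one person changes the
    result by at most 1, so sensitivity = 1.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse transform on a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_count + noise

# A sparsely populated tract: when the true count is small, the noise
# is large relative to it, which is why exposure estimates shift most
# in sparse areas.
noisy = laplace_mechanism(true_count=12, epsilon=0.5, seed=42)
```

The noise averages out over many releases, so large aggregates stay accurate, but any single small count, like the population of a sparse rural tract, can move substantially, consistent with the article's finding.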

How Would You Defend the Planet From Asteroids? 


Article by Mahmud Farooque and Jason L. Kessler: “On September 26, 2022, NASA successfully smashed a spacecraft into a tiny asteroid named Dimorphos, altering its orbit. Although it was 6.8 million miles from Earth, the Double Asteroid Redirection Test (DART) was broadcast in real time, turning the impact into a rare pan-planetary moment accessible from smartphones around the world. 

For most people, the DART mission was the first glimmer—outside of the movies—that NASA was seriously exploring how to protect Earth from asteroids. Rightly famous for its technological prowess, NASA is less recognized for its social innovations. But nearly a decade before DART, the agency had launched the Asteroid Grand Challenge. In a pioneering approach to public engagement, the challenge brought citizens together to weigh in on how the taxpayer-funded agency might approach some technical decisions involving asteroids. 

The following account of how citizens came to engage with strategies for planetary defense—and the unexpected conclusions they reached—is based on the experiences of NASA employees, members of the Expert and Citizen Assessment of Science and Technology (ECAST) network, and forum participants…(More)”.

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step in the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with the actual needs of various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses….(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—by learning patterns from pre-existing data.
It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

Actualizing Digital Self Determination: From Theory to Practice


Blog by Stefaan G. Verhulst: “The world is undergoing a rapid process of datafication, providing immense potential for addressing various challenges in society and the environment through responsible data reuse. However, datafication also results in imbalances, asymmetries, and silos that hinder the full realization of this potential and pose significant public policy challenges. In a recent paper, I suggest a key way to address these asymmetries: operationalizing digital self-determination. The paper, published open access in the journal Data and Policy (Cambridge University Press), is built around four key themes:…

Operationalizing DSD requires translating theoretical concepts into practical implementation. The paper proposes a four-pronged framework covering processes; people and organizations; policies; and products and technologies:

  • Processes, including citizen engagement programs, public deliberations, and participatory impact assessments, can inform responsible data use.
  • People and organizations, including data stewards and intermediaries, play a vital role in fostering a culture of data agency and responsible data reuse.
  • Effective governance and policies, such as charters, social licenses, and codes of conduct, are key for implementing DSD.
  • Finally, technological tools and products need to focus on trusted data spaces, data portability, privacy-enhancing technologies, transparency, consent management, algorithmic accountability, and ethical AI….(More)” See also: International Network on Digital Self Determination.
Four ways to actualize digital self determination, Stefaan G. Verhulst

Design of services or designing for service? The application of design methodology in public service settings


Article by Kirsty Strokosch and Stephen P. Osborne: “The design of public services has traditionally been conducted by managers who aim to improve efficiency. In recent years though, human-centred design has been used increasingly to improve the experience of public service users, citizens and public service staff (Trischler and Scott, 2016). Design also encourages collaboration and creativity to understand problems and develop solutions (Wetter-Edman et al., 2014). This can include user research to understand current experiences and/or testing prototypes through quick repeated cycles of re-design.

To date, there has been little primary research on the application of design approaches in public service settings (Hermus, et al., 2020). Our article just published in Policy & Politics, “Design of services or designing for service? The application of design methodology in public service settings”, seeks to fill that gap.

It considers two cases in the United Kingdom: Social Security services in Scotland and Local Authority services in England. The research explores the application of design, asking three important questions: what is being designed, how is service design being practised and what are its implications?…

The research also offers three important implications for practice:

  1. Service design should be applied pragmatically. A one-size-fits-all design approach is not appropriate for public services. We need to think about the type of service, who is using it and its aims.
  2. Services should be understood in their entirety with a holistic view of both the front-end components and the back-end operational processes. However, the complex social and institutional factors that shape service experience also need to be considered.
  3. Design needs flexibility to enable creativity. Part of this involves reducing bureaucratic work practices and a commitment from senior managers to make available the time, resources and space for creativity, testing and iteration. There needs to be space to learn and improve…(More)“.

Digital inclusion in peace processes – no silver bullet, but a major opportunity


Article by Peace Research Institute Oslo: “Digital inclusion is paving the way for women and other marginalized groups to participate in peace processes. Through digital platforms, those who are unable to participate in physical meetings, such as women with children, youth or people with disabilities, can get their voices heard. However, digital technologies provide no silver bullet, and mitigating their risks requires careful context analysis and process design.  

Women remain underrepresented in peace processes, and even in cases where they are included, they may have difficulty attending in-person meetings. Going beyond physical inclusion, digital inclusion offers a way to include a wider variety of people, views and interests in a peace process…

The most frequent aim of digital inclusion in peace processes is related to increased legitimacy and political support, as digital tools allow for wider participation, and a larger number and variety of voices to be heard. This, in turn, can increase the ownership of the process. Meetings, consultations and processes using easy and widely available technological platforms such as Zoom, Facebook and WhatsApp make participation easier for those who have often been excluded….

Digital technologies offer various functions for peacemaking and increased inclusion. Their utility can be seen in gathering, analysing and disseminating relevant data. For strategic communications, digital technologies offer tools to amplify and diversify messages. Additionally, they offer platforms for connecting actors and enabling collaboration between them…(More)”.