Using AI to Inform Policymaking


Paper for the AI4Democracy series at The Center for the Governance of Change at IE University: “Good policymaking requires a multifaceted approach, incorporating diverse tools and processes to address the varied needs and expectations of constituents. The paper by Turan and McKenzie focuses on an LLM-based tool, “Talk to the City” (TttC), developed to facilitate collective decision-making by soliciting, analyzing, and organizing public opinion. This tool has been tested in three distinct applications:

1. Finding Shared Principles within Constituencies: Through large-scale citizen consultations, TttC helps identify common values and priorities.

2. Compiling Shared Experiences in Community Organizing: The tool aggregates and synthesizes the experiences of community members, providing a cohesive overview.

3. Action-Oriented Decision Making in Decentralized Governance: TttC supports decision-making processes in decentralized governance structures by providing actionable insights from diverse inputs.

CAPABILITIES AND BENEFITS OF LLM TOOLS

LLMs, when applied to democratic decision-making, offer significant advantages:

  • Processing Large Volumes of Qualitative Inputs: LLMs can handle extensive qualitative data, summarizing discussions and identifying overarching themes with high accuracy.
  • Producing Aggregate Descriptions in Natural Language: The ability to generate clear, comprehensible summaries from complex data makes these tools invaluable for communicating nuanced topics.
  • Facilitating Understanding of Constituents’ Needs: By organizing public input, LLM tools help leaders gain a better understanding of their constituents’ needs and priorities.

CASE STUDIES AND TOOL EFFICACY

The paper presents case studies using TttC, demonstrating its effectiveness in improving collective deliberation and decision-making. Key functionalities include:

  • Aggregating Responses and Clustering Ideas: TttC identifies common themes and divergences within a population’s opinions (see the sketch after this list).
  • Interactive Interface for Exploration: The tool provides an interactive platform for exploring the diversity of opinions at both individual and group scales, revealing complexity, common ground, and polarization…(More)”
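
The aggregation workflow described above can be pictured with a short sketch: embed each free-text response, cluster the embeddings into themes, and have an LLM write a natural-language label for each theme. This is a minimal illustration under assumed components (a sentence-transformer embedding model, k-means clustering, and a hypothetical `call_llm` helper), not the actual TttC implementation.

```python
# Illustrative sketch only, not the TttC codebase: embed free-text responses,
# cluster them into themes, and ask an LLM to label each theme.
from sentence_transformers import SentenceTransformer  # assumed embedding model
from sklearn.cluster import KMeans


def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to whichever LLM API is in use."""
    raise NotImplementedError


def cluster_and_summarize(responses: list[str], n_themes: int = 8) -> dict[str, list[str]]:
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(responses)               # one vector per response
    labels = KMeans(n_clusters=n_themes, random_state=0).fit_predict(embeddings)

    themes: dict[str, list[str]] = {}
    for cluster_id in range(n_themes):
        members = [r for r, label in zip(responses, labels) if label == cluster_id]
        title = call_llm(
            "Summarize, in one sentence, the shared concern in these citizen responses:\n- "
            + "\n- ".join(members[:20])
        )
        themes[title] = members
    return themes
```

An interactive front end of the kind the paper describes would then let readers move between these theme-level summaries and the individual responses behind them.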

Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI


Paper by Wanheng Hu and Ranjit Singh: “In response to rapid advances in artificial intelligence, lawmakers, regulators, academics, and technologists alike are sifting through technical jargon and marketing hype as they take on the challenge of safeguarding citizens from the technology’s potential harms while maximizing their access to its benefits. A common feature of these efforts is including citizens throughout the stages of AI development and governance. Yet doing so is impossible without a clear vision of what citizens ideally should do. This primer takes up this imperative and asks: What approaches can ensure that citizens have meaningful involvement in the development of AI, and how do these approaches envision the role of a “good citizen”?

The primer highlights three major approaches to involving citizens in AI — AI literacy, AI governance, and participatory AI — each of them premised on the importance of enrolling citizens but envisioning different roles for citizens to play. While recognizing that it is largely impossible to come up with a universal standard for building AI in the public interest, and that all approaches will remain local and situated, this primer invites a critical reflection on the underlying assumptions about technology, democracy, and citizenship that ground how we think about the ethics and role of public(s) in large-scale sociotechnical change…(More)”.

Why policy failure is a prerequisite for innovation in the public sector


Blog by Philipp Trein and Thenia Vagionaki: “In our article, “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma,” a theory from the management literature, we argue that the very nature of policymaking, characterized by voter myopia, blame avoidance by decision-makers, and the complexity (ill-structuredness) of societal challenges, gives it an inherent tendency to react with innovation only after existing policies have failed.

Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma,” cognitive limitations tend to lead decision-makers to misperceive problems and assess risks inaccurately. This implies that true innovations (non-trivial policy changes) are unlikely to happen before an existing policy has failed visibly. However, our perspective is not meant to paint a gloomy picture of public policymaking but rather to offer a more realistic interpretation of what public sector innovation can achieve. As a consequence, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.

The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.


The Character of Consent


Book by Meg Leta Jones about The History of Cookies and the Future of Technology Policy: “Consent pop-ups continually ask us to download cookies to our computers, but is this all-too-familiar form of privacy protection effective? No, Meg Leta Jones explains in The Character of Consent: rather than promote functionality, privacy, and decentralization, cookie technology has instead made the internet invasive, limited, and clunky. Good thing, then, that the cookie is set for retirement in 2024. In this eye-opening book, Jones tells the little-known story of this broken consent arrangement, tracing it back to the major transnational conflicts around digital consent over the last twenty-five years. What she finds is that the policy controversy is not, in fact, an information crisis—it’s an identity crisis.

Instead of asking how people consent, Jones asks who exactly is consenting and to what. Packed into those cookie pop-ups, she explains, are three distinct areas of law with three different characters who can consent. Within (mainly European) data protection law, the data subject consents. Within communication privacy law, the user consents. And within consumer protection law, the privacy consumer consents. These areas of law have very different histories, motivations, institutional structures, expertise, and strategies, so consent—and the characters who can consent—plays a unique role in those areas of law….(More)”.

Can Artificial Intelligence Bring Deliberation to the Masses?


Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale….(More)”

Governing with Artificial Intelligence


OECD Report: “OECD countries are increasingly investing in better understanding the potential value of using Artificial Intelligence (AI) to improve public governance. The use of AI by the public sector can increase productivity, improve the responsiveness of public services, and strengthen the accountability of governments. However, governments must also mitigate potential risks, building an enabling environment for trustworthy AI. This policy paper outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. First, it discusses the potential benefits and specific risks associated with AI use in the public sector. Second, it looks at how AI in the public sector can be used to improve productivity, responsiveness, and accountability. Third, it provides an overview of the key policy issues and presents examples of how countries are addressing them across the OECD…(More)”.

Handbook of Public Participation in Impact Assessment


Book edited by Tanya Burdett and A. John Sinclair: “… provides a clear overview of how to achieve meaningful public participation in impact assessment (IA). It explores conceptual elements, including the democratic core of public participation in IA, as well as practical challenges, such as data sharing, with diverse perspectives from 39 leading academics and practitioners.

Critically examining how different engagement frameworks have evolved over time, this Handbook underlines the ways in which tokenistic approaches and wider planning and approvals structures challenge the implementation of meaningful public participation. Contributing authors discuss the impact of international agreements, legislation and regulatory regimes, and review commonly used professional association frameworks such as the International Association for Public Participation core values for practice. They demonstrate through case studies what meaningful public participation looks like in diverse regional contexts, addressing the intentions of being purposeful, inclusive, transformative and proactive. By emphasising the strength of community engagement, the Handbook argues that public participation in IA can contribute to enhanced democracy and sustainability for all…(More)”.

The Deliberative Turn in Democratic Theory


Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. The acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address, at both theoretical and practical levels, the legitimacy crisis currently experienced by liberal democracies. The book advances a critical analysis of these developments that casts doubt on those certainties: current theoretical debates are reproposing old methodological divisions and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers; democratic experimentation at the micro-level seems to have no impact at the macro-level and remains a set of isolated experiences. The book indicates that those defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules and adopting instead a vision of democracy as a self-correcting metagame…(More)”.

Using Artificial Intelligence to Accelerate Collective Intelligence


Paper by Róbert Bjarnason, Dane Gambrell and Joshua Lanthier-Welch: “In an era characterized by rapid societal changes and complex challenges, institutions’ traditional methods of problem-solving in the public sector are increasingly proving inadequate. In this study, we present an innovative and effective model for how institutions can use artificial intelligence to enable groups of people to generate effective solutions to urgent problems more efficiently. We describe a proven collective intelligence method, called Smarter Crowdsourcing, which is designed to channel the collective intelligence of those with expertise about a problem into actionable solutions through crowdsourcing. Then we introduce Policy Synth, an innovative toolkit that leverages AI to make the Smarter Crowdsourcing problem-solving approach more scalable, more effective, and more efficient. Policy Synth is crafted using a human-centric approach, recognizing that AI is a tool to enhance human intelligence and creativity, not replace it. Based on a real-world case study comparing the results of expert crowdsourcing alone with expert sourcing supported by Policy Synth AI agents, we conclude that Smarter Crowdsourcing with Policy Synth presents an effective model for integrating the collective wisdom of human experts and the computational power of AI to enhance and scale up public problem-solving processes.

The potential for artificial intelligence to enhance the performance of groups of people has been a topic of great interest among scholars of collective intelligence. Though many AI toolkits exist, they too often are not fitted to the needs of institutions and policymakers. While many existing approaches view AI as a tool to make crowdsourcing and deliberative processes better and more efficient, Policy Synth goes a step further, recognizing that AI can also be used to synthesize the findings from engagements together with research to develop evidence-based solutions and policies. This study contributes significantly to the fields of collective intelligence, public problem-solving, and AI. The study offers practical tools and insights for institutions looking to engage communities effectively in addressing urgent societal challenges…(More)”
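
To make that synthesis step concrete, the sketch below shows how an LLM prompt might combine crowdsourced expert proposals with research excerpts into a draft, evidence-linked recommendation. It is an illustrative assumption rather than the Policy Synth implementation, and the `call_llm` helper and prompt wording are hypothetical.

```python
# Illustrative sketch only, not the Policy Synth toolkit: combine crowdsourced
# proposals and research excerpts into one evidence-based draft recommendation.


def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to whichever LLM API is in use."""
    raise NotImplementedError


def synthesize_recommendation(problem: str,
                              expert_proposals: list[str],
                              research_excerpts: list[str]) -> str:
    prompt = (
        f"Problem: {problem}\n\n"
        "Proposals gathered through expert crowdsourcing:\n"
        + "\n".join(f"- {p}" for p in expert_proposals)
        + "\n\nRelevant research excerpts:\n"
        + "\n".join(f"- {r}" for r in research_excerpts)
        + "\n\nDraft one evidence-based policy recommendation that builds on the "
          "proposals above and notes which excerpts support it."
    )
    return call_llm(prompt)
```

Keeping the human-generated proposals and the supporting evidence explicit in the prompt mirrors the paper's human-centric framing: the AI drafts and connects, while people supply the ideas and make the decisions.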

The revolution shall not be automated: On the political possibilities of activism through data & AI


Article by Isadora Cruxên: “Every other day now, there are headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting AI’s transformative potential or warning against it. Some headlines promise that AI will fundamentally change how we work and learn or help us tackle critical challenges such as biodiversity conservation and climate change. Others question its intelligence, point to its embedded biases, and draw attention to its extractive labour record and high environmental costs.

Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us — or perhaps blowing past us at speed — while we are enticed to take the backseat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.

If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”

Sure, this might seem easier said than done. Most AI research and development is being driven by big tech corporations and start-ups. As Lauren Klein and Catherine D’Ignazio discuss in “Data Feminism for AI” (see “Further reading” at the end for all works cited), the results are models, tools, and platforms that are opaque to users, and that cater to the tech ambitions and profit motives of private actors, with broader societal needs and concerns becoming afterthoughts. There is excellent critical work that explores the extractive practices and unequal power relations that underpin AI production, including its relationship to processes of datafication, colonial data epistemologies, and surveillance capitalism (to link but a few). Interrogating, illuminating, and challenging these dynamics is paramount if we are to take the driver’s seat and find alternative paths…(More)”.