Can Artificial Intelligence Bring Deliberation to the Masses?


Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale….(More)”

A Generation of AI Guinea Pigs


Article by Caroline Mimbs Nyce: “This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.

Ed is, according to three first graders I spoke with this week at Alta Loma Elementary School, very good. They especially like it when Ed awards them gold stars for completing exercises. But even as they use the program, they don’t quite understand it. When I asked them if they know what AI is, they demurred. One asked me if it was a supersmart robot…(More)”.

Handbook of Public Participation in Impact Assessment


Book edited by Tanya Burdett and A. John Sinclair: “… provides a clear overview of how to achieve meaningful public participation in impact assessment (IA). It explores conceptual elements, including the democratic core of public participation in IA, as well as practical challenges, such as data sharing, with diverse perspectives from 39 leading academics and practitioners.

Critically examining how different engagement frameworks have evolved over time, this Handbook underlines the ways in which tokenistic approaches and wider planning and approvals structures challenge the implementation of meaningful public participation. Contributing authors discuss the impact of international agreements, legislation and regulatory regimes, and review commonly used professional association frameworks such as the International Association for Public Participation core values for practice. They demonstrate through case studies what meaningful public participation looks like in diverse regional contexts, addressing the intentions of being purposeful, inclusive, transformative and proactive. By emphasising the strength of community engagement, the Handbook argues that public participation in IA can contribute to enhanced democracy and sustainability for all…(More)”.

Misuse versus Missed use — the Urgent Need for Chief Data Stewards in the Age of AI


Article by Stefaan Verhulst and Richard Benjamins: “In the rapidly evolving landscape of artificial intelligence (AI), the need for and importance of Chief AI Officers (CAIOs) are receiving increasing attention. One prominent example came in a recent memo on AI policy issued by Shalanda Young, Director of the United States Office of Management and Budget. Among the most important, and most prominently featured, recommendations was a call, “as required by Executive Order 14110,” for all government agencies to appoint a CAIO within 60 days of the memo's release.

In many ways, this call is an important development; not even the EU AI Act requires this of public agencies. CAIOs have an important role to play in the search for a responsible use of AI in public services, one that includes guardrails and helps protect the public good. Yet while acknowledging the need for CAIOs to safeguard the responsible use of AI, we argue that the duty of administrations is not only to avoid negative impact but also to create positive impact. In this sense, much work remains to be done in defining the CAIO role and considering its specific functions. In pursuit of these tasks, we further argue, policymakers and other stakeholders might benefit from looking at the role of another emerging profession in the digital ecosystem: that of the Chief Data Steward (CDS), a role focused on creating such positive impact, for instance by helping to achieve the UN's Sustainable Development Goals (SDGs). Although the CDS position is itself somewhat in flux, we suggest that it can nonetheless provide a useful template for the functions and roles of CAIOs.


We start by explaining why CDS are relevant to the conversation over CAIOs: data and data governance are foundational to AI governance. We then discuss some particular functions and competencies of CDS, showing how these can be equally applied to the governance of AI. Among the most important (if high-level) of these competencies is the ability to proactively identify opportunities for data sharing, and to balance the risks and opportunities of our data age. We conclude by exploring why this competency, an ethos of positive data responsibility that avoids overly cautious risk aversion, is so important in the AI and data era…(More)”

Green Light


Google Research: “Road transportation is responsible for a significant share of global and urban greenhouse gas emissions. The problem is especially acute at city intersections, where pollution can be 29 times higher than on open roads. At intersections, half of these emissions come from traffic accelerating after stopping. While some amount of stop-and-go traffic is unavoidable, part of it is preventable through better traffic light timing configurations. To improve traffic light timing, cities have had to either install costly hardware or run manual vehicle counts; both approaches are expensive and still fail to provide all the necessary information.

Green Light uses AI and Google Maps driving trends, built on one of the strongest understandings of global road networks, to model traffic patterns and generate intelligent recommendations that city traffic engineers can use to optimize traffic flow. Early numbers indicate the potential for up to a 30% reduction in stops and a 10% reduction in greenhouse gas emissions (1). By optimizing each intersection, and coordinating between adjacent intersections, we can create waves of green lights and help cities further reduce stop-and-go traffic. Green Light is now live at 70 intersections in 12 cities on 4 continents, from Haifa, Israel, to Bangalore, India, to Hamburg, Germany; at these intersections it can save fuel and lower emissions for up to 30 million car rides monthly. Green Light reflects Google Research's commitment to using AI to address climate change and improve millions of lives in cities around the world…(More)”
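The “waves of green lights” the excerpt describes rest on a classic signal-coordination idea: start each downstream intersection's green phase later than the previous one by the travel time between them, so a platoon moving at the corridor speed keeps meeting green. A minimal sketch of that offset arithmetic (all distances and speeds are hypothetical illustrations, not values from Green Light itself):

```python
# Green-wave offsets: each signal's green phase starts later than the
# previous one by the travel time between intersections, so vehicles
# moving at the corridor speed arrive at successive greens.

def green_wave_offsets(distances_m, speed_mps):
    """Cumulative green-start offsets (seconds) along a corridor.

    distances_m: distance from each intersection to the next one
    speed_mps: assumed platoon travel speed in metres per second
    """
    offsets = [0.0]  # the first intersection anchors the wave at t=0
    for d in distances_m:
        offsets.append(offsets[-1] + d / speed_mps)
    return offsets

# Three 300 m blocks at 15 m/s (~54 km/h): greens start 20 s apart.
print(green_wave_offsets([300, 300, 300], 15.0))
# → [0.0, 20.0, 40.0, 60.0]
```

Real systems must also reconcile cycle lengths, opposing-direction traffic, and measured (not assumed) speeds, which is where the AI-modelled traffic patterns come in.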

Artificial Intelligence Applications for Social Science Research


Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, an AI tool had to be useful for: 1) literature reviews, summaries, or writing; 2) data collection, analysis, or visualization; or 3) research dissemination. In the database, we provide the name of, a description of, and a link to each AI tool that was current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools are useful for text-based data, such as social media. The database includes 132 AI tools that may be useful for literature reviews or writing; 146 that may be useful for data collection, analysis, or visualization; and 108 that may be used for dissemination efforts. While 170 of the AI tools in the database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”

The Deliberative Turn in Democratic Theory


Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. Its acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address, at both theoretical and practical levels, the legitimacy crisis currently experienced by liberal democracies. The book advances a critical analysis of these developments that casts doubt on those certainties: current theoretical debates are reproposing old methodological divisions and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers, while democratic experimentation at the micro level seems to have no impact at the macro level and remains a set of isolated experiences. The book indicates that these defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules and adopting instead a vision of democracy as a self-correcting metagame…(More)”.

Using Artificial Intelligence to Accelerate Collective Intelligence


Paper by Róbert Bjarnason, Dane Gambrell and Joshua Lanthier-Welch: “In an era characterized by rapid societal changes and complex challenges, institutions’ traditional methods of problem-solving in the public sector are increasingly proving inadequate. In this study, we present an innovative and effective model for how institutions can use artificial intelligence to enable groups of people to generate effective solutions to urgent problems more efficiently. We describe a proven collective intelligence method, called Smarter Crowdsourcing, which is designed to channel the collective intelligence of those with expertise about a problem into actionable solutions through crowdsourcing. Then we introduce Policy Synth, an innovative toolkit that leverages AI to make the Smarter Crowdsourcing problem-solving approach more scalable, more effective, and more efficient. Policy Synth is crafted using a human-centric approach, recognizing that AI is a tool to enhance human intelligence and creativity, not replace it. Based on a real-world case study comparing the results of expert crowdsourcing alone with expert sourcing supported by Policy Synth AI agents, we conclude that Smarter Crowdsourcing with Policy Synth presents an effective model for integrating the collective wisdom of human experts and the computational power of AI to enhance and scale up public problem-solving processes.

The potential for artificial intelligence to enhance the performance of groups of people has been a topic of great interest among scholars of collective intelligence. Though many AI toolkits exist, they are too often not fitted to the needs of institutions and policymakers. While many existing approaches view AI as a tool to make crowdsourcing and deliberative processes better and more efficient, Policy Synth goes a step further, recognizing that AI can also be used to synthesize the findings from engagements together with research to develop evidence-based solutions and policies. This study contributes significantly to the fields of collective intelligence, public problem-solving, and AI, offering practical tools and insights for institutions looking to engage communities effectively in addressing urgent societal challenges…(More)”

The tensions of data sharing for human rights: A modern slavery case study


Paper by Jamie Hancock et al: “There are calls for greater data sharing to address human rights issues. Advocates claim this will provide an evidence-base to increase transparency, improve accountability, enhance decision-making, identify abuses, and offer remedies for rights violations. However, these well-intentioned efforts have been found to sometimes enable harms against the people they seek to protect. This paper shows issues relating to fairness, accountability, or transparency (FAccT) in and around data sharing can produce such ‘ironic’ consequences. It does so using an empirical case study: efforts to tackle modern slavery and human trafficking in the UK. We draw on a qualitative analysis of expert interviews, workshops, ecosystem mapping exercises, and a desk-based review. The findings show how, in the UK, a large ecosystem of data providers, hubs, and users emerged to process and exchange data from across the country. We identify how issues including legal uncertainties, non-transparent sharing procedures, and limited accountability regarding downstream uses of data may undermine efforts to tackle modern slavery and place victims of abuses at risk of further harms. Our findings help explain why data sharing activities can have negative consequences for human rights, even within human rights initiatives. Moreover, our analysis offers a window into how FAccT principles for technology relate to the human rights implications of data sharing. Finally, we discuss why these tensions may be echoed in other areas where data sharing is pursued for human rights concerns, identifying common features which may lead to similar results, especially where sensitive data is shared to achieve social goods or policy objectives…(More)”.

The revolution shall not be automated: On the political possibilities of activism through data & AI


Article by Isadora Cruxên: “Every other day now, there are headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting AI’s transformative potential or warning against it. Some headlines promise that AI will fundamentally change how we work and learn, or help us tackle critical challenges such as biodiversity conservation and climate change. Others question its intelligence, point to its embedded biases, and draw attention to its extractive labour record and high environmental costs.

Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us, or perhaps blowing past us at speed, while we are enticed to take the backseat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.

If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”

Sure, this might seem easier said than done. Most AI research and development is being driven by big tech corporations and start-ups. As Lauren Klein and Catherine D’Ignazio discuss in “Data Feminism for AI” (see “Further reading” at the end for all works cited), the results are models, tools, and platforms that are opaque to users, and that cater to the tech ambitions and profit motives of private actors, with broader societal needs and concerns becoming afterthoughts. There is excellent critical work that explores the extractive practices and unequal power relations that underpin AI production, including its relationship to processes of datafication, colonial data epistemologies, and surveillance capitalism (to link but a few). Interrogating, illuminating, and challenging these dynamics is paramount if we are to take the driver’s seat and find alternative paths…(More)”.