Conversational Swarms of Humans and AI Agents enable Hybrid Collaborative Decision-making


Paper by Louis Rosenberg et al: “Conversational Swarm Intelligence (CSI) is an AI-powered communication and collaboration technology that allows large, networked groups (of potentially unlimited size) to hold thoughtful conversational deliberations in real-time. Inspired by the efficient decision-making dynamics of fish schools, CSI divides a human population into a set of small subgroups connected by AI agents. This enables the full group to hold a unified conversation. In this study, groups of 25 participants were tasked with selecting a roster of players in a real Fantasy Baseball contest. A total of 10 trials were run using CSI. In half the trials, each subgroup was augmented with a fact-providing AI agent referred to herein as an Infobot. The Infobot was loaded with a wide range of MLB statistics. The human participants could query the Infobot the same way they would query other persons in their subgroup. Results show that when using CSI, the 25-person groups outperformed 72% of individually surveyed participants and showed significant intelligence amplification versus the mean score (p=0.016). The CSI-enabled groups also significantly outperformed the most popular picks across the collected surveys for each daily contest (p<0.001). The CSI sessions that used Infobots scored slightly higher than those that did not, but the difference was not statistically significant in this study. That said, 85% of participants agreed with the statement ‘Our decisions were stronger because of information provided by the Infobot’ and only 4% disagreed. In addition, deliberations that used Infobots showed significantly less variance (p=0.039) in conversational content across members. This suggests that Infobots promoted more balanced discussions in which fewer members dominated the dialog. This may be because the Infobot enabled participants to confidently express opinions with the support of factual data…(More)”.
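As a rough illustration of the kind of comparison reported above, the sketch below (Python, entirely hypothetical scores, not the study's data) ranks a group's contest score against individually surveyed participants and tests it against their mean, the style of result reported as "outperformed 72% of individuals" and "amplification versus the mean score".

```python
# Hedged sketch with hypothetical numbers, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
individual_scores = rng.normal(loc=100, scale=15, size=25)  # 25 surveyed individuals (hypothetical)
csi_group_score = 112.0                                     # the CSI group's roster score (hypothetical)

# Share of surveyed individuals the group outperformed
pct_beaten = 100 * np.mean(csi_group_score > individual_scores)

# One-sample t-test of individual scores against the group score;
# a low p-value with a higher group score suggests amplification over the mean individual.
t_stat, p_value = stats.ttest_1samp(individual_scores, popmean=csi_group_score)

print(f"Group beat {pct_beaten:.0f}% of individuals (p = {p_value:.3f} vs. the mean)")
```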

Rediscovering the Pleasures of Pluralism: The Potential of Digitally Mediated Civic Participation


Essay by Lily L. Tsai and Alex Pentland: “Human society developed when most collective decision-making was limited to small, geographically concentrated groups such as tribes or extended family groups. Discussions about community issues could take place among small numbers of people with similar concerns. As coordination across larger distances evolved, the costs of travel required representatives from each clan or smaller group to participate in deliberations and decision-making involving multiple local communities. Divergence in the interests of representatives and their constituents opened up opportunities for corruption and elite capture.

Technologies now enable very large numbers of people to communicate, coordinate, and make collective decisions on the same platform. We have new opportunities for digitally enabled civic participation and direct democracy that scale for both the smallest and largest groups of people. Quantitative experiments, sometimes including tens of millions of individuals, have examined inclusiveness and efficiency in decision-making via digital networks. Their findings suggest that large networks of nonexperts can make practical, productive decisions and engage in collective action under certain conditions. These conditions include shared knowledge among individuals and communities with similar concerns, and information about their recent actions and outcomes…(More)”

Quality Assessment of Volunteered Geographic Information


Paper by Donia Nciri et al: “Traditionally, government and national mapping agencies have been the primary providers of authoritative geospatial information. Today, with the exponential proliferation of Information and Communication Technologies or ICTs (such as GPS, mobile mapping and geo-localized web applications, social media), any user is able to produce geospatial information. This participatory production of geographical data gives birth to the concept of Volunteered Geographic Information (VGI). This phenomenon has greatly contributed to the production of huge amounts of heterogeneous data (structured data, textual documents, images, videos, etc.). It has emerged as a potential source of geographic information in many application areas. Despite the various advantages associated with it, this information often lacks quality assurance, since it is provided by diverse user profiles. To address this issue, numerous research studies have been proposed to assess VGI quality in order to help extract relevant content. This work attempts to provide an overall review of VGI quality assessment methods over the last decade. It also investigates varied quality assessment attributes adopted in recent works. Moreover, it presents a classification that forms a basis for future research. Finally, it discusses in detail the relevance and the main limitations of existing approaches and outlines some guidelines for future developments…(More)”.
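By way of a concrete, hypothetical example of one quality attribute such reviews typically cover, the sketch below estimates positional accuracy by comparing volunteered point coordinates against matched authoritative reference points; the point pairing, coordinates and 10 m threshold are assumptions for illustration, not a method from the paper.

```python
# Hedged illustration: positional accuracy of volunteered points vs. a reference
# dataset, one classic VGI quality attribute. All values are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (volunteered point, matched authoritative point) pairs -- hypothetical
pairs = [((48.8584, 2.2945), (48.8583, 2.2944)),
         ((48.8606, 2.3376), (48.8610, 2.3380))]

errors = [haversine_m(*vol, *ref) for vol, ref in pairs]
mean_error = sum(errors) / len(errors)
within_10m = sum(e <= 10 for e in errors) / len(errors)
print(f"mean positional error: {mean_error:.1f} m; share within 10 m: {within_10m:.0%}")
```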

Design Thinking as a Strategic Approach to E-Participation


Book by Ilaria Mariani et al: “This open access book examines how the adoption of Design Thinking (DT) can support public organisations in overcoming some of the current barriers in e-participation. Scholars have discussed the adoption of technology to strengthen public engagement through e-participation, streamline and enhance the relationship between government and society, and improve accessibility and effectiveness. However, barriers persist, necessitating further research in this area. By analysing e-participation barriers emerging from the literature and aligning them with notions in the DT literature, this book identifies five core DT practices to enhance e-participation: (i) Meaning creation and sense-making, (ii) Publics formation, (iii) Co-production, (iv) Experimentation and prototyping, and (v) Changing organisational culture. As a result, this book provides insights into enhancing tech-aided public engagement and promoting inclusivity for translating citizen input into tangible service implementations. The book triangulates qualitative analysis of relevant literature in the fields of e-participation and DT with knowledge from European projects experimenting with public participation activities that involve digital tools. This research aims to bridge the gap between theoretical frameworks and practical application, ultimately contributing to more effective e-participation and digital public services…(More)”.

When combinations of humans and AI are useful: A systematic review and meta-analysis


Paper by Michelle Vaccaro, Abdullah Almaatouq & Thomas Malone: “Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems…(More)”.
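For readers less familiar with the effect-size metric reported above, here is a minimal sketch of how Hedges' g is conventionally computed from two groups' summary statistics; the input numbers are illustrative and are not values from the meta-analysis.

```python
# Hedged sketch: standard Hedges' g from summary statistics (illustrative numbers).
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    cohen_d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * df - 1)          # small-sample bias correction
    return cohen_d * correction

# Hypothetical example: human-AI combination vs. the better of human or AI alone
g = hedges_g(m1=0.71, sd1=0.12, n1=40,         # combination performance
             m2=0.74, sd2=0.11, n2=40)         # best single-agent performance
print(f"Hedges' g = {g:.2f}")                  # negative => combination performs worse
```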

Make it make sense: the challenge of data analysis in global deliberation


Blog by Iñaki Goñi: “From climate change to emerging technologies to economic justice to space, global and transnational deliberation is on the rise. Global deliberative processes aim to bring citizen-centred governance to issues that no single nation can resolve alone. Running deliberative processes at this scale poses a unique set of challenges. How to select participants, make the forums accountable, impactful, fairly designed, and aware of power imbalances, are all crucial and open questions….

Massifying participation will be key to invigorating global deliberation. Assemblies will have a better chance of being seen as legitimate, fair, and publicly supported if they involve thousands or even millions of diverse participants. This raises an operational challenge: how to systematise political ideas from many people across the globe.

In a centralised global assembly, anything from 50 to 500 citizens from various countries engage in a single deliberation and produce recommendations or political actions by crossing languages and cultures. In a distributed assembly, multiple gatherings are convened locally that share a common but flexible methodology, allowing participants to discuss a common issue applied both to local and global contexts. Either way, a global deliberation process demands the organisation and synthesis of possibly thousands of ideas from diverse languages and cultures around the world.

How could we ever make sense of all that data to systematise citizens’ ideas and recommendations? Most people turn to computational methods to help reduce complexity and identify patterns. First up, one technique for analysing text amounts to little more than simple counting, through which we can produce something like a frequency table or a wordcloud…(More)”.
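As a minimal sketch of the "simple counting" idea described above, the snippet below tokenises a few free-text contributions and prints a word-frequency table; the sample responses and stop-word list are placeholders.

```python
# Hedged sketch of the simple counting the post describes: a word-frequency
# table over free-text contributions (sample texts and stop-words are placeholders).
import re
from collections import Counter

responses = [
    "We need binding commitments on climate finance",
    "Climate adaptation funding should reach local communities",
    "Local communities need a say in how finance is spent",
]

stopwords = {"we", "need", "on", "should", "a", "in", "how", "is", "the"}

tokens = [
    word
    for text in responses
    for word in re.findall(r"[a-z']+", text.lower())
    if word not in stopwords
]

freq = Counter(tokens)
for word, count in freq.most_common(5):
    print(f"{word:<12} {count}")
```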

AI can help humans find common ground in democratic deliberation


Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
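A highly simplified sketch of an AI-mediated deliberation loop of the kind the abstract describes: draft candidate group statements from individual opinions, rank them with a reward model trained to predict group approval, gather critiques and revise. The function names and interfaces below are assumptions for illustration, not the authors' implementation.

```python
# Hedged, highly simplified sketch of an iterative AI-mediation loop.
# `generate_statements`, `reward_model`, and `collect_critiques` are
# placeholder interfaces, not the authors' code.

def mediate(opinions, generate_statements, reward_model, collect_critiques, rounds=2):
    """Iteratively draft group statements and revise them using participant critiques."""
    critiques = []
    best_statement = None
    for _ in range(rounds):
        # Draft several candidate group statements from opinions (plus prior critiques).
        candidates = generate_statements(opinions, critiques)
        # Rank candidates by a reward model trained to predict group approval.
        best_statement = max(candidates, key=lambda s: reward_model(s, opinions))
        # Participants critique the current best statement; feed critiques back in.
        critiques = collect_critiques(best_statement)
    return best_statement

# Trivial stubs so the sketch runs end-to-end (purely illustrative):
if __name__ == "__main__":
    opinions = ["Lower the voting age to 16", "Keep the voting age at 18"]
    gen = lambda ops, crits: ["The group is divided on the voting age but agrees "
                              "any change should follow civic-education reform."]
    reward = lambda s, ops: len(s)      # stand-in for a learned approval model
    critique = lambda s: ["Mention turnout evidence"]
    print(mediate(opinions, gen, reward, critique))
```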

Exploring New Frontiers of Citizen Participation in the Policy Cycle


OECD Discussion Paper: “… starts from the premise that democracies are endowed with valuable assets and that putting citizens at the heart of policy making offers an opportunity to strengthen democratic resilience. It draws on data, evidence and insights generated through a wide range of work underway at the OECD to identify systemic challenges and propose lines of action for the future. It calls for greater attention to, and investments in, citizen participation in policy making as one of the core functions of the state and the ‘life force’ of democratic governance. In keeping with the OECD’s strong commitment to providing a platform for diverse perspectives on challenging policy issues, it also offers a collection of thought-provoking opinion pieces by leading practitioners whose position as elected officials, academics and civil society leaders provides them with a unique vantage point from which to scan the horizon. As a contribution to an evolving field, this Discussion Paper offers neither a prescriptive framework nor a roadmap for governments but represents a step towards reaching a shared understanding of the very real challenges that lie ahead. It is also a timely invitation to all interested actors to join forces and take concerted action to embed meaningful citizen participation in policy making…(More)”.

The illusion of information adequacy


Paper by Hunter Gehlbach, Carly D. Robinson, Angus Fletcher: “How individuals navigate perspectives and attitudes that diverge from their own affects an array of interpersonal outcomes from the health of marriages to the unfolding of international conflicts. The finesse with which people negotiate these differing perceptions depends critically upon their tacit assumptions—e.g., in the bias of naïve realism, people assume that their subjective construal of a situation represents objective truth. The present study adds an important assumption to this list of biases: the illusion of information adequacy. Specifically, because individuals rarely pause to consider what information they may be missing, they assume that the cross-section of relevant information to which they are privy is sufficient to adequately understand the situation. Participants in our preregistered study (N = 1261) responded to a hypothetical scenario in which control participants received full information and treatment participants received approximately half of that same information. We found that treatment participants assumed that they possessed comparably adequate information and presumed that they were just as competent to make thoughtful decisions based on that information. Participants’ decisions were heavily influenced by which cross-section of information they received. Finally, participants believed that most other people would make a similar decision to the one they made. We discuss the implications in the context of naïve realism and other biases that implicate how people navigate differences of perspective…(More)”.

Ensuring citizens’ assemblies land


Article by Graham Smith: “…the evidence shows that while the recommendations of assemblies are well considered and could help shape more robust policy, too often they fail to land. Why is this?

The simple answer is that so much time, resources and energy are spent on organising the assembly itself – ensuring the best possible experience for citizens – that the relationship with the local authority and its decision-making processes is neglected.

First, the question asked of the assembly does not always relate to a specific set of decisions about to be made by an authority. Is the relevant policy process open and ready for input? On a number of occasions assemblies have taken place just after a new policy or strategy has been agreed. Disastrous timing.

This does not mean assemblies should only be run when they are tied to a particular decision-making process. Sometimes it is important to open up a policy area with a broad question. And sometimes it makes sense to empower citizens to set the agenda and focus on the issues they find most compelling.

The second element is the failure of authorities to prepare to receive recommendations from citizens.

In one case, the first a public official knew about an assembly was when its recommendations landed on their desk. They were not received in the best spirit.

Too often assemblies are commissioned by enthusiastic politicians and public officials who have not done the necessary work to ensure their colleagues are willing to give a considered response to the citizens’ recommendations. Too often an assembly will be organised by a department or ministry where the results require others in the authority to respond – but those other politicians and officials feel no connection to the process.

And too often, an assembly ends, and it is not clear who within the public authority has the responsibility to take the recommendations forward to ensure they are given a fair hearing across the authority.

Making citizens’ assemblies effective requires political and administrative work well beyond just organising the assembly. If this is not done, it is not only a waste of resources, but it can do serious damage to democracy and trust, as those citizens who have invested their time and energy in the process become disillusioned.

Those authorities where citizens’ assemblies have had meaningful impacts are those that have invested not only in the assembly, but also in preparing the authority to receive the recommendations. Often this has meant continuing support and resourcing for assembly members after the process. They are the best advocates for their work…(More)”