Citizen Jury on New Genomic Techniques


Paper by Kai P. Purnhagen and Alexandra Molitorisova: “From 26 to 28 January 2024, a citizen jury was convened at Schloss Thurnau in Upper Franconia, Germany, to deliberate on new genomic techniques (NGTs) used in agriculture and food/feed production, ahead of the vote of the European Parliament and the Council of the European Union on the European Commission’s proposal for a regulation on plants obtained by certain NGTs and their food and feed. This report serves as a policy brief containing all observations, assessments, and recommendations agreed by the jury with a minimum of 75 percent of the jurors’ votes. It aims to provide policymakers, stakeholders, and the public with perspectives and considerations surrounding the use of NGTs in agriculture and food/feed production, as articulated by the members of the jury. The jury produced 18 final recommendations. Through thoughtful analysis and dialogue, the jury sought to contribute to informed decision-making processes…(More)”.

Citizen silence: Missed opportunities in citizen science


Paper by Damon M Hall et al: “Citizen science is personal. Participation is contingent on the citizens’ connection to a topic or to interpersonal relationships meaningful to them. But from the peer-reviewed literature, scientists appear to have an acquisitive data-centered relationship with citizens. This has spurred ethical and pragmatic criticisms of extractive relationships with citizen scientists. We suggest five practical steps to shift citizen-science research from extractive to relational, reorienting the research process and providing reciprocal benefits to researchers and citizen scientists. By virtue of their interests and experience within their local environments, citizen scientists have expertise that, if engaged, can improve research methods and product design decisions. To boost the value of scientific outputs to society and participants, citizen-science research teams should rethink how they engage and value volunteers…(More)”.

Untapped


About: “Twenty-first-century collective intelligence, which combines people’s knowledge and skills, new forms of data and, increasingly, technology, has the untapped potential to transform the way we understand and act on climate change.

Collective intelligence for climate action in the Global South takes many forms: from the crowdsourcing of indigenous knowledge to preserve biodiversity, to participatory monitoring of extreme heat, to farmer experiments adapting crops to weather variability.

This research analyzes 100+ climate case studies across 45 countries that tap into people’s participation and use new forms of data. This research illustrates the potential that exists in communities everywhere to contribute to climate adaptation and mitigation efforts. It also aims to shine a light on practical ways in which these initiatives could be designed and further developed so this potential can be fully unleashed…(More)”.

The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents


Paper by Yun-Shiuan Chuang et al: “Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias, a phenomenon known as the “wisdom of partisan crowds.” Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that they not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show the potential and limitations of LLM-based agents as a model of human collective intelligence…(More)”.
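The convergence effect the paper tests can be made concrete with a toy calculation. The sketch below is purely illustrative (the numbers and the error metric are invented, not the authors' benchmark): a "crowd" is accurate to the extent that the mean of its members' estimates is close to the truth, and deliberation helps when post-deliberation estimates pull that mean toward the true value.

```python
from statistics import mean

def crowd_error(estimates, truth):
    """Absolute error of the crowd's mean belief (lower is better)."""
    return abs(mean(estimates) - truth)

# Hypothetical numeric belief estimates from a small group of partisan
# agents on a factual question whose true answer is 1.0.
truth = 1.0
before = [5.0, 8.0, 2.0, 10.0, 4.0]  # pre-deliberation estimates
after = [3.0, 4.0, 2.0, 5.0, 3.0]    # revised after exchanging views

print(crowd_error(before, truth))  # error of the initial crowd mean
print(crowd_error(after, truth))   # smaller error after deliberation
```

The "wisdom of partisan crowds" result is precisely that the second number is smaller than the first even when both groups start out biased in opposite directions.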

AI-enhanced Collective Intelligence: The State of the Art and Prospects


Paper by Hao Cui and Taha Yasseri: “The current societal challenges exceed the capacity of human individual or collective effort alone. As AI evolves, its role within human collectives is poised to vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, when synergized, can achieve a level of collective intelligence that surpasses the collective capabilities of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising a cognition layer, a physical layer, and an information layer. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. The interplay among these agents shapes the overall structure and dynamics of the system. We explore how agents’ diversity and interactions influence the system’s collective intelligence. Furthermore, we present an analysis of real-world instances of AI-enhanced collective intelligence. We conclude by addressing the potential challenges in AI-enhanced collective intelligence and offer perspectives on future developments in this field…(More)”.

Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation


Paper by Julia Romberg and Tobias Escher: “Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but, at the same time, consumes substantial human resources. However, until now the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that there remain important challenges before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English-language corpora and making algorithmic models available to practitioners through software. We discuss a number of avenues that future research should pursue that can ultimately lead to solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modeling…(More)”.

Once upon a bureaucrat: Exploring the role of stories in government


Article by Thea Snow: “When you think of a profession associated with stories, what comes to mind? Journalist, perhaps? Or author? Maybe, at a stretch, you might think about a filmmaker. But I would hazard a guess that “public servant” is unlikely to be one of the first professions that comes to mind. However, recent research suggests that we should be thinking more deeply about the connections between stories and government.

Since 2021, the Centre for Public Impact, in partnership with Dusseldorp Forum and Hands Up Mallee, has been exploring the role of storytelling in the context of place-based systems change work. Our first report, Storytelling for Systems Change: Insights from the Field, focused on the way communities use stories to support place-based change. Our second report, Storytelling for Systems Change: Listening to Understand, focused more on how stories are perceived and used by those in government who are funding and supporting community-led systems change initiatives.

To shape these reports, we have spent the past few years speaking to community members, collective impact backbone teams, storytelling experts, academics, public servants, data analysts, and more. Here’s some of what we’ve heard…(More)”.

Wisdom of the Silicon Crowd: LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy


Paper by Philipp Schoenegger, Indre Tuminauskaite, Peter S. Park, and Philip E. Tetlock: “Human forecasting accuracy in practice relies on the ‘wisdom of the crowd’ effect, in which predictions about future events are significantly improved by aggregating across a crowd of individual forecasters. Past work on the forecasting ability of large language models (LLMs) suggests that frontier LLMs, as individual forecasters, underperform compared to the gold standard of a human crowd forecasting tournament aggregate. In Study 1, we expand this research by using an LLM ensemble approach consisting of a crowd of twelve LLMs. We compare the aggregated LLM predictions on 31 binary questions to those of a crowd of 925 human forecasters from a three-month forecasting tournament. Our preregistered main analysis shows that the LLM crowd outperforms a simple no-information benchmark and is not statistically different from the human crowd. In exploratory analyses, we find that these two approaches are equivalent with respect to medium-effect-size equivalence bounds. We also observe an acquiescence effect, with mean model predictions being significantly above 50%, despite an almost even split of positive and negative resolutions. Moreover, in Study 2, we test whether LLM predictions (of GPT-4 and Claude 2) can be improved by drawing on human cognitive output. We find that both models’ forecasting accuracy benefits from exposure to the median human prediction as information, improving accuracy by between 17% and 28%, though this leads to less accurate predictions than simply averaging human and machine forecasts. Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments via the simple, practically applicable method of forecast aggregation. This replicates the ‘wisdom of the crowd’ effect for LLMs, and opens up their use for a variety of applications throughout society…(More)”.
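The two aggregation moves the abstract mentions, pooling an LLM ensemble and then averaging the machine aggregate with a human crowd forecast, can be sketched in a few lines. The probabilities below are invented for illustration, and the choice of the median as the ensemble aggregate and the Brier score as the accuracy metric are common forecasting conventions, not necessarily the authors' exact pipeline:

```python
from statistics import mean, median

def brier(prob, outcome):
    """Brier score for one binary question (lower is better)."""
    return (prob - outcome) ** 2

# Hypothetical probabilities from a twelve-model LLM "crowd" on one
# binary question, plus a hypothetical human-crowd median forecast.
llm_probs = [0.62, 0.70, 0.55, 0.66, 0.58, 0.73,
             0.61, 0.64, 0.69, 0.57, 0.60, 0.65]
human_median = 0.80
outcome = 1  # the question resolved "yes"

llm_aggregate = median(llm_probs)             # aggregate the LLM ensemble
hybrid = mean([llm_aggregate, human_median])  # average human and machine

print(brier(llm_aggregate, outcome))  # ensemble-only accuracy
print(brier(hybrid, outcome))         # human-machine average accuracy
```

With these illustrative numbers the hybrid forecast scores better than the ensemble alone, mirroring the paper's finding that simple averaging of human and machine forecasts is hard to beat.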

Citizen Engagement in Evidence-informed Policy-making: A Guide to Mini-publics


Report by WHO: “This guide focuses on a specific form of citizen engagement, namely mini-publics, and their potential to be adapted to a variety of contexts. Mini-publics are forums that include a cross-section of the population selected through civic lottery to participate in evidence-informed deliberation to inform policy and action. The term refers to a diverse set of democratic innovations to engage citizens in policy-making. This guide provides an overview of how to organize mini-publics in the health sector. It is a practical companion to the 2022 Overview report, Implementing citizen engagement within evidence-informed policy-making. Both documents examine and encourage contributions that citizens can make to advance WHO’s mission to achieve universal health coverage…(More)”.

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.
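The pattern-finding step described above can be illustrated with a deliberately crude sketch. This is not i.AI's actual implementation (which is not described in detail here); it simply shows the kind of theme surfacing a dashboard might start from, using invented example responses and a bare keyword-frequency count in place of real NLP:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "to", "too",
             "of", "and", "in", "it", "this"}

def top_themes(responses, n=3):
    """Count non-stopword terms across free-text responses as a crude
    proxy for the recurring themes a dashboard might surface."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

responses = [  # hypothetical consultation responses
    "Cycling lanes are too narrow and unsafe.",
    "Please widen the cycling lanes near schools.",
    "Parking charges are too high in the town centre.",
]
print(top_themes(responses))  # "cycling" and "lanes" surface as recurring terms
```

A production system would replace the keyword count with topic modelling or embedding-based clustering, but the division of labour is the same: the machine surfaces candidate themes, and human analysts interpret them.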

[Screenshot: donut chart of agree/disagree responses and bar chart of the most prevalent themes]

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report, and it is not unheard of to receive double that number of responses.

If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m…(More)”.