Untapped


About: “Twenty-first-century collective intelligence – combining people’s knowledge and skills, new forms of data and, increasingly, technology – has the untapped potential to transform the way we understand and act on climate change.

Collective intelligence for climate action in the Global South takes many forms: from crowdsourcing indigenous knowledge to preserve biodiversity, to participatory monitoring of extreme heat, to farmer experiments that adapt crops to weather variability.

This research analyzes 100+ climate case studies across 45 countries that tap into people’s participation and use new forms of data. It illustrates the potential that exists in communities everywhere to contribute to climate adaptation and mitigation efforts, and it aims to shine a light on practical ways in which these initiatives could be designed and further developed so that this potential can be fully unleashed…(More)”.

The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents


Paper by Yun-Shiuan Chuang et al: “Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias – a phenomenon known as the “wisdom of partisan crowds.” Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that they not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show the potential and limitations of LLM-based agents as a model of human collective intelligence…(More)”
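A toy numerical simulation can make the convergence dynamic concrete. The sketch below is an illustrative stand-in, not the paper’s LLM setup: two “partisan camps” start with oppositely biased estimates of a true value and, each deliberation round, move partway toward the group’s stated mean, so individual error falls even though each camp started biased. All names and numbers are invented.

```python
import statistics

# Toy "wisdom of partisan crowds" dynamic (illustrative only, not the
# paper's LLM experiments): agents hold biased numeric estimates and
# repeatedly update toward the group mean during deliberation.
TRUTH = 50.0

def deliberate(estimates, rounds=5, weight=0.5):
    """Each round, every agent moves `weight` of the way toward the group mean."""
    history = [list(estimates)]
    for _ in range(rounds):
        mean = statistics.fmean(estimates)
        estimates = [e + weight * (mean - e) for e in estimates]
        history.append(list(estimates))
    return history

agents = [35, 38, 40, 42, 58, 61, 63, 66]  # one camp biased low, one high
for r, est in enumerate(deliberate(agents)):
    indiv_err = statistics.fmean(abs(e - TRUTH) for e in est)
    print(f"round {r}: group mean={statistics.fmean(est):.2f}, "
          f"mean individual error={indiv_err:.2f}")
```

Because the opposing biases roughly cancel, the group mean sits near the truth from the start; deliberation pulls each individual toward it, which is the accuracy gain the “wisdom of partisan crowds” refers to.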

AI-enhanced Collective Intelligence: The State of the Art and Prospects


Paper by Hao Cui and Taha Yasseri: “Current societal challenges exceed the capacity of individual or collective human effort alone. As AI evolves, its role within human collectives is poised to vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, when synergized, can achieve a level of collective intelligence that surpasses the collective capabilities of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising a cognition layer, a physical layer, and an information layer. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. The interplay among these agents shapes the overall structure and dynamics of the system. We explore how agents’ diversity and interactions influence the system’s collective intelligence. Furthermore, we present an analysis of real-world instances of AI-enhanced collective intelligence. We conclude by addressing the potential challenges in AI-enhanced collective intelligence and offer perspectives on future developments in this field…(More)”.
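As a rough illustration of that multilayer representation (a sketch under assumptions, not code from the paper), each layer can be encoded as its own graph over a shared set of human and AI agents, carrying the attribute dimensions the review highlights. All node names, attributes and values below are invented.

```python
import networkx as nx

# Illustrative multilayer encoding: one graph per layer, agents shared
# across the cognition, physical, and information layers.
layers = {name: nx.Graph(layer=name) for name in ("cognition", "physical", "information")}

# Humans vary from surface- to deep-level diversity; AI agents vary in
# functionality and anthropomorphism (attribute values are made up).
humans = [("h1", {"deep_diversity": 0.8}), ("h2", {"deep_diversity": 0.3})]
ais = [("a1", {"functionality": "assistant", "anthropomorphism": 0.2})]

for G in layers.values():
    G.add_nodes_from(humans)
    G.add_nodes_from(ais)

# Different kinds of interaction live on different layers.
layers["information"].add_edge("h1", "a1", channel="chat")
layers["cognition"].add_edge("h1", "h2", relation="shared_belief")

print(layers["information"].edges(data=True))
```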

Will governments ever learn? A study of current provision and the key gaps


Paper by Geoff Mulgan: “The paper describes the history of training from ancient China onwards and the main forms it now takes. It suggests 10 areas where change may be needed and goes on to discuss how skills are learned, pointing to the need for more continuous learning and new approaches to capacity.

I hope anyone interested in this field will at least find it stimulating. I couldn’t find an existing overview of this kind and so tried to fill the gap, if only with a personal view. This topic is particularly important for the UK, which allowed its training system to collapse over the last decade. But the issues are relevant everywhere, since the capacity of governments arguably has more impact on human wellbeing than anything else…(More)”.

How to improve economic forecasting


Article by Nicholas Gruen: “Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting.

There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits.

Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability. 

That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is.

By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better. 
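Tetlock’s tournaments scored such forecasts with the Brier score, which can only be computed once a forecast names a concrete event and a probability. A minimal sketch of the arithmetic:

```python
def brier_score(forecast_prob, outcome):
    """Squared error between the stated probability and what happened.
    outcome: 1 if the event occurred, 0 if not. Lower is better."""
    return (forecast_prob - outcome) ** 2

# "A 70 per cent chance of rain" is scoreable either way it turns out:
print(brier_score(0.70, 1))  # it rained: (0.7 - 1)^2 = 0.09
print(brier_score(0.70, 0))  # it stayed dry: (0.7 - 0)^2 = 0.49
```

A point forecast like “growth will be 3.0 per cent” supplies neither the event definition nor the probability this calculation needs, which is exactly why it resists evaluation.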

What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues…(More)”.

Gamifying medical data labeling to advance AI


Article by Zach Winn: “…Duhaime began exploring ways to leverage collective intelligence to improve medical diagnoses. In one experiment, he trained groups of lay people and medical school students whom he describes as “semiexperts” to classify skin conditions, finding that by combining the opinions of the highest performers he could outperform professional dermatologists. He also found that by combining algorithms trained to detect skin cancer with the opinions of experts, he could outperform either method on its own… The DiagnosUs app, which Duhaime developed with Centaur co-founders Zach Rausnitz and Tom Gellatly, is designed to help users test and improve their skills. Duhaime says about half of users are medical school students and the other half are mostly doctors, nurses, and other medical professionals…
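One standard way to “combine the opinions of the highest performers” is an accuracy-weighted vote. The sketch below illustrates that generic idea, not Centaur’s actual aggregation method; the labelers, labels, and skill scores are all invented.

```python
from collections import defaultdict

def weighted_majority(labels, accuracy):
    """Weighted majority vote over crowd labels.
    labels:   {labeler: label}
    accuracy: {labeler: accuracy measured on gold-standard cases}
    """
    tally = defaultdict(float)
    for labeler, label in labels.items():
        tally[label] += accuracy.get(labeler, 0.5)  # unknown labelers count as chance
    return max(tally, key=tally.get)

votes = {"student_1": "melanoma", "student_2": "benign_nevus", "nurse_1": "melanoma"}
skill = {"student_1": 0.92, "student_2": 0.61, "nurse_1": 0.85}
print(weighted_majority(votes, skill))  # melanoma: 1.77 vs. 0.61
```

Weighting by demonstrated skill is what lets a crowd of “semiexperts” match or beat individual specialists: strong performers dominate the tally without any single opinion being decisive.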

The approach stands in sharp contrast to traditional data labeling and AI content moderation, which are typically outsourced to low-resource countries.

Centaur’s approach produces accurate results, too. In a paper with researchers from Brigham and Women’s Hospital, Massachusetts General Hospital (MGH), and Eindhoven University of Technology, Centaur showed its crowdsourced opinions labeled lung ultrasounds as reliably as experts did…

Centaur has found that the best performers come from surprising places. In 2021, to collect expert opinions on EEG patterns, researchers held a contest through the DiagnosUs app at a conference featuring about 50 epileptologists, each with more than 10 years of experience. The organizers made a custom shirt to give to the contest’s winner, who they assumed would be in attendance at the conference.

But when the results came in, a pair of medical students in Ghana, Jeffery Danquah and Andrews Gyabaah, had beaten everyone in attendance. The highest-ranked conference attendee had come in ninth…(More)”

Turning the Cacophony of the Internet’s Tower of Babel into a Coherent General Collective Intelligence


Paper by Andy E. Williams: “Increasing the number, diversity, or uniformity of opinions in a group does not necessarily imply that those opinions will converge into a single more “intelligent” one, if an objective definition of the term intelligent exists as it applies to opinions. However, a recently developed approach called human-centric functional modeling provides what might be the first general model for individual or collective intelligence. In the case of the collective intelligence of groups, this model suggests how a cacophony of incoherent opinions in a large group might be combined into coherent collective reasoning by a hypothetical platform called “general collective intelligence” (GCI). When applied to solving group problems, a GCI might be considered a system that leverages collective reasoning to increase the beneficial insights that might be derived from the information available to any group. This GCI model also suggests how the collective reasoning ability (intelligence) might be exponentially increased compared to the intelligence of any individual in a group, potentially resulting in what is predicted to be a collective superintelligence…(More)”

Collective Intelligence to Co-Create the Cities of the Future: Proposal of an Evaluation Tool for Citizen Initiatives


Paper by Fanny E. Berigüete, Inma Rodriguez Cantalapiedra, Mariana Palumbo and Torsten Masseck: “Citizen initiatives (CIs), through their activities, have become a mechanism to promote empowerment, social inclusion, change of habits, and the transformation of neighbourhoods, influencing their sustainability. But how can this impact be measured? Currently, there are no tools that directly assess this impact, so our research seeks to describe and evaluate the contributions of CIs in a holistic and comprehensive way, respecting the versatility of their activities. The research proposes an evaluation system of 33 indicators distributed across 3 blocks (social cohesion, urban metabolism, and transformation potential), which can be applied through a questionnaire. The study drew on different methods, such as desk study, literature review, and case study analysis. The evaluation of case studies showed that the developed system reflects well the individual contribution of CIs to sensitive and important aspects of neighbourhoods, with lesser or greater impact according to the activities they carry out and the holistic conception they have of sustainability. Further implementation and validation of the system in different contexts is needed, but it is a novel and interesting proposal that will favour decision making for the promotion of one or another type of initiative according to its benefits and the reality and needs of the neighbourhood…(More)”.
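To make the block structure concrete, here is a minimal sketch of how a block-based indicator system of this shape might be scored from questionnaire responses. The indicator names, the 0–5 response scale, and the unweighted averaging are assumptions for illustration, not the authors’ actual instrument.

```python
# Hypothetical block/indicator layout (the real system has 33 indicators
# across these 3 blocks; names here are invented placeholders).
blocks = {
    "social_cohesion": ["participation", "inclusion"],
    "urban_metabolism": ["waste_reduction", "energy_use"],
    "transformation_potential": ["replicability", "habit_change"],
}

def block_scores(responses):
    """Average questionnaire responses (assumed 0-5 scale) per block."""
    return {
        block: sum(responses[ind] for ind in inds) / len(inds)
        for block, inds in blocks.items()
    }

print(block_scores({
    "participation": 4, "inclusion": 3, "waste_reduction": 2,
    "energy_use": 3, "replicability": 5, "habit_change": 4,
}))
```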

The Generic Collective Intelligence Framework: A Qualitative View


Paper by Shweta Suray, Vishwajeet Pattanaik and Dirk Draheim: “Web-based crowd-oriented systems can be seen everywhere today. Some popular examples of such platforms include Reddit, MyGov, Wikipedia and GalaxyZoo. The main aim of such platforms is to harness individuals’ collective intelligence (CI) to offer several benefits to society, from solving complex problems to developing innovative solutions. CI platforms are also developing at an impressive pace, and each platform has unique features to gather, motivate and engage the crowd to achieve the platform’s goal. However, the design and development of CI systems is an expensive and time-consuming task, which makes it difficult for civic and government organizations to develop such platforms. Moreover, many CI platforms fail to sustain themselves over the long term due to a lack of scientific knowledge about the several components required to build them. To fill this research gap, we developed a conceptual generic framework for CI systems that can be used to understand and examine CI systems regardless of their domains. Additionally, our generic CI framework allows stakeholders to combine several components in order to perform requirement elicitation for new platforms, thus enabling them to develop their own platforms in an efficient and effective manner. To evaluate the completeness and genericness of the generic CI model and to identify its limitations, we conducted qualitative interviews with a series of CI experts. This article presents the findings of these expert interviews and suggests future directions for CI research to improve the generic CI model and to explore CI from different perspectives…(More)”

Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation


Paper by Julia Romberg and Tobias Escher: “Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but, at the same time, consumes substantial human resources. However, until now the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that important challenges remain before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English-language corpora, and making algorithmic models available to practitioners through software. We discuss a number of avenues that future research should pursue to ultimately arrive at solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modelling…(More)” See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern.
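As a concrete illustration of the first of those generic tasks, thematic grouping, here is a minimal sketch using off-the-shelf LDA from scikit-learn, one of many methods in the space the review covers. The citizen contributions are invented, and the quality and language-coverage caveats the authors raise apply with full force to toy pipelines like this one.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented examples of citizen contributions to a consultation.
contributions = [
    "We need more protected bike lanes on the main road",
    "Cycling to school is unsafe without separated lanes",
    "The park needs more benches and shade trees",
    "Please plant more trees along the river park",
]

# Bag-of-words features, then a two-topic LDA to group contributions thematically.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(contributions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the top terms per discovered theme.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"theme {k}: {top}")
```

Even this tiny example hints at the evaluation burden the paper addresses: a human evaluator must still inspect, label, and validate the discovered themes, which is why the authors point toward human-in-the-loop approaches such as active learning and interactive topic modelling.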