The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents

Paper by Yun-Shiuan Chuang et al: “Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias – a phenomenon known as the ‘wisdom of partisan crowds.’ Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that they not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show both the potential and the limitations of LLM-based agents as a model of human collective intelligence…(More)”
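The convergence dynamic the abstract describes can be illustrated with a toy numerical sketch (not the paper's actual LLM setup, and all numbers below are hypothetical): agents hold oppositely biased estimates of a true quantity and, round by round, move toward the group mean, which shrinks the average individual error.

```python
import statistics

def deliberate(beliefs, rounds=5, weight=0.5):
    """Toy deliberation: each round, every agent moves a fraction
    `weight` of the way toward the current group mean."""
    for _ in range(rounds):
        mean = statistics.mean(beliefs)
        beliefs = [b + weight * (mean - b) for b in beliefs]
    return beliefs

def mean_abs_error(beliefs, truth):
    """Average distance of individual beliefs from the true value."""
    return statistics.mean(abs(b - truth) for b in beliefs)

# Hypothetical estimation task with true answer 100; the two
# "partisan" groups start with opposite biases.
TRUTH = 100.0
group_a = [80.0, 90.0, 95.0, 110.0]     # biased low
group_b = [105.0, 115.0, 120.0, 130.0]  # biased high

before = mean_abs_error(group_a + group_b, TRUTH)
after = mean_abs_error(deliberate(group_a + group_b), TRUTH)
print(f"mean individual error: {before:.3f} -> {after:.3f}")
```

In this simple averaging model the group mean itself is unchanged by deliberation; what improves is the typical individual's accuracy, which is one face of the "wisdom of crowds" effect. The paper's contribution is to test whether LLM agents role-playing partisan personas reproduce this pattern, which the toy model only caricatures.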