Behavioral Economics of AI: LLM Biases and Corrections

Paper by Pietro Bini, Lin William Cong, Xing Huang & Lawrence J. Jin: "Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date—originally designed to document human biases—on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases…(More)".
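To make the prompting intervention concrete, here is a minimal sketch of how one might probe an LLM with a classic risk-preference task, with and without a rationality instruction. This is an illustration, not the paper's actual protocol: the task wording, model name, and prompts are assumptions, and it uses the OpenAI chat-completions API as a stand-in for the several model families the authors test.

```python
# Minimal sketch: compare an LLM's lottery choices under a neutral system
# prompt vs. a "decide rationally" system prompt. Illustrative only; the
# model name, sample size, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = (
    "You must choose between two options.\n"
    "A: a sure gain of $450.\n"
    "B: a 50% chance to gain $1,000 and a 50% chance to gain $0.\n"
    "Answer with exactly one letter, A or B."
)

def ask(system_prompt: str, n: int = 20) -> dict:
    """Sample the model n times and tally its A/B choices."""
    counts = {"A": 0, "B": 0, "other": 0}
    for _ in range(n):
        reply = client.chat.completions.create(
            model="gpt-4o",   # placeholder; the paper spans multiple LLM families
            temperature=1.0,  # sample responses rather than take one modal answer
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": TASK},
            ],
        )
        choice = reply.choices[0].message.content.strip().upper()[:1]
        counts[choice if choice in counts else "other"] += 1
    return counts

baseline = ask("You are a participant in a decision-making study.")
debiased = ask(
    "You are a rational decision maker. Choose the option that maximizes "
    "expected value, ignoring framing effects and risk attitudes."
)
print("baseline:", baseline)  # human-like, risk-averse answers favor the sure gain A
print("debiased:", debiased)  # an expected-value maximizer should favor B (EV $500)
```

A risk-averse (human-like) responder tends to pick the sure $450, while an expected-value maximizer should pick the lottery; comparing the two tallies gives a rough measure of how much the rationality prompt shifts behavior.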