Paper by Vincent Conitzer et al.: “Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions…(More)”.
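To make the aggregation question concrete: social choice theory offers rules for combining diverging rankings into a collective one. The sketch below (illustrative only, not a method from the paper) applies the Borda count, one classic such rule, to hypothetical annotator rankings of candidate model responses; the output labels and function name are assumptions for the example.

```python
# Illustrative sketch (not from the paper): aggregating diverging annotator
# rankings of model outputs with a Borda count, a classic social-choice rule.
from collections import defaultdict

def borda_aggregate(rankings):
    """Each ranking is a list of output IDs, best first.
    An output ranked i-th among n receives n - 1 - i points."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, output_id in enumerate(ranking):
            scores[output_id] += n - 1 - position
    # Return outputs sorted by total Borda score, best first.
    return sorted(scores, key=scores.get, reverse=True)

# Three annotators disagree over which of three candidate responses is best.
annotator_rankings = [
    ["refuse", "deflect", "comply"],
    ["deflect", "refuse", "comply"],
    ["refuse", "comply", "deflect"],
]
print(borda_aggregate(annotator_rankings))  # ['refuse', 'deflect', 'comply']
```

Even this simple rule surfaces the paper’s core questions: different aggregation rules can crown different winners from the same feedback, which is precisely why a principled social-choice framing matters.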