Paper by Susan Aaronson: “The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but that information may be tainted by errors or by made-up or false information (hallucinations) caused by problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data upon which they are based; therefore, the author recommends incentivizing greater transparency and accountability around data-set development…(More)”.