Chapter by Mireille Hildebrandt: “… investigates the link between the contestability that is key to constitutional democracies on the one hand and the falsifiability of scientific theories on the other hand, with regard to large language models (LLMs). Legally relevant decision-making that is based on the deployment of applications that involve LLMs must be contestable in a court of law. The current flavour of such contestability is focused on transparency, usually framed in terms of the explainability of the model (explainable AI). In the long run, however, the fairness and reliability of these models should be tested in a more scientific manner, based on the falsifiability of the theoretical framework that should underpin the model. This requires that researchers in the domain of LLMs learn to abduct theoretical frameworks, based on the output of LLMs and the real-world patterns it implies, while this abduction should be such that the theory can be inductively tested in a way that allows for falsification. On top of that, researchers need to conduct empirical research to enable such inductive testing. The chapter thus argues that the contestability required under the Rule of Law should move beyond explanations of how the model generates its output, to whether the real-world patterns represented in the model output can falsify the theoretical framework that should inform the model…(More)”.