Updating purpose limitation for AI: a normative approach from law and philosophy

Paper by Rainer Mühlhoff and Hannah Ruschemeier: “The purpose limitation principle goes beyond the protection of the individual data subjects: it aims to ensure transparency, fairness and its exception for privileged purposes. However, in the current reality of powerful AI models, purpose limitation is often impossible to enforce and is thus structurally undermined. This paper addresses a critical regulatory gap in EU digital legislation: the risk of secondary use of trained models and anonymised training datasets. Anonymised training data, as well as AI models trained from this data, pose the threat of being freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. We propose shifting the focus of purpose limitation from data processing to AI model regulation. This approach mandates that those training AI models define the intended purpose and restrict the use of the model solely to this stated purpose…(More)”.