Updating purpose limitation for AI: a normative approach from law and philosophy

Paper by Rainer Mühlhoff and Hannah Ruschemeier: “The purpose limitation principle goes beyond the protection of individual data subjects: it aims to ensure transparency and fairness, while providing exceptions for privileged purposes. However, in the current reality of powerful AI models, purpose limitation is often impossible to enforce and is thus structurally undermined. This paper addresses a critical regulatory gap in EU digital legislation: the risk of secondary use of trained models and anonymised training datasets. Anonymised training data, as well as AI models trained on this data, can be freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. We propose shifting the focus of purpose limitation from data processing to AI model regulation. This approach mandates that those training AI models define the intended purpose and restrict the use of the model solely to this stated purpose…(More)”.