Paper by Marianna Ganapini and Enrico Panai: “This is an audit framework for AI-nudging. Unlike the static form of nudging usually discussed in the literature, we focus here on a type of nudging that uses large amounts of data to provide personalized, dynamic feedback and interfaces. We call this AI-nudging (Lanzing, 2019, p. 549; Yeung, 2017). The ultimate goal of the audit outlined here is to ensure that an AI system that uses nudges will maintain a level of moral inertia and neutrality by complying with the recommendations, requirements, or suggestions of the audit (in other words, the criteria of the audit). In the case of unintended negative consequences, the audit suggests risk mitigation mechanisms that can be put in place. In the case of unintended positive consequences, it suggests some reinforcement mechanisms. Sponsored by the IBM-Notre Dame Tech Ethics Lab…(More)”.