Machine Learning in Public Policy: The Perils and the Promise of Interpretability

Report by Evan D. Peet, Brian G. Vegetabile, Matthew Cefalu, Joseph D. Pane, Cheryl L. Damberg: “Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. However, overconfidence in results and incorrectly interpreted algorithms can lead to peril, such as the perpetuation of structural inequities. In this Perspective, the authors give an overview of ML and discuss the importance of its interpretability. In addition, they offer the following recommendations, which will help policymakers develop trustworthy, transparent, and accountable information that leads to more-objective and more-equitable policy decisions: (1) improve data through coordinated investments; (2) approach ML expecting interpretability, and be critical; and (3) leverage interpretable ML to understand policy values and predict policy impacts…(More)”.