Machine Learning in Public Policy: The Perils and the Promise of Interpretability
Report by Evan D. Peet, Brian G. Vegetabile, Matthew Cefalu, Joseph D. Pane, Cheryl L. Damberg: “Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. However, overconfidence in results and incorrectly interpreted algorithms can lead to peril, such as the perpetuation of structural inequities. In this Perspective, the authors give an overview of ML and discuss the importance of its interpretability. In addition, they offer the following recommendations, which will help policymakers develop trustworthy, transparent, and accountable information that leads to more-objective and more-equitable policy decisions: (1) improve data through coordinated investments; (2) approach ML expecting interpretability, and be critical; and (3) leverage interpretable ML to understand policy values and predict policy impacts…(More)”.
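
To make the third recommendation concrete: "interpretable ML" typically means either models that are interpretable by design (e.g., linear or logistic regression, where coefficients can be read directly) or model-agnostic post hoc checks such as permutation importance. The sketch below is not from the RAND Perspective; it is a minimal, generic illustration assuming scikit-learn and entirely synthetic data, with hypothetical feature names chosen only for readability.

```python
# Illustrative sketch only: a generic example of inspecting an interpretable model.
# Not code from the report; data and feature names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

# Synthetic stand-in for policy-relevant data (e.g., a binary program outcome).
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
feature_names = ["income", "age", "region_code", "prior_enrollment", "household_size"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear model is interpretable "by design": each coefficient shows the direction
# (and, for comparable feature scales, the relative strength) of a feature's association.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18s}: coefficient = {coef:+.3f}")

# Permutation importance is a model-agnostic check: how much does held-out accuracy
# drop when one feature's values are randomly shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:>18s}: importance  = {imp:.3f}")
```

Reading coefficients and importance scores side by side is one simple way a policymaker's analyst could check whether a model's predictions rest on features that are acceptable to act on, which is the kind of transparency the report's recommendations point toward.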