Philanthropy Can Help Communities Weed Out Inequity in Automated Decision-Making Tools

Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants who used housing vouchers.

The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.

Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.

How these tools are built, used, and governed will help shape the opportunities available to everyday citizens, for good or ill.

Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn what needs to be done to weed out bias and inequities in automated decision-making tools, and we recently produced a report about how to harness their potential to promote equity and social good.

Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:

Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess the fairness of their tools….(More)”