Kaveh Waddell in the Atlantic: “Big data can help solve problems that are too big for one person to wrap their head around. It’s helped businesses cut costs, cities plan new developments, intelligence agencies discover connections between terrorists, health officials predict outbreaks, and police forces get ahead of crime. Decision-makers are increasingly told to “listen to the data,” and make choices informed by the outputs of complex algorithms.
But when the data is about humans—especially those who lack a strong voice—those algorithms can become oppressive rather than liberating. For many poor people in the U.S., the data that’s gathered about them at every turn can obstruct attempts to escape poverty.
Low-income communities are among the most surveilled communities in America. And it’s not just the police that are watching, says Michele Gilman, a law professor at the University of Baltimore and a former civil-rights attorney at the Department of Justice. Public-benefits programs, child-welfare systems, and monitoring programs for domestic-abuse offenders all gather large amounts of data on their users, who are disproportionately poor.
In certain places, applicants have to undergo fingerprinting and drug testing just to qualify for public benefits like food stamps. Once people start receiving the benefits, officials regularly monitor how they spend the money, and sometimes check in on them in their homes.
Data gathered from those sources can end up feeding back into police systems, leading to a cycle of surveillance. “It becomes part of these big-data information flows that most people aren’t aware they’re captured in, but that can have really concrete impacts on opportunities,” Gilman says.
Once an arrest crops up on a person’s record, for example, it becomes much more difficult for that person to find a job, secure a loan, or rent a home. And that’s not necessarily because loan officers or hiring managers themselves pass over applicants with arrest records—the computer systems that whittle down tall stacks of resumes or loan applications will often weed those applicants out based on run-ins with the police.
When big-data systems make predictions that cut people off from meaningful opportunities like these, they can violate the legal principle of presumed innocence, according to Ian Kerr, a professor at the University of Ottawa who researches ethics, law, and technology.
Outside the court system, “innocent until proven guilty” is upheld by people’s due-process rights, Kerr says: “A right to be heard, a right to participate in one’s hearing, a right to know what information is collected about me, and a right to challenge that information.” But when opaque data-driven decision-making takes over—what Kerr calls “algorithmic justice”—some of those rights begin to erode….(More)”