Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”
The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a freewheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel, among others, pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy” — which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.
Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.