Nicholas Diakopoulos in Slate: “In 2015 more than 59 million Americans received some form of benefit from the Social Security Administration, not just for retirement but also for disability or as a survivor of a deceased worker. It’s a behemoth of a government program, and keeping it solvent has preoccupied the Office of the Chief Actuary of the Social Security Administration for years. That office makes yearly forecasts of key demographic (such as mortality rates) or economic (for instance, labor force participation) factors that inform how policy can or should change to keep the program on sound financial footing. But a recent Harvard University study examined several of these forecasts and found that they were systematically biased—underestimating life expectancy and implying that funds were on firmer financial ground than warranted. The procedures and methods that the SSA uses aren’t open for inspection either, posing challenges to replicating and debugging those predictive algorithms.
Whether forecasting the solvency of social programs, waging a war, managing national security, doling out justice and punishment, or educating the populace, government has a lot of decisions to make—and it’s increasingly using algorithms to systematize and scale that bureaucratic work. In the ideal democratic state, the electorate chooses a government that provides social goods and exercises its authority via regulation. The government is legitimate to the extent that it is held accountable to the citizenry. Though as the SSA example shows, tightly held algorithms pose issues of accountability that grind at the very legitimacy of the government itself.
One of the immensely useful abilities of algorithms is to rank and prioritize huge amounts of data, turning a messy pile of items into a neat and orderly list. In 2013 the Obama administration announced that it would be getting into the business of ranking colleges, helping the citizens of the land identify and evaluate the “best” educational opportunities. But two years later, the idea of ranking colleges had been neutered, traded in for what amounts to a data dump of educational statistics called the College Scorecard. The human influences, subjective factors, and methodological pitfalls involved in quantifying education into rankings would be numerous. Perhaps the government sensed that any ranking would be dubious—that it would be riddled with questions of what data was used and how various statistical factors were weighted. How could the government make such a ranking legitimate in the eyes of the public and of the industry that it seeks to hold accountable?
That’s a complicated question that goes far beyond college rankings. But whatever the end goal, government needs to develop protocols for opening up algorithmic black boxes to democratic processes.
Transparency offers one promising path forward. Let’s consider the new risk-assessment algorithm that the state of Pennsylvania is developing to help make criminal sentencing decisions. Unlike some other states that are pursuing algorithmic criminal justice using proprietary systems, the level of transparency around the Pennsylvania Risk Assessment Project is laudable, with several publicly available in-depth reports on the development of the system….(More)”