How to Hold Algorithms Accountable


Nicholas Diakopoulos and Sorelle Friedler at MIT Technology Review: “Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms….

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, and making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.
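To make the principle concrete, here is a minimal sketch (with entirely hypothetical feature names, weights, and intercept) of how even a simple linear risk model can report, in plain language, which factors moved a score rather than emitting the score alone:

```python
# A minimal sketch of a plain-language explanation for a linear risk model.
# The feature names, weights, and intercept below are hypothetical.

WEIGHTS = {"prior_arrests": 0.6, "age_under_25": 0.9, "employed": -0.7}
BASELINE = 1.0  # hypothetical intercept

def explain(defendant: dict) -> str:
    # Per-feature contribution: weight times the defendant's value.
    contributions = {name: WEIGHTS[name] * defendant.get(name, 0)
                     for name in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    # Rank factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"Your score is {score:.1f}."]
    for name, c in ranked:
        if c == 0:
            continue
        direction = "raised" if c > 0 else "lowered"
        lines.append(f"- {name.replace('_', ' ')} {direction} "
                     f"your score by {abs(c):.1f}.")
    return "\n".join(lines)

print(explain({"prior_arrests": 2, "age_under_25": 1, "employed": 1}))
```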

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.
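What “identified, logged, and benchmarked” might mean in practice: attach an uncertainty estimate to the error rate itself. A minimal sketch using bootstrap resampling, with hypothetical predictions and outcomes:

```python
# A minimal sketch of benchmarking an error rate with bootstrap uncertainty.
# The predictions and ground-truth outcomes below are hypothetical.
import random

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical model outputs
outcomes    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # hypothetical ground truth

def error_rate(pairs):
    return sum(p != y for p, y in pairs) / len(pairs)

pairs = list(zip(predictions, outcomes))
point = error_rate(pairs)

# Re-estimate the error rate on resampled data for a rough 95% interval.
random.seed(0)
boot = sorted(error_rate([random.choice(pairs) for _ in pairs])
              for _ in range(1000))
low, high = boot[25], boot[974]
print(f"error rate {point:.2f} (95% CI roughly {low:.2f}-{high:.2f})")
```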

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review their behavior. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.
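A minimal sketch of what such limited access might look like: a query-only endpoint where third parties can probe inputs and observe outputs without seeing the proprietary internals. The endpoint, port, and feature names here are assumptions for illustration, and the model is a stand-in stub:

```python
# A minimal sketch of query-only audit access: auditors can probe the
# model's behavior over HTTP without the vendor disclosing its internals.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def proprietary_score(features: dict) -> float:
    # Stand-in stub for a closed model; auditors see behavior, not code.
    return min(10.0, 1.0 + 0.5 * features.get("prior_arrests", 0))

class AuditHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)
        reply = json.dumps({"score": proprietary_score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AuditHandler).serve_forever()
```

Querying such an endpoint across many constructed inputs lets outsiders measure behavior without the vendor disclosing the model, which is the compromise the authors describe.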

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained….(More)”
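One common screen for discriminatory effects is comparing favorable-outcome rates across groups against the “80 percent rule” used in employment law. A minimal sketch over hypothetical decision records:

```python
# A minimal sketch of an "80 percent rule" disparate-impact check.
# The decision records below are hypothetical.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def favorable_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = favorable_rate("group_a"), favorable_rate("group_b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: ratio falls below 0.8.")
```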

New Data Portal to analyze governance in Africa


Artificial Intelligence can streamline public comment for federal agencies


John Davis at the Hill: “…What became immediately clear to me was that — although not impossible to overcome — the lack of consistency and shared best practices across all federal agencies in accepting and reviewing public comments was a serious impediment. The promise of Natural Language Processing and cognitive computing to make the public comment process light years faster and more transparent becomes that much more difficult without a consensus among federal agencies on what type of data is collected – and how.

“There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket,” President Obama recently said in an interview with WIRED. “Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily, or getting their forms processed online more simply — there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century.”

…expanding the discussion around Artificial Intelligence and regulatory processes to include how the technology should be leveraged to ensure fairness and responsiveness in the very basic processes of rulemaking – in particular public notices and comments. These technologies could also enable us to consider not just public comments formally submitted to an agency, but the entire universe of statements made through social media posts, blogs, chat boards — and conceivably every other electronic channel of public communication.

Obviously, an anonymous comment on the Internet should not carry the same credibility as a formally submitted, personally signed statement, just as sworn testimony in court holds far greater weight than a grapevine rumor. But so much public discussion today occurs on Facebook pages, in Tweets, on news website comment sections, etc. Anonymous speech enjoys explicit protection under the Constitution, based on a justified expectation that certain sincere statements of sentiment might result in unfair retribution from the government.

Should we simply ignore the valuable insights about actual public sentiment on specific issues made possible through the power of Artificial Intelligence, which can ascertain meaning from an otherwise unfathomable ocean of relevant public conversations? With certain qualifications, I believe Artificial Intelligence, or AI, should absolutely be employed in the critical effort to gain insights from public comments – signed or anonymous.
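One modest, concrete example of such processing: flagging near-duplicate submissions (for instance, form-letter campaigns) so analysts can weigh unique comments separately. A minimal sketch using word-overlap (Jaccard) similarity over hypothetical comments:

```python
# A minimal sketch of near-duplicate detection for public comments,
# e.g., to surface form-letter campaigns. The comments are hypothetical.
comments = [
    "I oppose this rule because it raises costs for small farms.",
    "I oppose this rule because it raises costs for small farms!!",
    "The proposed emissions standard protects public health.",
]

def words(text: str) -> set:
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        sim = jaccard(words(comments[i]), words(comments[j]))
        if sim > 0.8:
            print(f"Comments {i} and {j} look like near-duplicates ({sim:.2f})")
```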

“In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data,” the NSTC report authors state. “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.” As a former federal criminal prosecutor and defense attorney, I am well familiar with the absolute necessity to weigh the relative value of various forms of evidence – or in this case, data…(More)

Make Algorithms Accountable


Julia Angwin in The New York Times: “Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives.

Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.

Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing.

The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.

This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.

An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret. In Wisconsin, for instance, the risk-score formula was developed by a private company and has never been publicly disclosed because it is considered proprietary. This secrecy has made it difficult for lawyers to challenge a result.

The credit score is the lone algorithm in which consumers have a legal right to examine and challenge the underlying data used to generate it. In 1970, President Richard M. Nixon signed the Fair Credit Reporting Act. It gave people the right to see the data in their credit reports and to challenge and delete data that was inaccurate.

For most other algorithms, people are expected to read fine-print privacy policies, in the hopes of determining whether their data might be used against them in a way that they wouldn’t expect.

“We urgently need more due process with the algorithmic systems influencing our lives,” says Kate Crawford, a principal researcher at Microsoft Research who has called for big data due process requirements. “If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.”

The European Union has recently adopted a due process requirement for data-driven decisions based “solely on automated processing” that “significantly affect” citizens. The new rules, which are set to go into effect in May 2018, give European Union citizens the right to obtain an explanation of automated decisions and to challenge those decisions. However, since the European regulations apply only to situations that don’t involve human judgment “such as automatic refusal of an online credit application or e-recruiting practices without any human intervention,” they are likely to affect a narrow class of automated decisions. …More recently, the White House has suggested that algorithm makers police themselves. In a recent report, the administration called for automated decision-making tools to be tested for fairness, and for the development of “algorithmic auditing.”

But algorithmic auditing is not yet common. In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done….(More)”

Big data for government good: using analytics for policymaking


Kent Smetters in The Hill: “Big Data and analytics are driving advancements that touch nearly every part of our lives. From improving disaster relief efforts following a storm, to enhancing patient response to specific medications to criminal justice reform and real-time traffic reporting, Big Data is saving lives, reducing costs and improving productivity across the private and the public sector.

Yet when our elected officials draft policy they lack access to advanced data and analytics that would help them understand the economic implications of proposed legislation. Instead of using Big Data to inform and shape vital policy questions, Members of Congress typically don’t receive a detailed analysis of a bill until after it has been written, and after they have sought support for it. That’s when a policy typically undergoes a detailed budgetary analysis. And even then, these assessments often ignore the broader impact on jobs and the economy.

We must do better. Just as modern marketing firms use deep analytical tools to make smart business decisions, policymakers in Washington should similarly have access to modern tools for analyzing important policy questions.
Will Social Security be solvent for our grandchildren? How will changes to immigration policy influence the number of jobs and the GDP? How will tax reform impact the budget, economic growth and the income distribution? What is the impact of new investments in health care, education and roads? These are big questions that must be answered with reliable data and analysis while legislation is being written, not afterwards. Without such analysis, we are left with ideology-driven partisanship.

Simply put, Washington needs better tools to evaluate these complex factors. Imagine the productive conversations we could have if we applied the kinds of tools that are commonplace in the business world to help Washington make more informed choices.

For example, with the help of a nonpartisan budget model from the Wharton School of the University of Pennsylvania, policymakers and the public can uncover some valuable—and even surprising—information about our choices surrounding Social Security, immigration and other issues.

By analyzing more than 4,000 different Social Security policy options, for example, the model projects that the Social Security Trust Fund will be depleted three years earlier than the Social Security Administration’s projections, barring any changes in current law. The tool’s projected shortfalls are larger than the SSA’s, in fact—because it takes into account how changes over time will affect the outcome. We also learn that many standard policy options fail to significantly move the Trust Fund exhaustion date, as these policies phase in too slowly or are too small. Securing Social Security, we now know, requires a range of policy combinations and potentially larger changes than we may have been considering.
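Mechanically, any such projection rolls a trust fund balance forward under assumed revenue and outlay growth until it hits zero. A toy sketch in that spirit (the starting figures and growth rates are illustrative inventions, not the Wharton model’s assumptions or outputs):

```python
# A toy trust-fund depletion projection. All inputs are illustrative,
# not the Wharton model's assumptions or outputs.
def depletion_year(balance, revenue, outlays, rev_growth, out_growth,
                   start_year=2017):
    year = start_year
    while balance > 0 and year < 2100:
        balance += revenue - outlays   # annual surplus or deficit
        revenue *= 1 + rev_growth
        outlays *= 1 + out_growth
        year += 1
    return year if balance <= 0 else None

# Hypothetical inputs (in $billions): outlays growing faster than revenue.
print(depletion_year(2800, 950, 1000, 0.03, 0.045))
```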

Immigration policy, too, is an area where we could all benefit from greater understanding. The political left argues that legalizing undocumented workers will have a positive impact on jobs and the economy. The political right argues for just the opposite—deportation of undocumented workers—for many of the same reasons. But, it turns out, the numbers don’t offer much support to either side.

On one hand, legalization actually slightly reduces the number of jobs. The reason is simple: legal immigrants have better access to school and college, and they can spend more time looking for the best job match. However, because legal immigrants can gain more skills, the actual impact on GDP from legalization alone is basically a wash.

The other option being discussed, deportation, also reduces jobs, in this case because the number of native-born workers can’t rise enough to absorb the job losses caused by deportation. GDP also declines. Calculations based on 125 different immigration policy combinations show that increasing the total number of legal immigrants—especially those with higher skills—is the most effective policy for increasing employment rates and GDP….(More)”

Data-Driven Justice Initiative, Disrupting Cycle of Incarceration


The White House: “Every year, more than 11 million people move through America’s 3,100 local jails, many on low-level, non-violent misdemeanors, costing local governments approximately $22 billion a year. In local jails, 64 percent of people suffer from mental illness, 68 percent have a substance abuse disorder, and 44 percent suffer from chronic health problems. Communities across the country have recognized that a relatively small number of these highly vulnerable people cycle repeatedly not just through local jails, but also hospital emergency rooms, shelters, and other public systems, receiving fragmented and uncoordinated care at great cost to American taxpayers, with poor outcomes.

For example, Miami-Dade County, Florida, found that 97 people with serious mental illness accounted for $13.7 million in services over four years, spending more than 39,000 days in jails, emergency rooms, state hospitals, or psychiatric facilities in the county. In response, the county provided key mental health de-escalation training to their police officers and 911 dispatchers and, over the past five years, Miami-Dade police have responded to nearly 50,000 calls for service for people in mental health crisis, but have made only 109 arrests, diverting more than 10,000 people to services or safely stabilizing situations without arrest. The jail population fell from over 7,000 to just over 4,700, and the county was able to close an entire jail facility, saving nearly $12 million a year.

In addition, on any given day, more than 450,000 people are held in jail before trial, nearly 63 percent of the local jail population, even though they have not been convicted of a crime. A 2014 study of New York’s Rikers Island jail found more than 86 percent of detained individuals were held on a bond of $500 or less. To tackle the challenges of bail, in 2014 Charlotte-Mecklenburg, NC began using a data-based risk assessment tool to identify low-risk people in jail and find ways to release them safely. Since they began using the tool, the jail population has gone down 20 percent, significantly more low-risk individuals have been released from jail, and there has been no increase in reported crime.
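Tools of this kind typically reduce to a transparent point schedule over a handful of factors. A minimal sketch of the general shape (the factors, points, and cutoffs are hypothetical, not Charlotte-Mecklenburg’s actual instrument):

```python
# A minimal sketch of a point-based pretrial risk checklist.
# Factors, points, and cutoffs are hypothetical.
def pretrial_risk(defendant: dict) -> str:
    points = 0
    points += 2 if defendant.get("pending_charge") else 0
    points += 2 if defendant.get("prior_failure_to_appear") else 0
    points += 1 if defendant.get("prior_violent_conviction") else 0
    return "low" if points <= 1 else "moderate" if points <= 3 else "high"

print(pretrial_risk({"pending_charge": False,
                     "prior_failure_to_appear": False,
                     "prior_violent_conviction": True}))  # -> low
```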

To break this cycle of incarceration, the Administration has launched the Data-Driven Justice Initiative with a bipartisan coalition of city, county, and state governments that have committed to using data-driven strategies to divert low-level offenders with mental illness out of the criminal justice system and to change approaches to pretrial incarceration so that low-risk offenders no longer stay in jail simply because they cannot afford a bond. These innovative strategies, which have measurably reduced jail populations in several communities, help stabilize individuals and families, better serve communities, and, often, save money in the process. DDJ communities commit to:

  1. combining data from across criminal justice and health systems to identify the individuals with the highest number of contacts with police, ambulance, emergency departments, and other services, and leveraging existing resources to link them to health, behavioral health, and social services in the community (see the sketch after this list);
  2. equipping law enforcement and first responders to enable more rapid deployment of tools, approaches, and other innovations they need to safely and more effectively respond to people in mental health crisis and divert people with high needs to identified service providers instead of arrest; and
  3. working towards using objective, data-driven, validated risk assessment tools to inform the safe release of low-risk defendants from jails in order to reduce the jail population held pretrial….(More: FactSheet)”
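At its core, the cross-system match in the first commitment is a join-and-count over contact records from separate agencies. A minimal sketch with hypothetical person IDs:

```python
# A minimal sketch of combining jail, EMS, and emergency-department
# contact records to surface the highest-contact individuals.
# All person IDs and records are hypothetical.
from collections import Counter

jail_bookings = ["p01", "p02", "p01", "p03"]
ems_runs      = ["p01", "p04", "p01"]
ed_visits     = ["p01", "p02", "p02"]

contacts = Counter(jail_bookings + ems_runs + ed_visits)
for person, n in contacts.most_common(3):
    print(f"{person}: {n} contacts across systems")
```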

The Billions We’re Wasting in Our Jails


Stephen Goldsmith  and Jane Wiseman in Governing: “By using data analytics to make decisions about pretrial detention, local governments could find substantial savings while making their communities safer….

Few areas of local government spending present better opportunities for dramatic savings than those that surround pretrial detention. Cities and counties are wasting more than $3 billion a year, and often inducing crime and job loss, by holding the wrong people while they await trial. The problem: Only 10 percent of jurisdictions use risk data analytics when deciding which defendants should be detained.

As a result, dangerous people are out in our communities, while many who could be safely in the community are behind bars. Vast numbers of people accused of petty offenses spend their pretrial detention time jailed alongside hardened convicts, learning from them how to be better criminals….

In this era of big data, analytics not only can predict and prevent crime but also can discern who should be diverted from jail to treatment for underlying mental health or substance abuse issues. Avoided costs aggregating in the billions could be better spent on detaining high-risk individuals, more mental health and substance abuse treatment, more police officers and other public safety services.

Jurisdictions that do use data to make pretrial decisions have achieved not only lower costs but also greater fairness and lower crime rates. Washington, D.C., releases 85 percent of defendants awaiting trial. Compared to the national average, those released in D.C. are two and a half times more likely to remain arrest-free and one and a half times as likely to show up for court.

Louisville, Ky., implemented risk-based decision-making using a tool developed by the Laura and John Arnold Foundation and now releases 70 percent of defendants before trial. Those released have turned out to be twice as likely to return to court and to stay arrest-free as those in other jurisdictions. Mesa County, Colo., and Allegheny County, Pa., both have achieved significant savings from reduced jail populations due to data-driven release of low-risk defendants.

Data-driven approaches are beginning to produce benefits not only in the area of pretrial detention but throughout the criminal justice process. Dashboards now in use in a handful of jurisdictions allow not only administrators but also the public to see court waiting times by offender type and to identify and address processing bottlenecks….(More)”

White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates


Jason Shueh at GovTech: “The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.

On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.

“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Overmann said, speaking at a Computing Community Consortium public workshop.

She argued that the U.S., the country with the highest number of incarcerated citizens in the world, is in need of systematic reforms, both in the data tools used to process alleged offenders and at the policy level to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overmann said she has studied and witnessed an alarming number of issues in terms of bias and unwarranted punishments.

For instance, she said that statistically, while drug use is about equal between African-Americans and Caucasians, African-Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes….

Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.

“We think these types of data exchanges [between officials and technologists] can actually be hugely impactful if we can figure out how to take this information and operationalize it for the folks who run these systems,” Overmann noted.

The opportunities to apply artificial intelligence and data analytics, she said, might include using it to improve questions on parole screenings, using it to analyze police body camera footage, and applying it to criminal justice data for legislators and policy workers….

If the private sector is any indication, artificial intelligence and machine learning techniques could be used to interpret this new and vast supply of law enforcement data. In an earlier presentation, Eric Horvitz, the managing director at Microsoft Research, showcased how the company has applied artificial intelligence to vision and language to interpret live video content for the blind. The app, titled SeeingAI, can translate live video footage, captured from an iPhone or a pair of smart glasses, into instant audio messages for the visually impaired. Twitter’s live-streaming app Periscope has employed similar technology to guide users to the right content….(More)”

What Algorithmic Injustice Looks Like in Real Life


Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner at Pacific Standard: “Courtrooms across the nation are using computer programs to predict who will be a future criminal. The programs help inform decisions on everything from bail to sentencing. They are meant to make the criminal justice system fairer — and to weed out human biases.

ProPublica tested one such program and found that it’s often wrong — and biased against blacks.

We looked at the risk scores the program spit out for more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014. We checked to see how many defendants were charged with new crimes over the next two years — the same benchmark used by the creators of the algorithm. Our analysis showed (a minimal sketch of the underlying check appears after the list):

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.
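The first finding is a comparison of false positive rates: among defendants who did not go on to reoffend within the two-year window, how often was each group labeled high risk? A minimal sketch of that check over hypothetical records:

```python
# A minimal sketch of a false-positive-rate comparison by group.
# The records below are hypothetical, not ProPublica's data.
records = [
    # (group, labeled_high_risk, reoffended_within_two_years)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: FPR = {false_positive_rate(group):.0%}")
```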

What does that look like in real life? Here are five comparisons of defendants — one black and one white — who were charged with similar offenses but got very different scores.

Two Shoplifting Arrests

James Rivelli, 53: In August 2014, Rivelli allegedly shoplifted seven boxes of Crest Whitestrips from a CVS. An employee called the police. When the cops found Rivelli and pulled him over, they found the Whitestrips as well as heroin and drug paraphernalia in his car. He was charged with two felony counts and four misdemeanors for grand theft, drug possession, and driving with a suspended license and expired tags.

Past offenses: He had been charged with felony aggravated assault for domestic violence in 1996, felony grand theft also in 1996, and a misdemeanor theft in 1998. He also says that he was incarcerated in Massachusetts for felony drug trafficking.

COMPAS score: 3 — low

Subsequent offense: In April 2015, he was charged with two felony counts of grand theft in the third degree for shoplifting about $1,000 worth of tools from a Home Depot.

He says: Rivelli says his crimes were fueled by drug use and he is now sober. “I’m surprised [my risk score] is so low,” Rivelli said in an interview in his mother’s apartment in April. “I spent five years in state prison in Massachusetts.”…(More)

Moneyballing Criminal Justice


Anne Milgram in the Atlantic: “…One area in which the potential of data analysis is still not adequately realized, however, is criminal justice. This is somewhat surprising given the success of CompStat, a law enforcement management tool that uses data to figure out how police resources can be used to reduce crime and hold law enforcement officials accountable for results. CompStat is widely credited with contributing to New York City’s dramatic reduction in serious crime over the past two decades. Yet data-driven decision-making has not expanded to the whole of the criminal justice system.

But it could. And, in this respect, the front end of the system — the part of the process that runs from arrest through sentencing — is particularly important. At this stage, police, prosecutors, defenders, and courts make key choices about how to deal with offenders — choices that, taken together, have an enormous impact on crime. Yet most jurisdictions do not collect or analyze the data necessary to know whether these decisions are being made in a way that accomplishes the most important goals of the criminal justice system: increased public safety, decreased recidivism, reduced cost, and the fair, efficient administration of justice.

Even in jurisdictions where good data exists, a lack of technology is often an obstacle to using it effectively. Police, jails, courts, district attorneys, and public defenders each keep separate information systems, the data from which is almost never pulled together and analyzed in a way that could answer the questions that matter most: Who is in our criminal justice system? What crimes have been charged? What risks do individual offenders pose? And which option would best protect the public and make the best use of our limited resources?

While debates about prison over-crowding, three strikes laws, and mandatory minimum sentences have captured public attention, the importance of what happens between arrest and sentencing has gone largely unnoticed. Even though I ran the criminal justice system in New Jersey, one of the largest states in the country, I had not realized the magnitude of the pretrial issues until I was tasked by the Laura and John Arnold Foundation with figuring out which aspects of criminal justice had the most need and presented the greatest opportunity for reform….

Technology could help us leverage data to identify offenders who will pose unacceptable risks to society if they are not behind bars and distinguish them from those defendants who will have lower recidivism rates if they are supervised in the community or given alternatives to incarceration before trial. Likewise, it could help us figure out which terms of imprisonment, alternatives to incarceration, and other interventions work best–and for whom. And the list does not end there.

The truth is our criminal justice system already makes these decisions every day. But it makes them without knowing whether they’re the right ones. That needs to change. If data is powerful enough to transform baseball, health care, and education, it can do the same for criminal justice….(More)”
