The Challenges of Prediction: Lessons from Criminal Justice


Paper by David G. Robinson: “Government authorities at all levels increasingly rely on automated predictions, grounded in statistical patterns, to shape people’s lives. Software that wields government power deserves special attention, particularly when it uses historical data to decide automatically what ought to happen next.

In this article, I draw examples primarily from the domain of criminal justice — and in particular, the intersection of civil rights and criminal justice — to illustrate three structural challenges that can arise whenever law or public policy contemplates adopting predictive analytics as a tool:

1) What matters versus what the data measure;
2) Current goals versus historical patterns; and
3) Public authority versus private expertise.

After explaining each of these challenges and illustrating each with concrete examples, I describe feasible ways to avoid these problems and to do prediction more successfully…(More)”

Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing


Priscilla Guo, Danielle Kehl, and Sam Kessler at Responsive Communities (Harvard): “In the summer of 2016, some unusual headlines began appearing in news outlets across the United States. “Secret Algorithms That Predict Future Criminals Get a Thumbs Up From the Wisconsin Supreme Court,” read one. Another declared: “There’s software used across the country to predict future criminals. And it’s biased against blacks.” These news stories (and others like them) drew attention to a previously obscure but fast-growing area in the field of criminal justice: the use of risk assessment software, powered by sophisticated and sometimes proprietary algorithms, to predict whether individual criminals are likely candidates for recidivism. In recent years, these programs have spread like wildfire throughout the American judicial system. They are now being used in a broad capacity, in areas ranging from pre-trial risk assessment to sentencing and probation hearings.

This paper focuses on the latest—and perhaps most concerning—use of these risk assessment tools: their incorporation into the criminal sentencing process, a development which raises fundamental legal and ethical questions about fairness, accountability, and transparency. The goal is to provide an overview of these issues and offer a set of key considerations and questions for further research that can help local policymakers who are currently implementing or considering implementing similar systems.

We start by putting this trend in context: the history of actuarial risk in the American legal system and the evolution of algorithmic risk assessments as the latest incarnation of a much broader trend. We go on to discuss how these tools are used in sentencing specifically and how that differs from other contexts like pre-trial risk assessment. We then delve into the legal and policy questions raised by the use of risk assessment software in sentencing decisions, including the potential for constitutional challenges under the Due Process and Equal Protection clauses of the Fourteenth Amendment. Finally, we summarize the challenges that these systems create for law and policymakers in the United States, and outline a series of possible best practices to ensure that these systems are deployed in a manner that promotes fairness, transparency, and accountability in the criminal justice system….(More)”.

Bridging Governments’ Borders


Robyn Scott & Lisa Witter at SSIR: “…Our research found that “disconnection” falls into five negatively reinforcing categories in the public sector; a closer look at these categories may help policymakers see the challenge before them more clearly:

1. Disconnected Governments

There is a truism in politics and government that all policy is local and context-dependent. Whether this was ever an accurate statement is questionable; it is certainly no longer. While all policy must ultimately be customized for local conditions, it is absurd to assume there is little or nothing to learn from other countries. Three trends, in fact, indicate that solutions will become increasingly fungible between countries…..

2. Disconnected Issues

What climate change policy can endure without a job-creation strategy? What sensible criminal justice reform does not consider education? Yet even within countries, departments and their employees often remain as foreign to each other as do nations….

3. Disconnected Public Servants

The isolation of governments, and of government departments, is caused by and reinforces the isolation of people working in government, who have few incentives—and plenty of disincentives—to share what they are working on…..

4. Disconnected Citizens

…There are areas of increasingly visible progress in bridging the disconnections of government, citizen engagement being one. We’re still in the early stages, but private sector fashions such as human-centered design and design thinking have become government buzzwords. And platforms enabling new types of citizen engagement—from participatory budgeting to apps that people use to report potholes—are increasingly popping up around the world…..

5. Disconnected Ideas

According to the World Bank’s own data, one third of its reports are never read, even once. Foundations and academia pour tens of millions of dollars into policy research with few targeted channels to reach policymakers; they also tend to produce and deliver information in formats that policymakers don’t find useful. People in government, like everyone else, are frequently on their mobile phones, and short of time….(More)”

Mastercard’s Big Data For Good Initiative: Data Philanthropy On The Front Lines


Interview by Randy Bean of Shamina Singh: Much has been written about big data initiatives and the efforts of market leaders to derive critical business insights faster. Less has been written about initiatives by some of these same firms to apply big data and analytics to a different set of issues, which are not solely focused on revenue growth or bottom line profitability. While the focus of most writing has been on the use of data for competitive advantage, a small set of companies has been undertaking, with much less fanfare, a range of initiatives designed to ensure that data can be applied not just for corporate good, but also for social good.

One such firm is Mastercard, which describes itself as a technology company in the payments industry, connecting buyers and sellers in 210 countries and territories across the globe. In 2013 Mastercard launched the Mastercard Center for Inclusive Growth, which operates as an independent subsidiary of Mastercard and is focused on the application of data to a range of issues for social benefit….

In testimony before the Senate Committee on Foreign Affairs on May 4, 2017, Mastercard Vice Chairman Walt Macnee, who serves as the Chairman of the Center for Inclusive Growth, addressed issues of private sector engagement. Macnee noted, “The private sector and public sector can each serve as a force for good independently; however when the public and private sectors work together, they unlock the potential to achieve even more.” Macnee further commented, “We will continue to leverage our technology, data, and know-how in an effort to solve many of the world’s most pressing problems. It is the right thing to do, and it is also good for business.”…

Central to the mission of the Mastercard Center is the notion of “data philanthropy”. This term encompasses notions of data collaboration and data sharing and is at the heart of the initiatives that the Center is undertaking. The three cornerstones of the Center’s mandate are:

  • Sharing Data Insights – This is achieved through the concept of “data grants,” which entails granting access to proprietary insights in support of social initiatives in a way that fully protects consumer privacy.
  • Data Knowledge – The Mastercard Center undertakes collaborations with not-for-profit and governmental organizations on a range of initiatives. One such effort was in collaboration with the Obama White House’s Data-Driven Justice Initiative, by which data was used to help advance criminal justice reform. This initiative was then able, through the use of insights provided by Mastercard, to demonstrate the impact crime has on merchant locations and local job opportunities in Baltimore.
  • Leveraging Expertise – Similarly, the Mastercard Center has collaborated with private organizations such as DataKind, which undertakes data science initiatives for social good.

Just this past month, the Mastercard Center released initial findings from its Data Exploration: Neighborhood Crime and Local Business initiative. This effort was focused on ways in which Mastercard’s proprietary insights could be combined with public data on commercial robberies to help understand the potential relationships between criminal activity and business closings. A preliminary analysis showed a spike in commercial robberies followed by an increase in bar and nightclub closings. These analyses help community and business leaders understand factors that can impact business success.

Late last year, Ms. Singh issued A Call to Action on Data Philanthropy, in which she challenges her industry peers to look at ways in which they can make a difference — “I urge colleagues at other companies to review their data assets to see how they may be leveraged for the benefit of society.” She concludes, “the sheer abundance of data available today offers an unprecedented opportunity to transform the world for good.”….(More)
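As a rough sketch of the kind of lagged analysis described above (testing whether a robbery spike precedes business closings), one might compute correlations at increasing lags. The monthly figures and the method below are invented for illustration and are not Mastercard's actual analysis:

```python
# Toy lagged-correlation check: do robbery spikes precede closings?
# All numbers are invented for demonstration only.
import pandas as pd

robberies = pd.Series([12, 14, 13, 31, 35, 30, 15, 13],  # spike in months 4-6
                      name="commercial_robberies")
closings = pd.Series([2, 2, 3, 2, 3, 7, 9, 8],           # closings rise later
                     name="bar_nightclub_closings")

# Correlate robberies at month t with closings at month t + lag.
for lag in range(4):
    corr = robberies.corr(closings.shift(-lag))
    print(f"lag={lag} months: correlation={corr:.2f}")
```

In this toy data the correlation strengthens at a lag of one to two months, which is the shape of the finding the Center reported: robberies first, closings afterward.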

Algorithmic Transparency for the Smart City


Paper by Robert Brauneis and Ellen P. Goodman: “Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain….(More)”.

AI, people, and society


Eric Horvitz at Science: “In an essay about his science fiction, Isaac Asimov reflected that “it became very common…to picture robots as dangerous devices that invariably destroyed their creators.” He rejected this view and formulated the “laws of robotics,” aimed at ensuring the safety and benevolence of robotic systems. Asimov’s stories about the relationship between people and robots were only a few years old when the phrase “artificial intelligence” (AI) was used for the first time in a 1955 proposal for a study on using computers to “…solve kinds of problems now reserved for humans.” Over the half-century since that study, AI has matured into subdisciplines that have yielded a constellation of methods that enable perception, learning, reasoning, and natural language understanding.

Growing exuberance about AI has come in the wake of surprising jumps in the accuracy of machine pattern recognition using methods referred to as “deep learning.” The advances have put new capabilities in the hands of consumers, including speech-to-speech translation and semi-autonomous driving. Yet, many hard challenges persist—and AI scientists remain mystified by numerous capabilities of human intellect.

Excitement about AI has been tempered by concerns about potential downsides. Some fear the rise of superintelligences and the loss of control of AI systems, echoing themes from age-old stories. Others have focused on nearer-term issues, highlighting potential adverse outcomes. For example, data-fueled classifiers used to guide high-stakes decisions in health care and criminal justice may be influenced by biases buried deep in data sets, leading to unfair and inaccurate inferences. Other imminent concerns include legal and ethical issues regarding decisions made by autonomous systems, difficulties with explaining inferences, threats to civil liberties through new forms of surveillance, precision manipulation aimed at persuasion, criminal uses of AI, destabilizing influences in military applications, and the potential to displace workers from jobs and to amplify inequities in wealth.

As we push AI science forward, it will be critical to address the influences of AI on people and society, on short- and long-term scales. Valuable assessments and guidance can be developed through focused studies, monitoring, and analysis. The broad reach of AI’s influences requires engagement with interdisciplinary groups, including computer scientists, social scientists, psychologists, economists, and lawyers. On longer-term issues, conversations are needed to bridge differences of opinion about the possibilities of superintelligence and malevolent AI. Promising directions include working to specify trajectories and outcomes, and engaging computer scientists and engineers with expertise in software verification, security, and principles of failsafe design…. Asimov concludes in his essay, “I could not bring myself to believe that if knowledge presented danger, the solution was ignorance. To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger, rather you learned how to handle it safely.” Indeed, the path forward for AI should be guided by intellectual curiosity, care, and collaboration….(More)”

Powerlessness and the Politics of Blame


The Jefferson Lecture in the Humanities by Martha C. Nussbaum: “… I believe the Greeks and Romans are right: anger is a poison to democratic politics, and it is all the worse when fueled by a lurking fear and a sense of helplessness. As a philosopher I have been working on these ideas for some time, first in a 2016 book called Anger and Forgiveness, and now in a book in progress called The Monarchy of Fear, investigating the relationship between anger and fear. In my work, I draw not only on the Greeks and Romans, but also on some recent figures, as I shall tonight. I conclude that we should resist anger in ourselves and inhibit its role in our political culture.

That idea, however, is radical and evokes strong opposition. For anger, with all its ugliness, is a popular emotion. Many people think that it is impossible to care for justice without anger at injustice, and that anger should be encouraged as part of a transformative process. Many also believe that it is impossible for individuals to stand up for their own self-respect without anger, that someone who reacts to wrongs and insults without anger is spineless and downtrodden. Nor are these ideas confined to the sphere of personal relations. The most popular position in the sphere of criminal justice today is retributivism, the view that the law ought to punish aggressors in a manner that embodies the spirit of justified anger. And it is also very widely believed that successful challenges against great injustice need anger to make progress.

Still, we may persist in our Aeschylean skepticism, remembering that recent years have seen three noble and successful freedom movements conducted in a spirit of non-anger: those of Mohandas Gandhi, Martin Luther King, Jr., and Nelson Mandela—surely people who stood up for their self-respect and that of others, and who did not acquiesce in injustice.

I’ll now argue that a philosophical analysis of anger can help us support these philosophies of non-anger, showing why anger is fatally flawed from a normative viewpoint—sometimes incoherent, sometimes based on bad values, and especially poisonous when people use it to deflect attention from real problems that they feel powerless to solve. Anger pollutes democratic politics and is of dubious value in both life and the law. I’ll present my general view, and then show its relevance to thinking well about the struggle for political justice, taking our own ongoing struggle for racial justice as my example. And I’ll end by showing why these arguments make it urgent for us to learn from literature and philosophy, keeping the humanities strong in our society….(More)”

Updated N.Y.P.D. Anti-Crime System to Ask: ‘How We Doing?’


It was a policing invention with a futuristic-sounding name — CompStat — when the New York Police Department introduced it as a management system for fighting crime in an era of much higher violence in the 1990s. Police departments around the country, and the world, adapted its system of mapping muggings, robberies and other crimes; measuring police activity; and holding local commanders accountable.

Now, a quarter-century later, it is getting a broad reimagining and being brought into the mobile age. Moving away from simple stats and figures, CompStat is getting touchy-feely. It’s going to ask New Yorkers — via thousands of questions on their phones — “How are you feeling?” and “How are we, the police, doing?”

Whether this new approach will be mimicked elsewhere is still unknown, but as is the case with almost all new tactics in the N.Y.P.D. — the largest municipal police force in the United States by far — it will be closely watched. Nor is it clear if New Yorkers will embrace this approach, reject it as intrusive or simply be annoyed by it.

The system, using location technology, sends out short sets of questions to smartphones along three themes: Do you feel safe in your neighborhood? Do you trust the police? Are you confident in the New York Police Department?

The questions stream out every day, around the clock, on 50,000 different smartphone applications and present themselves on screens as eight-second surveys.
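A rough sketch of how such a location-keyed micro-survey might be represented in software follows; the structure and names are conjecture for illustration, not the N.Y.P.D.'s actual implementation:

```python
# Hypothetical representation of a location-targeted micro-survey.
# Nothing here reflects the N.Y.P.D.'s real system design.
from dataclasses import dataclass, field

@dataclass
class MicroSurvey:
    precinct: str                 # geofence key derived from the phone's location
    theme: str                    # one of the three themes described above
    questions: list[str] = field(default_factory=list)
    max_seconds: int = 8          # the "eight-second survey" budget

THEMES = {
    "safety":     ["Do you feel safe in your neighborhood?"],
    "trust":      ["Do you trust the police?"],
    "confidence": ["Are you confident in the New York Police Department?"],
}

def build_survey(precinct: str, theme: str) -> MicroSurvey:
    """Assemble a short survey for a device reporting `precinct`."""
    return MicroSurvey(precinct=precinct, theme=theme, questions=THEMES[theme])

print(build_survey(precinct="19th", theme="trust"))
```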

The department believes it will get a more diverse measure of community satisfaction, and allow it to further drive down crime. For now, Police Commissioner James P. O’Neill is calling the tool a “sentiment meter,” though he is open to suggestions for a better name….(More)”.

Why big-data analysis of police activity is inherently biased


From The Conversation: “In early 2017, Chicago Mayor Rahm Emanuel announced a new initiative in the city’s ongoing battle with violent crime. The most common solutions to this sort of problem involve hiring more police officers or working more closely with community members. But Emanuel declared that the Chicago Police Department would expand its use of software, enabling what is called “predictive policing,” particularly in neighborhoods on the city’s south side.

The Chicago police will use data and computer analysis to identify neighborhoods that are more likely to experience violent crime, assigning additional police patrols in those areas. In addition, the software will identify individual people who are expected to become – but have yet to be – victims or perpetrators of violent crimes. Officers may even be assigned to visit those people to warn them against committing a violent crime.

Any attempt to curb the alarming rate of homicides in Chicago is laudable. But the city’s new effort seems to ignore evidence, including recent research from members of our policing study team at the Human Rights Data Analysis Group, that predictive policing tools reinforce, rather than reimagine, existing police practices. Their expanded use could lead to further targeting of communities or people of color.

Working with available data

At its core, any predictive model or algorithm is a combination of data and a statistical process that seeks to identify patterns in the numbers. This can include looking at police data in hopes of learning about crime trends or recidivism. But a useful outcome depends not only on good mathematical analysis: It also needs good data. That’s where predictive policing often falls short.

Machine-learning algorithms learn to make predictions by analyzing patterns in an initial training data set and then look for similar patterns in new data as they come in. If they learn the wrong signals from the data, the subsequent analysis will be lacking.
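A minimal sketch of that train-then-predict pattern, on synthetic data rather than any deployed system's, might look like this:

```python
# Minimal train-then-predict illustration on synthetic data.
# If the training labels encode biased past decisions, the fitted
# model reproduces that bias when scoring new cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" records: one feature plus a noisy label.
X_train = rng.normal(size=(500, 1))
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# New cases are scored with whatever signal, good or bad, was learned.
X_new = rng.normal(size=(5, 1))
print(model.predict(X_new))
```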

This happened with a Google initiative called “Flu Trends,” which was launched in 2008 in hopes of using information about people’s online searches to spot disease outbreaks. Google’s systems would monitor users’ searches and identify locations where many people were researching various flu symptoms. In those places, the program would alert public health authorities that more people were about to come down with the flu.

But the project failed to account for the potential for periodic changes in Google’s own search algorithm. In an early 2012 update, Google modified its search tool to suggest a diagnosis when users searched for terms like “cough” or “fever.” On its own, this change increased the number of searches for flu-related terms. But Google Flu Trends interpreted the data as predicting a flu outbreak twice as big as federal public health officials expected and far larger than what actually happened.
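A toy numerical sketch (not Google's actual model) shows how a doubling of search volume caused by the interface change, rather than by illness, doubles the estimate:

```python
# Toy version of the Flu Trends failure mode. The ratio below is
# invented; it stands in for whatever mapping the real system learned
# from historical search and case data.
searches_per_case = 3.0            # learned from historical data

def estimated_cases(search_volume: float) -> float:
    return search_volume / searches_per_case

true_cases = 10_000
old_volume = true_cases * searches_per_case   # 30,000 searches, pre-update
new_volume = old_volume * 2                   # suggestion feature doubles searches

print(estimated_cases(old_volume))   # 10000.0 -- matches reality
print(estimated_cases(new_volume))   # 20000.0 -- twice the actual outbreak
```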

Criminal justice data are biased

The failure of the Google Flu Trends system was a result of one kind of flawed data – information biased by factors other than what was being measured. It’s much harder to identify bias in criminal justice prediction models. In part, this is because police data aren’t collected uniformly, and in part it’s because what data police track reflect longstanding institutional biases along income, race and gender lines….(More)”.
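A toy simulation can illustrate the feedback dynamic; it is far simpler than the Human Rights Data Analysis Group's actual analyses. Two neighborhoods have identical true crime rates, but patrols follow the historical record and crimes are recorded only where patrols go, so the initial imbalance never corrects itself:

```python
# Toy patrol-allocation feedback loop. Both neighborhoods have the
# SAME true crime rate; only the starting record differs.
import random

random.seed(1)
counts = {"A": 60, "B": 40}        # historical recorded incidents; A over-policed
true_rate = {"A": 0.5, "B": 0.5}   # identical chance of a crime per patrol visit

for day in range(10_000):
    total = counts["A"] + counts["B"]
    # Patrols are dispatched in proportion to recorded, not true, crime.
    patrol = "A" if random.random() < counts["A"] / total else "B"
    # A crime enters the data set only if a patrol is there to record it.
    if random.random() < true_rate[patrol]:
        counts[patrol] += 1

print(counts)  # "A" still dominates the record; the bias is self-sustaining
```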

The law is adapting to a software-driven world


From the Financial Times: “When the investor Marc Andreessen wrote in 2011 that “software is eating the world,” his point was a contentious one. He argued that the boundary between technology companies and the rest of industry was becoming blurred, and that the “information economy” would supplant the physical economy in ways that were not entirely obvious. Six years later, software’s dominance is a fact of life. What it has yet to eat, however, is the law. If almost every sector of society has been exposed to the headwinds of the digital revolution, governments and the legal profession have not. But that is about to change. The rise of complex software systems has led to new legal challenges. Take, for example, the artificial intelligence systems used in self-driving cars. Last year, the US Department of Transportation wrote to Google stating that the government would “interpret ‘driver’ in the context of Google’s described motor-vehicle design” as referring to the car’s artificial intelligence. So what does this mean for the future of law?

It means that regulations traditionally meant to govern the way that humans interact are adapting to a world that has been eaten by software, as Mr Andreessen predicted. And this is about much more than self-driving cars. Complex algorithms are used in mortgage and credit decisions, in the criminal justice and immigration systems and in the realm of national security, to name just a few areas. The outcome of this shift is unlikely to be more lawyers writing more memos. Rather, new laws will start to become more like software — embedded within applications as computer code. As technology evolves, interpreting the law itself will become more like programming software.
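A minimal sketch of a rule embedded as code might look like the following; the rule, threshold, and names are invented for illustration and not drawn from any actual statute:

```python
# Hypothetical "law as code": a lending rule expressed as an
# executable check rather than as prose to be interpreted.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    annual_income: float
    existing_debt: float

MAX_DEBT_TO_INCOME = 0.43   # invented regulatory ceiling

def complies_with_lending_rule(app: LoanApplication, new_debt: float) -> bool:
    """Check the invented rule: total debt may not exceed 43 percent
    of annual income after the new loan is added."""
    ratio = (app.existing_debt + new_debt) / app.annual_income
    return ratio <= MAX_DEBT_TO_INCOME

print(complies_with_lending_rule(LoanApplication(80_000, 20_000), 10_000))  # True
```

Once a rule takes this form, applying it is execution and interpreting it is reading source, which is the shift the article describes.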

But there is more to this shift than technology alone. The fact is that law is both deeply opaque and unevenly accessible. The legal advice required to understand both what our governments are doing, and what our rights are, is only accessible to a select few. Studies suggest, for example, that an estimated 80 per cent of the legal needs of the poor in the US go unmet. To the average citizen, the inner workings of government have become more impenetrable over time. Granted, laws have been murky to average citizens for as long as governments have been around. But the level of disenchantment with institutions and the experts who run them is placing new pressures on governments to change their ways. The relationship between citizens and professionals — from lawyers to bureaucrats to climatologists — has become tinged with scepticism and suspicion. This mistrust is driven by the sense that society is stacked against those at the bottom — that knowledge is power, but that power costs money only a few can afford….(More)”.