Index: Crime and Criminal Justice Data


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on crime and criminal justice data and was originally published in 2015.

This index provides information about the types of crime and criminal justice data collected, shared and used in the United States. Because data related to the criminal justice system is often unreliable, or simply missing, this index also highlights some of the issues that stand in the way of accessing useful and in-demand statistics.

Data Collections: National Crime Statistics

  • Number of incident-based crime datasets created by the Federal Bureau of Investigation (FBI): 2
  • Number of U.S. Statistical Agencies: 13
  • How many of those are focused on criminal justice: 1, the Bureau of Justice Statistics (BJS)
  • Number of data collections focused on criminal justice the BJS produces: 61
  • Number of federal-level APIs available for crime or criminal justice data: 1, the National Crime Victimization Survey (NCVS)
  • Frequency of the NCVS: annually
  • Number of Statistical Analysis Centers (SACs), organizations that are essentially clearinghouses for crime and criminal justice data for each state, the District of Columbia, Puerto Rico and the Northern Mariana Islands: 53

Open data, data use and the impact of those efforts

  • Number of datasets that are returned when “criminal justice” is searched for on Data.gov: 417, including federal-, state- and city-level datasets
  • Number of datasets that are returned when “crime” is searched for on Data.gov: 281
  • Percentage by which public complaints dropped after officers started wearing body cameras, according to a study done in Rialto, Calif.: 88
  • Percentage by which reported incidents of officer use of force fell after officers started wearing body cameras, according to the same Rialto, Calif. study: 5
  • Percentage by which crime decreased during an experiment in predictive policing in Shreveport, La.: 35
  • Number of crime data sets made available by the Seattle Police Department – generally seen as a leader in police data innovation – on the Seattle.gov website: 4
    • Major crime stats by category in aggregate
    • Crime trend reports
    • Precinct data by beat
    • State sex offender database
  • Number of datasets mapped by the Seattle Police Department: 2
    • 911 incidents
    • Police reports
  • Number of states where risk assessment tools must be used in pretrial proceedings to help determine whether an offender is released from jail before a trial: at least 11.
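The Data.gov counts above come from its catalog, which runs on the CKAN software; CKAN's standard `package_search` action exposes the same keyword counts programmatically. A minimal sketch (the endpoint below follows the usual CKAN convention for catalog.data.gov; live counts will have drifted from the 2015 figures above):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Data.gov's catalog runs on CKAN; package_search is the standard
# CKAN action for keyword queries. Endpoint assumed from that convention.
CKAN_SEARCH = "https://catalog.data.gov/api/3/action/package_search"

def search_url(query: str, rows: int = 0) -> str:
    """Build a package_search URL; rows=0 asks for the match count only."""
    return CKAN_SEARCH + "?" + urlencode({"q": query, "rows": rows})

def dataset_count(query: str) -> int:
    """Fetch how many catalog entries match a keyword query."""
    with urlopen(search_url(query)) as resp:
        return json.load(resp)["result"]["count"]

# dataset_count("criminal justice") returned 417 when this index was
# compiled in 2015; the count changes as the catalog changes.
```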

Police Data

  • Number of federally mandated databases that collect information about officer use of force or officer-involved shootings, nationwide: 0
  • The year a crime bill was passed that called for data on excessive force to be collected for research and statistical purposes, but has never been funded: 1994
  • Number of police departments that committed to being part of the White House’s Police Data Initiative: 21
  • Percentage of police departments surveyed in 2013 by the Office of Community Oriented Policing Services within the Department of Justice that were not using body cameras, and were therefore not collecting body camera data: 75

The criminal justice system

  • Parts of the criminal justice system where data about an individual can be created or collected: at least 6
    • Entry into the system (arrest)
    • Prosecution and pretrial
    • Sentencing
    • Corrections
    • Probation/parole
    • Recidivism

Sources

  • Crime Mapper. Philadelphia Police Department. Accessed August 24, 2014.

The Silo Effect – The Peril of Expertise and the Promise of Breaking Down Barriers


Book by Gillian Tett: “From award-winning columnist and journalist Gillian Tett comes a brilliant examination of how our tendency to create functional departments—silos—hinders our work…and how some people and organizations can break those silos down to unleash innovation.

One of the characteristics of industrial age enterprises is that they are organized around functional departments. This organizational structure results in both limited information and restricted thinking. The Silo Effect asks these basic questions: why do humans working in modern institutions collectively act in ways that sometimes seem stupid? Why do normally clever people fail to see risks and opportunities that later seem blindingly obvious? Why, as psychologist Daniel Kahneman put it, are we sometimes so “blind to our own blindness”?

Gillian Tett, journalist and senior editor for the Financial Times, answers these questions by plumbing her background as an anthropologist and her experience reporting on the financial crisis in 2008. In The Silo Effect, she shares eight different tales of the silo syndrome, spanning Bloomberg’s City Hall in New York, the Bank of England in London, Cleveland Clinic hospital in Ohio, UBS bank in Switzerland, Facebook in San Francisco, Sony in Tokyo, the BlueMountain hedge fund, and the Chicago police. Some of these narratives illustrate how foolishly people can behave when they are mastered by silos. Others, however, show how institutions and individuals can master their silos instead. These are stories of failure and success.

From ideas about how to organize office spaces and lead teams of people with disparate expertise, Tett lays bare the silo effect and explains how the ways people organize themselves, interact with each other, and imagine the world can take hold of an organization and lead it from institutional blindness to 20/20 vision. – (More)”

Ethics in Public Policy and Management: A global research companion


New book edited by Alan Lawton, Zeger van der Wal, and Leo Huberts: “Ethics in Public Policy and Management: A global research companion showcases the latest research from established and newly emerging scholars in the fields of public management and ethics. This collection examines the profound changes of the last 25 years, including the rise of New Public Management, New Public Governance and Public Value; how these have altered practitioners’ delivery of public services; and how academics think about those services.

Drawing on research from a broad range of disciplines, Ethics in Public Policy and Management looks to reflect on this changing landscape. With contributions from Asia, Australasia, Europe and the USA, the collection is grouped into five main themes:

  • theorising the practice of ethics;
  • understanding and combating corruption;
  • managing integrity;
  • ethics across boundaries;
  • expanding ethical policy domains.

This volume will prove thought-provoking for educators, administrators, policy makers and researchers across the fields of public management, public administration and ethics….(More)”

Review Federal Agencies on Yelp…and Maybe Get a Response


Yelp Official Blog: “We are excited to announce that Yelp has concluded an agreement with the federal government that will allow federal agencies and offices to claim their Yelp pages, read and respond to reviews, and incorporate that feedback into service improvements.

We encourage Yelpers to review any of the thousands of agency field offices, TSA checkpoints, national parks, Social Security Administration offices, landmarks and other places already listed on Yelp if you have good or bad feedback to share about your experiences. Not only is it helpful to others who are looking for information on these services, but you can actually make an impact by sharing your feedback directly with the source.

It’s clear Washington is eager to engage with people directly through social media. Earlier this year a group of 46 lawmakers called for the creation of a “Yelp for Government” in order to boost transparency and accountability, and Representative Ron Kind reiterated this call in a letter to the General Services Administration (GSA). Luckily for them, there’s no need to create a new platform now that government agencies can engage directly on Yelp.

As this agreement is fully implemented in the weeks and months ahead, we’re excited to help the federal government more directly interact with and respond to the needs of citizens and to further empower the millions of Americans who use Yelp every day.

In addition to working with the federal government, last week we announced our partnership with ProPublica to incorporate health care statistics and consumer opinion survey data onto the Yelp business pages of more than 25,000 medical treatment facilities. We’ve also partnered with local governments in expanding the LIVES open data standard to show restaurant health scores on Yelp….(More)”

Can big databases be kept both anonymous and useful?


The Economist: “….The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

But the ability to compare databases threatens to make a mockery of such protections. Participants in genomics projects, promised anonymity in exchange for their DNA, have been identified by simple comparison with electoral rolls and other publicly available information. The health records of a governor of Massachusetts were plucked from a database, again supposedly anonymous, of state-employee hospital visits using the same trick. Reporters sifting through a public database of web searches were able to correlate them in order to track down one, rather embarrassed, woman who had been idly searching for single men. And so on.
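The linkage attacks described above all work the same way: join the "anonymized" records to a public dataset on shared quasi-identifiers. A toy sketch of the technique behind the Massachusetts governor example (all records, names and field choices here are invented for illustration):

```python
# An "anonymized" health dataset still carries quasi-identifiers
# (ZIP code, birth date, sex) that also appear in a public voter roll.
anonymized_visits = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-02-14", "sex": "F", "diagnosis": "asthma"},
]

voter_roll = [
    {"name": "A. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "B. Jones", "zip": "02144", "dob": "1951-11-02", "sex": "F"},
]

def link(visits, voters):
    """Join the two datasets on the quasi-identifier triple."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    voters_by_key = {key(v): v["name"] for v in voters}
    return [
        {"name": voters_by_key[key(r)], "diagnosis": r["diagnosis"]}
        for r in visits
        if key(r) in voters_by_key
    ]

reidentified = link(anonymized_visits, voter_roll)
# A quasi-identifier combination unique in both datasets is enough to
# restore a name to a "de-identified" medical record.
```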

Each of these headline-generating stories creates a demand for more controls. But that, in turn, deals a blow to the idea of open data—that the electronic “data exhaust” people exhale more or less every time they do anything in the modern world is actually useful stuff which, were it freely available for analysis, might make that world a better place.

Of cake, and eating it

Modern cars, for example, record in their computers much about how, when and where the vehicle has been used. Comparing the records of many vehicles, says Viktor Mayer-Schönberger of the Oxford Internet Institute, could provide a solid basis for, say, spotting dangerous stretches of road. Similarly, an opening of health records, particularly in a country like Britain, which has a national health service, and cross-fertilising them with other personal data, might help reveal the multifarious causes of diseases like Alzheimer’s.

This is a true dilemma. People want both perfect privacy and all the benefits of openness. But they cannot have both. The stripping of a few details as the only means of assuring anonymity, in a world choked with data exhaust, cannot work. Poorly anonymised data are only part of the problem. What may be worse is that there is no standard for anonymisation. Every American state, for example, has its own prescription for what constitutes an adequate standard.

Worse still, devising a comprehensive standard may be impossible. Paul Ohm of Georgetown University, in Washington, DC, thinks that this is partly because the availability of new data constantly shifts the goalposts. “If we could pick an industry standard today, it would be obsolete in short order,” he says. Some data, such as those about medical conditions, are more sensitive than others. Some data sets provide great precision in time or place, others merely a year or a postcode. Each set presents its own dangers and requirements.

Fortunately, there are a few easy fixes. Thanks in part to the headlines, many now agree that public release of anonymised data is a bad move. Data could instead be released piecemeal, or kept in-house and accessible by researchers through a question-and-answer mechanism. Or some users could be granted access to raw data, but only in strictly controlled conditions.
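The article does not specify how such a question-and-answer mechanism would bound disclosure; one standard design, differential privacy, keeps the raw data in-house and adds calibrated noise to each aggregate answer. A minimal sketch with invented records (`noisy_count`, `epsilon` and the field names are illustrative, not from the source):

```python
import random

def noisy_count(records, predicate, epsilon=0.5, seed=None):
    """Answer a counting question about in-house raw data, perturbed with
    Laplace noise of scale 1/epsilon (drawn here as the difference of two
    exponential variates), so that no single record's presence can be
    confidently inferred from any one answer."""
    rng = random.Random(seed)
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

# Invented example: researchers never see the rows, only noisy totals.
visits = [{"age": 34, "condition": "flu"}, {"age": 71, "condition": "flu"},
          {"age": 52, "condition": "asthma"}]
answer = noisy_count(visits, lambda r: r["condition"] == "flu", seed=42)
```

Smaller `epsilon` means more noise per answer and stronger protection; the trade-off against accuracy is exactly the dilemma the article describes.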

All these approaches, though, are anathema to the open-data movement, because they limit the scope of studies. “If we’re making it so hard to share that only a few have access,” says Tim Althoff, a data scientist at Stanford University, “that has profound implications for science, for people being able to replicate and advance your work.”

Purely legal approaches might mitigate that. Data might come with what have been called “downstream contractual obligations”, outlining what can be done with a given data set and holding any onward recipients to the same standards. One perhaps draconian idea, suggested by Daniel Barth-Jones, an epidemiologist at Columbia University, in New York, is to make it illegal even to attempt re-identification….(More).”

Big data algorithms can discriminate, and it’s not clear what to do about it


At The Conversation: “This program had absolutely nothing to do with race…but multi-variable equations.”

That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.

Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.

There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that not only is this possible, but it’s happening already. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate….
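One formalization researchers do borrow (not named in the article) is the “four-fifths rule” from US employment law: a decision procedure is flagged for disparate impact when the favorable-outcome rate for one group falls below 80% of the rate for the most-favored group. A toy check, with invented outcomes:

```python
def selection_rate(outcomes):
    """Fraction of favorable decisions (1 = favorable, 0 = unfavorable)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's;
    values below 0.8 are flagged under the four-fifths rule."""
    lo, hi = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lo / hi

# Invented example outcomes for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% favorable
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% favorable

ratio = disparate_impact_ratio(group_a, group_b)  # 0.4 / 0.8 = 0.5
flagged = ratio < 0.8
```

Measuring this in practice, of course, requires exactly the outcome data that the article notes institutions are unwilling to release.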

While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.

Even when the Chicago Police Department was hit with a Freedom of Information Act request, it did not release its algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

Legal hurdles

Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argued together that US hiring law fails to address claims about discriminatory algorithms in hiring.

The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance….(More)”

What factors influence transparency in US local government?


Grichawat Lowatcharin and Charles Menifield at LSE Impact Blog: “The Internet has opened a new arena for interaction between governments and citizens, as it not only provides more efficient and cooperative ways of interacting, but also more efficient service delivery, and more efficient transaction activities. …But to what extent does increased Internet access lead to higher levels of government transparency? …While we found Internet access to be a significant predictor of Internet-enabled transparency in our simplest model, this finding did not hold true in our most extensive model. This does not negate the fact that the variable is an important factor in assessing transparency levels and Internet access. …. Our data shows that total land area, population density, minority percentage, educational attainment, and the council-manager form of government are statistically significant predictors of Internet-enabled transparency.  These findings both confirm and negate the findings of previous researchers. For example, while the effect of education on transparency appears to be the most consistent finding in previous research, we also noted that the rural/urban (population density) dichotomy and the education variable are important factors in assessing transparency levels. Hence, as governments create strategic plans that include growth models, they should not only consider the budgetary ramifications of growth, but also the fact that educated residents want more web based interaction with government. This finding was reinforced by a recent Census Bureau report indicating that some of the cities and counties in Florida and California had population increases greater than ten thousand persons per month during the period 2013-2014.

This article is based on the paper ‘Determinants of Internet-enabled Transparency at the Local Level: A Study of Midwestern County Web Sites’, in State and Local Government Review. (More)”

Mining Administrative Data to Spur Urban Revitalization


New paper by Ben Green presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: “After decades of urban investment dominated by sprawl and outward growth, municipal governments in the United States are responsible for the upkeep of urban neighborhoods that have not received sufficient resources or maintenance in many years. One of city governments’ biggest challenges is to revitalize decaying neighborhoods given only limited resources. In this paper, we apply data science techniques to administrative data to help the City of Memphis, Tennessee improve distressed neighborhoods. We develop new methods to efficiently identify homes in need of rehabilitation and to predict the impacts of potential investments on neighborhoods. Our analyses allow Memphis to design neighborhood-improvement strategies that generate greater impacts on communities. Since our work uses data that most US cities already collect, our models and methods are highly portable and inexpensive to implement. We also discuss the challenges we encountered while analyzing government data and deploying our tools, and highlight important steps to improve future data-driven efforts in urban policy….(More)”

Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government


New paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.

Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.

The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”

IBM using Watson to build a “SIRI for Cities”


At Fast Company: “A new app that incorporates IBM’s Watson cognitive computing platform is like Siri for ordering city services.

IBM said today that the city of Surrey, in British Columbia, Canada, has rolled out the new app, which leverages Watson’s sophisticated language and data analysis system to let residents use natural language to ask questions such as why their trash wasn’t picked up or how to find a lost cat.

Watson is best known as the computer system that autonomously vanquished the world’s best Jeopardy players during a highly publicized competition in 2011. In the years since, IBM has applied the system to a wide range of computing problems in industries like health care, banking, retail, and education. The system is based on Watson’s ability to understand natural language queries and to analyze huge data sets.

IBM recently rolled out a Watson-based tool designed to help people detect the tone of their writing.

Surrey worked with the developer Purple Forge to build the new city services app, which will be combined with the city’s existing “My Surrey” mobile and web tools. IBM said that residents can ask a wide range of questions on devices like smartphones, laptops, or even Apple Watches. Big Blue said Surrey’s app is the first time Watson has been utilized in a “citizen services” app.

The tool offers a series of frequently asked questions, but also allows residents in the city of nearly half a million to come up with their own. IBM said Surrey officials are hopeful that the app will help them be more responsive to residents’ concerns.

Among the services users can ask about are those provided by Surrey’s police and fire departments, animal control, parking enforcement, trash pickup, and others….(More)”