Machine Learning Technologies and Their Inherent Human Rights Issues in Criminal Justice Contexts


Essay by Jamie Grace: “This essay is an introductory exploration of machine learning technologies and their inherent human rights issues in criminal justice contexts. These inherent human rights issues include privacy concerns, the chilling of freedom of expression, problems around potential for racial discrimination, and the rights of victims of crime to be treated with dignity.

This essay is built around three case studies – with the first on the digital ‘mining’ of rape complainants’ mobile phones for evidence for disclosure to defence counsel. This first case study seeks to show how AI or machine learning tech might hypothetically either ease or inflame some of the tensions involved for human rights in this context. The second case study is concerned with the human rights challenges of facial recognition of suspects by police forces, using automated algorithms (live facial recognition) in public places. The third case study is concerned with the development of useful self-regulation in algorithmic governance practices in UK policing. This essay concludes with an emphasis on the need for the ‘politics of information’ (Lyon, 2007) to catch up with the ‘politics of public protection’ (Nash, 2010)….(More)”.

The Trace


About: “The Trace is an independent, nonpartisan, nonprofit newsroom dedicated to shining a light on America’s gun violence crisis….

Every year in our country, a firearm is used in nearly 500,000 crimes, resulting in the deaths and injuries of more than 110,000 people. Shootings devastate families and communities and drain billions of dollars from local, state, and federal governments. Meanwhile, the problem of gun violence has been compounded by another: the shortage of knowledge about the issue…

Data and records are shielded from public view—or don’t exist. Gun-lobby backed restrictions on federal gun violence research deprive policymakers and public health experts of potentially life-saving facts. Other laws limit the information that law enforcement agencies can share on illegal guns and curb litigation that could allow scrutiny of industry practices….

We make the problem clear. In partnership with Slate, we built an eye-opening, interactive map plotting the locations of nearly 40,000 incidents of gun violence nationwide. The feature received millions of pageviews and generated extensive local coverage and social media conversation. “So many shootings and deaths, so close to my home,” wrote one reader. “And I hadn’t even heard about most of them.”…(More)”.

Using speculative design to explore the future of Open Justice


UK Policy Lab: “Open justice is the principle that ‘justice should not only be done, but should manifestly and undoubtedly be seen to be done’(1). It is a very well-established principle within our justice system; however, new digital tools and approaches are creating new opportunities and potential challenges which necessitate significant rethinking of how open justice is delivered.

In this context, HM Courts & Tribunals Service (HMCTS) wanted to consider how the principle of open justice should be delivered in the future. As well as seeking input from those who most commonly work with courtrooms, like judges, court staff and legal professionals, they also wanted to explore a range of public views. HMCTS asked us to create a methodology which could spark a wide-ranging conversation about open justice, collecting diverse and divergent perspectives….

We approached this challenge by using speculative design to explore possible and desirable futures with citizens. In this blog we will share what we did (including how you can re-use our materials and approach), what we’ve learned, and what we’ll be experimenting with from here.

What we did

We ran 4 groups of 10 to 12 participants each. We spent the first 30 minutes discussing what participants understood and thought about Open Justice in the present. We spent the next 90 minutes using provocations to immerse them in a range of fictional futures, in which the justice system is accessed through a range of digital platforms.

The provocations were designed to:

  • engage even those with no prior interest, experience or knowledge of Open Justice
  • be reusable
  • not look like ‘finished’ government policy – we wanted to find out more about desirable outcomes
  • as far as possible, provoke discussion without leading
Open Justice ‘provocation cards’ used with focus groups

Using provocations to help participants think about the future allowed us to distill common principles which HMCTS can use when designing specific delivery mechanisms.

We hope the conversation can continue. HMCTS have published the provocations on their website. We encourage people to reuse them, or to use them to create their own….(More)”.

The Extended Corporate Mind: When Corporations Use AI to Break the Law


Paper by Mihailis Diamantis: “Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms.

This Article is the first to tackle the full extent of the growing doctrinal gap left by algorithmic corporate misconduct. To hold corporations accountable, the law must sometimes treat them as if they “know” information stored on their servers and “intend” decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward. The “extended mind thesis” complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought, e.g., by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like rolodexes and calculators. This Article adapts the thesis to corporate law. It motivates and proposes a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought. The law needs such an innovation if it is to hold future corporations to account for their most serious harms….(More)”.

What statistics can and can’t tell us about ourselves


Hannah Fry at The New Yorker: “Harold Eddleston, a seventy-seven-year-old from Greater Manchester, was still reeling from a cancer diagnosis he had been given that week when, on a Saturday morning in February, 1998, he received the worst possible news. He would have to face the future alone: his beloved wife had died unexpectedly, from a heart attack.

Eddleston’s daughter, concerned for his health, called their family doctor, a well-respected local man named Harold Shipman. He came to the house, sat with her father, held his hand, and spoke to him tenderly. Pushed for a prognosis as he left, Shipman replied portentously, “I wouldn’t buy him any Easter eggs.” By Wednesday, Eddleston was dead; Dr. Shipman had murdered him.

Harold Shipman was one of the most prolific serial killers in history. In a twenty-three-year career as a mild-mannered and well-liked family doctor, he injected at least two hundred and fifteen of his patients with lethal doses of opiates. He was finally arrested in September, 1998, six months after Eddleston’s death.

David Spiegelhalter, the author of an important and comprehensive new book, “The Art of Statistics” (Basic), was one of the statisticians tasked by the ensuing public inquiry to establish whether the mortality rate of Shipman’s patients should have aroused suspicion earlier. Then a biostatistician at Cambridge, Spiegelhalter found that Shipman’s excess mortality—the number of his older patients who had died in the course of his career over the number that would be expected of an average doctor’s—was a hundred and seventy-four women and forty-nine men at the time of his arrest. The total closely matched the number of victims confirmed by the inquiry….
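The excess-mortality comparison Spiegelhalter describes (observed deaths among a doctor's patients, over and above the number an average doctor would expect) can be sketched as follows. All figures in this sketch are invented for illustration; the actual inquiry analysis used age- and sex-stratified mortality rates over Shipman's whole career:

```python
# Illustrative sketch of an excess-mortality comparison, in the spirit of the
# Shipman inquiry analysis. All numbers here are made up for demonstration;
# the real analysis used age- and sex-stratified death rates.

def excess_mortality(observed_deaths, patients, expected_rate):
    """Observed deaths minus the number an average doctor would expect."""
    expected = patients * expected_rate
    return observed_deaths - expected

# Hypothetical practice: 1,500 older female patients over a career, with an
# invented baseline death rate of 10% for the period in question.
excess_women = excess_mortality(observed_deaths=324, patients=1500, expected_rate=0.10)
print(excess_women)  # 174.0 under these made-up figures
```

The substance of the real analysis lies in estimating the expected rate well, which is exactly what stratifying by age, sex and period is for.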

In 1825, the French Ministry of Justice ordered the creation of a national collection of crime records. It seems to have been the first of its kind anywhere in the world—the statistics of every arrest and conviction in the country, broken down by region, assembled and ready for analysis. It’s the kind of data set we take for granted now, but at the time it was extraordinarily novel. This was an early instance of Big Data—the first time that mathematical analysis had been applied in earnest to the messy and unpredictable realm of human behavior.

Or maybe not so unpredictable. In the early eighteen-thirties, a Belgian astronomer and mathematician named Adolphe Quetelet analyzed the numbers and discovered a remarkable pattern. The crime records were startlingly consistent. Year after year, irrespective of the actions of courts and prisons, the number of murders, rapes, and robberies reached almost exactly the same total. There is a “terrifying exactitude with which crimes reproduce themselves,” Quetelet said. “We know in advance how many individuals will dirty their hands with the blood of others. How many will be forgers, how many poisoners.”

To Quetelet, the evidence suggested that there was something deeper to discover. He developed the idea of a “Social Physics,” and began to explore the possibility that human lives, like planets, had an underlying mechanistic trajectory. There’s something unsettling in the idea that, amid the vagaries of choice, chance, and circumstance, mathematics can tell us something about what it is to be human. Yet Quetelet’s overarching findings still stand: at some level, human life can be quantified and predicted. We can now forecast, with remarkable accuracy, the number of women in Germany who will choose to have a baby each year, the number of car accidents in Canada, the number of plane crashes across the Southern Hemisphere, even the number of people who will visit a New York City emergency room on a Friday evening….(More)”

Study finds Big Data eliminates confidentiality in court judgements


Swissinfo: “Swiss researchers have found that algorithms that mine large swaths of data can eliminate anonymity in federal court rulings. This could have major ramifications for transparency and privacy protection.

This is the result of a study by the University of Zurich’s Institute of Law, published in the legal journal “Jusletter” and shared by Swiss public television SRF on Monday.

The study relied on a “web scraping technique” or mining of large swaths of data. The researchers created a database of all decisions of the Supreme Court available online from 2000 to 2018 – a total of 122,218 decisions. Additional decisions from the Federal Administrative Court and the Federal Office of Public Health were also added.

Using an algorithm and manual searches for connections between data, the researchers were able to de-anonymise (that is, reveal the identities in) 84% of the judgments in less than an hour.

In this specific study, the researchers were able to identify the pharma companies and medicines hidden in the documents of the complaints filed in court.  

Study authors say that this could have far-reaching consequences for transparency and privacy. One of the study’s co-authors, Kerstin Noëlle Vokinger, professor of law at the University of Zurich, explains that, “With today’s technological possibilities, anonymisation is no longer guaranteed in certain areas”. The researchers say the technique could be applied to any publicly available database.

Vokinger added that there is a need to balance necessary transparency with safeguarding the personal rights of individuals.

Adrian Lobsiger, the Swiss Federal Data Protection Commissioner, told SRF that this confirms his view that facts may need to be treated as personal data in the age of technology….(More)”.
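The re-identification technique the Zurich researchers describe, linking anonymised rulings to other public records through shared attributes, can be roughly sketched as follows. The records, field names and matching rule here are all hypothetical stand-ins, not the study's actual data or pipeline (which combined web scraping, an algorithm and manual checks):

```python
# Toy sketch of re-identification by record linkage: an anonymised ruling
# mentions a drug and a decision year; a public registry links drugs to the
# companies holding their marketing authorisations. Joining the two reveals
# the likely party. All records below are invented for illustration.

anonymised_rulings = [
    {"case": "C-2017/123", "drug": "drug_x", "year": 2017},
    {"case": "C-2018/456", "drug": "drug_y", "year": 2018},
]

public_registry = [
    {"drug": "drug_x", "holder": "PharmaCo A", "approved": 2016},
    {"drug": "drug_y", "holder": "PharmaCo B", "approved": 2017},
]

def deanonymise(rulings, registry):
    """Match each ruling to a registry entry via shared attributes."""
    matches = {}
    for ruling in rulings:
        for entry in registry:
            # A ruling can only concern a drug already on the market.
            if entry["drug"] == ruling["drug"] and entry["approved"] <= ruling["year"]:
                matches[ruling["case"]] = entry["holder"]
    return matches

print(deanonymise(anonymised_rulings, public_registry))
# {'C-2017/123': 'PharmaCo A', 'C-2018/456': 'PharmaCo B'}
```

The point of the sketch is that neither dataset is sensitive on its own; the join is what defeats the anonymisation.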

Investigators Use New Strategy to Combat Opioid Crisis: Data Analytics


Byron Tau and Aruna Viswanatha in the Wall Street Journal: “When federal investigators got a tip in 2015 that a health center in Houston was distributing millions of doses of opioid painkillers, they tried a new approach: look at the numbers.

State and federal prescription and medical billing data showed a pattern of overprescription, giving authorities enough ammunition to send an undercover Drug Enforcement Administration agent. She found a crowded waiting room and armed security guards. After a 91-second appointment with the sole doctor, the agent paid $270 at the cash-only clinic and walked out with 100 10mg pills of the powerful opioid hydrocodone.

The subsequent prosecution of the doctor and the clinic owner, who were sentenced last year to 35 years in prison, laid the groundwork for a new data-driven Justice Department strategy to help target one of the worst public-health crises in the country. Prosecutors expanded the pilot program from Houston to the hard-hit Appalachian region in early 2019. Within months, the effort resulted in the indictments of dozens of doctors, nurses, pharmacists and others. Two-thirds of them had been identified through analyzing the data, a Justice Department official said. A quarter of defendants were expected to plead guilty, according to the Justice Department, and additional indictments through the program are expected in the coming weeks.

“These are doctors behaving like drug dealers,” said Brian Benczkowski, head of the Justice Department’s criminal division, who oversaw the expansion.

“They’ve been operating as though nobody could see them for a long period of time. Now we have the data,” Mr. Benczkowski said.

The Justice Department’s fraud section has been using data analytics in health-care prosecutions for several years—combing through Medicare and Medicaid billing data for evidence of fraud, and deploying the strategy in cities around the country that saw outlier billings. In 2018, the health-care fraud unit charged more than 300 people with fraud totaling more than $2 billion, according to the Justice Department.
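The outlier screening described above, combing billing data for prescribers far outside the norm, can be sketched with a simple z-score filter. The counts and threshold below are illustrative assumptions, not the Justice Department's actual methodology:

```python
import statistics

# Toy screen for outlier prescribers: flag anyone whose monthly opioid
# prescription count sits well above the group mean. All counts are
# invented for illustration.
monthly_scripts = {
    "dr_a": 120, "dr_b": 95, "dr_c": 110, "dr_d": 105,
    "dr_e": 130, "dr_f": 98, "dr_g": 2400,  # extreme outlier
}

def flag_outliers(counts, z_threshold=2.0):
    """Return prescribers more than z_threshold standard deviations above the mean."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    # Note: a single extreme value inflates the standard deviation itself,
    # which is why real analyses tend to prefer more robust statistics
    # (medians, peer-group comparisons) over a plain z-score.
    return [who for who, n in counts.items() if (n - mean) / stdev > z_threshold]

print(flag_outliers(monthly_scripts))  # ['dr_g']
```

A flagged prescriber is a lead for investigators, not evidence in itself; in the Houston case the data pointed authorities at the clinic, and an undercover visit supplied the proof.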

But using the data to combat the opioid crisis, which is ravaging communities across the country, is a new development for the department, which has made tackling the epidemic a key priority in the Trump administration….(More)”.

E-Nudging Justice: The Role of Digital Choice Architecture in Online Courts


Paper by Ayelet Sela: “Justice systems around the world are launching online courts and tribunals in order to improve access to justice, especially for self-represented litigants (SRLs). Online courts are designed to handhold SRLs throughout the process and empower them to make procedural and substantive decisions. To that end, they present SRLs with streamlined and simplified procedures and employ a host of user interface and user experience (UI/UX) design strategies. Focusing on these features, the article analyzes online courts as digital choice environments that shape SRLs’ decisions, inputs and actions, and considers their implications on access to justice, due process and the impartiality of courts. Accordingly, the article begins to close the knowledge gap regarding choice architecture in online legal proceedings. 

Using examples from current online courts, the article considers how mechanisms such as choice overload, display, colorfulness, visual complexity, and personalization influence SRLs’ choices and actions. The analysis builds on research in cognitive psychology and behavioral economics that shows that subtle changes in the context in which decisions are made steer (nudge) people to choose a particular option or course of action. It is also informed by recent studies that capture the effect of digital choice architecture on users’ choices and behaviors in online settings. The discussion clarifies that seemingly naïve UI/UX features can strongly influence users of online courts, in a manner that may be at odds with their institutional commitment to impartiality and due process. Moreover, the article challenges the view that online court interfaces (and those of other online legal services, for that matter) should be designed to maximize navigability, intuitiveness and user-friendliness. It argues that these design attributes involve the risk of nudging SRLs to make uninformed, non-deliberate, and biased decisions, possibly infringing their autonomy and self-determination. Accordingly, the article suggests that choice architecture in online courts should aim to encourage reflective participation and informed decision-making. Specifically, its goal should be to improve SRLs’ ability to identify and consider options, and advance their own — inherently diverse — interests. In order to mitigate the abovementioned risks, the article proposes an initial evaluation framework, measures, and methodologies to support evidence-based and ethical choice architecture in online courts….(More)”.

Law as Data: Computation, Text, and the Future of Legal Analysis


Book edited by Michael A. Livermore and Daniel N. Rockmore: “In recent years, the digitization of legal texts, combined with developments in the fields of statistics, computer science, and data analytics, have opened entirely new approaches to the study of law. This volume explores the new field of computational legal analysis, an approach marked by its use of legal texts as data. The emphasis herein is on work that pushes methodological boundaries, either by using new tools to study longstanding questions within legal studies or by identifying new questions in response to developments in data availability and analysis.

By using the text and underlying data of legal documents as the direct objects of quantitative statistical analysis, Law as Data introduces the legal world to the broad range of computational tools already proving themselves relevant to law scholarship and practice, and highlights the early steps in what promises to be an exciting new approach to studying the law….(More)”.
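One minimal, hypothetical illustration of the "law as data" approach is to reduce legal texts to term-count vectors and compare them numerically. The opinion snippets and the bag-of-words representation below are simplifying assumptions for the sketch, not a method taken from the book:

```python
from collections import Counter
import math

# Toy "law as data" pipeline: turn two (invented) opinion snippets into
# bag-of-words vectors, then measure how similar they are.

opinion_a = "the statute requires notice and the statute requires a hearing"
opinion_b = "the statute requires a hearing before any deprivation"

def vectorise(text):
    """Represent a text as word counts (a bag-of-words vector)."""
    return Counter(text.lower().split())

def cosine_similarity(v1, v2):
    """Cosine of the angle between two count vectors: 0 = disjoint, 1 = identical."""
    shared = set(v1) & set(v2)
    dot = sum(v1[t] * v2[t] for t in shared)
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2)

score = cosine_similarity(vectorise(opinion_a), vectorise(opinion_b))
print(round(score, 2))  # 0.71 for these snippets
```

Scaled from two snippets to an entire corpus of decisions, the same idea underlies tasks such as clustering opinions by topic or tracing the spread of doctrinal language between courts.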

Review into bias in algorithmic decision-making


Interim Report by the Centre for Data Ethics and Innovation (UK): The use of algorithms has the potential to improve the quality of decision-making by increasing the speed and accuracy with which decisions are made. If designed well, they can reduce human bias in decision-making processes. However, as the volume and variety of data used to inform decisions increases, and the algorithms used to interpret the data become more complex, concerns are growing that without proper oversight, algorithms risk entrenching and potentially worsening bias.

The way in which decisions are made, the potential biases which they are subject to and the impact these decisions have on individuals are highly context dependent. Our Review focuses on exploring bias in four key sectors: policing, financial services, recruitment and local government. These have been selected because they all involve significant decisions being made about individuals, there is evidence of the growing uptake of machine learning algorithms in the sectors and there is evidence of historic bias in decision-making within these sectors. This Review seeks to answer three sets of questions:

  1. Data: Do organisations and regulators have access to the data they require to adequately identify and mitigate bias?
  2. Tools and techniques: What statistical and technical solutions are available now or will be required in future to identify and mitigate bias and which represent best practice?
  3. Governance: Who should be responsible for governing, auditing and assuring these algorithmic decision-making systems?

Our work to date has led to some emerging insights that respond to these three sets of questions and will guide our subsequent work….(More)”.