Why Predictive Algorithms are So Risky for Public Sector Bodies


Paper by Madeleine Waller and Paul Waller: “This paper collates multidisciplinary perspectives on the use of predictive analytics in government services. It moves away from the hyped narratives of “AI” or “digital”, and the broad usage of the notion of “ethics”, to focus on highlighting the possible risks of the use of prediction algorithms in public administration. Guidelines for AI use in public bodies are currently available; however, there is little evidence that these are being followed or that they are being written into new mandatory regulations. The use of algorithms is not just an issue of whether they are fair and safe to use, but of whether they abide by the law and whether they actually work.

Particularly in public services, there are many things to consider before implementing predictive analytics algorithms, as flawed use in this context can lead to harmful consequences for citizens, individually and collectively, and for public sector workers. All stages of the implementation process of algorithms are discussed, from the specification of the problem and model design through to the context of their use and the outcomes.

Evidence is drawn from case studies of use in child welfare services, the US Justice System and UK public examination grading in 2020. The paper argues that the risks and drawbacks of such technological approaches need to be more comprehensively understood, and testing done in the operational setting, before implementing them. The paper concludes that while algorithms may be useful in some contexts and help to solve problems, it seems those relating to predicting real life have a long way to go before being safe and trusted for use. As “ethics” are located in time, place and social norms, the authors suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection — all within the principles of the rule of law — provide the basis for appraising the use of algorithms, with maladministration being the primary concern rather than a breach of “ethics”….(More)”

Review into bias in algorithmic decision-making


Report by the Centre for Data Ethics and Innovation (CDEI) (UK): “Unfair biases, whether conscious or unconscious, can be a problem in many decision-making processes. This review considers the impact that an increasing use of algorithmic tools is having on bias in decision-making, the steps that are required to manage risks, and the opportunities that better use of data offers to enhance fairness. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making…(More)”.

unBail


About: “The criminal legal system is a maze of laws, language, and unwritten rules that lawyers are trained to maneuver to represent defendants.

However, according to the Bureau of Justice Statistics, only 27% of county public defender offices meet national caseload recommendations for cases per attorney, meaning that most public defenders are overworked, leaving their clients underrepresented.

Defendants must complete an estimated 200 discrete tasks during their legal proceeding. This leaves them overwhelmed, lost, and profoundly disadvantaged while attempting to navigate the system….

We have… created a product that acts as the trusted advisor for defendants and their families as they navigate the criminal legal system. We aim to deliver valuable and relevant legal information (but not legal advice) to the user in plain language, empowering them to advocate for themselves, proactively plan for the future, and access social services if necessary. The user is also encouraged to give feedback on their experience at each step of the process in the hope that this can be used to improve the system….(More)”

Trace Labs


Trace Labs is a nonprofit organization whose mission is to accelerate the family reunification of missing persons while training members in the tradecraft of open source intelligence (OSINT)….We crowdsource open source intelligence through both the Trace Labs OSINT Search Party CTFs and Ongoing Operations with our global community. Our highly skilled intelligence analysts then triage the data collected to produce actionable intelligence reports on each missing persons subject. These intelligence reports give the law enforcement agencies that we work with the ability to quickly see any new details required to reopen a cold case and/or take immediate action on a missing subject.(More)”

Predict and Surveil: Data, Discretion, and the Future of Policing


Book by Sarah Brayne: “The scope of criminal justice surveillance has expanded rapidly in recent decades. At the same time, the use of big data has spread across a range of fields, including finance, politics, healthcare, and marketing. While law enforcement’s use of big data is hotly contested, very little is known about how the police actually use it in daily operations and with what consequences.

In Predict and Surveil, Sarah Brayne offers an unprecedented, inside look at how police use big data and new surveillance technologies, leveraging on-the-ground fieldwork with one of the most technologically advanced law enforcement agencies in the world: the Los Angeles Police Department. Drawing on original interviews and ethnographic observations, Brayne examines the causes and consequences of algorithmic control. She reveals how the police use predictive analytics to deploy resources, identify suspects, and conduct investigations; how the adoption of big data analytics transforms police organizational practices; and how the police themselves respond to these new data-intensive practices. Although big data analytics holds potential to reduce bias and increase efficiency, Brayne argues that it also reproduces and deepens existing patterns of social inequality, threatens privacy, and challenges civil liberties.

A groundbreaking examination of the growing role of the private sector in public policing, this book challenges the way we think about the data-heavy supervision law enforcement increasingly imposes upon civilians in the name of objectivity, efficiency, and public safety….(More)”.

High-Stakes AI Decisions Need to Be Automatically Audited


Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?

Auditable AI has several advantages over explainable AI. First, having a neutral third party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, this means the producers of the software do not have to expose trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.

Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?

Early examples of auditable AI are already having a positive impact. The ACLU recently revealed that Amazon’s auditable facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.

In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.

Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.
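
To make this concrete, below is a minimal sketch of the kind of external, counterfactual query the piece describes, assuming the auditor can call the deployed model as a black box. The scoring function, attribute names and decision threshold are hypothetical illustrations, not any vendor’s actual interface.

```python
# Minimal counterfactual audit sketch: probe a black-box scoring function
# with hypothetical cases that differ only in one sensitive attribute.
# The toy model below is a stand-in; in practice the auditor would query
# the deployed system's external auditing interface.

from typing import Callable, Dict, List


def counterfactual_flip_rate(
    score: Callable[[Dict], float],  # black-box model: case -> risk score
    cases: List[Dict],               # real or synthetic probe cases
    attribute: str,                  # e.g. "gender" or "neighborhood"
    swap: Dict[str, str],            # e.g. {"male": "female", "female": "male"}
    threshold: float = 0.5,          # score at or above which the decision is adverse
) -> float:
    """Fraction of probe cases whose decision changes when only the
    sensitive attribute is altered."""
    flipped = 0
    for case in cases:
        altered = dict(case)
        altered[attribute] = swap.get(case[attribute], case[attribute])
        flipped += (score(case) >= threshold) != (score(altered) >= threshold)
    return flipped / len(cases)


if __name__ == "__main__":
    # Hypothetical stand-in model that (undesirably) weights the sensitive attribute.
    def toy_score(case: Dict) -> float:
        return 0.4 + 0.2 * (case["gender"] == "male") + 0.3 * (case["priors"] > 2)

    probes = [{"gender": g, "priors": p} for g in ("male", "female") for p in range(5)]
    rate = counterfactual_flip_rate(
        toy_score, probes, "gender", {"male": "female", "female": "male"}
    )
    print(f"Decisions that flip when gender is swapped: {rate:.0%}")
```

A non-trivial flip rate on a protected attribute, measured over a batch of probe cases, is the kind of fine-grained signal an auditing interface could surface to a neutral third party without exposing the model’s internals.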

Behavioral nudges reduce failure to appear for court


Paper by Alissa Fishbane, Aurelie Ouss and Anuj K. Shah: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses….(More)”

Prediction paradigm: the human price of instrumentalism


Editorial by Karamjit S. Gill at AI&Society: “Reflecting on the rise of instrumentalism, we learn how it has travelled across the academic boundary to the high-tech culture of Silicon Valley. At its core lies the prediction paradigm. Under the cloak of the inevitability of technology, we are being offered the prediction paradigm as the technological dream of public safety, national security, fraud detection, and even disease control and diagnosis. For example, there are offers of facial recognition systems for predicting the behaviour of citizens, of surveillance drones for ‘biometric readings’, and of ‘Predictive Policing’ as an effective tool to predict and reduce crime rates. A recent critical review of the prediction technology (Coalition for Critical Technology 2020) brings to our notice the discriminatory consequences of predicting “criminality” using biometric and/or criminal legal data.

The review outlines the specific ways crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating their risks. We learn that neither prediction architectures nor machine learning programs are neutral; they often uncritically inherit, accept and incorporate dominant cultural and belief systems, which are then normalised. For example, “predictions” based on finding correlations between facial features and criminality are accepted as valid, interpreted as the product of intelligent and “objective” technical assessments. Furthermore, the data from predictive outcomes and recommendations are fed back into the system, thereby reproducing and confirming biased correlations. The consequence of this feedback loop, especially in facial recognition architectures, combined with a belief in “evidence based” diagnosis, is that it leads to ‘widespread mischaracterizations of criminal justice data’ that ‘justifies the exclusion and repression of marginalized populations through the construction of “risky” or “deviant” profiles’…(More).
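
As an editorial aside on the feedback loop described above, here is a toy simulation under deliberately simplified, purely illustrative assumptions (it models no deployed system): two groups with identical true incident rates, a slightly skewed historical record, and observation effort that follows the record.

```python
# Toy feedback-loop simulation: two groups have IDENTICAL true incident
# rates, but the historical record starts slightly skewed. Observation
# effort is allocated in proportion to recorded incidents, and incidents
# are only recorded where someone is looking, so the skewed record keeps
# "confirming" itself. All numbers are illustrative assumptions.

import random

random.seed(0)

TRUE_RATE = {"A": 0.10, "B": 0.10}   # identical ground truth for both groups
recorded = {"A": 60, "B": 40}        # historical records, initially skewed
EFFORT_PER_ROUND = 2_000             # units of observation to allocate each round

for _ in range(50):
    total = sum(recorded.values())
    for group in recorded:
        # Deployment follows the existing (skewed) data...
        effort = int(EFFORT_PER_ROUND * recorded[group] / total)
        # ...and incidents are only recorded where effort is directed.
        recorded[group] += sum(
            random.random() < TRUE_RATE[group] for _ in range(effort)
        )

share_a = recorded["A"] / sum(recorded.values())
print(f"Recorded incidents: {recorded}, share attributed to group A: {share_a:.2f}")
```

Because the record only grows where effort is directed, group A’s share of recorded incidents stays near its initial 60% and the absolute gap widens, even though the underlying rates are equal: the data appear to confirm the bias they were seeded with.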

Why Hundreds of Mathematicians Are Boycotting Predictive Policing


Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.

These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….

[Figure: flow chart showing how predictive policing works. Credit: RAND]

According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle (shown above). In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions, usually taking the form of an increase in resources at certain sites at certain times. The fourth step is, ideally, reducing crime.

“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”
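
Read purely as structure, that cycle is a loop of four steps; the placeholder sketch below is an editorial illustration with hypothetical function names and data shapes, not RAND’s or any vendor’s actual pipeline.

```python
# Structural sketch of the four-part predictive policing cycle described
# in the RAND briefing. Purely illustrative: names and shapes are hypothetical.

from typing import Dict, List


def collect_data() -> List[Dict]:
    """Step 1: gather data on crimes, incidents, and offenders."""
    return []  # placeholder


def analyze_and_predict(data: List[Dict]) -> Dict[str, float]:
    """Step 2: analyze the data to forecast likely times and places."""
    return {}  # placeholder, e.g. {"area_id": predicted_risk}


def intervene(predictions: Dict[str, float]) -> List[str]:
    """Step 3: act on predictions, typically extra resources at certain sites and times."""
    return []  # placeholder list of deployments


def assess_and_adjust(data: List[Dict], deployments: List[str]) -> None:
    """Step 4: check immediate effects, track longer-term changes, modify operations."""
    # This is the step the briefing stresses: examine collected data,
    # perform additional analysis, and modify operations as needed.


def run_cycle(rounds: int) -> None:
    for _ in range(rounds):
        data = collect_data()
        predictions = analyze_and_predict(data)
        deployments = intervene(predictions)
        assess_and_adjust(data, deployments)
```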

In many cases, predictive policing software was meant to be a tool to augment police departments facing budget crises, with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.

But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.

Wrongfully Accused by an Algorithm


Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….

The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.

Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.

The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.

When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.

“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.

In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces….(More)”.