The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking to identify people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”
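
Taken at face value, the threshold rule described in the documents amounts to a simple decision function. Below is a minimal sketch of that logic, assuming a similarity score normalized to the 0-to-1 range; the function name and structure are illustrative and not drawn from the Delhi Police documents.

```python
# Illustrative sketch of the decision rule described in the documents
# (hypothetical names; the actual system and its scoring are not public).

def classify_match(similarity: float, threshold: float = 0.80) -> str:
    """Classify a face-recognition similarity score in [0, 1]."""
    if similarity >= threshold:
        # Treated as a "positive" match and pursued directly.
        return "positive"
    # Below-threshold results are not discarded: per the documents, they are
    # labeled "false positive" and remain subject to verification against
    # other corroborative evidence.
    return "false positive (subject to due verification)"

# Example: a 0.74 score still keeps the person under investigation.
print(classify_match(0.74))
```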

The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives


Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications for police forces of participation in the Standard, to identify rewards, risks, challenges for the police, and areas where the Standard could be improved, and therefore to contribute to the exploration of policy options for expansion of participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.

In India, your payment data could become evidence of dissent


Article by Nilesh Christopher: “Indian payments firm Razorpay is under fire for seemingly breaching customer privacy. Some have gone on to call the company a “sell out” for sharing users’ payment data with authorities without their consent. But is faulting Razorpay for complying with a legal request fair?

On June 19, Mohammed Zubair, co-founder of fact-checking outlet Alt News, was arrested for hurting religious sentiments over a tweet he posted in 2018. Investigating authorities, through legal diktats, have now gained access to payment data of donors supporting Alt News from payments processor Razorpay. (Police are now probing Alt News for accepting foreign donations. Alt News has denied the charge.) 

The data sharing has had a chilling effect. Civil society organization Internet Freedom Foundation, which uses Razorpay for donations, is exploring “additional payment platforms to offer choice and comfort to donors.” Many donors are worried that they might now become targets on account of their contributions. 

This has created a new faultline in the discourse around the weaponization of payment data by a state that has gained notoriety for cracking down on critics of Prime Minister Narendra Modi.

Faulting Razorpay for complying with a legal request is misguided. “I think Razorpay played it by the book,” said Dharmendra Chatur, partner at the law firm Poovayya & Co. “They sort of did what any reasonable person would do in this situation.” 

Under Section 91 of India’s Criminal Procedure Code, police authorities have the power to seek information or documents on the apprehension that a crime has been committed during the course of an inquiry, inspection, or trial. “You either challenge it or you comply. There’s no other option available [for Razorpay]. And who would want to just unnecessarily initiate litigation?” Chatur said…(More)”.

Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or various white collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news outlets in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers over a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.

Algorithm Claims to Predict Crime in US Cities Before It Happens


Article by Carrington York: “A new computer algorithm can now forecast crime in a big city near you — apparently. 

The algorithm, which was formulated by social scientists at the University of Chicago and touts 90% accuracy, divides cities into 1,000-square-foot tiles, according to a study published in Nature Human Behaviour. Researchers used historical data on violent crimes and property crimes from Chicago to test the model, which detects patterns over time in these tiled areas and tries to predict future events. It performed just as well using data from other big cities, including Atlanta, Los Angeles and Philadelphia, the study showed.
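
As a rough illustration of the tiling described above, here is a minimal sketch of binning incident records into spatial tiles and weekly counts, the kind of representation a tile-based forecasting model would consume. The tile size, field names, and sample data are assumptions for illustration, not the study's actual code or data.

```python
# Minimal sketch: bin incident records into spatial tiles and weekly counts.
# Tile size, field names, and data are illustrative, not from the study.
from collections import Counter
from datetime import date

TILE_DEG = 0.003  # roughly a few city blocks; illustrative only

incidents = [
    {"lat": 41.8781, "lon": -87.6298, "day": date(2019, 3, 4)},
    {"lat": 41.8809, "lon": -87.6355, "day": date(2019, 3, 6)},
]

def tile_id(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to a discrete grid tile."""
    return (int(lat / TILE_DEG), int(lon / TILE_DEG))

# Count events per (tile, ISO week); a sequence model would then look for
# temporal patterns in these per-tile series to forecast future weeks.
counts = Counter(
    (tile_id(e["lat"], e["lon"]), e["day"].isocalendar()[1]) for e in incidents
)
print(counts)
```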

The new tool contrasts with previous models for prediction, which depict crime as emerging from “hotspots” that spread to surrounding areas. Such an approach tends to miss the complex social environment of cities, as well as the nuanced relationship between crime and the effects of police enforcement, thus leaving room for bias, according to the report.

“It is hard to argue that bias isn’t there when people sit down and determine which patterns they will look at to predict crime because these patterns, by themselves, don’t mean anything,” said Ishanu Chattopadhyay, Assistant Professor of Medicine at the University of Chicago and senior author of the study. “But now, you can ask the algorithm complex questions like: ‘What happens to the rate of violent crime if property crimes go up?’”

But Emily M. Bender, professor of linguistics at the University of Washington, said in a series of tweets that the focus should be on targeting underlying inequities rather than on predictive policing, while also noting that the research appears to ignore securities fraud or environmental crimes…(More)”

Police Violence In Puerto Rico: Flooded With Data


Blog by Christine Grillo: “For María Mari-Narváez, a recent decision by the Supreme Court of Puerto Rico was both a victory and a moment of reckoning. The Court granted Kilómetro Cero, a citizen-led police accountability project in Puerto Rico, full access to every use-of-force report filed by the Puerto Rico Police Department since 2014. The decision will make it possible for advocates such as Mari to get a clear picture of how state police officers are using force, and when that use of force crosses the line into abuse. But the court victory flooded her small organization with data.

“We won, finally, and then I realized I was going to be receiving thousands of documents that I had zero capacity to process,” says Mari.

“One of the things that’s important to me when analyzing data is to find out where the gaps are, why those gaps exist, and what those gaps represent.” —Tarak Shah, data scientist

The Court made its decision in April 2021, and the police department started handing over PDF files in July. By the end, up to 10,000 documents could be turned over. In addition to incident reports, the police had to provide their use-of-force database. Combined, these records form a complicated mixture of quantitative and qualitative data that can be analyzed to answer questions about what the state police are doing to citizens during police interventions. In particular, Kilómetro Cero, which Mari founded, wants to find out if some Puerto Ricans are more likely to be victims of police violence than others.

“We’re looking for bias,” says Mari. “Bias against poor people, or people who live in a certain neighborhood. Gender bias. Language bias. Bias against drug users, sex workers, immigrants, people who don’t have a house. We’re trying to analyze the language of vulnerability.”…(More)”.

Understanding Criminal Justice Innovations


Paper by Meghan J. Ryan: “Burgeoning science and technology have provided the criminal justice system with the opportunity to address some of its shortcomings. And the criminal justice system has significant shortcomings. Among other issues, we have a mass incarceration problem; clearance rates are surprisingly low; there are serious concerns about wrongful convictions; and the system is layered with racial, religious, and other biases. Innovations that are widely used across industries, as well as those directed specifically at the criminal justice system, have the potential to improve upon such problems. But it is important to recognize that these innovations also have downsides, and criminal justice actors must proceed with caution and understand not only the potential of these interventions but also their limitations. Relevant to this calculation of caution is whether the innovation is broadly used across industry sectors or, rather, whether it has been specifically developed for use within the criminal justice system. These latter innovations have a record of not being sufficiently vetted for accuracy and reliability. Accordingly, criminal justice actors must be sufficiently well versed in basic science and technology so that they have the ability and the confidence to critically assess the usefulness of the various criminal justice innovations in light of their limitations. Considering lawyers’ general lack of competency in these areas, scientific and technological training is necessary to mold them into modern competent criminal justice actors. This training must be more than superficial subject-specific training, though; it must dig deeper, delving into critical thinking skills that include evaluating the accuracy and reliability of the innovation at issue, as well as assessing broader concerns such as the need for development transparency, possible intrusions on individual privacy, and incentives to curtail individual liberties given the innovation at hand….(More)”

Machine Learning Can Predict Shooting Victimization Well Enough to Help Prevent It


Paper by Sara B. Heller, Benjamin Jakubowski, Zubin Jelveh & Max Kapustin: “This paper shows that shootings are predictable enough to be preventable. Using arrest and victimization records for almost 644,000 people from the Chicago Police Department, we train a machine learning model to predict the risk of being shot in the next 18 months. We address central concerns about police data and algorithmic bias by predicting shooting victimization rather than arrest, which we show accurately captures risk differences across demographic groups despite bias in the predictors. Out-of-sample accuracy is strikingly high: of the 500 people with the highest predicted risk, 13 percent are shot within 18 months, a rate 130 times higher than the average Chicagoan. Although Black male victims more often have enough police contact to generate predictions, those predictions are not, on average, inflated; the demographic composition of predicted and actual shooting victims is almost identical. There are legal, ethical, and practical barriers to using these predictions to target law enforcement. But using them to target social services could have enormous preventive benefits: predictive accuracy among the top 500 people justifies spending up to $123,500 per person for an intervention that could cut their risk of being shot in half….(More)”.
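
The cost-effectiveness figure at the end of that summary lends itself to a quick back-of-the-envelope check. The sketch below uses only the numbers quoted above; the implied dollar value per shooting averted is an inference from those numbers, not a figure stated in this excerpt.

```python
# Back-of-the-envelope reading of the cost-effectiveness claim above.
# All inputs come from the quoted summary; the "implied value per shooting
# averted" is an inference from those numbers, not a figure stated here.
baseline_risk = 0.13          # share of the top-500 shot within 18 months
risk_reduction = 0.5          # intervention assumed to cut risk in half
justified_spend = 123_500     # max spend per person quoted in the paper

shootings_averted_per_person = baseline_risk * risk_reduction   # 0.065
implied_value_per_shooting = justified_spend / shootings_averted_per_person
print(f"~${implied_value_per_shooting:,.0f} per shooting averted")  # ~$1.9M
```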

In this small Va. town, citizens review police like Uber drivers


Article by Emily Davies: “Chris Ford stepped on the gas in his police cruiser and rolled down Gold Cup Drive to catch the SUV pushing 30 mph in a 15 mph zone. Eleven hours and 37 minutes into his shift, the corporal was ready for his first traffic stop of the day.

“Look at him being sneaky,” Ford said, his blue lights flashing on a quiet road in this small town where a busy day could mean animals escaped from a local slaughterhouse.

Ford parked, walked toward the SUV and greeted the man who had ignored the speed limit at exactly the wrong time.

“I was doing 15,” said the driver, a Black man in a mostly White neighborhood of a mostly White town.

The officer took his license and registration back to the cruiser.

“Every time I pull over someone of color, they’re standoffish with me. Like, ‘Here’s a White police officer, here we go again,’” Ford, 56, said. “So I just try to be nice.”

Ford knew the stop would be scrutinized — and not just by the reporter who was allowed to ride along on his shift.

After every significant encounter with residents, officers in Warrenton are required to hand out a QR code, which is on the back of their business card, asking for feedback on the interaction. Through a series of questions, citizens can use a star-based system to rate officers on their communication, listening skills and fairness. The responses are anonymous and can be completed any time after the interaction to encourage people to give honest assessments. The program, called Guardian Score, is supposed to give power to those stopped by police in a relationship that has historically felt one-sided — and to give police departments a tool to evaluate their force on more than arrests and tickets.

“If we started to measure how officers are treating community members, we realized we could actually infuse this into the overall evaluation process of individual officers,” said Burke Brownfeld, a founder of Guardian Score and a former police officer in Alexandria. “The definition of doing a good job could change. It would also include: How are your listening skills? How fairly are you treating people based on their perception?”…(More)”.
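
To make the evaluation idea concrete, here is a minimal sketch of how anonymous star ratings might be aggregated per officer. The field names, the 1-to-5 scale, and the simple averaging scheme are assumptions for illustration, not Guardian Score's actual design.

```python
# Minimal sketch of aggregating anonymous star ratings per officer, in the
# spirit of the feedback system described above. Field names, the 1-5 scale,
# and the averaging scheme are assumptions, not the product's design.
from collections import defaultdict
from statistics import mean

responses = [
    {"officer": "badge-112", "communication": 5, "listening": 4, "fairness": 5},
    {"officer": "badge-112", "communication": 3, "listening": 3, "fairness": 4},
    {"officer": "badge-207", "communication": 4, "listening": 5, "fairness": 4},
]

scores = defaultdict(list)
for r in responses:
    # Collapse the three dimensions into one interaction score; a real
    # evaluation process might weight or report them separately.
    scores[r["officer"]].append(mean([r["communication"], r["listening"], r["fairness"]]))

for officer, vals in scores.items():
    print(officer, round(mean(vals), 2), f"({len(vals)} responses)")
```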

Technology rules? The advent of new technologies in the justice system


Report by The Justice and Home Affairs Committee (House of Lords): “In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism. Our Committee has limited its investigation to only one area–how these advanced technologies are used in our justice system. Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.

We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and in finding solutions to often complex problems. But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.

When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained? Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening. Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.

Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market. And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality–despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness which are often untested and unproven…(More)”.