The Judicial Data Collaborative


About: “We enable collaborations between researchers, technical experts, practitioners and organisations to create a shared vocabulary, standards and protocols for open judicial data sets, shared infrastructure and resources to host and explain available judicial data.

The objective is to drive and sustain advocacy on the quality and limitations of Indian judicial data and engage the judicial data community to enable cross-learning among various projects…

Accessibility and understanding of judicial data are essential to making courts and tribunals more transparent, accountable and easy to navigate for litigants. In recent years, eCourts services and various court and tribunal websites have made a large volume of data about cases available. This has expanded the window into judicial functioning and enabled more empirical research on the role of courts in the protection of citizens’ rights. Such research can also help busy courts understand patterns of litigation and practice, and can support cross-disciplinary engagement with stakeholders to improve the functioning of courts.

Some pioneering initiatives in the judicial data landscape include research such as DAKSH’s database; annual India Justice Reports; and studies of court functioning during the pandemic and quality of eCourts data; open datasets, including Development Data Lab’s Judicial Data Portal containing District & Taluka court cases (2010-2018), and platforms that collect them, such as Justice Hub; and interactive databases such as the Vidhi JALDI Constitution Bench Pendency Project…(More)”.

Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Predictive Policing Software Terrible At Predicting Crimes


Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.

Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.

We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
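The hit-rate arithmetic behind these figures is simple to reproduce. The sketch below uses the article’s overall total (23,631 predictions, fewer than 100 hits); the exact hit count of 99 is assumed for illustration, since the article says only “fewer than 100”:

```python
# Toy reproduction of the success-rate arithmetic described above.
# Only the overall total (23,631 predictions) and the "fewer than 100
# hits" figure come from the article; 99 is an illustrative upper bound.
TOTAL_PREDICTIONS = 23_631


def hit_rate(hits: int, predictions: int) -> float:
    """Share of predictions that lined up with a reported crime."""
    return hits / predictions


overall = hit_rate(99, TOTAL_PREDICTIONS)
print(f"overall hit rate: {overall:.2%}")  # under half a percent

# The category-level rates quoted in the article (0.6% for robbery/
# aggravated assault, 0.1% for burglary) are computed the same way.
assert overall < 0.005
```

Even at the most generous reading (99 successes), the rate stays below the half-percent mark the article reports.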

“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.

The GPTJudge: Justice in a Generative AI World


Paper by Maura Grossman, Paul Grimm, Dan Brown, and Molly Xu: “Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world…(More)”.

Essential Elements and Ethical Principles for Trustworthy Artificial Intelligence Adoption in Courts


Paper by Carlos E. Jimenez-Gomez and Jesus Cano Carrillo: “Tasks in courts have rapidly evolved from manual to digital work. In these innovation processes, theory and practice have demonstrated that adopting technology per se is not the right path. Innovation in courts requires specific plans for digital transformation, including analysis, programmatic changes, or skills. Artificial Intelligence (AI) is not an exception.

The use of AI in courts is not futuristic. From efficiency to decision-making support, AI-based tools are already being used by U.S. courts. To cite some examples, AI tools allow the discovery of divergences, disparities, and dissonances in jurisdictional activity. At a higher level, AI helps improve internal organization. AI helps with judicial decision consistency, exploiting a large judicial knowledge base in the form of big data, and it makes the judge’s work more agile with pattern and linguistic recognition in documents, identifying schemes and conceptualizations.

AI could bring considerable benefits to the judicial system. However, the risks and challenges are also enormous, posing unique hurdles for user trust…

This article defines AI in relation to courts to understand challenges and implications and reviews AI components with a special focus on characteristics of trustworthy AI. It also examines the importance of a new policy and regulatory framework, and makes recommendations to avoid major problems…(More)”.

Lawless Surveillance


Paper by Barry Friedman: “Here in the United States, policing agencies are engaging in mass collection of personal data, building a vast architecture of surveillance. License plate readers collect our location information. Mobile forensics data terminals suck in the contents of cell phones during traffic stops. CCTV maps our movements. Cheap storage means most of this is kept for long periods of time—sometimes into perpetuity. Artificial intelligence makes searching and mining the data a snap. For most of us whose data is collected, stored, and mined, there is no suspicion whatsoever of wrongdoing.

This growing network of surveillance is almost entirely unregulated. It is, in short, lawless. The Fourth Amendment touches almost none of it, either because what is captured occurs in public, and so is supposedly “knowingly exposed,” or because of doctrine that shields information collected from third parties. It is unregulated by statutes because legislative bodies—when they even know about these surveillance systems—see little profit in taking on the police.

In the face of growing concern over such surveillance, this Article argues there is a constitutional solution sitting in plain view. In virtually every other instance in which personal information is collected by the government, courts require that a sound regulatory scheme be in place before information collection occurs. The rulings on the mandatory nature of regulation are remarkably similar, no matter under which clause of the Constitution collection is challenged.

This Article excavates this enormous body of precedent and applies it to the problem of government mass data collection. It argues that before the government can engage in such surveillance, there must be a regulatory scheme in place. And by changing the default rule from allowing police to collect absent legislative prohibition, to banning collection until there is legislative action, legislatures will be compelled to act (or there will be no surveillance). The Article defines what a minimally acceptable regulatory scheme for mass data collection must include, and shows how it can be grounded in the Constitution…(More)”.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
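The decision rule the documents describe can be sketched in a few lines. The 80 percent threshold and the “false positive, subject to due verification” handling come from the article; the function name and the example scores are illustrative assumptions:

```python
# Sketch of the match-handling rule reportedly used by Delhi Police,
# per documents obtained by the Internet Freedom Foundation.
# The 0.80 threshold is from the article; names and scores are
# illustrative, not from any actual system.
POSITIVE_THRESHOLD = 0.80


def classify_match(similarity: float) -> str:
    """Apply the reported 80 percent threshold to a face-match score."""
    if similarity >= POSITIVE_THRESHOLD:
        return "positive match"
    # Notably, a below-threshold score is NOT treated as a negative:
    # the person remains "subject to due verification with other
    # corroborative evidence".
    return "false positive (subject to due verification)"


print(classify_match(0.83))  # positive match
print(classify_match(0.41))  # false positive (subject to due verification)
```

The sketch makes the critics’ point concrete: there is no branch in which a low score clears anyone, which is exactly the concern the IFF raises below.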

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”.

The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives


Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications for police forces of participation in the Standard, to identify rewards, risks, challenges for the police, and areas where the Standard could be improved, and therefore to contribute to the exploration of policy options for expansion of participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.

In India, your payment data could become evidence of dissent


Article by Nilesh Christopher: “Indian payments firm Razorpay is under fire for seemingly breaching customer privacy. Some have gone on to call the company a “sell out” for sharing users’ payment data with authorities without their consent. But is faulting Razorpay for complying with a legal request fair?

On June 19, Mohammed Zubair, co-founder of fact-checking outlet Alt News, was arrested for hurting religious sentiments over a tweet he posted in 2018. Investigating authorities, through legal diktats, have now gained access to payment data of donors supporting Alt News from payments processor Razorpay. (Police are now probing Alt News for accepting foreign donations. Alt News has denied the charge.) 

The data sharing has had a chilling effect. Civil society organization Internet Freedom Foundation, which uses Razorpay for donations, is exploring “additional payment platforms to offer choice and comfort to donors.” Many donors are worried that they might now become targets on account of their contributions. 

This has created a new faultline in the discourse around weaponizing payment data by a state that has gained notoriety for cracking down on critics of Prime Minister Narendra Modi.

Faulting Razorpay for complying with a legal request is misguided. “I think Razorpay played it by the book,” said Dharmendra Chatur, partner at the law firm Poovayya & Co. “They sort of did what any reasonable person would do in this situation.” 

Under Section 91 of India’s Criminal Procedure Code, police authorities have the power to seek information or documents on the apprehension that a crime has been committed during the course of an inquiry, inspection, or trial. “You either challenge it or you comply. There’s no other option available [for Razorpay]. And who would want to just unnecessarily initiate litigation?” Chatur said…(More)”.

Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or various white collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news reports in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers over a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.