The GPTJudge: Justice in a Generative AI World

Paper by Maura Grossman, Paul Grimm, Dan Brown, and Molly Xu: “Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, about the ability of juries to discern authentic from fake evidence, and about whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world…(More)”.

Essential Elements and Ethical Principles for Trustworthy Artificial Intelligence Adoption in Courts

Paper by Carlos E. Jimenez-Gomez and Jesus Cano Carrillo: “Tasks in courts have rapidly evolved from manual to digital work. In these innovation processes, theory and practice have demonstrated that adopting technology per se is not the right path. Innovation in courts requires specific plans for digital transformation, including analysis, programmatic changes, or skills. Artificial Intelligence (AI) is not an exception.
The use of AI in courts is not futuristic. From efficiency to decision-making support, AI-based tools are already being used by U.S. courts. To cite some examples, AI tools allow the discovery of divergences, disparities, and dissonances in jurisdictional activity. At a higher level, AI helps improve internal organization. AI supports judicial decision consistency by exploiting a large judicial knowledge base in the form of big data, and it makes the judge’s work more agile through pattern and linguistic recognition in documents, identifying schemes and conceptualizations.

AI could bring considerable benefits to the judicial system. However, the risks and challenges are also enormous, posing unique hurdles for user trust…

This article defines AI in relation to courts to understand challenges and implications and reviews AI components with a special focus on characteristics of trustworthy AI. It also examines the importance of a new policy and regulatory framework, and makes recommendations to avoid major problems…(More)”

Lawless Surveillance

Paper by Barry Friedman: “Here in the United States, policing agencies are engaging in mass collection of personal data, building a vast architecture of surveillance. License plate readers collect our location information. Mobile forensics data terminals suck in the contents of cell phones during traffic stops. CCTV maps our movements. Cheap storage means most of this is kept for long periods of time—sometimes into perpetuity. Artificial intelligence makes searching and mining the data a snap. For most of us whose data is collected, stored, and mined, there is no suspicion whatsoever of wrongdoing.

This growing network of surveillance is almost entirely unregulated. It is, in short, lawless. The Fourth Amendment touches almost none of it, either because what is captured occurs in public, and so is supposedly “knowingly exposed,” or because of doctrine that shields information collected from third parties. It is unregulated by statutes because legislative bodies—when they even know about these surveillance systems—see little profit in taking on the police.

In the face of growing concern over such surveillance, this Article argues there is a constitutional solution sitting in plain view. In virtually every other instance in which personal information is collected by the government, courts require that a sound regulatory scheme be in place before information collection occurs. The rulings on the mandatory nature of regulation are remarkably similar, no matter under which clause of the Constitution collection is challenged.

This Article excavates this enormous body of precedent and applies it to the problem of government mass data collection. It argues that before the government can engage in such surveillance, there must be a regulatory scheme in place. And by changing the default rule from allowing police to collect absent legislative prohibition, to banning collection until there is legislative action, legislatures will be compelled to act (or there will be no surveillance). The Article defines what a minimally acceptable regulatory scheme for mass data collection must include, and shows how it can be grounded in the Constitution…(More)”.

The Low Threshold for Face Recognition in New Delhi

Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
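The threshold policy described in the documents can be sketched as a simple decision rule. This is a hypothetical illustration only; the function name, the similarity scale, and the exact decision logic are assumptions for clarity, not details of the Delhi Police system:

```python
def classify_match(similarity: float, threshold: float = 0.80) -> str:
    """Classify a face-recognition similarity score under the reported policy.

    Scores at or above the threshold are treated as a "positive" match.
    Notably, scores below it are labeled "false positive" rather than
    negative, leaving the person subject to further verification.
    """
    if similarity >= threshold:
        return "positive"
    return "false positive (subject to due verification)"

# A 0.78 score falls below the 80 percent threshold, yet under this
# policy the individual remains under investigation.
print(classify_match(0.78))
```

The sketch makes the critics' point concrete: no score, however low, ever clears a person outright, since every sub-threshold result still routes to "due verification."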

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”

The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives

Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications for police forces of participation in the Standard, to identify rewards, risks, challenges for the police, and areas where the Standard could be improved, and therefore to contribute to the exploration of policy options for expansion of participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.

In India, your payment data could become evidence of dissent

Article by Nilesh Christopher: “Indian payments firm Razorpay is under fire for seemingly breaching customer privacy. Some have gone on to call the company a “sell out” for sharing users’ payment data with authorities without their consent. But is faulting Razorpay for complying with a legal request fair?

On June 19, Mohammed Zubair, co-founder of fact-checking outlet Alt News, was arrested for hurting religious sentiments over a tweet he posted in 2018. Investigating authorities, through legal diktats, have now gained access to payment data of donors supporting Alt News from payments processor Razorpay. (Police are now probing Alt News for accepting foreign donations. Alt News has denied the charge.) 

The data sharing has had a chilling effect. Civil society organization Internet Freedom Foundation, which uses Razorpay for donations, is exploring “additional payment platforms to offer choice and comfort to donors.” Many donors are worried that they might now become targets on account of their contributions. 

This has created a new faultline in the discourse around the weaponization of payment data by a state that has gained notoriety for cracking down on critics of Prime Minister Narendra Modi.

Faulting Razorpay for complying with a legal request is misguided. “I think Razorpay played it by the book,” said Dharmendra Chatur, partner at the law firm Poovayya & Co. “They sort of did what any reasonable person would do in this situation.” 

Under Section 91 of India’s Criminal Procedure Code, police authorities have the power to seek information or documents on the apprehension that a crime has been committed during the course of an inquiry, inspection, or trial. “You either challenge it or you comply. There’s no other option available [for Razorpay]. And who would want to just unnecessarily initiate litigation?” Chatur said…(More)”.

Crime Prediction Keeps Society Stuck in the Past

Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zone” project) focused on wage theft or various white-collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news outlets in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers in a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.

Algorithm Claims to Predict Crime in US Cities Before It Happens

Article by Carrington York: “A new computer algorithm can now forecast crime in a big city near you — apparently. 

The algorithm, which was formulated by social scientists at the University of Chicago and touts 90% accuracy, divides cities into 1,000-square-foot tiles, according to a study published in Nature Human Behavior. Researchers used historical data on violent crimes and property crimes from Chicago to test the model, which detects patterns over time in these tiled areas and tries to predict future events. It performed just as well using data from other big cities, including Atlanta, Los Angeles and Philadelphia, the study showed.

The new tool contrasts with previous models for prediction, which depict crime as emerging from “hotspots” that spread to surrounding areas. Such an approach tends to miss the complex social environment of cities, as well as the nuanced relationship between crime and the effects of police enforcement, thus leaving room for bias, according to the report.
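The tiling idea described above can be illustrated with a toy sketch. This is not the authors' model: the study learns temporal patterns of events per tile, whereas the naive per-tile incident count below is only an illustrative stand-in, and the coordinates and helper names are assumptions:

```python
from collections import Counter

TILE_FT = 1000  # tile edge length in feet, per the reported study design


def tile_of(x_ft: float, y_ft: float) -> tuple:
    """Map a coordinate (in feet) to its grid-tile index."""
    return (int(x_ft // TILE_FT), int(y_ft // TILE_FT))


def tile_counts(events):
    """Count historical incidents per tile: a naive stand-in for the
    pattern-detection step of a tile-based forecasting model."""
    return Counter(tile_of(x, y) for x, y in events)


# Three incidents: two fall in tile (0, 0), one in tile (2, 1).
events = [(120.0, 900.0), (950.0, 40.0), (2500.0, 1700.0)]
counts = tile_counts(events)
print(counts.most_common(1))  # → [((0, 0), 2)]
```

Even this toy version shows why such models inherit the biases of their inputs: the "highest-risk" tile is simply the one with the most recorded incidents, so tiles that are policed more heavily will be forecast more heavily.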

“It is hard to argue that bias isn’t there when people sit down and determine which patterns they will look at to predict crime because these patterns, by themselves, don’t mean anything,” said Ishanu Chattopadhyay, Assistant Professor of Medicine at the University of Chicago and senior author of the study. “But now, you can ask the algorithm complex questions like: ‘What happens to the rate of violent crime if property crimes go up?’”

But Emily M. Bender, professor of linguistics at the University of Washington, said in a series of tweets that the focus should be on targeting underlying inequities rather than on predictive policing, while also noting that the research appears to ignore securities fraud or environmental crimes…(More)”

Police Violence In Puerto Rico: Flooded With Data

Blog by Christine Grillo: “For María Mari-Narváez, a recent decision by the Supreme Court of Puerto Rico was both a victory and a moment of reckoning. The Court granted Kilómetro Cero, a citizen-led police accountability project in Puerto Rico, full access to every use-of-force report filed by the Puerto Rico Police Department since 2014. The decision will make it possible for advocates such as Mari to get a clear picture of how state police officers are using force, and when that use of force crosses the line into abuse. But the court victory flooded her small organization with data.

“We won, finally, and then I realized I was going to be receiving thousands of documents that I had zero capacity to process,” says Mari.

“One of the things that’s important to me when analyzing data is to find out where the gaps are, why those gaps exist, and what those gaps represent.” —Tarak Shah, data scientist

The Court made its decision in April 2021, and the police department started handing over PDF files in July. By the end, up to 10,000 documents could be turned in. In addition to incident reports, the police had to provide their use-of-force database. Combined, the victory provides a complicated mixture of quantitative and qualitative data that can be analyzed to answer questions about what the state police are doing to citizens during police interventions. In particular, Kilómetro Cero, which Mari founded, wants to find out if some Puerto Ricans are more likely to be victims of police violence than others.

“We’re looking for bias,” says Mari. “Bias against poor people, or people who live in a certain neighborhood. Gender bias. Language bias. Bias against drug users, sex workers, immigrants, people who don’t have a house. We’re trying to analyze the language of vulnerability.”…(More)”.

Understanding Criminal Justice Innovations

Paper by Meghan J. Ryan: “Burgeoning science and technology have provided the criminal justice system with the opportunity to address some of its shortcomings. And the criminal justice system has significant shortcomings. Among other issues, we have a mass incarceration problem; clearance rates are surprisingly low; there are serious concerns about wrongful convictions; and the system is layered with racial, religious, and other biases. Innovations that are widely used across industries, as well as those directed specifically at the criminal justice system, have the potential to ameliorate such problems. But it is important to recognize that these innovations also have downsides, and criminal justice actors must proceed with caution and understand not only the potential of these interventions but also their limitations. Relevant to this calculation of caution is whether the innovation is broadly used across industry sectors or, rather, whether it has been specifically developed for use within the criminal justice system. These latter innovations have a record of not being sufficiently vetted for accuracy and reliability. Accordingly, criminal justice actors must be sufficiently well versed in basic science and technology so that they have the ability and the confidence to critically assess the usefulness of the various criminal justice innovations in light of their limitations. Considering lawyers’ general lack of competency in these areas, scientific and technological training is necessary to mold them into modern competent criminal justice actors.
This training must be more than superficial subject-specific training, though; it must dig deeper, delving into critical thinking skills that include evaluating the accuracy and reliability of the innovation at issue, as well as assessing broader concerns such as the need for development transparency, possible intrusions on individual privacy, and incentives to curtail individual liberties given the innovation at hand…(More)”