Predictability, AI, And Judicial Futurism: Why Robots Will Run The Law And Textualists Will Like It


Paper by Jack Kieffaber: “The question isn’t whether machines are going to replace judges and lawyers—they are. The question is whether that’s a good thing. If you’re a textualist, you have to answer yes. But you won’t—which means you’re not a textualist. Sorry.

Hypothetical: The year is 2030. AI has far eclipsed the median federal jurist as a textual interpreter. A new country is founded; it’s a democratic republic that uses human legislators to write laws and programs a state-sponsored Large Language Model called “Judge.AI” to apply those laws to facts. The model makes judicial decisions as to conduct on the back end, but can also provide advisory opinions on the front end; if a citizen types in his desired action and hits “enter,” Judge.AI will tell him, ex ante, exactly what it would decide ex post if the citizen were to perform the action and be prosecuted. The primary result is perfect predictability; secondary results include the abolition of case law, the death of common law, and the replacement of all judges—indeed, all lawyers—by a single machine. Don’t fight the hypothetical, assume it works. This article poses the question: Is that a utopia or a dystopia?
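
The hypothetical turns on a single architectural property: the ex ante advisory answer and the ex post judgment are outputs of the same deterministic function, so they cannot diverge. A minimal sketch of that invariant in Python; everything here (the adjudicate function, its inputs, the Facts record) is an illustrative stand-in, not anything the paper specifies:

```python
# Illustrative only: the paper specifies an outcome, not an implementation.
# The point is the invariant: advisory (ex ante) and judicial (ex post)
# answers come from one deterministic function, so a citizen can know the
# legal outcome of an action before performing it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Facts:
    conduct: str  # the citizen's proposed (or proven) action

def adjudicate(statutes: tuple[str, ...], facts: Facts) -> str:
    """Hypothetical stand-in for Judge.AI: statute text + facts -> holding.
    Deterministic by construction: no sampling, no precedent database."""
    return f"holding on {facts.conduct!r} under {len(statutes)} statute(s)"

def advisory_opinion(statutes: tuple[str, ...], proposed: Facts) -> str:
    """Ex ante: what the court WOULD decide."""
    return adjudicate(statutes, proposed)

def judgment(statutes: tuple[str, ...], proven: Facts) -> str:
    """Ex post: what the court DOES decide."""
    return adjudicate(statutes, proven)

# "Perfect predictability" is exactly this assertion never failing:
act = Facts("sell widgets without a license")
assert advisory_opinion(("Licensing Act",), act) == judgment(("Licensing Act",), act)
```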

If you answer dystopia, you cannot be a textualist. Part I of this article establishes why:  Because predictability is textualism’s only lodestar, and Judge.AI is substantially more predictable than any regime operating today. Part II-A dispatches rebuttals premised on positive nuances of the American system; such rebuttals forget that my hypothetical presumes a new nation and take for granted how much of our nation’s founding was premised on mitigating exactly the kinds of human error that Judge.AI would eliminate. And Part II-B dispatches normative rebuttals, which ultimately amount to moral arguments about objective good—which are none of the textualist’s business. 

When the dust clears, you have only two choices: You’re a moralist, or you’re a formalist. If you’re the former, you’ll need a complete account of the objective good—which has evaded man for his entire existence. If you’re the latter, you should relish the fast-approaching day when all laws and all lawyers are usurped by a tin box. But you’re going to say you’re something in between. And you’re not…(More)”

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Why ‘open’ AI systems are actually closed, and why this matters


Paper by David Gray Widder, Meredith Whittaker & Sarah Myers West: “This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector…(More)”.

Can AI review the scientific literature — and figure out what it all means?


Article by Helen Pearson: “When Sam Rodriques was a neurobiology graduate student, he was struck by a fundamental limitation of science. Even if researchers had already produced all the information needed to understand a human cell or a brain, “I’m not sure we would know it”, he says, “because no human has the ability to understand or read all the literature and get a comprehensive view.”

Five years later, Rodriques says he is closer to solving that problem using artificial intelligence (AI). In September, he and his team at the US start-up FutureHouse announced that an AI-based system they had built could, within minutes, produce syntheses of scientific knowledge that were more accurate than Wikipedia pages. The team promptly generated Wikipedia-style entries on around 17,000 human genes, most of which previously lacked a detailed page.

Rodriques is not the only one turning to AI to help synthesize science. For decades, scholars have been trying to accelerate the onerous task of compiling bodies of research into reviews. “They’re too long, they’re incredibly intensive and they’re often out of date by the time they’re written,” says Iain Marshall, who studies research synthesis at King’s College London. The explosion of interest in large language models (LLMs), the generative-AI programs that underlie tools such as ChatGPT, is prompting fresh excitement about automating the task…(More)”.

AI adoption in the public sector


Two studies from the Joint Research Centre: “…delve into the factors that influence the adoption of Artificial Intelligence (AI) in public sector organisations.

The first report analyses a survey conducted among 574 public managers across seven EU countries, identifying the main current drivers of AI adoption and providing three key recommendations to practitioners.

Strong expertise and various organisational factors emerge as key contributors to AI adoption. The second study sheds light on the essential competences and governance practices required for the effective adoption and use of AI in the public sector across Europe…

The study finds that AI adoption is no longer a promise for public administration, but a reality, particularly in service delivery and internal operations, and to a lesser extent in policy decision-making. It also highlights the importance of organisational factors such as leadership support, innovative culture, clear AI strategy, and in-house expertise in fostering AI adoption. Anticipated citizen needs are also identified as a key external factor driving AI adoption. 

Based on these findings, the report offers three policy recommendations. First, it suggests paying attention to AI and digitalisation in leadership programmes, organisational development and strategy building. Second, it recommends broadening in-house expertise on AI, which should include not only technical expertise, but also expertise on ethics, governance, and law. Third, the report advises monitoring (for instance through focus groups and surveys) and exchanging on citizen needs and levels of readiness for digital improvements in government service delivery…(More)”.

AI Investment Potential Index: Mapping Global Opportunities for Sustainable Development


Paper by AFD: “…examines the potential of artificial intelligence (AI) investment to drive sustainable development across diverse national contexts. By evaluating critical factors, including AI readiness, social inclusion, human capital, and macroeconomic conditions, we construct a nuanced and comprehensive analysis of the global AI landscape. Employing advanced statistical techniques and machine learning algorithms, we identify nations with significant untapped potential for AI investment.
We introduce the AI Investment Potential Index (AIIPI), a novel instrument designed to guide financial institutions, development banks, and governments in making informed, strategic AI investment decisions. The AIIPI synthesizes metrics of AI readiness with socio-economic indicators to identify and highlight opportunities for fostering inclusive and sustainable growth. The methodological novelty lies in the weight selection process, which combines statistical modeling with an entropy-based weighting approach. Furthermore, we provide detailed policy implications to support stakeholders in making targeted investments aimed at reducing disparities and advancing equitable technological development…(More)”.
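
For readers unfamiliar with entropy-based weighting, a minimal sketch of the standard entropy weight method follows; the indicator matrix, its normalisation, and the final aggregation are illustrative assumptions, since the paper's exact blend of statistical modeling and entropy weights is not reproduced in the excerpt:

```python
# Minimal sketch of the entropy weight method for a composite index.
# Assumes `X` holds min-max normalised indicators (countries x indicators);
# the AIIPI's actual indicator set and weighting blend are not shown here.
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    m, n = X.shape
    eps = 1e-12                                   # avoid log(0)
    P = (X + eps) / (X + eps).sum(axis=0)         # each country's share per indicator
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per indicator, in [0, 1]
    d = 1.0 - e                                   # divergence: low entropy = informative
    return d / d.sum()                            # weights sum to 1

rng = np.random.default_rng(0)
X = rng.random((50, 6))          # hypothetical: 50 countries, 6 normalised indicators
w = entropy_weights(X)
index_scores = X @ w             # composite score per country
print(np.round(w, 3), index_scores[:3])
```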

NegotiateAI 


About: “The NegotiateAI app is designed to streamline access to critical information on the UN Plastic Treaty Negotiations to develop a legally binding instrument on plastic pollution, including the marine environment. It offers a comprehensive, centralized database of documents submitted by member countries available here, along with an extensive collection of supporting resources, including reports, research papers, and policy briefs. You can find more information about the NegotiateAI project on our website…The Interactive Treaty Assistant simplifies the search and analysis of documents by INC members, enabling negotiators and other interested parties to quickly pinpoint crucial information. With an intuitive interface, The Interactive Treaty Assistant supports treaty-specific queries and provides direct links to relevant documents for deeper research…(More)”.
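
The excerpt does not document the app's internals, but the described behaviour (treaty-specific queries that return links to relevant submissions) matches a standard retrieval pattern. A hedged sketch, with hypothetical submission records and TF-IDF scoring standing in for whatever NegotiateAI actually uses:

```python
# Sketch of the retrieval pattern the excerpt describes; the corpus,
# field names, and scoring are illustrative, not NegotiateAI's internals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submissions = [  # hypothetical INC member submissions
    {"country": "Kenya", "url": "https://example.org/kenya.pdf",
     "text": "binding targets for reduction of primary plastic polymer production"},
    {"country": "Japan", "url": "https://example.org/japan.pdf",
     "text": "national action plans and voluntary measures on marine litter"},
]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(s["text"] for s in submissions)

def treaty_query(question: str, k: int = 2):
    """Rank submissions against a treaty-specific query; return document links."""
    scores = cosine_similarity(vec.transform([question]), doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [(submissions[i]["country"], submissions[i]["url"], float(scores[i]))
            for i in ranked]

print(treaty_query("Which members support binding production caps?"))
```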

Building a Responsible Humanitarian Approach: The ICRC’s policy on Artificial Intelligence


Policy by the ICRC: “…is anchored in a purely humanitarian approach driven by our mandate and Fundamental Principles. It is meant to help ICRC staff learn about AI and safely explore its humanitarian potential.

This policy is the result of a collaborative and multidisciplinary approach that leveraged the ICRC’s humanitarian and operational expertise, existing international AI standards, and the guidance and feedback of external experts.

Given the constantly evolving nature of AI, this document cannot possibly address all the questions and challenges that will arise in the future, but we hope that it provides a solid basis and framework to ensure we take a responsible and human-centred approach when using AI in support of our mission, in line with our 2024–2027 Institutional Strategy…(More)”.

Shifting Patterns of Social Interaction: Exploring the Social Life of Urban Spaces Through A.I.


Paper by Arianna Salazar-Miranda, et al: “We analyze changes in pedestrian behavior over a 30-year period in four urban public spaces located in New York, Boston, and Philadelphia. Building on William Whyte’s observational work from 1980, where he manually recorded pedestrian behaviors, we employ computer vision and deep learning techniques to examine video footage from 1979-80 and 2008-10. Our analysis measures changes in walking speed, lingering behavior, group sizes, and group formation. We find that the average walking speed has increased by 15%, while the time spent lingering in these spaces has halved across all locations. Although the percentage of pedestrians walking alone remained relatively stable (from 67% to 68%), the frequency of group encounters declined, indicating fewer interactions in public spaces. This shift suggests that urban residents increasingly view streets as thoroughfares rather than as social spaces, which has important implications for the role of public spaces in fostering social engagement…(More)”.
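
The reported metrics are simple functions of tracked trajectories. A sketch of how walking speed and lingering time could be computed once a detector and tracker have produced per-person (time, x, y) paths in metres; the 0.2 m/s lingering threshold is an illustrative choice, not the authors' published definition:

```python
# Sketch: deriving the paper's behavioural metrics from tracked trajectories.
# Assumes trajectories are (t_seconds, x_m, y_m) arrays from an upstream
# detector/tracker; thresholds and definitions are illustrative assumptions.
import numpy as np

def walking_speed(traj: np.ndarray) -> float:
    """Mean speed (m/s) along one pedestrian's trajectory."""
    t, xy = traj[:, 0], traj[:, 1:]
    dist = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # metres per step
    return float(dist.sum() / np.diff(t).sum())

def lingering_time(traj: np.ndarray, speed_thresh: float = 0.2) -> float:
    """Seconds spent quasi-stationary (instantaneous speed below threshold)."""
    t, xy = traj[:, 0], traj[:, 1:]
    speeds = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.diff(t)
    return float(np.diff(t)[speeds < speed_thresh].sum())

traj = np.array([[0, 0.0, 0.0], [1, 1.3, 0.0], [2, 1.35, 0.0], [3, 2.7, 0.1]])
print(walking_speed(traj), lingering_time(traj))
```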

Courts in Buenos Aires are using ChatGPT to draft rulings


Article by Victoria Mendizabal: “In May, the Public Prosecution Service of the City of Buenos Aires began using generative AI to predict rulings for some public employment cases related to salary demands.

Since then, justice employees at the office for contentious administrative and tax matters of the city of Buenos Aires have uploaded case documents into ChatGPT, which analyzes patterns, offers a preliminary classification from a catalog of templates, and drafts a decision. So far, ChatGPT has been used to draft 20 rulings.
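
The workflow described, classify a filing against a catalog of templates and then draft from the selected template, maps onto a small two-call LLM pipeline. A sketch using the OpenAI Python client; the template catalog, prompts, and model name are assumptions rather than the prosecution service's actual configuration:

```python
# Sketch of the classify-then-draft workflow; templates, prompts, and the
# model name are illustrative assumptions. Output is a draft for human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATES = {  # hypothetical catalog of ruling templates
    "salary_difference": "Order recalculation of the claimed salary differences...",
    "bonus_claim": "Order payment of the disputed bonus...",
}

def classify(case_text: str) -> str:
    """Step 1: match the filing to the closest template in the catalog."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; the article says only "ChatGPT"
        messages=[{"role": "user", "content":
                   f"Classify this public-employment case as one of "
                   f"{list(TEMPLATES)}. Reply with the key only.\n\n{case_text}"}],
    )
    key = resp.choices[0].message.content.strip()
    return key if key in TEMPLATES else "salary_difference"  # crude fallback

def draft_ruling(case_text: str) -> str:
    """Step 2: draft a preliminary decision from the selected template."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   f"Using this template:\n{TEMPLATES[classify(case_text)]}\n\n"
                   f"Draft a preliminary ruling for:\n{case_text}"}],
    )
    return resp.choices[0].message.content  # a human "editor" reviews this
```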

The use of generative AI has cut the time it takes to draft a ruling from an hour to about 10 minutes, according to recent studies conducted by the office.

“We, as professionals, are not the main characters anymore. We have become editors,” Juan Corvalán, deputy attorney general in contentious administrative and tax matters, told Rest of World.

The introduction of generative AI tools has improved efficiency at the office, but it has also prompted concerns within the judiciary and among independent legal experts about possible biases, the treatment of personal data, and the emergence of hallucinations. Similar concerns have echoed beyond Argentina’s borders.

“Any inconsistent use, such as sharing sensitive information, could have a considerable legal cost,” Lucas Barreiro, a lawyer specializing in personal data protection and a member of Privaia, a civil association dedicated to the defense of human rights in the digital era, told Rest of World.

Judges in the U.S. have voiced skepticism about the use of generative AI in the courts, with Manhattan Federal Judge Edgardo Ramos saying earlier this year that “ChatGPT has been shown to be an unreliable resource.” In Colombia and the Netherlands, the use of ChatGPT by judges was criticized by local experts. But not everyone is concerned: A court of appeals judge in the U.K. who used ChatGPT to write part of a judgment said that it was “jolly useful.”

For Corvalán, the move to generative AI is the culmination of a years-long transformation within the City of Buenos Aires’ attorney general’s office.

In 2017, Corvalán put together a group of developers to train an AI-powered system called PROMETEA, which was intended to automate judicial tasks and expedite case proceedings. The team used more than 300,000 rulings and case files related to housing protection, public employment bonuses, enforcement of unpaid fines, and denial of cab licenses to individuals with criminal records…(More)”.