Is Law Computable? Critical Perspectives on Law and Artificial Intelligence


Book edited by Simon Deakin and Christopher Markou: “What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place?

This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers to overlook the more fundamental – and perhaps ‘bigger picture’ – ramifications of computable law…(More)”

Rethinking Nudge: An Information-Costs Theory of Default Rules


Paper by Oren Bar-Gill and Omri Ben-Shahar: “Policymakers and scholars – both lawyers and economists – have long been pondering the optimal design of default rules. From the classic works on “mimicking” defaults for contracts and corporations to the modern rush to set “sticky” default rules to promote policies as diverse as organ donations, retirement savings, consumer protection, and data privacy, the optimal design of default rules has featured as a central regulatory challenge. The key element driving the design is opt-out costs—how to minimize them, or alternatively how to raise them to make the default sticky. Much of the literature has focused on “mechanical” opt-out costs—the effort people incur to select a non-default alternative. This focus is too narrow. A more important factor affecting opt-out is information—the knowledge people must acquire to make informed opt-out decisions. But, unlike high mechanical costs, high information costs need not make defaults stickier; they may instead make the defaults “slippery.”

This counterintuitive claim is due to the phenomenon of uninformed opt-out, which we identify and characterize. Indeed, the importance of uninformed opt-out requires a reassessment of the conventional wisdom about Nudge and asymmetric or libertarian paternalism. We also show that different defaults provide different incentives to acquire the information necessary for informed opt-out. With the ballooning use of default rules as a policy tool, our information-costs theory provides valuable guidance to policymakers….(More)”.

Location Surveillance to Counter COVID-19: Efficacy Is What Matters


Susan Landau at Lawfare: “…Some government officials believe that the location information that phones can provide will be useful in the current crisis. After all, if cellphone location information can be used to track terrorists and discover who robbed a bank, perhaps it can be used to determine whether you rubbed shoulders yesterday with someone who today was diagnosed as having COVID-19, the respiratory disease that the novel coronavirus causes. But such thinking ignores the reality of how phone-tracking technology works.

Let’s look at the details of what we can glean from cellphone location information. Cell towers track which phones are in their locale—but that is a very rough measure, useful perhaps for tracking bank robbers, but not for the six-foot proximity one wants in order to determine who might have been infected by the coronavirus.

Finer precision comes from GPS signals, but these can only work outside. That means the location information supplied by your phone—if your phone and that of another person are both on—can tell you if you both went into the same subway stop around the same time. But it won’t tell you whether you rode the same subway car. And the location information from your phone isn’t fully precise. So not only can’t it reveal if, for example, you were in the same aisle in the supermarket as the ill person, but sometimes it will make errors about whether you made it into the store, as opposed to just sitting on a bench outside. What’s more, many people won’t have the location information available because GPS drains the battery, so they’ll shut it off when they’re not using it. Their phones don’t have the location information—and neither do the providers, at least not at the granularity to determine coronavirus exposure.
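
To make the precision problem concrete, here is a minimal sketch in Python of a naive proximity check between two phones’ reported GPS fixes. The coordinates, the ~5-meter error figure, and the function names are illustrative assumptions, not part of the article; the point is simply that typical consumer GPS error is larger than the six-foot (roughly 1.8-meter) exposure threshold, so a contact/no-contact call from such data is unreliable.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GPS_ERROR_M = 5.0   # assumed typical consumer GPS error (several meters)
EXPOSURE_M = 1.8    # six feet, the proximity that matters for exposure

# Two hypothetical fixes reported a few meters apart.
d = haversine_m(40.75800, -73.98550, 40.75802, -73.98553)
print(f"reported distance: {d:.1f} m")

# With measurement error this large, "possible contact" can trigger for
# people who were well over six feet apart, and vice versa.
print("possible contact" if d - GPS_ERROR_M <= EXPOSURE_M else "no contact inferred")
```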

GPS is not the only way that cellphones can collect location information. Various other ways exist, including through the WiFi network to which a phone is connected. But while two individuals using the same WiFi network are likely to be close together inside a building, the WiFi data would typically not be able to determine whether they were in that important six-foot proximity range.

Other devices can also get within that range, including Bluetooth beacons. These are used within stores, seeking to determine precisely what people are—and aren’t—buying; they track people’s locations indoors within inches. But like WiFi, they’re not ubiquitous, so their ability to track exposure will be limited.
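
By way of illustration, beacon-style indoor ranging generally works from received signal strength rather than GPS. Below is a minimal sketch, assuming the standard log-distance path-loss model with made-up calibration values (the -59 dBm reference power and the path-loss exponent are illustrative), of how a Bluetooth RSSI reading is turned into a rough distance estimate. In practice the estimate is noisy, which is part of why coverage as well as precision limits what beacons can contribute to exposure tracking.

```python
def rssi_to_distance_m(rssi_dbm, tx_power_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Rough distance estimate from a Bluetooth RSSI reading using the
    log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

# A stronger reading implies the phone is closer to the beacon (assumed values).
print(f"{rssi_to_distance_m(-65):.1f} m")  # roughly 2 m
print(f"{rssi_to_distance_m(-80):.1f} m")  # roughly 11 m
```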

If the apps lead to the government’s dogging people’s whereabouts at work, school, in the supermarket and at church, will people still be willing to download the tracking apps that get them discounts when they’re passing the beer aisle? China follows this kind of surveillance model, but such a surveillance-state solution is highly unlikely to be acceptable in the United States. Yet anything less is unlikely to pinpoint individuals exposed to the virus.

South Korea took a different route. In precisely tracking coronavirus exposure, the country used additional digital records, including documentation of medical and pharmacy visits, history of credit card transactions, and CCTV videos, to determine where potentially exposed people had been—then followed up with interviews not just of infected people but also of their acquaintances, to determine where they had traveled.

Validating such records is labor intensive. And for the United States, it may not be the best use of resources at this time. There’s an even more critical reason that the Korean solution won’t work for the U.S.: South Korea was able to test exposed people. The U.S. can’t do this. Currently the country has a critical shortage of test kits; patients who are not sufficiently ill to be hospitalized are not being tested. The shortage of test kits is sufficiently acute that in New York City, the current epicenter of the pandemic, the rule is, “unless you are hospitalized and a diagnosis will impact your care, you will not be tested.” With this in mind, moving to the South Korean model of tracking potentially exposed individuals won’t change the advice from federal and state governments that everyone should engage in social distancing—but employing such tracking would divert government resources and thus be counterproductive.

Currently, phone tracking in the United States is not efficacious. It cannot be unless all people are required to carry such location-tracking devices at all times with location tracking turned on, and unless other forms of information tracking, including much wider use of CCTV cameras, Bluetooth beacons, and the like, are also in use. There are societies like this. But so far, even in the current crisis, no one is seriously contemplating the U.S. heading in that direction….(More)”.

Copy, Paste, Legislate


The Center for Public Integrity: “Do you know if a bill introduced in your statehouse — it might govern who can fix your shattered iPhone screen or whether you can still sue a pedophile priest years later — was actually written by your elected lawmakers? Use this new tool to find out.

Spoiler alert: The answer may well be no.

Thousands of pieces of “model legislation” are drafted each year by business organizations and special interest groups and distributed to state lawmakers for introduction. These copycat bills influence policymaking across the nation, state by state, often with little scrutiny. This news application was developed by the Center for Public Integrity, part of a year-long collaboration with USA TODAY and the Arizona Republic to bring the practice into the light….(More)”.

Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and proven itself able to surpass humans in certain contests, like chess and Jeopardy. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on the case study of hate speech control on major tech platforms, particularly on Twitter and Facebook, this Essay suggests that the displacement of human courts remains a distant prospect, that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.

Algorithmic Regulation and (Im)perfect Enforcement in the Personalized Economy


Chapter by Christoph Busch in “Data Economy and Algorithmic Regulation: A Handbook on Personalized Law”, C.H.Beck Nomos Hart, 2020: “Technological advances in data collection and information processing make it possible to tailor legal norms to specific individuals and achieve an unprecedented degree of regulatory precision. However, the benefits of such a “personalized law” must not be confounded with the false promise of “perfect enforcement”. To the contrary, the enforcement of personalized law might be even more challenging and complex than the enforcement of impersonal and uniform rules. Starting from this premise, the first part of this Essay explores how algorithmic personalization of legal rules could be operationalized for tailoring disclosures on digital marketplaces, mitigating discrimination in the sharing economy and optimizing the flow of traffic in smart cities. The second part of the Essay looks into an aspect of personalized law that has so far been rather under-researched: a transition towards personalized law involves not only changes in the design of legal rules, but also modifications to compliance monitoring and enforcement. It is argued that personalized law can be conceptualized as a form of algorithmic regulation or governance-by-data. Therefore, the implementation of personalized law requires setting up a regulatory framework for ensuring algorithmic accountability. In a broader perspective, this Essay aims to create a link between the scholarly debate on algorithmic decision-making and automated legal enforcement and the emerging debate on personalized law….(More)”.

What is the Difference between a Conclusion and a Fact?


Paper by Howard M. Erichson: “In Ashcroft v. Iqbal, building on Bell Atlantic v. Twombly, the Supreme Court instructed district courts to treat a complaint’s conclusions differently from allegations of fact. Facts, but not conclusions, are assumed true for purposes of a motion to dismiss. The Court did little to help judges or lawyers understand the elusive distinction, and, indeed, obscured the distinction with its language. The Court said it was distinguishing “legal conclusions” from factual allegations. The application in Twombly and Iqbal, however, shows that the relevant distinction is not between law and fact, but rather between different types of factual assertions. This essay, written for a symposium on the tenth anniversary of Ashcroft v. Iqbal, explores the definitional problem with the conclusion-fact distinction and examines how district courts have applied the distinction in recent cases….(More)”.

Regulating Artificial Intelligence


Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers. 

Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like. 

The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.

Comparative Constitution Making


Book edited by Hanna Lerner and David Landau: “In a seminal article more than two decades ago, Jon Elster lamented that despite the large volume of scholarship in related fields, such as comparative constitutional law and constitutional design, there was a severe dearth of work on the process and context of constitution making. Happily, his point no longer holds. Recent years have witnessed a near-explosion of high-quality work on constitution-making processes, across a range of fields including law, political science, and history. This volume attempts to synthesize and expand upon this literature. It offers a number of different perspectives and methodologies aimed at understanding the contexts in which constitution making takes place, its motivations, the theories and processes that guide it, and its effects. The goal of the contributors is not simply to explain the existing state of the field, but also to provide new research on these key questions.

Our aims in this introduction are relatively modest. First, we seek to set up some of the major questions treated by recent research in order to explain how the chapters in this volume contribute to them. We do not aim to give a complete state of the field, but we do lay out what we see as several of the biggest challenges and questions posed by recent scholarship. …(More)”.

Artificial Intelligence and Law: An Overview


Paper by Harry Surden: “Much has been written recently about artificial intelligence (AI) and law. But what is AI, and what is its relation to the practice and administration of law? This article addresses those questions by providing a high-level overview of AI and its use within law. The discussion aims to be nuanced but also understandable to those without a technical background. To that end, I first discuss AI generally. I then turn to AI and how it is being used by lawyers in the practice of law, people and companies who are governed by the law, and government officials who administer the law. A key motivation in writing this article is to provide a realistic, demystified view of AI that is rooted in the actual capabilities of the technology. This is meant to contrast with discussions about AI and law that are decidedly futurist in nature…(More)”.