Chapter by Christoph Busch in “Data Economy and Algorithmic Regulation: A Handbook on Personalized Law”, C.H.Beck Nomos Hart, 2020: “Technological advances in data collection and information processing make it possible to tailor legal norms to specific individuals and achieve an unprecedented degree of regulatory precision. However, the benefits of such a “personalized law” must not be confounded with the false promise of “perfect enforcement”. On the contrary, the enforcement of personalized law might be even more challenging and complex than the enforcement of impersonal and uniform rules. Starting from this premise, the first part of this Essay explores how algorithmic personalization of legal rules could be operationalized for tailoring disclosures on digital marketplaces, mitigating discrimination in the sharing economy and optimizing the flow of traffic in smart cities. The second part of the Essay looks into an aspect of personalized law that has so far been rather under-researched: a transition towards personalized law involves not only changes in the design of legal rules, but also necessitates modifications regarding compliance monitoring and enforcement. It is argued that personalized law can be conceptualized as a form of algorithmic regulation or governance-by-data. Therefore, the implementation of personalized law requires setting up a regulatory framework for ensuring algorithmic accountability. In a broader perspective, this Essay aims to create a link between the scholarly debate on algorithmic decision-making and automated legal enforcement and the emerging debate on personalized law….(More)”.
What is the Difference between a Conclusion and a Fact?
Paper by Howard M. Erichson: “In Ashcroft v. Iqbal, building on Bell Atlantic v. Twombly, the Supreme Court instructed district courts to treat a complaint’s conclusions differently from allegations of fact. Facts, but not conclusions, are assumed true for purposes of a motion to dismiss. The Court did little to help judges or lawyers understand the elusive distinction, and, indeed, obscured the distinction with its language. The Court said it was distinguishing “legal conclusions” from factual allegations. The application in Twombly and Iqbal, however, shows that the relevant distinction is not between law and fact, but rather between different types of factual assertions. This essay, written for a symposium on the tenth anniversary of Ashcroft v. Iqbal, explores the definitional problem with the conclusion-fact distinction and examines how district courts have applied the distinction in recent cases….(More)”.
Regulating Artificial Intelligence
Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers.
Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like.
The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.
Comparative Constitution Making
Book edited by Hanna Lerner and David Landau: “In a seminal article more than two decades ago, Jon Elster lamented that despite the large volume of scholarship in related fields, such as comparative constitutional law and constitutional design, there was a severe dearth of work on the process and context of constitution making. Happily, his point no longer holds. Recent years have witnessed a near-explosion of high-quality work on constitution-making processes, across a range of fields including law, political science, and history. This volume attempts to synthesize and expand upon this literature. It offers a number of different perspectives and methodologies aimed at understanding the contexts in which constitution making takes place, its motivations, the theories and processes that guide it, and its effects. The goal of the contributors is not simply to explain the existing state of the field, but also to provide new research on these key questions.
Our aims in this introduction are relatively modest. First, we seek to set up some of the major questions treated by recent research in order to explain how the chapters in this volume contribute to them. We do not aim to give a complete state of the field, but we do lay out what we see as several of the biggest challenges and questions posed by recent scholarship. …(More)”.
Artificial Intelligence and Law: An Overview
Paper by Harry Surden: “Much has been written recently about artificial intelligence (AI) and law. But what is AI, and what is its relation to the practice and administration of law? This article addresses those questions by providing a high-level overview of AI and its use within law. The discussion aims to be nuanced but also understandable to those without a technical background. To that end, I first discuss AI generally. I then turn to AI and how it is being used by lawyers in the practice of law, people and companies who are governed by the law, and government officials who administer the law. A key motivation in writing this article is to provide a realistic, demystified view of AI that is rooted in the actual capabilities of the technology. This is meant to contrast with discussions about AI and law that are decidedly futurist in nature…(More)”.
Law as Data: Computation, Text, and the Future of Legal Analysis
Book edited by Michael A. Livermore and Daniel N. Rockmore: “In recent years, the digitization of legal texts, combined with developments in the fields of statistics, computer science, and data analytics, has opened entirely new approaches to the study of law. This volume explores the new field of computational legal analysis, an approach marked by its use of legal texts as data. The emphasis herein is on work that pushes methodological boundaries, either by using new tools to study longstanding questions within legal studies or by identifying new questions in response to developments in data availability and analysis.
By using the text and underlying data of legal documents as the direct objects of quantitative statistical analysis, Law as Data introduces the legal world to the broad range of computational tools already proving themselves relevant to law scholarship and practice, and highlights the early steps in what promises to be an exciting new approach to studying the law….(More)”.
The plan to mine the world’s research papers
Priyanka Pulla in Nature: “Carl Malamud is on a crusade to liberate information locked up behind paywalls — and his campaigns have scored many victories. He has spent decades publishing copyrighted legal documents, from building codes to court records, and then arguing that such texts represent public-domain law that ought to be available to any citizen online. Sometimes, he has won those arguments in court. Now, the 60-year-old American technologist is turning his sights on a new objective: freeing paywalled scientific literature. And he thinks he has a legal way to do it.
Over the past year, Malamud has — without asking publishers — teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi. “This is not every journal article ever written, but it’s a lot,” Malamud says. It’s comparable to the size of the core collection in the Web of Science database, for instance. Malamud and his JNU collaborator, bioinformatician Andrew Lynn, call their facility the JNU data depot.
No one will be allowed to read or download work from the repository, because that would breach publishers’ copyright. Instead, Malamud envisages, researchers could crawl over its text and data with computer software, scanning through the world’s scientific literature to pull out insights without actually reading the text.
The unprecedented project is generating much excitement because it could, for the first time, open up vast swathes of the paywalled literature for easy computerized analysis. Dozens of research groups already mine papers to build databases of genes and chemicals, map associations between proteins and diseases, and generate useful scientific hypotheses. But publishers control — and often limit — the speed and scope of such projects, which typically confine themselves to abstracts, not full text. Researchers in India, the United States and the United Kingdom are already making plans to use the JNU store instead. Malamud and Lynn have held workshops at Indian government laboratories and universities to explain the idea. “We bring in professors and explain what we are doing. They get all excited and they say, ‘Oh gosh, this is wonderful’,” says Malamud.
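The kind of mining described here, scanning full text to map associations between, say, genes and diseases, can be sketched in a few lines of Python. The term lists and sample text below are purely illustrative assumptions, not part of the JNU project; real efforts use curated vocabularies and far larger corpora:

```python
import re
from collections import Counter
from itertools import product

# Hypothetical term lists; a real project would draw on curated
# gene and disease vocabularies rather than hand-picked examples.
GENES = {"BRCA1", "TP53"}
DISEASES = {"breast cancer", "melanoma"}

def cooccurrences(text):
    """Count gene-disease pairs appearing in the same sentence."""
    counts = Counter()
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lower = sentence.lower()
        genes = {g for g in GENES if g.lower() in lower}
        diseases = {d for d in DISEASES if d in lower}
        for pair in product(sorted(genes), sorted(diseases)):
            counts[pair] += 1
    return counts

sample = ("Mutations in BRCA1 are linked to breast cancer. "
          "TP53 is studied in melanoma and breast cancer.")
print(cooccurrences(sample))
```

Run over millions of full-text articles instead of abstracts, even a simple co-occurrence tally like this becomes the raw material for the association databases and hypothesis-generation work the researchers describe.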
But the depot’s legal status isn’t yet clear. Malamud, who contacted several intellectual-property (IP) lawyers before starting work on the depot, hopes to avoid a lawsuit. “Our position is that what we are doing is perfectly legal,” he says. For the moment, he is proceeding with caution: the JNU data depot is air-gapped, meaning that no one can access it from the Internet. Users have to physically visit the facility, and only researchers who want to mine for non-commercial purposes are currently allowed in. Malamud says his team does plan to allow remote access in the future. “The hope is to do this slowly and deliberately. We are not throwing this open right away,” he says….(More)”.
The Governance Turn in Information Privacy Law
Paper by Jane K. Winn: “The governance turn in information privacy law is a turn away from a model of bureaucratic administration of individual control rights and toward a model of collaborative governance of shared interests in information. Collaborative information governance has roots in the American pragmatic philosophy of Peirce, James and Dewey and the 1973 HEW Report that rejected unilateral individual control rights, recognizing instead the essential characteristic of mutuality of shared purposes that are mediated through information governance. America’s current information privacy law regime consists of market mechanisms supplemented by sector-specific, risk-based laws designed to foster a culture of compliance. Prior to the GDPR, data protection law compliance in Europe was more honored in the breach than the observance, so the EU’s strengthening of its bureaucratic individual control rights model reveals more about the EU’s democratic deficit than a commitment to compliance.
The conventional “Europe good, America bad” wisdom about information privacy law obscures a paradox: if the focus shifts from what “law in the books” says to what “law in action” does, it quickly becomes apparent that American businesses lead the world with their efforts to comply with information privacy law, so “America good, Europe bad” might be more accurate. Creating a federal legislative interface through which regulators and voluntary, consensus standards organizations can collaborate could break the current political stalemate triggered by California’s 2018 EU-style information privacy law. Such a pragmatic approach to information governance can safeguard Americans’ continued access to the benefits of innovation and economic growth as well as providing risk-based protection from harm. America can preserve its leadership of the global information economy by rejecting EU-style information privacy laws and building instead a flexible, dynamic framework of information governance capable of addressing both privacy and disclosure issues simultaneously….(More)”.
Proposal for an International Taxonomy on the Various Forms of the ‘Right to Be Forgotten’: A Study on the Convergence of Norms
Paper by W. Gregory Voss and Céline Castets-Renard: “The term “right to be forgotten” is used today to represent a multitude of rights, and this fact causes difficulties in interpretation, analysis, and comprehension of such rights. These rights have become of utmost importance due to the increased risks to the privacy of individuals on the Internet, where social media, blogs, fora, and other outlets have entered into common use as part of human expression. Search engines, as Internet intermediaries, have been enlisted to assist in the attempt to regulate the Internet and to enforce the rights falling under the moniker of the “right to be forgotten,” without the true extent of the related rights being known. In part to alleviate such problems, and focusing on digital technology and media, this paper proposes a taxonomy to identify various rights from different countries, which today are often regrouped under the banner “right to be forgotten,” and to do so in an understandable and coherent way. As an integral part of this exercise, this study aims to measure the extent to which there is a convergence of legal rules internationally in order to regulate private life on the Internet and to elucidate the impact that the important Google Spain “right to be forgotten” ruling of the Court of Justice of the European Union has had on law in other jurisdictions on this matter.
This paper will first introduce the definition and context of the “right to be forgotten.” Second, it will trace some of the sources of the rights discussed around the world to survey various forms of the “right to be forgotten” internationally and propose a taxonomy. This work will allow for a determination on whether there is a convergence of norms regarding the “right to be forgotten” and, more generally, with respect to privacy and personal data protection laws. Finally, this paper will provide certain criteria for the relevant rights and organize them into a proposed analytical grid to establish more precisely the proposed taxonomy of the “right to be forgotten” for the use of scholars, practitioners, policymakers, and students alike….(More)”.
Foundations of Information Ethics
Book by John T. F. Burgess and Emily J. M. Knox: “As discussions about the roles played by information in economic, political, and social arenas continue to evolve, the need for an intellectual primer on information ethics that also functions as a solid working casebook for LIS students and professionals has never been more urgent. This text, written by a stellar group of ethics scholars and contributors from around the globe, expertly fills that need. Organized into twelve chapters, making it ideal for use by instructors, this volume from editors Burgess and Knox
- thoroughly covers principles and concepts in information ethics, as well as the history of ethics in the information professions;
- examines human rights, information access, privacy, discourse, intellectual property, censorship, data and cybersecurity ethics, intercultural information ethics, and global digital citizenship and responsibility;
- synthesizes the philosophical underpinnings of these key subjects with abundant primary source material to provide historical context along with timely and relevant case studies;
- features contributions from John M. Budd, Paul T. Jaeger, Rachel Fischer, Margaret Zimmerman, Kathrine A. Henderson, Peter Darch, Michael Zimmer, and Masooda Bashir, among others; and
- offers a special concluding chapter by Amelia Gibson that explores emerging issues in information ethics, including discussions ranging from the ethics of social media and social movements to AI decision making…(More)”.