The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.
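The corpus query described above (pulling every dated example of a word and tallying how it is used) is, at bottom, a filtered search over year-stamped sentences. Here is a minimal sketch in Python; the example sentences and hand-coded sense labels are invented for illustration, and the real Corpus of Historical American English is queried through its own web interface, not an API like this.

```python
# Minimal sketch of a corpus concordance query: filter dated example
# sentences for a target word, then tally hand-coded usage senses.
# All corpus entries below are invented for illustration.

corpus = [
    (1932, "The city improved sanitation by chlorinating the water supply.", "make-clean"),
    (1938, "Poor sanitation in the camps led to outbreaks of typhoid.", "make-clean"),
    (1941, "Sanitation crews scrubbed the market stalls each morning.", "make-clean"),
    (1943, "Screens on windows were praised as a sanitation measure.", "keep-clean"),
]

def concordance(corpus, word, start, end):
    """Return (year, sentence, sense) entries containing `word` within [start, end]."""
    return [
        (year, text, sense)
        for year, text, sense in corpus
        if start <= year <= end and word in text.lower()
    ]

hits = concordance(corpus, "sanitation", 1930, 1944)

# Tally how often each sense occurs among the hits.
senses = {}
for _, _, sense in hits:
    senses[sense] = senses.get(sense, 0) + 1

print(senses)  # {'make-clean': 3, 'keep-clean': 1}
```

The sense labels are the contested step: the corpus only returns examples, and a human reader (here, a judge) still has to classify each one, which is where linguists argue the method leaves room for motivated interpretation.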

The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?


Paper by Larry Bridgesmith and Adel Elmessiry: “We live in an instant access and on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and utilizing the remote work platforms required to serve in place of stand-ups common in the agile workplace. Online tools are needed to provide visibility to the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which many nascent issues involving emerging technology oppose human rights and well being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Equally, if not more so, lawyers will also be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way….(More)”.

Selected Readings on Digital Self-Determination for Migrants


By Uma Kalkar, Marine Ragnet, and Stefaan Verhulst

Digital self-determination (DSD) is a multidisciplinary concept that extends self-determination to the digital sphere. Self-determination places humans (and their ability to make ‘moral’ decisions) at the center of decision-making actions. While self-determination is considered a jus cogens rule (i.e., a peremptory norm of international law), the concept of digital self-determination only came to light in the early 2010s as a result of the increasing digitization of most aspects of society.

While digitalization has opened up new opportunities for self-expression and communication for individuals across the globe, its reach and benefits have not been evenly distributed. For instance, migrants and refugees are particularly vulnerable to the deepening inequalities and power structures brought on by increased digitization and the subsequent datafication. Further, non-traditional data, such as social media and telecom data, hold great potential to improve our understanding of the migration experience and patterns of mobility, and could inform more targeted migration policies and services. Yet they have also raised new concerns about migrants’ lack of agency over how their data are used and who determines the migration narrative.

These selected readings look at DSD in light of the growing ubiquity of technology applications and specifically focus on their impacts on migrants. They were produced to inform the first studio on DSD and migration co-hosted by the Big Data for Migration Alliance and the International Digital Self Determination Network. The readings are listed in alphabetical order.

These readings serve as a primer to offer base perspectives on DSD and its manifestations, as well as provide a better understanding of how migration data is managed today to advance or hinder life for those on the move. Please alert us to any other publication we should include moving forward.

Berens, Jos, Nataniel Raymond, Gideon Shimshon, Stefaan Verhulst, and Lucy Bernholz. “The Humanitarian Data Ecosystem: the Case for Collective Responsibility.” Stanford Center for Philanthropy and Civil Society, 2017.

  • The authors explore the challenges to, and potential solutions for, the responsible use of digital data in the context of international humanitarian action. Data governance is related to DSD because it oversees how the information extracted from an individual—understood by DSD as an extension of oneself in the digital sphere—is handled.
  • They argue that in the digital age, the basic service provision activities of NGOs and aid organizations have become data collection processes. However, the ecosystem of actors is “uncoordinated” creating inefficiencies and vulnerabilities in the humanitarian space.
  • The paper presents a new framework for responsible data use in the humanitarian domain. The authors advocate for data users to follow three steps: 
  1. “[L]ook beyond the role they take up in the ‘data-lifecycle’ and consider previous and following steps and roles;
  2. Develop sound data responsibility strategies not only to prevent harm to their own operations but also to other organizations in the ‘data-lifecycle;’ and, 
  3. Collaborate with and learn from other organizations, both in the humanitarian field and beyond, to establish broadly supported guidelines and standards for humanitarian data use.”

Currion, Paul. “The Refugee Identity.” Caribou Digital (via Medium), March 13, 2018.

  • Developed as part of a DFID-funded initiative, this essay outlines the Data Requirements for Service Delivery within Refugee Camps project that investigated current data standards and design of refugee identity systems.
  • Currion finds that since “the digitisation of aid has already begun…aid agencies must therefore pay more attention to the way in which identity systems affect the lives and livelihoods of the forcibly displaced, both positively and negatively.” He argues that an interoperable digital identity for refugees is essential to access financial, social, and material resources while on the move but also to tap into IoT services.
  • However, many refugees are wary of digital tracking and data collection services that could further marginalize them as they search for safety. At present, there are no sector-level data standards around refugee identity data collection, combination, and centralization. How can regulators balance data protection with government and NGO requirements to serve refugees in ways that uphold their DSD?
  • Currion argues that a Responsible Data approach, as opposed to a process defined by a Data Minimization principle, provides “useful guidelines” but notes that data responsibility “still needs to be translated into organizational policy, then into institutional processes, and finally into operational practice.” He further adds that “the digitization of aid, if approached from a position that empowers the individual as much as the institution, offers a chance to give refugees back their voices.”

Decker, Rianne, Paul Koot, S. Ilker Birbil, and Mark van Embden Andres. “Co-designing algorithms for governance: Ensuring responsible and accountable algorithmic management of refugee camp supplies.” Big Data & Society, April 2022.

  • While recent literature has looked at the negative impacts of big data and algorithms in public governance, claiming they may reinforce existing biases and defy scrutiny by public officials, this paper argues that designing algorithms with relevant government and society stakeholders might be a way to make them more accountable and transparent. 
  • It presents a case study of the development of an algorithmic tool to estimate the populations of refugee camps to manage the delivery of emergency supplies. The algorithms included in this tool were co-designed with relevant stakeholders. 
  • This may provide a way to uphold DSD by contributing to the “accountability of the algorithm by making the estimations transparent and explicable to its users.”
  • The authors found that the co-design process enabled better accuracy and responsibility and fostered collaboration between partners, creating a suitable purpose for the tool and making the algorithm understandable to its users. This enabled algorithmic accountability. 
  • The authors note, however, that the beneficiaries of the tools were not included in the design process, limiting the legitimacy of the initiative. 

European Migration Network. “The Use of Digitalisation and Artificial Intelligence in Migration Management.” EMN-OECD Inform Series, February 2022.

  • This paper explores the role of new digital technologies in the management of migration and asylum, focusing specifically on where digital technologies, such as online portals, blockchain, and AI-powered speech and facial recognition systems, are being used across Europe to navigate the processes of obtaining visas, claiming asylum, gaining citizenship, and deploying border control management.
  • Further, it points to friction between the GDPR and new technologies like blockchain—which by design does not allow for the right to be forgotten—and potential workarounds, such as two-step pseudonymisation.
  • As well, it highlights steps taken to oversee and open up data protection processes for immigration. Austria, Belgium, and France have begun to conduct Data Protection Impact Assessments; France has a portal that allows one to request the right to be forgotten; Ireland informs online service users on how data can be shared or used with third-party agencies; and Spain outlines which personal data are used in immigration as per the Registry Public Treatment Activities.
  • Lastly, the paper points out next steps for policy development that upholds DSD, including universal access and digital literacy, trust in digital systems, willingness for government digital transformations, and bias and risk reduction.
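The two-step pseudonymisation workaround mentioned in the EMN paper can be illustrated in general terms: only a salted hash of a personal identifier is written to the immutable ledger, while the salt and the identifying data live in a mutable off-chain store. Erasing the off-chain record then leaves the on-chain hash practically unlinkable, approximating the right to be forgotten. This is a generic sketch of the idea, not the specific design the paper discusses; all names here are hypothetical.

```python
import hashlib
import secrets

# Off-chain store (mutable): record id -> (salt, personal data).
off_chain = {}
# On-chain ledger (append-only, immutable by assumption): salted hashes only.
on_chain = []

def register(record_id, personal_data):
    """Pseudonymise: keep salt + data off-chain, put only the hash on-chain."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain[record_id] = (salt, personal_data)
    on_chain.append(digest)
    return digest

def erase(record_id):
    """'Right to be forgotten': delete the off-chain salt and data.

    The on-chain hash remains, but without the salt the digest can no
    longer be recomputed, so it cannot be linked back to the person."""
    off_chain.pop(record_id, None)

h = register("applicant-42", "Jane Doe, born 1990")
erase("applicant-42")
# The ledger still holds the hash, but the link to the person is severed.
assert h in on_chain and "applicant-42" not in off_chain
```

Whether such unlinkability satisfies the GDPR's erasure requirement is exactly the open legal question the paper flags, since the hash itself is never deleted.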

Martin, Aaron, Gargi Sharma, Siddharth Peter de Souza, Linnet Taylor, Boudewijn van Eerd, Sean Martin McDonald, Massimo Marelli, Margie Cheesman, Stephan Scheel, and Huub Dijstelbloem. “Digitisation and Sovereignty in Humanitarian Space: Technologies, Territories and Tensions.” Geopolitics (2022): 1-36.

  • This paper explores how digitisation and datafication are reshaping sovereign authority, power, and control in humanitarian spaces.
  • Building on the notion that technology is political, Martin et al. discuss three cases where digital tools powered by partnerships between international organizations and NGOs and private firms such as Palantir and Facebook have raised concerns for data to be “repurposed” to undermine national sovereignty and distort humanitarian aims with for-profit motivations.
  • The authors draw attention to how cyber dependencies threaten international humanitarian organizations’ purported digital sovereignty. They touch on the tensions between national and digital sovereignty and self-governance.
  • The paper further argues that the rise of digital technologies in the governance of international mobility and migration policies “has all kinds of humanitarian and security consequences,” including (but not limited to) surveillance, privacy infringement, profiling, selection, inclusion/exclusion, and access barriers. Specifically, Scheel introduces the notion of function creep—the use of digital data beyond initially defined purposes—and emphasizes its common use in the context of migration as part “of the modus operandi of sovereign power.”

McAuliffe, Marie, Jenna Blower, and Ana Beduschi. “Digitalization and Artificial Intelligence in Migration and Mobility: Transnational Implications of the COVID-19 Pandemic.” Societies 11, no. 135 (2021): 1-13.

  • This paper critically examines the implications of intensifying digitalization and AI for migration and mobility systems in a post-COVID transnational context.
  • The authors first situate digitalization and AI in migration by analyzing their uptake throughout the Migration Cycle, i.e., to verify identities and visas, enable “smart” border processing, and understand travelers’ adherence to legal frameworks. The paper then evaluates the current challenges and opportunities to migrants and migration systems brought about by deepening digitalization due to COVID-19. For example, contact tracing, infection screening, and quarantining procedures generate increased data about an individual and are meant, by design, to track and trace people, which raises concerns about migrants’ safety, privacy, and autonomy.
  • This essay argues that recent changes show the need for further computational advances that incorporate human rights throughout the design and development stages, “to mitigate potential risks to migrants’ human rights.” AI is severely flawed when it comes to decision-making around minority groups because of biased training data, and could further marginalize vulnerable populations; intrusive data collection for public health could likewise erode one’s universal right to privacy. Leaving migrants at the mercy of black-box AI systems fails to uphold their right to DSD because it forces them to relinquish their agency and power to an opaque system.

Ponzanesi, Sandra. “Migration and Mobility in a Digital Age: (Re)Mapping Connectivity and Belonging.” Television & New Media 20, no. 6 (2019): 547-557.

  • This article explores the role of new media technologies in rethinking the dynamics of migration and globalization by focusing on the role of migrant users as “connected” and active participants, as well as “screened” and subject to biometric datafication, visualization, and surveillance.
  • Elaborating on concepts such as “migration” and “mobility,” the article analyzes the paradoxes of intermittent connectivity and troubled belonging, which are seen as relational definitions that are always fluid, negotiable, and porous.
  • It states that a city’s digital infrastructures are “complex sociotechnical systems” that have a functional side related to access and connectivity and a performative side where people engage with technology. Digital access and action represent areas of individual and collective manifestations of DSD. For migrants, gaining digital access and skills and “enacting citizenship” are important for resettlement. Ponzanesi advocates for further research that combines a bottom-up approach, which leans on migrants’ experiences of using technology to resettle and stay in contact with their homelands, with a top-down approach that examines datafication, surveillance, and digital/e-governance as part of the larger technology ecosystem, in order to understand contemporary processes and problems of migration.

Remolina, Nydia, and Mark James Findlay. “The Paths to Digital Self-Determination — A Foundational Theoretical Framework.” SMU Centre for AI & Data Governance Research Paper No. 03 (2021): 1-34.

  • Remolina and Findlay stress that self-determination is the vehicle by which people “decide their own destiny in the international order.” Decision-making ability empowers people to take control of their own lives and motivates them to pursue a course of action. Collective action, or the ability to make decisions as part of a group—be it based on ethnicity, nationality, shared viewpoints, etc.—is a further source of that motivation.
  • The authors discuss how the European Union and European Court of Human Rights’ “principle of subsidiarity” aligns with self-determination because it advocates for power to be placed at the lowest level possible to preserve bottom-up agency with a “reasonable level of efficiency.” In practice, the results of subsidiarity have been disappointing.
  • The paper provides examples of indigenous populations’ fight for self-determination, offline and online. Here, digital self-determination refers to the challenges indigenous peoples face in accessing growing government uses of technology for unlocking innovative solutions because of a lack of physical infrastructure due to structural and social inequities between settler and indigenous communities.
  • Understanding self-determination—and, by extension, digital self-determination—as a human right, the report investigates how autonomy, sovereignty, the legal definition of a ‘right,’ inclusion, agency, data governance, data ownership, data control, and data quality each bear on DSD.
  • Lastly, the paper presents a foundational theoretical framework that goes beyond just protecting personal data and privacy. Understanding that DSD “cannot be detached from duties for responsible data use,” the authors present a collective and an individual dimension to DSD. They extend the individual dimension of DSD to include both ‘my data’ and ‘data about me’ that can be used to influence a person’s actions through micro-targeting and nudge techniques. They update the collective dimension of DSD to include the views and influences of organizations, businesses, and communities online and call for a better way of visualizing the ‘social self’ and its control over data.

Ziebart, Astrid, and Jessica Bither. “AI, Digital Identities, Biometrics, Blockchain: A Primer on the Use of Technology in Migration Management.” Migration Strategy Group on International Cooperation and Development, June 2020.

  • Ziebart and Bither note the implications of increasingly sophisticated use of technology and data collection by governments with respect to their citizens. They note that migrants and refugees “often are exposed to particular vulnerabilities” during these processes and underscore the need to bring migrants into data gathering and use policy conversations.  
  • The authors discuss the promise of technology—i.e., to predict migration through AI-powered analyses, employ technologies to reduce friction in the asylum-seeking processes, and the power of digital identities for those on the move. However, they stress the need to combine these tools with informational self-determination that allows migrants to own and control what data they share and how and where the data are used.
  • The migration and refugee policy space faces issues of “tech evangelism,” where technologies are being employed just because they exist, rather than because they serve an actual policy need or provide an answer to a particular policy question. This supply-driven policy implementation signals the need for more migrant voices to inform policymakers on what tools are actually useful for the migratory experience. In order to advance the digital agency of migrants, the paper offers recommendations for some of the ethical challenges these technologies might pose and ultimately advocates for greater participation of migrants and refugees in devising technology-driven policy instruments for migration issues.

On-the-go interesting resources 

  • Empowering Digital Self-Determination, mediaX at Stanford University: This short video presents definitions of DSD, digital personhood, identity, and privacy, and gives an overview of their applications across ethics, law, and the private sector.
  • Digital Self-Determination — A Living Syllabus: This syllabus and its assorted materials were created and curated from the 2021 Research Sprint run by the Digital Asia Hub and the Berkman Klein Center for Internet & Society at Harvard University. It introduces learners to the fundamentals of DSD across a variety of industries to enrich understanding of its existing and potential applications.
  • Digital Self-Determination Wikipedia Page: This Wikipedia page was developed by the students who took part in the Berkman Klein Center research sprint on digital self-determination. It provides a comprehensive overview of DSD definitions and its key elements, which include human-centered design, robust privacy mandates and data governance, and control over data use to give data subjects the ability to choose how algorithms manipulate their data for autonomous decision-making.
  • Roger Dubach on Digital Self-Determination: This short video presents DSD in the public sector, arguing that the goal is not to create a ‘data-protected’ world but rather to understand how governments can use data efficiently while protecting privacy. Note: this video is part of the Living Syllabus course materials (Digital Self-Determination/Module 1: Beginning Inquiries).

Contracting and Contract Law in the Age of Artificial Intelligence



Book edited by Martin Ebers, Cristina Poncibò, and Mimi Zou: “This book provides original, diverse, and timely insights into the nature, scope, and implications of Artificial Intelligence (AI), especially machine learning and natural language processing, in relation to contracting practices and contract law. The chapters feature unique, critical, and in-depth analysis of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies), the implications for autonomy, consent, and information asymmetries in contracting, and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors.

The contributors represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia, legal practice, policy, and the technology sector. The chapters not only engage with salient theories from different disciplines, but also examine current and potential real-world applications and implications of AI in contracting and explore feasible legal, policy, and technological responses to address the challenges presented by AI in this field.

The book covers major common and civil law jurisdictions, including the EU, Italy, Germany, UK, US, and China. It should be read by anyone interested in the complex and fast-evolving relationship between AI, contract law, and related areas of law such as business, commercial, consumer, competition, and data protection laws….(More)”.

Perspectives on Platform Regulation


Open Access Book edited by Judit Bayer, Bernd Holznagel, Päivi Korpisaari, and Lorna Woods, subtitled “Concepts and Models of Social Media Governance”: “Online social media platforms set the agenda and structure for public and private communication in our age. Their influence and power are beyond any traditional media empire. Their legal regulation is a pressing challenge, but currently, they are mainly governed by economic pressures. There are now diverse legislative attempts to regulate platforms in various parts of the world. The European Union and most of its Member States have historically relied on soft law, but are now looking to introduce regulation.

Leading researchers in the field analyse the hard questions and the responses given by various states. The book offers legislative solutions from various parts of the world, compares regulatory concepts, and assesses the use of algorithms….(More)”.

When FOIA Goes to Court: 20 Years of Freedom of Information Act Litigation by News Organizations and Reporters


Report by The FOIA Project: “The news media are powerful players in the world of government transparency and public accountability. One important tool for ensuring public accountability is invoking the transparency mandates provided by the Freedom of Information Act (FOIA). In 2020, news organizations and individual reporters filed 122 different FOIA suits to compel disclosure of federal government records—more than in any year on record, according to federal court data back to 2001 analyzed by the FOIA Project.

In fact, the media alone filed a total of 386 FOIA cases during the four years of the Trump Administration, from 2017 through 2020. This is greater than the total of 311 FOIA media cases filed during the sixteen years of the Bush and Obama Administrations combined. Moreover, many of these FOIA cases were the very first filed by their plaintiffs: almost as many new FOIA litigators filed their first case in court in the past four years—178 from 2017 to 2020—as in the years 2001 to 2016, when 196 FOIA litigators filed their first case. Reporters made up the majority of these. During the past four years, more than four out of five first-time litigators were individual reporters. The ranks of FOIA litigators thus expanded considerably during the Trump Administration, with more reporters challenging agencies in court for failing to provide records they are seeking, either alone or with their news organizations.

Using the FOIA Project’s unique dataset of FOIA cases filed in federal court, this report provides unprecedented and valuable insight into the rapid growth of media lawsuits designed to make the government more transparent and accountable to the public. The complete, updated list of news media cases, along with the names of organizations and reporters who filed these suits, is available on the News Media List at FOIAProject.org. Figure 1 shows the total number of FOIA cases filed by the news media each year. Counts are available in Appendix Table 1 at the end of this report….(More)”.

Figure 1. Freedom of Information Act (FOIA) Cases Filed by News Organizations and Reporters in Federal Court, 2001–2020.

The Reasonable Robot: Artificial Intelligence and the Law


Book by Ryan Abbott: “AI and people do not compete on a level-playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law….(More)”.

Technical Excellence and Scale


Cory Doctorow at EFF: “In America, we hope that businesses will grow by inventing amazing things that people love – rather than through deep-pocketed catch-and-kill programs in which every competitor is bought and tamed before it can grow to become a threat. We want vibrant, competitive, innovative markets where companies vie to create the best products. Growth solely through merger-and-acquisition helps create a world in which new firms compete to be bought up and absorbed into the dominant players, and customers who grow dissatisfied with a product or service and switch to a “rival” find that they’re still patronizing the same company—just another division.

To put it bluntly: we want companies that are good at making things as well as buying things.

This isn’t the whole story, though.

Small companies with successful products can become victims of their own success. As they are overwhelmed by eager new customers, they are strained beyond their technical and financial limits – for example, they may be unable to buy server hardware fast enough, and unable to lash that hardware together in efficient ways that let them scale up to meet demand.

When we look at the once small, once beloved companies that are now mere divisions of large, widely mistrusted ones—Instagram and Facebook; YouTube and Google; Skype and Microsoft; Dark Sky and Apple—we can’t help but notice that they are running at unimaginable scale, and moreover, they’re running incredibly well.

These services were once plagued with outages, buffering delays, overcapacity errors, slowdowns, and a host of other evils of scale. Today, they run so well that outages are newsworthy events.

There’s a reason for that: big tech companies are really good at being big. Whatever you think of Amazon, you can’t dispute that it gets a lot of parcels from A to B with remarkably few bobbles. Google’s search results arrive in milliseconds, Instagram photos load as fast as you can scroll them, and even Skype is far more reliable than in the pre-Microsoft days. These services have far more users than they ever did as independents, and yet, they are performing better than they did in those early days.

Can we really say that this is merely “buying things” and not also “making things?” Isn’t this innovation? Isn’t this technical accomplishment? It is. Does that mean big = innovative? It does not….(More)”.

MEPs chart path for a European approach to Artificial Intelligence


Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to prevent member states from adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm up to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that there already exist bodies that could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for Artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

Is Law Computable? Critical Perspectives on Law and Artificial Intelligence


Book edited by Simon Deakin and Christopher Markou: “What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place?

This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers, to overlook the more fundamental – and perhaps ‘bigger picture’ – ramifications of computable law…(More)”