The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.
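
Mechanically, a query like Mizelle’s is a keyword-in-context (KWIC) search filtered by date. The sketch below shows the shape of such a search in Python over a hypothetical corpus of dated sentences; COHA itself is searched through a web interface rather than a programmatic API, and the example texts here are invented for illustration.

```python
import re

# Hypothetical stand-in for a historical corpus: (year, text) pairs.
# The real Corpus of Historical American English is not distributed
# this way; these sentences are invented.
corpus = [
    (1932, "The city invested in sanitation crews to scrub the markets."),
    (1938, "Good sanitation demands the removal of refuse and filth."),
    (1951, "Sanitation standards were kept high throughout the plant."),
]

def concordance(term, start_year, end_year, window=40):
    """Return keyword-in-context hits for `term` within a date range."""
    hits = []
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    for year, text in corpus:
        if not (start_year <= year <= end_year):
            continue
        for m in pattern.finditer(text):
            left = text[max(0, m.start() - window):m.start()]
            right = text[m.end():m.end() + window]
            hits.append((year, f"...{left}[{m.group()}]{right}..."))
    return hits

# Restrict to 1930-1944, as Mizelle did, and inspect usage in context.
for year, line in concordance("sanitation", 1930, 1944):
    print(year, line)
```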

Aligning Artificial Intelligence with Humans through Public Policy


Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
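As a flavor of the “comprehension” task the authors describe, bill-to-company relevance can be crudely approximated with off-the-shelf sentence embeddings. A minimal sketch, assuming the sentence-transformers library is installed; the bill text, company names, and the similarity-as-relevance shortcut are invented illustrations, not the authors’ system.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical inputs: a bill summary and short company descriptions.
bill = ("A bill to require publicly traded companies to disclose "
        "greenhouse-gas emissions across their supply chains.")
companies = {
    "AeroFreight Corp": "Global air-cargo carrier operating a large jet fleet.",
    "MediSoft Inc": "Developer of billing software for dental practices.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
bill_vec = model.encode(bill, convert_to_tensor=True)

# Rank companies by semantic similarity to the bill text: a crude
# proxy for relevance prediction, not the paper's actual method.
for name, desc in companies.items():
    score = util.cos_sim(bill_vec, model.encode(desc, convert_to_tensor=True)).item()
    print(f"{name}: relevance ~ {score:.2f}")
```

On these invented inputs the freight carrier should score as far more exposed to the bill than the software firm, which is the kind of signal such a system would surface.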

The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?


Paper by Larry Bridgesmith and Adel Elmessiry: “We live in an instant-access, on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and adopting the remote collaboration platforms needed to stand in for the stand-ups common in the agile workplace. Online tools are needed to provide visibility into the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which many nascent issues involving emerging technology threaten human rights and well-being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Just as importantly, lawyers will also be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way…(More)”.

Algorithmic monoculture and social welfare


Paper by Jon Kleinberg and Manish Raghavan: “As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under “normal” operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives…(More)”.
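The core effect can be reproduced in a toy Monte Carlo: two firms hire from the same candidate pool using noisy quality scores, either sharing a single scoring draw (monoculture) or drawing noise independently. This is a simplified stand-in for the paper’s probabilistic framework, with Gaussian noise and arbitrary parameters, not the authors’ model.

```python
import random

def total_quality(true_quality, noise, shared):
    """Two firms hire in sequence by noisy score; return summed true quality.

    With `shared=True` both firms see the same noisy scores (monoculture);
    otherwise the second firm draws its own independent noise.
    """
    n = len(true_quality)
    s1 = [q + random.gauss(0, noise) for q in true_quality]
    s2 = s1 if shared else [q + random.gauss(0, noise) for q in true_quality]
    first = max(range(n), key=lambda i: s1[i])
    second = max((i for i in range(n) if i != first), key=lambda i: s2[i])
    return true_quality[first] + true_quality[second]

random.seed(0)
trials = 20_000
totals = {"monoculture": 0.0, "independent": 0.0}
for _ in range(trials):
    pool = [random.gauss(0, 1) for _ in range(10)]  # candidates' true quality
    totals["monoculture"] += total_quality(pool, noise=0.5, shared=True)
    totals["independent"] += total_quality(pool, noise=0.5, shared=False)

for label, t in totals.items():
    print(f"{label}: average welfare {t / trials:.3f}")
```

With these arbitrary settings the independent condition should average slightly higher total welfare, echoing the paper’s point that correlated errors, not external shocks, do the damage; lowering only the monoculture noise parameter lets you probe the stronger claim that even a more accurate shared algorithm can lose.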

Operationalising AI governance through ethics-based auditing: an industry case study


Paper by Jakob Mökander & Luciano Floridi: “Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective…(More)”.

Magic Numbers


Essay by Alana Mohamed: “…The willingness to believe in the “algorithm” as though it were a kind of god is not entirely surprising. New technologies have long been incorporated into spiritual practices, especially during times of mass crisis. In the mid-to-late 19th century, emergent technologies from the lightbulb to the telephone called the limitations of the physical world into question. New spiritual leaders, beliefs, and full-blown religions cropped up, inspired by the invisible electric currents powering scientific developments. If we could summon light and sound by unseen forces, what other invisible specters lurked beneath the surface of everyday life?

The casualties of the U.S. Civil War gave birth to new spiritual practices, including contacting the dead through spirit photography and the telegraph dial. Practices like table rapping used fairly low-tech objects — walls, tables — as conduits to the spirit realm, where ghosts would tap out responses. The rapping noise was reminiscent of Morse code, leading to comparisons with the telegraph. In fact, in 1854, a U.S. senator campaigned for a scientific commission that would establish a “spiritual telegraph” between our world and the spiritual world. (He was unsuccessful.)

William Mumler’s practice of spirit photography is perhaps better known. Mumler claimed that he could photograph a dead relative or loved one when photographing a living subject. His most famous photograph depicts the widowed Mary Todd Lincoln with the shadowy image of her deceased husband holding her shoulder. Though Mumler was widely debunked as a fraud, the practice itself continued, even earning a book written in its defense by Sir Arthur Conan Doyle.

Similar investigations into otherworldly communication and esoteric knowledge would be mainstreamed after World War I, bolstered by the creation of the radio and wireless telegraphy. Amid a boom in table rapping, spirit photography, and the host of usual suspects, Thomas Edison spoke openly about his hopes to create a machine, based on early gramophones, to communicate with the dead, specifically referencing the work of mediums and spiritualists. Radio, in particular, provided a new way to think about the physical and spiritual worlds, with its language of tuning in, channels, frequencies, and wavelengths still employed today…(More)”.

Regulatory Insights on Artificial Intelligence


Book edited by Mark Findlay, Jolyon Ford, Josephine Seah, and Dilan Thampapillai: “This provocative book investigates the relationship between law and artificial intelligence (AI) governance, and the need for new and innovative approaches to regulating AI and big data in ways that go beyond market concerns alone and look to sustainability and social good.
 
Taking a multidisciplinary approach, the contributors demonstrate the interplay between various research methods, and policy motivations, to show that law-based regulation and governance of AI is vital to efforts at ensuring justice, trust in administrative and contractual processes, and inclusive social cohesion in our increasingly technologically-driven societies. The book provides valuable insights on the new challenges posed by a rapid reliance on AI and big data, from data protection regimes around sensitive personal data, to blockchain and smart contracts, platform data reuse, IP rights and limitations, and many other crucial concerns for law’s interventions. The book also engages with concerns about the ‘surveillance society’, for example regarding contact tracing technology used during the Covid-19 pandemic.
 
The analytical approach provided will make this an excellent resource for scholars and educators, legal practitioners (from constitutional law to contract law) and policy makers within regulation and governance. The empirical case studies will also be of great interest to scholars of technology law and public policy. The regulatory community will find this collection offers an influential case for law’s relevance in giving institutional enforceability to ethics and principled design…(More)”.

Artificial intelligence is breaking patent law


Article by Alexandra George & Toby Walsh: “In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature https://doi.org/ggm2p4; 2020). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced.

Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine. Courts around the world are wrestling with this problem now as patent applications naming an AI system as the inventor have been lodged in more than 100 countries [1]. Several groups are conducting public consultations on AI and intellectual property (IP) law, including in the United States, United Kingdom and Europe.

If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge. Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.

Rather than forcing old patent laws to accommodate new technology, we propose that national governments design bespoke IP law — AI-IP — that protects AI-generated inventions. Nations should also create an international treaty to ensure that these laws follow standardized principles, and that any disputes can be resolved efficiently. Researchers need to inform both steps…(More)”.

The Frontlines of Artificial Intelligence Ethics


Book edited by Andrew J. Hampton, and Jeanine A. DeFalco: “This foundational text examines the intersection of AI, psychology, and ethics, laying the groundwork for the importance of ethical considerations in the design and implementation of technologically supported education, decision support, and leadership training.

AI already affects our lives profoundly, in ways both mundane and sensational, obvious and opaque. Much academic and industrial effort has considered the implications of this AI revolution from technical and economic perspectives, but the more personal, humanistic impact of these changes has often been relegated to anecdotal evidence in service to a broader frame of reference. Offering a unique perspective on the emerging social relationships between people and AI agents and systems, Hampton and DeFalco present cutting-edge research from leading academics, professionals, and policy standards advocates on the psychological impact of the AI revolution. Structured into three parts, the book explores the history of data science, technology in education, and combatting machine learning bias, as well as future directions for the emerging field, bringing the research into the active consideration of those in positions of authority.

Exploring how AI can support expert, creative, and ethical decision making in both people and virtual human agents, this is essential reading for students, researchers, and professionals in AI, psychology, ethics, engineering education, and leadership, particularly military leadership…(More)”.

How the Pandemic Made Algorithms Go Haywire


Article by Ravi Parikh and Amol Navathe: “Algorithms have always had some trouble getting things right—hence the fact that ads often follow you around the internet for something you’ve already purchased.

But since COVID upended our lives, more of these algorithms have misfired, harming millions of Americans and widening existing financial and health disparities facing marginalized groups. At times, this was because we humans weren’t using the algorithms correctly. More often it was because COVID changed life in a way that made the algorithms malfunction.

Take, for instance, an algorithm used by dozens of hospitals in the U.S. to identify patients with sepsis—a life-threatening consequence of infection. It was supposed to help doctors speed up transfer to the intensive care unit. But starting in spring of 2020, the patients who showed up to the hospital suddenly changed due to COVID. Many of the variables that went into the algorithm—oxygen levels, age, comorbid conditions—were completely different during the pandemic. So the algorithm couldn’t effectively discern sicker from healthier patients, and consequently it flagged more than twice as many patients as “sick,” even though hospital capacity was 35 percent lower than normal. The result was presumably more instances of doctors and nurses being summoned to the patient bedside. It’s possible all of these alerts were necessary; after all, more patients were sick. However, it’s also possible that many of these alerts were false alarms because the types of patients showing up to the hospital were different. Either way, this threatened to overwhelm physicians and hospitals. This “alert overload” was discovered months into the pandemic and led the University of Michigan health system to shut down its use of the algorithm…(More)”.
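The failure mode described here is covariate shift: a rule calibrated on one patient mix fires far more often on another, even though the rule itself never changed. A minimal sketch, with all vitals, rates, and thresholds invented rather than drawn from any real sepsis model:

```python
import random

random.seed(1)

def score(p):
    # Toy risk score: one point per abnormal feature, echoing how simple
    # sepsis screens sum flags for oxygen level, age, comorbidities, etc.
    return (p["spo2"] < 92) + (p["age"] > 65) + (p["comorbidities"] >= 2)

def alert_rate(patients, threshold=2):
    """Fraction of patients whose toy risk score crosses the alert threshold."""
    return sum(score(p) >= threshold for p in patients) / len(patients)

def draw_cohort(n, low_spo2_rate):
    return [{"spo2": 88 if random.random() < low_spo2_rate else 97,
             "age": random.randint(20, 90),
             "comorbidities": random.randint(0, 3)}
            for _ in range(n)]

# The scoring rule is fixed; only the patient mix changes.
pre_pandemic = draw_cohort(10_000, low_spo2_rate=0.10)
pandemic = draw_cohort(10_000, low_spo2_rate=0.45)  # far more hypoxic patients

print(f"pre-pandemic alert rate: {alert_rate(pre_pandemic):.2%}")
print(f"pandemic alert rate:     {alert_rate(pandemic):.2%}")
```

With these invented numbers the alert rate climbs sharply under the shifted cohort; the mechanism, not the magnitude, is the point.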