10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.

Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI


Open access book by Alessandro Mantelero: “…focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.

The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values.

Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.

The central focus of the book on human rights and societal values in AI and the proposed solutions will make it of interest to legal scholars, AI developers and providers, policy makers and regulators….(More)”.

Prediction machines, insurance, and protection: An alternative perspective on AI’s role in production


Paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: “Recent advances in AI represent improvements in prediction. We examine how decision-making and risk management strategies change when prediction improves. The adoption of AI may cause substitution away from risk management activities used when rules are applied (rules require always taking the same action), instead allowing for decision-making (choosing actions based on the predicted state). We provide a formal model evaluating the impact of AI and how risk management, stakes, and interrelated tasks affect AI adoption. The broad conclusion is that AI adoption can be stymied by existing processes designed to address uncertainty. In particular, many processes are designed to enable coordinated decision-making among different actors in an organization. AI can make coordination even more challenging. However, when the cost of changing such processes falls, the returns from AI adoption increase….(More)”.
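The rules-versus-decisions distinction at the heart of the paper can be made concrete with a toy simulation. The sketch below is ours, not the authors’ formal model, and the payoff numbers are invented: a fixed rule always takes the safe action, while a decision-maker acts on an AI prediction of the state. As prediction accuracy rises, deciding overtakes the rule.

```python
import random

# Illustrative payoffs (invented for this sketch, not from the paper):
# "umbrella" is the safe, rule-friendly action; "no_umbrella" pays more
# when the state is sunny but badly when it rains.
PAYOFF = {
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 1.0,
    ("no_umbrella", "rain"): -2.0, ("no_umbrella", "sun"): 3.0,
}

def simulate(accuracy, p_rain=0.4, trials=100_000):
    """Average payoff of a fixed rule vs. acting on an AI prediction
    that identifies the true state with probability `accuracy`."""
    rule_total = decision_total = 0.0
    for _ in range(trials):
        state = "rain" if random.random() < p_rain else "sun"
        rule_total += PAYOFF[("umbrella", state)]  # rule: same action, every time
        correct = random.random() < accuracy
        predicted = state if correct else ("sun" if state == "rain" else "rain")
        action = "umbrella" if predicted == "rain" else "no_umbrella"
        decision_total += PAYOFF[(action, state)]
    return rule_total / trials, decision_total / trials

for acc in (0.5, 0.8, 0.95):
    rule, decision = simulate(acc)
    print(f"prediction accuracy {acc:.2f}: rule {rule:+.2f}, decision {decision:+.2f}")
```

At accuracy 0.5 the prediction is uninformative and the two strategies tie; as accuracy improves, the decision strategy pulls ahead, which is why better prediction machines shift organizations away from rules.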

AI Ethics: Global Perspectives


New Course Modules: “A Cybernetics Approach to Ethical AI Design” explores the relationship between cybernetics and AI ethics, and looks at how cybernetics can be leveraged to reframe how we think about and undertake ethical AI design. This module, by Ellen Broad, Associate Professor and Associate Director at the Australian National University’s School of Cybernetics, is divided into three sections, beginning with an introduction to cybernetics. Following that, we explore different ways of thinking about AI ethics, before concluding by bringing the two concepts together to understand a new approach to ethical AI design.

How should organizations put AI ethics and responsible AI into practice? Is the answer AI ethics principles and AI ethics boards, or should everyone developing AI systems become an expert in ethics? In An Ethics Model for Innovation: The PiE (Puzzle-solving in Ethics) Model, Cansu Canca, Founder and Director of the AI Ethics Lab, presents the model developed and employed at the AI Ethics Lab: the Puzzle-solving in Ethics (PiE) Model. The PiE Model is a comprehensive and structured practice framework for organizations to integrate ethics into their operations as they develop and deploy AI systems. It aims to make ethics a robust and integral part of innovation and to enhance innovation through ethical puzzle-solving.

Nuria Oliver, Co-Founder and Scientific Director of the ELLIS Alicante Unit, presents “Data Science against COVID-19: The Valencian Experience”. In this module, we explore the work of the ELLIS Alicante Foundation’s Data Science for COVID-19 team in the Valencian region of Spain. The team was founded in response to the pandemic in March 2020 to assist policymakers in making informed, evidence-based decisions. It tackles four work areas: modeling human mobility, building computational epidemiological models, developing predictive models of disease prevalence, and operating one of the largest online citizen surveys related to COVID-19 in the world. This lecture explains the four work streams and shares lessons learned from their work at the intersection of data, AI, and the pandemic…(More)”.
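For readers unfamiliar with the epidemiological-modeling work stream, a compartmental model is the canonical starting point. The sketch below is a generic SIR (susceptible-infected-recovered) model, offered purely as an illustration of what such models compute; it is not the ELLIS Alicante team’s code, and their actual models, which incorporate mobility data and survey responses, are far richer.

```python
# A minimal SIR epidemic model: Euler integration of
#   dS/dt = -beta*S*I,   dI/dt = beta*S*I - gamma*I,   R = 1 - S - I
# All parameters below are illustrative, not fitted to any real data.
def sir(beta, gamma, s0, i0, days, dt=0.1):
    s, i = s0, i0
    trajectory = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i = s - new_infections, i + new_infections - recoveries
        if step % int(1 / dt) == 0:          # record once per simulated day
            trajectory.append((s, i, 1.0 - s - i))
    return trajectory

# Example run: R0 = beta/gamma = 2.5, with 0.1% of the population infected.
trajectory = sir(beta=0.25, gamma=0.1, s0=0.999, i0=0.001, days=120)
for day in range(0, 120, 20):
    s, i, r = trajectory[day]
    print(f"day {day:3d}: S={s:.3f} I={i:.3f} R={r:.3f}")
```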

The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”
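The judge’s method, pulling dated occurrences of a word and reading their contexts, is at its core a concordance search. Here is a toy sketch of that operation; the example sentences are invented, and the real Corpus of Historical American English (COHA) is queried through its own web interface rather than code like this.

```python
import re

# Hypothetical (year, sentence) records standing in for a historical corpus.
corpus = [
    (1932, "The city improved sanitation by chlorinating the water supply."),
    (1938, "Officials praised the sanitation of the new hospital wards."),
    (1951, "Sanitation crews cleaned the streets after the parade."),
]

def concordance(corpus, word, start, end):
    """Return (year, sentence) pairs containing `word` between `start` and `end`."""
    pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    return [(year, text) for year, text in corpus
            if start <= year <= end and pattern.search(text)]

# Replicate the shape of the judge's query: every use of "sanitation", 1930-1944.
for year, text in concordance(corpus, "sanitation", 1930, 1944):
    print(year, "-", text)
```

Note what the tool does and does not do: it retrieves dated usage examples, but the interpretive leap, deciding whether those contexts show “actively making clean” rather than “keeping clean,” remains entirely the reader’s.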

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.

Aligning Artificial Intelligence with Humans through Public Policy


Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI that learns structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that goal, and describe two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
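As a rough intuition for the case study’s “comprehension” task, consider the simplest possible baseline: scoring bill-to-company relevance by text similarity. The sketch below is our illustration, not the authors’ system; the company profile and bill texts are invented, and a genuine system would need far more than the TF-IDF lexical overlap used here.

```python
# Toy baseline: rank hypothetical bills by textual similarity to a company profile.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

company_profile = ("Airline operating domestic passenger flights, "
                   "subject to aviation safety and fuel regulations.")
bills = {
    "HB-1 (hypothetical)": "A bill to tax aviation fuel and fund airport safety inspections.",
    "HB-2 (hypothetical)": "A bill concerning dairy labeling standards for cheese producers.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([company_profile, *bills.values()])
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for (name, _), score in zip(bills.items(), scores):
    print(f"{name}: relevance={score:.2f}")  # higher = more lexically relevant
```

Even this crude baseline separates the aviation bill from the dairy bill; the gap between it and the authors’ ambition is the gap they describe between “comprehension” and “understanding”.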

The Digital Transformation of Law: Are We Prepared for Artificially Intelligent Legal Practice?


Paper by Larry Bridgesmith and Adel Elmessiry: “We live in an instant access and on-demand world of information sharing. The global pandemic of 2020 accelerated the necessity of remote working and team collaboration. Work teams are exploring and utilizing the remote work platforms required to serve in place of stand-ups common in the agile workplace. Online tools are needed to provide visibility to the status of projects and the accountability necessary to ensure that tasks are completed on time and on budget. Digital transformation of organizational data is now the target of AI projects to provide enterprise transparency and predictive insights into the process of work.

This paper develops the relationship between AI, law, and the digital transformation sweeping every industry sector. There is legitimate concern about the degree to which many nascent issues involving emerging technology oppose human rights and well-being. However, lawyers will play a critical role in both the prosecution and defense of these rights. Equally, if not more so, lawyers will be a vibrant source of insight and guidance for the development of “ethical” AI in a proactive—not simply reactive—way….(More)”.

Algorithmic monoculture and social welfare


Paper by Jon Kleinberg and Manish Raghavan: “As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under “normal” operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives…(More)”.
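The paper’s central claim, that correlated errors rather than shocks are what make monoculture costly, lends itself to a quick Monte Carlo check. The sketch below is our illustrative setup with invented parameters, not the paper’s formal model: two firms hire in sequence from one applicant pool, and we compare the total true quality of hires when both firms rank by the same noisy score versus each drawing its own independent noisy score.

```python
import random

def hires_quality(shared, n_applicants=20, noise=1.0, trials=20_000):
    """Average total true quality of two sequential hires.
    shared=True  -> monoculture: both firms rank by one noisy score.
    shared=False -> diversity: each firm draws an independent noisy score."""
    total = 0.0
    for _ in range(trials):
        quality = [random.gauss(0, 1) for _ in range(n_applicants)]
        score1 = [q + random.gauss(0, noise) for q in quality]
        score2 = score1 if shared else [q + random.gauss(0, noise) for q in quality]
        first = max(range(n_applicants), key=lambda k: score1[k])  # firm 1's pick
        rest = [k for k in range(n_applicants) if k != first]
        second = max(rest, key=lambda k: score2[k])                # firm 2's pick
        total += quality[first] + quality[second]
    return total / trials

print(f"monoculture: avg total quality = {hires_quality(shared=True):.3f}")
print(f"diverse:     avg total quality = {hires_quality(shared=False):.3f}")
```

With these illustrative parameters the diverse condition typically yields higher total quality, echoing the paper’s point: when both firms trust the same algorithm, its mistakes (an overrated mediocre candidate, say) propagate to every decision-maker at once, with no unexpected shock required.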

Operationalising AI governance through ethics-based auditing: an industry case study


Paper by Jakob Mökander & Luciano Floridi: “Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective…(More)”.