Nonhuman humanitarianism: when ‘AI for good’ can be harmful


Paper by Mirca Madianou: “Artificial intelligence (AI) applications have been introduced in humanitarian operations in order to help with the significant challenges the sector is facing. This article focuses on chatbots which have been proposed as an efficient method to improve communication with, and accountability to affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of ‘AI for social good’. The article develops a decolonial critique of humanitarianism and critical algorithm studies which focuses on the power asymmetries underpinning both humanitarianism and AI. The article asks whether chatbots, as exemplars of ‘AI for good’, reproduce inequalities in the global context. Drawing on a mixed methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as ‘intelligence’. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that ‘AI for good’ is an ‘enchantment of technology’ that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play…(More)”.

Why You Need an AI Ethics Committee


Article by Reid Blackman: “…There are a lot of well-documented and highly publicized ethical risks associated with AI; unintended bias and invasions of privacy are just two of the most notable kinds. In many instances the risks are specific to particular uses, like the possibility that self-driving cars will run over pedestrians or that AI-generated social media newsfeeds will sow distrust of public institutions. In some cases they’re major reputational, regulatory, financial, and legal threats. Because AI is built to operate at scale, when a problem occurs, it affects all the people the technology engages with—for instance, everyone who responds to a job listing or applies for a mortgage at a bank. If companies don’t carefully address ethical issues in planning and executing AI projects, they can waste a lot of time and money developing software that is ultimately too risky to use or sell, as many have already learned.

Your organization’s AI strategy needs to take into account several questions: How might the AI we design, procure, and deploy pose ethical risks that cannot be avoided? How do we systematically and comprehensively identify and mitigate them? If we ignore them, how much time and labor would it take us to respond to a regulatory investigation? How large a fine might we pay if found guilty, let alone negligent, of violating regulations or laws? How much would we need to spend to rebuild consumer and public trust, provided that money could solve the problem?

The answers to those questions will underscore how much your organization needs an AI ethical risk program. It must start at the executive level and permeate your company’s ranks—and, ultimately, the technology itself. In this article I’ll focus on one crucial element of such a program—an AI ethical risk committee—and explain why it’s critical that it include ethicists, lawyers, technologists, business strategists, and bias scouts. Then I’ll explore what that committee requires to be effective at a large enterprise.

But first, to provide a sense of why such a committee is so important, I’ll take a deep dive into the issue of discriminatory AI. Keep in mind that this is just one of the risks AI presents; there are many others that also need to be investigated in a systematic way…(More)”.

AI Can Predict Potential Nutrient Deficiencies from Space


Article by Rachel Berkowitz: “Micronutrient deficiencies afflict more than two billion people worldwide, including 340 million children. This lack of vitamins and minerals can have serious health consequences. But diagnosing deficiencies early enough for effective treatment requires expensive, time-consuming blood draws and laboratory tests.

New research provides a more efficient approach. Computer scientist Elizabeth Bondi and her colleagues at Harvard University used publicly available satellite data and artificial intelligence to reliably pinpoint geographical areas where populations are at high risk of micronutrient deficiencies. This analysis could potentially pave the way for early public health interventions.

Existing AI systems can use satellite data to predict localized food security issues, but they typically rely on directly observable features. For example, agricultural productivity can be estimated from views of vegetation. Micronutrient availability is harder to calculate. After seeing research showing that areas near forests tend to have better dietary diversity, Bondi and her colleagues were inspired to identify lesser-known markers for potential malnourishment. Their work shows that combining data such as vegetation cover, weather and water presence can suggest where populations will lack iron, vitamin B12 or vitamin A.

The team examined raw satellite measurements and consulted with local public health officials, then used AI to sift through the data and pinpoint key features. For instance, the presence of a food market, inferred from visible roads and buildings, was vital for predicting a community’s risk level. The researchers then linked these features to specific nutrients lacking in four regions’ populations across Madagascar. They used real-world biomarker data (blood samples tested in labs) to train and test their AI program…(More)”.
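The pipeline the article describes — satellite-derived features paired with lab-tested biomarker labels, then a supervised model trained and evaluated on held-out regions — can be sketched as follows. This is a minimal illustration, not the Harvard team's system: the feature names, synthetic data, and plain logistic regression are all assumptions made for the example.

```python
# Hypothetical sketch: predicting micronutrient-deficiency risk from
# satellite-derived features. Feature set [vegetation, rainfall, water,
# market_access] and all data are invented for illustration; the study
# used real satellite measurements and blood-sample biomarker labels.
import math
import random

random.seed(0)

def make_region(at_risk):
    # Synthetic feature vector in [0, 1]: at-risk regions get lower
    # vegetation cover, rainfall, water presence, and market access.
    base = [0.3, 0.4, 0.2, 0.1] if at_risk else [0.7, 0.6, 0.8, 0.9]
    return [min(1.0, max(0.0, b + random.gauss(0, 0.1))) for b in base]

# Label 1 = deficient (as a biomarker test would show), 0 = not deficient.
data = [(make_region(True), 1) for _ in range(50)] + \
       [(make_region(False), 0) for _ in range(50)]
random.shuffle(data)
train, test = data[:80], data[80:]

# Minimal logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(200):
    for x, y in train:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y                      # gradient of log-loss w.r.t. logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# Evaluate on the held-out regions.
correct = sum(
    ((1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))) > 0.5)
    == (y == 1)
    for x, y in test
)
accuracy = correct / len(test)
```

The point of the sketch is the task shape: indirect environmental markers stand in for an outcome that is expensive to measure directly, with the expensive measurement (blood tests) used only to supply training and validation labels.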

10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.

Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI


Open access book by Alessandro Mantelero: “…focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.

The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values.

Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.

The central focus of the book on human rights and societal values in AI and the proposed solutions will make it of interest to legal scholars, AI developers and providers, policy makers and regulators…(More)”.

Prediction machines, insurance, and protection: An alternative perspective on AI’s role in production


Paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: “Recent advances in AI represent improvements in prediction. We examine how decisionmaking and risk management strategies change when prediction improves. The adoption of AI may cause substitution away from risk management activities used when rules are applied (rules require always taking the same action), instead allowing for decisionmaking (choosing actions based on the predicted state). We provide a formal model evaluating the impact of AI and how risk management, stakes, and interrelated tasks affect AI adoption. The broad conclusion is that AI adoption can be stymied by existing processes designed to address uncertainty. In particular, many processes are designed to enable coordinated decisionmaking among different actors in an organization. AI can make coordination even more challenging. However, when the cost of changing such processes falls, then the returns from AI adoption increase….(More)”.
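The abstract's core trade-off — a fixed rule versus acting on an imperfect prediction of the state of the world — can be made concrete with a toy expected-payoff comparison. This is not the authors' formal model; the payoff numbers and the two-state setup are assumptions chosen only to show why better prediction shifts the balance from rules to decisionmaking.

```python
# Hedged toy model: a "rule" always takes the safe action; a
# "decisionmaker" takes a risky action when a predictor says the state
# is good. All payoffs and probabilities are illustrative assumptions.

def expected_payoff_rule(payoff_safe):
    # A rule ignores the state entirely: the same safe payoff either way.
    return payoff_safe

def expected_payoff_decision(p_good, accuracy, payoff_risky_good,
                             payoff_risky_bad, payoff_safe):
    # Act on the prediction: take the risky action when the predictor
    # says "good state", otherwise fall back to the safe action.
    p_risky_right = p_good * accuracy              # good state, predicted good
    p_risky_wrong = (1 - p_good) * (1 - accuracy)  # bad state, predicted good
    p_safe = 1 - p_risky_right - p_risky_wrong     # predictor says "bad"
    return (p_risky_right * payoff_risky_good
            + p_risky_wrong * payoff_risky_bad
            + p_safe * payoff_safe)

# With a weak predictor the rule wins; as prediction improves,
# decisionmaking overtakes it.
rule = expected_payoff_rule(payoff_safe=2)
weak = expected_payoff_decision(0.5, 0.55, 10, -20, 2)
strong = expected_payoff_decision(0.5, 0.95, 10, -20, 2)
```

Under these numbers the 55%-accurate predictor does worse than the rule, while the 95%-accurate one does better, which mirrors the paper's claim that improved prediction causes substitution away from rule-based risk management.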

AI Ethics: Global Perspectives


New Course Modules: “A Cybernetics Approach to Ethical AI Design” explores the relationship between cybernetics and AI ethics, and looks at how cybernetics can be leveraged to reframe how we think about and how we undertake ethical AI design. This module, by Ellen Broad, Associate Professor and Associate Director at the Australian National University’s School of Cybernetics, is divided into three sections, beginning with an introduction to cybernetics. Following that, we explore different ways of thinking about AI ethics, before concluding by bringing the two concepts together to understand a new approach to ethical AI design.

How should organizations put AI ethics and responsible AI into practice? Is the answer AI ethics principles and AI ethics boards or should everyone developing AI systems become experts in ethics? In An Ethics Model for Innovation: The PiE (Puzzle-solving in Ethics) Model, Cansu Canca, Founder and Director of the AI Ethics Lab, presents the model developed and employed at AI Ethics Lab: The Puzzle-solving in Ethics (PiE) Model. The PiE Model is a comprehensive and structured practice framework for organizations to integrate ethics into their operations as they develop and deploy AI systems. The PiE Model aims to make ethics a robust and integral part of innovation and enhance innovation through ethical puzzle-solving.

Nuria Oliver, Co-Founder and Scientific Director of the ELLIS Alicante Unit, presents “Data Science against COVID-19: The Valencian Experience”. In this module, we explore the ELLIS Alicante Foundation’s Data-Science for COVID-19 team’s work in the Valencian region of Spain. The team was founded in response to the pandemic in March 2020 to assist policymakers in making informed, evidence-based decisions. The team tackles four different work areas: modeling human mobility, building computational epidemiological models, predictive models on the prevalence of the disease, and operating one of the largest online citizen surveys related to COVID-19 in the world. This lecture explains the four work streams and shares lessons learned from their work at the intersection between data, AI, and the pandemic…(More)”.

The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.
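The basic operation behind the judge's query — pulling every occurrence of a word with its surrounding context from a dated corpus — is a keyword-in-context (KWIC) concordance search. The sketch below illustrates it on a three-sentence toy "corpus" invented for the example; real tools like the Corpus of Historical American English add date filtering, genre metadata, and collocate statistics on top of this.

```python
# Minimal keyword-in-context (KWIC) concordance search over a toy corpus.
# The corpus sentences are invented for illustration.
import re

corpus = [
    "The city improved sanitation by collecting refuse daily.",
    "Proper sanitation of hospital wards prevented outbreaks.",
    "Officials debated new sanitation measures for the railways.",
]

def kwic(corpus, keyword, window=3):
    """Return each occurrence of `keyword` with up to `window` words
    of context on each side, as (left, keyword, right) tuples."""
    hits = []
    for sentence in corpus:
        words = re.findall(r"\w+", sentence.lower())
        for i, w in enumerate(words):
            if w == keyword:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                hits.append((left, w, right))
    return hits

hits = kwic(corpus, "sanitation")
```

Reading down the concordance lines is how a corpus linguist (or a textualist judge) judges whether a word's historical usage leans toward one sense — here, "actively making clean" versus "keeping clean".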

Aligning Artificial Intelligence with Humans through Public Policy



Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
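The "comprehension" task in the case study — scoring how relevant a bill's text is to a particular company — can be illustrated at its simplest as text similarity. The sketch below is only a shape-of-the-problem example under stated assumptions: the company description, bill texts, and bag-of-words cosine scoring are all invented here, and the Essay's actual system is far more sophisticated than word overlap.

```python
# Hedged sketch: scoring the relevance of proposed legislation to a
# company via bag-of-words cosine similarity. All texts are invented;
# this only illustrates the task, not the paper's method.
import math
import re
from collections import Counter

def vectorize(text):
    """Lowercase bag-of-words term counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

company = "An airline operating passenger flights and cargo transport."
bill_a = "A bill to regulate emissions from passenger flights and airline fuel."
bill_b = "A bill concerning dairy subsidies for farm cooperatives."

score_a = cosine(vectorize(company), vectorize(bill_a))
score_b = cosine(vectorize(company), vectorize(bill_b))
```

A ranked list of such scores is the kind of downstream output the Essay calls "comprehension"; moving from this to "understanding" policy as a source of human values is the open problem the authors outline.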