How Does the Public Sector Identify Problems It Tries to Solve with AI?


Article by Maia Levy Daniel: “A correct analysis of the implementation of AI in a particular field or process needs to start by identifying whether there actually is a problem to be solved. For instance, in the case of job matching, the problem would be related to the levels of unemployment in the country, and presumably to addressing imbalances in specific fields. Then, would AI be the best way to address this specific problem? Are there any alternatives? Is there any evidence that shows that AI would be a better tool? Building AI systems is expensive and the funds being used by the public sector come from taxpayers. Are there any alternatives that could be less expensive?

Moreover, governments must understand from the outset that these systems could involve potential risks for civil and human rights. Thus, a government should justify in detail why it might be choosing a more expensive or riskier option. A potential guide to follow is the one developed by the UK’s Office for Artificial Intelligence on how to use AI in the public sector. This guide includes a section specifically devoted to assessing whether AI is the right solution to a problem.

AI has become such a buzzword that it is appealing for governments to present it as a solution to any public problem, without even considering available alternatives. Although automation could accelerate decision-making processes, speed should not be prioritized over quality or over the protection of human rights. As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. Incorporating digital technologies into decision-making processes affects the temporal norms and values that govern those processes, disrupting prior norms, re-calibrating balanced trade-offs, or displacing automation’s costs. As Susser suggests, speed is not necessarily bad; however, “using computational tools to speed up (or slow down) certain decisions is not a ‘neutral’ adjustment without further explanations.”

So conducting a thorough diagnosis, one that identifies the specific problem to address and the best way to address it, is key to protecting citizens’ rights. And this is why transparency must be mandatory. As citizens, we have a right to know how these processes are conceived and designed, why governments choose to implement particular technologies, and what risks are involved.

In addition, a good way to ultimately approach the systemic problem and change the structure of incentives may be to stop using the pretentious terms “artificial intelligence”, “AI”, and “machine learning”, as Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law Center, announced the Center would do. As Tucker explained, these terms are confusing for the average person, and the way they are typically employed makes us think that machines, rather than human beings, are making the decisions. By removing marketing terms from the equation and giving more visibility to the humans involved, these technologies may not ultimately seem so exotic…(More)”.

What AI Can Tell Us About Intelligence


Essay by Yann LeCun and Jacob Browning: “If there is one constant in the field of artificial intelligence, it is exaggeration: there is always breathless hype and scornful naysaying. It is helpful to occasionally take stock of where we stand.

The dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data. Since their inception, critics have prematurely argued that neural networks had run into an insurmountable wall — and every time, it proved a temporary hurdle. In the 1960s, they could not solve non-linear functions. That changed in the 1980s with backpropagation, but the new wall was how difficult it was to train the systems. The 1990s saw a rise of simplifying programs and standardized architectures which made training more reliable, but the new problem was the lack of training data and computing power.

In 2012, when neural networks could be trained on the massive ImageNet dataset using contemporary graphics cards, DL went mainstream, handily besting all competitors. But then critics spied a new problem: DL required too much hand-labelled data for training. The last few years have rendered this criticism moot, as self-supervised learning has resulted in incredibly impressive systems, such as GPT-3, which do not require labeled data.

Today’s seemingly insurmountable wall is symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.). Gary Marcus, author of “The Algebraic Mind” and co-author (with Ernie Davis) of “Rebooting AI,” recently argued that DL is incapable of further progress because neural networks struggle with this kind of symbol manipulation. By contrast, many DL researchers are convinced that DL is already engaging in symbolic reasoning and will continue to improve at it.
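
To make the distinction concrete, here is a minimal, illustrative Python sketch (our own, not taken from the essay) of the kind of rule-based symbol manipulation Marcus has in mind: grade-school column addition, in which each step applies a fixed rule and carries a value to the next column. The question under debate is whether such procedures must be hard-coded or can be reliably learned by a neural network.

```python
def add_digit_strings(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings, using only
    explicit, rule-based symbol manipulation on one column at a time."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    # Work from the rightmost column to the left, as the grade-school rule prescribes.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))  # digit that stays in this column
        carry = total // 10             # value carried to the next column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_digit_strings("478", "64"))  # -> "542"
```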

At the heart of this debate are two different visions of the role of symbols in intelligence, both biological and mechanical: one holds that symbolic reasoning must be hard-coded from the outset and the other holds it can be learned through experience, by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence…(More)”.

Non-human humanitarianism: when ‘AI for good’ can be harmful


Paper by Mirca Madianou: “Artificial intelligence (AI) applications have been introduced in humanitarian operations in order to help with the significant challenges the sector is facing. This article focuses on chatbots, which have been proposed as an efficient method to improve communication with, and accountability to, affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of ‘AI for social good’. The article draws on the decolonial critique of humanitarianism and on critical algorithm studies, which both focus on the power asymmetries underpinning humanitarianism and AI. The article asks whether chatbots, as exemplars of ‘AI for good’, reproduce inequalities in the global context. Drawing on a mixed methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as ‘intelligence’. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms, which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that ‘AI for good’ is an ‘enchantment of technology’ that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play…(More)”.

Why You Need an AI Ethics Committee


Article by Reid Blackman: “…There are a lot of well-documented and highly publicized ethical risks associated with AI; unintended bias and invasions of privacy are just two of the most notable kinds. In many instances the risks are specific to particular uses, like the possibility that self-driving cars will run over pedestrians or that AI-generated social media newsfeeds will sow distrust of public institutions. In some cases they’re major reputational, regulatory, financial, and legal threats. Because AI is built to operate at scale, when a problem occurs, it affects all the people the technology engages with—for instance, everyone who responds to a job listing or applies for a mortgage at a bank. If companies don’t carefully address ethical issues in planning and executing AI projects, they can waste a lot of time and money developing software that is ultimately too risky to use or sell, as many have already learned.

Your organization’s AI strategy needs to take into account several questions: How might the AI we design, procure, and deploy pose ethical risks that cannot be avoided? How do we systematically and comprehensively identify and mitigate them? If we ignore them, how much time and labor would it take us to respond to a regulatory investigation? How large a fine might we pay if found guilty, let alone negligent, of violating regulations or laws? How much would we need to spend to rebuild consumer and public trust, provided that money could solve the problem?

The answers to those questions will underscore how much your organization needs an AI ethical risk program. It must start at the executive level and permeate your company’s ranks—and, ultimately, the technology itself. In this article I’ll focus on one crucial element of such a program—an AI ethical risk committee—and explain why it’s critical that it include ethicists, lawyers, technologists, business strategists, and bias scouts. Then I’ll explore what that committee requires to be effective at a large enterprise.

But first, to provide a sense of why such a committee is so important, I’ll take a deep dive into the issue of discriminatory AI. Keep in mind that this is just one of the risks AI presents; there are many others that also need to be investigated in a systematic way…(More)”.

AI Can Predict Potential Nutrient Deficiencies from Space


Article by Rachel Berkowitz: “Micronutrient deficiencies afflict more than two billion people worldwide, including 340 million children. This lack of vitamins and minerals can have serious health consequences. But diagnosing deficiencies early enough for effective treatment requires expensive, time-consuming blood draws and laboratory tests.

New research provides a more efficient approach. Computer scientist Elizabeth Bondi and her colleagues at Harvard University used publicly available satellite data and artificial intelligence to reliably pinpoint geographical areas where populations are at high risk of micronutrient deficiencies. This analysis could potentially pave the way for early public health interventions.

Existing AI systems can use satellite data to predict localized food security issues, but they typically rely on directly observable features. For example, agricultural productivity can be estimated from views of vegetation. Micronutrient availability is harder to calculate. After seeing research showing that areas near forests tend to have better dietary diversity, Bondi and her colleagues were inspired to identify lesser-known markers for potential malnourishment. Their work shows that combining data such as vegetation cover, weather and water presence can suggest where populations will lack iron, vitamin B12 or vitamin A.

The team examined raw satellite measurements and consulted with local public health officials, then used AI to sift through the data and pinpoint key features. For instance, the presence of a food market, inferred from the roads and buildings visible in the imagery, was vital for predicting a community’s risk level. The researchers then linked these features to the specific nutrients lacking in the populations of four regions across Madagascar. They used real-world biomarker data (blood samples tested in labs) to train and test their AI program….(More)”.
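
The article does not include code, but the general workflow it describes, deriving features from satellite imagery and training a model against lab-confirmed biomarker labels, can be sketched roughly as follows. All feature names, the sample size, and the model choice below are illustrative assumptions, not details of the Harvard team’s system.

```python
# Hypothetical sketch: predict micronutrient-deficiency risk from
# satellite-derived features. Features and labels here are synthetic
# placeholders; a real pipeline would use imagery-derived variables and
# lab-tested biomarker data as described in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500  # placeholder number of surveyed communities

X = np.column_stack([
    rng.uniform(0, 1, n),      # vegetation cover (e.g., an NDVI-style index)
    rng.uniform(0, 300, n),    # seasonal rainfall (mm)
    rng.uniform(0, 50, n),     # distance to surface water (km)
    rng.integers(0, 2, n),     # food market inferred from visible roads/buildings
])
y = rng.integers(0, 2, n)      # 1 = high risk of a given deficiency (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```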

10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.

Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI


Open access book by Alessandro Mantelero: “…focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.

The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values.

Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.

The central focus of the book on human rights and societal values in AI and the proposed solutions will make it of interest to legal scholars, AI developers and providers, policy makers and regulators….(More)”.

Prediction machines, insurance, and protection: An alternative perspective on AI’s role in production


Paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: “Recent advances in AI represent improvements in prediction. We examine how decisionmaking and risk management strategies change when prediction improves. The adoption of AI may cause substitution away from risk management activities used when rules are applied (rules require always taking the same action), instead allowing for decisionmaking (choosing actions based on the predicted state). We provide a formal model evaluating the impact of AI and how risk management, stakes, and interrelated tasks affect AI adoption. The broad conclusion is that AI adoption can be stymied by existing processes designed to address uncertainty. In particular, many processes are designed to enable coordinated decisionmaking among different actors in an organization. AI can make coordination even more challenging. However, when the cost of changing such processes falls, then the returns from AI adoption increase….(More)”.
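
The contrast the abstract draws between rules (always taking the same action) and decision-making (acting on a predicted state) can be illustrated with a toy expected-payoff simulation. This is our own illustration with arbitrary numbers, not the paper’s formal model.

```python
# Toy illustration (not the paper's formal model): a "rule" always acts,
# while a decision-maker acts only when an imperfect AI prediction says
# the state is favourable. All probabilities and payoffs are arbitrary.
import random

random.seed(0)
P_GOOD = 0.6       # probability the underlying state is "good"
ACCURACY = 0.85    # assumed accuracy of the AI's state prediction
PAYOFF = {
    ("act", "good"): 10, ("act", "bad"): -8,
    ("wait", "good"): 0, ("wait", "bad"): 0,
}

def average_payoffs(n=100_000):
    rule_total = ai_total = 0
    for _ in range(n):
        state = "good" if random.random() < P_GOOD else "bad"
        # Rule: always take the same action, regardless of the state.
        rule_total += PAYOFF[("act", state)]
        # Decision-making: act only if the (noisy) prediction says "good".
        correct = random.random() < ACCURACY
        predicted = state if correct else ("bad" if state == "good" else "good")
        ai_total += PAYOFF[("act" if predicted == "good" else "wait", state)]
    return rule_total / n, ai_total / n

print(average_payoffs())  # average per-decision payoff: rule vs. prediction-based
```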

AI Ethics: Global Perspectives


New Course Modules: “A Cybernetics Approach to Ethical AI Design” explores the relationship between cybernetics and AI ethics, and looks at how cybernetics can be leveraged to reframe how we think about and how we undertake ethical AI design. This module, by Ellen Broad, Associate Professor and Associate Director at the Australian National University’s School of Cybernetics, is divided into three sections, beginning with an introduction to cybernetics. Following that, we explore different ways of thinking about AI ethics, before concluding by bringing the two concepts together to understand a new approach to ethical AI design.

How should organizations put AI ethics and responsible AI into practice? Is the answer AI ethics principles and AI ethics boards, or should everyone developing AI systems become experts in ethics? In “An Ethics Model for Innovation: The PiE (Puzzle-solving in Ethics) Model”, Cansu Canca, Founder and Director of the AI Ethics Lab, presents the model developed and employed at AI Ethics Lab: the Puzzle-solving in Ethics (PiE) Model. The PiE Model is a comprehensive and structured practice framework for organizations to integrate ethics into their operations as they develop and deploy AI systems. It aims to make ethics a robust and integral part of innovation and to enhance innovation through ethical puzzle-solving.

Nuria Oliver, Co-Founder and Scientific Director of the ELLIS Alicante Unit, presents “Data Science against COVID-19: The Valencian Experience”. In this module, we explore the ELLIS Alicante Foundation’s Data-Science for COVID-19 team’s work in the Valencian region of Spain. The team was founded in response to the pandemic in March 2020 to assist policymakers in making informed, evidence-based decisions. The team tackles four different work areas: modeling human mobility, building computational epidemiological models, developing predictive models of disease prevalence, and operating one of the largest online citizen surveys related to COVID-19 in the world. This lecture explains the four work streams and shares lessons learned from their work at the intersection between data, AI, and the pandemic…(More)”.
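
As a small illustration of what the “computational epidemiological models” work stream involves, here is a minimal SIR (susceptible-infected-recovered) model in Python. The parameters are arbitrary teaching values, not the calibrated models used by the ELLIS Alicante team.

```python
# Minimal SIR epidemic model, shown only as a generic illustration of the
# kind of computational epidemiological model mentioned above; beta and
# gamma are arbitrary, not the team's calibrated parameters.
def sir(S, I, R, beta=0.3, gamma=0.1, days=160):
    N = S + I + R
    history = []
    for _ in range(days):
        new_infections = beta * S * I / N   # daily contacts between S and I that transmit
        new_recoveries = gamma * I          # infected individuals who recover each day
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        history.append((S, I, R))
    return history

trajectory = sir(S=999_000, I=1_000, R=0)
peak_infected = max(i for _, i, _ in trajectory)
print(f"Peak simultaneously infected: {peak_infected:,.0f}")
```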
