AI Ethics: Seven Traps


Blog Post by Annette Zimmermann and Bendert Zevenbergen: “… In what follows, we outline seven ‘AI ethics traps’. In doing so, we hope to provide a resource for readers who want to understand and navigate the public debate on the ethics of AI better, who want to contribute to ongoing discussions in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly. Of course, not everybody who contributes to the current debate on AI Ethics is guilty of endorsing any or all of these traps: the traps articulate extreme versions of a range of possible misconceptions, formulated in a deliberately strong way to highlight the ways in which one might prematurely dismiss ethical reasoning about AI as futile.

1. The reductionism trap:

“Doing the morally right thing is essentially the same as acting in a fair way (or: transparent, or egalitarian, or <substitute any other value>). So ethics is the same as fairness (or transparency, or equality, etc.). If we’re being fair, then we’re being ethical.”

            Even though the problem of algorithmic bias and its unfair impact on decision outcomes is an urgent problem, it does not exhaust the ethical problem space. As important as algorithmic fairness is, it is crucial to avoid reducing ethics to a fairness problem alone. Instead, it is important to pay attention to how the ethically valuable goal of optimizing for a specific value like fairness interacts with other important ethical goals. Such goals could include—amongst many others—the goal of creating transparent and explainable systems which are open to democratic oversight and contestation, the goal of improving the predictive accuracy of machine learning systems, the goal of avoiding paternalistic infringements of autonomy rights, or the goal of protecting the privacy interests of data subjects. Sometimes, these different values may conflict: we cannot always optimize for everything at once. This makes it all the more important to adopt a sufficiently rich, pluralistic view of the full range of relevant ethical values at stake—only then can one reflect critically on what kinds of ethical trade-offs one may have to confront.

2. The simplicity trap:

“In order to make ethics practical and action-guiding, we need to distill our moral framework into a user-friendly compliance checklist. After we’ve decided on a particular path of action, we’ll go through that checklist to make sure that we’re being ethical.”

            Given the high visibility and urgency of ethical dilemmas arising in the context of AI, it is not surprising that there are more and more calls to develop actionable AI ethics checklists. For instance, a 2018 draft report by the European Commission’s High-Level Expert Group on Artificial Intelligence specifies a preliminary ‘assessment list’ for ‘trustworthy AI’. While the report plausibly acknowledges that such an assessment list must be context-sensitive and that it is not exhaustive, it nevertheless identifies a list of ten fixed ethical goals, including privacy and transparency. But can and should ethical values be articulated in a checklist in the first place? It is worth examining this underlying assumption critically. After all, a checklist implies a one-off review process: on that view, developers or policy-makers could determine whether a particular system is ethically defensible at a specific moment in time, and then move on without confronting any further ethical concerns once the checklist criteria have been satisfied. But ethical reasoning cannot be a static one-off assessment: it requires an ongoing process of reflection, deliberation, and contestation. Simplicity is good—but the willingness to reconsider simple frameworks, when required, is better. Setting a fixed ethical agenda ahead of time risks obscuring new ethical problems that may arise at a later point in time, or ongoing ethical problems that become apparent to human decision-makers only later.

3. The relativism trap:

“We all disagree about what is morally valuable, so it’s pointless to imagine that there is a universal baseline against which we can evaluate moral choices. Nothing is objectively morally good: things can only be morally good relative to each person’s individual value framework.”

            Public discourse on the ethics of AI frequently produces little more than an exchange of personal opinions or institutional positions. In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile: some ethical arguments about AI may ultimately be more persuasive than others. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some paths of action that are comparatively better (if not optimal all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement. Thus, it is possible and indeed constructive for AI ethics to welcome value pluralism, without collapsing into extreme value relativism.

4. The value alignment trap:

“If relativism is wrong (see #3), there must be one morally right answer. We need to find that right answer, and ensure that everyone in our organisation acts in alignment with that answer. If our ethical reasoning leads to moral disagreement, that means that we have failed.”…(More)”.