Blog by Michael Hallsworth: “Last week, I was in San Francisco for the HumanX conference. Listening to people there pushed me to ask a question that’s been bouncing around in my head with increasing insistence:
What’s the psychological impact of being the human in the loop?
I feel like this issue is a time bomb that could destroy current plans for how AI will be governed. If you listen to any AI policy conversation for more than a few minutes, you’re likely to hear the phrase “human-in-the-loop” (HITL). It’s a catch-all term that provides reassurance and allows us to carry on with the technical discussion. As in the workplace, if we just keep the right people “in the loop,” all will be well.
The idea evokes an image of a capable, watchful person who will intervene expertly if the system goes wrong. Whole governance frameworks are built on top of this comforting picture. For example, Article 14 of the EU AI Act places a set of requirements on human overseers to “prevent or minimise the risks to health, safety or fundamental rights”.
But the Act says nothing about whether these humans will have the skills, attention, or motivation to perform this oversight. Or, even if they can, for how long they can sustain it. Or what the experience would be like.
In other words, we’re not thinking enough about what it actually feels like to be the human in the loop.
I find that gap increasingly hard to ignore because billions (?) of humans-in-the-loop may soon face two contrasting problems that we’ve been neglecting:
- Verification burdens caused by too much cognitive stimulus;
- Vigilance atrophy caused by too little stimulus.
The tricky thing is that these two risks can affect the same person on the same day. Even trickier, they call for almost opposite responses. Here I suggest how we should start tackling this problem…(More)”.