NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST
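The weighting idea in the caption can be made concrete with a small, purely illustrative sketch. Nothing below comes from the NIST draft: the identical per-factor scores, the numeric risk thresholds, and the simple decision rule are all hypothetical placeholders used only to show how two systems with the same nine-factor profile could still earn different levels of user trust.

```python
from statistics import mean

# Hypothetical scores (0-1) on the nine trust factors; here both AI systems
# score identically on every factor (values are placeholders, not NIST data).
factor_scores = [0.9] * 9

def willing_to_trust(scores, task_risk):
    """Toy decision rule: the user accepts the AI's output only if the
    overall factor score clears a threshold that rises with task risk."""
    return mean(scores) >= task_risk

# Same scores, different tasks: the low-risk recommendation is trusted,
# the high-risk diagnosis is not (risk values are illustrative).
print(willing_to_trust(factor_scores, task_risk=0.2))   # music selection  -> True
print(willing_to_trust(factor_scores, task_risk=0.95))  # cancer diagnosis -> False
```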

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.