Article by Sara Frueh: “State and local governments around the U.S. are harnessing AI for a range of applications, such as translating public meetings into multiple languages in real time to broaden participation, or using chatbots to deliver services to the public.
While AI systems can offer benefits to agencies and the people they serve, the technology can also be harmful if misapplied. In one high-profile example, around 40,000 Michigan residents were wrongly accused of unemployment insurance fraud based on a state AI system with a faulty algorithm and inadequate human oversight.
“We have to think about a lot of AI systems as potentially useful and quite often unreliable, and treat them as such,” said Suresh Venkatasubramanian of Brown University, co-author of a recent National Academies rapid expert consultation on AI use by state and local governments.
He urged state and local leaders to avoid extreme hype about AI, regarding both its promise and its dangers, and instead to take a careful, experimental approach. “We have to embrace an ethos of experimentation and sandboxing, where we can understand how they work in our specific contexts.”
Venkatasubramanian spoke at a National Academies webinar that explored the report’s recommendations and other AI-related resources for state and local governments. He was joined by fellow co-author Nathan McNeese of Clemson University, and Leila Doty, a privacy and AI analyst for the city of San José, California, along with Kate Stoll of the American Association for the Advancement of Science, who moderated the session.
In considering whether to implement AI, McNeese advised state and city agencies to start by asking, “What’s the problem?” or “What’s the aspect of the organization that we want to enhance?”
“You do not want to introduce AI if there is not a specific need,” said McNeese. “You don’t want to implement AI because everyone else is.”
The point was seconded by Venkatasubramanian. “If you have a problem that needs to be solved, figure out what people need to solve it,” he said. “Maybe AI can be a part of it, maybe not. Don’t start by asking, ‘How can we bring AI to this?’ That way leads to problems.”
When AI is used, the report urges a human-centered approach to designing it — one that takes into account the needs, wants, and motivations of the people who will use and be affected by the system, explained McNeese.
Those who have domain expertise — employees who provide services of value to the public — should be involved in determining where AI tools might and might not be useful, said Venkatasubramanian. “It is really, really important to empower the people who have the expertise to understand the domain,” he stressed…(More)”.