Article by Barrett and Greene: “Since GenAI first appeared on the scene in late 2022, both its benefits and hazards have been chronicled in multiple places, including this website. The advantages of AI play out on a daily basis, providing cities and counties with quicker results, increased staff efficiency, and improved government-resident communications.
But as generative AI use took off, media reports surfaced of fabrications delivered in response to prompts (known as hallucinations) and of factual errors that were embarrassing, and sometimes costly, for governments and their vendors.
“If you don’t have a strategy or plan in place for how you deal with AI hazards, you’re going to get in trouble very fast,” says Brian Funderburk, an advocate for the responsible use of AI in government, and a retired city manager in Texas with 40 years of experience in local government.
The litany of problematic uses of AI seems to grow every day as its use expands. Just for starters, fictitious precedents have been cited in legal cases. Chatbot errors have also surfaced with some frequency, notably in the much-heralded business-facing chatbot that New York City launched in the fall of 2023, which was roundly criticized the following spring for giving business users incorrect information and sometimes advising them to engage in illegal behavior.
Multiple companies have had to deal with the consequences of AI mistakes, including Deloitte, which agreed to refund the equivalent of $290,000 in U.S. dollars to the Australian government for a report “that was littered with apparent AI-generated errors,” according to an AP News report.
Although the hallucinations that AI can conjure have diminished to some extent, the continuing threat of errors requires extensive double-checking and triple-checking by the humans who bear responsibility for what is produced. “It will be a while before we can trust AI unconditionally,” says Funderburk, who is currently Vice President and AI Safety Officer at Civic Marketplace…(More)”.