Article by Emanuel Maiberg: “Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat from large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The paper’s author, Sean Westwood, associate professor of government at Dartmouth and director of the Polarization Research Lab, created an AI tool he calls “an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots.
According to the paper, the AI agent evaded detection 99.8 percent of the time.
“We can no longer trust that survey responses are coming from real people,” Westwood said in a press release. “With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”
Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete: his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one method specifically designed to detect AI responses. The AI agent also evaded “reverse shibboleth” questions, which aim to detect nonhuman actors by presenting tasks that an LLM can complete easily but that are nearly impossible for a human…(More)”.