Article by John Burn-Murdoch: “…Last year I used detailed data on the ideological positions of people who post on social media to show that they over-represent the radical right and left, confirming the polarisation hypothesis. Over the past week I have used the same dataset of tens of thousands of responses to questions on policy preferences and sociopolitical beliefs to test whether and how the most widely used AI chatbots shape conversations about politics and society. The results strongly support the theory of AI chatbots as depolarising and technocratising.
I found that while different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. On average, Grok guides conversations about policy and society towards the centre-right — a rightward push for most people but a moderating nudge towards the centre for those who start out as conservative hardliners. OpenAI’s GPT, Google’s Gemini and the Chinese model DeepSeek all exert similarly sized nudges towards a centre-left worldview — a slight leftward nudge for most people but a moderating push away from fringe leftwing positions.
Importantly, this remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. Even when the chatbots know a user’s political leanings, conversations with LLMs still steer hardline partisans on both flanks away from extreme beliefs on average.
In addition, I found that while conspiratorial beliefs about topics including rigged elections and a link between vaccines and autism are over-represented among people who post to social media relative to the overall population, the opposite is true of AI chatbots, which almost never express agreement with these claims…(More)”.