Automating the Analysis of Online Deliberation? Comparing computational analyses of polarized discussions on climate change to established content analysis


Paper by Lisa Oswald: “High-quality discussions can help people acquire an adequate understanding of issues and alleviate mechanisms of opinion polarization. However, the extent to which online public discourse meets this deliberative standard is contested. Facing the importance and the sheer volume of online discussions, reliable computational approaches to assess the deliberative quality of online discussions at scale would open a new era of deliberation research. But is it possible to automate the assessment of deliberative quality? I compare structural features of discussion threads and simple text-based measures to established manual content analysis by applying all measures to online discussions on ‘Reddit’ that deal with the 2020 wildfires in Australia and California. I further compare discussions between two ideologically opposite online communities, one featuring discussions in line with the scientific consensus and one featuring climate change skepticism. While no single computational measure can capture the multidimensional concept of deliberative quality, I find that (1) measures of structural complexity capture engagement and participation as preconditions for deliberation, (2) the length of comments is correlated with manual measures of argumentation, and (3) automated toxicity scores are correlated with manual measures of respect. While the presented computational approaches cannot replace in-depth content coding, the findings imply that selected automated measures can be useful, scalable additions to the measurement repertoire for specific dimensions of online deliberation. I discuss implications for communication research and platform regulation and suggest interdisciplinary research to synthesize past content coding efforts using machine learning….(More)”.
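The validation logic described in the abstract, checking whether a cheap automated measure (such as comment length) tracks an expensive manual code (such as an argumentation score), amounts to computing a rank correlation between the two. The sketch below is a minimal, hypothetical illustration of that comparison, not the paper's actual analysis: the toy data, variable names, and the choice of Spearman's rank correlation are assumptions for demonstration.

```python
# Hypothetical sketch: does an automated proxy (comment length) rank
# comments similarly to a manual content-analysis code? Toy data only.

def _ranks(values):
    """Assign average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average position, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: comment lengths in words vs. manual argumentation
# codes on an ordinal 1-3 scale (with ties, as manual codes usually have).
comment_lengths = [12, 48, 95, 150, 210]
manual_argumentation = [1, 2, 2, 3, 3]

rho = spearman(comment_lengths, manual_argumentation)
print(f"Spearman's rho: {rho:.2f}")  # ~0.95 for this toy data
```

A high rho on real coded data would support using the automated measure as a scalable proxy for that one dimension of deliberative quality; it says nothing about the other dimensions the abstract emphasizes.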