Paper by Jennifer Cobbe: “Effective content moderation by social platforms has long been recognised as both important and difficult, with numerous issues arising from the volume of information to be dealt with, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation efforts, various platforms have adopted, or signalled their intention to adopt, increasingly automated approaches to identifying and suppressing content and communications that they deem undesirable. However, algorithmic forms of online censorship by social platforms bring their own concerns, including the extensive surveillance of communications and the use of machine learning systems with the distinct possibility of errors and biases. This paper adopts a governmentality lens to examine algorithmic censorship by social platforms in order to assist in the development of a more comprehensive understanding of the risks of such approaches to content moderation. This analysis shows that algorithmic censorship is distinctive for two reasons: (1) it would potentially bring all communications carried out on social platforms within reach, and (2) it would potentially allow those platforms to take a much more active, interventionist approach to moderating those communications. Consequently, algorithmic censorship could allow social platforms to exercise an unprecedented degree of control over both public and private communications, with poor transparency, weak or non-existent accountability mechanisms, and little legitimacy. Moreover, commercial considerations would be inserted further into the everyday communications of billions of people. Due to the dominance of the web by a small number of social platforms, this control may be difficult or impractical to escape for many people, although opportunities for resistance do exist.
While automating content moderation may seem like an attractive proposition for both governments and platforms themselves, the issues identified in this paper are cause for concern and should be given serious consideration….(More)”.