Essay by Henry Farrell & Cosma Rohilla Shalizi: “… LLMs create social relations between their users and the authors of the text in their training corpora. With the right access to the model and the corpus, one can trace the connections from system output back to individual source texts and their authors (Grosse et al., 2023). These social relations are mechanically mediated, giving users the illusion that they are interacting with just the machine and not an assemblage of people. But mediated social relationships and their illusions are a common fact of modern life. The social relations created by LLMs in turn cut across, and interact with, other social relations, including those shaped by other social technologies.
Our goal here is to clear a common space where the social sciences and computer science and engineering can discuss the social consequences of AI. We draw heavily on the ideas of Simon (1996), who saw AI, political science, administration, economics, computer science, and cognitive psychology as so many branches of the “sciences of the artificial,” studying how human beings create “artifacts” that model, and act on, their environment. From this perspective, AI models are another means of “complex information processing” (Newell and Simon, 1956). As Simon emphasizes, such systems encompass both information technologies, as studied and built by computer scientists and engineers, and social information systems such as markets, bureaucracy, and, although Simon himself does not stress this, democracy (Lindblom, 1965). All such systems process information by reducing complex realities into more tractable “coarse-grainings” or abstractions that (hopefully) capture important features of the data. Producing coarse-grainings is not all that large-scale social institutions do, but it is quite important: economic, administrative, and political coordination simply cannot work at scale unless complex social relationships are compressed into visible, tractable representations…(More)”.