Science and the State 


Introduction to Special Issue by Alondra Nelson et al.: “…Current events have thrown these debates into high relief. Pressing issues from the pandemic to anthropogenic climate change, and the new and old inequalities they exacerbate, have intensified calls to critique but also imagine otherwise the relationship between scientific and state authority. Many of the subjects and communities whose well-being these authorities claim to promote have resisted, doubted, and mistrusted technoscientific experts and government officials. How might our understanding of the relationship change if the perspectives and needs of those most at risk from state and/or scientific violence or neglect were to be centered? Likewise, the pandemic and climate change have reminded scientists and state officials that relations among states matter at home and in the world systems that support supply chains, fuel technology, and undergird capitalism and migration. How does our understanding of the relationship between science and the state change if we eschew the nationalist framing of the classic Mertonian formulation and instead account for states in different parts of the world, as well as trans-state relationships?
This special issue began as a yearlong seminar on Science and the State convened by Alondra Nelson and Charis Thompson at the Institute for Advanced Study in Princeton, New Jersey. During the 2020–21 academic year, seventeen scholars from four continents met on a biweekly basis to read, discuss, and interrogate historical and contemporary scholarship on the origins, transformations, and sociopolitical consequences of different configurations of science, technology, and governance. Our group consisted of scholars from different disciplines, including sociology, anthropology, philosophy, economics, history, political science, and geography. Examining technoscientific expertise and political authority while experiencing the conditions of the pandemic lent a heightened sense of the stakes involved and forced us to rethink easy critiques of scientific knowledge and state power. Our affective and lived experiences of the pandemic posed questions about what good science and good statecraft could be. How do we move beyond a presumption of isomorphism between “good” states and “good” science to understand and study the uneven experiences and sometimes exploitative practices of different configurations of science and the state?…(More)”.

Hopes over fears: Can democratic deliberation increase positive emotions concerning the future?


Paper by Mikko Leino and Katariina Kulha: “Deliberative mini-publics have often been considered to be a potential way to promote future-oriented thinking. Still, thinking about the future can be hard, as it can evoke negative emotions such as stress and anxiety. This article establishes why a more positive outlook towards the future can benefit long-term decision-making. Then, it explores whether and to what extent deliberative mini-publics can facilitate thinking about the future by moderating negative emotions and encouraging positive ones. We analyzed an online mini-public held in the region of Satakunta, Finland, organized to involve the public in the drafting process of a regional plan extending until the year 2050. In addition to the standard practices related to mini-publics, the Citizens’ Assembly included an imaginary time travel exercise, Future Design, carried out with half of the participants. Our analysis makes use of both survey and qualitative data. We found that democratic deliberation can promote positive emotions, like hopefulness and compassion, and lessen negative emotions, such as fear and confusion, related to the future. There were, however, differences in how emotions developed in the various small groups. Interviews with participants shed further light on how participants felt during the event and how their sentiments concerning the future changed…(More)”.

Data Governance and Privacy Challenges in the Digital Healthcare Revolution


Paper by Nargiz Kazimova: “The onset of the COVID-19 pandemic has catalyzed an imperative for digital transformation in the healthcare sector. This study investigates the accelerated shift towards a digitally enhanced healthcare delivery system, advocating for the widespread adoption of telemedicine and the relaxation of regulatory barriers. The paper also scrutinizes the burgeoning use of electronic health records, wearable devices, artificial intelligence, and machine learning, and how these technologies offer promising avenues for improving patient care and medical outcomes. Despite these advancements, the rapid digital integration raises significant privacy and security concerns. The stigma associated with certain illnesses and the potential for discrimination present serious challenges that digital healthcare innovations can exacerbate.
This research underscores the criticality of stringent data governance to safeguard personal health information in the face of growing digitalization. The analysis begins with an exploration of the role of data governance in optimizing healthcare outcomes and preserving privacy, followed by an assessment of the breadth and depth of health data proliferation. The paper subsequently navigates the complex legal and ethical terrain, contrasting the HIPAA and GDPR frameworks to underline the current regulatory challenges.
A comprehensive set of strategic recommendations is provided for reinforcing data governance and enhancing privacy protection in healthcare. The author advises on updating legal provisions to match the dynamic healthcare environment, widening the scope of privacy laws, and improving the transparency of data-sharing practices. The establishment of ethical guidelines for the collection and use of health data is also recommended, focusing on explicit consent, decision-making transparency, harm accountability, maintenance of data anonymity, and the mitigation of biases in datasets.
Moreover, the study advocates for stronger transparency in data sharing with clear communication on data use, rigorous internal and external audit mechanisms, and informed consent processes. The conclusion calls for increased collaboration between healthcare providers, patients, administrative staff, ethicists, regulators, and technology companies to create governance models that reconcile patient rights with the expansive use of health data. The paper culminates in a call to action for a balanced approach to privacy and innovation in the data-driven era of healthcare…(More)”.

Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias


Paper by S. Lee et al.: “Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations…(More)”.
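
The conditioning step described here, supplying a respondent's demographics and psychological covariates to the model before posing each survey item, can be pictured with a short sketch. The profile fields, prompt wording, and function names below are illustrative assumptions rather than the paper's actual protocol, and the model call itself is left abstract.

```python
# A minimal sketch of persona conditioning for LLM survey simulation.
# The profile fields, prompt wording, and function names are hypothetical;
# the paper's exact conditioning text is not reproduced here.

def build_persona_prompt(demographics: dict, covariates: dict, question: str) -> str:
    """Compose a persona from survey covariates, then pose the survey item."""
    profile = "; ".join(f"{k}: {v}" for k, v in {**demographics, **covariates}.items())
    return (
        "You are answering a public opinion survey. "
        f"Your profile: {profile}. "
        "Answer the question as this person would, choosing exactly one option.\n\n"
        f"{question}"
    )

if __name__ == "__main__":
    demographics = {"age": 52, "race": "Black", "party": "Democrat", "region": "South"}
    covariates = {"trust in climate scientists": "moderate"}  # psychological covariate
    question = (
        "How worried are you about global warming? "
        "Options: very worried / somewhat worried / not very worried / not at all worried"
    )
    print(build_persona_prompt(demographics, covariates, question))
    # Send the prompt to an LLM, sample many simulated respondents, and compare
    # the distribution of answers against the survey's observed marginals.
```

Repeating this over the survey's full distribution of respondent profiles, and comparing simulated against observed answers subgroup by subgroup, is how a disparity like the reported underestimation of worry among Black Americans would surface as systematic bias rather than sampling noise.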

Unintended Consequences of Data-Driven Public Participation: How Low-Traffic Neighborhood Planning Became Polarized


Paper by Alison Powell: “This paper examines how data-driven consultation contributes to dynamics of political polarization, using the case of ‘Low-Traffic Neighborhoods’ in London, UK. It explores how data-driven consultation can facilitate participation, including ‘agonistic data practices’ (Crooks and Currie, 2022) that challenge the dominant interpretations of digital data. The paper adds empirical detail to previous studies of agonistic data practices, concluding that such practices require certain normative conditions to be met; otherwise, dissenting data practices can contribute to dynamics of polarization. The results of this paper draw on empirical insights from the political context of the UK to explain how ostensibly democratic processes, including data-driven consultation, establish some kinds of knowledge as more legitimate than others. Apparently ‘objective’ knowledge, or calculable data, is attributed greater legitimacy than strong feelings or affective narratives. This can displace affective responses to policy decisions into insular social media spaces where polarizing dynamics are at play. Affective polarization, where political difference is solidified through appeals to feeling, creates political distance and the dehumanization of ‘others’. This can help to amplify conspiracy theories that pose risks to democracy and to the overall legitimacy of media environments. These tendencies are exacerbated when processes of consultation prescribe narrow or specific contributions, valorize quantifiable or objective data, and create limited room for dissent…(More)”.

Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI


Paper by David Leslie: “In the current hype-laden climate surrounding the rapid proliferation of foundation models and generative AI systems like ChatGPT, it is becoming increasingly important for societal stakeholders to reach sound understandings of their limitations and potential transformative effects. This is especially true in the natural and applied sciences, where magical thinking among some scientists about the take-off of “artificial general intelligence” has arisen even as the growing use of these technologies puts longstanding norms, policies, and standards of good research practice under pressure. In this analysis, I argue that a deflationary understanding of foundation models and generative AI systems can help us sense check our expectations of what role they can play in processes of scientific exploration, sense-making, and discovery. I claim that a more sober, tool-based understanding of generative AI systems as computational instruments embedded in warm-blooded research processes can serve several salutary functions. It can play a crucial bubble-bursting role that mitigates some of the most serious threats to the ethos of modern science posed by an unreflective overreliance on these technologies. It can also strengthen the epistemic and normative footing of contemporary science by helping researchers circumscribe the part to be played by machine-led prediction in communicative contexts of scientific discovery while concurrently prodding them to recognise that such contexts are principal sites for human empowerment, democratic agency, and creativity. Finally, it can help spur ever richer approaches to collaborative experimental design, theory-construction, and scientific world-making by encouraging researchers to deploy these kinds of computational tools to heuristically probe unbounded search spaces and patterns in high-dimensional biophysical data that would otherwise be inaccessible to human-scale examination and inference…(More)”.

The Tragedy of AI Governance


Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.

This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.

Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”.

Learning Like a State: Statecraft in the Digital Age


Paper by Marion Fourcade and Jeff Gordon: “What does it mean to sense, see, and act like a state in the digital age? We examine the changing phenomenology, governance, and capacity of the state in the era of big data and machine learning. Our argument is threefold. First, what we call the dataist state may be less accountable than its predecessor, despite its promise of enhanced transparency and accessibility. Second, a rapid expansion of the data collection mandate is fueling a transformation in political rationality, in which data affordances increasingly drive policy strategies. Third, the turn to dataist statecraft facilitates a corporate reconstruction of the state. On the one hand, digital firms attempt to access and capitalize on data “minted” by the state. On the other hand, firms compete with the state in an effort to reinvent traditional public functions. Finally, we explore what it would mean for this dataist state to “see like a citizen” instead…(More)”.

A standardised differential privacy framework for epidemiological modeling with mobile phone data


Paper by Merveille Koissi Savi et al.: “During the COVID-19 pandemic, the use of mobile phone data for monitoring human mobility patterns has become increasingly common, both to study the impact of travel restrictions on population movement and to support epidemiological modeling. Despite the importance of these data, the use of location information to guide public policy can raise issues of privacy and ethical use. Studies have shown that simple aggregation does not protect the privacy of an individual, and there are no universal standards for aggregation that guarantee anonymity. Newer methods, such as differential privacy, can provide statistically verifiable protection against identifiability but have been largely untested as inputs for compartment models used in infectious disease epidemiology. Our study examines the application of differential privacy as an anonymisation tool in epidemiological models, studying the impact of adding quantifiable statistical noise to mobile phone-based location data on the bias of ten common epidemiological metrics. We find that many epidemiological metrics are preserved and remain close to their non-private values when the true noise state is less than 20 in a count transition matrix, which corresponds to a privacy loss parameter ϵ = 0.05 per release. We show that differential privacy offers a robust approach to preserving individual privacy in mobility data while providing useful population-level insights for public health. Importantly, we have built a modular software pipeline to facilitate the replication and expansion of our framework…(More)”.
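
To make the ϵ = 0.05 figure concrete: the canonical way to release counts under ϵ-differential privacy is to add Laplace noise with scale 1/ϵ, which at ϵ = 0.05 means noise on the order of 20 per cell. The sketch below applies that mechanism to a toy count transition matrix; the matrix values, the unit-sensitivity assumption, and the post-processing step are illustrative and are not the paper's actual pipeline.

```python
import numpy as np

def privatize_transition_counts(counts: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Release an epsilon-DP version of a count transition matrix.

    Assumes each individual contributes at most one trip per release
    (L1 sensitivity of 1), so Laplace noise with scale 1/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    # Post-processing (rounding, clipping at zero) cannot weaken the DP guarantee.
    return np.clip(np.rint(noisy), 0, None)

# Toy example: trips between three regions, privatized at epsilon = 0.05 per release.
true_counts = np.array([[120.0, 15.0, 3.0],
                        [10.0, 200.0, 8.0],
                        [5.0, 12.0, 90.0]])
print(privatize_transition_counts(true_counts, epsilon=0.05))
```

At this noise scale, cells whose true counts sit well above 20 survive the release largely intact while small flows are swamped, which is consistent with the threshold behavior the abstract reports.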

AI is Like… A Literature Review of AI Metaphors and Why They Matter for Policy


Paper by Matthijs M. Maas: “As AI systems have become increasingly capable and impactful, there has been significant public and policymaker debate over this technology’s impacts—and the appropriate legal or regulatory responses. Within these debates many have deployed—and contested—a dazzling range of analogies, metaphors, and comparisons for AI systems, their impact, or their regulation.

This report reviews why and how metaphors matter to both the study and practice of AI governance, in order to contribute to more productive dialogue and more reflective policymaking. It first reviews five stages at which different foundational metaphors play a role in shaping the processes of technological innovation, the academic study of their impacts, the regulatory agenda, the terms of the policymaking process, and legislative and judicial responses to new technology. It then surveys a series of cases where the choice of analogy materially influenced the regulation of internet issues, as well as (recent) AI law issues. The report then provides a non-exhaustive survey of 55 analogies that have been given for AI technology, along with some of their policy implications. Finally, it discusses the risks of using unreflexive analogies in AI law and regulation.

By disentangling the role of metaphors and frames in these debates, and the space of analogies for AI, this survey does not aim to argue against the use or role of analogies in AI regulation—but rather to facilitate more reflective and productive conversations on these timely challenges…(More)”.