Paper by Anne David et al.: “The burgeoning capabilities of artificial intelligence (AI) have prompted numerous local governments worldwide to consider its integration into their operations. Nevertheless, instances of notable AI failures have heightened ethical concerns, emphasising the imperative for local governments to approach the adoption of AI technologies in a responsible manner. While local government AI guidelines endeavour to incorporate characteristics of responsible innovation and technology (RIT), it remains essential to assess the extent to which these characteristics have been integrated into policy guidelines to facilitate more effective AI governance in the future. This study closely examines local government policy documents (n = 26) through the lens of RIT, employing directed content analysis with thematic data analysis software. The results reveal that: (a) Not all RIT characteristics have been given equal consideration in these policy documents; (b) Participatory and deliberate considerations were the most frequently mentioned responsible AI characteristics in policy documents; (c) Adaptable, explainable, sustainable, and accountable considerations were the least present responsible AI characteristics in policy documents; (d) Many of the considerations overlapped with each other as local governments were at the early stages of identifying them. Furthermore, the paper summarises strategies aimed at assisting local authorities in identifying their strengths and weaknesses in responsible AI characteristics, thereby facilitating their transformation into governing entities with responsible AI practices. The study informs local government policymakers, practitioners, and researchers on the critical aspects of responsible AI policymaking…(More)” See also: AI Localism
It is about time! Exploring the clashing timeframes of politics and public policy experiments
Paper by Ringa Raudla, Külli Sarapuu, Johanna Vallistu, and Nastassia Harbuzova: “Although existing studies on experimental policymaking have acknowledged the importance of the political setting in which policy experiments take place, we lack systematic knowledge on how various political dimensions affect experimental policymaking. In this article, we address a specific gap in the existing understanding of the politics of experimentation: how political timeframes influence experimental policymaking. Drawing on theoretical discussions on experimental policymaking, public policy, electoral politics, and mediatization of politics, we outline expectations about how electoral and problem cycles may influence the timing, design, and learning from policy experiments. We argue that electoral timeframes are likely to discourage politicians from undertaking large-scale policy experiments, and that if politicians decide to launch experiments, they prefer shorter designs. The electoral cycle may lead politicians to draw too hasty conclusions or ignore the experiment’s results altogether. We expect problem cycles to shorten politicians’ time horizons further, as there is pressure to solve problems quickly. We probe the plausibility of our theoretical expectations using interview data from two different country contexts: Estonia and Finland…(More)”.
Digital Distractions with Peer Influence: The Impact of Mobile App Usage on Academic and Labor Market Outcomes
Paper by Panle Jia Barwick, Siyu Chen, Chao Fu & Teng Li: “Concerns over the excessive use of mobile phones, especially among youths and young adults, are growing. Leveraging administrative student data from a Chinese university merged with mobile phone records, random roommate assignments, and a policy shock that affects peers’ peers, we present, to our knowledge, the first estimates of both behavioral spillover and contextual peer effects, and the first estimates of medium-term impacts of mobile app usage on academic achievement, physical health, and labor market outcomes. App usage is contagious: a one s.d. increase in roommates’ in-college app usage raises own app usage by 4.4% on average, with substantial heterogeneity across students. App usage is detrimental to both academic performance and labor market outcomes. A one s.d. increase in own app usage reduces GPAs by 36.2% of a within-cohort-major s.d. and lowers wages by 2.3%. Roommates’ app usage exerts both direct effects (e.g., noise and disruptions) and indirect effects (via behavioral spillovers) on GPA and wage, resulting in a total negative impact of over half the size of the own usage effect. Extending China’s minors’ game restriction policy of 3 hours per week to college students would boost their initial wages by 0.7%. Using high-frequency GPS data, we identify one underlying mechanism: high app usage crowds out time in study halls and increases absences from and late arrivals at lectures…(More)”.
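To make the peer-effects logic concrete, below is a minimal sketch of the kind of linear-in-means spillover regression that random roommate assignment makes credible. The simulated data, variable names, and controls are illustrative assumptions, not the authors' data or specification.

```python
# Illustrative peer-effects regression under random roommate assignment.
# Simulated data and column names are hypothetical; the paper's actual
# specification, controls, and standard errors differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "roommate_usage_z": rng.normal(0, 1, n),  # roommates' mean app usage, standardized
    "baseline_score": rng.normal(0, 1, n),    # pre-college achievement control
})
# Own usage responds modestly to roommates' usage (the behavioral spillover).
df["own_usage_hours"] = 4 + 0.2 * df["roommate_usage_z"] + rng.normal(0, 1.5, n)

# Random assignment makes roommate usage plausibly exogenous, so OLS with
# baseline controls recovers the spillover coefficient.
model = smf.ols("own_usage_hours ~ roommate_usage_z + baseline_score", data=df).fit()
print(model.summary())
```

In the paper's setting, the coefficient on roommates' usage corresponds to the reported 4.4% increase in own app usage per one s.d. increase in roommates' usage.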
Use of large language models as a scalable approach to understanding public health discourse
Paper by Laura Espinosa and Marcel Salathé: “Online public health discourse is becoming increasingly important in shaping public health dynamics. Large Language Models (LLMs) offer a scalable solution for analysing the vast amounts of unstructured text found on online platforms. Here, we explore the effectiveness of LLMs, including GPT models and open-source alternatives, for extracting public stances towards vaccination from social media posts. Using an expert-annotated dataset of social media posts related to vaccination, we applied various LLMs and a rule-based sentiment analysis tool to classify the stance towards vaccination. We assessed the accuracy of these methods through comparisons with expert annotations and annotations obtained through crowdsourcing. Our results demonstrate that few-shot prompting of best-in-class LLMs is the best-performing method, and that all alternatives carry significant risks of substantial misclassification. The study highlights the potential of LLMs as a scalable tool for public health professionals to quickly gauge public opinion on health policies and interventions, offering an efficient alternative to traditional data analysis methods. With the continuous advancement in LLM development, the integration of these models into public health surveillance systems could substantially improve our ability to monitor and respond to changing public health attitudes…(More)”.
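As a rough illustration of the few-shot prompting approach the authors find most accurate, here is a minimal sketch of stance classification with an LLM API. The prompt wording, label set, and model name are assumptions for illustration, not the study's actual setup.

```python
# Illustrative few-shot stance classification of vaccination posts with an
# LLM, in the spirit of the approach described above. Prompt, labels, and
# model name are assumptions, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FEW_SHOT = """Classify the stance of each post towards vaccination as
"positive", "negative", or "neutral".

Post: "Got my booster today, grateful for the scientists behind it."
Stance: positive

Post: "No way I'm letting them inject that into my kids."
Stance: negative

Post: "The clinic opens at 9am for walk-in vaccinations."
Stance: neutral
"""

def classify_stance(post: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep labels as deterministic as possible
        messages=[
            {"role": "system", "content": "You are a careful public-health annotator."},
            {"role": "user", "content": f"{FEW_SHOT}\nPost: \"{post}\"\nStance:"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_stance("Vaccines saved millions of lives, get your shot."))
```

In practice, each model's labels would then be compared against the expert and crowdsourced annotations to estimate accuracy, as the study does.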
The illusion of information adequacy
Paper by Hunter Gehlbach, Carly D. Robinson, and Angus Fletcher: “How individuals navigate perspectives and attitudes that diverge from their own affects an array of interpersonal outcomes from the health of marriages to the unfolding of international conflicts. The finesse with which people negotiate these differing perceptions depends critically upon their tacit assumptions—e.g., in the bias of naïve realism people assume that their subjective construal of a situation represents objective truth. The present study adds an important assumption to this list of biases: the illusion of information adequacy. Specifically, because individuals rarely pause to consider what information they may be missing, they assume that the cross-section of relevant information to which they are privy is sufficient to adequately understand the situation. Participants in our preregistered study (N = 1261) responded to a hypothetical scenario in which control participants received full information and treatment participants received approximately half of that same information. We found that treatment participants assumed that they possessed comparably adequate information and presumed that they were just as competent to make thoughtful decisions based on that information. Participants’ decisions were heavily influenced by which cross-section of information they received. Finally, participants believed that most other people would make a similar decision to the one they made. We discuss the implications in the context of naïve realism and other biases that implicate how people navigate differences of perspective…(More)”.
Asserting the public interest in health data: On the ethics of data governance for biobanks and insurers
Paper by Kathryne Metcalf and Jathan Sadowski: “Recent reporting has revealed that the UK Biobank (UKB)—a large, publicly-funded research database containing highly-sensitive health records of over half a million participants—has shared its data with private insurance companies seeking to develop actuarial AI systems for analyzing risk and predicting health. While news reports have characterized this as a significant breach of public trust, the UKB contends that insurance research is “in the public interest,” and that all research participants are adequately protected from the possibility of insurance discrimination via data de-identification. Here, we contest both of these claims. Insurers use population data to identify novel categories of risk, which become fodder in the production of black-boxed actuarial algorithms. The deployment of these algorithms, as we argue, has the potential to increase inequality in health and decrease access to insurance. Importantly, these types of harms are not limited just to UKB participants: instead, they are likely to proliferate unevenly across various populations within global insurance markets via practices of profiling and sorting based on the synthesis of multiple data sources, alongside advances in data analysis capabilities, over space/time. This necessitates a significantly expanded understanding of the publics who must be involved in biobank governance and data-sharing decisions involving insurers…(More)”.
AI-accelerated Nazca survey nearly doubles the number of known figurative geoglyphs and sheds light on their purpose
Paper by Masato Sakai, Akihisa Sakurai, Siyuan Lu, and Marcus Freitag: “It took nearly a century to discover a total of 430 figurative Nazca geoglyphs, which offer significant insights into the ancient cultures at the Nazca Pampa. Here, we report the deployment of an AI system to the entire Nazca region, a UNESCO World Heritage site, leading to the discovery of 303 new figurative geoglyphs within only six months of field survey, nearly doubling the number of known figurative geoglyphs. Even with limited training examples, the developed AI approach is demonstrated to be effective in detecting the smaller relief-type geoglyphs, which unlike the giant line-type geoglyphs are very difficult to discern. The improved account of figurative geoglyphs enables us to analyze their motifs and distribution across the Nazca Pampa. We find that relief-type geoglyphs depict mainly human motifs or motifs of things modified by humans, such as domesticated animals and decapitated heads (81.6%). They are typically located within viewing distance (on average 43 m) of ancient trails that crisscross the Nazca Pampa and were most likely built and viewed at the individual or small-group level. On the other hand, the giant line-type figurative geoglyphs mainly depict wild animals (64%). They are found an average of 34 m from the elaborate linear/trapezoidal network of geoglyphs, which suggests that they were probably built and used on a community level for ritual activities…(More)”.
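The abstract does not detail the model architecture, but detecting subtle relief-type geoglyphs from limited labeled examples is the kind of problem usually handled with transfer learning. The sketch below, using a generic pretrained image classifier to score aerial-image tiles, is an assumption-laden illustration rather than the authors' pipeline.

```python
# Sketch of the transfer-learning idea: fine-tune a pretrained CNN on a small
# set of labeled aerial-image tiles to score candidate geoglyph locations.
# Generic illustration under stated assumptions, not the authors' actual model.
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights so that limited training examples suffice.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # freeze features; train only the new head
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # geoglyph vs. background

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(tiles: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (N, 3, 224, 224) image tiles."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(tiles), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# High-scoring tiles would then be ranked for archaeologists to verify in the
# field, consistent with the survey workflow described in the abstract.
```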
Artificial Intelligence for Social Innovation: Beyond the Noise of Algorithms and Datafication
Paper by Igor Calzada: “In an era of rapid technological advancement, decisions about the ownership and governance of emerging technologies like Artificial Intelligence will shape the future of both urban and rural environments in the Global North and South. This article explores how AI can move beyond the noise of algorithms by adopting a technological humanistic approach to enable Social Innovation (SI), focusing on global inequalities and digital justice. Using a fieldwork Action Research methodology, based on the Smart Rural Communities project in Colombia and Mozambique, the study develops a framework for integrating AI with SI. Drawing on insights from the AI4SI International Summer School held in Donostia-San Sebastián in 2024, the article examines the role of decentralized Web3 technologies—such as Blockchain, Decentralized Autonomous Organizations, and Data Cooperatives—in enhancing data sovereignty and fostering inclusive and participatory governance. The results demonstrate how decentralization can empower marginalized communities in the Global South by promoting digital justice and addressing the imbalance of power in digital ecosystems. The conclusion emphasizes the potential for AI and decentralized technologies to bridge the digital divide, offering practical recommendations for scaling these innovations to support equitable, community-driven governance and address systemic inequalities across the Global North and South…(More)”.
The ABC’s of Who Benefits from Working with AI: Ability, Beliefs, and Calibration
Paper by Andrew Caplin: “We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does…(More)”.
Orphan Articles: The Dark Matter of Wikipedia
Paper by Akhil Arora, Robert West, Martin Gerlach: “With 60M articles in more than 300 language versions, Wikipedia is the largest platform for open and freely accessible knowledge. While the available content has been growing continuously at a rate of around 200K new articles each month, very little attention has been paid to the accessibility of the content. One crucial aspect of accessibility is the integration of hyperlinks into the network so that articles are visible to readers navigating Wikipedia. In order to understand this phenomenon, we conduct the first systematic study of orphan articles, which are articles without any incoming links from other Wikipedia articles, across 319 different language versions of Wikipedia. We find that a surprisingly large share of content, roughly 15% (8.8M) of all articles, is de facto invisible to readers navigating Wikipedia, and thus rightfully term orphan articles the dark matter of Wikipedia. We also provide causal evidence through a quasi-experiment that adding new incoming links to orphans (de-orphanization) leads to a statistically significant increase in their visibility in terms of the number of pageviews. We further highlight the challenges faced by editors in de-orphanizing articles, demonstrate the need to support them in addressing this issue, and provide potential solutions for developing automated tools based on cross-lingual approaches. Overall, our work not only unravels a key limitation in the link structure of Wikipedia and quantitatively assesses its impact, but also provides a new perspective on the challenges of maintenance associated with content creation at scale in Wikipedia…(More)”.
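For readers curious how orphan status is determined, it reduces to computing in-degree over the article link graph. A minimal sketch follows, assuming a tab-separated edge list of article-to-article links; the study itself works from Wikipedia's link data across 319 language versions, and the file format and names here are illustrative.

```python
# Minimal sketch of identifying "orphan" articles from a link edge list:
# an article is an orphan if no other article links to it.
# File format and names are illustrative assumptions.
from collections import defaultdict

def find_orphans(edges_path: str) -> set[str]:
    """edges_path: text file with one 'source_title<TAB>target_title' link per line."""
    articles, in_degree = set(), defaultdict(int)
    with open(edges_path, encoding="utf-8") as f:
        for line in f:
            source, target = line.rstrip("\n").split("\t")
            articles.update((source, target))
            if source != target:  # ignore self-links
                in_degree[target] += 1
    # Note: fully isolated pages (no links in either direction) never appear
    # in the edge list, so a complete title list would be needed to catch them.
    return {a for a in articles if in_degree[a] == 0}

# Example: orphans = find_orphans("enwiki_article_links.tsv")
```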