Paper by Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang: “Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the ‘participatory turn’ in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints…(More)”.
Data Dysphoria: The Governance Challenge Posed by Large Learning Models
Paper by Susan Ariel Aaronson: “Only 8 months have passed since ChatGPT and the large learning model underpinning it took the world by storm. This article focuses on the data supply chain (the data collected and then utilized to train large language models) and the governance challenges it presents to policymakers. These challenges include:
• How web scraping may affect individuals and firms that hold copyrights;
• How web scraping may affect individuals and groups who are supposed to be protected under privacy and personal data protection laws;
• How web scraping revealed the lack of protections for content creators and content providers on open-access websites; and
• How the debate over open- versus closed-source LLMs reveals the lack of clear and universal rules to ensure the quality and validity of datasets. As the US National Institute of Standards and Technology explained, many LLMs depend on “large-scale datasets, which can lead to data quality and validity concerns.” The difficulty of finding the “right” data may lead AI actors to select datasets based more on accessibility and availability than on suitability… Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks: in short, problems of quality and validity…(More)”.
Disaster preparedness: Will a “norm nudge” sink or swim?
Article by Jantsje Mol: “In these times of unprecedented climate change, one critical question persists: how do we motivate homeowners to protect their homes and loved ones from the ever-looming threat of flooding? This question led to a captivating behavioral science study, born from a research visit to the Wharton Risk Management and Decision Processes Center in 2019 (currently the Wharton Climate Center). Co-founded and co-directed by the late Howard Kunreuther, the Center has been at the forefront of understanding and mitigating the impact of natural disasters. In this study, we explored the potential of social norms to boost flood preparedness among homeowners. While the results may not align with initial expectations, they shed light on the complexities of human behavior, the significance of meticulous testing, and the enduring legacy of a visionary scholar.
The Power of Social Norms
Before we delve into the results, let’s take a moment to understand what social norms are and why they matter. Social norms dictate what is considered acceptable or expected in a given community. A popular behavioral intervention based on social norms is the norm-nudge: reading information about what others do (say, the energy-saving behavior of neighbors or the tax compliance rates of fellow citizens) may bring one’s own behavior closer to the norm. Norm-nudges are cheap, easy to implement, and less prone to political resistance than traditional interventions such as taxes, but they might be ineffective or even backfire. Norm-nudges have been applied to health, finance, and the environment, but not yet to the context of natural disaster risk reduction…(More)”.
Can Google Trends predict asylum-seekers’ destination choices?
Paper by Haodong Qi & Tuba Bircan: “Google Trends (GT) collates the volumes of search keywords over time and by geographical location. Such data could, in theory, provide insights into people’s ex ante intentions to migrate, and hence be useful for predictive analysis of future migration. Empirically, however, the predictive power of GT is sensitive: it may vary depending on geographical context, the search keywords selected for analysis, Google’s market share, and its users’ characteristics and search behavior, among other factors. Unlike most previous studies attempting to demonstrate the benefit of using GT for forecasting migration flows, this article addresses a critical but less discussed issue: when GT cannot enhance the performance of migration models. Using EUROSTAT statistics on first-time asylum applications and a set of push-pull indicators gathered from various data sources, we train three classes of gravity models that are commonly used in the migration literature, and examine how the inclusion of GT may affect the models’ ability to predict refugees’ destination choices. The results suggest that the effects of including GT are highly contingent on the complexity of the different models. Specifically, GT can only improve the performance of relatively simple models, but not of those augmented by flow Fixed-Effects or by Auto-Regressive effects. These findings call for a more comprehensive analysis of the strengths and limitations of using GT, as well as other digital trace data, in the context of modeling and forecasting migration. It is our hope that this nuanced perspective can spur further innovations in the field, and ultimately bring us closer to a comprehensive modeling framework of human migration…(More)”.
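The core comparison in the abstract — whether adding a GT covariate improves a simple gravity model — can be illustrated with a minimal sketch. The data below are synthetic and the coefficients invented; this is not the authors’ EUROSTAT dataset or their model specification, only an assumed log-linear gravity form fitted with and without a search-volume regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic origin-destination pairs (illustrative only): log asylum flows
# driven by log origin population, log distance, and a GT search index.
n = 200
log_pop = rng.normal(15, 1, n)       # log origin population
log_dist = rng.normal(7, 0.5, n)     # log origin-destination distance
gt_index = rng.uniform(0, 100, n)    # assumed GT search volume for destination
log_flow = 0.8 * log_pop - 1.2 * log_dist + 0.01 * gt_index + rng.normal(0, 0.3, n)

def fit_r2(X, y):
    """Fit OLS with an intercept via least squares; return in-sample R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_base = fit_r2(np.column_stack([log_pop, log_dist]), log_flow)
r2_gt = fit_r2(np.column_stack([log_pop, log_dist, gt_index]), log_flow)
print(f"R^2 without GT: {r2_base:.3f}, with GT: {r2_gt:.3f}")
```

Note that in-sample fit can never worsen when a regressor is added, which is exactly why the paper’s out-of-sample comparison against Fixed-Effects and Auto-Regressive variants is the more informative test.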
Demographic Parity: Mitigating Biases in Real-World Data
Paper by Orestis Loukas and Ho-Ryun Chung: “Computer-based decision systems are widely used to automate decisions in many aspects of everyday life, including sensitive areas like hiring, lending, and even criminal sentencing. A decision pipeline heavily relies on large volumes of historical real-world data for training its models. However, historical training data often contain gender, racial, or other biases, which are propagated to the trained models and thereby influence computer-based decisions. In this work, we propose a robust methodology that guarantees the removal of unwanted biases while maximally preserving classification utility. Our approach can always achieve this in a model-independent way by deriving from real-world data the asymptotic dataset that uniquely encodes demographic parity and realism. As a proof of principle, we deduce from public census records such an asymptotic dataset, from which synthetic samples can be generated to train well-established classifiers. Benchmarking the generalization capability of these classifiers trained on our synthetic data, we confirm the absence of any explicit or implicit bias in the computer-aided decision…(More)”.
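The parity criterion the paper targets is easy to state operationally: a classifier satisfies demographic parity when its positive-decision rate is the same across protected groups. The sketch below measures that gap on synthetic data; the groups, rates, and sample size are invented for illustration and do not come from the paper’s census-record experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data (illustrative only): a binary protected attribute and a
# deliberately biased predictor that favors group 1.
group = rng.integers(0, 2, 1000)  # protected attribute: 0 or 1
pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-decision rates between the two groups;
    0 means demographic parity holds exactly."""
    rate0 = y_pred[sensitive == 0].mean()
    rate1 = y_pred[sensitive == 1].mean()
    return abs(rate1 - rate0)

gap = demographic_parity_difference(pred, group)
print(f"Demographic parity difference: {gap:.3f}")
```

A debiasing method of the kind the paper proposes would aim to drive this gap toward zero on held-out data while preserving classification utility.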
From Happiness Data to Economic Conclusions
Paper by Daniel J. Benjamin, Kristen Cooper, Ori Heffetz & Miles S. Kimball: “Happiness data—survey respondents’ self-reported well-being (SWB)—have become increasingly common in economics research, with recent calls to use them in policymaking. Researchers have used SWB data in novel ways, for example to learn about welfare or preferences when choice data are unavailable or difficult to interpret. Focusing on leading examples of this pioneering research, the first part of this review uses a simple theoretical framework to reverse-engineer some of the crucial assumptions that underlie existing applications. The second part discusses evidence bearing on these assumptions and provides practical advice to the agencies and institutions that generate SWB data, the researchers who use them, and the policymakers who may use the resulting research. While we advocate creative uses of SWB data in economics, we caution that their use in policy will likely require both additional data collection and further research to better understand the data…(More)”.
Hopes over fears: Can democratic deliberation increase positive emotions concerning the future?
Paper by S. Ahvenharju, M. Minkkinen, and F. Lalot: “Deliberative mini-publics have often been considered a potential way to promote future-oriented thinking. Still, thinking about the future can be hard, as it can evoke negative emotions such as stress and anxiety. This article establishes why a more positive outlook towards the future can benefit long-term decision-making. It then explores whether and to what extent deliberative mini-publics can facilitate thinking about the future by moderating negative emotions and encouraging positive emotions. We analyzed an online mini-public held in the region of Satakunta, Finland, organized to involve the public in the drafting process of a regional plan extending until the year 2050. In addition to the standard practices related to mini-publics, the Citizens’ Assembly included an imaginary time travel exercise, Future Design, carried out with half of the participants. Our analysis makes use of both survey and qualitative data. We found that democratic deliberation can promote positive emotions, like hopefulness and compassion, and lessen negative emotions, such as fear and confusion, related to the future. There were, however, differences in how emotions developed in the various small groups. Interviews with participants shed further light on how participants felt during the event and how their sentiments concerning the future changed…(More)”.
Essential requirements for the governance and management of data trusts, data repositories, and other data collaborations
Paper by Alison Paprica et al: “Around the world, many organisations are working on ways to increase the use, sharing, and reuse of person-level data for research, evaluation, planning, and innovation while ensuring that data are secure and privacy is protected. As a contribution to broader efforts to improve data governance and management, in 2020 members of our team published 12 minimum specification essential requirements (min specs) to provide practical guidance for organisations establishing or operating data trusts and other forms of data infrastructure… We convened an international team, consisting mostly of participants from Canada and the United States of America, to test and refine the original 12 min specs. Twenty-three data-focused organisations and initiatives recorded the various ways they address the min specs. Sub-teams analysed the results, used the findings to make improvements to the min specs, and identified materials to support organisations and initiatives in addressing the min specs.
Analyses and discussion led to an updated set of 15 min specs covering five categories: one min spec for Legal, five for Governance, four for Management, two for Data Users, and three for Stakeholder & Public Engagement. Multiple changes were made to make the min specs language more technically complete and precise. The updated set of 15 min specs has been integrated into a Canadian national standard that, to our knowledge, is the first to include requirements for public engagement and Indigenous Data Sovereignty…(More)”.
Data Repurposing through Compatibility: A Computational Perspective
Paper by Asia Biega: “Reuse of data in new contexts beyond the purposes for which it was originally collected has contributed to technological innovation and reducing the consent burden on data subjects. One of the legal mechanisms that makes such reuse possible is purpose compatibility assessment. In this paper, I offer an in-depth analysis of this mechanism through a computational lens. I moreover consider what should qualify as repurposing apart from using data for a completely new task, and argue that typical purpose formulations are an impediment to meaningful repurposing. Overall, the paper positions compatibility assessment as a constructive practice beyond an ineffective standard…(More)”.
From Print to Pixels: The Changing Landscape of the Public Sphere in the Digital Age
Paper by Taha Yasseri: “This Mini Review explores the evolution of the public sphere in the digital age. The public sphere is a social space where individuals come together to exchange opinions, discuss public affairs, and engage in collective decision-making. It is considered a defining feature of modern democratic societies, allowing citizens to participate in public life and promoting transparency and accountability in the political process. This Mini Review discusses the changes and challenges faced by the public sphere in recent years, particularly with the advent of new communication technologies such as the Internet and social media. We highlight benefits such as (a) an increase in political participation, (b) the facilitation of collective action, (c) the real-time spread of information, and (d) the democratization of information exchange; and harms such as (a) the increasing polarization of public discourse, (b) the spread of misinformation, and (c) the manipulation of public opinion by state and non-state actors. The discussion concludes with an assessment of the digital age public sphere in established democracies like the US and the UK…(More)”.