Disaster preparedness: Will a “norm nudge” sink or swim?


Article by Jantsje Mol: “In these times of unprecedented climate change, one critical question persists: how do we motivate homeowners to protect their homes and loved ones from the ever-looming threat of flooding? This question led to a captivating behavioral science study, born from a research visit to the Wharton Risk Management and Decision Processes Center in 2019 (currently the Wharton Climate Center). Co-founded and co-directed by the late Howard Kunreuther, the Center has been at the forefront of understanding and mitigating the impact of natural disasters. In this study, we explored the potential of social norms to boost flood preparedness among homeowners. While the results may not align with initial expectations, they shed light on the complexities of human behavior, the significance of meticulous testing, and the enduring legacy of a visionary scholar.

The Power of Social Norms

Before we delve into the results, let’s take a moment to understand what social norms are and why they matter. Social norms dictate what is considered acceptable or expected in a given community. A popular behavioral intervention based on social norms is the norm-nudge: reading information about what others do (say, the energy-saving behavior of neighbors or the tax compliance rates of fellow citizens) may shift one’s own behavior closer to that norm. Norm-nudges are cheap, easy to implement, and less prone to political resistance than traditional interventions such as taxes, but they can be ineffective or even backfire. Norm-nudges have been applied to health, finance, and the environment, but not yet to the context of natural disaster risk reduction…(More)”.

Can Google Trends predict asylum-seekers’ destination choices?


Paper by Haodong Qi & Tuba Bircan: “Google Trends (GT) collate the volumes of search keywords over time and by geographical location. Such data could, in theory, provide insights into people’s ex ante intentions to migrate, and hence be useful for predictive analysis of future migration. Empirically, however, the predictive power of GT is sensitive: it may vary with geographical context, the search keywords selected for analysis, and Google’s market share and its users’ characteristics and search behavior, among other factors. Unlike most previous studies attempting to demonstrate the benefit of using GT for forecasting migration flows, this article addresses a critical but less discussed issue: when GT cannot enhance the performance of migration models. Using EUROSTAT statistics on first-time asylum applications and a set of push-pull indicators gathered from various data sources, we train three classes of gravity models that are commonly used in the migration literature, and examine how the inclusion of GT may affect the models’ ability to predict refugees’ destination choices. The results suggest that the effects of including GT are highly contingent on the complexity of different models. Specifically, GT can only improve the performance of relatively simple models, but not of those augmented by flow fixed effects or autoregressive effects. These findings call for a more comprehensive analysis of the strengths and limitations of using GT, as well as other digital trace data, in the context of modeling and forecasting migration. It is our hope that this nuanced perspective can spur further innovations in the field, and ultimately bring us closer to a comprehensive modeling framework of human migration…(More)”.
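To make the model comparison concrete, here is a minimal sketch (ours, not the authors’ code) of a baseline gravity model of asylum applications versus one augmented with a lagged Google Trends covariate; the file and column names are hypothetical placeholders:

```python
# Minimal sketch (not the authors' code): baseline gravity model of asylum
# flows vs. one augmented with a Google Trends covariate. File and column
# names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

flows = pd.read_csv("asylum_flows.csv")  # hypothetical: origin, dest, year, applications, ...

# Baseline gravity model: origin push factors, destination pull factors, distance.
baseline = smf.poisson(
    "applications ~ log_gdp_origin + log_gdp_dest + conflict_origin + log_distance",
    data=flows,
).fit()

# Augmented model: add lagged Google Trends search intensity for the
# destination, measured in the origin country.
augmented = smf.poisson(
    "applications ~ log_gdp_origin + log_gdp_dest + conflict_origin + log_distance"
    " + gt_search_intensity_lag1",
    data=flows,
).fit()

# Compare fit; the paper's finding is that the GT term helps simple
# specifications like this one, but not specifications with flow fixed
# effects or autoregressive terms.
print(baseline.aic, augmented.aic)
```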

Demographic Parity: Mitigating Biases in Real-World Data


Paper by Orestis Loukas and Ho-Ryun Chung: “Computer-based decision systems are widely used to automate decisions in many aspects of everyday life, including sensitive areas like hiring, lending, and even criminal sentencing. A decision pipeline relies heavily on large volumes of historical real-world data to train its models. However, historical training data often contain gender, racial, or other biases, which are propagated to the trained models and influence computer-based decisions. In this work, we propose a robust methodology that guarantees the removal of unwanted biases while maximally preserving classification utility. Our approach can always achieve this in a model-independent way by deriving from real-world data the asymptotic dataset that uniquely encodes demographic parity and realism. As a proof of principle, we deduce from public census records such an asymptotic dataset, from which synthetic samples can be generated to train well-established classifiers. Benchmarking the generalization capability of these classifiers trained on our synthetic data, we confirm the absence of any explicit or implicit bias in the computer-aided decisions…(More)”.
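As a minimal illustration (not the authors’ method) of the demographic parity criterion the paper targets, the sketch below measures how far a set of binary decisions is from parity, and shows a crude stand-in for encoding parity in training labels; all data and column names are hypothetical:

```python
# Minimal illustration (not the authors' method) of demographic parity:
# positive decisions should be statistically independent of the protected
# attribute. Data and column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=10_000),   # protected attribute
    "decision": rng.integers(0, 2, size=10_000),    # model's binary decision
})

# Demographic parity difference: |P(decision = 1 | A) - P(decision = 1 | B)|.
rates = df.groupby("group")["decision"].mean()
print(rates)
print(f"demographic parity difference: {abs(rates['A'] - rates['B']):.4f}")

# A crude stand-in for the paper's asymptotic-dataset idea: resample training
# labels so the label distribution is identical across groups by construction.
target_rate = df["decision"].mean()
df["parity_label"] = (rng.random(len(df)) < target_rate).astype(int)
```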

From Happiness Data to Economic Conclusions


Paper by Daniel J. Benjamin, Kristen Cooper, Ori Heffetz & Miles S. Kimball: “Happiness data—survey respondents’ self-reported well-being (SWB)—have become increasingly common in economics research, with recent calls to use them in policymaking. Researchers have used SWB data in novel ways, for example to learn about welfare or preferences when choice data are unavailable or difficult to interpret. Focusing on leading examples of this pioneering research, the first part of this review uses a simple theoretical framework to reverse-engineer some of the crucial assumptions that underlie existing applications. The second part discusses evidence bearing on these assumptions and provides practical advice to the agencies and institutions that generate SWB data, the researchers who use them, and the policymakers who may use the resulting research. While we advocate creative uses of SWB data in economics, we caution that their use in policy will likely require both additional data collection and further research to better understand the data…(More)”.
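As an illustration of the kind of assumption such applications rest on (our own hedged reconstruction in generic notation, not a formula quoted from the paper), reported happiness is often implicitly treated as a common, increasing transformation of underlying well-being:

```latex
% Illustrative sketch in our own notation, not the paper's: respondent i's
% reported SWB r_i is an increasing function \phi of underlying well-being
% u_i, assumed common across respondents, plus mean-zero reporting noise.
\[
  r_i \;=\; \phi\bigl(u_i\bigr) + \varepsilon_i,
  \qquad \phi' > 0, \qquad \mathbb{E}[\varepsilon_i] = 0.
\]
% Under such assumptions, differences in average reported SWB across groups
% or policies identify the sign, though not the magnitude, of welfare
% differences.
```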

Hopes over fears: Can democratic deliberation increase positive emotions concerning the future?


Paper by S. Ahvenharju, M. Minkkinen, and F. Lalot: “Deliberative mini-publics have often been considered a potential way to promote future-oriented thinking. Still, thinking about the future can be hard, as it can evoke negative emotions such as stress and anxiety. This article establishes why a more positive outlook towards the future can benefit long-term decision-making. It then explores whether, and to what extent, deliberative mini-publics can facilitate thinking about the future by moderating negative emotions and encouraging positive ones. We analyzed an online mini-public held in the region of Satakunta, Finland, organized to involve the public in the drafting process of a regional plan extending until the year 2050. In addition to the standard practices of mini-publics, the Citizens’ Assembly included an imaginary time-travel exercise, Future Design, carried out with half of the participants. Our analysis makes use of both survey and qualitative data. We found that democratic deliberation can promote positive emotions, such as hopefulness and compassion, and lessen negative emotions, such as fear and confusion, related to the future. There were, however, differences in how emotions developed in the various small groups. Interviews with participants shed further light on how participants felt during the event and how their sentiments concerning the future changed…(More)”.
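For a concrete sense of the survey side of such an analysis, here is a minimal sketch (assumed, not the authors’ code) of a pre/post comparison of self-reported emotions around a deliberation event, with hypothetical file and column names:

```python
# Minimal sketch (not the authors' analysis code) of a pre/post comparison
# of self-reported emotion ratings around a deliberation event. File and
# column names are hypothetical.
import pandas as pd
from scipy import stats

survey = pd.read_csv("minipublic_survey.csv")  # hypothetical: one row per participant

for emotion in ["hopefulness", "compassion", "fear", "confusion"]:
    before = survey[f"{emotion}_pre"]
    after = survey[f"{emotion}_post"]
    t, p = stats.ttest_rel(after, before)  # paired t-test across participants
    print(f"{emotion}: mean change {(after - before).mean():+.2f} "
          f"(t={t:.2f}, p={p:.3f})")
```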

Essential requirements for the governance and management of data trusts, data repositories, and other data collaborations


Paper by Alison Paprica et al: “Around the world, many organisations are working on ways to increase the use, sharing, and reuse of person-level data for research, evaluation, planning, and innovation while ensuring that data are secure and privacy is protected. As a contribution to broader efforts to improve data governance and management, in 2020 members of our team published 12 minimum specification essential requirements (min specs) to provide practical guidance for organisations establishing or operating data trusts and other forms of data infrastructure… We convened an international team, consisting mostly of participants from Canada and the United States of America, to test and refine the original 12 min specs. Twenty-three data-focused organisations and initiatives recorded the various ways they address the min specs. Sub-teams analysed the results, used the findings to improve the min specs, and identified materials to support organisations and initiatives in addressing them.
Analyses and discussion led to an updated set of 15 min specs covering five categories: one min spec for Legal, five for Governance, four for Management, two for Data Users, and three for Stakeholder & Public Engagement. Multiple changes were made to make the min specs language more technically complete and precise. The updated set of 15 min specs has been integrated into a Canadian national standard that, to our knowledge, is the first to include requirements for public engagement and Indigenous Data Sovereignty…(More)”.

Data Repurposing through Compatibility: A Computational Perspective


Paper by Asia Biega: “Reuse of data in new contexts, beyond the purposes for which it was originally collected, has contributed to technological innovation and to reducing the consent burden on data subjects. One of the legal mechanisms that makes such reuse possible is purpose compatibility assessment. In this paper, I offer an in-depth analysis of this mechanism through a computational lens. I also consider what should qualify as repurposing, beyond using data for a completely new task, and argue that typical purpose formulations are an impediment to meaningful repurposing. Overall, the paper positions compatibility assessment as a constructive practice rather than an ineffective standard…(More)”.

From Print to Pixels: The Changing Landscape of the Public Sphere in the Digital Age


Paper by Taha Yasseri: “This Mini Review explores the evolution of the public sphere in the digital age. The public sphere is a social space where individuals come together to exchange opinions, discuss public affairs, and engage in collective decision-making. It is considered a defining feature of modern democratic societies, allowing citizens to participate in public life and promoting transparency and accountability in the political process. This Mini Review discusses the changes and challenges the public sphere has faced in recent years, particularly with the advent of new communication technologies such as the Internet and social media. We highlight benefits such as a) increased political participation, b) facilitation of collective action, c) real-time spread of information, and d) democratization of information exchange; and harms such as a) increasing polarization of public discourse, b) the spread of misinformation, and c) the manipulation of public opinion by state and non-state actors. The discussion concludes with an assessment of the digital-age public sphere in established democracies like the US and the UK…(More)”.

Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence


Paper by Andres Karjus: “The increasing capacities of large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences, augmenting and automating qualitative analytic tasks previously typically allocated to human labor. This contribution proposes a systematic mixed methods framework to harness qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. Sixteen machine-assisted case studies are showcased as proof of concept. Tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, historical event cause inference and text mining, detection of political stance, text and idea reuse, genre composition in literature and film, social network inference, automated lexicography, missing metadata augmentation, and multimodal visual cultural analytics. In contrast to the focus on English in the emerging literature on LLM applicability, many examples here deal with scenarios involving smaller languages and historical texts prone to digitization distortions. In all but the most difficult tasks requiring expert knowledge, generative LLMs can demonstrably serve as viable research instruments. LLM (and human) annotations may contain errors and variation, but the agreement rate can and should be accounted for in subsequent statistical modeling; a bootstrapping approach is discussed. The replications among the case studies illustrate how tasks that previously required months of team effort and complex computational pipelines can now be accomplished by an LLM-assisted scholar in a fraction of the time. Importantly, this approach is not intended to replace, but to augment, researcher knowledge and skills. With these opportunities in sight, qualitative expertise and the ability to pose insightful questions have arguably never been more critical…(More)”.
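To illustrate the bootstrapping idea mentioned above (our sketch, not the paper’s code), the following propagates variation across repeated LLM annotation runs into the uncertainty of a downstream quantity, here the estimated share of texts carrying a given stance label:

```python
# Minimal sketch (assumed, not the paper's code) of the bootstrapping idea:
# propagate disagreement between repeated LLM annotation runs into the
# uncertainty of a downstream estimate.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annotations: rows = texts, columns = repeated LLM runs,
# entries = 1 if a text is labeled "supportive", 0 otherwise.
annotations = rng.integers(0, 2, size=(200, 5))

boot_estimates = []
for _ in range(2000):
    texts = rng.integers(0, annotations.shape[0], size=annotations.shape[0])  # resample texts
    runs = rng.integers(0, annotations.shape[1], size=annotations.shape[0])   # one random run per text
    boot_estimates.append(annotations[texts, runs].mean())

lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"estimated share: {annotations.mean():.3f}, "
      f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```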

On the culture of open access: the Sci-hub paradox


Paper by Abdelghani Maddi and David Sapinho: “Shadow libraries, also known as “pirate libraries”, are online collections of copyrighted publications that have been made available for free without the permission of the copyright holders. They have gradually become key players in scientific knowledge dissemination, despite their illegality in most countries of the world. Many publishers and scientist-editors decry such libraries for their copyright infringement and the loss of publication usage information, while some scholars and institutions support them, sometimes in a roundabout way, for their role in reducing inequalities of access to knowledge, particularly in low-income countries. Although there is a wealth of literature on shadow libraries, none of it has focused on their potential role in knowledge dissemination through the open access movement. Here we analyze how shadow libraries can affect researchers’ citation practices, highlighting some counter-intuitive findings about their impact on the Open Access Citation Advantage (OACA). Based on a large randomized sample, this study first shows that OA publications, including those in fully OA journals, receive more citations than their subscription-based counterparts do. However, the OACA has slightly decreased over the last seven years. Distinguishing subscription-based publications by whether or not they are accessible via the Sci-hub platform suggests that the generalization of its use cancels out the positive effect of OA publishing. The results show that publications in fully OA journals are victims of the success of Sci-hub. Thus, paradoxically, although Sci-hub may seem to facilitate access to scientific knowledge, it negatively affects the OA movement as a whole, by reducing the comparative advantage of OA publications in terms of visibility for researchers. The democratization of the use of Sci-hub may therefore lead to a vicious cycle, hindering efforts to develop full OA strategies without proposing a credible and sustainable alternative model for the dissemination of scientific knowledge…(More)”.
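For a concrete sense of the comparison behind the OACA (our sketch, not the authors’ pipeline), the following splits publications into fully OA, paywalled-but-on-Sci-hub, and paywalled-not-on-Sci-hub groups and compares mean citations; the file and column names are assumptions:

```python
# Minimal sketch (not the authors' pipeline) of the citation comparison behind
# the Open Access Citation Advantage (OACA), splitting paywalled papers by
# Sci-hub availability. File and column names are assumptions.
import pandas as pd

pubs = pd.read_csv("publications.csv")  # hypothetical: one row per publication

def access_group(row):
    if row["is_open_access"]:
        return "open access"
    return "paywalled, on Sci-hub" if row["on_scihub"] else "paywalled, not on Sci-hub"

pubs["group"] = pubs.apply(access_group, axis=1)
mean_citations = pubs.groupby("group")["citations"].mean()
print(mean_citations)

# The OACA is the relative citation advantage of OA over paywalled papers;
# the paper's point is that this advantage shrinks once paywalled papers are
# effectively accessible via Sci-hub.
print("OACA vs. non-Sci-hub paywalled:",
      mean_citations["open access"] / mean_citations["paywalled, not on Sci-hub"])
```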