Paper by Anneke Zuiderwijk, Yu-Che Chen and Fadi Salem: “To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment based on 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research in studies reflecting the increasing complexities of using AI in government – and the resulting implications, opportunities, and risks for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related recommendations and seven content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public sector-focused, empirical, multidisciplinary, and explanatory research, while focusing more on specific forms of AI rather than AI in general. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Finally, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector…(More)”.
Culture, Institutions and Social Equilibria: A Framework
Paper by Daron Acemoglu & James A. Robinson: “This paper proposes a new framework for studying the interplay between culture and institutions. We follow the recent sociology literature and interpret culture as a “repertoire”, which allows rich cultural responses to changes in the environment and shifts in political power. Specifically, we start with a culture set, which consists of attributes and the feasible connections between them. Combinations of attributes produce cultural configurations, which provide meaning, interpretation and justification for individual and group actions. Cultural configurations also legitimize and support different institutional arrangements. Culture matters as it shapes the set of feasible cultural configurations and, via this channel, institutions.
Yet, changes in politics and institutions can cause a rewiring of existing attributes, generating very different cultural configurations. Cultural persistence may result from the dynamics of political and economic factors – rather than being a consequence of an unchanging culture. We distinguish cultures by how fluid they are – more fluid cultures allow a richer set of cultural configurations. Fluidity in turn depends on how specific (vs. abstract) and entangled (vs. free-standing) the attributes in a culture set are. We illustrate these ideas using examples from Africa, England, China, the Islamic world, the Indian caste system and the Crow. In all cases, our interpretation highlights that culture becomes more of a constraint when it is less fluid (more hardwired), for example because its attributes are more specific or entangled. We also emphasize that less fluid cultures are not necessarily “bad cultures”, and may create a range of benefits, though they may reduce the responsiveness of culture to changing circumstances. In many instances, including in the African, Chinese and English cases, we show that there is a lot of fluidity, and very different, almost diametrically-opposed, cultural configurations are feasible, often compete with each other for acceptance, and can gain the upper hand depending on political factors…(More)”.
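The framework's core objects – a culture set of attributes, the feasible connections between them, and the configurations they admit – can be made concrete in a toy sketch. Everything below is our illustrative assumption, not the authors' formalism: the attribute names are invented, and we model a "configuration" as any subset of attributes that are pairwise connected, so that fluidity shows up as the number of feasible configurations.

```python
from itertools import combinations

def feasible_configurations(attributes, connections):
    """Enumerate cultural configurations: subsets of attributes (size >= 2)
    in which every pair of attributes has a feasible connection."""
    link = {frozenset(c) for c in connections}
    configs = []
    for r in range(2, len(attributes) + 1):
        for subset in combinations(attributes, r):
            if all(frozenset(pair) in link for pair in combinations(subset, 2)):
                configs.append(set(subset))
    return configs

# A more "fluid" culture set: attributes connect freely.
fluid = feasible_configurations(
    ["hierarchy", "piety", "commerce"],
    [("hierarchy", "piety"), ("hierarchy", "commerce"), ("piety", "commerce")],
)

# A less fluid (more entangled) set: "piety" and "commerce" cannot combine.
rigid = feasible_configurations(
    ["hierarchy", "piety", "commerce"],
    [("hierarchy", "piety"), ("hierarchy", "commerce")],
)

print(len(fluid), len(rigid))  # fluid admits more configurations (4 vs. 2)
```

In this toy reading, removing a single feasible connection prunes every configuration that depended on it, which is one way to picture why entangled or highly specific attributes make a culture more of a constraint.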
Algorithmic thinking in the public interest: navigating technical, legal, and ethical hurdles to web scraping in the social sciences
Paper by Alex Luscombe, Kevin Dick & Kevin Walby: “Web scraping, defined as the automated extraction of information online, is an increasingly important means of producing data in the social sciences. We contribute to emerging social science literature on computational methods by elaborating on web scraping as a means of automated access to information. We begin by situating the practice of web scraping in context, providing an overview of how it works and how it compares to other methods in the social sciences. Next, we assess the benefits and challenges of scraping as a technique of information production. In terms of benefits, we highlight how scraping can help researchers answer new questions, supersede limits in official data, overcome access hurdles, and reinvigorate the values of sharing, openness, and trust in the social sciences. In terms of challenges, we discuss three: technical, legal, and ethical. By adopting “algorithmic thinking in the public interest” as a way of navigating these hurdles, researchers can improve the state of access to information on the Internet while also contributing to scholarly discussions about the legality and ethics of web scraping. Example software accompanying this article is available within the supplementary materials…(More)”.
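As a minimal illustration of the automated-extraction step the authors describe, the sketch below pulls hyperlinks and their anchor text out of an HTML page using only Python's standard library. The page content is invented for the example, and a real scraper would add fetching, rate limiting, and robots.txt checks – the legal and ethical hurdles the paper discusses.

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, anchor text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []            # extracted (href, text) pairs
        self._current_href = None  # set while inside an <a> tag
        self._text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._text_parts = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._text_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append(
                (self._current_href, "".join(self._text_parts).strip())
            )
            self._current_href = None

# A stand-in for a fetched page (e.g. a list of official reports).
page = """
<html><body>
  <a href="/report-2020.pdf">Annual report 2020</a>
  <a href="/report-2021.pdf">Annual report 2021</a>
</body></html>
"""

scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)
```

In practice the `page` string would come from an HTTP request, and the extracted pairs would be written to a structured dataset for analysis.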
‘Belonging Is Stronger Than Facts’: The Age of Misinformation
Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.
All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed by someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.
We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.
“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.
It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”
Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems….(More)”.
The power of words and networks
Introduction to Special Issue by A. Fronzetti Colladon, P. Gloor, and D. F. Iezzi: “According to Freud, “words were originally magic and to this day words have retained much of their ancient magical power”. By words, behaviors are transformed and problems are solved. The way we use words reveals our intentions, goals and values. Novel tools for text analysis help understand the magical power of words. This power is multiplied if it is combined with the study of social networks, i.e. with the analysis of relationships among social units. This special issue of the International Journal of Information Management, entitled “Combining Social Network Analysis and Text Mining: from Theory to Practice”, includes heterogeneous and innovative research at the nexus of text mining and social network analysis. It aims to enrich work at the intersection of these fields, which still lags behind in theoretical, empirical, and methodological foundations. The nine articles accepted for inclusion in this special issue all present methods and tools with business applications. They are summarized in this editorial introduction…(More)”.
Improving hand hygiene in hospitals: comparing the effect of a nudge and a boost on protocol compliance
Paper by Henrico van Roekel, Joanne Reinhard and Stephan Grimmelikhuijsen: “Nudging has become a well-known policy practice. Recently, ‘boosting’ has been suggested as an alternative to nudging. In contrast to nudges, boosts aim to empower individuals to exert their own agency to make decisions. This article is one of the first to compare a nudging and a boosting intervention, and it does so in a critical field setting: hand hygiene compliance of hospital nurses. During a 4-week quasi-experiment, we tested the effect of a reframing nudge and a risk literacy boost on hand hygiene compliance in three hospital wards. The results show that nudging and boosting were both effective interventions to improve hand hygiene compliance. A tentative finding is that, while the nudge had a stronger immediate effect, the boost effect remained stable for a week, even after the removal of the intervention. We conclude that, besides nudging, researchers and policymakers may consider boosting when they seek to implement or test behavioral interventions in domains such as healthcare….(More)”.
Reimagining data responsibility: 10 new approaches toward a culture of trust in re-using data to address critical public needs
Commentary by Stefaan Verhulst in Data & Policy: “Data and data science offer tremendous potential to address some of our most intractable public problems (including the Covid-19 pandemic). At the same time, recent years have shown some of the risks of existing and emerging technologies. An updated framework is required to balance potential and risk, and to ensure that data is used responsibly. Data responsibility is not itself a new concept. However, amid a rapidly changing technology landscape, it has become increasingly clear that the concept may need updating in order to keep up with new trends such as big data, open data, the Internet of Things, artificial intelligence, and machine learning. This paper seeks to outline 10 approaches and innovations for data responsibility in the 21st century….

Each of these is described at greater length in the paper, and illustrated with examples from around the world. Put together, they add up to a framework or outline for policy makers, scholars, and activists who seek to harness the potential of data to solve complex social problems and advance the public good. Needless to say, the 10 approaches outlined here represent just a start. We envision this paper more as an exercise in agenda-setting than a comprehensive survey…(More)”.
Quantifying collective intelligence in human groups
Paper by Christoph Riedl: “Collective intelligence (CI) is critical to solving many scientific, business, and other problems, but groups often fail to achieve it. Here, we analyze data on group performance from 22 studies, including 5,279 individuals in 1,356 groups. Our results support the conclusion that a robust CI factor characterizes a group’s ability to work together across a diverse set of tasks. We further show that CI is predicted by the proportion of women in the group, mediated by average social perceptiveness of group members, and that it predicts performance on various out-of-sample criterion tasks. We also find that, overall, group collaboration process is more important in predicting CI than the skill of individual members….(More)”
Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization
NBER Paper by Abhijit Banerjee et al: “We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools—reminders and incentives—as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as “ambassadors” receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or “dosages”) of each intervention, we obtain 75 unique policy combinations.
We develop a new statistical technique—a smart pooling and pruning procedure—for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner’s curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%…(More)”.
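To convey the intuition of the pooling step only – the paper's actual procedure is LASSO-based, not what follows – here is a toy sketch: adjacent dosages of a single treatment are merged whenever a pairwise z-test cannot reject that their effects are equal, and merged arms are combined with precision weighting. The arm labels, effect sizes, and standard errors are invented for illustration.

```python
import math

def pool_dosages(arms, z_crit=1.96):
    """Toy pooling step: merge adjacent dosages of one treatment when a
    two-sample z-test cannot reject equal effects.
    Each arm is (label, estimated_effect, standard_error)."""
    pooled = [arms[0]]
    for label, mean, se in arms[1:]:
        p_label, p_mean, p_se = pooled[-1]
        z = abs(mean - p_mean) / math.sqrt(se**2 + p_se**2)
        if z < z_crit:
            # Cannot reject equality: pool with inverse-variance weights.
            w1, w2 = 1 / p_se**2, 1 / se**2
            new_mean = (w1 * p_mean + w2 * mean) / (w1 + w2)
            new_se = math.sqrt(1 / (w1 + w2))
            pooled[-1] = (p_label + "+" + label, new_mean, new_se)
        else:
            pooled.append((label, mean, se))
    return pooled

# Hypothetical incentive dosages: low and medium look alike, high differs.
arms = [("low", 0.10, 0.02), ("medium", 0.11, 0.02), ("high", 0.25, 0.02)]
print(pool_dosages(arms))
```

Pooling indistinguishable dosages shrinks the 75 policy combinations to a smaller set, which in turn makes the winner's-curse correction in the second step tractable.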
The value of data matching for public poverty initiatives: a local voucher program example
Paper by Sarah Giest, Jose M. Miotto and Wessel Kraaij: “The recent surge of data-driven methods in social policy has created new opportunities to assess existing poverty programs. The expectation is that the combination of advanced methods and more data can calculate the effectiveness of public interventions more accurately and tailor local initiatives accordingly. Specifically, nonmonetary indicators are increasingly being measured at micro levels in order to target social exclusion in combination with poverty. However, the multidimensional character of poverty, local context, and data matching pose challenges to data-driven analyses. By linking Dutch household-level data with policy-initiative-specific data at the local level, we present an explorative study on the uptake of a local poverty pass. The goal is to unravel pass usage in terms of household income and location as well as the age of users. We find that income and age play a role in whether the pass is used, and usage differs per neighborhood. With this, the paper feeds into the discourse on how to operationalize and design data matching work in the multidimensional space of poverty and nonmonetary government initiatives…(More)”.
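A minimal sketch of the kind of record linkage such a study relies on, assuming – hypothetically – that household data and pass-usage data share a household identifier. The field names and values below are invented for illustration; real linkage of administrative microdata additionally involves privacy safeguards and fuzzy matching when no clean key exists.

```python
from collections import defaultdict

# Hypothetical microdata: household records and pass-usage logs that
# share a household identifier (the matching key).
households = [
    {"hh_id": 1, "income": 14000, "neighborhood": "A"},
    {"hh_id": 2, "income": 21000, "neighborhood": "A"},
    {"hh_id": 3, "income": 13500, "neighborhood": "B"},
]
pass_usage = [
    {"hh_id": 1, "uses": 7},
    {"hh_id": 3, "uses": 0},
]

# Left join on hh_id: keep every eligible household, with usage where
# recorded and zero otherwise (non-use is itself an outcome of interest).
usage_by_hh = {u["hh_id"]: u["uses"] for u in pass_usage}
matched = [dict(h, uses=usage_by_hh.get(h["hh_id"], 0)) for h in households]

# Aggregate usage per neighborhood, as a first look at local variation.
by_neighborhood = defaultdict(int)
for row in matched:
    by_neighborhood[row["neighborhood"]] += row["uses"]

print(dict(by_neighborhood))  # → {'A': 7, 'B': 0}
```

The left join matters here: an inner join would silently drop households that never used the pass, biasing any uptake analysis toward users.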