When Do Informational Interventions Work? Experimental Evidence from New York City High School Choice


Paper by Sarah Cohodes, Sean Corcoran, Jennifer Jennings & Carolyn Sattin-Bajaj: “This paper reports the results of a large, school-level randomized controlled trial evaluating a set of three informational interventions for young people choosing high schools in 473 middle schools, serving over 115,000 8th graders. The interventions differed in their level of customization to the student and their mode of delivery (paper or online); all treated schools received identical materials to scaffold the decision-making process. Every intervention reduced the likelihood of application to and enrollment in schools with graduation rates below the city median (75 percent). An important channel is their effect on reducing non-optimal first-choice application strategies. Providing a simplified, middle-school-specific list of relatively high graduation rate schools had the largest impacts, causing students to enroll in high schools with 1.5-percentage-point higher graduation rates. Providing the same information online, however, did not alter students’ choices or enrollment. This appears to be due to low utilization. Online interventions with individual customization, including a recommendation tool and search engine, induced students to enroll in high schools with 1-percentage-point higher graduation rates, but with more variance in impact. Together, these results show that successful informational interventions must generate engagement with the material, and this is possible through multiple channels…(More)”.

The emergence of algorithmic solidarity: unveiling mutual aid practices and resistance among Chinese delivery workers


Paper by Zizheng Yu, Emiliano Treré, and Tiziano Bonini: “This study explores how Chinese riders game the algorithm-mediated governing system of food delivery service platforms and how they mobilize WeChat to build solidarity networks to assist each other and better cope with the platform economy. We rely on 12 interviews with Chinese riders from 4 platforms (Meituan, Eleme, SF Express and Flash EX) in 5 cities, and draw on a 4-month online observation of 7 private WeChat groups. The article provides a detailed account of the gamification ranking and competition techniques employed by delivery platforms to drive the riders to achieve efficiency and productivity gains. Then, it critically explores how Chinese riders adapt and react to the algorithmic systems that govern their work by setting up private WeChat groups and developing everyday practices of resilience and resistance. This study demonstrates that Chinese riders working for food delivery platforms incessantly create a complex repertoire of tactics and develop hidden transcripts to resist the algorithmic control of digital platforms…(More)”.

What’s the problem? How crowdsourcing and text-mining may contribute to the understanding of unprecedented problems such as COVID-19


Paper by Julian Wahl, Johann Füller, and Katja Hutter: “In this research, we explore how crowdsourcing combined with text-mining can help to build a sound understanding of unstructured, complex and ill-defined problems. Therefore, we gathered 101 problem descriptions contributed to a crowdsourcing contest about the impact of COVID-19 on the tourism industry. Based on our findings we propose a five-phase process model for problem understanding consisting of: (1) information gathering, (2) information pre-structuring, (3) problem space mapping, (4) problem space exploration, and (5) problem understanding for solution search. While our study confirms that crowdsourcing and text-mining facilitate fast generation and exploration of problem spaces at limited cost, it also reveals the necessity to follow certain process steps and to deal with challenges such as information loss and human interpretation. For practitioners, our model presents a guideline for how to get a faster grasp on complex and rather unprecedented problems…(More)”.

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence


Paper by Erik Brynjolfsson: “In 1950, Alan Turing proposed an “imitation game” as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions are indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like—in fact, many of the most powerful systems are very different from humans —and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers…(More)”

Octagon Measurement: Public Attitudes toward AI Ethics


Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI were related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.

A tale of two labs: Rethinking urban living labs for advancing citizen engagement in food system transformations


Paper by Anke Brons et al: “Citizen engagement is heralded as essential for food democracy and equality, yet the implementation of inclusive citizen engagement mechanisms in urban food systems governance has lagged behind. This paper aims to further the agenda of citizen engagement in the transformation towards healthy and sustainable urban food systems by offering a conceptual reflection on urban living labs (ULLs) as a methodological platform. Over the past decades, ULLs have become increasingly popular to actively engage citizens in methodological testbeds for innovations within real-world settings. The paper proposes that ULLs as a tool for inclusive citizen engagement can be utilized in two ways: (i) the ULL as the daily life of which citizens are the experts, aimed at uncovering the unreflexive agency of a highly diverse population in co-shaping the food system and (ii) the ULL as a break with daily life aimed at facilitating reflexive agency in (re)shaping food futures. We argue that both ULL approaches have the potential to facilitate inclusive citizen engagement in different ways by strengthening the breadth and the depth of citizen engagement respectively. The paper concludes by proposing a sequential implementation of the two types of ULL, paying attention to spatial configurations and the short-term nature of ULLs…(More)”.

Why people believe misinformation and resist correction


TechPolicyPress: “…In Nature, a team of nine researchers from the fields of psychology, mass media & communication have published a review of available research on the factors that lead people to “form or endorse misinformed views, and the psychological barriers” to changing their minds….

The authors summarize what is known about a variety of drivers of false beliefs, noting that they “generally arise through the same mechanisms that establish accurate beliefs” and the human weakness for trusting the “gut”. For a variety of reasons, people develop shortcuts when processing information, often defaulting to conclusions rather than evaluating new information critically. A complex set of variables related to information sources, emotional factors and a variety of other cues can lead to the formation of false beliefs. And, people often share information with little focus on its veracity, but rather to accomplish other goals, from self-promotion to signaling group membership to simply sating a desire to ‘watch the world burn’.

Source: Nature Reviews: Psychology, Volume 1, January 2022

Barriers to belief revision are also complex, since “the original information is not simply erased or replaced” once corrective information is introduced. There is evidence that misinformation can be “reactivated and retrieved” even after an individual receives accurate information that contradicts it. A variety of factors affect whether correct information can win out. One theory looks at how information is integrated in a person’s “memory network”. Another complementary theory looks at “selective retrieval” and is backed up by neuro-imaging evidence…(More)”.

‘Sharing Is Caring’: Creative Commons, Transformative Culture, and Moral Rights Protection


Paper by Alexandra Giannopoulou: “The practice of sharing works free from traditional legal reservations aims to mark both ideological and systemic distance from the exclusive proprietary regime of copyright. The positive involvement of the public in creativity acts is a defining feature of transformative culture in the digital sphere, which encourages creative collaborations between several people, without any limitation in space or time. Moral rights regimes are antithetical to these practices. This chapter will explore the moral rights challenges emerging from transformative culture. We will take the example of Creative Commons licenses and their interaction with internationally recognized moral rights. We conclude that the chilling effects of this legal uncertainty linked to moral rights enforcement could hurt copyright as a whole, but that moral rights can still constitute a strong defence mechanism against modern risks related to digital transformative creativity…(More)”.

From Poisons to Antidotes: Algorithms as Democracy Boosters


Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything similar to what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making…(More)”.

Data trust and data privacy in the COVID-19 period


Paper by Nicholas Biddle et al: “In this article, we focus on data trust and data privacy, and how attitudes may be changing during the COVID-19 period. On balance, it appears that Australians are more trusting of organizations with regard to data privacy and less concerned about their own personal information and data than they were prior to the spread of COVID-19. The major determinant of this change in trust with regard to data was changes in general confidence in government institutions. Despite this improvement in trust with regard to data privacy, trust levels are still low…(More)”.