Critical Ignoring as a Core Competence for Digital Citizens


Paper by Anastasia Kozyreva et al.: “Low-quality and misleading information online can hijack people’s attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring—choosing what to ignore and where to invest one’s limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one’s digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one not to reward malicious actors with attention. We argue that these strategies for implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators’ thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today’s attention economy…(More)”.

Govtech against corruption: What are the integrity dividends of government digitalization?


Paper by Carlos Santiso: “Does digitalization reduce corruption? What are the integrity benefits of government digitalization? While the correlation between digitalization and corruption is well established, there is less actionable evidence on the integrity dividends of specific digitalization reforms on different types of corruption and the policy channels through which they operate. These linkages are especially relevant in high corruption risk environments. This article unbundles the integrity dividends of digital reforms undertaken by governments around the world, accelerated by the pandemic. It analyzes the rise of data-driven integrity analytics as promising tools in the anticorruption space deployed by tech-savvy integrity actors. It also assesses the broader integrity benefits of the digitalization of government services and the automation of bureaucratic processes, which contribute to reducing bribe solicitation risks by front-office bureaucrats. It analyzes in particular the impact of digitalization on social transfers. It argues that government digitalization can be an implicit yet effective anticorruption strategy, with subtler yet deeper effects, but there need to be greater synergies between digital reforms and anticorruption strategies…(More)”.

People watching: Abstractions and orthodoxies of monitoring


Paper by Victoria Wang and John V. Tucker: “Our society has an insatiable appetite for data. Much of the data is collected to monitor the activities of people, e.g., for discovering the purchasing behaviour of customers, observing the users of apps, managing the performance of personnel, and conforming to regulations and laws. Although monitoring practices are ubiquitous, monitoring as a general concept has received little analytical attention. We explore: (i) the nature of monitoring facilitated by software; (ii) the structure of monitoring processes; and (iii) the classification of monitoring systems. We propose an abstract definition of monitoring as a theoretical tool to analyse, document, and compare disparate monitoring applications. For us, monitoring is simply the systematic collection of data about the behaviour of people and objects. We then extend this concept with mechanisms for detecting events that require interventions and changes in behaviour, and describe five types of monitoring…(More)”.

The Labor Market Consequences of Appropriate Technology


Paper by Gustavo de Souza: “Developing countries rely on technology created by developed countries. This paper demonstrates that such reliance increases wage inequality but leads to greater production in developing countries. I study a Brazilian innovation program that taxed the leasing of international technology to subsidize national innovation. I show that the program led firms to replace technology licensed from developed countries with in-house innovations, which led to a decline in both employment and the share of high-skilled workers. Using a model of directed technological change and technology transfer, I find that increasing the share of firms that patent in Brazil by 1 p.p. decreases the skilled wage premium by 0.02% and production by 0.2%…(More)”.

Algorithms in the Public Sector. Why context matters


Paper by Georg Wenzelburger, Pascal D. König, Julia Felfeli, and Anja Achtziger: “Algorithms increasingly govern people’s lives, including through rapidly spreading applications in the public sector. This paper sheds light on the acceptance of algorithms used by the public sector, emphasizing that algorithms, as parts of socio-technical systems, are always embedded in a specific social context. We show that citizens’ acceptance of an algorithm is strongly shaped by how they evaluate aspects of this context, namely the personal importance of the specific problems an algorithm is supposed to help address and their trust in the organizations deploying the algorithm. The objective performance of the presented algorithms affects acceptance much less in comparison. These findings are based on an original dataset from a survey covering two real-world applications, predictive policing and skin cancer prediction, with a sample of 2,661 respondents from a representative German online panel. The results have important implications for the conditions under which citizens will accept algorithms in the public sector…(More)”.

Marine Data Sharing: Challenges, Technology Drivers and Quality Attributes


Paper by Keila Lima et al.: “Many companies have been adopting data-driven applications in which products and services are centered around data analysis to approach new segments of the marketplace. Data ecosystems arise from deliberate data sharing among organizations. However, this migration to the new data-sharing paradigm has not yet advanced far in the marine domain. Nevertheless, better utilization of ocean data might be crucial for humankind in the future, for food production, for minerals, and for ensuring the ocean’s health…. We investigate the state of the art regarding data sharing in the marine domain, with a focus on aspects that impact the speed of establishing a data ecosystem for the ocean. We conducted an exploratory case study based on focus groups and workshops to understand the sharing of data in this context. We identified the main challenges of current systems that need to be addressed with respect to data sharing. Additionally, aspects related to the establishment of a data ecosystem were elicited and analyzed in terms of benefits, conflicts, and solutions…(More)”.

Is Facebook’s advertising data accurate enough for use in social science research? Insights from a cross-national online survey


Paper by André Grow et al.: “Social scientists increasingly use Facebook’s advertising platform for research, either in the form of conducting digital censuses of the general population, or for recruiting participants for survey research. Both approaches depend on the accuracy of the data that Facebook provides about its users, but little is known about how accurate these data are. We address this gap in a large-scale, cross-national online survey (N = 137,224), in which we compare self-reported and Facebook-classified demographic information (sex, age and region of residence). Our results suggest that Facebook’s advertising platform can be fruitfully used for conducting social science research if additional steps are taken to assess the accuracy of the characteristics under consideration…(More)”.

Shared Models in Networks, Organizations, and Groups


Paper by Joshua Schwartzstein and Adi Sunderam: “To understand new information, we exchange models or interpretations with others. This paper provides a framework for thinking about such social exchanges of models. The key assumption is that people adopt the interpretation in their network that best explains the data, given their prior beliefs. An implication is that interpretations evolve within a network. For many network structures, social learning mutes reactions to data: the exchange of models leaves beliefs closer to priors than they were before. Our results shed light on why disagreements persist as new information arrives, as well as the goal and structure of meetings in organizations…(More)”.

Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence


Paper by Peter Stone et al.: “In September 2016, Stanford’s “One Hundred Year Study on Artificial Intelligence” project (AI100) issued the first report of its planned long-term periodic assessment of artificial intelligence (AI) and its impact on society. It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research, chaired by Peter Stone of the University of Texas at Austin. The report, entitled “Artificial Intelligence and Life in 2030,” examines eight domains of typical urban settings on which AI is likely to have impact over the coming years: transportation, home and service robots, healthcare, education, public safety and security, low-resource communities, employment and workplace, and entertainment. It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI and its potential, and to help guide decisions in industry and governments, as well as to inform research and development in the field. The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University…(More)”.

The political imaginary of National AI Strategies


Paper by Guy Paltieli: “In the past few years, several democratic governments have published their National AI Strategies (NASs). These documents outline how AI technology should be implemented in the public sector and explain the policies that will ensure the ethical use of personal data. In this article, I examine these documents as political texts and reconstruct the political imaginary that underlies them. I argue that these documents intervene in contemporary democratic politics by suggesting that AI can help democracies overcome some of the challenges they are facing. To achieve this, NASs use different kinds of imaginaries—democratic, sociotechnical and data—that help citizens envision what a future AI democracy might look like. As part of this collective effort, a new kind of relationship between citizens and governments is formed. Citizens are seen as autonomous data subjects, but at the same time, they are expected to share their personal data for the common good. As a result, I argue, a new kind of political imaginary is developed in these documents: one that maintains a human-centric approach while championing a vision of collective sovereignty over data. This kind of political imaginary can become useful in understanding the roles of citizens and governments in this technological age…(More)”.