People watching: Abstractions and orthodoxies of monitoring


Paper by Victoria Wang and John V. Tucker: “Our society has an insatiable appetite for data. Much of the data is collected to monitor the activities of people, e.g., for discovering the purchasing behaviour of customers, observing the users of apps, managing the performance of personnel, and conforming to regulations and laws. Although monitoring practices are ubiquitous, monitoring as a general concept has received little analytical attention. We explore: (i) the nature of monitoring facilitated by software; (ii) the structure of monitoring processes; and (iii) the classification of monitoring systems. We propose an abstract definition of monitoring as a theoretical tool to analyse, document, and compare disparate monitoring applications. For us, monitoring is simply the systematic collection of data about the behaviour of people and objects. We then extend this concept with mechanisms for detecting events that require interventions and changes in behaviour, and describe five types of monitoring…(More)”.
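The paper's abstract definition, systematic data collection extended with event detection that can trigger interventions, can be rendered as a small schematic. The Python sketch below is purely illustrative; it is not from the paper, and all names and thresholds are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Observation:
    subject: str    # the person or object being monitored
    attribute: str  # what is observed, e.g. "purchase"
    value: object   # the recorded behaviour

@dataclass
class Monitor:
    """Monitoring as the systematic collection of behavioural data,
    extended with detectors for events that require intervention."""
    log: List[Observation] = field(default_factory=list)
    detectors: List[Callable[[Observation], bool]] = field(default_factory=list)

    def observe(self, obs: Observation) -> None:
        self.log.append(obs)          # systematic collection
        for detect in self.detectors:
            if detect(obs):           # an event requiring intervention
                self.intervene(obs)

    def intervene(self, obs: Observation) -> None:
        # Placeholder for a change in behaviour, an alert, etc.
        print(f"intervention for {obs.subject}: {obs.attribute}={obs.value}")

# Hypothetical usage: flag unusually large purchases.
monitor = Monitor(detectors=[lambda o: o.attribute == "purchase" and o.value > 1000])
monitor.observe(Observation("customer-42", "purchase", 1500))
```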

The Labor Market Consequences of Appropriate Technology


Paper by Gustavo de Souza: “Developing countries rely on technology created by developed countries. This paper demonstrates that such reliance increases wage inequality but leads to greater production in developing countries. I study a Brazilian innovation program that taxed the leasing of international technology to subsidize national innovation. I show that the program led firms to replace technology licensed from developed countries with in-house innovations, which reduced both employment and the share of high-skilled workers. Using a model of directed technological change and technology transfer, I find that increasing the share of firms that patent in Brazil by 1 p.p. decreases the skilled wage premium by 0.02% and production by 0.2%…(More)”.

Algorithms in the Public Sector. Why context matters


Paper by Georg Wenzelburger, Pascal D. König, Julia Felfeli, and Anja Achtziger: “Algorithms increasingly govern people’s lives, including through rapidly spreading applications in the public sector. This paper sheds light on the acceptance of algorithms used by the public sector, emphasizing that algorithms, as parts of socio-technical systems, are always embedded in a specific social context. We show that citizens’ acceptance of an algorithm is strongly shaped by how they evaluate aspects of this context, namely the personal importance of the specific problems the algorithm is supposed to help address and their trust in the organizations deploying it. By comparison, the objective performance of the presented algorithms affects acceptance much less. These findings are based on an original dataset from a survey covering two real-world applications, predictive policing and skin cancer prediction, with a sample of 2,661 respondents from a representative German online panel. The results have important implications for the conditions under which citizens will accept algorithms in the public sector…(More)”.

Marine Data Sharing: Challenges, Technology Drivers and Quality Attributes


Paper by Keila Lima et al.: “Many companies have been adopting data-driven applications in which products and services are centered around data analysis to approach new segments of the marketplace. Data ecosystems arise from deliberate data sharing among organizations. However, the migration to this new data-sharing paradigm has not come far in the marine domain. Nevertheless, better use of ocean data may be crucial for humankind in the future, for food production and minerals, and for ensuring the ocean’s health…We investigate the state of the art regarding data sharing in the marine domain, with a focus on aspects that impact the speed of establishing a data ecosystem for the ocean. We conducted an exploratory case study based on focus groups and workshops to understand the sharing of data in this context. We identified the main challenges of current systems that need to be addressed with respect to data sharing. Additionally, aspects related to the establishment of a data ecosystem were elicited and analyzed in terms of benefits, conflicts, and solutions…(More)”.

Is Facebook’s advertising data accurate enough for use in social science research? Insights from a cross-national online survey


Paper by André Grow et al.: “Social scientists increasingly use Facebook’s advertising platform for research, either in the form of conducting digital censuses of the general population, or for recruiting participants for survey research. Both approaches depend on the accuracy of the data that Facebook provides about its users, but little is known about how accurate these data are. We address this gap in a large-scale, cross-national online survey (N = 137,224), in which we compare self-reported and Facebook-classified demographic information (sex, age and region of residence). Our results suggest that Facebook’s advertising platform can be fruitfully used for conducting social science research if additional steps are taken to assess the accuracy of the characteristics under consideration…(More)”.
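As a rough illustration of the kind of comparison the authors describe, the Python sketch below computes per-attribute agreement between self-reported and platform-classified demographics. The records and field names are invented for illustration; this is not the study's code or its actual measure.

```python
# Hypothetical respondent records: self-reported survey answers paired
# with the platform's classification of the same person.
records = [
    {"self":     {"sex": "female", "age_band": "25-34", "region": "Bavaria"},
     "platform": {"sex": "female", "age_band": "25-34", "region": "Bavaria"}},
    {"self":     {"sex": "male",   "age_band": "35-44", "region": "Hesse"},
     "platform": {"sex": "male",   "age_band": "25-34", "region": "Hesse"}},
]

# Share of respondents for whom the platform's classification matches
# the self-report, computed separately for each attribute.
for attr in ("sex", "age_band", "region"):
    matches = sum(r["self"][attr] == r["platform"][attr] for r in records)
    print(f"{attr}: {matches / len(records):.0%} agreement")
```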

Shared Models in Networks, Organizations, and Groups


Paper by Joshua Schwartzstein & Adi Sunderam: “To understand new information, we exchange models or interpretations with others. This paper provides a framework for thinking about such social exchanges of models. The key assumption is that people adopt the interpretation in their network that best explains the data, given their prior beliefs. An implication is that interpretations evolve within a network. For many network structures, social learning mutes reactions to data: the exchange of models leaves beliefs closer to priors than they were before. Our results shed light on why disagreements persist as new information arrives, as well as the goal and structure of meetings in organizations…(More)”.
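The key assumption, that each agent adopts the interpretation in its network that best explains the data given the agent's prior beliefs, is easy to simulate. The toy Python sketch below is my own illustration, not the authors' formal model: the ring network, Gaussian priors, and the particular scoring rule are all simplifying assumptions. It shows the muting effect, with post-exchange beliefs ending closer to priors than the agents' initial reactions to the data.

```python
import random

random.seed(1)

N = 12
DATA = 10.0                                            # a surprising shared signal
priors = [random.gauss(0.0, 1.0) for _ in range(N)]    # priors centred near 0

# Each agent's initial interpretation: its own compromise between
# prior and data (equal weights, for simplicity).
models = [(p + DATA) / 2.0 for p in priors]
initial_mean = sum(models) / N

def score(model, prior, data, prior_weight=3.0):
    """How well a model explains the data for a given agent: fit to the
    data, penalized by distance from the agent's prior beliefs."""
    return -((data - model) ** 2) - prior_weight * (prior - model) ** 2

# Exchange models on a ring network: each round, every agent adopts the
# interpretation, among its own and its two neighbours', that best
# explains the data given the agent's own prior.
for _ in range(10):
    models = [
        max((models[i], models[i - 1], models[(i + 1) % N]),
            key=lambda m: score(m, priors[i], DATA))
        for i in range(N)
    ]

print(f"mean belief before exchange: {initial_mean:.2f}")
print(f"mean belief after exchange:  {sum(models) / N:.2f}")  # pulled back toward priors
```

Because every interpretation circulating in the network is anchored to somebody's prior, the exchange can only select among prior-friendly readings of the data, which is why the post-exchange mean always sits closer to the priors than the pre-exchange mean in this sketch.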

Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence


Paper by Peter Stone et al.: “In September 2016, Stanford’s “One Hundred Year Study on Artificial Intelligence” project (AI100) issued the first report of its planned long-term periodic assessment of artificial intelligence (AI) and its impact on society. It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research, chaired by Peter Stone of the University of Texas at Austin. The report, entitled “Artificial Intelligence and Life in 2030,” examines eight domains of typical urban settings on which AI is likely to have impact over the coming years: transportation, home and service robots, healthcare, education, public safety and security, low-resource communities, employment and workplace, and entertainment. It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI and its potential, to help guide decisions in industry and government, and to inform research and development in the field. The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University…(More)”.

The political imaginary of National AI Strategies


Paper by Guy Paltieli: “In the past few years, several democratic governments have published their National AI Strategies (NASs). These documents outline how AI technology should be implemented in the public sector and explain the policies that will ensure the ethical use of personal data. In this article, I examine these documents as political texts and reconstruct the political imaginary that underlies them. I argue that these documents intervene in contemporary democratic politics by suggesting that AI can help democracies overcome some of the challenges they are facing. To achieve this, NASs use different kinds of imaginaries—democratic, sociotechnical and data—that help citizens envision what a future AI democracy might look like. As part of this collective effort, a new kind of relationship between citizens and governments is formed. Citizens are seen as autonomous data subjects, but at the same time, they are expected to share their personal data for the common good. As a result, I argue, a new kind of political imaginary is developed in these documents: one that maintains a human-centric approach while championing a vision of collective sovereignty over data. This kind of political imaginary can become useful in understanding the roles of citizens and governments in this technological age…(More)”.

Scrape, Request, Collect, Repeat: How Data Journalists Around the World Transcend Obstacles to Public Data


Paper by Jason A. Martin, Lindita Camaj & Gerry Lanosga: “This study applies a typology of public data transparency infrastructure and the contextualism framework for analysing journalism practice to examine patterns in data journalism production. The goal was to identify differences in approaches to acquiring and reporting on data around the world based on comparisons of public data transparency infrastructure. Data journalists from 34 countries were interviewed to understand challenges in data access, strategies used to overcome obstacles, innovation in collaboration, and attitudes about open-data advocacy. Analysis reveals themes of journalistic interventionism in overcoming structural obstacles, along with the inventive techniques journalists use to acquire and build their own data sets even in the most restrictive government contexts. Data journalists are increasingly connected with colleagues, third parties, and the public in using data, eschewing notions of competition in favour of collaboration, and using crowdsourcing to address gaps in data. Patterns of direct and indirect activism are highlighted. Results contribute to a better understanding of global data journalism practice by revealing the influence of public data transparency infrastructure as a major factor that constrains or creates opportunities for data journalism as a subfield. Findings also broaden the cross-national base of empirical evidence on the developing practices and attitudes of data journalists…(More)”.

Tragedy of the Digital Commons


Paper by Chinmayi Sharma: “Google, iPhones, the national power grid, surgical operating rooms, baby monitors, surveillance technology, and wastewater management systems all run on open-source software. Open-source software, or software that is free and publicly available, powers our day-to-day lives. As a resource, it defies economic logic; it is built by developers, many of them volunteers, who create projects with the altruistic intention of donating them to the digital commons. Developers use it because it saves time and money and promotes innovation. Its benefits have led to its ubiquity and indispensability. Today, over 97% of all software uses open source. Without it, our critical infrastructure would crumble. The risk of that happening is more real than ever.

In December 2021, the Log4Shell vulnerability demonstrated that the issue of open-source security can no longer be ignored. One vulnerability found in a game of Minecraft threatened to take down systems worldwide—from the Belgian government to Google. The scope of the damage is unmatched; with open source, a vulnerability in one product can be used against every other entity that uses the same code. Open source’s benefits are also its burden. No one wants to pay for a resource they can get an unlimited supply of for free. Open source is not, however, truly unlimited. The open-source community is buckling under the weight of supporting over three-fourths of the world’s code. Rather than share the load, its primary beneficiaries, companies that build software, add to it. By failing to take basic precautionary measures in using open-source code, they make its exploitation nearly inevitable—when it happens, they free-ride on the already overwhelmed community to fix it. This doom cycle leaves everyone worse off because it leaves our critical infrastructure dangerously vulnerable.
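The scope problem the author describes, one flaw propagating to every downstream user of the same code, is essentially a reachability question over a dependency graph. The Python sketch below illustrates it with invented package names; it is not from the paper, and "logging-lib" is only a stand-in for a component like Log4j:

```python
# Hypothetical dependency graph: product -> direct dependencies.
deps = {
    "web-shop":      ["http-server", "logging-lib"],
    "power-grid-ui": ["dashboard-kit"],
    "dashboard-kit": ["logging-lib"],
    "http-server":   [],
    "logging-lib":   [],
}

def depends_on(package, target, graph):
    """True if `package` uses `target` anywhere in its dependency tree."""
    stack, seen = list(graph.get(package, [])), set()
    while stack:
        dep = stack.pop()
        if dep == target:
            return True
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return False

# A single vulnerable component affects every product that reaches it,
# directly or transitively.
vulnerable = "logging-lib"
print([p for p in deps if depends_on(p, vulnerable, deps)])
# -> ['web-shop', 'power-grid-ui', 'dashboard-kit']
```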

Since it began, open source has worked behind the scenes to make society better. Today, its struggles are going unnoticed and unaddressed. The private sector isn’t willing to help—the few who are cannot carry the burden alone. So far, government interventions have been lacking. Securing open source requires much more. To start, it is time we treated open source as the critical infrastructure it is…(More)”.