Robot census: Gathering data to improve policymaking on new technologies


Essay by Robert Seamans: There is understandable excitement about the impact that new technologies like artificial intelligence (AI) and robotics will have on our economy. In our everyday lives, we already see the benefits of these technologies: when we use our smartphones to navigate from one location to another using the fastest available route or when a predictive typing algorithm helps us finish a sentence in our email. At the same time, there are concerns about possible negative effects of these new technologies on labor. The Councils of Economic Advisers of the past two Administrations have addressed these issues in the annual Economic Report of the President (ERP). For example, the 2016 ERP included a chapter on technology and innovation that linked robotics to productivity and growth, and the 2019 ERP included a chapter on artificial intelligence that discussed the uneven effects of technological change. Both these chapters used data at highly aggregated levels, in part because that is the data that is available. As I’ve noted elsewhere, AI and robots are everywhere, except, as it turns out, in the data.

To date, there have been no large-scale, systematic studies in the U.S. on how robots and AI affect productivity and labor in individual firms or establishments (a firm could own one or more establishments, which, for example, could be a plant in a manufacturing setting or a storefront in a retail setting). This is because the data are scarce. Academic researchers interested in the effects of AI and robotics on economic outcomes have mostly used aggregate country and industry-level data. Very recently, some have studied these issues at the firm level using data on robot imports to France, Spain, and other countries. I review a few of these academic papers in both categories below; they provide early findings on the nuanced effects these new technologies have on labor. Thanks to some excellent work being done by the U.S. Census Bureau, however, we may soon have more data to work with. This includes new questions on robot purchases in the Annual Survey of Manufactures and Annual Capital Expenditures Survey and new questions on other technologies including cloud computing and machine learning in the Annual Business Survey….(More)”.

Profiling Insurrection: Characterizing Collective Action Using Mobile Device Data


Paper by David Van Dijcke and Austin L. Wright: “We develop a novel approach for estimating spatially dispersed community-level participation in mass protest. This methodology is used to investigate factors associated with participation in the ‘March to Save America’ event in Washington, D.C. on January 6, 2021. This study combines granular location data from more than 40 million mobile devices with novel measures of community-level voting patterns, the location of organized hate groups, and the entire georeferenced digital archive of the social media platform Parler. We find evidence that partisanship, socio-political isolation, proximity to chapters of the Proud Boys organization, and the local activity on Parler are robustly associated with protest participation. Our research fills a prominent gap in the study of collective action: identifying and studying communities involved in mass-scale events that escalate into violent insurrection….(More)”.
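The core measure the paper describes — spatially dispersed, community-level participation — can be illustrated with a toy sketch. This is a hypothetical simplification with invented data and names, not the authors' actual pipeline: participation is estimated as the share of each community's mobile devices that were observed near the event on the event day.

```python
from collections import defaultdict

# Hypothetical device panel: the home community of each device, and the
# set of devices observed near the event on the event day.
home_county = {"d1": "A", "d2": "A", "d3": "B", "d4": "B", "d5": "B"}
at_event = {"d1", "d3", "d4"}

def participation_rates(home_county, at_event):
    """Estimate community-level participation as the share of each
    community's devices that were observed at the event."""
    devices = defaultdict(int)
    attendees = defaultdict(int)
    for dev, county in home_county.items():
        devices[county] += 1
        if dev in at_event:
            attendees[county] += 1
    return {c: attendees[c] / devices[c] for c in devices}

rates = participation_rates(home_county, at_event)
```

In the actual study, such rates would then be regressed on community covariates (partisanship, proximity to hate-group chapters, Parler activity) to identify the correlates of participation.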

Monitoring the R-Citizen in the Time of Coronavirus


Paper by John Flood and Monique Lewis: “The COVID pandemic has overwhelmed many countries in their attempts at tracking and tracing people infected with the disease. Our paper examines how tracking and tracing is done, looking at both manual and technological means. It raises issues around efficiency, privacy, and related concerns. The paper investigates more closely the approaches taken by two countries, namely Taiwan and the UK. It shows how tracking and tracing can be handled sensitively and openly, compared to the bungled attempts of the UK that have led to the greatest number of deaths in Europe. The key message is that all communications around tracking and tracing need to be open, clear, without confusion, and delivered by those closest to the communities receiving the messages. This occurred in Taiwan, but in the UK the central government chose to shut out local government and other local resources. The highly centralised dirigiste approach of the government alienated much of the population, who came to distrust government. As local government was later brought into the COVID fold, the messaging improved. Taiwan always remained open in its communications, even allowing citizens to participate in improving the technology around COVID. Taiwan learnt from its earlier experiences with SARS, whereas the UK ignored its pandemic planning exercises from earlier years and even experimented with crude ideas of herd immunity by letting the disease rip through the population, an idea soon abandoned.

We also derive a new type of citizen from the pandemic, namely the R citizen. This unfortunate archetype is both a blessing and a curse. If the citizen’s score rises above 1, the disease accelerates and the R citizen is chastised; if it declines to zero, the disease disappears, but the citizen receives no plaudits for their behaviour. The R citizen can neither exist nor die, rather like Schrödinger’s cat. R citizens are, of course, datafied individuals who are assemblages of data and are treated as distinct from humans. We argue they cannot be so distinguished without rendering them inhuman. This is as much a moral category as it is a scientific one….(More)”.

A Worldwide Assessment of COVID-19 Pandemic-Policy Fatigue


Paper by Anna Petherick et al: “As the COVID-19 pandemic lingers, signs of “pandemic-policy fatigue” have raised worldwide concerns. But the phenomenon itself is yet to be thoroughly defined, documented, and delved into. Based on self-reported behaviours from samples of 238,797 respondents, representative of the populations of 14 countries, as well as global mobility and policy data, we systematically examine the prevalence and shape of people’s alleged gradual reduction in adherence to governments’ protective-behaviour policies against COVID-19. Our results show that from March through December 2020, pandemic-policy fatigue was empirically meaningful and geographically widespread. It emerged for high-cost and sensitising behaviours (physical distancing) but not for low-cost and habituating ones (mask wearing), and was less intense among retired people, people with chronic diseases, and in countries with high interpersonal trust. Particularly due to fatigue reversal patterns in high- and upper-middle-income countries, we observe an arch rather than a monotonic decline in global pandemic-policy fatigue….(More)”.

Are New Technologies Changing the Nature of Work? The Evidence So Far


Report by Kristyn Frank and Marc Frenette for the Institute for Research on Public Policy (Canada): “In recent years, groundbreaking advances in artificial intelligence and their implications for automation technology have fuelled speculation that the very nature of work is being altered in unprecedented ways. News headlines regularly refer to the “changing nature of work,” but what does it mean? Is there evidence that work has already been transformed by the new technologies? And if so, are these changes more dramatic than those experienced before?

In this paper, Kristyn Frank and Marc Frenette offer insights on these questions, based on the new research they conducted with their colleague Zhe Yang at Statistics Canada. Two aspects of work are under the microscope: the mix of work activities (or tasks) that constitute a job, and the mix of jobs in the economy. If new automation technologies are indeed changing the nature of work, the authors argue, then nonautomatable tasks should be increasingly important, and employment should be shifting toward occupations primarily involving such tasks.

According to the authors, nonroutine cognitive tasks (analytical or interpersonal) did become more important between 2011 and 2018. However, the changes were relatively modest, ranging from a 1.5 percent increase in the average importance of establishing and maintaining interpersonal relationships, to a 3.7 percent increase in analyzing data or information. Routine cognitive tasks — such as data entry — also gained importance, but these gains were even smaller. The picture is less clear for routine manual tasks, as the importance of tasks for which the pace is determined by the speed of equipment declined by close to 3 percent, whereas other tasks in that category became slightly more important.

Looking at longer-term shifts in overall employment, between 1987 and 2018, the authors find a gradual increase in the share of workers employed in occupations associated with nonroutine tasks, and a decline in routine-task-related occupations. The most pronounced shift in employment was away from production, craft, repair and operative occupations toward managerial, professional and technical occupations. However, they note that this shift to nonroutine occupations was not more pronounced between 2011 and 2018 than it was in the preceding decades. For instance, the share of employment in managerial, professional and technical occupations increased by 1.8 percentage points between 2011 and 2018, compared with a 6 percentage point increase between 1987 and 2010.

Most sociodemographic groups experienced the shift toward nonroutine jobs, although there were some exceptions. For instance, the employment share of workers in managerial, professional and technical occupations increased for all workers, but much more so for women than for men. Interestingly, there was a decline in the employment shares of workers in these occupations among those with a post-secondary education. The explanation for this lies in the major increase over the past three decades in the proportion of workers with post-secondary education, which led some of them to move into jobs for which they are overqualified….(More)”.

Give more data, awareness and control to individual citizens, and they will help COVID-19 containment


Paper by Mirco Nanni et al: “The rapid dynamics of COVID-19 calls for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the “phase 2” of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens’ privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and to avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens’ “personal data stores”, to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering for infected people in a privacy-preserving fashion; and this, in turn, enables both contact tracing and the early detection of outbreak hotspots at a more finely granulated geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates—if and when they want and for specific aims—with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good to the extent they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society….(More)”.
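The decentralized idea can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names, not the authors' architecture: each device keeps its visit history locally and releases only coarse (cell, day) aggregates, and only when the owner consents after a positive test.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Visit:
    cell: str  # coarse spatial cell, e.g. a truncated geohash
    day: str   # coarse time bucket, e.g. "2020-04-01"

@dataclass
class PersonalDataStore:
    """All visits stay on the owner's device until explicitly shared."""
    visits: list = field(default_factory=list)

    def record(self, cell, day):
        self.visits.append(Visit(cell, day))

    def share_aggregates(self, consent, tested_positive):
        """Release only (cell, day) counts, and only with the owner's
        consent after a positive test; otherwise release nothing."""
        if not (consent and tested_positive):
            return {}
        return dict(Counter((v.cell, v.day) for v in self.visits))

store = PersonalDataStore()
store.record("u4pruyd", "2020-04-01")
store.record("u4pruyd", "2020-04-01")
store.record("u4pruyk", "2020-04-02")
shared = store.share_aggregates(consent=True, tested_positive=True)
```

Because only positive, consenting users ever upload, the central backend handles a small fraction of the population's data, which is what makes the approach scalable.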

Governance models for redistribution of data value


Essay by Maria Savona: “The growth of interest in personal data has been unprecedented. Issues of privacy violation, power abuse, practices of electoral behaviour manipulation unveiled in the Cambridge Analytica scandal, and a sense of imminent impingement of our democracies are at the forefront of policy debates. Yet, these concerns seem to overlook the issue of concentration of equity value (stemming from data value, which I use interchangeably here) that underpins the current structure of big tech business models. Whilst these quasi-monopolies own the digital infrastructure, they do not own the personal data that provide the raw material for data analytics. 

The European Commission has been at the forefront of global action to promote convergence of the governance of data (privacy), including, but not limited to, the General Data Protection Regulation (GDPR) (European Commission 2016), enforced in May 2018. Attempts to enforce similar regulations are emerging around the world, including the California Consumer Privacy Act, which came into effect on 1 January 2020. Notwithstanding greater awareness among citizens around the use of their data, companies find that complying with GDPR is, at best, a useless nuisance. 

Data have been seen as ‘innovation investment’ since the beginning of the 1990s. The first edition of the Oslo Manual, the OECD’s international guidelines for collecting and using data on innovation in firms, dates back to 1992 and included the collection of databases on employee best practices as innovation investments. Data are also measured as an ‘intangible asset’ (Corrado et al. 2009 was one of the pioneering studies). What has changed over the last decade? The scale of data generation today is such that its management and control might have already gone well beyond the capacity of the very tech giants we are all feeding. Concerns around data governance and data privacy might be too little and too late.

In this column, I argue that economists have failed twice: first, to predict the massive concentration of data value in the hands of large platforms; and second, to account for the complexity of the political economy aspects of data accumulation. Based on a pair of recent papers (Savona 2019a, 2019b), I systematise recent research and propose a novel data rights approach to redistribute data value whilst not undermining the range of ethical, legal, and governance challenges that this poses….(More)”.

Personal experiences bridge moral and political divides better than facts


Paper by Emily Kubin, Curtis Puryear, Chelsea Schein, and Kurt Gray: “All Americans are affected by rising political polarization, whether because of a gridlocked Congress or antagonistic holiday dinners. People believe that facts are essential for earning the respect of political adversaries, but our research shows that this belief is wrong. We find that sharing personal experiences about a political issue—especially experiences involving harm—helps to foster respect via increased perceptions of rationality. This research provides a straightforward pathway for increasing moral understanding and decreasing political intolerance. These findings also raise questions about how science and society should understand the nature of truth in the era of “fake news.” In moral and political disagreements, everyday people treat subjective experiences as truer than objective facts….(More)”

Digital platforms for development: Foundations and research agenda


Paper by Carla Bonina, Kari Koskinen, Ben Eaton, and Annabelle Gawer: “Digital platforms hold a central position in today’s world economy and are said to offer great potential for the economies and societies of the global South. Yet, to date, the scholarly literature on digital platforms has largely concentrated on business, while their developmental implications remain understudied. In part, this is because digital platforms are a challenging research object due to their lack of conceptual definition, their spread across different regions and industries, and their intertwined nature with institutions, actors and digital technologies. The purpose of this article is to contribute to the ongoing debate in information systems and ICT4D research to understand what digital platforms mean for development. To do so, we first define what digital platforms are and differentiate between transaction and innovation platforms, and explain their key characteristics in terms of purpose, research foundations, material properties and business models. We add the socio-technical context in which digital platforms operate and the linkages to developmental outcomes. We then conduct an extensive review to explore what current areas, developmental goals, tensions and issues emerge in the literature on platforms and development and identify relevant gaps in our knowledge. We later elaborate on six research questions to advance the studies on digital platforms for development: on indigenous innovation, on digital platforms and institutions, on the exacerbation of inequalities, on alternative forms of value, on the dark side of platforms, and on the applicability of the platform typology for development….(More)”.

Using “Big Data” to forecast migration


Blog Post by Jasper Tjaden, Andres Arau, Muertizha Nuermaimaiti, Imge Cetin, Eduardo Acostamadiedo, Marzia Rango: Act 1 — High Expectations

“Data is the new oil,” they say. ‘Big Data’ is even bigger than that. The “data revolution” will contribute to solving societies’ problems and help governments adopt better policies and run more effective programs. In the migration field, digital trace data are seen as a potentially powerful tool to improve migration management processes (visa applications, asylum decisions and the geographic allocation of asylum seekers, facilitating integration, “smart borders”, etc.).

Forecasting migration is one particular area where big data seems to excite data nerds (like us) and policymakers alike. If there is one way big data has already made a difference, it is its ability to bring different actors together — data scientists, business people and policy makers — to sit through countless slides with numbers, tables and graphs. Traditional migration data sources, like censuses, administrative data and surveys, have never quite managed to generate the same level of excitement.

Many EU countries are currently heavily investing in new ways to forecast migration. Relatively large numbers of asylum seekers in 2014, 2015 and 2016 strained the capacity of many EU governments. Better forecasting tools are meant to help governments prepare in advance.

In a recent European Migration Network study, 10 out of the 22 EU governments surveyed said they make use of forecasting methods, many using open-source data for “early warning and risk analysis” purposes. The 2020 European Migration Network conference was dedicated entirely to the theme of forecasting migration, hosting more than 15 expert presentations on the topic. The recently proposed EU Pact on Migration and Asylum outlines a “Migration Preparedness and Crisis Blueprint” which “should provide timely and adequate information in order to establish the updated migration situational awareness and provide for early warning/forecasting, as well as increase resilience to efficiently deal with any type of migration crisis.” (p. 4) The European Commission is currently finalizing a feasibility study on the use of artificial intelligence for predicting migration to the EU; Frontex — the EU Border Agency — is scaling up efforts to forecast irregular border crossings; EASO — the European Asylum Support Office — is devising a composite “push-factor index” and experimenting with forecasting asylum-related migration flows using machine learning and data at scale. In Fall 2020, during Germany’s EU Council Presidency, the German Interior Ministry organized a workshop series around Migration 4.0 highlighting the benefits of various ways to “digitalize” migration management. At the same time, the EU is investing substantial resources in migration forecasting research under its Horizon 2020 programme, including QuantMig, ITFLOWS, and HumMingBird.
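Why do timelier digital-trace signals appeal to forecasters? A toy sketch makes the intuition concrete. The numbers, series names, and the blending rule below are all hypothetical illustrations, not any agency's actual model: administrative statistics arrive with a lag, so a one-step-ahead forecast can blend the last administrative observation with a noisier but more current digital-trace signal (e.g., scaled search-engine interest).

```python
# Hypothetical monthly asylum-application counts (administrative data)
# and a scaled digital-trace signal that tends to lead the series.
admin = [100, 110, 130, 170, 160, 150]
trace = [105, 120, 160, 175, 158, 152]

def blended_forecast(admin, trace, weight=0.5):
    """One-step-ahead forecast: a weighted blend of the last observed
    administrative value and the latest digital-trace signal. `weight`
    sets how much trust is placed in the timelier but noisier trace."""
    return (1 - weight) * admin[-1] + weight * trace[-1]

forecast = blended_forecast(admin, trace, weight=0.3)
```

Real systems replace this blend with machine-learning models over many covariates, but the design question is the same: how much weight should an early-warning system give to unofficial, fast-moving signals?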

Is all this excitement warranted?

Yes, it is….(More)” See also: Big Data for Migration Alliance