Exploring city digital twins as policy tools: A task-based approach to generating synthetic data on urban mobility


Paper by Gleb Papyshev and Masaru Yarime: “This article discusses the technology of city digital twins (CDTs) and its potential applications in the policymaking context. The article analyzes the history of the development of the concept of digital twins and how it is now being adopted at the city scale. One of the most advanced projects in the field—Virtual Singapore—is discussed in detail to determine the scope of its potential domains of application and highlight challenges associated with it. Concerns related to data privacy, availability, and applicability for predictive simulations are analyzed, and the use of synthetic data is proposed as a way to address these challenges. The authors argue that despite the abundance of urban data, historical data are not always applicable for predictions about events for which no data exist, and they discuss the potential privacy challenges of using micro-level individual mobility data in CDTs. A task-based approach to urban mobility data generation is proposed in the last section of the article. Under this approach, city authorities can establish services that ask people to carry out specific activities in the urban environment in order to create data for possible policy interventions for which no useful historical data exist. This approach can help address the challenges of data availability without raising privacy concerns, as the data generated in this way will not represent any real individual in society….(More)”.
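
A minimal sketch of what a task-based mobility record might look like in practice, assuming a simple task structure (assigned origin, destination, mode, and departure hour); the field names and values are illustrative, not the authors' specification:

```python
import random
from dataclasses import dataclass

# Hypothetical task definition: a city service asks a volunteer to travel
# between two assigned points, producing a mobility record that describes
# the task rather than any real resident's routine.
@dataclass
class MobilityTask:
    task_id: int
    origin: tuple          # (lat, lon) of assigned start point
    destination: tuple     # (lat, lon) of assigned end point
    mode: str              # e.g. "walk", "bus", "mrt"
    start_hour: int        # assigned departure hour

def generate_synthetic_records(tasks, seed=42):
    """Turn completed tasks into synthetic trip records for the digital twin."""
    rng = random.Random(seed)
    records = []
    for t in tasks:
        # Travel time is simulated here; in practice it would be measured
        # by the participant carrying out the assigned task.
        duration_min = rng.randint(10, 60)
        records.append({
            "task_id": t.task_id,          # links to the task, not to a person
            "origin": t.origin,
            "destination": t.destination,
            "mode": t.mode,
            "depart_hour": t.start_hour,
            "duration_min": duration_min,
        })
    return records

tasks = [
    MobilityTask(1, (1.3521, 103.8198), (1.2966, 103.7764), "mrt", 8),
    MobilityTask(2, (1.3000, 103.8000), (1.3521, 103.8198), "bus", 18),
]
print(generate_synthetic_records(tasks))
```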

A comprehensive study of technological change


Article by Scott Murray: The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

“The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not accessed by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics….(More)”.
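
As an illustration of the general idea (training on the small set of empirically measured domains and extrapolating to the rest), the sketch below regresses improvement rates on hypothetical patent-derived features; the feature names and the random-forest model are assumptions for illustration, not the method used in the study:

```python
# Illustrative sketch (not the authors' code): regress empirically measured
# improvement rates for a small set of domains on patent-derived features,
# then extrapolate to domains that lack direct performance measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy feature matrix: one row per technology domain.
# Columns (hypothetical): mean forward citations, citation-network centrality,
# mean patent age, and a keyword-based novelty score from the patent text.
X_measured = np.array([
    [12.0, 0.81, 4.2, 0.67],   # e.g. a software-related domain
    [ 3.5, 0.22, 9.8, 0.31],   # e.g. a mechanical domain
    [ 8.1, 0.55, 6.1, 0.48],
])
y_measured = np.array([0.42, 0.05, 0.18])   # observed annual improvement rates

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_measured, y_measured)

# Predict for domains with no empirical performance time series.
X_unmeasured = np.array([[10.3, 0.74, 5.0, 0.59]])
print(model.predict(X_unmeasured))   # estimated improvement rate
```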

Stewardship of global collective behavior


Paper by Joseph B. Bak-Coleman et al: “Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems….(More)”.

Machine Learning and Mobile Phone Data Can Improve the Targeting of Humanitarian Assistance


Paper by Emily Aiken et al: “The COVID-19 pandemic has devastated many low- and middle-income countries (LMICs), causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have mobilized targeted social assistance programs. Targeting is a central challenge in the administration of these programs: given available data, how does one rapidly identify the individuals and families with the greatest need? This challenge is particularly acute in the large number of LMICs that lack recent and comprehensive data on household income and wealth.

Here we show that non-traditional “big” data from satellites and mobile phone networks can improve the targeting of anti-poverty programs. Our approach uses traditional survey-based measures of consumption and wealth to train machine learning algorithms that recognize patterns of poverty in non-traditional data; the trained algorithms are then used to prioritize aid to the poorest regions and mobile subscribers. We evaluate this approach by studying Novissi, Togo’s flagship emergency cash transfer program, which used these algorithms to determine eligibility for a rural assistance program that disbursed millions of dollars in COVID-19 relief aid. Our analysis compares outcomes – including exclusion errors, total social welfare, and measures of fairness – under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo at the time, the machine learning approach reduces errors of exclusion by 4-21%. Relative to methods that require a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine learning approach increases exclusion errors by 9-35%. These results highlight the potential for new data sources to contribute to humanitarian response efforts, particularly in crisis settings when traditional data are missing or out of date….(More)”.
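A minimal sketch of this train-then-target pipeline on synthetic data, using a generic gradient-boosted model; the feature names, budget share, and data are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of the targeting pipeline described above (assumed feature
# names and a generic model, not the authors' implementation).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Phone-derived features for surveyed households (e.g. call volume,
# mobile-money activity, nighttime-light intensity of the home region).
X_survey = rng.normal(size=(500, 3))
consumption = 2.0 + X_survey @ np.array([0.5, 0.3, 0.8]) + rng.normal(0, 0.3, size=500)

# 1. Train on survey households where consumption is observed.
model = GradientBoostingRegressor().fit(X_survey, consumption)

# 2. Predict consumption for all mobile subscribers and target the
#    predicted-poorest 30% with cash transfers.
X_all = rng.normal(size=(10_000, 3))
predicted = model.predict(X_all)
budget_share = 0.30
targeted = predicted <= np.quantile(predicted, budget_share)

# 3. Exclusion error: truly poor subscribers not reached by the program
#    (true consumption is unobserved in practice; simulated here).
true_consumption = 2.0 + X_all @ np.array([0.5, 0.3, 0.8]) + rng.normal(0, 0.3, size=10_000)
truly_poor = true_consumption <= np.quantile(true_consumption, budget_share)
exclusion_error = np.mean(truly_poor & ~targeted) / np.mean(truly_poor)
print(f"Exclusion error: {exclusion_error:.2%}")
```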

Have behavioral sciences delivered on their promise to influence environmental policy and conservation practice?


Paper by Maria Alejandra Velez and Lina Moros: “After four decades of refining our understanding of decision-making processes, a form of consensus has developed around the crucial role that behavioral science can play in changing non-cooperative decisions and promoting pro-environmental behaviors. However, has behavioral science delivered on its promise to influence environmental policy and conservation practice? We discuss key lessons coming from studies into the dual process theory of thinking and the presence of cognitive biases, social norms and intrinsic motivations. We then discuss the empirical findings by reviewing relevant research published over the past five years, and identify emerging lessons. Recent studies focus on providing feedback, manipulating framing, using green nudges, or activating social norms in urban contexts, mainly energy and water. Interventions are needed in the context of common pool resources in the global south. We end by discussing the great potential for scaling up programs and interventions, but challenges remain for research and practice….(More)”

The coloniality of collaboration: sources of epistemic obedience in data-intensive astronomy in Chile


Paper by Sebastián Lehuedé: “Data collaborations have gained currency over the last decade as a means for data- and skills-poor actors to thrive as a fourth paradigm takes hold in the sciences. Against this backdrop, this article traces the emergence of a collaborative subject position that strives to establish reciprocal and technical-oriented collaborations so as to catch up with the ongoing changes in research.

Combining insights from the modernity/coloniality group, political theory and science and technology studies, the article argues that this positionality engenders epistemic obedience by bracketing off critical questions regarding with whom and for whom knowledge is generated. In particular, a dis-embedding of the data producers, the erosion of local ties, and a data conformism are identified as fresh sources of obedience impinging upon the capacity to conduct research attuned to the needs and visions of the local context. A discursive-material analysis of interviews and field notes stemming from the case of astronomy data in Chile is conducted, examining the vision of local actors aiming to gain proximity to the mega observatories producing vast volumes of data in the Atacama Desert.

Given that these observatories are predominantly under the control of organisations from the United States and Europe, the adoption of a collaborative stance is now seen as the best means to ensure skills and technology transfer to local research teams. Delving into the epistemological dimension of data colonialism, this article warns that an increased emphasis on collaboration runs the risk of reproducing planetary hierarchies in times of data-intensive research….(More)”.

Household Financial Transaction Data


Paper by Scott R. Baker & Lorenz Kueng: “The growing availability and use of detailed household financial transaction microdata has dramatically expanded the ability of researchers to understand both household decision-making and aggregate fluctuations across a wide range of fields. This class of transaction data is derived from a myriad of sources including financial institutions, FinTech apps, and payment intermediaries. We review how these detailed data have been utilized in finance and economics research and the benefits they enable beyond more traditional measures of income, spending, and wealth. We discuss the future potential for this flexible class of data in firm-focused research, real-time policy analysis, and macro statistics….(More)”.

The Predictive Power of Patents


Paper by Sabrina Safrin: “This article explains that domestic patenting activity may foreshadow a country’s level of regulation of path-breaking technologies. The article considers whether different governments will act with a light or a heavy regulatory hand when encountering a new disruptive technology. The article hypothesizes that part of the answer to this important regulatory, economic, and geopolitical question may lie in an unexpected place: the world’s patent offices. Countries with early and significant patent activity in an emerging technology are more likely to view themselves as having a stake in the technology and therefore will be less inclined to subject the technology to extensive health, safety and environmental regulation that would constrain it. The article introduces the term “patent footprint” to describe a country’s degree of patenting activity in a new technology, and the article posits that a country’s patent footprint may provide an early clue to its willingness or reluctance to strenuously regulate the new technology. Even more so, lack of geographic diversity in patent footprints may help predict whether an emerging technology will face extensive international regulation. Patent footprints provide a useful tool to policymakers, businesses, investors, and NGOs considering the health, safety, and environmental regulation of a disruptive technology. The predictive power of patent footprints adds to the literature on the broader function of patents in society….(More)”.
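
The sketch below illustrates one way a country's patent footprint and the geographic concentration of footprints could be computed from filing counts; the share and Herfindahl-style index are assumptions for illustration, since the article does not prescribe a specific formula:

```python
# Illustrative calculation of a country's "patent footprint" (its share of
# patenting activity in an emerging technology) and of the geographic
# concentration of those footprints. Counts and formulas are hypothetical.
from collections import Counter

# Hypothetical counts of patent filings in some emerging technology, by country.
filings = Counter({"US": 420, "CN": 380, "JP": 90, "DE": 60, "KR": 50})

total = sum(filings.values())
footprints = {country: n / total for country, n in filings.items()}

# A Herfindahl-style index: values near 1 mean footprints are concentrated in
# few countries, which (per the article's hypothesis) may make extensive
# international regulation of the technology less likely.
concentration = sum(share ** 2 for share in footprints.values())

print(footprints)
print(f"Geographic concentration: {concentration:.3f}")
```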

On the forecastability of food insecurity


Paper by Pietro Foini, Michele Tizzoni, Daniela Paolotti, and Elisa Omodei: “Food insecurity, defined as the lack of physical or economic access to safe, nutritious and sufficient food, remains one of the main challenges included in the 2030 Agenda for Sustainable Development. Near real-time data on the food insecurity situation collected by international organizations such as the World Food Programme can be crucial to monitor and forecast time trends of insufficient food consumption levels in countries at risk.

Here, using food consumption observations in combination with secondary data on conflict, extreme weather events and economic shocks, we build a forecasting model based on gradient boosted regression trees to create predictions on the evolution of insufficient food consumption trends up to 30 days into the future in 6 countries (Burkina Faso, Cameroon, Mali, Nigeria, Syria and Yemen). Results show that the number of available historical observations is a key determinant of the forecasting model's performance. Among the 6 countries studied in this work, for those with the longest food insecurity time series, the proposed forecasting model makes it possible to forecast the prevalence of people with insufficient food consumption up to 30 days into the future with higher accuracy than a naive approach based on the last measured prevalence only. The framework developed in this work could provide decision makers with a tool to assess how the food insecurity situation will evolve in the near future in countries at risk. Results clearly point to the added value of continuous near real-time data collection at the sub-national level…(More)”.
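
A minimal sketch of this forecasting setup on synthetic data, pairing lagged prevalence with conflict, weather, and price covariates and comparing a gradient-boosted model against the naive persistence baseline; the feature construction is an assumption for illustration and does not reproduce the WFP data pipeline:

```python
# Sketch of the forecasting setup described above, with synthetic data and
# assumed feature construction (lagged prevalence plus conflict/weather/price
# covariates); not the authors' code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
days = 600
prevalence = 0.3 + 0.05 * np.sin(np.arange(days) / 30) + rng.normal(0, 0.01, days)
conflict = rng.poisson(2, days)                 # daily conflict events (synthetic)
rainfall_anomaly = rng.normal(0, 1, days)
food_price_index = 100 + np.cumsum(rng.normal(0, 0.2, days))

horizon, lags = 30, 60
X, y = [], []
for t in range(lags, days - horizon):
    X.append(np.concatenate([
        prevalence[t - lags:t],                               # lagged prevalence
        [conflict[t], rainfall_anomaly[t], food_price_index[t]],
    ]))
    y.append(prevalence[t + horizon])                         # target 30 days ahead
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = GradientBoostingRegressor().fit(X[:split], y[:split])

gbt_mae = mean_absolute_error(y[split:], model.predict(X[split:]))
naive_mae = mean_absolute_error(y[split:], X[split:, lags - 1])  # last observed value
print(f"GBT MAE: {gbt_mae:.4f}  vs  naive persistence MAE: {naive_mae:.4f}")
```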

Fighting Climate Change: The Role of Norms, Preferences, and Moral Values


Paper by Armin Falk: “We document individual willingness to fight climate change and its behavioral determinants in a large representative sample of US adults. Willingness to fight climate change – as measured through an incentivized donation decision – is highly heterogeneous across the population. Individual beliefs about social norms, economic preferences such as patience and altruism, as well as universal moral values positively predict climate preferences. Moreover, we document systematic misperceptions of prevalent social norms. Respondents vastly underestimate the prevalence of climate-friendly behaviors and norms among their fellow citizens. Providing respondents with correct information causally raises individual willingness to fight climate change as well as individual support for climate policies. The effects are strongest for individuals who are skeptical about the existence and threat of global warming…(More)”.