Paper by Geoff Keeling et al: “Moral imagination” is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated roleplaying and discussion, participants are prompted to examine their own intentions, articulate justifications for actions, and consider the impact of decisions on various stakeholders. This process helps developers to reenvision their choices and responsibilities, ultimately contributing to a culture of responsible innovation…(More)”.
Human-AI coevolution
Paper by Dino Pedreschi et al: “Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users’ choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political…(More)”.
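To make the feedback-loop mechanism concrete, the toy simulation below shows how a recommender retrained on its own logs can drift toward the choices it induced: exposure shapes choices, choices become training data, and the retrained model shapes the next round of exposure. This is a minimal illustrative sketch, not the authors' model; the mixing weights, the update rule, and the popularity-based recommender are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rounds = 200, 20, 30

true_pref = rng.dirichlet(np.ones(n_items), size=n_users)  # users' latent tastes
model_scores = np.ones(n_items) / n_items                   # recommender's item scores

for t in range(rounds):
    # The recommender exposes items in proportion to its current scores;
    # users choose by mixing their own taste with what is exposed.
    exposure = model_scores / model_scores.sum()
    choice_prob = 0.7 * true_pref + 0.3 * exposure
    choice_prob /= choice_prob.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(n_items, p=p) for p in choice_prob])

    # Logged choices become training data: the model drifts toward whatever
    # was chosen, closing the human-AI feedback loop.
    counts = np.bincount(choices, minlength=n_items)
    model_scores = 0.9 * model_scores + 0.1 * counts / counts.sum()

print("max/mean item score after the loop:", model_scores.max() / model_scores.mean())
```

Varying the mixing weight between users' own tastes and the recommender's exposure is a simple way to probe how strongly the loop feeds back on itself.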
Code and Craft: How Generative AI Tools Facilitate Job Crafting in Software Development
Paper by Leonie Rebecca Freise et al: “The rapid evolution of the software development industry challenges developers to manage their diverse tasks effectively. Traditional assistant tools in software development often fall short of supporting developers efficiently. This paper explores how generative artificial intelligence (GAI) tools, such as GitHub Copilot or ChatGPT, facilitate job crafting—a process where employees reshape their jobs to meet evolving demands. By integrating GAI tools into their workflows, software developers can focus more on creative problem-solving, which enhances job satisfaction and fosters a more innovative work environment. This study investigates how GAI tools influence task, cognitive, and relational job crafting behaviors among software developers, examining their implications for professional growth and adaptability within the industry. The paper provides insights into the transformative impact of GAI tools on job crafting practices in software development, emphasizing their role in enabling developers to redefine their job functions…(More)”.
Quantitative Urban Economics
Paper by Stephen J. Redding: “This paper reviews recent quantitative urban models. These models are sufficiently rich to capture observed features of the data, such as many asymmetric locations and a rich geography of the transport network. Yet they remain sufficiently tractable to permit an analytical characterization of their theoretical properties. With only a small number of structural parameters (elasticities) to be estimated, they lend themselves to transparent identification. Because they rationalize the observed spatial distribution of economic activity within cities, they can be used to undertake counterfactuals for the impact of empirically realistic public-policy interventions on this observed distribution. Empirical applications include estimating the strength of agglomeration economies and evaluating the impact of transport infrastructure improvements (e.g., railroads, roads, bus rapid transit systems), zoning and land use regulations, place-based policies, and new technologies such as remote working…(More)”.
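For readers unfamiliar with this literature, a representative building block of such models is a gravity-style commuting equation with a Fréchet-distributed idiosyncratic term; the notation below is an illustrative sketch rather than an equation taken from the paper:

```latex
% lambda_{ni} = probability a worker lives in location n and commutes to workplace i
% B_n         = residential attractiveness (amenities) of location n
% w_i         = wage paid at workplace i
% kappa_{ni}  = iceberg commuting cost between n and i (>= 1)
% epsilon     = Frechet shape parameter: the key commuting elasticity
\lambda_{ni} \;=\;
  \frac{B_n \,\bigl(w_i / \kappa_{ni}\bigr)^{\varepsilon}}
       {\sum_{r}\sum_{s} B_r \,\bigl(w_s / \kappa_{rs}\bigr)^{\varepsilon}}
```

A single elasticity such as ε governs how sharply commuting flows decline with commuting costs, which is why models of this class can be estimated transparently from a small set of structural parameters.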
Beached Plastic Debris Index: a modern index for detecting plastics on beaches
Paper by Jenna Guffogg et al: “Plastic pollution on shorelines poses a significant threat to coastal ecosystems, underscoring the urgent need for scalable detection methods to facilitate debris removal. In this study, the Beached Plastic Debris Index (BPDI) was developed to detect plastic accumulation on beaches using shortwave infrared spectral features. To validate the BPDI, plastic targets with varying sub-pixel covers were placed on a sand spit and captured in WorldView-3 satellite imagery. The performance of the BPDI was compared with the Normalized Difference Plastic Index (NDPI), the Plastic Index (PI), and two hydrocarbon indices (HI, HC). The BPDI successfully distinguished the plastic targets from sand, water, and vegetation, outperforming the other indices and identifying pixels with <30% plastic cover. The robustness of the BPDI suggests its potential as an effective tool for mapping plastic debris accumulations along coastlines…(More)”.
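The abstract does not reproduce the BPDI formula, so the snippet below only illustrates the general pattern such spectral indices follow: a normalized combination of reflectance bands (here two hypothetical shortwave-infrared bands) thresholded to flag candidate plastic pixels. The band choices, the formula, and the threshold are placeholders, not the published index.

```python
import numpy as np

def normalized_band_index(swir1: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index over two reflectance bands.

    This is NOT the published BPDI formula (which the abstract does not give);
    it only shows how a per-pixel spectral index is computed from
    co-registered band rasters.
    """
    eps = 1e-6  # avoid division by zero over dark pixels
    return (swir1 - swir2) / (swir1 + swir2 + eps)

# Toy 3x3 reflectance rasters standing in for WorldView-3 SWIR bands.
swir1 = np.array([[0.42, 0.10, 0.08], [0.35, 0.09, 0.07], [0.40, 0.11, 0.06]])
swir2 = np.array([[0.20, 0.09, 0.08], [0.18, 0.10, 0.07], [0.22, 0.10, 0.06]])

index = normalized_band_index(swir1, swir2)
candidate_plastic = index > 0.3  # illustrative threshold, not from the paper
print(index.round(2))
print("flagged pixels:", int(candidate_plastic.sum()))
```

The published index is built from specific WorldView-3 SWIR band combinations chosen for plastic absorption features; the value of the sketch is only in showing that the end product is a per-pixel score that can be thresholded and mapped.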
Addressing Data Challenges to Drive the Transformation of Smart Cities
Paper by Ekaterina Gilman et al: “Cities serve as vital hubs of economic activity and of knowledge generation and dissemination. As such, cities bear a significant responsibility to uphold environmental protection measures while promoting the welfare and living comfort of their residents. There are diverse views on the development of smart cities, from integrating Information and Communication Technologies into urban environments for better operational decisions to supporting the sustainability, wealth, and comfort of people. In all these cases, however, data are the key ingredient and enabler of the vision and realization of smart cities. This article explores the challenges associated with smart city data. We start by establishing an understanding of the smart city concept, how to measure whether a city is smart, and what architectures and platforms exist to develop one. We then examine the challenges associated with city data, including availability, heterogeneity, management, analysis, privacy, and security. Finally, we discuss ethical issues. This article aims to serve as a “one-stop shop” covering data-related issues of smart cities, with references for diving deeper into particular topics of interest…(More)”.
Artificial Intelligence, Scientific Discovery, and Product Innovation
Paper by Aidan Toner-Rodgers: “… studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of “idea-generation” tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization…(More)”.
Privacy during pandemics: Attitudes to public use of personal data
Paper by Eleonora Freddi and Ole Christian Wasenden: “In this paper we investigate people’s attitudes to privacy and sharing of personal data when used to help society combat a contagious disease, such as COVID-19. Through a two-wave survey, we investigate the role of personal characteristics and the effect of information in shaping privacy attitudes. By conducting the survey in Norway and Sweden, which adopted very different strategies to handle the COVID-19 pandemic, we analyze potential differences in privacy attitudes due to policy changes. We find that privacy concern is negatively correlated with allowing public use of personal data. Trust in the entity collecting data and collectivist preferences are positively correlated with this type of data usage. Providing more information about the public benefit of sharing personal data makes respondents more positive toward the use of their data, while providing additional information about the costs associated with data sharing does not change attitudes. The analysis suggests that stating a clear purpose and benefit for the data collection makes respondents more positive about sharing. Despite very different policy approaches, we do not find any major differences in privacy attitudes between Norway and Sweden. Findings are also similar between the two survey waves, suggesting a minor role for contextual changes…(More)”
The need for climate data stewardship: 10 tensions and reflections regarding climate data governance
Paper by Stefaan Verhulst: “Datafication—the increase in data generation and advancements in data analysis—offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing data in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates 10 core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for the public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues…(More)”.
Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data
Paper by Andrey Bogomolov, Bruno Lepri, Jacopo Staiano, Nuria Oliver, Fabio Pianesi, and Alex Pentland: “In this paper, we present a novel approach to predicting crime in a geographic space from multiple data sources, in particular mobile phone and demographic data. The main contribution of the proposed approach lies in using aggregated and anonymized human behavioral data derived from mobile network activity to tackle the crime prediction problem. While previous research efforts have used either background historical knowledge or offenders’ profiling, our findings support the hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime. In our experimental results with real crime data from London, we obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not. Moreover, we provide a discussion of the implications of our findings for data-driven crime analysis…(More)”.
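As an illustration of the prediction task described above (a binary "hotspot or not" label per area, learned from aggregated mobile-activity and demographic features), here is a minimal scikit-learn sketch on synthetic data. The feature names, the synthetic labels, and the random-forest choice are assumptions for illustration, not the authors' pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_areas = 1000

# Synthetic per-area features standing in for aggregated, anonymized
# mobile-network activity and basic demographics (illustrative only).
X = np.column_stack([
    rng.poisson(50, n_areas),    # e.g. volume of mobile-network events
    rng.normal(0, 1, n_areas),   # e.g. daytime/nighttime activity ratio
    rng.uniform(0, 1, n_areas),  # e.g. a demographic share
])
# Synthetic binary "hotspot" label loosely tied to the features.
y = (0.02 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n_areas) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```

The point of the sketch is the shape of the task (area-level features in, a binary hotspot label out, accuracy measured on held-out areas), not the specific model or the accuracy figure reported in the paper.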