Paper by Leonie Rebecca Freise et al: “The rapid evolution of the software development industry challenges developers to manage their diverse tasks effectively. Traditional assistant tools in software development often fall short of supporting developers efficiently. This paper explores how generative artificial intelligence (GAI) tools, such as GitHub Copilot or ChatGPT, facilitate job crafting—a process where employees reshape their jobs to meet evolving demands. By integrating GAI tools into workflows, software developers can focus more on creative problem-solving, enhancing job satisfaction, and fostering a more innovative work environment. This study investigates how GAI tools influence task, cognitive, and relational job crafting behaviors among software developers, examining their implications for professional growth and adaptability within the industry. The paper provides insights into the transformative impacts of GAI tools on software development job crafting practices, emphasizing their role in enabling developers to redefine their job functions…(More)”.
How to evaluate statistical claims
Blog by Sean Trott: “…The goal of this post is to distill what I take to be the most important, immediately applicable, and generalizable insights from these classes. That means that readers should be able to apply those insights without a background in math or knowing how to, say, build a linear model in R. In that way, it’ll be similar to my previous post about “useful cognitive lenses to see through”, but with a greater focus on evaluating claims specifically.
Lesson #1: Consider the whole distribution, not just the central tendency.
If you spend much time reading news articles or social media posts, the odds are good you’ll encounter some descriptive statistics: numbers summarizing or describing a distribution (a set of numbers or values in a dataset). One of the most commonly used descriptive statistics is the arithmetic mean: the sum of every value in a distribution, divided by the number of values overall. The arithmetic mean is a measure of “central tendency”, which just means it’s a way to characterize the typical or expected value in that distribution.
The arithmetic mean is a really useful measure. But as many readers might already know, it’s not perfect. It’s strongly affected by outliers—values that are really different from the rest of the distribution—and things like the skew of a distribution (see the image below for examples of skewed distributions).
In particular, the mean is pulled in the direction of outliers or distribution skew. That’s the logic behind the joke about the average salary of people at a bar jumping up as soon as a billionaire walks in. It’s also why other measures of central tendency, such as the median, are often presented alongside (or instead of) the mean—especially for distributions that happen to be very skewed, such as income or wealth.
It’s not that one of these measures is more “correct”. As Stephen Jay Gould wrote in his article The Median Is Not the Message, they’re just different perspectives on the same distribution:
A politician in power might say with pride, “The mean income of our citizens is $15,000 per year.” The leader of the opposition might retort, “But half our citizens make less than $10,000 per year.” Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean, but can balance only one mendicant in calculating a median.)…(More)”.
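To make that pull concrete, here is a minimal sketch (not from the post) of the bar-salary joke in Python:

```python
import statistics

# Nine made-up bar patrons with ordinary salaries; then a billionaire walks in.
salaries = [42_000, 48_000, 51_000, 55_000, 58_000, 60_000, 63_000, 70_000, 75_000]

print(statistics.mean(salaries))    # 58,000: mean and median are close
print(statistics.median(salaries))  # 58,000

salaries.append(1_000_000_000)      # one extreme outlier

print(statistics.mean(salaries))    # ~100,052,200: pulled far toward the outlier
print(statistics.median(salaries))  # 59,000: barely moves
```

One extreme value moves the mean by several orders of magnitude while the median barely shifts, which is why the two are worth reporting together for skewed distributions.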
AI Analysis of Body Camera Videos Offers a Data-Driven Approach to Police Reform
Article by Ingrid Wickelgren: “…But unless something tragic happens, body camera footage generally goes unseen. “We spend so much money collecting and storing this data, but it’s almost never used for anything,” says Benjamin Graham, a political scientist at the University of Southern California.
Graham is among a small number of scientists who are reimagining this footage as data rather than just evidence. Their work leverages advances in natural language processing, which relies on artificial intelligence, to automate the analysis of video transcripts of citizen-police interactions. The findings have enabled police departments to spot policing problems, find ways to fix them and determine whether the fixes improve behavior.
Only a small number of police agencies have opened their databases to researchers so far. But if this footage were analyzed routinely, it would be a “real game changer,” says Jennifer Eberhardt, a Stanford University psychologist, who pioneered this line of research. “We can see beat-by-beat, moment-by-moment how an interaction unfolds.”
In papers published over the past seven years, Eberhardt and her colleagues have examined body camera footage to reveal how police speak to white and Black people differently and what type of talk is likely to either gain a person’s trust or portend an undesirable outcome, such as handcuffing or arrest. The findings have refined and enhanced police training. In a study published in PNAS Nexus in September, the researchers showed that the new training changed officers’ behavior…(More)”.
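As a toy illustration of treating footage transcripts as data (this is not the researchers’ actual pipeline; the word lists below are made up), each officer utterance can be converted into simple linguistic features that are then aggregated across stops:

```python
# Toy sketch only: turn each officer utterance in a transcript into simple
# linguistic features that can be compared across stops. The marker lists are
# placeholders, not the features used in the published studies.

RESPECT_MARKERS = {"sir", "ma'am", "please", "thank", "sorry"}
COMMAND_MARKERS = {"hands", "stop", "don't", "now", "turn"}

def score_utterance(utterance: str) -> dict:
    words = utterance.lower().split()
    return {
        "respect": sum(w.strip(".,!?") in RESPECT_MARKERS for w in words),
        "command": sum(w.strip(".,!?") in COMMAND_MARKERS for w in words),
        "length": len(words),
    }

transcript = [
    "Good afternoon, sir, the reason I stopped you is your taillight.",
    "Keep your hands on the wheel and don't move.",
]

for turn in transcript:
    print(score_utterance(turn))
```

The published studies use far richer computational-linguistic models of respect and escalation; the point of the sketch is only that transcripts become rows of comparable numbers.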
Access, Signal, Action: Data Stewardship Lessons from Valencia’s Floods
Article by Marta Poblet, Stefaan Verhulst, and Anna Colom: “Valencia has a rich history in water management, a legacy shaped by both triumphs and tragedies. This connection to water is embedded in the city’s identity, yet modern floods test its resilience in new ways.
During the recent floods, Valencians experienced a troubling paradox. In today’s connected world, digital information flows through traditional and social media, weather apps, and government alert systems designed to warn us of danger and guide rapid responses. Despite this abundance of data, a tragedy unfolded last month in Valencia. This raises a crucial question: how can we ensure access to the right data, filter it for critical signals, and transform those signals into timely, effective action?
Data stewardship becomes essential in this process.
In particular, the devastating floods in Valencia underscore the importance of:
- having access to data to strengthen the signal (first mile challenges)
- separating signal from noise
- translating signal into action (last mile challenges)…(More)”.
Quantitative Urban Economics
Paper by Stephen J. Redding: “This paper reviews recent quantitative urban models. These models are sufficiently rich to capture observed features of the data, such as many asymmetric locations and a rich geography of the transport network. Yet these models remain sufficiently tractable as to permit an analytical characterization of their theoretical properties. With only a small number of structural parameters (elasticities) to be estimated, they lend themselves to transparent identification. As they rationalize the observed spatial distribution of economic activity within cities, they can be used to undertake counterfactuals for the impact of empirically realistic public-policy interventions on this observed distribution. Empirical applications include estimating the strength of agglomeration economies and evaluating the impact of transport infrastructure improvements (e.g., railroads, roads, bus rapid transit systems), zoning and land use regulations, place-based policies, and new technologies such as remote working…(More)”.
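For readers new to this literature, one canonical building block of such models is a gravity equation for commuting flows; the stylized form below is illustrative notation, not reproduced from the paper:

```latex
% Stylized commuting gravity equation (illustrative; not taken from the paper).
% Probability that a worker chooses residence i and workplace j:
\[
\pi_{ij}
  = \frac{\left( B_i \, w_j / \kappa_{ij} \right)^{\varepsilon}}
         {\sum_{r}\sum_{s} \left( B_r \, w_s / \kappa_{rs} \right)^{\varepsilon}}
\]
```

Here B_i captures residential amenities, w_j workplace wages, κ_ij commuting costs, and ε the commuting elasticity, the kind of structural parameter the abstract refers to.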
Operational Learning
International Red Cross: “Operational learning in emergencies refers to the lessons learned from managing and responding to crises, such as refining protocols for resource allocation, decision-making, and communication strategies. The summaries are generated using AI and Large Language Models, based on data from Final DREF Reports, Emergency Appeal reports, and other sources…(More)”
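The excerpt does not describe the platform’s actual pipeline; as a rough sketch of the general approach (the model name, prompt, and report text below are placeholders), a report excerpt could be summarized with a hosted LLM along these lines:

```python
# Rough illustration only: the platform's actual models and prompts are not
# described in the excerpt above. Model name, prompt, and text are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report_excerpt = (
    "Final DREF report excerpt goes here: response timeline, resources deployed, "
    "coordination with local branches, and challenges encountered."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize operational lessons learned from emergency response reports."},
        {"role": "user", "content": report_excerpt},
    ],
)

print(response.choices[0].message.content)
```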
Beached Plastic Debris Index; a modern index for detecting plastics on beaches
Paper by Jenna Guffogg et al: “Plastic pollution on shorelines poses a significant threat to coastal ecosystems, underscoring the urgent need for scalable detection methods to facilitate debris removal. In this study, the Beached Plastic Debris Index (BPDI) was developed to detect plastic accumulation on beaches using shortwave infrared spectral features. To validate the BPDI, plastic targets with varying sub-pixel covers were placed on a sand spit and captured using WorldView-3 satellite imagery. The performance of the BPDI was analysed in comparison with the Normalized Difference Plastic Index (NDPI), the Plastic Index (PI), and two hydrocarbon indices (HI, HC). The BPDI successfully detected the plastic targets from sand, water, and vegetation, outperforming the other indices and identifying pixels with <30 % plastic cover. The robustness of the BPDI suggests its potential as an effective tool for mapping plastic debris accumulations along coastlines…(More)”.
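The abstract does not reproduce the BPDI formula itself; as a generic sketch of how such spectral indices are applied per pixel (the band choices and threshold below are placeholders, not the published BPDI), one might compute a normalized-difference index over two shortwave-infrared bands and threshold it:

```python
import numpy as np

# Illustrative only: the published BPDI uses specific SWIR spectral features;
# the two bands and the threshold below are placeholders, not the paper's values.
swir_a = np.array([[0.31, 0.12], [0.28, 0.10]])  # reflectance in one SWIR band
swir_b = np.array([[0.22, 0.11], [0.20, 0.09]])  # reflectance in another SWIR band

index = (swir_a - swir_b) / (swir_a + swir_b)    # normalized-difference form
plastic_mask = index > 0.15                      # flag likely debris pixels

print(index)
print(plastic_mask)
```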
The history of AI and power in government
Book chapter by Shirley Kempeneer: “…begins by examining the simultaneous development of statistics and the state. Drawing on the works of notable scholars like Alain Desrosières, Theodore Porter, James Scott, and Michel Foucault, the chapter explores measurement as a product of modernity. It discusses the politics and power of (large) numbers, through their ability to make societies legible and controllable, also in the context of colonialism. The chapter then discusses the shift from data to big data and how AI and the state, just like statistics and the state, are mutually constitutive. It zooms in on shifting power relations, discussing the militarization of society, the outsourcing of the state to tech contractors, the exploitation of human bodies under the guise of ‘automation’, and the oppression of vulnerable citizens. Where news media often focus on the power of AI that is supposedly escaping our control, this chapter relocates power in AI systems themselves, building on the work of Kate Crawford, Bruno Latour, and Emily Bender…(More)”
Addressing Data Challenges to Drive the Transformation of Smart Cities
Paper by Ekaterina Gilman et al: “Cities serve as vital hubs of economic activity and knowledge generation and dissemination. As such, cities bear a significant responsibility to uphold environmental protection measures while promoting the welfare and living comfort of their residents. There are diverse views on the development of smart cities, from integrating Information and Communication Technologies into urban environments for better operational decisions to supporting sustainability, wealth, and comfort of people. However, for all these cases, data are the key ingredient and enabler for the vision and realization of smart cities. This article explores the challenges associated with smart city data. We start by gaining an understanding of the concept of a smart city, how to measure whether a city is a smart one, and what architectures and platforms exist to develop one. Afterwards, we examine the challenges associated with city data, including availability, heterogeneity, management, analysis, privacy, and security. Finally, we discuss ethical issues. This article aims to serve as a “one-stop shop” covering data-related issues of smart cities with references for diving deeper into particular topics of interest…(More)”.
Artificial Intelligence, Scientific Discovery, and Product Innovation
Paper by Aidan Toner-Rodgers: “… studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of “idea-generation” tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization…(More)”.