Understanding local government responsible AI strategy: An international municipal policy document analysis


Paper by Anne David et al.: “The burgeoning capabilities of artificial intelligence (AI) have prompted numerous local governments worldwide to consider its integration into their operations. Nevertheless, instances of notable AI failures have heightened ethical concerns, emphasising the imperative for local governments to approach the adoption of AI technologies in a responsible manner. While local government AI guidelines endeavour to incorporate characteristics of responsible innovation and technology (RIT), it remains essential to assess the extent to which these characteristics have been integrated into policy guidelines to facilitate more effective AI governance in the future. This study closely examines local government policy documents (n = 26) through the lens of RIT, employing directed content analysis with thematic data analysis software. The results reveal that: (a) Not all RIT characteristics have been given equal consideration in these policy documents; (b) Participatory and deliberate considerations were the most frequently mentioned responsible AI characteristics in policy documents; (c) Adaptable, explainable, sustainable, and accountable considerations were the least present responsible AI characteristics in policy documents; (d) Many of the considerations overlapped with each other as local governments were at the early stages of identifying them. Furthermore, the paper summarised strategies aimed at assisting local authorities in identifying their strengths and weaknesses in responsible AI characteristics, thereby facilitating their transformation into governing entities with responsible AI practices. The study informs local government policymakers, practitioners, and researchers on the critical aspects of responsible AI policymaking…(More)” See also: AI Localism

City Tech


Book by Rob Walker: “The world is rapidly urbanizing, and experts predict that up to 80 percent of the population will live in cities by 2050. To accommodate that growth while ensuring quality of life for all residents, cities are increasingly turning to technology. From apps that make it easier for citizens to pitch in on civic improvement projects to comprehensive plans for smarter streets and neighborhoods, new tools and approaches are taking root across the United States and around the world. In this thoughtful, inquisitive collection, Rob Walker—former New York Times columnist and author of the City Tech column for Land Lines magazine—investigates the new technologies afoot and their implications for planners, policymakers, residents, and the virtual and literal landscapes of the cities we call home…(More)”.

AI helped Uncle Sam catch $1 billion of fraud in one year. And it’s just getting started


Article by Matt Egan: “The federal government’s bet on using artificial intelligence to fight financial crime appears to be paying off.

Machine learning AI helped the US Treasury Department to sift through massive amounts of data and recover $1 billion worth of check fraud in fiscal 2024 alone, according to new estimates shared first with CNN. That’s nearly triple what the Treasury recovered in the prior fiscal year.

“It’s really been transformative,” Renata Miskell, a top Treasury official, told CNN in a phone interview.

“Leveraging data has upped our game in fraud detection and prevention,” Miskell said.

The Treasury Department credited AI with helping officials prevent and recover more than $4 billion worth of fraud overall in fiscal 2024, a six-fold spike from the year before.

US officials quietly started using AI to detect financial crime in late 2022, taking a page out of what many banks and credit card companies already do to stop bad guys.

The goal is to protect taxpayer money against fraud, which spiked during the Covid-19 pandemic as the federal government scrambled to disburse emergency aid to consumers and businesses.

To be sure, Treasury is not using generative AI, the kind that has captivated users of OpenAI’s ChatGPT and Google’s Gemini by generating images, crafting song lyrics and answering complex questions (even though it still sometimes struggles with simple queries)…(More)”.
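That distinction is worth making concrete: the approach described here is classic machine learning that scores payments, not generative AI that produces text. Treasury has not disclosed how its system works, so the sketch below is purely illustrative; the features, the numbers, and the choice of an isolation forest are our assumptions, not Treasury's actual pipeline.

```python
# Illustrative sketch only: Treasury has not published its methods.
# This mimics the generic anomaly-detection approach banks use for
# check fraud; all fields and figures are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per check: dollar amount, payee account age
# (days), and checks cashed from that account in the past week.
normal_checks = np.column_stack([
    rng.lognormal(mean=6.0, sigma=1.0, size=1000),  # typical amounts
    rng.integers(365, 3650, size=1000),             # established accounts
    rng.poisson(2, size=1000),                      # low cashing velocity
])
suspicious_check = np.array([[95_000, 3, 40]])      # large, new, fast

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_checks)

# predict() returns -1 for outliers and 1 for inliers; a -1 would route
# the check to a human investigator rather than block it outright.
print(model.predict(suspicious_check))
```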

Statistical Significance—and Why It Matters for Parenting


Blog by Emily Oster: “…When we say an effect is “statistically significant at the 5% level,” what this means is that there is less than a 5% chance that we’d see an effect of this size if the true effect were zero. (The “5% level” is a common cutoff, but things can be significant at the 1% or 10% level also.) 
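In textbook notation (ours, not the blog's), the rule Oster describes is to compute a p-value, the probability under a zero-effect null hypothesis of seeing a result at least as extreme as the one observed, and to compare it against the chosen cutoff:

```latex
p \;=\; \Pr\!\bigl(\,|T| \ge |t_{\mathrm{obs}}| \,\bigm|\, H_0:\ \text{true effect} = 0 \bigr),
\qquad
\text{significant at level } \alpha \ \Longleftrightarrow\ p < \alpha
\quad (\text{e.g. } \alpha = 0.05).
```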

The natural follow-up question is: Why would any effect we see occur by chance? The answer lies in the fact that data is “noisy”: it comes with error. To see this a bit more, we can think about what would happen if we studied a setting where we know our true effect is zero. 

My fake study 

Imagine the following (fake) study. Participants are randomly assigned to eat a package of either blue or green M&Ms, and then they flip a (fair) coin and you see if it is heads. Your analysis will compare the number of heads that people flip after eating blue versus green M&Ms and report whether this is “statistically significant at the 5% level.”…(More)”.
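Oster's setup can be run directly: simulate the study many times, and pure noise produces a "significant" difference in roughly 5% of runs. The sketch below uses a made-up per-group sample size (the blog specifies none) and Fisher's exact test as the significance check.

```python
# Simulate the fake M&M study: the coin is fair in both groups, so the
# true effect is zero and any "significant" result is pure noise.
# The per-group sample size is our assumption; the post names none.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_per_group, n_trials, n_significant = 50, 10_000, 0

for _ in range(n_trials):
    heads_blue = rng.binomial(n_per_group, 0.5)    # blue M&M group
    heads_green = rng.binomial(n_per_group, 0.5)   # green M&M group
    table = [[heads_blue, n_per_group - heads_blue],
             [heads_green, n_per_group - heads_green]]
    p_value = fisher_exact(table)[1]
    if p_value < 0.05:                             # "significant at 5%"
        n_significant += 1

# Expect roughly 5% false positives (slightly fewer here, because
# Fisher's exact test is conservative with discrete data).
print(f"false positive rate: {n_significant / n_trials:.3f}")
```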

Emerging technologies in the humanitarian sector


Report and project by RAND: “Emerging technologies have often been explored in the humanitarian sector through small-scale pilot projects, testing their application in a specific context with limited opportunities to replicate the testing across various contexts. The level of familiarity and knowledge of technological development varies across the specific types of humanitarian activities undertaken and technology areas considered.

The study team identified five promising technology areas for the humanitarian sector that could be further explored out to 2030:

  • Advanced manufacturing systems are likely to offer humanitarians opportunities to produce resources and tools in an operating environment characterised by scarcity, the rise of simultaneous crises, and exposure to more intense and severe climate events.
  • Early Warning Systems are likely to support preparedness and response efforts across the humanitarian sector at a time when multifactorial crises are increasingly likely to arise.
  • Camp monitoring systems are likely to support efforts not only to address security risks, but also to support the planning and management of sites and the health and wellbeing of displaced populations.
  • Coordination platforms are likely to enhance data collection and information-sharing across various humanitarian stakeholders for the development of timely and bespoke crisis response.
  • Privacy-enhancing technologies (PETs) can support ongoing efforts to comply with increased data privacy and data protection requirements in a humanitarian operating environment in which data collection will remain necessary (a minimal illustration of one such technique follows this list).
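The report treats PETs at the level of policy rather than technique, but one widely used example, differential privacy, gives a flavour of what such technologies do. The sketch below is our illustration with invented numbers, not anything the report prescribes: it releases an aggregate count with calibrated noise so that no single individual's presence can be inferred from the published figure.

```python
# Illustrative sketch of one PET: differential privacy via the Laplace
# mechanism. The report does not prescribe specific techniques; the
# scenario, count, and epsilon below are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise; a counting query has
    sensitivity 1, so the noise scale is 1 / epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# E.g., sharing how many people at a displacement site requested
# medical care, without exposing any individual record.
print(dp_count(true_count=132, epsilon=0.5))
```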

Beyond these five technology areas, the study team also considered three innovation journey opportunities:

  • The establishment of a technology horizon scanning coalition
  • Visioning for emerging technologies in crisis recovery
  • An emerging technology narrative initiative.

To accompany the deployment of specific technologies in the humanitarian sector, the study team also developed a four-step approach aimed at identifying specific guidance needs for end-users and humanitarian practitioners…(More)”.

External Researcher Access to Closed Foundation Models


Report by Esme Harrington and Dr. Mathias Vermeulen: “…addresses a pressing issue: independent researchers need better conditions for accessing and studying the AI models that big companies have developed. Foundation models — the core technology behind many AI applications — are controlled mainly by a few major players who decide who can study or use them.

What’s the problem with access?

  • Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, which means independent, public-interest research can be left out in the cold.
  • High costs: Even when access is granted, it often comes with a hefty price tag that smaller or less-funded teams can’t afford.
  • Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
  • Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work uncovers flaws or vulnerabilities in the AI systems.

The research suggests that companies need to offer more affordable and transparent access to improve AI research. Additionally, governments should provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks…(More)”.

Tech Agnostic


Book by Greg Epstein: “…Today’s technology has overtaken religion as the chief influence on twenty-first century life and community. In Tech Agnostic, Harvard and MIT’s influential humanist chaplain Greg Epstein explores what it means to be a critical thinker with respect to this new faith. Encouraging readers to reassert their common humanity beyond the seductive sheen of “tech,” this book argues for tech agnosticism—not worship—as a way of life. Without suggesting we return to a mythical pre-tech past, Epstein shows why we must maintain a freethinking critical perspective toward innovation until it proves itself worthy of our faith or not.

Epstein asks probing questions that center humanity at the heart of engineering: Who profits from an uncritical faith in technology? How can we remedy technology’s problems while retaining its benefits? Showing how unbelief has always served humanity, Epstein revisits the historical apostates, skeptics, mystics, Cassandras, heretics, and whistleblowers who embody the tech reformation we desperately need. He argues that we must learn how to collectively demand that technology serve our pursuit of human lives that are deeply worth living…(More)”.

The Number


Article by John Lanchester: “…The other pieces published in this series have human protagonists. This one doesn’t: The main character of this piece is not a person but a number. Like all the facts and numbers cited above, it comes from the federal government. It’s a very important number, which has for a century described economic reality, shaped political debate and determined the fate of presidents: the consumer price index.

The CPI is crucial for multiple reasons, one of which is not what it is but what it represents. The gathering of data exemplifies our ambition for a stable, coherent society. The United States is an Enlightenment project based on the supremacy of reason; on the idea that things can be empirically tested; that there are self-evident truths; that liberty, progress and constitutional government walk arm in arm and together form the recipe for the ideal state. Statistics — numbers created by the state to help it understand itself and ultimately to govern itself — are not some side effect of that project but a central part of what government is and does…(More)”.
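For readers who want the arithmetic behind the protagonist: in its simplest textbook form the CPI is a Laspeyres index, the cost of a fixed base-period basket of goods at current prices relative to its cost at base-period prices (the Bureau of Labor Statistics' actual methodology layers sampling, weighting, and quality adjustments on top of this simplification):

```latex
\mathrm{CPI}_t \;=\; 100 \times \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}}
```

where \(p_{i,t}\) is the price of item \(i\) in period \(t\) and \(q_{i,0}\) is its quantity in the base-period basket.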

Key lesson of this year’s Nobel Prize: The importance of unlocking data responsibly to advance science and improve people’s lives


Article by Stefaan Verhulst, Anna Colom, and Marta Poblet: “This year’s Nobel Prize for Chemistry owes a lot to available, standardised, high quality data that can be reused to improve people’s lives. The winners, Prof David Baker from the University of Washington, and Demis Hassabis and John M. Jumper from Google DeepMind, were awarded, respectively, for the computational design of new proteins and for the prediction of protein structures, work that can have important medical applications. These developments build on AI models that can predict protein structures in unprecedented ways. However, key to these models and their potential to unlock health discoveries is an open curated dataset with high quality and standardised data, something still rare despite the pace and scale of AI-driven development.

We live in a paradoxical time of both data abundance and data scarcity: a lot of data is being created and stored, but it tends to be inaccessible due to private interests and weak regulations. The challenge, then, is to prevent the misuse of data whilst avoiding its missed use.

The reuse of data remains limited in Europe, but a new set of regulations seeks to increase the possibilities of responsible data reuse. When the European Commission made the case for its European Data Strategy in 2020, it envisaged the European Union as “a role model for a society empowered by data to make better decisions — in business and the public sector,” and acknowledged the need to improve “governance structures for handling data and to increase its pools of quality data available for use and reuse”…(More)”.

It is about time! Exploring the clashing timeframes of politics and public policy experiments


Paper by Ringa Raudla, Külli Sarapuu, Johanna Vallistu, and Nastassia Harbuzova: “Although existing studies on experimental policymaking have acknowledged the importance of the political setting in which policy experiments take place, we lack systematic knowledge on how various political dimensions affect experimental policymaking. In this article, we address a specific gap in the existing understanding of the politics of experimentation: how political timeframes influence experimental policymaking. Drawing on theoretical discussions on experimental policymaking, public policy, electoral politics, and mediatization of politics, we outline expectations about how electoral and problem cycles may influence the timing and design of, and learning from, policy experiments. We argue that electoral timeframes are likely to discourage politicians from undertaking large-scale policy experiments and that, if politicians do decide to launch experiments, they will prefer shorter designs. The electoral cycle may lead politicians to draw too hasty conclusions or ignore the experiment’s results altogether. We expect problem cycles to shorten politicians’ time horizons further as there is pressure to solve problems quickly. We probe the plausibility of our theoretical expectations using interview data from two different country contexts: Estonia and Finland…(More)”.