Lifecycles, pipelines, and value chains: toward a focus on events in responsible artificial intelligence for health


Paper by Joseph Donia et al: “Process-oriented approaches to the responsible development, implementation, and oversight of artificial intelligence (AI) systems have proliferated in recent years. Variously referred to as lifecycles, pipelines, or value chains, these approaches demonstrate a common focus on systematically mapping key activities and normative considerations throughout the development and use of AI systems. At the same time, these approaches risk focusing on proximal activities of development and use at the expense of a focus on the events and value conflicts that shape how key decisions are made in practice. In this article we report on the results of an ‘embedded’ ethics research study focused on SPOTT, a ‘Smart Physiotherapy Tracking Technology’ employing AI and undergoing development and commercialization at an academic health sciences centre. Through interviews and focus groups with the development and commercialization team, patients, and policy and ethics experts, we suggest that a more expansive design and development lifecycle shaped by key events offers a more robust approach to normative analysis of digital health technologies, especially where those technologies’ actual uses are underspecified or in flux. We introduce five of these key events, outlining their implications for responsible design and governance of AI for health, and present a set of critical questions intended for others doing applied ethics and policy work. We briefly conclude with a reflection on the value of this approach for engaging with health AI ecosystems more broadly…(More)”.

A shared destiny for public sector data


Blog post by Shona Nicol: “As a data professional, it can sometimes feel hard to get others interested in data. Perhaps like many in this profession, I can often express the importance and value of data for good in an overly technical way. However, when our biggest challenges in Scotland include eradicating child poverty, growing the economy and tackling the climate emergency, I would argue that we should all take an interest in data because it’s going to be foundational in helping us solve these problems.

Data is already intrinsic to shaping our society and how services are delivered. And public sector data is a vital component in making sure that services for the people of Scotland are being delivered efficiently and effectively. Despite an ever-growing awareness of the transformative power of data to improve the design and delivery of services, feedback from public sector staff shows that they can face difficulties when trying to influence colleagues and senior leaders around the need to invest in data.

A vision gap

In the Scottish Government’s data maturity programme and more widely, we regularly hear about the challenges data professionals encounter when trying to enact change. This community tells us that a long-term vision for public sector data for Scotland could help them by providing the context for what they are trying to achieve locally.

Earlier this year we started to scope how we might do this. We recognised that organisations are already working to deliver local and national strategies and policies that relate to data, so any vision had to be able to sit alongside those, be meaningful in different settings, agnostic of technology and relevant to any public sector organisation. We wanted to offer opportunities for alignment, not enforce an instruction manual…(More)”.

AI in the Public Service: Here for Good


Special Issue of Ethos: “…For the public good, we want AI to help unlock and drive transformative impact, in areas where there is significant potential for breakthroughs, such as cancer research, material sciences or climate change. But we also want to raise the level of generalised adoption. For the user base in the public sector, we want to learn how best to use this new tool in ways that can allow us to not only do things better, but do better things.

This is not to suggest that AI is always the best solution: it is one of many tools in the digital toolkit. Sometimes, simpler computational methods will suffice. That said, AI represents new, untapped potential for the Public Service to enhance our daily work and deliver better outcomes that ultimately benefit Singapore and Singaporeans….

To promote general adoption, we made available AI tools, such as Pair, SmartCompose, and AIBots. They are useful to a wide range of public officers for many general tasks. Other common tools of this nature may include chatbots to support customer-facing and service delivery needs, translation, summarisation, and so on. Much of what public officers do involves words and language, which is an area that LLM-based AI technology can now help with.
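
As an editorial aside (not from the Ethos essay): a minimal sketch of the kind of LLM-backed summarisation task described above, assuming Python. The `complete` function is a hypothetical placeholder for an agency's approved LLM endpoint; it is not the real interface of Pair, SmartCompose, or AIBots.

```python
# Illustrative sketch only: drafting a summary with a generic LLM.
# `complete` is a hypothetical placeholder, not the actual API of
# Pair, SmartCompose, or AIBots.

def complete(prompt: str) -> str:
    """Hypothetical LLM call; wire up the real, approved client here."""
    raise NotImplementedError

def summarise_for_briefing(document: str, max_points: int = 5) -> str:
    # Constraining the output format keeps responses easy for an
    # officer to review and edit before use.
    prompt = (
        f"Summarise the following document in at most {max_points} bullet "
        "points for an internal briefing. Stick to facts stated in the "
        "document; do not add new information.\n\n"
        f"{document}"
    )
    return complete(prompt)
```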

Beyond improving the productivity of the Public Service, the real value lies in AI’s broader ability to transform our business and operating models to deliver greater impact. In driving adoption, we want to encourage public officers to experiment with different approaches to figure out where we can create new value by doing things differently, rather than just settle for incremental value from doing things the same old ways using new tools.

For example, we have seen how AI and automation have transformed language translation, software engineering, identity verification and border clearance. This is just the beginning and much more is possible in many other domains…(More)”.

Understanding local government responsible AI strategy: An international municipal policy document analysis


Paper by Anne David et al: “The burgeoning capabilities of artificial intelligence (AI) have prompted numerous local governments worldwide to consider its integration into their operations. Nevertheless, instances of notable AI failures have heightened ethical concerns, emphasising the imperative for local governments to approach the adoption of AI technologies in a responsible manner. While local government AI guidelines endeavour to incorporate characteristics of responsible innovation and technology (RIT), it remains essential to assess the extent to which these characteristics have been integrated into policy guidelines to facilitate more effective AI governance in the future. This study closely examines local government policy documents (n = 26) through the lens of RIT, employing directed content analysis with thematic data analysis software. The results reveal that: (a) Not all RIT characteristics have been given equal consideration in these policy documents; (b) Participatory and deliberate considerations were the most frequently mentioned responsible AI characteristics in policy documents; (c) Adaptable, explainable, sustainable, and accountable considerations were the least present responsible AI characteristics in policy documents; (d) Many of the considerations overlapped with each other as local governments were at the early stages of identifying them. Furthermore, the paper summarised strategies aimed at assisting local authorities in identifying their strengths and weaknesses in responsible AI characteristics, thereby facilitating their transformation into governing entities with responsible AI practices. The study informs local government policymakers, practitioners, and researchers on the critical aspects of responsible AI policymaking…(More)” See also: AI Localism
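
As a toy editorial illustration (not the paper's method or data), the kind of tally behind findings (b) and (c), counting how often each responsible-AI characteristic is coded across policy documents, could look like the Python sketch below; the documents and codes are invented.

```python
# Toy tally of coded RIT characteristics across policy documents.
# The documents and codes below are invented for illustration; the
# paper itself applied directed content analysis to 26 real documents.
from collections import Counter

rit_codes_by_document = {
    "city_a_ai_policy": ["participatory", "deliberate", "accountable"],
    "city_b_ai_guideline": ["participatory", "explainable"],
    "city_c_ai_strategy": ["deliberate", "participatory", "sustainable"],
}

tally = Counter(
    code for codes in rit_codes_by_document.values() for code in codes
)
for characteristic, mentions in tally.most_common():
    print(f"{characteristic}: coded in {mentions} document(s)")
```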

City Tech


Book by Rob Walker: “The world is rapidly urbanizing, and experts predict that up to 80 percent of the population will live in cities by 2050. To accommodate that growth while ensuring quality of life for all residents, cities are increasingly turning to technology. From apps that make it easier for citizens to pitch in on civic improvement projects to comprehensive plans for smarter streets and neighborhoods, new tools and approaches are taking root across the United States and around the world. In this thoughtful, inquisitive collection, Rob Walker—former New York Times columnist and author of the City Tech column for Land Lines magazine—investigates the new technologies afoot and their implications for planners, policymakers, residents, and the virtual and literal landscapes of the cities we call home…(More)”

AI helped Uncle Sam catch $1 billion of fraud in one year. And it’s just getting started


Article by Matt Egan: “The federal government’s bet on using artificial intelligence to fight financial crime appears to be paying off.

Machine learning AI helped the US Treasury Department to sift through massive amounts of data and recover $1 billion worth of check fraud in fiscal 2024 alone, according to new estimates shared first with CNN. That’s nearly triple what the Treasury recovered in the prior fiscal year.

“It’s really been transformative,” Renata Miskell, a top Treasury official, told CNN in a phone interview.

“Leveraging data has upped our game in fraud detection and prevention,” Miskell said.

The Treasury Department credited AI with helping officials prevent and recover more than $4 billion worth of fraud overall in fiscal 2024, a six-fold spike from the year before.

US officials quietly started using AI to detect financial crime in late 2022, taking a page out of what many banks and credit card companies already do to stop bad guys.

The goal is to protect taxpayer money against fraud, which spiked during the Covid-19 pandemic as the federal government scrambled to disburse emergency aid to consumers and businesses.

To be sure, Treasury is not using generative AI, the kind that has captivated users of OpenAI’s ChatGPT and Google’s Gemini by generating images, crafting song lyrics and answering complex questions (even though it still sometimes struggles with simple queries)…(More)”.
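
The article stays non-technical, but the general technique it describes, machine learning that flags anomalous transactions for human review, can be sketched. The following is an assumption-laden illustration using scikit-learn's IsolationForest on invented check features; it is not Treasury's actual system.

```python
# A minimal sketch of ML-style check-fraud screening, assuming scikit-learn.
# The features and numbers are invented for illustration; this is NOT
# Treasury's system, just the general anomaly-detection idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per check: amount ($), payee history (months),
# account age (days). Most checks look ordinary...
normal = rng.normal(loc=[500, 40, 900], scale=[200, 10, 300], size=(5000, 3))
# ...while a handful look unusual (huge amounts on brand-new accounts).
odd = rng.normal(loc=[9000, 1, 5], scale=[1000, 1, 3], size=(25, 3))
checks = np.vstack([normal, odd])

# Isolation forests flag points that are easy to isolate, i.e. outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(checks)
flags = model.predict(checks)  # -1 means flagged for human review
print(f"{(flags == -1).sum()} of {len(checks)} checks flagged for review")
```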

Statistical Significance—and Why It Matters for Parenting


Blog by Emily Oster: “…When we say an effect is “statistically significant at the 5% level,” what this means is that there is less than a 5% chance that we’d see an effect of this size if the true effect were zero. (The “5% level” is a common cutoff, but things can be significant at the 1% or 10% level also.) 

The natural follow-up question is: Why would any effect we see occur by chance? The answer lies in the fact that data is “noisy”: it comes with error. To see this a bit more, we can think about what would happen if we studied a setting where we know our true effect is zero. 

My fake study 

Imagine the following (fake) study. Participants are randomly assigned to eat a package of either blue or green M&Ms, and then they flip a (fair) coin and record whether it comes up heads. Your analysis will compare the number of heads that people flip after eating blue versus green M&Ms and report whether this is “statistically significant at the 5% level.”…(More)”.
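
The post's claim is easy to check by simulation: run the fake study many times with a true effect of zero and count how often it comes out "significant". A minimal sketch (not code from the original post), assuming numpy and scipy:

```python
# Simulating the fake M&M study many times. The true effect is zero by
# construction (the coin is fair regardless of M&M colour), yet about 5%
# of studies will be "significant at the 5% level" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group = 2000, 100

false_positives = 0
for _ in range(n_studies):
    blue = rng.binomial(1, 0.5, n_per_group)   # heads after blue M&Ms
    green = rng.binomial(1, 0.5, n_per_group)  # heads after green M&Ms
    _, p = stats.ttest_ind(blue, green)        # test for a difference
    if p < 0.05:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of studies were 'significant'")
# Expect roughly 5%: exactly what the 5% significance level promises.
```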

Emerging technologies in the humanitarian sector


Report and project by RAND: “Emerging technologies have often been explored in the humanitarian sector through small-scale pilot projects, testing their application in a specific context with limited opportunities to replicate the testing across various contexts. The level of familiarity and knowledge of technological development varies across the specific types of humanitarian activities undertaken and technology areas considered.

The study team identified five promising technology areas for the humanitarian sector that could be further explored out to 2030:

  • Advanced manufacturing systems are likely to offer humanitarians opportunities to produce resources and tools in an operating environment characterised by scarcity, the rise of simultaneous crises, and exposure to more intense and severe climate events.
  • Early warning systems are likely to support preparedness and response efforts across the humanitarian sector as multifactorial crises arise.
  • Camp monitoring systems are likely to support efforts not only to address security risks, but also to support the planning and management of sites and the health and wellbeing of displaced populations.
  • Coordination platforms are likely to enhance data collection and information-sharing across various humanitarian stakeholders for the development of timely and bespoke crisis response.
  • Privacy-enhancing technologies (PETs) can support ongoing efforts to comply with increased data privacy and data protection requirements in a humanitarian operating environment in which data collection will remain necessary (a minimal sketch of one such technique follows below).
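
As referenced in the final bullet, here is a minimal sketch of one PET family, the Laplace mechanism for differentially private counts, assuming Python; the epsilon value and the example query are illustrative assumptions, not drawn from the report.

```python
# Toy sketch of one PET: a differentially private count via the Laplace
# mechanism. Epsilon and the example query are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)

def dp_count(true_count: int, epsilon: float) -> float:
    # A count changes by at most 1 when one person is added or removed,
    # so its sensitivity is 1; Laplace noise with scale 1/epsilon then
    # yields epsilon-differential privacy for this query.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# E.g. publishing how many people at a site reported a symptom without
# the released number revealing any single individual's answer.
print(round(dp_count(true_count=312, epsilon=0.5)))
```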

Beyond these five technology areas, the study team also considered three innovation journey opportunities:

  • The establishment of a technology horizon scanning coalition
  • Visioning for emerging technologies in crisis recovery
  • An emerging technology narrative initiative.

To accompany the deployment of specific technologies in the humanitarian sector, the study team also developed a four-step approach aimed at identifying specific guidance needs for end-users and humanitarian practitioners…(More)”.

External Researcher Access to Closed Foundation Models


Report by Esme Harrington and Dr. Mathias Vermeulen: “…addresses a pressing issue: independent researchers need better conditions for accessing and studying the AI models that big companies have developed. Foundation models — the core technology behind many AI applications — are controlled mainly by a few major players who decide who can study or use them.

What’s the problem with access?

  • Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, which means independent, public-interest research can be left out in the cold.
  • High costs: Even when access is granted, it often comes with a hefty price tag that smaller or less-funded teams can’t afford.
  • Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
  • Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work uncovers flaws or vulnerabilities in the AI systems.

The research suggests that companies need to offer more affordable and transparent access to improve AI research. Additionally, governments should provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks…(More)”.

Tech Agnostic


Book by Greg Epstein: “…Today’s technology has overtaken religion as the chief influence on twenty-first century life and community. In Tech Agnostic, Harvard and MIT’s influential humanist chaplain Greg Epstein explores what it means to be a critical thinker with respect to this new faith. Encouraging readers to reassert their common humanity beyond the seductive sheen of “tech,” this book argues for tech agnosticism—not worship—as a way of life. Without suggesting we return to a mythical pre-tech past, Epstein shows why we must maintain a freethinking critical perspective toward innovation until it proves itself worthy of our faith or not.

Epstein asks probing questions that center humanity at the heart of engineering: Who profits from an uncritical faith in technology? How can we remedy technology’s problems while retaining its benefits? Showing how unbelief has always served humanity, Epstein revisits the historical apostates, skeptics, mystics, Cassandras, heretics, and whistleblowers who embody the tech reformation we desperately need. He argues that we must learn how to collectively demand that technology serve our pursuit of human lives that are deeply worth living…(More)”.