Robodebt: When automation fails


Article by Don Moynihan: “From 2016 to 2020, the Australian government operated an automated debt assessment and recovery system, known as “Robodebt,” to recover fraudulent or overpaid welfare benefits. The goal was to save $4.77 billion through debt recovery and reduced public service costs. However, the algorithm and policies at the heart of Robodebt caused wildly inaccurate assessments and administrative burdens that disproportionately impacted those with the least resources. After a federal court ruled the policy unlawful, the government was forced to terminate Robodebt and agree to a $1.8 billion settlement.

Robodebt is important because it is an example of a costly failure with automation. By automation, I mean the use of data to create digital defaults for decisions. This could involve the use of AI, or it could mean the use of algorithms reading administrative data. Cases like Robodebt serve as canaries in the coal mine for policymakers interested in using AI or algorithms as a means to downsize public services on the hazy notion that automation will pick up the slack. But I think they are missing the very real risks involved.

To be clear, the lesson is not “all automation is bad.” Indeed, it offers real benefits in potentially reducing administrative costs and hassles and increasing access to public services (e.g., the use of automated or “ex parte” renewals for Medicaid, which Republicans are considering limiting in their new budget bill). It is this promise that makes automation so attractive to policymakers. But it is also the case that automation can be used to deny access to services, and to put people into digital cages that are burdensome to escape. This is why we need to learn from cases where it has been deployed.

The experience of Robodebt underlines the dangers of using citizens as lab rats to adopt AI on a broad scale before it has been proven to work. Alongside the parallel collapse of the Dutch childcare benefits system, Robodebt provides an extraordinarily rich text to understand how automated decision processes can go wrong.

I recently wrote about Robodebt (with co-authors Morten Hybschmann, Kathryn Gimborys, Scott Loudin, Will McClellan), both in the journal Perspectives on Public Management and Governance and as a teaching case study at the Better Government Lab...(More)”.

Practitioner perspectives on informing decisions in One Health sectors with predictive models


Paper by Kim M. Pepin: “Every decision a person makes is based on a model. A model is an idea about how a process works based on previous experience, observation, or other data. Models may not be explicit or stated (Johnson-Laird, 2010), but they serve to simplify a complex world. Models vary dramatically from conceptual (idea) to statistical (mathematical expression relating observed data to an assumed process and/or other data) or analytical/computational (quantitative algorithm describing a process). Predictive models of complex systems describe an understanding of how systems work, often in mathematical or statistical terms, using data, knowledge, and/or expert opinion. They provide means for predicting outcomes of interest, studying different management decision impacts, and quantifying decision risk and uncertainty (Berger et al. 2021; Li et al. 2017). They can help decision-makers assimilate how multiple pieces of information determine an outcome of interest about a complex system (Berger et al. 2021; Hemming et al. 2022).

People rely daily on system-level models to reach objectives. Choosing the fastest route to a destination is one example. Such a decision may be based on either a mental model of the road system developed from previous experience or a traffic prediction mapping application based on mathematical algorithms and current data. Either way, a system-level model has been applied and there is some uncertainty. In contrast, predicting outcomes for new and complex phenomena, such as emerging disease spread, biological invasion risk (Chen et al. 2023; Elderd et al. 2006; Pepin et al. 2022), or climatic impacts on ecosystems, is more uncertain. Here, public service decision-makers may turn to mathematical models when expert opinion and experience do not resolve enough uncertainty about decision outcomes. But using models to guide decisions also relies on expert opinion and experience. Moreover, even technical experts must make modeling choices regarding model structure and data inputs that carry uncertainty (Elderd et al. 2006), and these might not be completely objective decisions (Bedson et al. 2021). Thus, using models to guide decisions involves subjectivity from both the developer and the end-user, which can lead to apprehension or a lack of trust about using models to inform decisions.

Models may be particularly advantageous to decision-making in One Health sectors, including health of humans, agriculture, wildlife, and the environment (hereafter called One Health sectors) and their interconnectedness (Adisasmito et al. 2022)…(More)”.

The Global A.I. Divide


Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.

Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when it is completed, possibly as soon as next year.

Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.

“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”

Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.

The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power.”…(More)”.

ChatGPT Has Already Polluted the Internet So Badly That It’s Hobbling Future AI Development


Article by Frank Landymore: “The rapid rise of ChatGPT — and the cavalcade of competitors’ generative models that followed suit — has polluted the internet with so much useless slop that it’s already kneecapping the development of future AI models.

As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. 

Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it’s originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI “model collapse.”
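The feedback loop the article describes can be illustrated with a deliberately simplified sketch (this is a toy illustration, not the actual training dynamics of any real model): each "generation" of model over-samples the patterns that were already most frequent in its training data, here approximated by raising probabilities to a power greater than one and renormalizing, so the diversity of outputs, measured as Shannon entropy, collapses over successive generations.

```python
import math

def entropy(p):
    """Shannon entropy in bits: a proxy for output diversity."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def retrain_on_own_output(p, sharpen=1.2):
    # Toy stand-in for one round of training on synthetic data: the new
    # model over-weights already-frequent patterns (probabilities raised
    # to a power > 1, then renormalized -- akin to sampling at low
    # temperature). The exponent 1.2 is an illustrative assumption.
    w = [q ** sharpen for q in p]
    total = sum(w)
    return [q / total for q in w]

# A diverse "human" distribution over eight kinds of content.
human = [0.30, 0.20, 0.15, 0.12, 0.10, 0.06, 0.04, 0.03]

p = human
for _ in range(10):  # ten generations of models trained on model output
    p = retrain_on_own_output(p)

# Diversity shrinks sharply: most of the probability mass ends up on
# the single most common pattern.
print(entropy(human), "->", entropy(p))
```

Under these assumptions the entropy falls from roughly 2.7 bits to well under 1 bit in ten generations, with one pattern absorbing most of the probability mass, a cartoon of the homogenization that the "model collapse" literature describes.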

As a consequence, the finite amount of data predating ChatGPT’s rise becomes extremely valuable. In a new feature, The Register likens this to the demand for “low-background steel,” or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US’s Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what’s old is new: a major source of low-background steel, even today, is WW1- and WW2-era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919…(More)”.

How to Make Small Beautiful: The Promise of Democratic Innovations


Paper by Christoph Niessen & Wouter Veenendaal: “Small states are on average more likely to be democracies and it is often assumed that democracy functions better in small polities. ‘Small is beautiful’, proponents say. Yet, empirical scholarship shows that, while smallness comes with socio-political proximity, which facilitates participation and policy implementation, it also incentivizes personalism, clientelism and power concentration. Largeness, instead, comes with greater socio-political distance, but strengthens institutional checks and entails scale advantages. In this article, we depart from this trade-off and, wondering ‘how to make small beautiful’, we examine a potential remedy: democratic innovations. To do so, we first show that representative institutions were adopted in small polities by replication rather than by choice, and that they can aggravate the democratic problems associated with smallness. Subsequently, we draw on four usages of direct and deliberative democratic practices in small polities to explore which promises they offer to correct some of these pitfalls…(More)”.

Government at a Glance 2025


OECD Report: “Governments face a highly complex operating environment marked by major demographic, environmental, and digital shifts, alongside low trust and constrained fiscal space. 

Responding effectively means concentrating efforts on three fronts: enhancing individuals’ sense of dignity in their interactions with government, restoring a sense of security amid rapid societal and economic changes, and improving government efficiency and effectiveness to help boost productivity in the economy while restoring public finances. These priorities converge in the governance of the green transition.

Government at a Glance 2025 offers evidence-based tools to tackle these long-term challenges…

Governments are not yet making the most of digital tools and data to improve effectiveness and efficiency

Data, digital tools and AI all offer the prospect of efficiency gains. OECD countries score, on average, 0.61 on the Digital Government Index (on a 0-1 scale) but could improve their digital policy frameworks, whole-of-government approaches and use of data as a strategic asset. On average, only 47% of OECD governments’ high-value datasets are openly available, falling to just 37% in education and 42% in health and social welfare…(More)”.

Disappearing people: A global demographic data crisis threatens public policy


Article by Jessica M. Espey, Andrew J. Tatem, and Dana R. Thomson: “Every day, decisions that affect our lives—such as where to locate hospitals and how to allocate resources for schools—depend on knowing how many people live where and who they are; for example, their ages, occupations, living conditions, and needs. Such core demographic data in most countries come from a census, a count of the population usually conducted every 10 years. But something alarming is happening to many of these critical data sources. As widely discussed at the United Nations (UN) Statistical Commission meeting in New York in March, fewer countries have managed to complete a census in recent years. And even when they are conducted, censuses have been shown to undercount members of certain groups in important ways. Redressing this predicament requires investment and technological solutions alongside extensive political outreach, citizen engagement, and new partnerships…(More)”

Why PeaceTech must be the next frontier of innovation and investment


Article by Stefaan Verhulst and Artur Kluz: “…amidst this frenzy, a crucial question is being left unasked: Can technology be used not just to win wars, but to prevent them and save people’s lives?

There is an emerging field that dares to pose this question—PeaceTech. It is the use of technology to save human lives, prevent conflict, de-escalate violence, rebuild fractured communities, and secure fragile peace in post-conflict environments.

From early warning systems that predict outbreaks of violence, to platforms ensuring aid transparency, and mobile tools connecting refugees to services: PeaceTech is real, it works—and it is radically underfunded.

Unlike the vast sums pouring into defense startups, peacebuilding efforts, including PeaceTech organizations and ventures, struggle for scraps. In 2020, the United Nations Secretary-General announced an ambitious goal of raising $1.5 billion in peacebuilding support over seven years. In contrast, private investment in defense tech crossed $34 billion in 2023 alone.

Why is PeaceTech so neglected?

One reason PeaceTech is so neglected is cultural: in the tech world, “peace” can seem abstract or idealistic—soft power in a world of hard tech. In reality, peace is not soft; it is among the hardest, most complex challenges of our time. Peace requires systemic thinking, early intervention, global coordination, and a massive infrastructure of care, trust, and monitoring. Maintaining peace in a hyper-polarized, technologically complex world is a feat of engineering, diplomacy, and foresight.

And it’s a business opportunity. According to the Institute for Economics and Peace, violence costs the global economy over $17 trillion per year—about 13% of global GDP. Even modest improvements in peace would unlock billions in economic value.

Consider the peace dividend from predictive analytics that can help governments or international organizations intervene or mediate before conflict breaks out, or AI-powered verification tools to enforce ceasefires and disinformation controls. PeaceTech, if scaled, could become a multibillion-dollar market—and a critical piece of the security architecture of the future…(More)”. See also Kluz Prize for PeaceTech (Applications Open).

DeepSeek Inside: Origins, Technology, and Impact


Article by Michael A. Cusumano: “The release of DeepSeek V3 and R1 in January 2025 caused steep declines in the stock prices of companies that provide generative artificial intelligence (GenAI) infrastructure technology and datacenter services. These two large language models (LLMs) came from a little-known Chinese startup with approximately 200 employees versus at least 3,500 for industry-leader OpenAI. DeepSeek seemed to have developed this powerful technology much more cheaply than previously thought possible. If true, DeepSeek had the potential to disrupt the economics of the entire GenAI ecosystem and the dominance of U.S. companies ranging from OpenAI to Nvidia.

DeepSeek-R1 defines itself as “an artificial intelligence language model developed by OpenAI, specifically based on the generative pre-trained transformer (GPT) architecture.” Here, DeepSeek acknowledges that the transformer researchers (who published their landmark paper while at Google in 2017) and OpenAI developed its basic technology. Nonetheless, V3 and R1 display impressive skills in neural-network system design, engineering, and optimization, and DeepSeek’s publications provide rare insights into how the technology actually works. This column reviews, for the non-expert reader, what we know about DeepSeek’s origins, technology, and impact so far…(More)”.

The war over the peace business


Article by Tekendra Parmar: “At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day.

As a mix of Beltway bureaucrats, military personnel, and Washington’s consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for “warfighters.” Lockheed Martin touts the many ways it is implementing AI throughout its weaponry systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. “We’re living in a world with great-power competition,” the presenter says. “We can’t rule out the possibility of war — but the best way to prevent a war is deterrence,” he says, flanked by videos of drones flying through what look like the rugged mountains and valleys of Kandahar.

Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo says it seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to “strengthen” America and its allies’ “competitiveness in critical technologies.”

One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: “Peace tech,” as the company’s cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia’s all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could predict it.

Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov’s 1950s “Foundation” series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict “COVID cases” and “violent events.” In its AI audit two years ago, the State Department revealed that it started training AI on “open-source political, social, and economic datasets” to predict “mass civilian killings.” The UN is also said to have experimented with AI to model the war in Gaza…(More)”. See also Kluz Prize for PeaceTech (Applications Open).