
Stefaan Verhulst

Report by Eurostat: “Artificial intelligence (AI) is transforming the European Union’s economy and society, reshaping how businesses operate and how individuals live and work. This statistical report examines the usage of AI technologies among enterprises and citizens of the EU, providing key insights based on the latest available data…(More)”.

The use of artificial intelligence technologies in the European Union

Article by Stefaan Verhulst: “The AI Index Report 2026, released this week by Stanford HAI, offers a compelling portrait of what can only be described as an ongoing AI Summer. The indicators are striking: rapid adoption reaching more than half the population within three years, surging investment, near-human performance across multiple domains, and widespread deployment in science, medicine, and the economy. By nearly every conventional metric — capability, capital, and diffusion — AI is accelerating.

AI Index Report 2026 — Figure 2.1.1.

Yet, embedded within the report is a quieter but more consequential story: the deepening of a data winter. Nowhere is this more clearly articulated than in the report’s own section on the potential exhaustion of training data (page 25).

The report notes growing concern among leading researchers that we may be approaching “peak data”—a point at which access to high-quality human-generated text and web data is effectively exhausted. Some projections suggest that this depletion could occur sometime between 2026 and 2032. This is not a marginal issue. Data exhaustion directly challenges the scaling paradigm that has underpinned AI’s recent breakthroughs. What appears as exponential growth in capability may, in fact, be approaching a structural ceiling, not due to limits in compute or model design, but due to constraints in data availability. In other words, the AI summer may be running on finite fuel.

The report further underscores that synthetic data — often proposed as a solution to data scarcity — has not yet proven to be a full substitute for real-world data, particularly in pre-training contexts. While hybrid approaches combining real and synthetic data can accelerate training, they do not surpass the performance of models trained on high-quality real data. Purely synthetic training, meanwhile, remains effective only in narrower or smaller-scale settings (e.g., for specialized RAG applications or sector-specific models). The implication is clear: the quality and diversity of real-world data remain irreplaceable at the frontier…(More)”.

AI Summer, Data Winter: What the AI Index Reveals — and What It Doesn’t Yet Measure

Report by Valerie Wirtschafter: “Three consecutive administrations have made adoption of artificial intelligence (AI) across the U.S. federal government a priority. Most recently, the Trump administration’s AI Action Plan highlighted AI’s potential to “help deliver the highly responsive government the American people expect and deserve.” To assess the current state of AI adoption across the federal government, this report draws on AI use case inventories from 2023 to 2025, federal jobs data, OMB memoranda, request for information submissions, and interviews with current and former federal technologists across eight agencies.

While the scope and pace of AI adoption accelerated significantly over the past three years, AI use across the federal government remains concentrated among a handful of large agencies. Workforce capacity constraints, a risk-averse culture, procurement and funding challenges, and low public trust in AI systems slow adoption efforts.

To bolster responsible AI adoption, the federal government could expand support for technical talent and AI literacy across agencies; continue to address the structural barriers in procurement, regulation, and budgeting that hinder technology modernization more broadly; and foster public trust through stronger transparency practices, improved use case inventories, and a focus on high-impact, positive applications that demonstrably improve how government serves the American people…(More)”.

Assessing the state of AI adoption across the federal government

Book by Gwen Ottinger: “For many people, science and social justice seem to be natural allies: the slogan “science is real” often accompanies affirmations of diversity and reproductive rights. In practice, too, doing science is an increasingly prevalent strategy of social and environmental justice movements. But while it seems apparent that science can aid in the pursuit of justice, it can be hard to explain how it does so, and thus hard to know how to deploy science most strategically.

In The Science of Repair, Gwen Ottinger draws on years of on-the-ground research to offer a much-needed explanation of how science works to combat injustice. Telling the stories of ordinary people who’ve turned to science in the hopes of reducing toxic pollution in their communities, the scientists and innovators who’ve developed methods to enable communities to better represent their experiences, and the charismatic technologies that they’ve deployed, Ottinger presents a surprising conclusion: proving that people have been harmed, in itself, rarely advances justice. The process of investigating injustice, on the other hand, can strengthen shared standards for right and wrong, increase ordinary people’s ability to hold powerful actors accountable, and bolster hope that wrongs will be redressed: all essential elements of a just society.

For those who believe that science should matter to public discourse and decision-making, Gwen Ottinger’s engaging new work offers clear steps to help ensure that scientific investigations further justice. It brings much needed nuance to our thinking about how science can do good in the world and why we should defend it…(More)”

The Science of Repair: How People who Believe in Facts Can Build a Better Future

Book by Nana Ariel & Dana Riesenfeld: “… explores the modern obsession with originality through the figure that most threatens it: the cliché. From the rise of industrial print to the age of artificial intelligence, it shows how the notion of the cliché has shaped our understanding of creativity, banality, independent thought, and the limits of human agency. Rather than treating clichés as fixed, exhausted expressions, the book understands them as constructed experiences of déjà vu – moments when language feels strangely familiar, as if we have already heard or said it too many times before. The cliché is a dynamic cultural form that makes us feel the weight of the already-said.

The book examines how clichés are not only used naïvely or dismissed ironically, but are continually negotiated in literature, art, popular culture, and everyday discourse – inhabited, twisted, and revalued within different contexts. Such negotiations reveal how speakers and writers situate themselves within the tension between convention and invention, the collective and the singular, sincerity and performance.

The book traces how clichés have come to define what it means to be both human and modern. With the emergence of AI, in which machines learn through repetition and prediction, and as concerns about the homogenization of human discourse increase, the cliché returns as a central mechanism. The authors reveal clichés as scorned yet indispensable – something we can’t live with, and can’t live without…(More)”.

Clichés We Live By

Resource by Andreas Marx et al: “…The assessment of whether an implemented project can be considered “successful” frequently focuses on conventional metrics such as technical commissioning, user numbers reached, or cost-benefit analyses. While this is often politically desired and regarded as sufficient, it rarely reflects the actual state of affairs. The findings of such assessments provide important insights into efficiency, but do not answer the central question of whether and to what extent projects are genuinely effective and contribute to overarching strategic goals — such as improved quality of life, social participation, or contributions to global sustainability and digitalization agendas. The systematic measurement of impacts, however, remains rarely established in practice, due in large part to its greater complexity compared to the verification of basic implementation parameters.

This is precisely where the present guide comes in. Its aim is to provide municipalities with practical guidance and to reframe impact-oriented evaluation not as an additional burden, but as a useful steering instrument…(More)”.

Impact-Oriented Evaluation of Smart City Projects

Article by David S. Johnson, Maggie Meinhardt, and John Sabelhaus: “For many household surveys in the United States, response rates have been steadily declining for at least the past two decades.” This is a quote from a National Academies of Sciences report from 2013. It is still true today, and it is true for all wealthy countries. Suffering from low response rates and increasing costs, surveys are often described as 20th century technology that needs to be replaced.

But surveys capture things we cannot get from administrative data, as Census Bureau Deputy Director Ron Jarmin noted at a recent event. While administrative data could provide a person’s employment and earnings, only surveys can determine (for example) whether someone was looking for work, which is key for measuring the unemployment rate.

Declining survey participation, both in the U.S. and abroad, is often raised as a large challenge for the statistical system and cited as a reason to eliminate surveys in favor of other measurement strategies. But rather than discard this important data source, researchers should seek to understand how response rates impact the statistics we care about and why response rates are falling in the first place.

The key issues for how survey participation impacts economic statistics are whether lower response rates lead to less statistical precision and whether they actually create statistical bias. Lower response rates lead to lower sample sizes and thus less precision, but the statistics may well remain unbiased so long as differences in survey participation are not correlated with the economic outcome we are measuring. Statistical bias is a larger concern, because policymakers would be reading economic signposts that are literally pointing in the wrong direction.

There are many plausible reasons why survey response rates are declining. Among these are the difficulty of contacting the individuals who are (randomly) chosen for the survey sample, respondent concerns about the time burden of completing a survey, and respondent fears about the privacy of their personal data. These difficulties are not unique to government economic surveys, and although the challenges may be getting worse, the unique role of economic surveys means we need to move forward using tried and true methods for improving survey participation…(More)”.
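The distinction drawn above between lost precision and outright bias can be illustrated with a small simulation. This is an illustrative sketch, not drawn from the article: the population values, response probabilities, and the assumption that response probability declines linearly with earnings are all invented for the example.

```python
import random

random.seed(0)

# A synthetic "population" of annual earnings (hypothetical numbers).
population = [random.gauss(50_000, 15_000) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Case 1: nonresponse uncorrelated with earnings -- each person responds
# with probability 0.3. The sample is smaller (less precise) but unbiased.
random_sample = [x for x in population if random.random() < 0.3]

# Case 2: nonresponse correlated with earnings -- higher earners respond
# less often. The estimate is now systematically pulled downward.
selective_sample = [x for x in population
                    if random.random() < max(0.05, 0.6 - x / 200_000)]

print(f"true mean:            {true_mean:,.0f}")
print(f"random nonresponse:   {sum(random_sample) / len(random_sample):,.0f}")
print(f"selective nonresponse:{sum(selective_sample) / len(selective_sample):,.0f}")
```

Under random nonresponse the estimated mean stays close to the truth; under earnings-correlated nonresponse it lands noticeably below it, which is the "signpost pointing in the wrong direction" problem the excerpt describes.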

Why did people stop responding to federal economic surveys? What can be done?

Paper by Irene Hardill, Sophie Milnes, Sarah Mills & Rhys Dafydd Jones: “Governments worldwide are grappling with the advent of AI, its potential for governance and for improving public services, and its impacts on their citizenry. This provocation outlines these unfolding geographies of governance. Specifically, we provide a critical lens into the themes of digital transformation, AI and navigating polycrisis, with potential wider global resonance and interest for geographers. To do so, we critically reflect on the potential impacts of AI on public service delivery focusing on civil society organisations offering advice services to citizens in Wales and England. We outline key considerations centred on difference and devolution; connecting with citizens; and discontent and trust…(More)”.

Civil society in crisis times: new geographies of governance in an era of AI

OECD Report: “Mission-oriented innovation policies (MOIPs) have become an important tool for addressing complex societal challenges, with more than 260 missions launched worldwide since the late 2010s. Their rapid expansion has raised both expectations and concerns, highlighting the need for stronger design and implementation strategies. This OECD report draws lessons from a year-long dialogue between policymakers and researchers, exploring how to frame missions, mobilise actors and resources, crowd in private investment and deliver on shared agendas. It offers examples and shared perspectives from those who both think and do missions, as well as a set of converging perspectives on the best practices around mission-oriented innovation policy…(More)”.

Forging New Frontiers in Mission‑Oriented Innovation Policies

Article by Tripp Mickle, Cade Metz, Dylan Freedman, Teresa Mondría Terol and Keith Collins: “A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi.

More than half of the accurate responses were “ungrounded,” meaning they linked to websites that did not completely support the information they provided. This makes it challenging to check AI Overviews’ accuracy.

Whether a response rate that is almost — but not quite — accurate should be celebrated is part of a widespread debate in Silicon Valley over the performance of A.I. systems. It speaks to the fundamental core of what we can trust online.

Some technologists argue that Google’s AI Overviews are reasonably accurate and that they have improved in recent months. But others worry that the average person may not realize those results need double-checking.

At the request of The New York Times, Oumi analyzed the accuracy of Google’s AI Overviews using a benchmark test called SimpleQA, which is widely used across the industry to measure the accuracy of A.I. systems. The start-up tested Google’s system in October, when the most complex questions were answered using an A.I. technology called Gemini 2, and then again in February, after it was upgraded to Gemini 3, a more powerful A.I. technology.

In both cases, Oumi’s analysis focused on 4,326 Google searches. The company found that the results were accurate 85 percent of the time with Gemini 2 and 91 percent of the time with Gemini 3…(More)”.
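The scale claims in the excerpt can be sanity-checked with back-of-envelope arithmetic. The figures below assume, generously, that every one of the five trillion annual searches produces an AI Overview (the real share is lower, so these are upper bounds):

```python
# Back-of-envelope check: ~5 trillion searches/year, "accurate roughly
# nine out of 10 times" -> ~10% erroneous.
searches_per_year = 5e12
error_rate = 0.10

errors_per_year = searches_per_year * error_rate
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} erroneous answers per hour")   # tens of millions
print(f"{errors_per_minute:,.0f} per minute")                 # hundreds of thousands
```

The result, roughly 57 million erroneous answers per hour and close to a million per minute, is consistent with the orders of magnitude the article reports.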

How Accurate Are Google’s A.I. Overviews?
