
Stefaan Verhulst

Paper by Teodora Lalova-Spinks et al: “The reuse of health data is critical for advancing health research, yet it raises complex ethical, legal, and societal challenges. In the European Union, the recently adopted European Health Data Space (EHDS) aims to harmonize access to and reuse of health data for research and innovation, while safeguarding individual rights. However, questions remain about what patients value in data reuse and how their values can be embedded in governance frameworks. Belgium, with its strong research tradition and central role in EU policymaking, offers an important testbed for these questions…(More)”.

What patients value in data reuse for oncology research: a multi-stakeholder qualitative study to inform the European Health Data Space implementation in Belgium and beyond

Article by Anusha Krishnan: “What does a global map of plant life look like, and what happens when the data behind it is incomplete?

A recent study, published in Nature Communications in January 2026, describes such a map, built from field surveys, Earth observation systems, and millions of observations recorded by citizen scientists around the world.

This map now offers one of the most in-depth views of how plants function across ecosystems. However, the map also exposes something else: large, persistent gaps in the data that scientists rely on to understand the Earth’s vegetation, which means that quite a bit of the world’s plant life is still poorly documented.

The study used 31 plant traits such as size, growth strategy, leaf characteristics, wood density, reproductive traits, and resource use to outline a global ‘plant economics’ spectrum. These characteristics, also known as functional traits, can help us understand how plant strategies change in response to climate and ecosystem stress.

Currently, most global biodiversity data only tell us which species are found where; they don’t tell us what roles those species play in carbon storage and ecosystem dynamics. Mapping these traits on a global scale reveals a spectrum of characteristics, from fast-growing, nutrient-hungry plants to slow-growing, stress-tolerant ones, and shows how these traits support plant growth, survival, adaptation, and persistence in an ever-changing world. This is especially important for informing models of energy, nutrient, and water cycles, which are increasingly used to plan infrastructure, agricultural, and energy strategies in a world facing climate change.

The researchers used a combination of data from detailed field surveys collected by scientists, millions of observations from citizen scientists, and environmental information derived from satellites and climate records to create this global plant trait map.

They then used machine-learning models to link the plant traits with environmental conditions such as temperature, rainfall, and soil properties, predicting plant traits in places where direct measurements were unavailable. The models were generated using three approaches: scientific surveys only, citizen science only, and both combined…(More)”.
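The core idea, predicting a trait at unsurveyed locations from environmentally similar surveyed ones, can be sketched as a toy distance-weighted nearest-neighbour model. All site values, trait numbers, and the rainfall scaling below are illustrative assumptions, not data or methods from the study:

```python
import math

# Hypothetical surveyed sites: (mean temperature in °C, annual rainfall in mm)
# mapped to an invented "leaf size index". Purely illustrative numbers.
surveyed_sites = [
    ((26.0, 2200.0), 8.5),   # tropical: large leaves
    ((15.0, 800.0),  4.0),   # temperate
    ((5.0, 300.0),   1.5),   # boreal / cold-dry: small leaves
    ((22.0, 50.0),   1.0),   # hot desert
]

def predict_trait(temp_c, rain_mm, k=2):
    """Predict the trait at an unsurveyed site from the k most
    environmentally similar surveyed sites, weighted by similarity."""
    def dist(env):
        # Scale rainfall so both environmental axes contribute comparably.
        return math.hypot(env[0] - temp_c, (env[1] - rain_mm) / 100.0)
    nearest = sorted(surveyed_sites, key=lambda s: dist(s[0]))[:k]
    weights = [1.0 / (dist(env) + 1e-9) for env, _ in nearest]
    return sum(w * t for w, (_, t) in zip(weights, nearest)) / sum(weights)

# A warm, wet unsurveyed site is predicted close to the tropical value;
# a cold, dry one close to the boreal value.
print(round(predict_trait(25.0, 2000.0), 2))
print(round(predict_trait(6.0, 350.0), 2))
```

The study itself uses far richer models and 31 traits; the sketch only illustrates why environmental similarity lets a model fill spatial gaps, and, conversely, why predictions degrade where no environmentally similar surveyed sites exist.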

Plugging data gaps in global plant diversity using citizen science

Resource by the AI & Democracy Foundation: “… is intended to track the capabilities, research questions, and product gaps that stand between us and deliberative democratic systems that can handle the challenges posed by AI advances.

This map builds on Democratic System Cards (ICML 2025) by providing a concrete path toward improving each of the core dimensions underlying the quality of democratic processes. The current version is particularly intended to accelerate work focused on improving representative deliberative democratic processes. It provides a map of critical ‘democratic capabilities’ across each dimension and supports prioritization of what to research, fund, build, and apply in order to have the most impact.

Our ultimate goal is that key actors making consequential decisions—especially on AI—have access to processes that are sufficiently high quality (e.g., representative, informed, substantive, deliberative, robust, and legible), whether they are governments, corporations, or transnational institutions. The deliberative processes and systems they employ will vary depending on their purpose and context, and we need the toolbox and capabilities necessary to work across those contexts…(More)”.

The Democratic Capabilities Gap Map

Report by Eurostat: “Artificial intelligence (AI) is transforming the European Union’s economy and society, reshaping how businesses operate and how individuals live and work. This statistical report examines the use of AI technologies among enterprises as well as citizens of the EU, providing key insights based on the latest available data…(More)”.

The use of artificial intelligence technologies in the European Union

Article by Stefaan Verhulst: “The AI Index Report 2026, released this week by Stanford HAI, offers a compelling portrait of what can only be described as an ongoing AI Summer. The indicators are striking: rapid adoption reaching more than half the population within three years, surging investment, near-human performance across multiple domains, and widespread deployment in science, medicine, and the economy. By nearly every conventional metric — capability, capital, and diffusion — AI is accelerating.

AI Index Report 2026 — Figure 2.1.1.

Yet, embedded within the report is a quieter but more consequential story: the deepening of a data winter. Nowhere is this more clearly articulated than in the report’s own section on the potential exhaustion of training data (page 25).

The report notes growing concern among leading researchers that we may be approaching “peak data” — a point at which access to high-quality human-generated text and web data is effectively exhausted. Some projections suggest that this depletion could occur sometime between 2026 and 2032. This is not a marginal issue. Data exhaustion directly challenges the scaling paradigm that has underpinned AI’s recent breakthroughs. What appears as exponential growth in capability may, in fact, be approaching a structural ceiling — not due to limits in compute or model design, but due to constraints in data availability. In other words, the AI summer may be running on finite fuel.

The report further underscores that synthetic data — often proposed as a solution to data scarcity — has not yet proven to be a full substitute for real-world data, particularly in pre-training contexts. While hybrid approaches combining real and synthetic data can accelerate training, they do not surpass the performance of models trained on high-quality real data. Purely synthetic training, meanwhile, remains effective only in narrower or smaller-scale settings (e.g., for specialized RAG applications or sector-specific models). The implication is clear: the quality and diversity of real-world data remain irreplaceable at the frontier…(More)”.

AI Summer, Data Winter: What the AI Index Reveals — and What It Doesn’t Yet Measure

Report by Valerie Wirtschafter: “Three consecutive administrations have made adoption of artificial intelligence (AI) across the U.S. federal government a priority. Most recently, the Trump administration’s AI Action Plan highlighted AI’s potential to “help deliver the highly responsive government the American people expect and deserve.” To assess the current state of AI adoption across the federal government, this report draws on AI use case inventories from 2023 to 2025, federal jobs data, OMB memoranda, request for information submissions, and interviews with current and former federal technologists across eight agencies.

While the scope and pace of AI adoption accelerated significantly over the past three years, AI use across the federal government remains concentrated among a handful of large agencies. Workforce capacity constraints, a risk-averse culture, procurement and funding challenges, and low public trust in AI systems slow adoption efforts.

To bolster responsible AI adoption, the federal government could expand support for technical talent and AI literacy across agencies; continue to address the structural barriers in procurement, regulation, and budgeting that hinder technology modernization more broadly; and foster public trust through stronger transparency practices, improved use case inventories, and a focus on high-impact, positive applications that demonstrably improve how government serves the American people…(More)”.

Assessing the state of AI adoption across the federal government

Book by Gwen Ottinger: “For many people, science and social justice seem to be natural allies — the slogan “science is real” often accompanies affirmations of diversity and reproductive rights. In practice, too, doing science is an increasingly prevalent strategy of social and environmental justice movements. But while it seems apparent that science can aid in the pursuit of justice, it can be hard to explain how it does so — and thus hard to know how to deploy science most strategically.

In The Science of Repair, Gwen Ottinger draws on years of on-the-ground research to offer a much-needed explanation of how science works to combat injustice. Telling the stories of ordinary people who’ve turned to science in the hopes of reducing toxic pollution in their communities, the scientists and innovators who’ve developed methods to enable communities to better represent their experiences, and the charismatic technologies that they’ve deployed, Ottinger presents a surprising conclusion: proving that people have been harmed, in itself, rarely advances justice. The process of investigating injustice, on the other hand, can strengthen shared standards for right and wrong, increase ordinary people’s ability to hold powerful actors accountable, and bolster hope that wrongs will be redressed — all essential elements of a just society.

For those who believe that science should matter to public discourse and decision-making, Gwen Ottinger’s engaging new work offers clear steps to help ensure that scientific investigations further justice. It brings much-needed nuance to our thinking about how science can do good in the world and why we should defend it…(More)”.

The Science of Repair: How People Who Believe in Facts Can Build a Better Future

Book by Nana Ariel & Dana Riesenfeld: “… explores the modern obsession with originality through the figure that most threatens it: the cliché. From the rise of industrial print to the age of artificial intelligence, it shows how the notion of the cliché has shaped our understanding of creativity, banality, independent thought, and the limits of human agency. Rather than treating clichés as fixed, exhausted expressions, the book understands them as constructed experiences of déjà vu – moments when language feels strangely familiar, as if we have already heard or said it too many times before. The cliché is a dynamic cultural form that makes us feel the weight of the already-said.

The book examines how clichés are not only used naïvely or dismissed ironically, but are continually negotiated in literature, art, popular culture, and everyday discourse – inhabited, twisted, and revalued within different contexts. Such negotiations reveal how speakers and writers situate themselves within the tension between convention and invention, the collective and the singular, sincerity and performance.

The book traces how clichés have come to define what it means to be both human and modern. With the emergence of AI, in which machines learn through repetition and prediction, and as concerns about the homogenization of human discourse increase, the cliché returns as a central mechanism. The authors reveal clichés as scorned yet indispensable – something we can’t live with, and can’t live without…(More)”.

Clichés We Live By

Resource by Andreas Marx et al: “…The assessment of whether an implemented project can be considered “successful” frequently focuses on conventional metrics such as technical commissioning, user numbers reached, or cost-benefit analyses. While this is often politically desired and regarded as sufficient, it rarely reflects the actual state of affairs. The findings of such assessments provide important insights into efficiency, but do not answer the central question of whether and to what extent projects are genuinely effective and contribute to overarching strategic goals — such as improved quality of life, social participation, or contributions to global sustainability and digitalization agendas. The systematic measurement of impacts, however, remains rarely established in practice, due in large part to its greater complexity compared to the verification of basic implementation parameters.

This is precisely where the present guide comes in. Its aim is to provide municipalities with practical guidance and to reframe impact-oriented evaluation not as an additional burden, but as a useful steering instrument…(More)”.

Impact-Oriented Evaluation of Smart City Projects

Article by David S. Johnson, Maggie Meinhardt, and John Sabelhaus: “For many household surveys in the United States, response rates have been steadily declining for at least the past two decades.” This is a quote from a National Academies of Sciences report from 2013. It is still true today, and it is true for all wealthy countries. Suffering from low response rates and increasing costs, surveys are often described as 20th century technology that needs to be replaced.

But surveys capture things we cannot get from administrative data, as Census Bureau Deputy Director Ron Jarmin noted at a recent event. While administrative data could provide a person’s employment and earnings, only surveys can determine (for example) whether someone was looking for work, which is key for measuring the unemployment rate.

Declining survey participation, both in the U.S. and abroad, is often raised as a large challenge for the statistical system and cited as a reason to eliminate surveys in favor of other measurement strategies. But rather than discard this important data source, researchers should seek to understand how response rates impact the statistics we care about and why response rates are falling in the first place.

The key issues for how survey participation affects economic statistics are whether lower response rates lead to less statistical precision and whether they actually create statistical bias. Lower response rates mean smaller samples and thus less precision, but the statistics may well remain unbiased so long as differences in survey participation are not correlated with the economic outcome being measured. Statistical bias is the larger concern, because policymakers would be reading economic signposts that are literally pointing in the wrong direction.
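The distinction between lost precision and outright bias can be made concrete with a small simulation. The population size, income distribution, and response rates below are invented purely for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 100,000 annual incomes (illustrative numbers).
population = [random.lognormvariate(10.5, 0.6) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Case 1: low response rate, but participation is unrelated to income.
# The sample is smaller (less precision), yet roughly unbiased.
random_respondents = [y for y in population if random.random() < 0.30]

# Case 2: similar overall response level, but high earners respond less often.
# Participation now correlates with the outcome being measured, creating bias.
selective_respondents = [
    y for y in population
    if random.random() < (0.45 if y < true_mean else 0.15)
]

random_error = abs(statistics.mean(random_respondents) - true_mean) / true_mean
selective_error = abs(statistics.mean(selective_respondents) - true_mean) / true_mean

print(f"true mean:             {true_mean:,.0f}")
print(f"random nonresponse:    {random_error:.1%} relative error")
print(f"selective nonresponse: {selective_error:.1%} relative error")
```

In the first case the estimate wobbles but centers on the truth; in the second it is systematically off no matter how large the population. Weighting and imputation can partially correct the second case, but only when the drivers of nonresponse are observed, which is why understanding why response rates fall matters as much as the rates themselves.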

There are many plausible reasons why survey response rates are declining. Among these are the difficulty of contacting the individuals who are (randomly) chosen for the survey sample, respondent concerns about the time burden of completing a survey, and respondent fears about the privacy of their personal data. These difficulties are not unique to government economic surveys, and although the challenges may be getting worse, the unique role of economic surveys means we need to move forward using tried and true methods for improving survey participation…(More)”.

Why did people stop responding to federal economic surveys? What can be done?
