

UNESCO Report: “Artificial intelligence (AI) is rapidly being embedded across companies’ products, services and internal operations, yet governance and disclosure are not evolving at the same speed. This report looks at corporate practice in the context of the emerging responsible AI regulatory landscape and analyses publicly available data collected by the Thomson Reuters Foundation’s AI Company Data Initiative, the largest global dataset of corporate responsible AI disclosures. As privately developed or deployed AI systems shape more of daily life, transparency must move beyond technical descriptions to show how accountability works, including who makes decisions, how ethical issues are escalated, and what remediation paths exist when things go wrong. Clear responsibility for harms or breaches should be identifiable in practice, not just in principle. Just as we expect openness and accountability from government, it is important that the private sector meets comparable transparency standards for AI that affects the public…(More)”.

Responsible AI in practice: 2025 global insights from the AI Company Data Initiative

Paper by Thibault Schrepel: “A digital brain, as coined by Andrej Karpathy, is a personal knowledge infrastructure built from documents you trust. It maps connections between sources, surfaces patterns and inconsistencies. It generates answers on demand with references to the underlying material. The more it is used, the more connections it builds. The output is a private, queryable Wikipedia. On demand, the system generates wiki pages on any theme covered by the corpus, and each new page feeds back into the knowledge base.

This guide documents a pipeline for building such a system from any document corpus, for research purposes. The pipeline adapts Karpathy’s methodology and adds six research-specific contributions. (1) A schema design procedure encodes authority hierarchies and surfaces gaps in the literature. (2) A centrality-weighted wiki generation procedure anchors each article around the most-connected sources. (3) A six-step research protocol produces hypotheses rather than retrieved information. (4) Claim-level extraction moves the unit of analysis from documents to propositions, which makes visible the incompatibilities that document-level graphs hide. (5) A persistent hypothesis register stores every query-generated conjecture and re-tests it as the corpus grows. (6) A complexity-theoretic diagnostic layer measures the graph’s network properties and reports how the field is structured. The pipeline is field-neutral. The two implementations documented here, one academic research corpus and one European Commission decisions dataset, are illustrative examples…(More)”.
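To make the excerpt above more concrete, here is a minimal sketch of how two of the listed ideas could fit together in code: claim-level nodes in a graph (contribution 4) and centrality-weighted anchoring of articles around the most-connected sources (contribution 2). The sketch uses Python and the networkx library; the claim tuples, attribute names, and helper functions are illustrative assumptions, not the paper’s actual schema or implementation.

```python
# Illustrative sketch only: a claim-level graph with centrality-weighted
# source anchoring. The data model (claim_id, source, theme, supports)
# is an assumption for demonstration, not the paper's schema.
import networkx as nx

def build_claim_graph(claims):
    """claims: iterable of (claim_id, source, theme, supports) tuples,
    where `supports` lists the other claim_ids each claim links to."""
    g = nx.Graph()
    for claim_id, source, theme, supports in claims:
        g.add_node(claim_id, source=source, theme=theme)
        for other in supports:
            g.add_edge(claim_id, other)
    return g

def anchor_source_for_theme(g, theme):
    """Pick the source whose claims are most central within a theme,
    mirroring the idea of anchoring a wiki page on the most-connected source."""
    centrality = nx.degree_centrality(g)
    scores = {}
    for node, data in g.nodes(data=True):
        if data.get("theme") == theme:
            source = data["source"]
            scores[source] = scores.get(source, 0.0) + centrality[node]
    return max(scores, key=scores.get) if scores else None

# Toy example: three claims from two sources on the theme "liability"
claims = [
    ("c1", "Regulation A", "liability", []),
    ("c2", "Case B", "liability", ["c1", "c3"]),
    ("c3", "Case B", "liability", []),
]
g = build_claim_graph(claims)
print(anchor_source_for_theme(g, "liability"))  # -> "Case B"
```

A real pipeline would, of course, extract claims from documents, encode an authority hierarchy in the schema, and persist the hypothesis register; this fragment only shows the graph-and-centrality core.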

Building a Digital Brain for Research: A Guide to Queryable Knowledge Graphs

Paper by Teodora Lalova-Spinks et al: “The reuse of health data is critical for advancing health research, yet it raises complex ethical, legal, and societal challenges. In the European Union, the recently adopted European Health Data Space (EHDS) aims to harmonize access to and reuse of health data for research and innovation, while safeguarding individual rights. However, questions remain about what patients value in data reuse and how their values can be embedded in governance frameworks. Belgium, with its strong research tradition and central role in EU policymaking, offers an important testbed for these questions…(More)”.

What patients value in data reuse for oncology research: a multi-stakeholder qualitative study to inform the European Health Data Space implementation in Belgium and beyond

Article by Anusha Krishnan: “What does a global map of plant life look like, and what happens when the data behind it is incomplete?

A recent study, published in Nature Communications in January 2026, describes such a map, built from field surveys, Earth observation systems, and millions of observations recorded by citizen scientists around the world.

This map now offers one of the most in-depth views of how plants function across ecosystems. However, the map also exposes something else: large, persistent gaps in the data that scientists rely on to understand the Earth’s vegetation, which means that quite a bit of the world’s plant life is still poorly documented.

The study used 31 plant traits such as size, growth strategy, leaf characteristics, wood density, reproductive traits, and resource use to outline a global ‘plant economics’ spectrum. These characteristics, also known as functional traits, can help us understand how plant strategies change in response to climate and ecosystem stress.

Currently, most global biodiversity data only tell us which species are found where; they don’t tell us what roles those species play in carbon storage and ecosystem dynamics. Mapping these traits on a global scale gives us a spectrum of characteristics, from fast-growing, nutrient-hungry plants to slow-growing, stress-tolerant ones, and shows how these traits support plant growth, survival, adaptation, and persistence in an ever-changing world. This is especially important for informing models of energy, nutrient, and water cycles, which are increasingly being used to plan infrastructure, agricultural, and energy strategies in a world facing climate change.

The researchers used a combination of data from detailed field surveys collected by scientists, millions of observations from citizen scientists, and environmental information derived from satellites and climate records to create this global plant trait map.

They then used machine-learning models to link the plant traits with environmental conditions such as temperature, rainfall, and soil properties, allowing them to predict plant traits in places where direct measurements were unavailable. The models were built using three approaches: scientific surveys only, citizen science only, and both combined…(More)”.
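As a rough illustration of that modelling step, the sketch below trains a random-forest regressor to predict a plant trait at sites where only environmental covariates are available. The covariate names, the synthetic trait values, and the choice of model are assumptions for demonstration; they are not the study’s actual data or methods.

```python
# Illustrative sketch only: predict a plant trait from environmental
# covariates (temperature, rainfall, soil pH) where field measurements
# are missing. The data here are synthetic, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Covariates at sites that do have trait measurements:
# temperature (degrees C), annual rainfall (mm), soil pH
X_train = rng.uniform([0, 200, 4.5], [30, 3000, 8.5], size=(500, 3))
# A made-up "leaf trait" that loosely tracks temperature and rainfall
y_train = 0.02 * X_train[:, 0] + 0.0005 * X_train[:, 1] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Sites with satellite/climate covariates but no field measurements
X_unsampled = np.array([
    [25.0, 2200.0, 5.5],  # warm, wet, acidic soils
    [5.0, 400.0, 7.8],    # cold, dry, alkaline soils
])
print(model.predict(X_unsampled))  # predicted trait values for the gap sites
```

In the study’s framing, the same kind of model could be fit three times, on survey data only, citizen-science data only, and both combined, and the predictions compared.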

Plugging data gaps in global plant diversity using citizen science

Resource by the AI & Democracy Foundation: “… is intended to track the capabilities, research questions, and product gaps that stand between us and deliberative democratic systems that can handle the challenges posed by AI advances.

This map builds on Democratic System Cards (ICML 2025) by providing a concrete path toward improving each of the core dimensions underlying the quality of democratic processes. The current version is intended particularly to accelerate those focused on improving representative deliberative democratic processes. It provides a map of critical ‘democratic capabilities’ across each dimension and supports prioritization of what to research, fund, build, and apply in order to have the most impact.

Our ultimate goal is that key actors making consequential decisions—especially on AI—have access to processes that are sufficiently high quality (e.g., representative, informed, substantive, deliberative, robust, and legible), whether they are governments, corporations, or transnational institutions. The deliberative processes and systems they employ will vary depending on their purpose and context, and we need the toolbox and capabilities necessary to work across those contexts…(More)”.

The Democratic Capabilities Gap Map

Report by Eurostat: “Artificial intelligence (AI) is transforming the European Union’s economy and society, reshaping how businesses operate and how individuals live and work. This statistical report examines the use of AI technologies among enterprises and citizens of the EU, providing key insights based on the latest available data…(More)”.

The use of artificial intelligence technologies in the European Union

Article by Stefaan Verhulst: “The AI Index Report 2026, released this week by Stanford HAI, offers a compelling portrait of what can only be described as an ongoing AI Summer. The indicators are striking: rapid adoption reaching more than half the population within three years, surging investment, near-human performance across multiple domains, and widespread deployment in science, medicine, and the economy. By nearly every conventional metric — capability, capital, and diffusion — AI is accelerating.

[Figure: AI Index Report 2026, Figure 2.1.1]

Yet, embedded within the report is a quieter but more consequential story: the deepening of a data winter. Nowhere is this more clearly articulated than in the report’s own section on the potential exhaustion of training data (page 25).

The report notes growing concern among leading researchers that we may be approaching “peak data”—a point at which access to high-quality human-generated text and web data is effectively exhausted. Some projections suggest that this depletion could occur sometime between 2026 and 2032. This is not a marginal issue. Data exhaustion directly challenges the scaling paradigm that has underpinned AI’s recent breakthroughs. What appears as exponential growth in capability may, in fact, be approaching a structural ceiling, not due to limits in compute or model design, but due to constraints in data availability. In other words, the AI summer may be running on finite fuel.

The report further underscores that synthetic data — often proposed as a solution to data scarcity — has not yet proven to be a full substitute for real-world data, particularly in pre-training contexts. While hybrid approaches combining real and synthetic data can accelerate training, they do not surpass the performance of models trained on high-quality real data. Purely synthetic training, meanwhile, remains effective only in narrower or smaller-scale settings (e.g., for specialized RAG applications or sector-specific models). The implication is clear: the quality and diversity of real-world data remain irreplaceable at the frontier…(More)”.

AI Summer, Data Winter: What the AI Index Reveals — and What It Doesn’t Yet Measure

Report by Valerie Wirtschafter: “Three consecutive administrations have made adoption of artificial intelligence (AI) across the U.S. federal government a priority. Most recently, the Trump administration’s AI Action Plan highlighted AI’s potential to “help deliver the highly responsive government the American people expect and deserve.” To assess the current state of AI adoption across the federal government, this report draws on AI use case inventories from 2023 to 2025, federal jobs data, OMB memoranda, request for information submissions, and interviews with current and former federal technologists across eight agencies.

While the scope and pace of AI adoption accelerated significantly over the past three years, AI use across the federal government remains concentrated among a handful of large agencies. Workforce capacity constraints, a risk-averse culture, procurement and funding challenges, and low public trust in AI systems slow adoption efforts.

To bolster responsible AI adoption, the federal government could expand support for technical talent and AI literacy across agencies; continue to address the structural barriers in procurement, regulation, and budgeting that hinder technology modernization more broadly; and foster public trust through stronger transparency practices, improved use case inventories, and a focus on high-impact, positive applications that demonstrably improve how government serves the American people…(More)”.

Assessing the state of AI adoption across the federal government

Book by Gwen Ottinger: “For many people, science and social justice seem to be natural allies: the slogan “science is real” often accompanies affirmations of diversity and reproductive rights. In practice, too, doing science is an increasingly prevalent strategy of social and environmental justice movements. But while it seems apparent that science can aid in the pursuit of justice, it can be hard to explain how it does so, and thus hard to know how to deploy science most strategically.

In The Science of Repair, Gwen Ottinger draws on years of on-the-ground research to offer a much-needed explanation of how science works to combat injustice. Telling the stories of ordinary people who’ve turned to science in the hopes of reducing toxic pollution in their communities, the scientists and innovators who’ve developed methods to enable communities to better represent their experiences, and the charismatic technologies that they’ve deployed, Ottinger presents a surprising conclusion: proving that people have been harmed, in itself, rarely advances justice. The process of investigating injustice, on the other hand, can strengthen shared standards for right and wrong, increase ordinary people’s ability to hold powerful actors accountable, and bolster hope that wrongs will be redressed, all essential elements of a just society.

For those who believe that science should matter to public discourse and decision-making, Gwen Ottinger’s engaging new work offers clear steps to help ensure that scientific investigations further justice. It brings much-needed nuance to our thinking about how science can do good in the world and why we should defend it…(More)”

The Science of Repair: How People who Believe in Facts Can Build a Better Future

Book by Nana Ariel & Dana Riesenfeld: “… explores the modern obsession with originality through the figure that most threatens it: the cliché. From the rise of industrial print to the age of artificial intelligence, it shows how the notion of the cliché has shaped our understanding of creativity, banality, independent thought, and the limits of human agency. Rather than treating clichés as fixed, exhausted expressions, the book understands them as constructed experiences of déjà vu – moments when language feels strangely familiar, as if we have already heard or said it too many times before. The cliché is a dynamic cultural form that makes us feel the weight of the already-said.

The book examines how clichés are not only used naïvely or dismissed ironically, but are continually negotiated in literature, art, popular culture, and everyday discourse – inhabited, twisted, and revalued within different contexts. Such negotiations reveal how speakers and writers situate themselves within the tension between convention and invention, the collective and the singular, sincerity and performance.

The book traces how clichés have come to define what it means to be both human and modern. With the emergence of AI, in which machines learn through repetition and prediction, and as concerns about the homogenization of human discourse increase, the cliché returns as a central mechanism. The authors reveal clichés as scorned yet indispensable – something we can’t live with, and can’t live without…(More)”.

Clichés We Live By
