
Stefaan Verhulst

Book by Silvia Danielak: “Roads, bridges, a renewable power plant, and an electricity grid: UN peacekeepers might be unusual infrastructure builders, but they’re certainly not unambitious. Since the beginning of the UN’s peacekeeping activities after the end of World War II, the Blue Helmets have cemented streets, constructed bridges, and dug wells in conflict zones. But how did the military arm of the world’s primary diplomatic forum become involved in such activities in its quest for peace, and with what consequences? Peace Infrastructures analyzes the turn to ever-more-complex infrastructure projects, from early road building via urban community projects to the commissioning of entire renewable power plants, in the context of an evolving understanding of peace “problems” and solutions. Tracing the global travel of policies, technologies, and expertise, Silvia Danielak investigates how the shift toward risk management, legacy, and climate security was driven by, and materialized in, conflict zones, shaping the very idea of peace.

The book critically engages with the UN’s ambition to insert itself in the sustainable development of the countries it seeks to assist, arguing that we need to consider peace operations’ spatial, urban, and material ways of engagement—especially in the face of mounting climate risks. Infrastructure is poised to take a more prominent position within peace operations, but a more nuanced understanding that recognizes its opportunities, as well as its potential for violence, is required…(More)”.

Peace Infrastructures

Article by Stefaan Verhulst: “The AI Index Report 2026, released this week by Stanford HAI, offers a compelling portrait of what can only be described as an ongoing AI Summer. The indicators are striking: rapid adoption reaching more than half the population within three years, surging investment, near-human performance across multiple domains, and widespread deployment in science, medicine, and the economy. By nearly every conventional metric — capability, capital, and diffusion — AI is accelerating.

AI Index Report 2026 — Figure 2.1.1.

Yet, embedded within the report is a quieter but more consequential story: the deepening of a data winter. Nowhere is this more clearly articulated than in the report’s own section on the potential exhaustion of training data (page 25).

The report notes growing concern among leading researchers that we may be approaching “peak data”—a point at which access to high-quality human-generated text and web data is effectively exhausted. Some projections suggest that this depletion could occur sometime between 2026 and 2032. This is not a marginal issue. Data exhaustion directly challenges the scaling paradigm that has underpinned AI’s recent breakthroughs. What appears as exponential growth in capability may, in fact, be approaching a structural ceiling, not due to limits in compute or model design, but due to constraints in data availability. In other words, the AI summer may be running on finite fuel.
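The logic of the “peak data” projection can be sketched with a back-of-envelope model: a roughly fixed stock of high-quality human text set against training demand that grows by a constant factor per model generation. All of the numbers below are illustrative assumptions, not figures from the AI Index Report:

```python
# Back-of-envelope sketch of the "peak data" argument.
# STOCK_TOKENS, the starting demand, and GROWTH are hypothetical
# assumptions chosen for illustration only.

STOCK_TOKENS = 300e12   # assumed usable stock of high-quality text (tokens)
demand = 15e12          # assumed tokens consumed by the current frontier run
GROWTH = 2.5            # assumed growth in data demand per generation (~1 year)

year = 2025
while demand < STOCK_TOKENS:
    year += 1
    demand *= GROWTH

print(f"Under these assumptions, demand exceeds the stock around {year}.")
```

With these toy parameters the crossover lands in 2029, inside the 2026–2032 window the report cites; the point is not the specific year but that exponential demand against a fixed stock exhausts it quickly.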

The report further underscores that synthetic data — often proposed as a solution to data scarcity — has not yet proven to be a full substitute for real-world data, particularly in pre-training contexts. While hybrid approaches combining real and synthetic data can accelerate training, they do not surpass the performance of models trained on high-quality real data. Purely synthetic training, meanwhile, remains effective only in narrower or smaller-scale settings (e.g., for specialized RAG applications or sector-specific models). The implication is clear: the quality and diversity of real-world data remain irreplaceable at the frontier…(More)”.

AI Summer, Data Winter: What the AI Index Reveals — and What It Doesn’t Yet Measure

Paper by Lucia Velasco, et al: “Artificial intelligence is rapidly becoming a foundational layer of the global economy, with projections indicating that the AI market will reach $4.8 trillion by 2033 – approximately the size of Germany’s entire economy. Yet this transformation is unfolding with stark inequality. While advanced economies aggressively invest in local AI capacity and infrastructure, low- and lower-middle-income countries (LLMICs) face systemic barriers that threaten to lock them into technological dependency. Recent announcements from governments and major technology firms show large AI funding commitments, but unclear governance and poor coordination risk turning these investments into deeper global AI inequality rather than lasting domestic capacity…(More)”.

Financing the AI Triad: Compute, Data and Algorithms A framework to build local ecosystems

Article by Valerie Wirtschafter: “On February 28, 2026, a joint U.S.-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially dubbed Operation Epic Fury. Social media quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on U.S. warships, and satellite imagery purporting to show damage to American military bases in the Gulf.

Some of this footage was recycled from unrelated conflicts, including in Ukraine, and even from video games. Yet some of it was entirely fabricated and created with now ubiquitous generative artificial intelligence (AI) tools that can produce even more realistic content at scale. Several observers of the space emphasized the unprecedented volume of AI-generated content and its increasing sophistication.

While much has been written about the potential for AI-generated imagery, videos, and audio to flood the information ecosystem and make it increasingly difficult to parse what is true, AI content has previously only made up a small portion of the misleading content circulating across the web. During 2024, which was deemed “the year of the elections,” AI-generated content—while present—did not derail electoral processes around the world. And in the early days of the Israel-Hamas war, AI content was again present, but it represented just a small fraction of the overall misleading claims and recycled imagery circulating online. Does the current ongoing conflict in Iran truly represent a significant leap in AI-generated imagery? And if so, what might explain such a meaningful shift?…(More)”.

Generative AI as a weapon of war in Iran

Article by Stefaan Verhulst: “As artificial intelligence systems rapidly evolve and start to impact nearly every sector of society, the conversation around governance has mainly focused on models (and their output): their transparency, fairness, accountability, and alignment. Yet this focus, while necessary, is incomplete. AI systems are only as reliable, equitable, and effective as the data (input) on which they are trained and operate.

Data governance is not peripheral to AI governance — it is its bedrock.

At the same time, the rise of AI is not simply placing new demands on data governance; it is fundamentally transforming it. What counts as data, how it is curated, who has a say in its use, and which institutional arrangements govern it are all being reimagined in response to AI’s capabilities and risks.

This essay examines 10 key areas or shifts where data governance is being reshaped—either to accommodate AI or as a direct consequence of it…(More)”.

Data Governance in the AI Era: 10 Shifts Redefining Data, Institutions, and Practice

Article by Stefaan Verhulst: “Despite decades of investment in statistical systems and open data initiatives, official data remains difficult to discover, interpret, and apply in practice. The challenge is no longer one of availability, but of (re)usability. This persistent gap underscores a broader paradox at the heart of contemporary data governance: data may be open, yet it remains functionally inaccessible for many intended users.

In this context, the International Monetary Fund has been a pioneer in exploring how artificial intelligence and open data can intersect to address this usability challenge. Its StatGPT: AI for Official Statistics report, by James Tebrake, Bachir Boukherouaa, Jeff Danforth, and Niva Harikrishnan, offers a timely and important contribution to this evolving conversation, pointing toward a future where AI can make official data more navigable, interpretable, and actionable.

The data challenge is no longer just about availability, but about (re)usability.

The report provides a detailed account of the friction users face across the data lifecycle. Even highly motivated users must navigate fragmented portals, inconsistent terminology, and siloed datasets, often spending significant time assembling information that should be readily accessible. 

The result is a fragmented ecosystem in which metadata and data are distributed across institutions and platforms, forcing users to navigate multiple systems and standards—and to reconstruct context—before they can assess whether the data is re-usable. 

This resonates strongly with broader observations across the open data ecosystem: access alone does not guarantee impact. Without the ability to meaningfully engage with data, openness risks becoming performative rather than transformative…(More)”.

StatGPT and the Fourth Wave of Open Data

Article by Mona Mourshed and Nalini Tarakeshwar: “Following international aid declines, philanthropy is searching for innovative ways to support non-profits and Global South governments in delivering service solutions where outcomes data plays a central role.

Achieving data-driven innovation requires more than gathering the right facts – it must generate change in daily routines. The global philanthropy sector is now waking up to this idea.

Below are three key lessons from non-profits that have successfully deployed data in their work in the Global South and made real progress toward their goal of driving meaningful system change.

Lesson 1: Data users respond far better to carrots than sticks

If government staff feel that something bad will happen should their data reveal underperformance, they are unlikely to gather it. Philanthropy can play a catalytic role by supporting projects that combine data usage with fresh incentives and support.

Generation India works with national and state-government entities in a public-private partnership structure funded equally by both. Previously, training providers in government-funded programmes were reimbursed largely for training and certification; those two milestones accounted for more than 70% of the government payment per learner. While the remainder of the government payment per learner did include some outcomes metrics, such as job placement and three-month job retention, the process for verifying these outcome metrics was cumbersome and lengthy, discouraging efforts in this direction. Further, since training providers had learned how to break even on the 70% of input-related payments, they were willing to forgo additional outcome-related payments. The combined result was a job placement rate of less than 25%.

To turn things around, the partnership of Generation India and government entities reduced the input payments linked to programme completion to 56% and increased outcomes compensation to 44%. In parallel, it introduced new payment milestones based on job placement within three months of programme completion and job retention at three- and six-months after the initial placement, both of which are verified by third parties.
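The incentive shift can be made concrete with a toy milestone-payment calculation. The individual milestone values and schedules below are invented for illustration; only the ~70%/56% input shares come from the article:

```python
def provider_payment(milestones_met, schedule):
    """Sum payments for the milestones a provider has had verified."""
    return sum(schedule[m] for m in milestones_met)

# Hypothetical per-learner schedules (shares of a 100-unit payment),
# reflecting the ~70% input share before the reform and 56% after it.
old_schedule = {"training": 40, "certification": 30,
                "placement": 20, "retention_3m": 10}
new_schedule = {"training": 30, "certification": 26,
                "placement_3m": 22, "retention_3m": 11, "retention_6m": 11}

# A provider that only trains and certifies could break even on 70 units
# under the old scheme...
print(provider_payment({"training", "certification"}, old_schedule))  # 70
# ...but collects only 56 under the new one, so placement and retention
# milestones now matter to its bottom line.
print(provider_payment({"training", "certification"}, new_schedule))  # 56
```

The design choice is simple: shrink the input-linked share below break-even so that outcome milestones stop being optional revenue.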

There’s a similar playbook at the Brazilian Collaborative Leadership alliance, a partnership between the Lemann Foundation and federal, state and municipal governments, which reaches 70% of first and second graders in the country. To advance literacy, the Lemann Foundation funds teacher training and provides better-quality textbooks for students at all participating schools. The state commits to joining the national literacy programme, which includes instruction materials and assessments of second grade students. The state also recognizes schools with the best results by granting their principals cash awards with an average value of $10,000. While the recognized schools receive 60-75% of the cash award immediately, they can only access the remaining 25-40% if they help another school in their community improve its literacy outcomes, which spurs an additional layer of support. Lastly, 2-5% of state tax revenue is given to municipal governments based on their performance against targets, with each free to decide how it uses these funds…(More)”.

How non-profits and governments use data to drive real system change

Article by Hélène Landemore: “American democracy has a personality problem.

At its core, our political system is a popularity contest. Elections reward those who are comfortable performing in public and on social media, projecting confidence and dominating attention. This dynamic tends to select for so-called alpha types, the charismatic and the daring, but also the entitled, the arrogant and even the narcissistic.

This raises a basic but rarely asked question: Why are we filtering out the quiet voices? And at what cost?

Over the past two decades, my research on collective intelligence in politics, democratic theory and the design of our institutions shows that the system structurally excludes those I call, in my new book, “the shy.” By the shy I mean not just the natural introverts, but all the people who have internalized the idea that they lack power, that politics is not built for them, and who could never imagine running for office. That is, potentially, most of us, though predictable groups — women, the young and many minorities — are overrepresented in that category.

The early-20th-century British writer G.K. Chesterton once offered a striking and unusual metaphor for what democracy should look like. He wrote, “All real democracy is an attempt (like that of a jolly hostess) to bring the shy people out.” What would our democratic institutions look like if we took that metaphor seriously?

One answer — perhaps the most promising one we have at this time — can be found in citizens’ assemblies.

Citizens’ assemblies are large groups of ordinary people, selected by lottery, who come together to learn about a public issue, hear from experts and advocacy groups, deliberate with one another and make recommendations. Picture jury duty for politics. Through random selection, citizens’ assemblies reach deep into the body politic to bring even the initially unwilling to the table. Once seated, participants are given time, structure and support to find their voices and contribute to forming a thoughtful collective judgment…(More)”.

No Shy Person Left Behind

Paper by Kayla Schwoerer: “Despite widespread adoption of open government data (OGD) initiatives, actual use remains limited, raising questions about how these public digital platforms are designed and governed. Prior research highlights the importance of data quality and usability for encouraging OGD use, yet empirical evidence linking specific design choices to observed user behavior remains scarce. This study draws on affordance theory to examine how metadata design features embedded in open data platforms shape open data use. The analysis draws on primary data collected from 15 U.S. cities’ open data platforms (N = 5863), first to assess the extent to which government agencies actualize metadata affordances to promote data quality and usability, and then to test the relationship between affordance actualizations and two observed measures of use: dataset views and downloads. Results show that multiple dimensions of metadata practice are strongly and consistently associated with OGD use, with some practices linked to substantially higher levels of open data use. Even within a shared platform environment, variation in how publishers provide metadata corresponds to meaningful differences in how often datasets are accessed, highlighting that metadata governance is not merely a technical detail but a factor that materially shapes user engagement with open data…(More)”.

Same platform, different outcomes: Metadata practices and open data use

Article by Amrita Sengupta and Shweta Mohandas: “The rapid integration of artificial intelligence in healthcare settings raises questions about the adequacy of existing data protection frameworks, particularly the reliance on informed consent as the primary mechanism for legitimising the collection and use of health data for AI model training. This paper examines whether informed consent, as operationalized under India’s Digital Personal Data Protection Act (DPDPA) 2023, can serve as a satisfactory legal and ethical basis for using health data in AI development.

Drawing on the historical evolution of consent from medical research contexts to contemporary digital data protection regimes, this paper demonstrates that consent-based frameworks face structural limitations when applied to AI systems. The analysis reveals a trifecta of consent challenges: patients must consent to medical procedures, to digital health record creation, and implicitly to future AI model training, often without comprehending the scope, purpose, or risks of data reuse.

This paper advances three broad analyses: first, the limitations of informed consent in data protection and the challenges of operationalising it in healthcare; second, the dilution of patient consent and autonomy in AI model training; and third, the role of anonymisation in the use of data for AI. Recognizing these limitations, the paper proposes alternative governance frameworks that complement individual consent…(More)”.

The imaginary of informed consent: Rethinking approaches to data use for AI in healthcare
