

Article by Soren Kaplan: “…Nonprofits face growing pressure to do more with less: Rising demand, shrinking funding, and increasingly complex social issues often exceed the capacity—and mission—of any single organization. At the same time, because today’s most urgent challenges are interconnected and systemic, effectively addressing hunger, homelessness, education, or health equity requires a level of strategic coordination that few organizations can achieve on their own.

Most nonprofits operate in isolation from each other, for reasons that are easy to understand. Structural incentives such as funding models, branding, and board expectations often reinforce competition over collaboration. As a result, even when their missions overlap, organizations frequently compete for limited funding, volunteers, and visibility, duplicating services in some areas while leaving needs unmet in others.

Five organizations in Contra Costa County, California, have been developing a new model to align their efforts to address food insecurity without sacrificing their autonomy as distinct organizations. I would suggest that the example of The Food Security Collaborative offers a replicable blueprint that other social sector leaders can adapt to their local contexts, a model for how—rather than working in isolation—nonprofits can connect their missions, integrate data, share resources, and coordinate services to amplify impact across a shared system.

I call this model an “Impact Collaborative”…(More)”.

The Impact Collaborative

Article by Mariel Lozada and Rina Chandran: “Among the thousands of candidates seeking election in Colombia’s parliamentary polls this month is an artificial intelligence avatar. Its creator hopes it will change the way Indigenous people in the country are represented, despite concerns about bias and access.

Gaitana — created by Carlos Redondo and other members of the Zenú community — is the digital representation of two Indigenous candidates for Senate and congressional seats in the March 8 election. Named for a 16th-century revolutionary leader, Gaitana is depicted as a blue-skinned woman who is an environmentalist and animal rights advocate. The bot communicates in Spanish and currently has more than 10,000 users.

Colombian law requires human candidates, so Redondo is competing for the Senate, and Alba Rincón, an anthropologist and sociologist from the Emberá Katío ethnic group, is running for the House of Representatives. On the ballot, though, they appear as IA, the Spanish acronym for artificial intelligence. If elected, Redondo and Rincón will occupy seats reserved for Indigenous people, and defer to the digital platform to seek consensus from their communities on all legislative matters, Redondo told Rest of World…(More)”.

An AI avatar is running to represent Indigenous voters in Colombia

Book edited by Steven Bernstein and William D. Coleman: “Globalization has challenged taken-for-granted relationships of rule in local, regional, national, and international settings. This unsettling of legitimacy raises questions. Under what conditions do individuals and communities accept globalized decision making as legitimate? And what political practices do individuals and collectivities under globalization use to exercise autonomy?

To answer these questions, the contributors to Unsettled Legitimacy explore the disruptions and reconfigurations of political authority that accompany globalization. Arguing that we live in an era in which political legitimacy at multiple scales of authority is under strain, they show that globalization has also created demands for regulation, security, and the protection of rights and expressions of individual and collective autonomy within and across multiple political and geographic spaces. Instead of offering simplistic arguments for or against global governance, enhanced democracy, or economic integration, the contributors provide a sophisticated examination of the complexities of legitimacy and autonomy in a globalizing world…(More)”.

Unsettled Legitimacy

Paper by Nicolien Janssens and Frederik van de Putte: “Recent years have seen an increase in the use of online deliberation platforms (DPs). One of the main objectives of DPs is to enhance democratic participation, by allowing citizens to post, comment, and vote on policy proposals. But in what order should these proposals be listed? This paper makes a start with the principled evaluation of sorting methods on DPs. First, we introduce a conceptual framework that allows us to classify and compare sorting methods in terms of their purpose and the parameters they take into account. Second, we observe that the choice for a sorting method is often ad hoc and rarely justified. Third and last, we criticise sorting by number of approvals (‘likes’), a method that is very common in practice. On the one hand, we show that if approvals are used for sorting, this should be done in an integrated way, also taking into account other parameters. On the other hand, we argue that even if proposals are on a par in terms of those other parameters, there are other, more appropriate ways to sort proposals in light of the approvals they have received…(More)”.
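The paper’s argument about approval-based sorting can be made concrete with a small, purely illustrative sketch (not taken from the paper): ranking proposals by raw like counts versus an “integrated” score that also weighs another parameter, here recency. The data fields, weighting scheme, and parameter names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Proposal:
    title: str
    approvals: int        # number of 'likes' received
    posted_at: datetime   # submission time (UTC)

def sort_by_approvals(proposals):
    # The common practice the paper criticises: rank purely by like count.
    return sorted(proposals, key=lambda p: p.approvals, reverse=True)

def sort_integrated(proposals, recency_weight=0.5):
    # Hypothetical 'integrated' ranking: approvals discounted by proposal age,
    # so newer submissions are not buried under long-listed popular ones.
    now = datetime.now(timezone.utc)
    def score(p):
        age_days = max((now - p.posted_at).total_seconds() / 86400, 1.0)
        return p.approvals / (age_days ** recency_weight)
    return sorted(proposals, key=score, reverse=True)
```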

Sorting Methods for Online Deliberation: Towards a Principled Approach

Essay by Stefaan Verhulst: “Today, the question facing governments is no longer whether they should use data. Especially in an age of Artificial Intelligence, that debate is long settled. The harder and more urgent question is how to govern data in ways that are trusted, durable, and fit for increasingly complex societal challenges. In short – how to make government data initiatives more effective and legitimate at the same time?

This question is where data stewardship steps in. Within a government setting, data stewardship is the practice of governing public-sector data as a shared civic asset, one whose value depends not only on technical performance but on legitimacy and institutional accountability. It begins from a recognition that data is a social artifact, embedded in social, political, and cultural processes. Only such a lived, non-technocratic approach can help local and state governments navigate the trade-offs, uncertainties, and value conflicts that increasingly define public-sector data use.

Limitations of the Current Approach

Too often, data strategy is framed as a technical exercise. But the shortcomings of a technocratic approach leave state and local data and innovation officers navigating tensions that technical solutions alone cannot resolve: promoting data sharing and reuse while also safeguarding privacy and civil liberties; delivering innovation and efficiency even as public skepticism and trust deficits deepen; and responding to growing pressure to deploy AI systems despite unclear lines of accountability and uneven institutional capacity. 

These are not engineering problems but governance challenges that require judgment, legitimacy, and sustained institutional stewardship. While current orientations toward data governance, focused on risk avoidance and compliance, continue to have an important role, today’s data requires socially embedded governance that centers legitimacy and public trust, facilitated by data stewards.

The Vital Role of Data Stewards 

Data stewards move beyond today’s limited, technocratic approach, instead advancing—and embodying—a principle of legitimacy rather than merely compliance. This shift reframes the questions that guide data practice, from whether a particular use is legally permissible to whether it is socially appropriate, publicly understandable, and institutionally accountable…(More)”.

From Data Ambition to Public Value

Article by Sharifah Sekalala, Shajoe J. Lake, Allan Maleche, and Timothy Wafula: “On 4 December 2025, Kenya became the first country to sign a five-year bilateral global health agreement (BGHA) called a Health Cooperation Framework with the US, worth about US$1.6–2.5 billion and presented as a bold move to strengthen HIV, TB, malaria, and pandemic preparedness, including surveillance and workforce reforms. Soon after, Rwanda and Uganda signed similar BGHAs worth US$228 million and US$2.3 billion, respectively. These agreements are explicitly branded as part of an “America First” global health strategy, under which dozens of similar bilateral deals are expected across Africa.

The official explanation is that the US needs near real-time access to Kenyan health data, including outbreak surveillance, to protect global health security and guide its investments. The BGHAs give US agencies access to digital health systems and outbreak databases, and authority to audit a sample of facilities, in exchange for large-scale funding that is framed as helping these countries achieve “health sovereignty”. A separate US template for BGHAs, reported in November, asked partner countries to share biological specimens and genetic sequences of pathogens with epidemic potential within days of detection, with specimen-sharing commitments lasting up to 25 years but without guaranteed reciprocal access to any vaccines or treatments later developed. Here, health data and pathogen access are the price of re-entering US funding circuits after earlier aid cuts.

Kenyan officials claim that only aggregated data will be shared and that Kenyan law, including the Data Protection Act and Digital Health Act, formally prevails over the agreement. However, in practice, when core systems, hosting, and analytics are controlled through foreign-designed architectures and contracts, local regulators face a steep uphill battle to supervise what happens once health data migrates…(More)”.

America First, Africa Last? Health data deals and the new scramble for pathogens

Article by Dalmeet Singh Chawla: “Policy-relevant research is drowning in the flood of scientific papers that are published every day. But what if temporary ‘pop up’ journals that were devoted to a single, urgent question could deliver clear, actionable information straight to the policymakers who need it?

That’s the thinking behind the Pop-Up Journal Initiative, which aims to connect policymakers looking for solid evidence to back fresh policies, with researchers who are collecting the relevant evidence.

With funding of some US$1 million from two non-profit organizations — the Alfred P. Sloan Foundation in New York City and Coefficient Giving in San Francisco, California — the initiative will set up journals that will publish articles focused on a single question for a period of time, roughly a few years, before closing the pop-up journal to submissions.

“I’m less interested in topics than in questions, and I’m less interested in publishing than I am in curation,” says one of the project leaders, Daniel Goroff, who is the vice-president and programme director at the Alfred P. Sloan Foundation.

The first pop‑up journal is set to begin publishing this year and will concentrate on a classic economics question: how much growth does investment in research and development (R&D) actually generate?

Goroff says this kind of question is often asked in high-level government discussions, but clear answers are scarce.

“When I’ve testified before [US] Congress or dealt with an appropriations bill or a budget negotiation, this question, of what is the return on investments when you’re doing R&D, comes up quite often,” says Goroff. “It’s been asked by economists in very formal ways since at least the 1950s, but the data and the methods that were available were really not very strong.”

Most researchers would jump at the chance to get their paper in the hands of the right policymaker — but would they submit their work to a journal that stops publishing after a few years, instead of a more prestigious title?…(More)”.

Pop-up journals for policy research: can temporary titles deliver answers?

Article by Solomon Messing and Joshua A. Tucker: “A viral blog post recently compared the current moment in artificial intelligence (AI) to February 2020, just before COVID-19 turned the world upside down. While that analogy may be flawed, it’s hard to ignore recent developments with AI coding agents, which prompted our colleague Andy Hall to post that AI agents are coming for the social sciences “like a freight train.”

Here’s why we agree: in just the past month, we’ve used AI coding agents to do the following: (1) transform a minimal implementation of a method for analyzing heterogeneous treatment effects into a fully functional, modular, and well-documented R package in just over a day; (2) produce a twenty-page summary, for our own edification, of business responses to the Russian invasion of Ukraine based on materials found on this website, including data visualizations, statistical analyses, and a complete replication file, in under an hour; (3) develop the infrastructure, data collection, analysis, and reporting pipeline for a pilot study examining what kinds of political prompts LLMs refuse to address across five languages and five frontier models.

Yes, those of us working in the academy have been wrestling with what generative AI means for teaching for a few years now, and lots of us have begun to integrate generative AI into routine tasks like summarizing papers and even coding assistance. But the current moment feels like it could be quite different, and we suspect many things in the academy are about to change…(More)”.

The train has left the station: Agentic AI and the future of social science research

Book edited by Stephen Kwamena Aikins and Tamara Dimitrijevska-Markoski: “…offers a timely and comprehensive exploration of how AI is transforming public institutions across the globe. From climate resilience and urban planning to justice and equitable service delivery, this book examines the profound opportunities and challenges that AI brings to the heart of governance. Drawing on insights from leading scholars and practitioners, it reveals how governments are harnessing advanced technologies—such as machine learning, data analytics, and robotics—to anticipate societal needs, improve policy outcomes, and engage citizens in new ways.

Yet, as algorithms increasingly shape the fabric of public life, the book does not shy away from the pressing ethical, legal, and societal dilemmas that accompany AI’s rise. It confronts questions of algorithmic bias, accountability, privacy, and the urgent need for democratic oversight in a world run by data. Through in-depth case studies, empirical and conceptual investigations, policy insights, and expert analyses, Artificial Intelligence and Government provides a multidimensional understanding of AI’s influence on government operations, offering practical guidance for policymakers, public administrators, technologists, researchers, and engaged citizens alike.

The book’s central aim is to bridge the critical knowledge gap surrounding AI’s integration into government. It examines the current state of AI adoption across governments worldwide, including adoption strategies, readiness frameworks, and how governments are innovating with AI technologies, while offering philosophical and forward-looking critique. Additionally, the book analyzes the barriers to AI adoption, assesses the impact on policy, service quality and citizen engagement, and offers solutions to mitigate implementation challenges. By examining both the innovations and the ethical complexities of AI in the public sector, the book equips readers with the insights and principles needed to build fairer, more transparent, and future-ready institutions…(More)”.

Artificial Intelligence and Government

Article by Shanta Devarajan and Eeshani Kandpal: “At least since former World Bank President Jim Wolfensohn coined the phrase “knowledge bank,” there have been periodic efforts to strengthen evidence-based policymaking at the World Bank. They have focused overwhelmingly on the supply of knowledge, with a steady stream of “flagship reports.” The World Bank has invested in better data, more rigorous research, systematic reviews, impact evaluations, and increasingly sophisticated analytics to inform its operations. The most recent reorganization aims to create a “knowledge bank.”

Yet previous reorganizations and rhetoric have not consistently translated into improved research quality or greater development impact. High-quality evidence often fails to shape policy choices, lending priorities, or institutional reforms in low- and middle-income countries—or even within the bank itself.

What is missing from this conversation is the demand for knowledge.

Evidence-based policy does not emerge simply because good evidence exists. It emerges when institutions are structured so that decision-makers (1) want to know, and (2) are rewarded for using knowledge. The history of places like Bell Labs illustrates that insight production depends at least as much on institutional demand for understanding as on the technical ability to generate it…(More)”.

The World Bank Doesn’t Need to Generate More Knowledge. It Needs to Want It.
