
Stefaan Verhulst

Article by Dalmeet Singh Chawla: “Policy-relevant research is drowning in the flood of scientific papers that are published every day. But what if temporary ‘pop up’ journals that were devoted to a single, urgent question could deliver clear, actionable information straight to the policymakers who need it?

That’s the thinking behind the Pop-Up Journal Initiative, which aims to connect policymakers looking for solid evidence to back fresh policies with researchers who are collecting the relevant evidence.

With funding of some US$1 million from two non-profit organizations — the Alfred P. Sloan Foundation in New York City and Coefficient Giving in San Francisco, California — the initiative will set up journals that will publish articles focused on a single question for a period of time, roughly a few years, before closing the pop-up journal to submissions.

“I’m less interested in topics than in questions, and I’m less interested in publishing than I am in curation,” says one of the project leaders, Daniel Goroff, who is the vice-president and programme director at the Alfred P. Sloan Foundation.

The first pop‑up journal is set to begin publishing this year and will concentrate on a classic economics question: how much growth does investment in research and development (R&D) actually generate?

Goroff says this kind of question is often asked in high-level government discussions, but clear answers are scarce.

“When I’ve testified before [US] Congress or dealt with an appropriations bill or a budget negotiation, this question, of what is the return on investments when you’re doing R&D, comes up quite often,” says Goroff. “It’s been asked by economists in very formal ways since at least the 1950s, but the data and the methods that were available were really not very strong.”

Most researchers would jump at the chance to get their paper in the hands of the right policymaker — but would they submit their work to a journal that stops publishing after a few years, instead of a more prestigious title?…(More)”.

Pop-up journals for policy research: can temporary titles deliver answers?

Article by Solomon Messing and Joshua A. Tucker: “A viral blog post recently compared the current moment in artificial intelligence (AI) to February 2020, just before COVID-19 turned the world upside down. While that analogy may be flawed, it’s hard to ignore recent developments with AI coding agents, which prompted our colleague Andy Hall to post that AI agents are coming for the social sciences “like a freight train.”

Here’s why we agree: in just the past month, we’ve used AI coding agents to do the following: (1) transform a minimal implementation of a method for analyzing heterogeneous treatment effects into a fully functional, modular, and well-documented R package in just over a day; (2) produce a twenty-page summary, for our own edification, of business responses to the Russian invasion of Ukraine based on materials found on this website, including data visualizations, statistical analyses, and a complete replication file, in under an hour; and (3) develop the infrastructure, data collection, analysis, and reporting pipeline for a pilot study examining what kinds of political prompts LLMs refuse to address across five languages and five frontier models.

Yes, those of us working in the academy have been wrestling with what generative AI means for teaching for a few years now, and lots of us have begun to integrate generative AI into routine tasks like summarizing papers and even coding assistance. But the current moment feels like it could be quite different, and we suspect many things in the academy are about to change…(More)”.

The train has left the station: Agentic AI and the future of social science research

Book edited by Stephen Kwamena Aikins and Tamara Dimitrijevska-Markoski: “…offers a timely and comprehensive exploration of how AI is transforming public institutions across the globe. From climate resilience and urban planning to justice and equitable service delivery, this book examines the profound opportunities and challenges that AI brings to the heart of governance. Drawing on insights from leading scholars and practitioners, it reveals how governments are harnessing advanced technologies—such as machine learning, data analytics, and robotics—to anticipate societal needs, improve policy outcomes, and engage citizens in new ways.

Yet, as algorithms increasingly shape the fabric of public life, the book does not shy away from the pressing ethical, legal, and societal dilemmas that accompany AI’s rise. It confronts questions of algorithmic bias, accountability, privacy, and the urgent need for democratic oversight in a world run by data. Through in-depth case studies, empirical and conceptual investigations, policy insights, and expert analyses, Artificial Intelligence and Government provides a multidimensional understanding of AI’s influence on government operations, offering practical guidance for policymakers, public administrators, technologists, researchers, and engaged citizens alike.

The book’s central aim is to bridge the critical knowledge gap surrounding AI’s integration into government. It examines the current state of AI adoption across governments worldwide, including adoption strategies, readiness frameworks, and how governments are innovating with AI technologies, while offering philosophical and forward-looking critique. Additionally, the book analyzes the barriers to AI adoption, assesses the impact on policy, service quality and citizen engagement, and offers solutions to mitigate implementation challenges. By examining both the innovations and the ethical complexities of AI in the public sector, the book equips readers with the insights and principles needed to build fairer, more transparent, and future-ready institutions…(More)”.

Artificial Intelligence and Government

Article by Shanta Devarajan and Eeshani Kandpal: “At least since former World Bank President Jim Wolfensohn coined the phrase “knowledge bank,” there have been periodic efforts to strengthen evidence-based policymaking at the World Bank. They have focused overwhelmingly on the supply of knowledge, with a steady stream of “flagship reports.” The World Bank has invested in better data, more rigorous research, systematic reviews, impact evaluations, and increasingly sophisticated analytics to inform its operations. The most recent reorganization aims to create a “knowledge bank.”

Yet previous reorganizations and rhetoric have not consistently translated into improved research quality or greater development impact. High-quality evidence often fails to shape policy choices, lending priorities, or institutional reforms in low- and middle-income countries—or even within the bank itself.

What is missing from this conversation is the demand for knowledge.

Evidence-based policy does not emerge simply because good evidence exists. It emerges when institutions are structured so that decision-makers (1) want to know, and (2) are rewarded for using knowledge. The history of places like Bell Labs illustrates that insight production depends at least as much on institutional demand for understanding as on the technical ability to generate it…(More)”.

The World Bank Doesn’t Need to Generate More Knowledge. It Needs to Want It.

Open Access Book by Bart van der Sloot: “Commonly attributed to the oracle of Delphi, the classic Greek maxim “Know thyself” is in fact much older. Although emblematic, its meaning is by no means unambiguous, even within Greek antiquity. Originally, it served as a call to humility: know your limits, know your place. One was cautioned not to think too highly of oneself, but to recognise the self as a minuscule fragment subject to the caprices of the gods, constrained by societal rules and bodily needs. Later, especially through the work of Plato, the maxim took on a different—perhaps even antithetical—meaning, shifting towards self-knowledge and introspection. While not entirely stripped of its original undertone, it increasingly came to symbolise humanity’s intellectual capacity for self-understanding and self-correction. In this guise, it was revitalised during the Enlightenment as a credo of individual autonomy.

This book is about both sides of that coin. It explores the myriad ways in which humans are shaped by society, bound by personal histories, and constrained by physical and cognitive limitations. As will become evident, man is an ambivalent being—often far more fragile than his intellectual bravado would suggest. If there is a universal human experience, it is one of limitation, misjudgement, and failure. At the same time, like no other creature, man possesses the unique capacity for self-reflection and self-improvement. Humanity, for better or worse, has managed to bring vast stretches of the universe under its dominion, perpetually striving to refine and master itself. In modern Western culture, being autonomous stands perhaps as the highest of all aspirations.

Technology can deepen this rift—who has not felt a tinge of shame upon viewing their weekly screen time?—but it also amplifies our capacities, granting us ever greater control over ourselves and our environments, if only through the gentle prodding of fitness apps reminding us that we have yet to meet our daily step count. Yet while it augments self-regulation, it also grants technology companies unprecedented insight into our inner conflicts,…(More)”.

From Autonomy to Ambiguity: Reconfiguring the Legal Landscape in the Age of AI

Article by Michelle Goldberg: “When I saw “Data,” a zippy Off Broadway play about the ethical crises of employees at a Palantir-like A.I. company, last month, I was struck by its prescience. It’s about a brilliant, conflicted computer programmer pulled into a secret project — stop reading here if you want to avoid spoilers — to win a Department of Homeland Security contract for a database tracking immigrants. A brisk theatrical thriller, the play perfectly captures the slick, grandiose language with which tech titans justify their potentially totalitarian projects to the public and perhaps to themselves.

“Data is the language of our time,” says a data analytics manager named Alex, sounding a lot like the Palantir chief Alex Karp. “And like all languages, its narratives will be written by the victors. So if those fluent in the language don’t help democracy flourish, we hurt it. And if we don’t win this contract, someone else less fluent will.”

I’m always on the lookout for art that tries to make sense of our careening, crisis-ridden political moment, and found the play invigorating. But over the last two weeks, as events in the real world have come to echo some of the plot points in “Data,” it’s started to seem almost prophetic.

Its protagonist, Maneesh, has created an algorithm with frighteningly accurate predictive powers. When I saw the play, I had no idea whether such technology was really on the horizon. But this week, The Atlantic reported on Mantic, a start-up whose A.I. engine outperforms many of the best human forecasters across domains from politics to sports to entertainment…(More)”.

He Studied Cognitive Science at Stanford. Then He Wrote a Startling Play About A.I. Authoritarianism.

Paper by Daron Acemoglu, Dingwen Kong & Asuman Ozdaglar: “We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge ultimately vanishes, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse…(More)”.
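The tipping dynamic the abstract describes can be sketched with a toy discrete-time simulation. To be clear, this is our own stylized illustration, not the authors’ model: the functional forms, the parameters (`delta`, `B`, `c`), and the resulting threshold are all illustrative assumptions, chosen only to show how substitution (effort falls as agentic accuracy rises) and complementarity (effort rises with the knowledge stock) can together produce a collapse threshold.

```python
# Toy illustration of a knowledge-collapse tipping point (NOT the paper's
# actual model; all functional forms and parameters are assumptions).

def simulate(a, periods=500, K0=1.0, delta=0.1, B=2.0, c=1.0):
    """Iterate a stylized knowledge stock K under agentic accuracy a in [0, 1].

    Human effort = (1 - a) * B * K / (c + K): it falls as agentic accuracy a
    substitutes for it (the 1 - a factor), and rises with the knowledge
    stock K (complementarity, via the saturating K / (c + K) term).
    """
    K = K0
    for _ in range(periods):
        effort = (1.0 - a) * B * K / (c + K)
        K = (1.0 - delta) * K + delta * effort  # stock decays, replenished by effort
    return K

print(round(simulate(a=0.2), 3))  # modest accuracy: settles at a positive steady state
print(round(simulate(a=0.9), 6))  # high accuracy: effort collapses, K decays to zero
```

In this toy dynamic, a positive steady state K* = (1 - a) * B - c exists only when a < 1 - c/B (here 0.5); beyond that accuracy, the only stable outcome is K = 0, mirroring the abstract’s claim that sufficiently accurate agentic recommendations can tip the economy into knowledge collapse.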

AI, Human Cognition and Knowledge Collapse

Article by Dashun Wang: “In the early 1980s, Apple co-founder Steve Jobs described the computer as “a bicycle for our minds”. He was inspired by a Scientific American graphic he’d encountered as a boy, showing that a human on a bicycle is more energy-efficient than any animal. The metaphor captured the promise of personal computing: tools that enable people to go further and faster with less effort. But the deeper brilliance of bicycles lies in what they do not do: they do not mimic human biology, nor any form found in nature. The bicycle reimagined motion entirely.

By comparison, I propose that artificial-intelligence agents are aeroplanes for the mind — they can speed things up for humans even more than bicycles do, but they are harder to control and the consequences of mistakes can be huge. And scientists are particularly poised to benefit from these tools. Scientific research is, at its core, a journey into the unknown. Yet working in new terrains brings unexpected challenges and frequent failures…(More)”.

AI agents are ‘aeroplanes for the mind’: five ways to ensure that scientists are responsible pilots

Paper by Haydn Belfield: “The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI talent. Both are crucial to the future of AI activism and worthy of sustained attention…(More)”.

Activism by the AI Community: Analysing Recent Achievements and Future Prospects

Paper by Amirmohammad Ghavimi: “Collective memory—closely related to, yet distinct from, social memory—plays a significant role in guiding the sustainable transition of cities. Multiple qualitative, quantitative, and mixed methods have been employed to investigate collective memory; however, there remains a need to spatially map it for each city to provide decision-makers with a clear, quantitative guide. Such mapping can help preserve and strengthen a city’s collective memory, thereby informing future urban development. This study examines the urban dimension of collective memory—collective urban memory (CUM)—by mapping its tangible, physical aspects through a facilitated Public Participation GIS (PPGIS) approach within a citizen science framework. Due to challenges in encouraging public use of the mobile GIS application QField, we adopted a facilitated PPGIS approach, whereby trained interviewers assisted participants in the data collection process. Results from Oldenburg, Germany, identified several significant urban locations that play key roles in the city’s CUM. Notably, certain places are mentioned disproportionately by different age groups, while a common core set of tangible landmarks emerges across the population. These findings highlight the value of mapping CUM to support culturally sensitive and sustainable city planning…(More)”.

Mapping Collective Memory: A Public Participation GIS Case Study with a Citizen Science Approach
