
Stefaan Verhulst

Paper by Melissa Ross, Hazel Jovita, and Lucas Veloso: “Growing research on citizens’ assemblies has focused primarily on ‘frontstage’ standardization and deliberative quality, centering the experience of assembly members. Only very recently have studies and guidelines turned to the ‘backstage,’ considering who makes decisions and how when it comes to organizing and running citizens’ assemblies. This article provides four original contributions to this emerging line of inquiry. First, we identify three constitutive elements of governance: who governs (stakeholders), what (decision-making), and how (integrity). Second, we provide a working definition of governance as the negotiation of tensions between deliberative values and practical constraints in commissioning, designing, and delivering citizens’ assemblies. Third, we illustrate these findings with original focus group and interview data from the 2021 Global Assembly on Climate and Ecological Crisis, centering the experience of a global community of practice. Fourth, we reveal three key tensions at the core of governing citizens’ assemblies: (1) collaboration across diverse stakeholders, (2) grounding decision-making, and (3) balancing horizontal and vertical logics. These elements and tensions offer insights for both future research and practice…(More)”.

The Governance of Citizens’ Assemblies: Negotiating deliberative values and practical constraints

Report by Janna Anderson and Lee Rainie: “Hundreds of global technology experts share insights, urging an all-encompassing systems response by leaders to serve humanity’s best interests in light of rapid technological change…These globally located experts from all walks of life noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit,” and after-the-fact personal adaptation – are not enough to help humanity flourish as we adjust to an AI-infused future.

These experts predicted:

  • AI’s larger role: 82% said AI will have a significantly larger role in shaping our daily lives and key societal systems in the next 10 years or less; 13% said that level of change is 20-30 years away.
  • AI guiding decisions: 56% said that at the time they expect AI will be significantly more advanced it will influence, guide or control “nearly all” or “most” human activities and decisions (another 24% said AI will influence, guide or control nearly half of activities and decisions).
  • Resilience worries: 45% said humans will be only “a little” or “not at all” resilient in the face of that level of change. About half said people will be somewhat to very resilient.
    Of note: Many experts wrote in their essay responses that many to most humans will passively accept the influence of AI systems. Thus, these people will not feel any need to be resilient.
  • Satisfaction concerns: Only 33% said people will be more satisfied than dissatisfied with AI systems at that time; 31% said people will be more dissatisfied than satisfied; 33% said people will have an equal amount of satisfaction and dissatisfaction with AI systems.

(See downloadable PDFs: a four-page news release and a 15-page executive summary)…(More)”

Building a Human Resilience Infrastructure for the Age of AI

Book by Daniel P. Aldrich: “What if society could move past traditional “gray infrastructure” approaches—like seawalls and prisons—to address problems such as climate change, terrorism, and crime? In Beyond Common Ground, Daniel P. Aldrich argues that social infrastructure—physical and virtual spaces including parks, libraries, and radio programs—offers a more effective alternative by fostering coproduction and multiple benefits.

Drawing on qualitative and quantitative evidence from nine countries across Africa, Asia, and North America, this book demonstrates how these systems build social capital and resilience—and proposes practical policies for implementation. Case studies show that facilities in Japan, such as the elder-led Ibasho center, reduced mortality during the 2011 tsunami and accelerated recovery. Greening initiatives in Philadelphia mitigated crime, while radio countered extremist recruitment in the Sahel. Beyond Common Ground positions social infrastructure as a “polysolution” for our interconnected crises, urging society to stop treating it as a Cinderella service and prioritize equitable distribution…(More)”.

Beyond Common Ground: How Everyday Places Solve Big Social Challenges

Paper by Andrew Caplin: “Many important economic decisions depend not only on acquiring information, but on determining which questions are worth asking before committing to an action. Individuals and organizations must organize inquiry: they select questions, interpret answers, and update beliefs in ways that determine whether decisive distinctions are identified or missed. Despite its importance, the organization of inquiry is largely absent from formal economic analysis.


This paper studies decision-making under “diagnostic uncertainty,” in which agents must learn not only about payoff-relevant states, but about which questions are informative. We develop a model in which agents choose questions sequentially prior to commitment, while facing uncertainty over a latent diagnostic structure that governs how questions generate answers. Before each inquiry step, agents may incur a cognitive cost to refine beliefs about this diagnostic structure. The resulting problem is a finite-horizon dynamic program over inquiry, in which costly attention is allocated to belief transformations over diagnostic structure rather than directly to payoff-relevant states.

Two canonical diagnostic geometries isolate distinct planning margins. In one, value depends on locating a decisive question; in the other, on maintaining a correct sequence of questions. Both environments admit closed-form solutions and yield a common representation in which optimal inquiry increases the probability of success relative to uninformed search.

The framework identifies a distinct margin of economic behavior—planning under diagnostic uncertainty—that becomes increasingly important in environments where answers are abundant but the organization of inquiry remains scarce…(More)”.
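The paper’s central trade-off (pay a cognitive cost to refine beliefs about which question is decisive, or ask under an uninformed prior) can be illustrated with a deliberately tiny toy model. This is not Caplin’s formal model: the two-question setup, the function name, and the numbers below are all illustrative assumptions.

```python
def inquiry_value(p, cost):
    """Toy model: two candidate questions, exactly one is decisive.

    p    -- prior belief that question A (rather than B) is the decisive one
    cost -- cognitive cost of refining beliefs, i.e. learning which is decisive

    Returns (value, policy): the expected probability of success net of
    cost, and whether the agent should refine before asking.
    """
    # Uninformed: ask whichever question is currently believed more likely
    # to be decisive; success probability is just that belief.
    guess_value = max(p, 1.0 - p)
    # Refined: pay the cost, learn the diagnostic structure exactly,
    # then ask the decisive question with certainty.
    refine_value = 1.0 - cost
    if refine_value > guess_value:
        return refine_value, "refine"
    return guess_value, "guess"

# Refinement pays only when beliefs about the diagnostic structure
# are diffuse relative to its cost.
print(inquiry_value(0.5, 0.2))  # -> (0.8, 'refine')
print(inquiry_value(0.9, 0.2))  # -> (0.9, 'guess')
```

In this sketch, attention to the diagnostic structure is worth its cost only when the prior is diffuse; as beliefs sharpen, uninformed guessing dominates, echoing the paper’s claim that optimal inquiry raises the probability of success relative to uninformed search.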

Planning under Diagnostic Uncertainty: Question-Driven Learning in the Age of AI

Article by Carl Zimmer: “Scientists publish more than 10 million studies and other publications a year. Some of those findings will add to humanity’s storehouse of knowledge. But some will be wrong.

To assess a study, scientists can replicate it to see if they get the same result. But seven years ago, a team of hundreds of scientists set out to find a faster way to judge new scientific literature. They built artificial intelligence systems to predict whether studies would hold up to scrutiny.

The project, funded by the Defense Advanced Research Projects Agency, or DARPA, was called Systematizing Confidence in Open Research and Evidence — SCORE, for short. The idea came from Adam Russell, then a program manager for the agency. He envisioned generating a kind of credit score for science.

“People can say, ‘Hey, this is likely to be robust, we can premise a policy on it,’” said Dr. Russell, who is now at the University of Southern California. “‘But this? Nah, this might make for a book in the airport.’”

The SCORE team inspected hundreds of studies, running many of them again, to better understand what makes research hold up. Now it is publishing a raft of papers on those efforts.

For now, a scientific credit score remains a dream, the researchers say. Artificial intelligence cannot make reliable predictions…

For more than 15 years, some scientists have been trying to change the culture. They started by documenting the extent of the problem. In the early 2010s, Dr. Nosek and colleagues replicated 100 psychology papers — and matched the original results only 39 percent of the time.

In another project, Dr. Nosek teamed up with cancer biologists to replicate 50 experiments on animals and human cells. Fewer than half of the results withstood their scrutiny…(More)”.

Can Science Predict When a Study Won’t Hold Up?

Article by Stephen Sims: “On June 13, 2025, Iran’s air defense network was largely silent in the face of an intense Israeli bombing campaign. Just before the attack, swarms of explosive quadcopter drones, launched by Israel from inside Iranian territory and acting on vast troves of intelligence sifted by AI to select targets, had taken out Iran’s radar systems and numerous missile sites. Israel’s one-two punch made Iran an object lesson in how a combination of AI and drones is blazing a new trajectory for international politics.

Not long before, on June 1, Ukraine had employed a strikingly similar tactic, using cargo trucks with false inventories to smuggle drones deep into Russian territory. The drones had been trained using AI to recognize Tu-95 “Bear” bombers based on photographs taken of a decommissioned version in a Ukrainian air museum and to recognize the weakest point of the bombers, often the fuel tanks in the wings. This allowed the drones, flying first autonomously and then with human pilots, to strike Russian bombers with high precision as far away as Siberia.

In the grand scheme of geopolitics, these events were small. The conflict between Iran and Israel ended up being more like glorified shadowboxing than real war, and the Ukrainian strike on Russia did nothing to change the relentless, grinding attrition of the front line. These events are not obvious ruptures in international politics, as when nuclear fire consumed Hiroshima and Nagasaki in August 1945. That moment announced with dreadful clarity that the future of war and strategy would never be the same. The use of AI coupled with drones, however, is more like Sputnik in 1957, a seemingly small event that nevertheless drastically altered the human relationship to technology.

Heidegger once remarked that the first images of Earth from the Moon shocked him because they revealed a new way of grasping the human condition, drained of direct human experience. AI-enabled drone strikes carry a similar symbolic charge: they represent war drained of direct human contact…(More)”.

A Shakeup Is Coming for the Nation-State

Article by Emanuel Maiberg: “After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia. “Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” Wikipedia’s new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”

The new policy, which was accepted in an overwhelming 40 to 2 vote among editors, allows editors to use LLMs to suggest basic copyedits to their own writing, which can be incorporated into the article, or rewritten after human review, so long as the LLM does not generate entirely new content on its own. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy states. “The use of LLMs to translate articles from another language’s Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation.”

I previously reported about editors using LLMs to translate Wikipedia articles. Wikipedia editor Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia and who proposed the guideline, said it seemed unlikely the policy would last, because the editor community has previously been divided on the issue…(More)”.

Wikipedia Bans AI-Generated Content

Book by Marina Nitze, Matthew Weaver, and Mikey Dickerson: “When the system breaks, what do you do? You’re in the middle of a meltdown. The platform is down, the phones are ringing, the headlines are brutal, and your team is looking to you for answers. The usual playbooks—careful planning, expert consultation, bold strategy—aren’t working. What if we told you that instead of the end of the world, this is your moment to create lasting, transformative change?     
 
Crisis Engineering is your field guide to leading through the chaos—and coming out stronger than before. Drawing on decades of experience inside some of the most complex systems in industry and government, Marina Nitze, Matthew Weaver, and Mikey Dickerson, of the crisis engineering firm Layer Aleph, reveal their powerful, hands-on framework for navigating high-stakes crises.

From the rescue of HealthCare.gov to wildfire response and pandemic logistics, this book offers real-world stories, practical tools, and hard-won insights into how complex systems fail—and how to help them recover. You’ll learn:

  • How to identify the 5 signals of a crisis—and use them to your advantage
  • Why traditional leadership instincts fail under pressure—and what to do instead
  • How to stand up your own crisis engineering effort when it matters most

Whether you’re in tech, government, healthcare, or any other critical system, Crisis Engineering gives you the mindset, tools, and vocabulary to lead with clarity and create lasting change…(More)”.

Crisis Engineering: Time-Tested Tools for Turning Chaos Into Clarity

Research Agenda & Bibliography of Proposals by Anna Lenhart: “In recent years, academics, advocates, and policymakers have proposed or discussed the need for a new digital regulator (NDR) – a new agency of the federal government that regulates the AI and technology industry, with a particular focus on market competition, data privacy, and transparency & safety. We have documented over 20 academic papers and studies, think tank reports, books and parts of books, essays and op-eds, and pieces of legislation that propose such agencies or analyze such proposals. 

On February 25, 2026, the Institute for Data, Democracy and Politics at George Washington University and the Vanderbilt Policy Accelerator hosted many of the experts who authored those proposals for a day-long summit to discuss the need for an NDR and open questions related to the design of the agency. Informed by those discussions, this research agenda outlines questions we believe still deserve additional research attention, across disciplines. We are publishing this agenda in the hope of inspiring scholarly work on these issues. Some areas may already have work that we have inadvertently missed from our literature review, and we welcome input from those interested in these issues…(More)”.

Designing a New Digital Regulator

Paper by Adnan Firoze, et al: “Historically, only resource-rich U.S. cities have collected data about where their public trees are, usually through labor-intensive manual surveys or via coarse canopy-cover estimation. However, a significant portion of city trees are on private property, making them difficult to quantify with surveys, yet they contribute uniquely to species diversity and ecosystem service distribution. Further, canopy-cover estimation cannot provide information about tree density, locations of trees across different land types, or changes in tree counts. Cities are under continual change, and the mean mortality rate of urban trees is twice that of rural trees. Thus, frequent updating of tree analytics is critical for sustainable, habitable cities.

Method. Recent advances in computing—in particular, generative artificial intelligence (AI)—have enabled our multidisciplinary team, spanning computer science, engineering, and forestry, to develop a first-of-its-kind computational method that can individually locate and maintain an inventory of trees in at least 330 U.S. cities (Figure 1). Using satellite data, this approach can complete the inventory process in less than a day of automated computing. Individual trees are challenging to discern in satellite images due to occlusion and resolution limitations, which in turn limits traditional segmentation-based approaches. Our approach leverages several key insights to enable a scalable generative AI solution. First, a frequent capture rate of satellite imagery (e.g., daily, monthly, etc.) provides spatiotemporal vegetation footprints, yielding richer information than single images. Our method includes a deep spatiotemporal vegetation cover classification using satellite images that classifies a city into tree, grass, and background, followed by a cluster-creation process and then individual tree localization using a set of conditional generative adversarial networks (cGANs). Further, our method can be applied to current or archived satellite imagery, allowing for change detection and historical analysis…(More)”.
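To make the pipeline’s shape concrete, here is a minimal sketch of the middle, cluster-creation step on a toy binary vegetation mask. The paper’s classification and cGAN-based per-tree localization stages are far more sophisticated; the flood-fill labeling and one-centroid-per-cluster heuristic below are illustrative stand-ins, not the authors’ method.

```python
def find_tree_clusters(mask):
    """Label 4-connected clusters of tree pixels in a binary mask.

    Stands in for the paper's cluster-creation step; the cGAN-based
    individual-tree localization is approximated here by returning
    one centroid per connected cluster.
    """
    rows, cols = len(mask), len(mask[0])
    seen = set()
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                # Flood-fill one connected component of tree pixels.
                stack, cells = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                # Centroid of the cluster as a crude tree-location estimate.
                cy = sum(y for y, _ in cells) / len(cells)
                cx = sum(x for _, x in cells) / len(cells)
                centroids.append((cy, cx))
    return centroids

# Two disjoint patches of "tree" pixels -> two centroid estimates.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
print(find_tree_clusters(mask))
```

In production, this per-cluster step would be replaced by the cGANs that resolve how many individual trees each vegetation cluster contains; the sketch only shows where that stage sits between pixel classification and the final inventory.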

Where Are the City Trees? Monitoring Urban Trees across the U.S. Using Generative AI
