Explore our articles
View All Results

Stefaan Verhulst

Blog by Andrew Knight and Nicolás Rebolledo: “When we discuss patterns in government, it can seem like a relatively modern concept. But the idea of codifying and reusing what works runs deep in human history.

Consider the Clovis points of prehistoric North America—fluted stone spearheads made 13,000 years ago, spread across vast distances in almost identical form. The point itself was the pattern, passed from maker to maker. Ancient Egypt used cubit rods—state-issued measuring sticks that ensured everyone worked to the same standard. Medieval guilds transmitted design knowledge through apprenticeships, guaranteeing quality and protecting craft reputations. Edo Japan’s printed kimono catalogues enabled ordinary customers to browse designs and commission garments, scaling choice while allowing for local adaptation.

In 1837, patterns became explicit government business. Britain established the Government School of Design (which became the Royal College of Art) to teach artisans how to apply patterns to ceramics and textiles. This was an industrial strategy—patterns as statecraft to make British goods competitive.

In the 1960s, architect Christopher Alexander formalised this practice into “pattern languages”—documented, repeatable solutions for complex design challenges. This laid the foundation for how we think about design patterns today across architecture, software development, and service design.

Patterns are among the oldest design technologies we have, carrying values that have always shaped how societies create, share, and govern…(More)”

Patterns for the global public sector

Blog by Cass Sunstein: “In the last fifty years or so, there has been an explosion of empirical work on how and when human beings depart from perfect rationality. This work has led, not surprisingly, to a rethinking of paternalism and its limits.

We now have three camps, more or less:

  • coercive paternalists, who urge that behavioral findings greatly strengthen arguments for mandates and bans (and leave John Stuart Mill in the dust, more or less);
  • libertarian paternalists, who urge that behavioral findings point to a host of freedom-preserving interventions, such as warnings, reminders, and automatic enrollment; and
  • antipaternalists, who urge that behavioral findings justify only, or at most, efforts to strengthen people’s capacities to make good choices.

It is important to see that each of the three views can be taken as a dogma, or a fighting faith, or instead as a presumption or an inclination.

For example, you could be a libertarian paternalist while also liking some mandates and bans (for example, compulsory seatbelt laws and social security laws). I like libertarian paternalism, but I certainly agree that there is a place for mandates and bans, even to protect people from their own mistakes. You could be an antipaternalist while also liking some nudges (for example, warnings about allergens). Still, presumptions and inclinations matter a lot.

A whole book could easily be written on the underlying debates. (I may have written one; who knows?) My main purpose here is far more modest. It is to put members of the three camps in the same room, so to speak, and to see what they might have to say to each other…(More)”.

Paternalism and Behavioral Economics

Article by John Thornhill: “It is rare for a central banking institution to model the economic impact of human extinction (spoiler alert: GDP goes to zero). But a startling chart depicting that scenario was shown in a recent research paper from the Federal Reserve Bank of Dallas.

Forecasting the likely impact of artificial intelligence on US economic growth, the researchers presented three scenarios. Their central forecast was that AI might boost the trend growth of US GDP per capita to 2.1 per cent for 10 years. “Not trivial but not earth shattering either,” the report’s authors, Mark Wynne and Lillian Derr, wrote.

But the bank also considered what might happen if AI achieved the technological singularity, when machine intelligence surpasses the human kind and becomes ever smarter. 

In a good case, that superintelligence could trigger a massive rise in GDP and end scarcity. In a bad one, it could lead to the rise of malevolent machines and end humanity. There was, the authors noted, little empirical evidence behind either of these extreme scenarios, although some economists have been exploring both possibilities.

Evidently, there is a wide spectrum of views among economists about AI. But the economic consensus is that it might be no more consequential than some other technological advances, such as electricity, the internal combustion engine and computers.  

It takes a massive technological jolt to shift an economy the size of the US above its growth trend line of just under 2 per cent a year. For more than a century, that trend has held pretty steady in spite of two world wars, the Depression and periodic global financial crises, not to mention myriad previous technological advances…

But AI evangelists hear such arguments with slack jaws. Many of them depict economists as a downbeat and conservative tribe, vainly trying to predict the future by looking in the rear-view mirror. The way they see it, automating brawn triggered the Industrial Revolution and automating the brain will lead to an even bigger jump in productivity. That should surely shift the trend line in a dramatic way.

Last week, the Stanford Digital Economy Lab hosted a seminar to debate the contrasting views of economists and technologists. The discussion was led by Tamay Besiroglu, co-founder of Mechanize, an AI start-up that wants to enable “the full automation of the economy”.

One way of thinking about AI, he said, was that it would enable us to inject significant new inputs into the economy by massively increasing the number of digital workers to tackle many more tasks. “AI effectively turns labour into a type of capital,” Besiroglu said. ..Although the differences between economists and technologists appear stark, Erik Brynjolfsson, director of the Stanford Digital Economy Lab, says they are not incompatible. “I think they both have a lot of truth to their positions. And there’s a way to reconcile them,” he told me.

After studying productivity gains from previous general-purpose technologies such as steam engines, electricity and IT, Brynjolfsson suggests the biggest economic impact often comes from investments in complementary areas, rather than from direct investments in these technologies themselves…(More)”.

Who’s right about AI: economists or technologists?

Paper by Abubakar Bello Bada et al: “Software development as an engineering discipline is characterized by tension between abstraction and precision. It has undergone a tremendous transformation over the decades, from highly rigid machine language programming to the modern day vibe coding that tends to democratize software development through automation, abstraction, and artificial intelligence (AI). Vibe coding, a term that refers to AI-assisted and intuition-driven software development methodology. This paper first provides the historical trajectory of software development, arguing that each stage has incrementally democratized software development. The current shift powered by Large Language Models (LLMs) represents the most significant stride in the democratization of software development yet. This paper also enumerates the implications of this shift and the evolution of software development expertise. It concludes that while vibe coding has its challenges, it aligns with the historical evolution of software development, which is the relentless pursuit of higher-level abstraction to harness human creativity and collective intelligence…(More)”.

Democracy in Software Development: The Rise of Vibe Coding

Thesis by Jin Gao: “Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-inthe-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments…(More)”.

Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making

Article by Nilesh Christopher: “When Dhiraj Singha began applying for postdoctoral sociology fellowships in Bengaluru, India, in March, he wanted to make sure the English in his application was pitch-perfect. So he turned to ChatGPT.

He was surprised to see that in addition to smoothing out his language, it changed his identity—swapping out his surname for “Sharma,” which is associated with privileged high-caste Indians. Though his application did not mention his last name, the chatbot apparently interpreted the “s” in his email address as Sharma rather than Singha, which signals someone from the caste-oppressed Dalits.

“The experience [of AI] actually mirrored society,” Singha says. 

Singha says the swap reminded him of the sorts of microaggressions he’s encountered when dealing with people from more privileged castes. Growing up in a Dalit neighborhood in West Bengal, India, he felt anxious about his surname, he says. Relatives would discount or ridicule his ambition of becoming a teacher, implying that Dalits were unworthy of a job intended for privileged castes. Through education, Singha overcame the internalized shame, becoming a first-generation college graduate in his family. Over time he learned to present himself confidently in academic circles.

But this experience with ChatGPT brought all that pain back. “It reaffirms who is normal or fit to write an academic cover letter,” Singha says, “by considering what is most likely or most probable.”

Singha’s experience is far from unique. An MIT Technology Review investigation finds that caste bias is rampant in OpenAI’s products, including ChatGPT. Though CEO Sam Altman boasted during the launch of GPT-5 in August that India was its second-largest market, we found that both this new model, which now powers ChatGPT, and Sora, OpenAI’s text-to-video generator, exhibit caste bias. This risks entrenching discriminatory views in ways that are currently going unaddressed. 

Working closely with Jay Chooi, a Harvard undergraduate AI safety researcher, we developed a test inspired by AI fairness studies conducted by researchers from the University of Oxford and New York University, and we ran the tests through Inspect, a framework for AI safety testing developed by the UK AI Security Institute…(More)”.

OpenAI is huge in India. Its models are steeped in caste bias.

Article by Jairo Acuña – Alfaro and Andrea Bolzon: “That digital transformation can reconfigure institutions, improve public services, and, above all, bring the State closer to its citizens is almost self-evident. But when innovation enters the realm of justice, it takes on a deeper meaning: it is not only about modernizing processes, but about redefining the relationship between society, rights, and democracy.

Brazil—home to 94 judicial courts and profound structural challenges—has achieved a transformation in access to justice that is arguably unparalleled globally. From the first law on the digitalization of judicial processes in 2006 (Law No. 11.419) to the launch of the Jus.br portal in 2024, the National Council of Justice (CNJ), with UNDP’s support through the Justice 4.0 project, has digitally transformed 365 million judicial cases. This shift is key to building a justice system that is more efficient, transparent, and closer to people.

The results speak for themselves: 100% integration of the 94 courts, 100% connection of 221 data sources, 98% adoption of notification services, and 97% use of single sign-on. Brazil’s experience in digital transformation and innovation in judicial services is now attracting attention beyond its borders…(More)”.

E-Justice: The Codebase of Democracy

Report by Hannah Chafetz, Adam Zable, Sara Marcucci, Christopher Rosselot, and Stefaan Verhulst: “Most people now generate large amounts of digital data through their everyday activities and interactions – whether commuting, shopping, communicating or searching for things online. These social data sources are increasingly being used in health and wellbeing research around the world. Yet, questions remain around:

  • the unique value of social data for health and wellbeing research
  • how social data can be integrated into cross-disciplinary health research programs
  • how to make social data more accessible to health researchers

This landscape review, commissioned by Wellcome and produced by The GovLab, aims to answer these questions by mapping how social data has been used in health and wellbeing research around the world. This review mainly focuses on the United Kingdom (UK) and low- and middle-income countries (LMICs). This report examines the opportunities and current challenges in this space, to identify areas where greater investment and coordination are needed.  

This review was guided by an international advisory board and conducted using several methods including a literature review of over 290 studies, group discussions (referred to as “studios” in the report), interviews and a peer review with 23 experts.  

The goal of this report is to raise the profile of social data for health and to inform funders, researchers and practitioners on how to connect new initiatives, reduce duplication and integrate social data more effectively into health research ecosystems worldwide…(More)”.

Social Data for Health

Article by Li Hongyi: “A common problem in innovation programs is that we do not know what we are innovating for. Are we trying to reduce costs? Improve usability? Save time? Or are we just trying to do something “new”? Without a clear goal your only reference point is what you are already doing. Then your only source of feedback is whether anyone is unhappy about change: and someone always is. So you get stuck, wanting to innovate but not able to move.

Conversely, when you have a clear goal you can be very flexible about how to get there. In the private sector, it might be profit. In F1, it is lap time. In AI, it is quality benchmark scores. Once you know what you are trying to achieve, you can stop obsessing over how you achieve it. Good metrics tell you what to care about, but also what not to care about.

Practically, even when a public sector team manages to overcome the bureaucracy, technical challenges, and operations to build something really good and present it to leadership, it often gets shot down with a simple “That’s not how we do things”. This is not really anyone’s fault. It is hard to make something new happen when your job is to make sure nothing bad ever happens…(More)”.

How To Be Innovative

Article by Zoë Brammer, Ankur Vora, Anine Andresen and Shahar Avin: “The success of AI governance efforts will largely rest on foresight, or the ability of AI labs, policymakers and others to identify, assess and prepare for divergent AI scenarios. Traditional governance tools like policy papers, roundtables, or government RFIs have their place, but are often too slow or vague for a technology as fast-advancing, general-purpose, and uncertain as AI. Data-driven forecasts and predictions, such as those developed by Epoch AI and Metaculus, and vivid scenarios such as those painted by AI 2027, are one component of what is needed. Still, even these methods don’t force participants to grapple with the messiness of human decision-making in such scenarios.

Why games? Why science?

In Art of Wargaming, Peter Perla tells us that strategic wargames began in earnest in the early 19th century, when Baron von Reisswitz and his son developed a tabletop exercise to teach the Prussian General Staff about military strategy in dynamic, uncertain environments. Today, ‘serious games’ remain best known in military and security domains, but they are used everywhere from education to business strategy.

In recent years, Technology Strategy Roleplay, a charity organisation, has pioneered the application of serious games to AI governance. TSR’s Intelligence Rising game simulates the paths by which AI capabilities and risks might take shape, and invites decision-makers to role-play the incentives, tensions and trade-offs that result. To date, more than 250 participants from governments, tech firms, think tanks and beyond have taken part.

Building on this example, we at Google DeepMind wanted to co-design a game to explore how AI may affect science and society. Why? As we outlined in a past essay, we believe that the application of AI to science could be its most consequential. As a domain, science also aligns nicely with the five criteria that successful games require, as outlined in a past paper by TSR’s Shahar Avin and colleagues:

  1. Many actors must work together: Scientific progress rests on the interplay between policymakers, funders, academic researchers, corporate labs, and others. Their varying incentives, timelines, and ethical frameworks naturally lead to tensions that games are well-placed to explore…(More)”.
Science 2030: Designing role-playing games to help AI governance

Get the latest news right in you inbox

Subscribe to curated findings and actionable knowledge from The Living Library, delivered to your inbox every Friday