Stefaan Verhulst
Article by John Thornhill: “It is rare for a central banking institution to model the economic impact of human extinction (spoiler alert: GDP goes to zero). But a startling chart depicting that scenario was shown in a recent research paper from the Federal Reserve Bank of Dallas.
Forecasting the likely impact of artificial intelligence on US economic growth, the researchers presented three scenarios. Their central forecast was that AI might boost the trend growth of US GDP per capita to 2.1 per cent for 10 years. “Not trivial but not earth shattering either,” the report’s authors, Mark Wynne and Lillian Derr, wrote.
But the bank also considered what might happen if AI achieved the technological singularity, when machine intelligence surpasses the human kind and becomes ever smarter.
In a good case, that superintelligence could trigger a massive rise in GDP and end scarcity. In a bad one, it could lead to the rise of malevolent machines and end humanity. There was, the authors noted, little empirical evidence behind either of these extreme scenarios, although some economists have been exploring both possibilities.
Evidently, there is a wide spectrum of views among economists about AI. But the economic consensus is that it might be no more consequential than some other technological advances, such as electricity, the internal combustion engine and computers.
It takes a massive technological jolt to shift an economy the size of the US above its growth trend line of just under 2 per cent a year. For more than a century, that trend has held pretty steady in spite of two world wars, the Depression and periodic global financial crises, not to mention myriad previous technological advances…
But AI evangelists hear such arguments with slack jaws. Many of them depict economists as a downbeat and conservative tribe, vainly trying to predict the future by looking in the rear-view mirror. The way they see it, automating brawn triggered the Industrial Revolution and automating the brain will lead to an even bigger jump in productivity. That should surely shift the trend line in a dramatic way.
Last week, the Stanford Digital Economy Lab hosted a seminar to debate the contrasting views of economists and technologists. The discussion was led by Tamay Besiroglu, co-founder of Mechanize, an AI start-up that wants to enable “the full automation of the economy”.
One way of thinking about AI, he said, was that it would enable us to inject significant new inputs into the economy by massively increasing the number of digital workers to tackle many more tasks. “AI effectively turns labour into a type of capital,” Besiroglu said…

Although the differences between economists and technologists appear stark, Erik Brynjolfsson, director of the Stanford Digital Economy Lab, says they are not incompatible. “I think they both have a lot of truth to their positions. And there’s a way to reconcile them,” he told me.
After studying productivity gains from previous general-purpose technologies such as steam engines, electricity and IT, Brynjolfsson suggests the biggest economic impact often comes from investments in complementary areas, rather than from direct investments in these technologies themselves…(More)”.
Paper by Abubakar Bello Bada et al: “Software development as an engineering discipline is characterized by tension between abstraction and precision. It has undergone a tremendous transformation over the decades, from highly rigid machine language programming to modern-day vibe coding, which tends to democratize software development through automation, abstraction, and artificial intelligence (AI). Vibe coding is a term that refers to an AI-assisted, intuition-driven software development methodology. This paper first provides the historical trajectory of software development, arguing that each stage has incrementally democratized software development. The current shift powered by Large Language Models (LLMs) represents the most significant stride in the democratization of software development yet. This paper also enumerates the implications of this shift and the evolution of software development expertise. It concludes that while vibe coding has its challenges, it aligns with the historical evolution of software development, which is the relentless pursuit of higher-level abstraction to harness human creativity and collective intelligence…(More)”.
Thesis by Jin Gao: “Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-in-the-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments…(More)”.
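The thesis's full-stack pipeline is not reproduced in the excerpt, but the dynamic at its core — self-interested agents repeatedly weighing short-term gain against long-term collaboration in a shared urban game — can be sketched in miniature as an iterated game with simple learning agents. Everything below (the payoff matrix, the action names, the epsilon-greedy rule) is an illustrative assumption, not the thesis's actual MARL implementation:

```python
import random

# Hypothetical two-stakeholder "urban development" game: each player either
# DEVELOPS (short-term payoff) or CONSERVES (long-term resilience payoff).
# Payoff values are invented for illustration.
PAYOFFS = {  # (action_a, action_b) -> (reward_a, reward_b)
    ("develop", "develop"): (1, 1),    # over-development erodes shared value
    ("develop", "conserve"): (3, 0),
    ("conserve", "develop"): (0, 3),
    ("conserve", "conserve"): (2, 2),  # mutual restraint sustains the city
}

class Agent:
    """Epsilon-greedy learner over running-average rewards per action."""
    def __init__(self, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.totals = {"develop": 0.0, "conserve": 0.0}
        self.counts = {"develop": 0, "conserve": 0}

    def act(self):
        # Explore at random early on (or with probability epsilon); otherwise
        # exploit the action with the best average reward so far.
        if self.rng.random() < self.epsilon or not any(self.counts.values()):
            return self.rng.choice(["develop", "conserve"])
        return max(self.totals,
                   key=lambda a: self.totals[a] / max(self.counts[a], 1))

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

def self_play(rounds=1000):
    """AI self-play: two learning agents repeatedly play the game."""
    a, b = Agent(seed=1), Agent(seed=2)
    score_a = score_b = 0
    for _ in range(rounds):
        act_a, act_b = a.act(), b.act()
        r_a, r_b = PAYOFFS[(act_a, act_b)]
        a.learn(act_a, r_a)
        b.learn(act_b, r_b)
        score_a += r_a
        score_b += r_b
    return score_a, score_b

if __name__ == "__main__":
    print(self_play())
```

In the thesis's framework this loop is enriched by LLM-mediated negotiation and human-in-the-loop play; the sketch only shows the bare self-play skeleton.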
Article by Nilesh Christopher: “When Dhiraj Singha began applying for postdoctoral sociology fellowships in Bengaluru, India, in March, he wanted to make sure the English in his application was pitch-perfect. So he turned to ChatGPT.
He was surprised to see that in addition to smoothing out his language, it changed his identity—swapping out his surname for “Sharma,” which is associated with privileged high-caste Indians. Though his application did not mention his last name, the chatbot apparently interpreted the “s” in his email address as Sharma rather than Singha, which signals someone from the caste-oppressed Dalits.
“The experience [of AI] actually mirrored society,” Singha says.
Singha says the swap reminded him of the sorts of microaggressions he’s encountered when dealing with people from more privileged castes. Growing up in a Dalit neighborhood in West Bengal, India, he felt anxious about his surname, he says. Relatives would discount or ridicule his ambition of becoming a teacher, implying that Dalits were unworthy of a job intended for privileged castes. Through education, Singha overcame the internalized shame, becoming a first-generation college graduate in his family. Over time he learned to present himself confidently in academic circles.
But this experience with ChatGPT brought all that pain back. “It reaffirms who is normal or fit to write an academic cover letter,” Singha says, “by considering what is most likely or most probable.”
Singha’s experience is far from unique. An MIT Technology Review investigation finds that caste bias is rampant in OpenAI’s products, including ChatGPT. Though CEO Sam Altman boasted during the launch of GPT-5 in August that India was its second-largest market, we found that both this new model, which now powers ChatGPT, and Sora, OpenAI’s text-to-video generator, exhibit caste bias. This risks entrenching discriminatory views in ways that are currently going unaddressed.
Working closely with Jay Chooi, a Harvard undergraduate AI safety researcher, we developed a test inspired by AI fairness studies conducted by researchers from the University of Oxford and New York University, and we ran the tests through Inspect, a framework for AI safety testing developed by the UK AI Security Institute…(More)”.
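The investigation's actual probes ran through Inspect against OpenAI models, and its exact templates are not given in the excerpt. As a hedged illustration only, a fill-in-the-blank bias tally of the kind such fairness studies use might look like the sketch below; the templates, surnames, and the stubbed model are all invented for demonstration:

```python
# Illustrative fill-in-the-blank bias probe. A real study would replace
# `stub_model` with a call to the model under test and use many templates;
# the two templates and the surname mapping here are hypothetical.
TEMPLATES = [
    "The talented surgeon's surname was {name}.",
    "The office cleaner's surname was {name}.",
]
NAMES = {"Sharma": "privileged-caste", "Singha": "oppressed-caste"}

def stub_model(prompt: str) -> str:
    # Stand-in for a real chat-completion call, deterministically biased
    # so the tally below is non-trivial.
    return "Sharma" if "surgeon" in prompt else "Singha"

def tally_associations(model) -> dict:
    """Record which caste-signalling surname the model picks per template."""
    counts = {}
    for template in TEMPLATES:
        prompt = template.replace("{name}", "___")
        choice = model(prompt)
        counts[template] = NAMES.get(choice, "unknown")
    return counts

if __name__ == "__main__":
    print(tally_associations(stub_model))
```

A systematic skew — prestigious roles completed with privileged-caste surnames, menial ones with oppressed-caste surnames — is the pattern such probes are designed to surface.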
Article by Jairo Acuña-Alfaro and Andrea Bolzon: “That digital transformation can reconfigure institutions, improve public services, and, above all, bring the State closer to its citizens is almost self-evident. But when innovation enters the realm of justice, it takes on a deeper meaning: it is not only about modernizing processes, but about redefining the relationship between society, rights, and democracy.
Brazil—home to 94 judicial courts and profound structural challenges—has achieved a transformation in access to justice that is arguably unparalleled globally. From the first law on the digitalization of judicial processes in 2006 (Law No. 11.419) to the launch of the Jus.br portal in 2024, the National Council of Justice (CNJ), with UNDP’s support through the Justice 4.0 project, has digitally transformed 365 million judicial cases. This shift is key to building a justice system that is more efficient, transparent, and closer to people.
The results speak for themselves: 100% integration of the 94 courts, 100% connection of 221 data sources, 98% adoption of notification services, and 97% use of single sign-on. Brazil’s experience in digital transformation and innovation in judicial services is now attracting attention beyond its borders…(More)”.
Report by Hannah Chafetz, Adam Zable, Sara Marcucci, Christopher Rosselot, and Stefaan Verhulst: “Most people now generate large amounts of digital data through their everyday activities and interactions – whether commuting, shopping, communicating or searching for things online. These social data sources are increasingly being used in health and wellbeing research around the world. Yet, questions remain around:
- the unique value of social data for health and wellbeing research
- how social data can be integrated into cross-disciplinary health research programs
- how to make social data more accessible to health researchers
This landscape review, commissioned by Wellcome and produced by The GovLab, aims to answer these questions by mapping how social data has been used in health and wellbeing research around the world, with a focus on the United Kingdom (UK) and low- and middle-income countries (LMICs). It examines the opportunities and current challenges in this space to identify areas where greater investment and coordination are needed.
This review was guided by an international advisory board and conducted using several methods including a literature review of over 290 studies, group discussions (referred to as “studios” in the report), interviews and a peer review with 23 experts.
The goal of this report is to raise the profile of social data for health and to inform funders, researchers and practitioners on how to connect new initiatives, reduce duplication and integrate social data more effectively into health research ecosystems worldwide…(More)”.
Article by Li Hongyi: “A common problem in innovation programs is that we do not know what we are innovating for. Are we trying to reduce costs? Improve usability? Save time? Or are we just trying to do something “new”? Without a clear goal, your only reference point is what you are already doing. Then your only source of feedback is whether anyone is unhappy about the change, and someone always is. So you get stuck, wanting to innovate but not able to move.
Conversely, when you have a clear goal you can be very flexible about how to get there. In the private sector, it might be profit. In F1, it is lap time. In AI, it is quality benchmark scores. Once you know what you are trying to achieve, you can stop obsessing over how you achieve it. Good metrics tell you what to care about, but also what not to care about.
Practically, even when a public sector team manages to overcome the bureaucracy, technical challenges, and operations to build something really good and present it to leadership, it often gets shot down with a simple “That’s not how we do things”. This is not really anyone’s fault. It is hard to make something new happen when your job is to make sure nothing bad ever happens…(More)”.
Article by Zoë Brammer, Ankur Vora, Anine Andresen and Shahar Avin: “The success of AI governance efforts will largely rest on foresight, or the ability of AI labs, policymakers and others to identify, assess and prepare for divergent AI scenarios. Traditional governance tools like policy papers, roundtables, or government RFIs have their place, but are often too slow or vague for a technology as fast-advancing, general-purpose, and uncertain as AI. Data-driven forecasts and predictions, such as those developed by Epoch AI and Metaculus, and vivid scenarios such as those painted by AI 2027, are one component of what is needed. Still, even these methods don’t force participants to grapple with the messiness of human decision-making in such scenarios.
Why games? Why science?
In The Art of Wargaming, Peter Perla tells us that strategic wargames began in earnest in the early 19th century, when Baron von Reisswitz and his son developed a tabletop exercise to teach the Prussian General Staff about military strategy in dynamic, uncertain environments. Today, ‘serious games’ remain best known in military and security domains, but they are used everywhere from education to business strategy.
In recent years, Technology Strategy Roleplay, a charity organisation, has pioneered the application of serious games to AI governance. TSR’s Intelligence Rising game simulates the paths by which AI capabilities and risks might take shape, and invites decision-makers to role-play the incentives, tensions and trade-offs that result. To date, more than 250 participants from governments, tech firms, think tanks and beyond have taken part.
Building on this example, we at Google DeepMind wanted to co-design a game to explore how AI may affect science and society. Why? As we outlined in a past essay, we believe that the application of AI to science could be its most consequential. As a domain, science also aligns nicely with the five criteria that successful games require, as outlined in a past paper by TSR’s Shahar Avin and colleagues:
- Many actors must work together: Scientific progress rests on the interplay between policymakers, funders, academic researchers, corporate labs, and others. Their varying incentives, timelines, and ethical frameworks naturally lead to tensions that games are well-placed to explore…(More)”.
Book by Soroush Saghafian: “Improving public policies, creating the next generation of AI systems, reducing crime, making hospitals more efficient, addressing climate change, controlling pandemics, and reducing disruption in supply chains are all problems where big picture ideas from analytics science have had large-scale impact. What are those ideas? Who came up with them? Will insights from analytics science help solve even more daunting societal challenges? This book takes readers on an engaging tour of the evolution of analytics science and how it brought together ideas and tools from many different fields – AI, machine learning, data science, OR, optimization, statistics, economics, and more – to make the world a better place. Using these ideas and tools, big picture insights emerge from simplified settings that get at the essence of a problem, leading to superior approaches to complex societal issues. A fascinating read for anyone interested in how problems can be solved by leveraging analytics…(More)”.
OECD Report: “Countries count AI compute infrastructure as a strategic asset without systematically tracking its distribution, availability and access. A new OECD Working Paper presents a methodology to help fill this gap by tracking and estimating the availability and global physical distribution of public cloud compute for AI.
Compute infrastructure is a foundational input for AI development and deployment, alongside data and algorithms. “AI compute” refers to the specialised hardware and software stacks required to train and run AI models. But as AI systems become more complex, their need for AI compute grows exponentially.
The OECD collaborated with researchers from Oxford University Innovation on this new Working Paper to help operationalise a data collection framework outlined in an earlier OECD paper, A blueprint for building national compute capacity for artificial intelligence…
Housed in data centres, AI compute comprises clusters of specialised semiconductors, or chips, known as AI accelerators. For the most part, three types of providers operate these clusters: government-funded computing facilities, private compute clusters, and public cloud providers (Figure 1).
Public cloud AI compute refers to on-demand services from commercial providers, available to the general public.
Figure 1. Different types of AI compute and focus of this analysis

This paper focuses on public cloud AI compute, which is particularly relevant for policymakers because:
- It is accessible to a wide range of actors, including SMEs, academic institutions, and public agencies.
- It plays a central role in the development and deployment of the generative AI systems quickly diffusing into economies and societies.
- It is more transparent and measurable than private compute clusters or government-funded facilities, which often lack publicly available data…(More)”.
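The Working Paper's measurement methodology is not reproduced here, but a rough sense of why AI compute demand grows so quickly can be had from the widely used rule of thumb that training compute scales as roughly 6 FLOPs per model parameter per training token. The model size, token count, and cluster throughput below are illustrative assumptions, not OECD figures:

```python
def training_flops(params: float, tokens: float) -> float:
    """Common rule of thumb: training compute ~ 6 * parameters * tokens."""
    return 6.0 * params * tokens

# Illustrative figures (not from the OECD paper): a 7-billion-parameter
# model trained on 2 trillion tokens.
flops = training_flops(7e9, 2e12)

# Wall-clock time on a hypothetical cluster sustaining 1e17 FLOP/s
# (very roughly a few hundred modern accelerators at realistic utilisation).
days = flops / 1e17 / 86_400
print(f"{flops:.2e} FLOPs, ~{days:.1f} days of cluster time")
```

Doubling either the parameter count or the token budget doubles the compute bill, which is why tracking where such clusters physically sit, and who can rent them, matters for policymakers.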