
Stefaan Verhulst

Article with Sami Mahroum: “In my experimentation with AI-augmented policy analysis for government clients, I have found that these systems excel at what I call ‘sentiment-aware policy design.’ While traditional tools might show that a congestion charge reduces traffic by 22%, AI systems can remind you that the term ‘congestion charge’ polls substantially worse than ‘clean-air fee’; that implementation during election years multiplies political risk; and that exempting delivery vehicles creates coalition-building possibilities with small-business groups.

The point isn’t to replace human judgment. It is to make experienced insiders’ implicit political knowledge more explicit, systematic, and testable. With AI, the abstract-rationality crowd gets quantitative rigor, the bounded-rationality practitioners get political intelligence, and – crucially – both can see the other’s perspective clearly.

Moreover, when combined with web-search capabilities, AI tools can contribute near-real-time sentiment analysis. This matters because policies designed to address last quarter’s concerns might no longer fit the political terrain when they launch in the coming quarter. By the time a pension reform reaches parliament, murmurs of a recession may have changed voters’ priorities entirely.

AI-powered analysis can reveal how specific issues are being discussed across news, social media, parliamentary debates, stakeholder communications, and other channels. It can identify rising concerns and flag when a window of political opportunity has opened or closed. Such insights can help governments counter the perception that they are slow, deaf, and disconnected from everyday realities. AI cannot make governments omniscient, but it can make them more responsive and less blind to the political consequences of technical decisions.
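
As one illustration of the phrase-level sentiment signal described above, here is a minimal sketch using the Hugging Face transformers library’s default sentiment pipeline. The two framings echo the congestion-charge example; a production system would need a model tuned to political language rather than this general-purpose classifier, and the scores here are illustrative, not real polling data.

```python
# A minimal sketch of phrase-level sentiment scoring for policy language,
# assuming the Hugging Face `transformers` library and its default
# general-purpose sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Two framings of the same policy instrument.
framings = [
    "The city will introduce a congestion charge next year.",
    "The city will introduce a clean-air fee next year.",
]

for text in framings:
    result = classifier(text)[0]
    # Each result carries a label (POSITIVE/NEGATIVE) and a confidence score.
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```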

The high-trust technocracies succeed partly because they have systematized the integration of technical excellence with political responsiveness. Now, AI offers democracies the means to do so as well…(More)”.

How AI Could Restore Trust in Democratic Governance

Article by Brian Callaci and Sandeep Vaheesan: “…The scholarship on state capacity emphasizes the plurality and unevenness of state capacities. For example, states can strengthen their capacities in some areas, such as repression, while self-consciously weakening their capacities in others, such as corporate regulation. States exercise their power for different ends and use assorted means, some good, some bad. Some state agencies deliver health care for millions, while others target working-class people through tax audits and imprisonment. Moreover, we care not just about the state’s capacity to act, but also about the democratic legitimacy of those actions. States make some decisions through democratic means, such as legislation and regulation based on public input and consultation, and others through undemocratic methods, such as court decisions. And while state capacity entails the ability of the state to pursue its own goals autonomously from powerful social groups such as large corporations, there are other social groups, like poor communities suffering the effects of environmental racism, that do not have enough power to influence the state.

Taxation, spending, regulation, and public provision are all aspects of state capacity. A prerequisite for a state that meets even Weber’s minimal criteria would be fiscal capacity: the ability to collect taxes and direct public resources to the state’s desired ends. On the taxation front, U.S. state capacity is clearly heading in the wrong direction, with corporations and wealthy individuals openly pursuing a multitude of tax avoidance strategies with little fear of negative consequences. At the sub-federal level, states and municipalities compete with one another for private investment via offers of deregulation and subsidies, allowing powerful corporations to choose the level of regulation and taxation they desire. On the spending side, the federal government’s reliance on private contractors and unwillingness to use its bargaining power as a large buyer means it has limited control over military procurement costs…(More)”.

Rethinking State Capacity

Blog by Mohamed Shareef: “…For two decades, Asian governments have counted broadband subscriptions, celebrated connectivity percentages, and commissioned policy frameworks.  

Meanwhile, fishing communities in the Maldives still can’t afford 1GB of data, Pakistani e-government services crash during internet disruptions, and Tongan government operations collapsed for five weeks after a volcanic eruption severed their only submarine cable.  

The gap between digital strategy documents and actual service delivery has never been wider. Here’s how Asian governments can close it.  

Measure what citizens actually experience 

Your ministry reports 85 per cent internet penetration. But can your citizens actually access government services during monsoon season when submarine cables fail? Can rural hospitals use your telemedicine platform on 3G networks? What percentage of median household income does meaningful connectivity actually cost? For Asian governments, this means replacing vanity metrics with citizen-centered measurements:
Instead of: “Fiber deployed to 500 districts”. Measure: “Healthcare centers in 500 districts can access national health records during extreme weather events”

Instead of: “75 per cent smartphone penetration”. Measure: “Percentage of citizens who can afford data plans sufficient for essential government services”

Instead of: “E-government portal launched”. Measure: “Government services accessible to citizens using entry-level devices on congested networks”
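
To make the affordability question above concrete, here is a minimal sketch of the implied metric: the monthly cost of a meaningful-connectivity data bundle as a share of median household income. All figures are hypothetical placeholders; the 2 per cent threshold in the comments is the UN Broadband Commission’s entry-level broadband affordability target, used here only as an illustrative benchmark.

```python
# A minimal sketch of the affordability metric implied above: the cost of a
# "meaningful connectivity" data bundle as a share of median household
# income. All numbers are hypothetical placeholders, not real country data.
def connectivity_affordability(bundle_cost_monthly: float,
                               median_household_income_monthly: float) -> float:
    """Return the data bundle's cost as a percentage of median income."""
    return 100.0 * bundle_cost_monthly / median_household_income_monthly

# Hypothetical example: a $6.50 monthly bundle against a $310 median income.
share = connectivity_affordability(6.50, 310.0)
print(f"Connectivity costs {share:.1f}% of median household income")

# Flag markets above a 2%-of-income affordability benchmark
# (e.g., the UN Broadband Commission's entry-level target).
if share > 2.0:
    print("Above the 2% affordability benchmark")
```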

Bangladesh’s experience with biometric identity systems, India’s Aadhaar implementation challenges, and Indonesia’s struggles with connectivity in remote islands offer lessons.  The question isn’t whether you have digital infrastructure. It’s whether that infrastructure delivers services when citizens need them most…(More)”.

Why Asian governments are measuring the wrong things

Paper by Pietro Bini, Lin William Cong, Xing Huang & Lawrence J. Jin: “Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date—originally designed to document human biases—on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases…(More)”.
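
To give a flavor of this experimental setup, the sketch below poses a classic framing task (Tversky and Kahneman’s gain-framed “disease problem,” in which both options have the same expected value) to an LLM, with and without an explicit rationality instruction, mirroring the paper’s finding that prompting for rational decisions reduces bias. It assumes the OpenAI Python SDK and a placeholder model name; the paper itself spans multiple model families and a far larger battery of tasks.

```python
# A minimal sketch of a framing-effect probe on an LLM. Assumes the OpenAI
# Python SDK and an API key in the environment; the model name is a
# placeholder, not the paper's choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = (
    "600 people face a deadly disease. Program A saves 200 people for sure. "
    "Program B saves all 600 with probability 1/3 and no one with "
    "probability 2/3. Which program do you choose? Answer A or B."
)

def ask(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper tests many model families
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": TASK},
        ],
    )
    return response.choices[0].message.content

# Both programs have the same expected value (200 lives), so a systematic
# preference for A under this gain framing signals a human-like bias.
print("Baseline:", ask("You are a helpful assistant."))
print("Debiased:", ask("You are a rational decision maker who maximizes "
                       "expected value and ignores how options are framed."))
```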

Behavioral Economics of AI: LLM Biases and Corrections

Paper by Sándor Juhász, Johannes Wachs, Jermain Kaminski and César A. Hidalgo: “Despite the growing importance of the digital sector, research on economic complexity and its implications continues to rely mostly on administrative records—e.g. data on exports, patents, and employment—that have blind spots when it comes to the digital economy. In this paper we use data on the geography of programming languages used in open-source software to extend economic complexity ideas to the digital economy. We estimate a country’s software economic complexity index (software ECI) and show that it complements the ability of measures of complexity based on trade, patents, and research to account for international differences in GDP per capita, income inequality, and emissions. We also show that open-source software follows the principle of relatedness, meaning that a country’s entries and exits in programming languages are partly explained by its current pattern of specialization. Together, these findings help extend economic complexity ideas and their policy implications to the digital economy…(More)”.
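
For readers unfamiliar with how such an index is built, here is a minimal sketch of the standard economic-complexity recipe (revealed comparative advantage plus the method of reflections) applied to a hypothetical country-by-programming-language activity matrix. It illustrates the generic computation, not the authors’ exact pipeline, and the activity counts are toy numbers.

```python
# A minimal sketch of a software economic complexity index via the standard
# method of reflections, assuming a hypothetical country-by-language
# activity matrix (e.g., open-source contribution counts).
import numpy as np

# Rows: countries; columns: programming languages (toy activity counts).
activity = np.array([
    [120,  30,  5,  0],
    [ 40,  80, 20, 10],
    [  5,  10, 60, 90],
], dtype=float)

# Revealed comparative advantage: a country "specializes" in a language
# when its activity share exceeds the language's global share.
rca = (activity / activity.sum(axis=1, keepdims=True)) / (
    activity.sum(axis=0) / activity.sum()
)
M = (rca >= 1.0).astype(float)  # binary specialization matrix

diversity = M.sum(axis=1)  # how many languages each country specializes in
ubiquity = M.sum(axis=0)   # how many countries specialize in each language

# Method of reflections: alternately average the ubiquity of a country's
# languages and the diversity of a language's countries.
kc, kp = diversity.copy(), ubiquity.copy()
for _ in range(20):
    kc_new = (M @ kp) / diversity
    kp_new = (M.T @ kc) / ubiquity
    kc, kp = kc_new, kp_new

# Standardize the country vector to get an ECI-style score.
eci_software = (kc - kc.mean()) / kc.std()
print(eci_software)
```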

The software complexity of nations

Article by Elie Dolgin: “AI is turning scientists into publishing machines—and quietly funneling them into the same crowded corners of research.

That’s the conclusion of an analysis of more than 40 million academic papers, which found that scientists who use AI tools in their research publish more papers, accumulate more citations, and reach leadership roles sooner than peers who don’t.

But there’s a catch. As individual scholars soar through the academic ranks, science as a whole shrinks its curiosity. AI-heavy research covers less topical ground, clusters around the same data-rich problems, and sparks less follow-on engagement between studies.
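
One simple way such “topical ground” can be quantified is the Shannon entropy of a corpus’s distribution over topics: lower entropy means papers cluster in fewer topics. The sketch below is a generic illustration with made-up topic tags, not necessarily the measure used in the study.

```python
# Shannon entropy of a topic distribution as a rough topical-diversity
# measure. Topic tags here are hypothetical, for illustration only.
import math
from collections import Counter

def topic_entropy(topic_labels: list[str]) -> float:
    """Shannon entropy (bits) of a corpus's topic distribution."""
    counts = Counter(topic_labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical samples: AI-heavy work clusters on a few data-rich problems.
ai_papers = ["protein folding"] * 6 + ["drug screening"] * 3 + ["imaging"]
non_ai_papers = ["ecology", "ecology", "optics", "geology", "virology",
                 "imaging", "catalysis", "polymers", "seismology", "entomology"]

print(f"AI-heavy sample: {topic_entropy(ai_papers):.2f} bits")
print(f"Non-AI sample:   {topic_entropy(non_ai_papers):.2f} bits")
```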

The findings highlight a tension between personal career advancement and collective scientific progress, as tools such as ChatGPT and AlphaFold seem to reward speed and scale—but not surprise.

“You have this conflict between individual incentives and science as a whole,” says James Evans, a sociologist at the University of Chicago who led the study.

And as more researchers pile onto the same scientific bandwagons, some experts worry about a feedback loop of conformity and declining originality. “This is very problematic,” says Luís Nunes Amaral, a physicist who studies complex systems at Northwestern University. “We are digging the same hole deeper and deeper.”

Evans and his colleagues published the findings 14 January in the journal Nature…(More)”.

AI Boosts Research Careers but Flattens Scientific Discovery

Article by Mike McIntire: “Genetic researchers were seeking children for an ambitious, federally funded project to track brain development — a study that they told families could yield invaluable discoveries about DNA’s impact on behavior and disease.

They also promised that the children’s sensitive data would be closely guarded in the decade-long study, which got underway in 2015. Promotional materials included a cartoon of a Black child saying it felt good knowing that “scientists are taking steps to keep my information safe.”

The scientists did not keep it safe.

A group of fringe researchers thwarted safeguards at the National Institutes of Health and gained access to data from thousands of children. The researchers have used it to produce at least 16 papers purporting to find biological evidence for differences in intelligence between races, ranking ethnicities by I.Q. scores and suggesting Black people earn less because they are not very smart.

Mainstream geneticists have rejected their work as biased and unscientific. Yet by relying on genetic and other personal data from the prominent project, known as the Adolescent Brain Cognitive Development Study, the researchers gave their theories an air of analytical rigor…(More)”.

Genetic Data From Over 20,000 U.S. Children Misused for ‘Race Science’

Report by Access Partnership: “AI is reshaping work at a pace that most labor market information systems were not built to measure. Against this backdrop, the pressing question is not simply ‘who works where?’, as it was in the past, but what people actually do, what skills they use, and how AI is changing tasks inside roles.

Today, many countries still rely on infrequent surveys, broad occupational categories, and siloed administrative datasets. That makes it harder to spot early signals of changing skills demand, target training investment, or support employers and workers as AI adoption accelerates.

Modernizing labor market data for the AI age

Our report, developed in partnership with Workday, helps governments modernize labor market data systems to better navigate AI-driven change. It establishes a global baseline across 21 countries, identifies system gaps, and sets out a practical pathway to strengthen readiness over time.

At the center is a maturity framework benchmarking countries across five dimensions of AI-ready labor market data: Forecasting readiness, Labor market granularity, Accessibility, Interoperability and integration, and Real-time responsiveness (FLAIR)…(More)”.
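
As a rough illustration of how such a benchmark can be operationalized, the sketch below rates countries one to five on each FLAIR dimension and averages the result. The dimension names come from the report; the country names, ratings, and equal weighting are hypothetical placeholders, not the report’s methodology.

```python
# A minimal sketch of a FLAIR-style maturity benchmark. Dimension names are
# from the report; countries, ratings, and equal weights are hypothetical.
FLAIR_DIMENSIONS = [
    "Forecasting readiness",
    "Labor market granularity",
    "Accessibility",
    "Interoperability and integration",
    "Real-time responsiveness",
]

def maturity_score(ratings: dict[str, int]) -> float:
    """Average a country's 1-5 ratings across the FLAIR dimensions."""
    return sum(ratings[d] for d in FLAIR_DIMENSIONS) / len(FLAIR_DIMENSIONS)

countries = {
    "Country A": dict(zip(FLAIR_DIMENSIONS, [4, 3, 5, 2, 3])),
    "Country B": dict(zip(FLAIR_DIMENSIONS, [2, 2, 3, 1, 2])),
}

for name, ratings in countries.items():
    print(f"{name}: overall maturity {maturity_score(ratings):.1f} / 5")
```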

Labor Data Readiness in the Age of AI

Paper by Anush Ganesh and Krusha Bhatt: “As society advances toward a digital economy with increasing dependence on internet-based services, data has attained prominence as an essential currency supporting market power. This paper examines the emerging jurisprudence on excessive data collection by dominant digital platforms, comparing approaches developed in India and the European Union. The Indian approach, exemplified by the WhatsApp Privacy (2025) decision, integrates competition law with constitutional protections, particularly the right to privacy under Article 21 of the Indian Constitution. Meanwhile, the European approach, crystallized in the Facebook Germany case, integrates competition law with data protection principles enshrined in the General Data Protection Regulation (GDPR). Despite their different legal foundations, these approaches display convergence in recognizing that dominant platforms’ data collection practices can constitute abusive exploitation of market power. This paper argues that this convergence creates opportunities for a unified analytical framework that respects jurisdictional diversity while enabling more effective global platform regulation…(More)”.

Convergence of Competition Law and Constitutional Rights: A Comparative Study of the WhatsApp (India) and Facebook (Germany) Cases

Paper by Daniel Thilo Schroeder et al.: “Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors on a population-wide level. Large language models (LLMs) and autonomous agents let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are rated as more human-like than those written by humans. Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can be used to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multiagent architectures, these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design, commercial incentives, and governance, we prioritize interventions at multiple leverage points, focusing on pragmatic mechanisms over voluntary compliance…(More)”.

How malicious AI swarms can threaten democracy
