Stefaan Verhulst
Book by Sigge Winther Nielsen: “The wicked problems of our time – climate change, migration, inequality, productivity, and mental health – remain stubbornly unresolved in Western democracies. It’s time for a new way to puzzle.
This book takes the reader on a journey from London to Berlin, and from Rome to Washington DC and Copenhagen, to discover why modern politics keeps breaking down. We’re tackling twenty-first-century problems with twentieth-century ideas and nineteenth-century institutions.
In search of answers, the author gate-crashes a Conservative party at an English estate, visits a stressed-out German minister at his Mallorcan holiday home, and listens in on power gossip in Washington DC’s restaurant scene. The Puzzle State offers one of the most engaging and thoughtful analyses of Western democracies in recent years. Based on over 100 interviews with top political players, surveys conducted across five countries, and stories of high-profile policy failures, it uncovers a missing link in reform efforts: the absence of ongoing feedback loops between decision-makers and frontline practitioners in schools, hospitals, and companies.
But there is a way forward. The author introduces the concept of the Puzzle State, which uses collective intelligence and AI to (re)connect politicians with the people implementing reforms on the ground. The Puzzle State tackles high-profile wicked problems and enables policies to adapt as they meet messy realities. No one holds all the pieces in the Puzzle State. The feedback loop must cut across all sectors – from civil society to corporations – just like solving a complex puzzle requires commitment, cooperation, and creativity…(More)”.
Article by Jenifer Whiston: “At Free Law Project, we believe the law belongs to everyone. But for too long, the information needed to understand and use the law—especially in civil rights litigation—has been locked behind paywalls, scattered across jurisdictions, or buried in technical complexity.
That’s why we teamed up with the Civil Rights Litigation Clearinghouse on an exploratory grant from Arnold Ventures: to see how artificial intelligence could help unlock these barriers, making court documents more accessible and legal research more open and accurate.
What We Learned
This six-month project gave us the opportunity to experiment boldly. Together, we researched and tested AI-based approaches and tools to:
- Classify legal documents into categories like motions, orders, or opinions.
- Summarize filings and entire cases, turning dense legal text into plain-English explanations.
- Generate structured metadata to make cases easier to track and analyze.
- Trace the path of a legal case as it moves through the court system.
- Enable semantic search, allowing users to type questions in everyday language and find relevant cases.
- Build the foundation of an open-source citator, so researchers and advocates can see when cases are overturned, affirmed, or otherwise impacted by later decisions.
These prototypes showed us not only what’s possible, but what’s practical. By testing real AI models on real data, we proved that tools like these can responsibly scale to improve the infrastructure of justice…(More)”.
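
To make the semantic-search idea in the excerpt above concrete: the core technique is to embed documents and questions in the same vector space and rank by similarity. Below is a minimal sketch of that pattern using the open-source sentence-transformers library; the model choice, sample snippets, and ranking function are illustrative assumptions, not Free Law Project's actual implementation.

```python
# Hypothetical sketch of embedding-based semantic search over court documents.
# Not Free Law Project's code; just the general retrieval pattern.
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative corpus; in practice these would be full opinions or docket entries.
documents = [
    "Order granting a preliminary injunction against the prison's mail policy.",
    "Opinion affirming summary judgment for defendants on qualified immunity.",
    "Motion to certify a class of detainees challenging conditions of confinement.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
doc_vecs = model.encode(documents, normalize_embeddings=True)

def search(question: str, top_k: int = 2) -> None:
    """Rank documents by cosine similarity to a plain-English question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # normalized vectors: dot product = cosine similarity
    for idx in np.argsort(scores)[::-1][:top_k]:
        print(f"{scores[idx]:.3f}  {documents[idx]}")

search("Which cases deal with class actions about jail conditions?")
```

A production system would add chunking, an approximate-nearest-neighbor index, and evaluation against attorney-curated relevance judgments, but the essential move is the same: plain-English retrieval reduces to embedding similarity.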
Report by Rainer Kattel et al: “The four-stage approach for assessing city governments’ dynamic capabilities reflects the complexity of city governments. It blends rigour with usefulness, striving to be both robust and usable, and adding value to city governments and those that support their efforts. This report summarises how to assess and compare dynamic capabilities across city governments, provides an assessment of dynamic capabilities in a selected sample of cities, and sets out future work that will explore how to create an index that can be scaled to thousands of cities and used to build capabilities effectively, positively transforming cities and creating better lives for residents…(More)”.
Blog by the Stanford Peace Innovation Lab: “…In the years since, the concept of peace technology has expanded globally. Accelerators—structures that concentrate resources, mentorship, and networks around innovation—have played a central role in this evolution. Governments, universities, nonprofits, and private companies now host peace tech accelerators, each bringing their own framing of what “peace tech” means. Looking across these initiatives, we see the emergence of a diverse but connected ecosystem that, as of 2025, continues to grow with new conferences, alliances, and prizes…(More)”.
Paper by Lukas Linsi et al: “International organisations (IOs) collect and disseminate a wide array of statistics. In many cases, the phenomena these statistics seek to quantify defy precise measurement. Hard-to-measure statistics frequently represent ‘guesstimates’ rather than solid measures. Nonetheless, they are often presented as if they were perfectly reliable, precise estimates. What drives IOs to disseminate guesstimates and why are they frequently presented with seemingly excessive precision? To answer these questions, we adopt an ecological framework where IOs must pay attention to both internal and external audiences. The framework informs three mechanisms: Mock-precise guesstimates are fuelled by how organisations seek to attract attention to their work, signal scientific competence, and consolidate their professional standing. Empirically, we evaluate these mechanisms in the context of statistics on (illicit) trade, employing a mixed-methods approach that integrates elite interviews, expert surveys and a survey experiment. Our findings show how organisational and professional incentives lead to the use of mock precision. While the field we study is IOs, the underlying dynamics are of broader applicability. Not least, they raise questions about mock precision as an (un)scientific practice also commonly used by academic researchers in international political economy (IPE), economics and the wider social sciences…(More)”.
Report by BBVA Research: “…presents a novel method to track technological progress in real time using data from ArXiv, the world’s most popular preprint repository in computer science, physics and mathematics.
Key points
- Understanding the process of science creation is key to monitoring the rapid digital transformation. Using natural language processing (NLP) techniques, at BBVA Research we have developed a new set of technological indicators using data from ArXiv, the world’s most popular open-access preprint repository in physics, mathematics and computer science.
- Our monthly indicators track ArXiv preprint publications in computer science, physics, and mathematics to deliver a granular, real-time signal of research activity, and thus of technological progress and digital transformation. These metrics correlate strongly with traditional proxies of research activity, such as patents.
- Computer science preprints have exploded from under 10% of all ArXiv submissions in 2010 to nearly 50% in 2025, driven predominantly by AI research, which now represents about 70% of all computer science output. By contrast, research in physics and mathematics has grown in parallel at a steady rate.
- There was a turning point in the computer science paradigm around 2014, with the research focus shifting drastically from classic computing theory (e.g. information systems, data structures and numerical analysis) to AI-related disciplines (machine learning, computer vision, computational linguistics and others). Research on cryptography and security was the only subfield essentially unaffected by the AI boom.
- Scientific innovation plays a crucial role in shaping productivity trends, labor market dynamics, capital flows, investment strategies, and regulatory policies. These indicators provide valuable insights into our rapidly evolving world, enabling early detection of technological shocks and informing strategic investment decisions…(More)”.
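
For readers curious how indicators like these can be assembled, here is a toy sketch built on arXiv's public Atom API (its cat: and submittedDate: filters) and the feedparser library. The category list and date window are illustrative, and a production indicator such as BBVA's would presumably rely on bulk metadata and NLP classification rather than per-query counts.

```python
# Toy sketch: count monthly arXiv submissions per category via the public API.
# Illustrative only; a real indicator would use bulk metadata and aggregation.
import time
import urllib.parse
import feedparser

def monthly_count(category: str, start: str, end: str) -> int:
    """Total submissions in `category` between start/end (YYYYMMDDHHMM)."""
    query = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    url = ("https://export.arxiv.org/api/query?search_query="
           + urllib.parse.quote(query) + "&max_results=1")
    feed = feedparser.parse(url)
    return int(feed.feed.opensearch_totalresults)  # reported by the Atom feed

# Illustrative categories: three AI-adjacent CS fields vs. math and physics.
for cat in ["cs.LG", "cs.CV", "cs.CL", "math.AP", "physics.optics"]:
    n = monthly_count(cat, "202501010000", "202501312359")
    print(f"{cat:15s} {n:6d} submissions in January 2025")
    time.sleep(3)  # be polite: arXiv asks clients to rate-limit requests
```

Tracking such counts month over month, and normalizing by total submissions, yields the kind of category-share series described in the bullets above.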
Essay by Ed Bradon: “Systems thinking promises to give us a toolkit to design complex systems that work from the ground up. It fails because it ignores an important fact: systems fight back…
The systems that enable modern life share a common origin. The water supply, the internet, the international supply chains bringing us cheap goods: each began life as a simple, working system. The first electric grid was no more than a handful of electric lamps hooked up to a water wheel in Godalming, England, in 1881. It then took successive decades of tinkering and iteration by thousands of very smart people to scale these systems to the advanced state we enjoy today. At no point did a single genius map out the final, finished product.
But this lineage of (mostly) working systems is easily forgotten. Instead, we prefer a more flattering story: that complex systems are deliberate creations, the product of careful analysis. And, relatedly, that by performing this analysis – now known as ‘systems thinking’ in the halls of government – we can bring unruly ones to heel. It is an optimistic perspective, casting us as the masters of our systems and our destiny…(More)”.
Paper by John Wihbey and Samantha D’Alonzo: “…reviews and translates a broad array of academic research on “silicon sampling”—using Large Language Models (LLMs) to simulate public opinion—and offers guidance for practitioners, particularly those in communications and media industries, conducting message testing and exploratory audience-feedback research. Findings show LLMs are effective complements for preliminary tasks like refining surveys but are generally not reliable substitutes for human respondents, especially in policy settings. The models struggle to capture nuanced opinions and often stereotype groups due to training data bias and internal safety filters. Therefore, the most prudent approach is a hybrid pipeline that uses AI to improve research design while maintaining human samples as the gold standard for data. As the technology evolves, practitioners must remain vigilant about these core limitations. Responsible deployment requires transparency and robust validation of AI findings against human benchmarks. Based on the translational literature review we perform here, we offer a decision framework that can guide research integrity while leveraging the benefits of AI…(More)”.
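
To illustrate what "silicon sampling" and the recommended human-benchmark validation look like in code, here is a minimal, hypothetical sketch using the OpenAI Python client: an LLM answers a Likert-style item in the voice of a demographic persona, and the simulated distribution is compared to a human sample. The model name, personas, survey item, and benchmark numbers are all stand-in assumptions, not the paper's protocol.

```python
# Hypothetical "silicon sampling" sketch with a human-benchmark check.
# Stand-in assumptions throughout; not the protocol from the paper above.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = ("On a scale of 1 (strongly oppose) to 5 (strongly support), "
            "how do you feel about expanding public transit funding?")

def simulate_respondent(persona: str) -> int:
    """Ask the model to answer as the given persona; expect a 1-5 digit."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Answer survey questions as {persona}. "
                        "Reply with a single digit from 1 to 5."},
            {"role": "user", "content": QUESTION},
        ],
    )
    return int(resp.choices[0].message.content.strip()[0])

personas = ["a 68-year-old rural retiree", "a 24-year-old urban graduate student"]
silicon = Counter(simulate_respondent(p) for p in personas for _ in range(10))

# The validation step the paper stresses: compare against real human answers.
human = Counter({1: 3, 2: 4, 3: 5, 4: 5, 5: 3})  # stub benchmark data
n_s, n_h = sum(silicon.values()), sum(human.values())
tvd = 0.5 * sum(abs(silicon[k] / n_s - human[k] / n_h) for k in range(1, 6))
print(f"Total variation distance from human benchmark: {tvd:.2f}")
```

A large distance would flag exactly the failure mode the review warns about: personas collapsing into stereotypes rather than reproducing the spread of real opinion.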
Article by Stefaan Verhulst, Roeland Beerten and Johannes Jutting: “Declining survey responses, politically motivated dismissals, and accusations of “rigged” numbers point to a dangerous spiral where official statistics — the bedrock of evidence-based policy — become just another casualty of distrust in government. Below, we suggest a different path: moving beyond averages and aggregates toward more citizen-centric statistics that reflect lived realities, invite participation, and help rebuild the fragile trust between governments and the governed.

“What gets measured gets managed,” the adage goes. But what if what gets measured fails to reflect how people actually live, how they feel, and perhaps more importantly, what they care about? For too long, statistical agencies, the bedrock of evidence-based policymaking, have privileged averages over outliers, aggregates over anomalies, and the macro over the personal – in short, facts over feelings. The result? A statistical lens that often overlooks lived realities and held perceptions.
The strong emphasis on averages, national-level perspectives, and technocratic indicators always carried certain risks. In recent years, the phrase “You can’t eat GDP” has popped up with increasing frequency: neatly constructed technical indicators often clash with lived reality, as citizens discovered during the post-COVID years of persistently high inflation for basic goods. Policies that failed to address citizen concerns have fueled discontent and anger in significant parts of the population, paving the way for a surge of populist and anti-democratic parties in both rich and poor countries. In today’s era of polycrisis, there is a growing imperative for reimagined policy processes that innovate and regain citizen trust. For that, we need to reinvent what and how we collect, interpret, use and communicate the evidence base for policies. In short, we need more trustworthy statistical foundations.
The challenge, it is important to emphasize, isn’t merely technical. It is epistemological and democratic. We face a potential crisis of inclusion and accountability, in which the question is not only how to measure, but also who gets to decide what counts as knowledge. If statistics remain too narrowly focused on averages and aggregates, they risk alienating the very citizens they are meant to serve. The legitimacy of official statistics will increasingly depend on their ability to reflect lived realities, incorporate diverse perspectives, and communicate findings in ways that resonate with public experience. In what follows, we therefore argue that, if official statistics are to remain legitimate and trusted, they must evolve to include lived experiences — an approach that we call citizen-centric statistics…(More)”.
Article by Marianne Dhenin: “Big tech companies have played an outsize role in the war on Gaza since October 2023—from social media companies, who have been accused of systemic censorship of Palestine-related content, to Microsoft, Google, Amazon, and Palantir signing lucrative contracts to provide artificial intelligence (AI) and other technologies to the Israeli military.

Concerned with the industry’s role in the attacks on Gaza, Paul Biggar, founder of Darklang and of CircleCI, a startup turned billion-dollar technology company, launched Tech for Palestine in January 2024. The organization serves as an incubator, helping entrepreneurs whose projects support human rights for Palestinians develop and grow their businesses. “Our projects are, on the one hand, using tech for good, and on the other hand, addressing the systemic challenges around Israel in the tech industry,” Biggar says.
He got an insider’s look at how the technology industry equips the Israeli military during more than a decade as CEO and board member of the companies he founded. He was removed from the board of CircleCI after writing a blog post in December 2023 condemning industry bigwigs for “actively cheer[ing] on the genocide” in Palestine. At the time, the official death toll in the territory exceeded 18,600 people. That toll has since risen to over 60,000, and in August 2025 a United Nations-backed panel declared that famine is underway in the enclave.
Since its launch, Tech for Palestine has grown from a community of tech workers and other volunteers loosely organized on the communication platform Discord to a grant-making nonprofit that employs five full-time Palestinian engineers and supports 70 projects. It became a 501(c)(3) organization in December 2024, enabling it to solicit large private donations and source smaller ones through a donation link on its website, with the goal of scaling up to support 100 projects by the end of 2025.
Tech for Palestine’s most ambitious projects include Boycat, an app and browser extension that helps users identify products from companies that profit from human rights abuses in Palestine, and UpScrolled, an Instagram alternative that promises no shadow banning, no biased algorithms, and no favoritism. Meta, which owns Instagram and Facebook, has been found to censor content in support of Palestine on its platforms, according to an audit conducted by Human Rights Watch…(More)”.