Stefaan Verhulst

Book by John P. Wihbey: “The large, corporate global platforms networking the world’s publics now host most of the world’s information and communication. Much has been written about social media platforms, and many have argued for platform accountability, responsibility, and transparency. But relatively few works have tried to place platform dynamics and challenges in the context of history, especially with an eye toward sensibly regulating these communications technologies. In Governing Babel, John Wihbey considers the ongoing, high-stakes debate over social media platforms and free speech, and how these companies ought to manage their tremendous power.

Wihbey takes readers on a journey into the high-pressure and controversial world of social media content moderation, looking at issues through relevant cultural, legal, historical, and global lenses. The book addresses a vast challenge—how to create new rules to deal with the ills of our communications and media systems—but the central argument it develops is relatively simple. The idea is that those who create and manage systems for communications hosting user-generated content have both a responsibility to look after their platforms and a duty to respond to problems. They must, in effect, adopt a central response principle that allows their platforms to take reasonable action when potential harms present themselves. And finally, they should be judged, and subject to sanction, according to the good faith and persistence of their efforts…(More)”.

Governing Babel: The Debate over Social Media Platforms and Free Speech—and What Comes Next

Article by Eli Tan: “The devices are part of an industry known as precision farming, a data-driven approach for optimizing production that is booming with the addition of A.I. and other technologies. Last year, the livestock-monitoring industry alone was valued at more than $5 billion, according to Grand View Research, a market research firm.

Farmers have long used technology to collect and analyze data, with the origins of precision farming dating to the 1990s. In the early 2000s, satellite imagery changed the way farmers determined crop schedules, as did drones and eventually sensors in the fields. Nowadays, if you drive by farms in places like California’s Central Valley, you may not see any humans at all. It is not just dairy farms seeing a change. Elsewhere in the Central Valley, which produces about half of America’s fruits and vegetables, autonomous tractors that use the same sensors as robot taxis and apple-picking tools with A.I. cameras have become popular.

The most common precision farming technologies, like GPS maps that track crop yields and auto-steering tractors, are used by 70 percent of large farms today, up from less than 10 percent in the early 2000s, according to Department of Agriculture data.

“The tech is getting faster every year,” said Deepak Joshi, a professor of precision agriculture at Kansas State University. “It used to be we’d have new technologies every few years, and now it’s every six months.” The new products are helping farmers reduce costs as tariffs and inflation raise the prices of farm equipment and feed. They also allow farmers to do more work with fewer people as the Trump administration cracks down on illegal immigration.

Precision farming has boomed as the costs of cameras and sensors have plunged and A.I. models that analyze data have improved, said Charlie Wu, the founder of Orchard Robotics, a farming robotics start-up in San Francisco. Much like the rest of the A.I. industry, agricultural technology is now anchored by chips made by Nvidia, he said…(More)”.

Cows Wear High-Tech Collars Now

Book by Sigge Winther Nielsen: “The wicked problems of our time – climate change, migration, inequality, productivity, and mental health – remain stubbornly unresolved in Western democracies. It’s time for a new way to puzzle.

This book takes the reader on a journey from London to Berlin, and from Rome to Washington DC and Copenhagen, to discover why modern politics keeps breaking down. We’re tackling twenty-first-century problems with twentieth-century ideas and nineteenth-century institutions.

In search of answers, the author gate-crashes a Conservative party at an English estate, visits a stressed-out German minister at his Mallorcan holiday home, and listens in on power gossip in Washington DC’s restaurant scene. The Puzzle State offers one of the most engaging and thoughtful analyses of Western democracies in recent years. Based on over 100 interviews with top political players, surveys conducted across five countries, and stories of high-profile policy failures, it uncovers a missing link in reform efforts: the absence of ongoing feedback loops between decision-makers and frontline practitioners in schools, hospitals, and companies.

But there is a way forward. The author introduces the concept of the Puzzle State, which uses collective intelligence and AI to (re)connect politicians with the people implementing reforms on the ground. The Puzzle State tackles high-profile wicked problems and enables policies to adapt as they meet messy realities. No one holds all the pieces in the Puzzle State. The feedback loop must cut across all sectors – from civil society to corporations – just like solving a complex puzzle requires commitment, cooperation, and creativity…(More)”.

The Puzzle State: How to Govern Wicked Problems in Western Democracies 

Article by Jenifer Whiston: “At Free Law Project, we believe the law belongs to everyone. But for too long, the information needed to understand and use the law—especially in civil rights litigation—has been locked behind paywalls, scattered across jurisdictions, or buried in technical complexity.

That’s why we teamed up with the Civil Rights Litigation Clearinghouse on an exploratory grant from Arnold Ventures: to see how artificial intelligence could help unlock these barriers, making court documents more accessible and legal research more open and accurate.

What We Learned

This six-month project gave us the opportunity to experiment boldly. Together, we researched and tested AI-based approaches and tools to:

  • Classify legal documents into categories like motions, orders, or opinions.
  • Summarize filings and entire cases, turning dense legal text into plain-English explanations.
  • Generate structured metadata to make cases easier to track and analyze.
  • Trace the path of a legal case as it moves through the court system.
  • Enable semantic search, allowing users to type questions in everyday language and find relevant cases.
  • Build the foundation of an open-source citator, so researchers and advocates can see when cases are overturned, affirmed, or otherwise impacted by later decisions.

These prototypes showed us not only what’s possible, but what’s practical. By testing real AI models on real data, we proved that tools like these can responsibly scale to improve the infrastructure of justice…(More)”.
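
A minimal sketch of the semantic-search idea from the list above, assuming an off-the-shelf sentence-embedding model (sentence-transformers) and a few invented case summaries; this illustrates the general approach rather than Free Law Project's actual implementation:

```python
# Hypothetical example: embed short case summaries and rank them against a
# plain-English question. Model choice and case data are illustrative only.
from sentence_transformers import SentenceTransformer

cases = {
    "Doe v. Agency": "Class action challenging prolonged detention without bond hearings.",
    "Smith v. City": "Suit alleging unconstitutional conditions of confinement in a municipal jail.",
    "Roe v. Board": "First Amendment challenge to a school district's disciplinary policy.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
case_vecs = model.encode(list(cases.values()), normalize_embeddings=True)

query = "Which cases deal with jail or detention conditions?"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Vectors are L2-normalized, so a dot product gives the cosine similarity.
scores = case_vecs @ query_vec
for name, score in sorted(zip(cases, scores), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {name}")
```

In a production setting the embeddings would be precomputed and stored in a vector index; the ranking step above is what sits behind a plain-language search box.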

Opening Doors with AI: How Free Law Project and the Civil Rights Litigation Clearinghouse Are Reimagining Legal Research

Report by Rainer Kattel et al: “The four-stage approach for assessing city government dynamic capabilities reflects the complexity of city governments. It blends rigour with usefulness, striving to be both robust and usable, adding value to city governments and those that support their efforts. This report summarises how to assess and compare dynamic capabilities across city governments, provides an assessment of dynamic capabilities in a selected sample set of cities, and sets out future work that will explore how to create an index that can be scaled to thousands of cities and used to effectively build capabilities, positively transforming cities and creating better lives for residents…(More)”.

Assessing Dynamic Capabilities in City Governments: Creating a Public Sector Capabilities Index

Blog by the Stanford Peace Innovation Lab: “…In the years since, the concept of peace technology has expanded globally. Accelerators—structures that concentrate resources, mentorship, and networks around innovation—have played a central role in this evolution. Governments, universities, nonprofits, and private companies now host peace tech accelerators, each bringing their own framing of what “peace tech” means. Looking across these initiatives, we see the emergence of a diverse but connected ecosystem that, as of 2025, continues to grow with new conferences, alliances, and prizes…(More)”.

From Concept to Ecosystem: The Evolution of PeaceTech Accelerators

Paper by Lukas Linsi et al: “International organisations (IOs) collect and disseminate a wide array of statistics. In many cases, the phenomena these statistics seek to quantify defy precise measurement. Hard-to-measure statistics frequently represent ‘guesstimates’ rather than solid measures. Nonetheless, they are often presented as if they were perfectly reliable, precise estimates. What drives IOs to disseminate guesstimates and why are they frequently presented with seemingly excessive precision? To answer these questions, we adopt an ecological framework where IOs must pay attention to both internal and external audiences. The framework informs three mechanisms: mock-precise guesstimates are fuelled by how organisations seek to attract attention to their work, signal scientific competence, and consolidate their professional standing. Empirically, we evaluate these mechanisms in the context of statistics on (illicit) trade, employing a mixed methods approach that integrates elite interviews, expert surveys and a survey experiment. Our findings show how organisational and professional incentives lead to the use of mock precision. While the field we study is IOs, the underlying dynamics are of broader applicability. Not least, they raise questions about mock precision as an (un)scientific practice also commonly used by academic researchers in international political economy (IPE), economics and the wider social sciences…(More)”.

Governing through guesstimates: mock precision in international organisations

Report by BBVA Research: “…presents a novel method to track technological progress in real time using data from ArXiv, the world’s most popular preprint repository in computer science, physics and mathematics.

Key points

  • Understanding the process of science creation is key to monitoring the rapid digital transformation. Using natural language processing (NLP) techniques, at BBVA Research we have developed a new set of technological indicators using data from ArXiv, the world’s most popular open-access preprint repository in physics, mathematics and computer science.
  • Our monthly indicators track ArXiv preprint publications in computer science, physics, and mathematics to deliver a granular and real-time signal of research activity, and thus of technological progress and digital transformation. These metrics correlate strongly with traditional proxies of research activity such as patents.
  • Computer science preprints have exploded from under 10% of all ArXiv submissions in 2010 to nearly 50% in 2025, driven predominantly by AI research, which now represents about 70% of all computer science output. Research in physics and mathematics, by contrast, has continued to grow at a steady rate.
  • There was a turning point in the computer science paradigm around 2014, when the research focus shifted sharply from classic computing theory (e.g. information systems, data structures and numerical analysis) to AI-related disciplines (machine learning, computer vision, computational linguistics and others). Research on cryptography and security was the only subfield essentially unaffected by the AI boom.
  • Scientific innovation plays a crucial role in shaping productivity trends, labor market dynamics, capital flows, investment strategies, and regulatory policies. These indicators provide valuable insights into our rapidly evolving world, enabling early detection of technological shocks and informing strategic investment decisions…(More)”.
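
As a rough illustration of how such category-level indicators can be assembled (not BBVA Research's actual pipeline), the public ArXiv API can be polled for submission counts per category and time window; the category codes and dates below are examples:

```python
# Illustrative sketch: count ArXiv preprints per category and month via the
# public API. A time series of such counts is one crude building block for a
# preprint-based research-activity indicator.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OPENSEARCH = "{http://a9.com/-/spec/opensearch/1.1/}"

def monthly_count(category: str, start: str, end: str) -> int:
    """Number of preprints in `category` submitted between start and end (YYYYMMDDHHMM)."""
    query = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    url = ("http://export.arxiv.org/api/query?"
           + urllib.parse.urlencode({"search_query": query, "max_results": 0}))
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return int(root.find(OPENSEARCH + "totalResults").text)

# Example: AI versus combinatorics submissions in January 2025
for cat in ("cs.AI", "math.CO"):
    print(cat, monthly_count(cat, "202501010000", "202501312359"))
```

Aggregating such counts by parent archive (computer science, mathematics, physics) and normalizing by total submissions yields the category shares discussed above.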

Measuring Technological Progress in Real Time with ArXiv

Essay by Ed Bradon: “Systems thinking promises to give us a toolkit to design complex systems that work from the ground up. It fails because it ignores an important fact: systems fight back…

The systems that enable modern life share a common origin. The water supply, the internet, the international supply chains bringing us cheap goods: each began life as a simple, working system. The first electric grid was no more than a handful of electric lamps hooked up to a water wheel in Godalming, England, in 1881. It then took successive decades of tinkering and iteration by thousands of very smart people to scale these systems to the advanced state we enjoy today. At no point did a single genius map out the final, finished product.

But this lineage of (mostly) working systems is easily forgotten. Instead, we prefer a more flattering story: that complex systems are deliberate creations, the product of careful analysis. And, relatedly, that by performing this analysis – now known as ‘systems thinking’ in the halls of government – we can bring unruly ones to heel. It is an optimistic perspective, casting us as the masters of our systems and our destiny…(More)”.

Magical systems thinking

Paper by John Wihbey and Samantha D’Alonzo: “…reviews and translates a broad array of academic research on “silicon sampling”—using Large Language Models (LLMs) to simulate public opinion—and offers guidance for practitioners, particularly those in communications and media industries, conducting message testing and exploratory audience-feedback research. Findings show LLMs are effective complements for preliminary tasks like refining surveys but are generally not reliable substitutes for human respondents, especially in policy settings. The models struggle to capture nuanced opinions and often stereotype groups due to training data bias and internal safety filters. Therefore, the most prudent approach is a hybrid pipeline that uses AI to improve research design while maintaining human samples as the gold standard for data. As the technology evolves, practitioners must remain vigilant about these core limitations. Responsible deployment requires transparency and robust validation of AI findings against human benchmarks. Based on the translational literature review we perform here, we offer a decision framework that can guide research integrity while leveraging the benefits of AI…(More)”.
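
For readers unfamiliar with the practice, a hedged sketch of what "silicon sampling" looks like in code: an LLM is prompted to answer a survey item in the voice of a specified persona, here to pre-test question wording rather than to replace human respondents. The model name, client library, and personas are assumptions for illustration, not the authors' setup:

```python
# Hypothetical sketch of persona-conditioned "silicon sampling" used to
# pre-test a survey question. A design aid, not a substitute for human samples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulate_response(persona: str, question: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": (f"Answer as the following survey respondent: {persona}. "
                         "Reply with 'support', 'oppose', or 'unsure', plus one sentence.")},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

question = "Do you support stricter content-moderation rules for social media platforms?"
for persona in ("a 68-year-old rural small-business owner",
                "a 24-year-old urban graduate student"):
    print(persona, "->", simulate_response(persona, question))
```

Consistent with the paper's guidance, any patterns surfaced this way would still need validation against human benchmarks before being treated as evidence.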

AI Simulations of Audience Attitudes and Policy Preferences: “Silicon Sampling” Guidance for Communications Practitioners 
