
Stefaan Verhulst

Paper by Irene Hardill, Sophie Milnes, Sarah Mills & Rhys Dafydd Jones: “Governments worldwide are grappling with the advent of AI and its potential for governance, improving public services and impacts for its citizenry. This provocation outlines these unfolding geographies of governance. Specifically, we provide a critical lens into the themes of digital transformation, AI and navigating polycrisis, with potential wider global resonance and interest for geographers. To do so, we critically reflect on the potential impacts of AI on public service delivery focusing on civil society organisations offering advice services to citizens in Wales and England. We outline key considerations centred on difference and devolution; connecting with citizens; and discontent and trust…(More)”.

Civil society in crisis times: new geographies of governance in an era of AI

OECD Report: “Mission-oriented innovation policies (MOIPs) have become an important tool for addressing complex societal challenges, with more than 260 missions launched worldwide since the late 2010s. Their rapid expansion has raised both expectations and concerns, highlighting the need for stronger design and implementation strategies. This OECD report draws lessons from a year-long dialogue between policymakers and researchers, exploring how to frame missions, mobilise actors and resources, crowd in private investment and deliver on shared agendas. It offers examples and shared perspectives from those who both think and do missions, as well as a set of converging perspectives on the best practices around mission-oriented innovation policy…(More)”.

Forging New Frontiers in Mission‑Oriented Innovation Policies

Article by Tripp Mickle, Cade Metz, Dylan Freedman, Teresa Mondría Terol and Keith Collins: “A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi.

More than half of the accurate responses were “ungrounded,” meaning they linked to websites that did not completely support the information they provided. This makes it challenging to check AI Overviews’ accuracy.

Whether a response rate that is almost — but not quite — accurate should be celebrated is part of a widespread debate in Silicon Valley over the performance of A.I. systems. It speaks to the fundamental core of what we can trust online.

Some technologists argue that Google’s AI Overviews are reasonably accurate and that they have improved in recent months. But others worry that the average person may not realize those results need double-checking.

At the request of The New York Times, Oumi analyzed the accuracy of Google’s AI Overviews using a benchmark test called SimpleQA, which is widely used across the industry to measure the accuracy of A.I. systems. The start-up tested Google’s system in October, when the most complex questions were answered using an A.I. technology called Gemini 2, and then again in February, after it was upgraded to Gemini 3, a more powerful A.I. technology.

In both cases, Oumi’s analysis focused on 4,326 Google searches. The company found that the results were accurate 85 percent of the time with Gemini 2 and 91 percent of the time with Gemini 3…(More)”.
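The error volumes quoted in the article follow directly from the accuracy figures. A quick back-of-envelope check (using the article's figures of roughly 5 trillion searches a year and 91 percent accuracy, and ignoring that not every search triggers an AI Overview):

```python
# Back-of-envelope check of the error volumes quoted in the article.
# Inputs from the text: ~5 trillion searches/year, 91% accuracy (Gemini 3).
# Note: this assumes every search produces an AI Overview, which overstates
# the true volume; the article's own figures make the same simplification.
searches_per_year = 5e12
accuracy = 0.91

errors_per_year = searches_per_year * (1 - accuracy)
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"errors/hour:   {errors_per_hour:,.0f}")    # ~51 million
print(f"errors/minute: {errors_per_minute:,.0f}")  # ~856 thousand
```

Even at 91 percent accuracy, the sheer query volume yields "tens of millions of erroneous answers every hour," consistent with the article's framing.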

How Accurate Are Google’s A.I. Overviews?

Article by Al Landes: “Trying to track down that deleted tweet or verify what a website actually said last month? The Internet Archive’s Wayback Machine has served as your digital time machine for three decades, preserving over a trillion web pages. But this critical tool—the only public archive of its scale—now faces systematic blocking from major media outlets who fear their content might train AI competitors.

Twenty-Three Major News Sites Have Gone Dark

The New York Times, USA Today’s 200+ outlets, and Reddit have all cut off the preservation tool that journalists rely on.

According to Originality AI analysis, 23 major news sites currently block ia_archiverbot, the Wayback Machine’s web crawler. USA Today Co. operates over 200 media outlets, making its blocking decision particularly devastating. The Guardian takes a more subtle approach—allowing crawling but filtering archived content from public access. When you search for historical Guardian articles, you’ll hit digital dead ends.
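The blocking described above is typically done through a site's robots.txt file, which names a crawler and denies it access. A hypothetical excerpt (the article calls the crawler "ia_archiverbot"; the Internet Archive's crawler has historically identified itself as "ia_archiver" — the exact token a given site uses may differ):

```
# Hypothetical robots.txt excerpt denying the Wayback Machine's crawler
# access to the entire site. The user-agent token is an assumption.
User-agent: ia_archiver
Disallow: /
```

Because robots.txt is voluntary, this works only against crawlers that honor the Robots Exclusion Protocol, which the Internet Archive's crawler does.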

Publishers Cite AI Training Fears

The New York Times claims archived content violates copyright law, though details remain murky.

Publishers justify blocking with two arguments: preventing AI companies from training on their archived content, and general anti-scraping measures. The Times stated that archived content is being used “to directly compete with us,” but declined to specify whether this represents documented violations or hypothetical concerns. USA Today Co. frames its blocking as routine bot prevention, though the impact falls hardest on preservation efforts…(More)”.

23 Major News Sites Have Blocked the Wayback Machine

Article by Norman Sadeh: “Artificial intelligence (AI), especially the new generation of increasingly autonomous, agentic AI systems, has triggered understandable concerns about privacy. These systems can read our email messages, draft our documents, navigate our calendars, answer our questions, and even act on our behalf. They observe, analyze, and infer, often continuously. They can derive sensitive attributes from our digital traces and, with growing autonomy, sometimes initiate actions based on these inferences. Many of the privacy fears surrounding AI are real. But as paradoxical as it may seem, AI, including agentic AI, is also becoming essential to protecting privacy.

This column argues that without AI, adequate privacy has become simply out of reach. This is not because AI is benign; it most definitely is not. Rather, the modern digital ecosystem has evolved to a point where no human, unaided, can understand, monitor, or manage the complexity of today’s data practices. For two decades, my collaborators and I have studied why people struggle to manage their privacy and why this struggle keeps getting worse despite decades of regulation, policy work, and advances in privacy enhancing technologies. From mobile applications and IoT devices to location-sharing, video analytics, websites, and AI chatbots, we have found the same underlying truth: privacy is too dynamic, too contextual, and too cognitively demanding for people to manage manually. When it comes to managing privacy, AI is not merely helpful. It is indispensable…(More)”.

No Privacy without AI

Book by Silvia Danielak: “Roads, bridges, a renewable power plant, and an electricity grid: UN peacekeepers might be unusual infrastructure builders, but they’re certainly not unambitious. Since the beginning of the UN’s peacekeeping activities after the end of World War II, the Blue Helmets have cemented streets, constructed bridges, and dug wells in conflict zones. But how did the military arm of the world’s primary diplomatic forum become involved in such activities in its quest for peace, and with what consequences? Peace Infrastructures analyzes the turn to ever-more-complex infrastructure projects, from early road building via urban community projects to the commissioning of entire renewable power plants, in the context of an evolving understanding of peace “problems” and solutions. Tracing the global travel of policies, technologies, and expertise, Silvia Danielak investigates how the shift toward risk management, legacy, and climate security was driven by, and materialized in, conflict zones, shaping the very idea of peace.

The book critically engages with the UN’s ambition to insert itself in the sustainable development of the countries it seeks to assist, arguing that we need to consider peace operations’ spatial, urban, and material ways of engagement—especially in the face of mounting climate risks. Infrastructure is poised to take a more prominent position within peace operations, but a more nuanced understanding that recognizes its opportunities, as well as its potential for violence, is required…(More)”.

Peace Infrastructures

Report by the World Economic Forum: “Intergenerational foresight is about making better decisions today by thinking seriously about the long term. Drawing on insights from experts around the world, this handbook demonstrates that credible leadership means taking responsibility for the future, rather than treating it as distant or abstract.

It sets out practical principles, tools and real-world examples to help leaders spot risks, opportunities and trade-offs across generations, and build long-term thinking into policy, strategy and governance.

As the world grows more uncertain and some decisions become harder to reverse, the ability to think and act across generations will matter more than ever. This handbook shows how…(More)”.

Intergenerational Foresight: An Approach for Long-Term Responsibility in Governance

Blog by Kate Murray: “Released in February 2026 as a product of the C2PA for G+LAM Community of Practice, the white paper “Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community” (download PDF) advocates for libraries, archives, and museums (LAMs) to take proactive and pragmatic steps to ensure that digital collections content, especially content impacted by AI at any point in its lifecycle, remains authentic, transparent, and verifiable from creation through access in order to meet the LAMs community’s mission of public trust.

While content authenticity and provenance (CAP) have long been archival principles, existing processes are increasingly impacted, or have the potential to be impacted, by AI-mediated workflows. There are growing expectations from researchers, donors, the public and heritage practitioners to document these impacts comprehensively and consistently by expanding traditional content authenticity and provenance data to address AI impacts.

This is a critical and decisive moment. The impact of AI on collections imposes risks that demand thoughtful collective attention. Even with the best intentions of maintaining transparency with CAP data, AI technologies introduce novel ethical, legal, and privacy threats. At the same time, AI is transforming, in real-time, the creation, organization and analysis of data at a pace that defies the LAMs community’s traditionally deliberative response to change…(More)”.
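The white paper's call to extend provenance data to document AI involvement can be made concrete with a small sketch. The record below is purely illustrative (it is not the actual C2PA manifest schema, and all identifiers and field names are assumptions): a minimal provenance record a LAM repository might attach to a digital object, with an explicit flag wherever AI touched the item's lifecycle.

```python
import json
from datetime import datetime, timezone

# Illustrative provenance record (NOT the real C2PA manifest schema).
# Each lifecycle action is an assertion; AI-mediated steps are flagged
# explicitly so the AI impact stays documented from creation to access.
record = {
    "object_id": "example-collection/item-0001",  # hypothetical identifier
    "created": datetime(2024, 5, 1, tzinfo=timezone.utc).isoformat(),
    "assertions": [
        {"action": "digitized", "agent": "scanner-operator",
         "ai_involved": False},
        {"action": "ocr_text_extraction", "agent": "ocr-service",
         "ai_involved": True, "model": "example-ocr-model"},
    ],
}

print(json.dumps(record, indent=2))
```

The point is the principle, not the schema: every AI-mediated step leaves an explicit, machine-readable trace alongside the traditional provenance chain.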

Content Authenticity and Provenance in the Age of Artificial Intelligence

Article by John Burn-Murdoch: “…Last year I used detailed data on the ideological positions of people who post on social media to show that they over-represent the radical right and left, confirming the polarisation hypothesis. Over the past week I have used the same dataset of tens of thousands of responses to questions on policy preferences and sociopolitical beliefs to test whether and how the most widely used AI chatbots shape conversations about politics and society. The results strongly support the theory of AI chatbots as depolarising and technocratising.

I found that while different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. On average, Grok guides conversations about policy and society towards the centre-right — a rightward push for most people but a moderating nudge towards the centre for those who start out as conservative hardliners. OpenAI’s GPT, Google’s Gemini and the Chinese model DeepSeek all exert similarly sized nudges towards a centre-left worldview — a slight leftward nudge for most people but a moderating push away from fringe leftwing positions.

Importantly, this remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. Even when the AI bots know a user’s political leanings, conversations with LLMs still direct hardline partisans on both flanks away from extreme beliefs on average.

In addition, I found that while conspiratorial beliefs about topics including rigged elections and a link between vaccines and autism are over-represented among people who post to social media relative to the overall population, the opposite is true of AI chatbots, which almost never express agreement with these claims…(More)”.
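The "moderating nudge" the column measures can be illustrated with a toy computation: place each respondent on a left-right scale before and after chatbot conversations, then compare average distance from the centre. All numbers below are synthetic, purely to show the calculation, not the article's actual data or method.

```python
# Toy sketch of measuring a depolarising nudge. Positions run from
# -1 (far left) to +1 (far right); all values here are made up.
before = [-0.9, -0.4, 0.1, 0.5, 0.95]  # hypothetical starting positions
after = [-0.6, -0.3, 0.15, 0.4, 0.6]   # hypothetical positions after chatting

def mean_extremity(positions):
    """Average absolute distance from the ideological centre (0)."""
    return sum(abs(p) for p in positions) / len(positions)

shift = mean_extremity(before) - mean_extremity(after)
print(f"mean extremity before: {mean_extremity(before):.2f}")
print(f"mean extremity after:  {mean_extremity(after):.2f}")
print(f"moderation effect:     {shift:+.2f}")  # positive = net depolarising
```

A positive shift means respondents moved toward the centre on average, which is the pattern the column reports across all four chatbots, even though the direction of the nudge (centre-left vs centre-right) differed by platform.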

Social media is populist and polarising; AI may be the opposite

Paper by Lucia Velasco, et al: “Artificial intelligence is rapidly becoming a foundational layer of the global economy, with projections indicating that the AI market will reach $4.8 trillion by 2033 – approximately the size of Germany’s entire economy. Yet this transformation is unfolding with stark inequality. While advanced economies aggressively invest in local AI capacity and infrastructure, low- and lower-middle-income countries (LLMICs) face systemic barriers that threaten to lock them into technological dependency. Recent announcements from governments and major technology firms show large AI funding commitments, but unclear governance and poor coordination risk turning these investments into deeper global AI inequality rather than lasting domestic capacity…(More)”.

Financing the AI Triad: Compute, Data and Algorithms – A Framework to Build Local Ecosystems
