Stefaan Verhulst

Article by Norman Sadeh: “Artificial intelligence (AI), especially the new generation of increasingly autonomous, agentic AI systems, has triggered understandable concerns about privacy. These systems can read our email messages, draft our documents, navigate our calendars, answer our questions, and even act on our behalf. They observe, analyze, and infer, often continuously. They can derive sensitive attributes from our digital traces and, with growing autonomy, sometimes initiate actions based on these inferences. Many of the privacy fears surrounding AI are real. But as paradoxical as it may seem, AI, including agentic AI, is also becoming essential to protecting privacy.

This column argues that without AI, adequate privacy is now simply out of reach. This is not because AI is benign; it most definitely is not. Rather, the modern digital ecosystem has evolved to a point where no human, unaided, can understand, monitor, or manage the complexity of today’s data practices. For two decades, my collaborators and I have studied why people struggle to manage their privacy and why this struggle keeps getting worse despite decades of regulation, policy work, and advances in privacy-enhancing technologies. From mobile applications and IoT devices to location-sharing, video analytics, websites, and AI chatbots, we have found the same underlying truth: privacy is too dynamic, too contextual, and too cognitively demanding for people to manage manually. When it comes to managing privacy, AI is not merely helpful. It is indispensable…(More)”.
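
Sadeh’s research group has prototyped personalized privacy assistants that learn a user’s preferences and recommend permission settings on their behalf. As a minimal, hypothetical sketch of that idea (the data, feature scheme, and model choice below are invented for illustration and are not Sadeh’s actual system):

```python
# Toy "privacy assistant": learn a user's past allow/deny decisions on
# app-permission requests, then recommend settings for new requests.
# Hypothetical data and features for illustration only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each record: app category, permission requested, whether the app is in
# the foreground -- paired with the user's decision.
history = [
    ({"category": "maps",   "permission": "location", "foreground": True},  "allow"),
    ({"category": "game",   "permission": "location", "foreground": False}, "deny"),
    ({"category": "social", "permission": "contacts", "foreground": True},  "deny"),
    ({"category": "maps",   "permission": "location", "foreground": False}, "deny"),
    ({"category": "social", "permission": "camera",   "foreground": True},  "allow"),
    ({"category": "game",   "permission": "contacts", "foreground": False}, "deny"),
]

features, decisions = zip(*history)
vec = DictVectorizer()                      # one-hot encodes categorical features
X = vec.fit_transform(features)
model = LogisticRegression().fit(X, decisions)

# Recommend a setting for a permission request the user has not seen before.
request = {"category": "game", "permission": "camera", "foreground": True}
label = model.predict(vec.transform([request]))[0]
proba = model.predict_proba(vec.transform([request]))[0]
print(f"recommendation: {label} (confidence {max(proba):.2f})")
```

The point of the sketch is the division of labor: the human supplies a handful of decisions, and the assistant generalizes them across the long tail of requests that no one has time to review manually.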

No Privacy without AI

Book by Silvia Danielak: “Roads, bridges, a renewable power plant, and an electricity grid: UN peacekeepers might be unusual infrastructure builders, but they’re certainly not unambitious. Since the beginning of the UN’s peacekeeping activities after the end of World War II, the Blue Helmets have cemented streets, constructed bridges, and dug wells in conflict zones. But how did the military arm of the world’s primary diplomatic forum become involved in such activities in its quest for peace, and with what consequences? Peace Infrastructures analyzes the turn to ever-more-complex infrastructure projects, from early road building via urban community projects to the commissioning of entire renewable power plants, in the context of an evolving understanding of peace “problems” and solutions. Tracing the global travel of policies, technologies, and expertise, Silvia Danielak investigates how the shift toward risk management, legacy, and climate security was driven by, and materialized in, conflict zones, shaping the very idea of peace.

The book critically engages with the UN’s ambition to insert itself in the sustainable development of the countries it seeks to assist, arguing that we need to consider peace operations’ spatial, urban, and material ways of engagement—especially in the face of mounting climate risks. Infrastructure is poised to take a more prominent position within peace operations, but a more nuanced understanding that recognizes its opportunities, as well as its potential for violence, is required…(More)”.

Peace Infrastructures

Report by the World Economic Forum: “Intergenerational foresight is about making better decisions today by thinking seriously about the long term. Drawing on insights from experts around the world, this handbook demonstrates that credible leadership means taking responsibility for the future, rather than treating it as distant or abstract.

It sets out practical principles, tools and real-world examples to help leaders spot risks, opportunities and trade-offs across generations, and build long-term thinking into policy, strategy and governance.

As the world grows more uncertain and some decisions become harder to reverse, the ability to think and act across generations will matter more than ever. This handbook shows how…(More)”.

Intergenerational Foresight: An Approach for Long-Term Responsibility in Governance

Blog by Kate Murray: “Released in February 2026 as a product of the C2PA for G+LAM Community of Practice, the white paper “Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community” advocates for libraries, archives, and museums (LAMs) to take proactive and pragmatic steps to ensure that digital collections content, especially content impacted by AI at any point in its lifecycle, remains authentic, transparent, and verifiable from creation through access, in order to meet the LAMs community’s mission of public trust.

While content authenticity and provenance (CAP) have long been archival principles, existing processes are increasingly impacted, or have the potential to be impacted, by AI-mediated workflows. Researchers, donors, the public, and heritage practitioners increasingly expect these impacts to be documented comprehensively and consistently, by extending traditional content authenticity and provenance data to account for AI.

This is a critical and decisive moment. The impact of AI on collections imposes risks that demand thoughtful collective attention. Even with the best intentions of maintaining transparency with CAP data, AI technologies introduce novel ethical, legal, and privacy threats. At the same time, AI is transforming, in real-time, the creation, organization and analysis of data at a pace that defies the LAMs community’s traditionally deliberative response to change…(More)”.
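
C2PA-style provenance binds claims about content to the content itself through cryptographic hashes carried in signed manifests. As a rough, hypothetical sketch of that underlying idea (a toy record structure, not the actual C2PA manifest format or any real library’s API):

```python
# Simplified illustration of hash-bound provenance in the spirit of C2PA:
# each manifest entry records an action (e.g., an AI-mediated step) together
# with the SHA-256 of the asset after that step, so later tampering is
# detectable. A toy structure, not the real C2PA manifest format.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_entry(manifest: list, asset: bytes, action: str, agent: str) -> None:
    """Append an entry binding the action to the asset's current hash."""
    manifest.append({"action": action, "agent": agent, "asset_hash": sha256(asset)})

def verify(manifest: list, asset: bytes) -> bool:
    """Check the asset against the hash in the most recent entry."""
    return bool(manifest) and manifest[-1]["asset_hash"] == sha256(asset)

manifest: list = []
asset = b"scanned page image bytes"
add_entry(manifest, asset, "digitized", agent="scanner-v2")      # hypothetical agent name

asset = b"scanned page image bytes, OCR-corrected"               # AI-mediated step
add_entry(manifest, asset, "ocr_correction", agent="ocr-model-x")  # hypothetical model name

print(json.dumps(manifest, indent=2))
print("verifies:", verify(manifest, asset))               # True
print("tampered:", verify(manifest, b"altered bytes"))    # False
```

A real manifest also cryptographically signs each claim, so consumers can verify who asserted a processing step, not merely that the asset is unchanged since the step was recorded.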

Content Authenticity and Provenance in the Age of Artificial Intelligence

Article by John Burn-Murdoch: “…Last year I used detailed data on the ideological positions of people who post on social media to show that they over-represent the radical right and left, confirming the polarisation hypothesis. Over the past week I have used the same dataset of tens of thousands of responses to questions on policy preferences and sociopolitical beliefs to test whether and how the most widely used AI chatbots shape conversations about politics and society. The results strongly support the theory of AI chatbots as depolarising and technocratising.

I found that while different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. On average, Grok guides conversations about policy and society towards the centre-right — a rightward push for most people but a moderating nudge towards the centre for those who start out as conservative hardliners. OpenAI’s GPT, Google’s Gemini and the Chinese model DeepSeek all exert similarly sized nudges towards a centre-left worldview — a slight leftward nudge for most people but a moderating push away from fringe leftwing positions.

Importantly, this remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. Even when the AI bots know a user’s political leanings, conversations with LLMs still direct hardline partisans on both flanks away from extreme beliefs on average.

In addition, I found that while conspiratorial beliefs about topics including rigged elections and a link between vaccines and autism are over-represented among people who post to social media relative to the overall population, the opposite is true of AI chatbots, which almost never express agreement with these claims…(More)”.
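
To make the methodology concrete, one minimal way to quantify such a nudge is to compare respondents’ position scores before and after chatbot conversations. The sketch below uses invented numbers on a left-right scale from -1 to +1; the article’s dataset and method are far richer:

```python
# Toy illustration of measuring a chatbot's ideological "nudge": compare
# users' position scores (-1 = far left, +1 = far right) before and after
# conversations. All numbers are invented for illustration.
from statistics import mean

# (score_before, score_after) pairs for one hypothetical platform
sessions = [(-0.9, -0.6), (-0.4, -0.3), (0.1, 0.0), (0.5, 0.3), (0.95, 0.6)]

overall_nudge = mean(after - before for before, after in sessions)

# Moderation: did the user move toward the centre (|after| < |before|)?
moderated = sum(abs(after) < abs(before) for before, after in sessions)

print(f"mean shift: {overall_nudge:+.2f} (negative = leftward)")
print(f"{moderated}/{len(sessions)} sessions moved toward the centre")
```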

Social media is populist and polarising; AI may be the opposite

Paper by Lucia Velasco, et al: “Artificial intelligence is rapidly becoming a foundational layer of the global economy, with projections indicating that the AI market will reach $4.8 trillion by 2033 – approximately the size of Germany’s entire economy. Yet this transformation is unfolding with stark inequality. While advanced economies aggressively invest in local AI capacity and infrastructure, low- and lower-middle-income countries (LLMICs) face systemic barriers that threaten to lock them into technological dependency. Recent announcements from governments and major technology firms show large AI funding commitments, but unclear governance and poor coordination risk turning these investments into deeper global AI inequality rather than lasting domestic capacity…(More)”.

Financing the AI Triad: Compute, Data and Algorithms – A Framework to Build Local Ecosystems

Article by Valerie Wirtschafter: “On February 28, 2026, a joint U.S.-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially dubbed Operation Epic Fury. Social media was quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on U.S. warships, and satellite imagery purporting to show damage to American military bases in the Gulf.

Some of this footage was recycled from unrelated conflicts, including in Ukraine, and even from video games. Yet some of it was entirely fabricated, created with now-ubiquitous generative artificial intelligence (AI) tools that can produce ever more realistic content at scale. Several observers of the space emphasized the unprecedented volume of AI-generated content and its increasing sophistication.

While much has been written about the potential for AI-generated imagery, videos, and audio to flood the information ecosystem and make it increasingly difficult to parse what is true, AI content has previously made up only a small portion of the misleading content circulating across the web. During 2024, which was deemed “the year of the elections,” AI-generated content—while present—did not derail electoral processes around the world. And in the early days of the Israel-Hamas war, AI content was again present, but it represented just a small fraction of the overall misleading claims and recycled imagery circulating online. Does the ongoing conflict in Iran truly represent a significant leap in AI-generated imagery? And if so, what might explain such a meaningful shift?…(More)”.

Generative AI as a weapon of war in Iran

Article by Matteo Wong: “Anthropic says that for the past several weeks it has secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack capability is typically available only to elite, state-sponsored hacking cells in a handful of countries, including China, Russia, and the United States. Now it’s in the hands of a private company.

On Tuesday, the company officially announced the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan their software for bugs and exploits and to fix them. Beyond that consortium, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous…(More)”.

Claude Mythos Is Everyone’s Problem: What happens when AI can hack everything?

Policy Brief by B. Courtney Doagoo: “Many jurisdictions around the world have been scrambling to address complex questions about generative artificial intelligence (AI) and intellectual property law. Authorship has been at the centre of the generative AI copyright debate, as some generated outputs are becoming almost indistinguishable from human-authored works. This debate will soon expand to include additional subsets of frontier technologies, such as brain-computer interfaces, and different sets of regulatory frameworks.

Anticipatory governance, using strategic intelligence, can help policy makers develop a forward-looking, proactive governance structure and process. This strategy does not mean rapid regulation or over-regulation but instead calls for a systems-level evolution in the way jurisdictions approach governance for frontier technologies as they emerge and converge…(More)”.

Anticipating the Mind-Machine: Governance Innovation for Frontier Technologies

Paper by Kimberlyn Rachael Leary and Joel Cutcher-Gershenfeld: “Social innovations are at risk at a time when surprise is employed as a unilateral government strategy in order to shrink and refocus government operations. Social innovations involve collective efforts, frequently spanning public and private stakeholders. The needed trust and reciprocal understandings are undercut when government employs the logic of reengineering combined with surprise as a strategy—what we term “Surprise by Design.” This contrasts with past federal restructuring initiatives that employed a mix of administrative expertise (top down) and frontline continuous improvement (bottom up) as strategies for negotiated changes. Surprise is a particular type of top-down imposed change, which disrupts patterned roles and routines in order to impose a reconceptualization of how government will function in society. This article documents surprise as a change strategy and identifies needed adjustments to two relevant lateral models for negotiated change. By taking this into account, social innovation initiatives can be more resilient in the face of Surprise by Design…(More)”.

Surprise by Design: The Risks for Social Innovation When Surprise Is Imposed as a Governing Strategy
