Stefaan Verhulst

Blog by Kate Murray: “Released in February 2026 as a product of the C2PA for G+LAM Community of Practice, the white paper “Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community” advocates for libraries, archives, and museums (LAMs) to take proactive and pragmatic steps to ensure that digital collections content, especially content impacted by AI at any point in its lifecycle, remains authentic, transparent, and verifiable from creation through access, in order to meet the LAMs community’s mission of public trust.

While content authenticity and provenance (CAP) have long been archival principles, existing processes are increasingly impacted, or have the potential to be impacted, by AI-mediated workflows. There are growing expectations from researchers, donors, the public, and heritage practitioners to document these impacts comprehensively and consistently by extending traditional content authenticity and provenance data to address AI impacts.

This is a critical moment. The impact of AI on collections poses risks that demand thoughtful collective attention. Even with the best intentions of maintaining transparency with CAP data, AI technologies introduce novel ethical, legal, and privacy threats. At the same time, AI is transforming, in real time, the creation, organization, and analysis of data at a pace that defies the LAMs community’s traditionally deliberative response to change…(More)”.

Content Authenticity and Provenance in the Age of Artificial Intelligence

Article by John Burn-Murdoch: “…Last year I used detailed data on the ideological positions of people who post on social media to show that they over-represent the radical right and left, confirming the polarisation hypothesis. Over the past week I have used the same dataset of tens of thousands of responses to questions on policy preferences and sociopolitical beliefs to test whether and how the most widely used AI chatbots shape conversations about politics and society. The results strongly support the theory of AI chatbots as depolarising and technocratising.

I found that while different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. On average, Grok guides conversations about policy and society towards the centre-right — a rightward push for most people but a moderating nudge towards the centre for those who start out as conservative hardliners. OpenAI’s GPT, Google’s Gemini and the Chinese model DeepSeek all exert similarly sized nudges towards a centre-left worldview — a slight leftward nudge for most people but a moderating push away from fringe leftwing positions.

Importantly, this remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. Even when the AI bots know a user’s political leanings, conversations with LLMs still direct hardline partisans on both flanks away from extreme beliefs on average.

In addition, I found that while conspiratorial beliefs about topics including rigged elections and a link between vaccines and autism are over-represented among people who post to social media relative to the overall population, the opposite is true of AI chatbots, which almost never express agreement with these claims…(More)”.

Social media is populist and polarising; AI may be the opposite

Paper by Lucia Velasco et al.: “Artificial intelligence is rapidly becoming a foundational layer of the global economy, with projections indicating that the AI market will reach $4.8 trillion by 2033 – approximately the size of Germany’s entire economy. Yet this transformation is unfolding with stark inequality. While advanced economies aggressively invest in local AI capacity and infrastructure, low- and lower-middle-income countries (LLMICs) face systemic barriers that threaten to lock them into technological dependency. Recent announcements from governments and major technology firms show large AI funding commitments, but unclear governance and poor coordination risk turning these investments into deeper global AI inequality rather than lasting domestic capacity…(More)”.

Financing the AI Triad: Compute, Data and Algorithms – A Framework to Build Local Ecosystems

Article by Valerie Wirtschafter: “On February 28, 2026, a joint U.S.-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially dubbed Operation Epic Fury. Social media quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on U.S. warships, and satellite imagery purporting to show damage to American military bases in the Gulf.

Some of this footage was recycled from unrelated conflicts, including in Ukraine, and even from video games. Yet some of it was entirely fabricated and created with now ubiquitous generative artificial intelligence (AI) tools that can produce even more realistic content at scale. Several observers of the space emphasized the unprecedented volume of AI-generated content and its increasing sophistication.

While much has been written about the potential for AI-generated imagery, videos, and audio to flood the information ecosystem and make it increasingly difficult to parse what is true, AI content has previously made up only a small portion of the misleading content circulating across the web. During 2024, which was deemed “the year of the elections,” AI-generated content—while present—did not derail electoral processes around the world. And in the early days of the Israel-Hamas war, AI content was again present, but it represented just a small fraction of the overall misleading claims and recycled imagery circulating online. Does the ongoing conflict in Iran truly represent a significant leap in AI-generated imagery? And if so, what might explain such a meaningful shift?…(More)”.

Generative AI as a weapon of war in Iran

Article by Matteo Wong: “For the past several weeks, Anthropic says it secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack is typically available only to elite, state-sponsored hacking cells in a very small number of countries including China, Russia, and the United States. Now it’s in the hands of a private company.

On Tuesday, the company officially announced the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan their software for bugs and exploits and fix them. Other than that, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous…(More)”.

Claude Mythos Is Everyone’s Problem: What happens when AI can hack everything?

Policy Brief by B. Courtney Doagoo: “Many jurisdictions around the world have been scrambling to address complex questions about generative artificial intelligence (AI) and intellectual property law. Authorship has been at the centre of the generative AI copyright debate, as some generated outputs are becoming almost indistinguishable from human-authored works. This debate will soon expand to include additional subsets of frontier technologies, such as brain-computer interfaces, and different sets of regulatory frameworks.

Anticipatory governance, using strategic intelligence, can help policy makers develop a forward-looking, proactive governance structure and process. This strategy does not mean rapid regulation or over-regulation but instead calls for a systems-level evolution in the way jurisdictions approach governance for frontier technologies as they emerge and converge…(More)”.

Anticipating the Mind-Machine: Governance Innovation for Frontier Technologies

Paper by Kimberlyn Rachael Leary and Joel Cutcher-Gershenfeld: “Social innovations are at risk at a time when surprise is employed as a unilateral government strategy in order to shrink and refocus government operations. Social innovations involve collective efforts, frequently spanning public and private stakeholders. The needed trust and reciprocal understandings are undercut when government employs the logic of reengineering combined with surprise as a strategy—what we term “Surprise by Design.” This contrasts with past federal restructuring initiatives that employed a mix of administrative expertise (top down) and frontline continuous improvement (bottom up) as strategies for negotiated changes. Surprise is a particular type of top-down imposed change, which disrupts patterned roles and routines in order to impose a reconceptualization of how government will function in society. This article documents surprise as a change strategy and identifies needed adjustments to two relevant lateral models for negotiated change. By taking this into account, social innovation initiatives can be more resilient in the face of Surprise by Design…(More)”.

Surprise by Design: The Risks for Social Innovation when Surprise is Imposed as a Governing Strategy 

Article by Stefaan Verhulst: “As artificial intelligence systems rapidly evolve and start to impact nearly every sector of society, the conversation around governance has mainly focused on models (and their output): their transparency, fairness, accountability, and alignment. Yet this focus, while necessary, is incomplete. AI systems are only as reliable, equitable, and effective as the data (input) on which they are trained and operate.

Data governance is not peripheral to AI governance — it is its bedrock.

At the same time, the rise of AI is not simply placing new demands on data governance; it is fundamentally transforming it. What counts as data, how it is curated, who has a say in its use, and which institutional arrangements govern it are all being reimagined in response to AI’s capabilities and risks.

This essay examines 10 key areas or shifts where data governance is being reshaped—either to accommodate AI or as a direct consequence of it…(More)”.

Data Governance in the AI Era: 10 Shifts Redefining Data, Institutions, and Practice

Article by Stefaan Verhulst et al.: “Despite decades of investment in statistical systems and open data initiatives, official data remains difficult to discover, interpret, and apply in practice. The challenge is no longer one of availability, but of (re)usability. This persistent gap underscores a broader paradox at the heart of contemporary data governance: data may be open, yet it remains functionally inaccessible for many intended users.

In this context, the International Monetary Fund has been a pioneer in exploring how artificial intelligence and open data can intersect to address this usability challenge. Its StatGPT: AI for Official Statistics report, by James Tebrake, Bachir Boukherouaa, Jeff Danforth, and Niva Harikrishnan, offers a timely and important contribution to this evolving conversation – pointing toward a future where AI can make official data more navigable, interpretable, and actionable.

The data challenge is no longer just about availability, but about (re)usability.

The report provides a detailed account of the friction users face across the data lifecycle. Even highly motivated users must navigate fragmented portals, inconsistent terminology, and siloed datasets, often spending significant time assembling information that should be readily accessible. 

The result is a fragmented ecosystem in which metadata and data are distributed across institutions and platforms, forcing users to navigate multiple systems and standards—and to reconstruct context—before they can assess whether the data is re-usable. 

This resonates strongly with broader observations across the open data ecosystem: access alone does not guarantee impact. Without the ability to meaningfully engage with data, openness risks becoming performative rather than transformative…(More)”.

StatGPT and the Fourth Wave of Open Data

Article by Mona Mourshed and Nalini Tarakeshwar: “Following declines in international aid, philanthropy is searching for innovative ways to support non-profits and Global South governments in delivering service solutions in which outcomes data plays a central role.

Achieving data-driven innovation requires more than gathering the right facts – it must generate change in daily routines. The global philanthropy sector is now waking up to this idea.

Below are three key lessons from non-profits that have successfully deployed data in their work in the Global South and made real progress toward their goal of driving meaningful system change.

Lesson 1: Data users respond far better to carrots than sticks

If government staff feel that something bad will happen should their data reveal underperformance, they are unlikely to gather it. Philanthropy can play a catalytic role by supporting projects that combine data usage with fresh incentives and support.

Generation India works with national and state-government entities in a public-private partnership structure funded equally by both. Previously, training providers in government-funded programmes were reimbursed largely for training and certification; those two milestones accounted for more than 70% of the government payment per learner. While the remainder of the government payment per learner was tied to outcome metrics, such as job placement and three-month job retention, the process for verifying these metrics was cumbersome and lengthy, discouraging efforts in this direction. Further, since training providers had learned how to break even on the 70% of input-related payments, they were willing to forgo additional outcome-related payments. The combined result was a job placement rate of less than 25%.

To turn things around, the partnership of Generation India and government entities reduced the input payments linked to programme completion to 56% and increased outcomes compensation to 44%. In parallel, it introduced new payment milestones based on job placement within three months of programme completion and job retention at three and six months after the initial placement, both verified by third parties.
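
To make the incentive arithmetic concrete, here is a minimal sketch of the revised milestone structure. The 56/44 input-outcome split and the milestone names come from the article; how the 44% outcome pool divides across the three milestones is an illustrative assumption.

```python
# Sketch of Generation India's revised payment-per-learner structure.
# Shares are percentages of the government payment per learner.
# The 56/44 input/outcome split is from the article; the division of
# the 44% outcome pool across the three milestones is hypothetical.
MILESTONE_SHARES = {
    "programme_completion": 56,  # input-linked (training + certification)
    "job_placement_3mo": 20,     # hypothetical share of the outcome pool
    "retention_3mo": 12,         # hypothetical
    "retention_6mo": 12,         # hypothetical
}

def payout_pct(achieved: set[str]) -> int:
    """Percent of the per-learner rate earned for verified milestones."""
    return sum(s for m, s in MILESTONE_SHARES.items() if m in achieved)

# Under the old ~70% input share, a provider could break even on inputs
# alone; at 56% it cannot, so the outcome milestones now carry real weight.
print(payout_pct({"programme_completion"}))                       # 56
print(payout_pct({"programme_completion", "job_placement_3mo"}))  # 76
print(payout_pct(set(MILESTONE_SHARES)))                          # 100
```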

There’s a similar playbook at the Brazilian Collaborative Leadership alliance, a partnership between the Lemann Foundation and federal, state and municipal governments, which reaches 70% of first and second graders in the country. To advance literacy, the Lemann Foundation funds teacher training and provides better-quality textbooks for students at all participating schools. The state commits to joining the national literacy programme, which includes instruction materials and assessments of second grade students. The state also recognizes schools with the best results by granting their principals cash awards with an average value of $10,000. While the recognized schools receive 60-75% of the cash award immediately, they can only access the remaining 25-40% if they help another school in their community improve its literacy outcomes, which spurs an additional layer of support. Lastly, 2-5% of state tax revenue is given to municipal governments based on their performance against targets, with each free to decide how it uses these funds…(More)”.

How non-profits and governments use data to drive real system change
