
Stefaan Verhulst

Article by Diyi Liu and Shashank Mohan: “As the world becomes increasingly interconnected, the rules and norms governing technology no longer remain confined to their jurisdictions of origin. Instead, they ripple outward, creating what scholars have labelled national and regional effects that reflect both governance philosophies and geopolitical ambitions. They represent the process of norm externalization—whereby regulations, standards, and governance approaches developed in one jurisdiction influence or are adopted by others, either through market mechanisms, deliberate policy diffusion, or in response to capacity constraints and power asymmetries. 

These effects are not merely academic constructs, but powerful forces reshaping the global digital order: They enable mapping pathways of policy transfer across borders, and shed light on how external influences interact with domestic politics in regulatory outcomes. Further, norm externalization characterizes geopolitical influence and economic and technological leverage of exporting countries, and reveals normative alignments between socio-political systems that seek to adopt these governance models. 

When the European Union (EU) implements stringent data protection standards through the General Data Protection Regulation (GDPR), companies worldwide often find it more efficient and cost-effective to apply these standards globally rather than maintain separate systems for each jurisdiction, as witnessed when Microsoft extended GDPR rights to all users worldwide. As China extends its Digital Silk Road (DSR) through infrastructure investments across the Majority World (e.g., the cross-border cable projects), its technological standards and governance approaches could be transferred to recipient countries. Similarly, as India develops and exports its digital public infrastructure (DPI), it exerts influence over other countries in the Majority World to adopt similar population-wide digital welfare schemes that are based on open standards and operationalize interoperability. 

The competition between different regulatory models is particularly consequential for countries in the Majority World, which find themselves navigating competing governance frameworks while attempting to assert their own digital sovereignty…(More)”

Regional Power, Policy Shaping and Digital Futures: Norm Externalization through the Delhi and Beijing Effects

Article by Bloomberg Cities Network: “…The following three lessons from Ho’s work offer practical guidance for local leaders interested in realizing this potential.

Inserting AI at key moments to unlock action.

Most city leaders already know about the more common applications of AI. But Ho argues that being more ambitious with the technology doesn’t always mean developing a new “end-to-end solution,” as he describes them. In fact, sometimes, local leaders can achieve massive impact when they insert the technology in a strategic way at a critical point in a complex workflow. And that, in turn, can create space for civil servants to deliver what people need.

For example, as part of a partnership with California’s Santa Clara County, Ho and his team at Stanford RegLab adapted a large language model so it could quickly parse through millions of records. The objective? Identifying property deeds containing discriminatory language that was intended, decades ago, to limit who could purchase certain homes. Doing so manually could consume nearly 10 years of staff time. The new tool did an initial analysis with near-perfect accuracy in just a few days. And while that work still calls for human review, the lesson for Ho is that this sort of approach can help ensure civil servants aren’t so consumed with bureaucracy that they are unavailable for frontline service delivery.
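
The underlying pattern is straightforward to sketch: batch deed text through a language-model classifier and queue positive hits for human review. The hypothetical Python sketch below illustrates that general approach; the model name, prompt, and data format are illustrative assumptions, not details of the RegLab tool.

```python
# Illustrative sketch: flag deed records whose text may contain racially
# restrictive covenant language, then queue flagged records for human review.
# The prompt, model name, and input format are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You review historical property deeds. Answer YES if the following deed "
    "text contains a racially restrictive covenant, otherwise answer NO.\n\n"
)

def flag_deed(deed_text: str) -> bool:
    """Return True if the model thinks the deed contains restrictive language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT + deed_text}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def triage(deeds: list[dict]) -> list[dict]:
    """Run the classifier over every record; humans review only the hits."""
    return [d for d in deeds if flag_deed(d["text"])]
```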

“If you are expending your time on these internal tasks, there are other services that necessarily have to take a hit,” Ho explains. Instead, tools such as this one can help ensure local leaders have capacity to maintain adequate staffing at frontline service counters that provide everything from birth certificates to public benefits.

Creating space for human discretion.

Ho cautions against relying on AI to independently manage some of local government’s most sensitive responsibilities, public benefits chief among them. But even in these complex areas, he argues, AI can play a valuable supporting role by streamlining delivery workflows and freeing up civil servants to focus on the human side of service….

Triggering conversations about how to improve policies.

Ho’s work isn’t only showing how cities can free teams from red-tape tasks. It’s also demonstrating how cities can get to the root of those problems: by improving over-complicated policies and programs for the long term. 

As part of his team’s collaboration focused on San Francisco’s municipal code, the city has been using a search system developed by RegLab to identify every case where legislation requires agency personnel to produce potentially time-consuming reports. Some of the findings would be comical if they weren’t in danger of using precious capacity, such as the rule calling for regular updates on the state of city newspaper racks that no longer exist…(More)”
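
As a rough illustration of what such a search involves at its simplest, the hypothetical sketch below scans municipal code sections for clauses that look like recurring reporting mandates. The patterns and data format are assumptions; RegLab's actual system is presumably far more sophisticated than a keyword pass.

```python
# Illustrative sketch only: a crude pattern-based pass over municipal code
# sections to surface clauses that appear to mandate recurring reports.
# The section data and patterns are assumptions, not the RegLab system.
import re

MANDATE_PATTERNS = [
    r"\bshall (annually |quarterly |monthly )?(submit|prepare|file) a report\b",
    r"\bshall report (annually|quarterly|monthly) to\b",
]

def find_reporting_mandates(sections: dict[str, str]) -> list[tuple[str, str]]:
    """Return (section_id, matched_clause) pairs for likely reporting mandates."""
    hits = []
    for section_id, text in sections.items():
        for pattern in MANDATE_PATTERNS:
            match = re.search(pattern, text, flags=re.IGNORECASE)
            if match:
                hits.append((section_id, match.group(0)))
                break
    return hits

sample = {"SEC. 8.12": "The department shall annually submit a report on newspaper racks."}
print(find_reporting_mandates(sample))
```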

3 ways AI can help cities add a human touch to service delivery

Press Release: “The U.S. Patent and Trademark Office (USPTO) is launching DesignVision, the first artificial intelligence (AI)-based image search tool available to design patent examiners via the Patents End-to-End (PE2E) search suite. DesignVision is the latest step in the agency’s broader efforts to streamline and modernize examination and reduce application pendency. 

DesignVision is an AI-powered tool that is capable of searching U.S. and foreign industrial design collections using image(s) as an input query. The tool provides centralized access and federated searching of design patents, registrations, trademarks, and industrial designs from over 80 global registers, and returns search results based on, and sortable by, image similarity. 
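
Image-based retrieval of this kind generally rests on a simple idea: represent each design as a vector and rank the collection by similarity to the query image. The sketch below is a self-contained, deliberately crude illustration of that idea, not the DesignVision system; a production tool would use learned image embeddings rather than raw pixels.

```python
# Illustrative sketch of image-similarity search: embed each design image as a
# vector, then rank the corpus by cosine similarity to the query image.
# A coarse pixel embedding keeps the example self-contained.
import numpy as np
from PIL import Image

def embed(path: str) -> np.ndarray:
    """Map an image to a small vector (grayscale thumbnail, flattened, normalized)."""
    pixels = np.asarray(Image.open(path).convert("L").resize((32, 32)), dtype=float)
    vector = pixels.flatten()
    return vector / (np.linalg.norm(vector) + 1e-9)

def search(query_path: str, corpus_paths: list[str]) -> list[tuple[str, float]]:
    """Return corpus images sorted by similarity to the query (most similar first)."""
    query = embed(query_path)
    scored = [(p, float(np.dot(query, embed(p)))) for p in corpus_paths]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```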

DesignVision will augment—not replace—design examiners’ other search tools. Examiners can continue using other PE2E search tools and non-patent literature when conducting their research. The complete text of the Official Gazette Notice can be found on the patent related notices page of the USPTO website…(More)”.

USPTO launches new design patent examination AI tool

Reports by GMF Technology: “The People’s Republic of China (PRC) builds and exports digital infrastructure as part of its Belt and Road Initiative. While the PRC sells technology deals as a “win-win” with partner countries, the agreements also serve Beijing’s geopolitical interests.

That makes it imperative for European and US policymakers to understand the PRC’s tech footprint and assess its global economic and security impact.

To support policymakers in this endeavor, GMF Technology, using a “technology stack” or “tech stack” framework, has produced a series of reports that map the presence of the PRC and its affiliated entities across countries’ technology domains.

Newly released reports on Kazakhstan, Kyrgyzstan, Serbia, and Uzbekistan are built on previous work in two studies by GMF’s Alliance for Securing Democracy (ASD) on the future internet and the digital information stack released, respectively, in 2020 and 2022. The new reports on Central Asian countries, where Russia maintains significant influence as a legacy of Soviet rule, also examine Kremlin influence there. 

The “tech stack” framework features five layers:  

  • Network infrastructure: including optical cables (terrestrial and undersea), telecommunications equipment, satellites, and space-based connectivity infrastructure
  • Data infrastructure: including cloud technology and data centers
  • Devices: including hand-held consumer instruments such as mobile phones, tablets, and laptops, and more advanced Internet-of-Things and AI-enabled devices such as electric vehicles and surveillance cameras
  • Applications: including hardware, software, data analytics, and digital platforms to deliver tailored solutions to consumers, sectors, and industries (e.g., robotic manufacturing)
  • Governance: including the legal and normative framework that governs technology use across the entire tech stack…(More)”.

Wired for Influence: Assessing the People’s Republic of China’s Technology Footprint in Europe and Central Asia

Article by Ali Shiri: “…There are two categories of emerging LLM-enhanced tools that support academic research:

1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing at an exponential rate. These technologies have the potential to enhance and extend traditional research methods in academic work. Examples include AI assistants that support:

  • Concept mapping (Kumu, GitMind, MindMeister);
  • Literature and systematic reviews (Elicit, Undermind, NotebookLM, SciSpace);
  • Literature search (Consensus, ResearchRabbit, Connected Papers, Scite);
  • Literature analysis and summarization (Scholarcy, Paper Digest, Keenious);
  • And research topic and trend detection and analysis (Scinapse, tlooto, Dimension AI).

2. ‘Deep research’ AI agents: The field of artificial intelligence is advancing quickly with the rise of “deep research” AI agents. These next-generation agents combine LLMs, retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.
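
A hypothetical, stripped-down version of this pattern helps make it concrete: the model plans sub-questions, a retriever gathers passages for each, and the model then synthesizes a cited answer. Everything below (model name, prompts, the toy keyword retriever) is an illustrative assumption rather than a description of any particular product.

```python
# Minimal sketch of the "deep research" pattern: plan sub-questions, retrieve
# evidence for each, then synthesize a cited answer. The retriever is a
# stand-in keyword search over a local corpus; prompts and model are assumed.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}], temperature=0
    )
    return reply.choices[0].message.content

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(terms & set(p.lower().split())), reverse=True)
    return ranked[:k]

def deep_research(question: str, corpus: list[str]) -> str:
    sub_questions = ask(f"List 3 short sub-questions needed to answer: {question}").splitlines()
    evidence = []
    for sub in filter(None, (s.strip() for s in sub_questions)):
        evidence += retrieve(sub, corpus)
    context = "\n".join(f"- {p}" for p in dict.fromkeys(evidence))  # dedupe, keep order
    return ask(f"Using only this evidence:\n{context}\n\nAnswer, citing passages: {question}")
```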

Research is currently being conducted to evaluate the quality and effectiveness of deep research tools. New evaluation criteria are being developed to assess their performance and quality.

Criteria include elements such as cost, speed, editing ease and overall user experience — as well as citation and writing quality, and how these deep research tools adhere to prompts…(More)”.

AI in universities: How large language models are transforming research

Paper by Sara Marcucci and Stefaan Verhulst: “As Artificial Intelligence (AI) systems become increasingly embedded in societal decision-making, they have simultaneously deepened longstanding asymmetries of data, information and control. Central to this dynamic is what this paper terms agency asymmetry: the systematic lack of meaningful participation by individuals and communities in shaping the data and AI systems that inform decisions that impact their lives. This asymmetry is not merely a technical or procedural shortcoming; it is a structural feature of contemporary data and AI governance that underpins a range of interrelated harms–from algorithmic opacity and marginalization, to ecological degradation.

This paper proposes Digital Self-Determination (DSD) as a normative and practical framework for addressing these challenges. Building on the principle of self-determination as both individual autonomy and collective agency, DSD offers tools for empowering communities and individuals to determine how data-based technologies are designed, implemented and used. 

It seeks to contribute to current debates on AI governance and provides a systematic account of how it can be operationalized across the AI lifecycle. In particular, it identifies four domains of intervention–processes, policies, people, and technologies–and illustrates how DSD approaches can be mobilized to confront agency asymmetries, whether through data stewardship models, participatory audits, or inclusive policy instruments…(More)”.

Advancing Agency: Digital Self-Determination as a Framework for AI Governance

Essay by Dan Williams: “America’s epistemic challenges run deeper than social media.

Many people sense that the United States is undergoing an epistemic crisis, a breakdown in the country’s collective capacity to agree on basic facts, distinguish truth from falsehood, and adhere to norms of rational debate. 

This crisis encompasses many things: rampant political lies, misinformation, and conspiracy theories; widespread beliefs in demonstrable falsehoods (“misperceptions”); intense polarization in preferred information sources; and collapsing trust in institutions meant to uphold basic standards of truth and evidence (such as science, universities, professional journalism, and public health agencies). 

According to survey data, over 60% of Republicans believe Joe Biden’s presidency was illegitimate. 20% of Americans think vaccines are more dangerous than the diseases they prevent, and 36% think the specific risks of COVID-19 vaccines outweigh their benefits. Only 31% of Americans have at least a “fair amount” of confidence in mainstream media, while a record-high 36% have no trust at all. 

What is driving these problems? One influential narrative blames social media platforms like Facebook, Twitter (now X), and YouTube. In the most extreme form of this narrative, such platforms are depicted as technological wrecking balls responsible for shattering the norms and institutions that kept citizens tethered to a shared reality, creating an informational Wild West dominated by viral falsehoods, bias-confirming echo chambers, and know-nothing punditry.

The timing is certainly suspicious. Facebook launched in 2004, YouTube in 2005, and Twitter in 2006. As they and other platforms acquired hundreds of millions of users over the next decade, the health of American democracy and its public sphere deteriorated. By 2016, when Donald Trump was first elected president, many experts were writing about a new “post-truth” or “misinformation” age. 

Moreover, the fundamental architecture of social media platforms seems hostile to rational discourse. Algorithms that recommend content prioritize engagement over accuracy. This can amplify sensational and polarizing material or bias-confirming content, which can drag users into filter bubbles. Meanwhile, the absence of traditional gatekeepers means that influencers with no expertise or ethical scruples can reach vast audiences. 

The dangerous consequences of these problems seem obvious to many casual observers of social media. And some scientific research corroborates this widespread impression. For example, a systematic review of nearly five hundred studies finds suggestive evidence for a link between digital media use and declining political trust, increasing populism, and growing polarization. Evidence also consistently shows an association between social media use and beliefs in conspiracy theories and misinformation…(More)”.

Scapegoating the Algorithm

Paper by Art Alishani, Vincent Homburg, and Ott Velsberg: “Public service providers around the world are now offering chatbots to answer citizens’ questions and deliver digital services. Using these artificial intelligence-powered technologies, citizens can engage in conversations with governments through systems that mimic face-to-face interactions and adjust their use of natural language to citizens’ communication styles. This paper examines emerging experiences with chatbots in government interactions, with a focus on exploring what public administration practitioners and scholars should expect from chatbots in public encounters. Furthermore, it seeks to identify what gaps exist in the general understanding of digital public encounters…(More)”.

Public Encounters and Government Chatbots: When Servers Talk to Citizens

Article by Rahmin Sarabi: “Across the United States, democracy faces mounting challenges from polarization, public distrust, and increasingly complex societal challenges. Traditional systems of civic participation—and the broader foundations of democratic governance—have struggled to adapt as media and electoral incentives increasingly reward outrage over understanding.

Despite these challenges, new possibilities are emerging. Artificial intelligence—specifically large language models (LLMs)—is beginning to serve as a transformative tool for public engagement and policymaking, led by innovative governments and civic institutions.

When used thoughtfully, LLMs can help unlock public wisdom, rebuild trust, and enable better decisionmaking—not by replacing human judgment, but by strengthening it. This promise doesn’t dismiss the serious concerns about AI’s impact on social cohesion, work, and democracy—which remain vital to address. Yet these emerging capabilities can enhance both institutional efficiency and, more importantly, core democratic values: inclusiveness, meaningful participation, and deliberative reasoning.

By strengthening these foundations, AI can enable the collaborative problem-solving today’s interconnected problems demand and help us renew democracy to meet the challenges of our time. This piece examines concrete applications where LLM-based AI is already enhancing democratic processes—from citizen engagement to survey and context analysis—and explores principles for scaling these innovations responsibly…(More)”.

How AI Can Unlock Public Wisdom and Revitalize Democratic Governance

Essay by Antón Barba-Kay: “…Consider the casual assumption that the scope and accessibility of choice is itself “democratic,” such that Reddit, Facebook, Instagram, and TikTok might be called democratic platforms because everyone gets a vote and anyone can become someone. In fact, there’s nothing intrinsically democratic about participation as such at any level. Countries do not become more democratic when the people are more frequently consulted. Expertise and state secrets are part of it, as is the fact that states need to make long-term, binding commitments. Digital consultations (such as have been used by the Five Star Movement in Italy) give undue power to those who control the software through which they take place. There is also the better reason that, when questions are placed before a mass electorate to an extent that exceeds our capacity for deliberation, there occurs a populist or oligarchic reversal. In a direct democracy such as that of ancient Athens, everything begins to turn on who poses the questions and when and why. 

The point is that mass populations can be good judges of what locally touches us but are not able to collectively pay attention to what doesn’t—and that this is an entirely democratic, republican situation. I mean that the will of the people is not a fact but an artifice: It is not the input of democratic government but the output of its codified processes. Intermediate institutions such as parties, caucuses, primaries, debates, and the Electoral College were designed in theory to sublimate citizen passions and interests into more substantive formulations than their raw expression allows. The Federalist Papers explicitly speak of it as a machine—a Newtonian “system” of “bodies,” “springs,” “energies,” and “balances” operating under universally observable laws. The point of this machine is to produce the popular will by representing it. The Constitution is intended to be the context within which popular decision can find its most articulate expression through the medium of good government—which is why any one person’s claim to immediately embody the will of the people is anti-democratic. 

For digital services as well as democratic processes, there is a point beyond which the intensification of involvement degrades the objects of preference themselves. Just as social media has made us impatient for headlines spicy or outrageous, the focus on and capture of choice as such can destroy the conditions of meaningful choice themselves…(More)”.

Democracy by the Book. Is data the last lingua franca?
