
Stefaan Verhulst

Book edited by Mariavittoria Catanzariti, Francesca Incardona, Giorgio Resta, and Anders Sönnerborg: “The recent adoption of the EU Regulation on the European Health Data Space is a significant development in European data law. While the need to protect the confidentiality of information and control over personal data — and, more generally, fundamental rights, particularly those of vulnerable people — is undeniable, the importance of using data in the public interest, such as in healthcare and scientific research, has been brought to the fore by the Covid-19 pandemic.

This book addresses the controversial issues surrounding data sharing, including data protection, ownership and reuse, and the related ethical considerations. With contributions from experts in various fields, including medicine and law, it encourages interdisciplinary dialogue on the use of health data in Europe and beyond…(More)”.

Data Privacy, Data Property, and Data Sharing

Primer by the European Data Protection Supervisor: “The idea behind a Digital Identity Wallet (DIW) is to provide users with an easy way to store their identity data and credentials in a digital repository. This enables them to access services in both the physical and digital worlds while ensuring accountability for transactions.

The purpose of this TechDispatch is to introduce the concept of a DIW, understand the privacy risks that exist when using a DIW, and discuss relevant data protection by design and by default requirements and their implementation, including relevant technologies. Finally, we assess how the European Digital Identity Wallet (EUDIW), mandated by the eIDAS 2 Regulation, fits within the framework outlined.

In general, we can identify four main actors within an identity management ecosystem: the users of a DIW, identity and attribute providers (IdPs), relying parties (RPs) and the scheme authority. Depending on the governance scheme, other actors can also play a role. Various digital identity models have been developed over time and are currently in use. These include the isolated model, the centralised model, the federated model and the decentralised model, depending on the architecture of the scheme and on the role of the IdP. We describe these models and assess their respective pros and cons in Chapter 2.

In a user-centric identity paradigm, where credentials are stored under the user’s control, such as with a DIW, there is no need for the RP to access the IdP to verify the user’s credentials with each request. This mitigates the risk that an IdP profiles users by observing and linking their transactions with different RPs.

DIW solutions are typically implemented through a combination of mobile applications and cloud infrastructure, and can be used for identification and authorisation, as well as for issuing and using digital signatures. These solutions use digital credentials constructed and protected by cryptographic techniques, called ‘verifiable credentials’. Well-designed DIWs can provide easy access to public and private services, and enhance users’ control and privacy, convenience, interoperability and data security…(More)”.
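
To make the idea of a cryptographically protected credential concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package, uses an Ed25519 signature over a JSON payload, and invents the claim names and identifiers; real verifiable-credential formats, and the EUDIW itself, are considerably richer.

```python
# Minimal sketch of the verifiable-credential pattern (not the EUDIW spec):
# an issuer signs a set of claims once, and a relying party later checks the
# signature against the issuer's public key without contacting the issuer at
# presentation time. Requires the third-party "cryptography" package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (identity/attribute provider) signs the claims. The subject
# identifier and claim names are made up for illustration.
issuer_key = Ed25519PrivateKey.generate()
claims = {"subject": "did:example:alice", "over_18": True}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"claims": claims, "signature": issuer_key.sign(payload)}

# The credential sits in the user's wallet; the relying party verifies it
# offline using the issuer's public key.
issuer_public_key = issuer_key.public_key()
presented = json.dumps(credential["claims"], sort_keys=True).encode()
try:
    issuer_public_key.verify(credential["signature"], presented)
    print("Credential accepted: issuer signature is valid.")
except InvalidSignature:
    print("Credential rejected: signature does not match the claims.")
```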

Digital Identity Wallets

OECD Paper: “Rigorous impact evaluations, particularly randomised trials, can provide governments with valuable insights into whether policies and programmes achieve their intended outcomes. Employed effectively, they offer an evidence base that goes beyond assumptions and precedent, supporting better resource allocation and more effective services for citizens. However, despite their potential, such evaluations remain underused in many contexts, given a number of technical and political barriers. This report explores how governments can overcome these barriers and deliver high-quality evaluations to contribute to policy development. It briefly discusses the potential for artificial intelligence to contribute to impact evaluation. It sets out the main evaluation methods available, ranging from randomised trials to quasi-experimental approaches, and highlights the conditions under which each can be applied. It addresses ethical issues and highlights how such concerns can be addressed through careful design and stakeholder engagement. There are also options for ensuring that evaluations can remain cost-effective, including through greater use of administrative data, alignment with policy priorities, and partnerships with wider networks. In particular, it underlines the value of international co-operation and peer learning to build capacity, share methods, and upscale effective programmes…(More)”.
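
For readers new to the mechanics, the core of a randomised trial’s analysis can be shown in a few lines: randomly assigned groups are compared on an outcome, and the difference in means estimates the programme’s effect. The sketch below uses simulated data and the numpy and scipy packages; it is purely illustrative and not drawn from the OECD report.

```python
# Toy illustration of estimating an average treatment effect from a
# randomised trial. The outcome data are simulated; only the mechanics
# (compare group means, test the difference) carry over to real trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
control = rng.normal(loc=50.0, scale=10.0, size=500)   # outcomes without the programme
treated = rng.normal(loc=53.0, scale=10.0, size=500)   # outcomes with the programme

effect = treated.mean() - control.mean()               # estimated average treatment effect
t_stat, p_value = stats.ttest_ind(treated, control)    # two-sample t-test on the difference

print(f"Estimated effect: {effect:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```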

Unleashing the policy potential of rigorous impact evaluation and randomised trials

JRC Policy Brief: “…outlines the EU’s strategic imperative to assert digital sovereignty while remaining open to global collaboration. Defined as the EU’s capacity to exercise strategic independence in the digital domain—encompassing data governance, infrastructure control, and innovation—digital sovereignty aims to reduce vulnerabilities in economic, security, and technological spheres. The brief emphasizes that this does not equate to isolation or protectionism but rather to strengthening EU competencies in critical areas such as semiconductors, cloud services, and AI, while aligning with democratic values like transparency and the rule of law.

A multi-layered framework is proposed, structured across four interlinked dimensions: (1) Digital Governance, focusing on regulatory frameworks and international influence; (2) Digital Infrastructures, Software, and Data, emphasizing secure connectivity, cybersecurity, and data ecosystems; (3) Digital Products and Markets, addressing industrial competitiveness and fair competition; and (4) People, highlighting the need for digital literacy and citizen empowerment.

The brief underscores both opportunities (e.g., EU-led initiatives like the Digital Services Act and EuroStack) and risks, including structural dependencies on non-EU providers, fragmented national strategies, and gaps in digital skills…(More)”.

Open but Not Powerless: Towards a Common Understanding of EU Digital Sovereignty

Book by Shannon Mattern: “Computational models of urbanism—smart cities that use data-driven planning and algorithmic administration—promise to deliver new urban efficiencies and conveniences. Yet these models limit our understanding of what we can know about a city. A City Is Not a Computer reveals how cities encompass myriad forms of local and indigenous intelligences and knowledge institutions, arguing that these resources are a vital supplement and corrective to increasingly prevalent algorithmic models.

Shannon Mattern begins by examining the ethical and ontological implications of urban technologies and computational models, discussing how they shape and in many cases profoundly limit our engagement with cities. She looks at the methods and underlying assumptions of data-driven urbanism, and demonstrates how the “city-as-computer” metaphor, which undergirds much of today’s urban policy and design, reduces place-based knowledge to information processing. Mattern then imagines how we might sustain institutions and infrastructures that constitute more diverse, open, inclusive urban forms. She shows how the public library functions as a steward of urban intelligence, and describes the scales of upkeep needed to sustain a city’s many moving parts, from spinning hard drives to bridge repairs…(More)”.

A City Is Not a Computer: Other Urban Intelligences

Article by Manas Tripathi and Ashish Bhasin: “Corporate restructuring, such as mergers, acquisitions, and bankruptcy, now raises complex data-ownership challenges for regulators, especially when activities cross borders and fall under multiple legal authorities. As organizations become more digital, controlling user data has become a core issue during restructuring. Policymakers must protect citizens’ data, evaluate the value of data assets, and ensure that competition rules are followed throughout the restructuring process. Although countries have strengthened rights such as the right to know and the right to be forgotten, many firms still exploit legal gaps to access or repurpose user data during restructuring. This article examines how organizations use these loopholes to shift or expand data ownership, often bypassing regulatory protections. Using a detailed case study, we uncover the blind spots in current oversight. To address these issues, we introduce the Data Ownership Governance for Corporate Restructuring (DOGCR) framework. The framework promotes accountability and offers a structured approach for managing data ownership transitions before, during, and after corporate restructuring…(More)”.

Whose data is it, anyway? Deliberating data ownership during corporate restructuring

Book by Joshua Gans: “It is well recognized that recent advances in AI are exclusively advances in statistical techniques for prediction. While this may facilitate automation, this result is secondary to AI’s impact on decision-making. From an economics perspective, predictions have their first-order impacts on the efficiency of decision-making.

In The Microeconomics of Artificial Intelligence, Joshua Gans examines AI as prediction that enhances and perhaps enables decision-making, focusing on the impacts that arise within firms or industries rather than broad economy-wide impacts on employment and productivity. He analyzes what the supply and production characteristics of AI are and what the drivers of the demand for AI prediction are. Putting these together, he explores how supply and demand conditions lead to a price for predictions and how this price is shaped by market structure. Finally, from a microeconomics perspective, he explores the key policy trade-offs for antitrust, privacy, and other regulations…(More)”.
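
As a stylised illustration of that starting point, and not an example taken from the book, the toy calculation below shows how a decision-maker’s expected payoff rises with the accuracy of a prediction. All payoffs and probabilities are invented.

```python
# Toy model: a decision-maker chooses an action based on a forecast of an
# uncertain state. Higher forecast accuracy yields a higher expected payoff.
# All numbers are invented for illustration.
P_RAIN = 0.3                       # base rate of the costly state
PAYOFFS = {                        # (action, state) -> payoff
    ("umbrella", "rain"): 1.0,
    ("umbrella", "sun"): 0.5,
    ("no_umbrella", "rain"): -2.0,
    ("no_umbrella", "sun"): 1.0,
}

def other(state: str) -> str:
    return "sun" if state == "rain" else "rain"

def expected_payoff(accuracy: float) -> float:
    """Expected payoff when the action simply follows a forecast of the given accuracy."""
    total = 0.0
    for state, p_state in (("rain", P_RAIN), ("sun", 1 - P_RAIN)):
        # With probability `accuracy` the forecast matches the true state.
        for forecast, p_forecast in ((state, accuracy), (other(state), 1 - accuracy)):
            action = "umbrella" if forecast == "rain" else "no_umbrella"
            total += p_state * p_forecast * PAYOFFS[(action, state)]
    return total

for acc in (0.5, 0.7, 0.9, 1.0):
    print(f"forecast accuracy {acc:.1f} -> expected payoff {expected_payoff(acc):.3f}")
```

The gap between any two rows of the output is an upper bound on what this decision-maker would pay to move to the more accurate forecast, which is the kind of demand-side reasoning the book formalizes.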

The Microeconomics of Artificial Intelligence

Essay by Amelia Acker: “A series of exploratory case studies were conducted throughout the 1960s to research centralizing access to government data. In response, social and behavioral researchers—both within and outside the federal government—proposed what came to be known as the National Data Center. The proposal prompted several congressional hearings in the House and Senate throughout 1966. Led by Congressman Cornelius Gallagher and Senator Edward V. Long, respectively, the hearings addressed the possible invasion of privacy that would result from a data center using computer technology and automated recordkeeping to manage data gathered from the public. According to privacy scholar Priscilla Regan, “Congress’s first discussions concerning computerized record systems cast the issue in terms of the idea that individual privacy was threatened by technological change.” But, as the hearings continued and critiques in the press began to circulate, concerns shifted from focusing on the potential impacts of new computing technology on data processing to the sheer volume of information being collected about individuals—some three billion records, according to a Senate subcommittee report.

By the end of the year, the congressional inquiries exploded into a full-blown controversy, and as one observer wrote in 1967, the plan for the National Data Center “acquired the image of a design to establish a gargantuan centralized national data center calculated to bring Orwell’s 1984 at least as close as 1970.” These fears about files with personal information being aggregated into dossiers and made accessible through computers would shape data protections in the United States for decades to come…(More)”.

How “Archive” Became a Verb

Report by the National Academies: “Foundation models – artificial intelligence systems trained on massive data sets to perform a wide range of tasks – have the potential to transform scientific discovery and innovation. At the request of the U.S. Department of Energy (DOE), the National Academies conducted a study to consider the capabilities of current foundation models as well as future possibilities and challenges. Foundation Models for Scientific Discovery and Innovation explores how foundation models can complement traditional computational methods to advance scientific discovery, highlights successful use cases, and recommends strategic approaches and investments to support DOE’s mission…(More)”.

Foundation Models for Scientific Discovery and Innovation: Opportunities Across the Department of Energy and the Scientific Enterprise

Blog by Daniel Schuman: “The Government Publishing Office grabbed the spotlight at the final Congressional Data Task Force meeting of 2025 last Wednesday by announcing that it is launching a Model Context Protocol (MCP) server for artificial intelligence tools to access official GPO publication information. The MCP server lets AI tools like ChatGPT and Gemini pull in official GPO documents, allowing them to rely on current, authoritative information when answering questions.

Here’s why this matters. Large Language Models are trained on large collections of text, but that training is fixed at a point in time and can become outdated. As a result, an AI may not know about recent events or changes and may even give confident but incorrect answers.

Technologies like an MCP server address this problem by allowing an AI system to consult trusted, up-to-date sources when it needs them. When a question requires current or authoritative information, the AI can request that information from the MCP server, which returns official data—such as publications from the Government Publishing Office—that the AI can then use in its response. Most importantly, the design of an MCP server allows for machine-to-machine access, helping ensure responses are grounded in authoritative sources rather than generated guesses.

Adding MCP creates another mechanism for the public to access GPO publications, alongside search, APIs, and bulk data access. It is a good example of the legislative branch racing ahead to meet the public need for authoritative, machine-readable information.

GPO’s Mark Caudill said his office implemented the MCP both to respond to growing demand for AI-accessible data and to avoid having to choose the “best” AI agent. This is in line with GPO’s mission of being a trusted repository of the official record of the federal government. With a wide range of AI tools in use, from general-purpose tools like ChatGPT and Gemini to more specialized ones geared toward legal research, GPO’s adoption of MCP allows it to be agnostic across that ecosystem.

A user would configure the LLM of their choice to connect to GovInfo’s MCP, allowing it to draw data from GPO publications rather than being limited to its training data. How well the model interprets those publications and returns quality answers to users is beyond GPO’s control.

GPO has also expanded access to data in ways that don’t involve AI, including an expansion of its customizable RSS feeds for users interested in specific types of documents or the latest data from specific federal offices or courts. The video and slides for the event are available on the Legislative Branch Innovation Hub…(More)”.
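
For the technically curious, the exchange described above can be sketched in a few lines: MCP messages follow JSON-RPC 2.0, and a client typically lists a server’s tools and then calls one of them. The endpoint URL, tool name, and arguments below are placeholders rather than GPO’s published interface, and a real client would also perform the protocol’s initialization handshake, which this sketch omits.

```python
# Minimal sketch of a client querying an MCP server over HTTP using JSON-RPC.
# The endpoint URL, tool name, and arguments are hypothetical placeholders,
# not GPO's published interface; a real MCP client would also perform the
# protocol's initialization handshake before making these calls.
import requests

MCP_ENDPOINT = "https://example.gov/mcp"   # hypothetical server URL

def rpc(method: str, params: dict, req_id: int) -> dict:
    """Send one JSON-RPC 2.0 request and return the parsed response."""
    payload = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    resp = requests.post(MCP_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Ask the server which tools it exposes...
listing = rpc("tools/list", {}, req_id=1)
print([tool["name"] for tool in listing["result"]["tools"]])

# ...then call one, e.g. a hypothetical document-lookup tool, so the LLM can
# ground its answer in the returned official text.
result = rpc("tools/call",
             {"name": "get_publication", "arguments": {"package_id": "EXAMPLE-ID"}},
             req_id=2)
print(result["result"])
```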

How Congress Is Wiring Its Data for the AI Era
