Book by Shannon Mattern: “Computational models of urbanism—smart cities that use data-driven planning and algorithmic administration—promise to deliver new urban efficiencies and conveniences. Yet these models limit our understanding of what we can know about a city. A City Is Not a Computer reveals how cities encompass myriad forms of local and indigenous intelligences and knowledge institutions, arguing that these resources are a vital supplement and corrective to increasingly prevalent algorithmic models.

Shannon Mattern begins by examining the ethical and ontological implications of urban technologies and computational models, discussing how they shape and in many cases profoundly limit our engagement with cities. She looks at the methods and underlying assumptions of data-driven urbanism, and demonstrates how the “city-as-computer” metaphor, which undergirds much of today’s urban policy and design, reduces place-based knowledge to information processing. Mattern then imagines how we might sustain institutions and infrastructures that constitute more diverse, open, inclusive urban forms. She shows how the public library functions as a steward of urban intelligence, and describes the scales of upkeep needed to sustain a city’s many moving parts, from spinning hard drives to bridge repairs…(More)”.

A City Is Not a Computer: Other Urban Intelligences

Article by Manas Tripathi and Ashish Bhasin: “Corporate restructuring, such as mergers, acquisitions, and bankruptcy, now raises complex data-ownership challenges for regulators, especially when activities cross borders and fall under multiple legal authorities. As organizations become more digital, controlling user data has become a core issue during restructuring. Policymakers must protect citizens’ data, evaluate the value of data assets, and ensure that competition rules are followed throughout the restructuring process. Although countries have strengthened rights such as the right to know and the right to be forgotten, many firms still exploit legal gaps to access or repurpose user data during restructuring. This article examines how organizations use these loopholes to shift or expand data ownership, often bypassing regulatory protections. Using a detailed case study, we uncover the blind spots in current oversight. To address these issues, we introduce the Data Ownership Governance for Corporate Restructuring (DOGCR) framework. The framework promotes accountability and offers a structured approach for managing data ownership transitions before, during, and after corporate restructuring…(More)”.

Whose data is it, anyway? Deliberating data ownership during corporate restructuring

Book by Joshua Gans: “It is well recognized that recent advances in AI are exclusively advances in statistical techniques for prediction. While this may facilitate automation, that effect is secondary to AI’s impact on decision-making. From an economics perspective, predictions have their first-order impact on the efficiency of decision-making.

In The Microeconomics of Artificial Intelligence, Joshua Gans examines AI as prediction that enhances and perhaps enables decision-making, focusing on the impacts that arise within firms or industries rather than broad economy-wide impacts on employment and productivity. He analyzes the supply and production characteristics of AI and the drivers of demand for AI prediction. Putting these together, he explores how supply and demand conditions lead to a price for predictions and how this price is shaped by market structure. Finally, from a microeconomics perspective, he explores the key policy trade-offs for antitrust, privacy, and other regulations…(More)”.

The Microeconomics of Artificial Intelligence

Essay by Amelia Acker: “A series of exploratory case studies were conducted throughout the 1960s to research centralizing access to government data. In response, social and behavioral researchers—both within and outside the federal government—proposed what came to be known as the National Data Center. The proposal prompted several congressional hearings in the House and Senate throughout 1966. Led by Congressman Cornelius Gallagher and Senator Edward V. Long, respectively, the hearings addressed the possible invasion of privacy that would result from a data center using computer technology and automated recordkeeping to manage data gathered from the public. According to privacy scholar Priscilla Regan, “Congress’s first discussions concerning computerized record systems cast the issue in terms of the idea that individual privacy was threatened by technological change.” But, as the hearings continued and critiques in the press began to circulate, concerns shifted from focusing on the potential impacts of new computing technology on data processing to the sheer volume of information being collected about individuals—some three billion records, according to a Senate subcommittee report.

By the end of the year, the congressional inquiries exploded into a full-blown controversy, and as one observer wrote in 1967, the plan for the National Data Center “acquired the image of a design to establish a gargantuan centralized national data center calculated to bring Orwell’s 1984 at least as close as 1970.” These fears about files with personal information being aggregated into dossiers and made accessible through computers would shape data protections in the United States for decades to come…(More)”.

How “Archive” Became a Verb

Report by the National Academies: “Foundation models – artificial intelligence systems trained on massive data sets to perform a wide range of tasks – have the potential to transform scientific discovery and innovation. At the request of the U.S. Department of Energy (DOE), the National Academies conducted a study to consider the capabilities of current foundation models as well as future possibilities and challenges. Foundation Models for Scientific Discovery and Innovation explores how foundation models can complement traditional computational methods to advance scientific discovery, highlights successful use cases, and recommends strategic approaches and investments to support DOE’s mission…(More)”.

Foundation Models for Scientific Discovery and Innovation: Opportunities Across the Department of Energy and the Scientific Enterprise

Blog by Daniel Schuman: “The Government Publishing Office grabbed the spotlight at the final Congressional Data Task Force meeting of 2025 last Wednesday by announcing that it is launching a Model Context Protocol (MCP) server for artificial intelligence tools to access official GPO publication information. The MCP server lets AI tools like ChatGPT and Gemini pull in official GPO documents, allowing them to rely on current, authoritative information when answering questions.

Here’s why this matters. Large Language Models are trained on large collections of text, but that training is fixed at a point in time and can become outdated. As a result, an AI may not know about recent events or changes and may even give confident but incorrect answers.

Technologies like an MCP server address this problem by allowing an AI system to consult trusted, up-to-date sources when it needs them. When a question requires current or authoritative information, the AI can request that information from the MCP server, which returns official data—such as publications from the Government Publishing Office—that the AI can then use in its response. Most importantly, the design of an MCP server allows for machine-to-machine access, helping ensure responses are grounded in authoritative sources rather than generated guesses.
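
To make that flow concrete, below is a minimal sketch of how a client application might consult such a server, written against the open-source MCP Python SDK. The server URL and the search_publications tool name are placeholders for illustration, not GPO’s published endpoint or schema.

```python
# Hypothetical sketch of an MCP client consulting a GovInfo-style server.
# The URL and tool name are placeholders, not GPO's actual endpoint or schema.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.gov/mcp"  # placeholder endpoint


async def fetch_official_context(query: str) -> None:
    # Open an HTTP transport to the MCP server and start a client session.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover which tools the server exposes (e.g., document search or retrieval).
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Request official publications matching the question so the model can
            # ground its answer in current, authoritative text instead of relying
            # on stale training data.
            result = await session.call_tool(
                "search_publications",  # hypothetical tool name
                {"query": query},
            )
            for item in result.content:
                print(item)


asyncio.run(fetch_official_context("most recent Congressional Record issue"))
```

In practice an MCP-aware chat client performs this handshake itself; the point of the sketch is that retrieval becomes an explicit, auditable request to an official source rather than a guess reconstructed from the model’s memory.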

Adding MCP creates another mechanism for the public to access GPO publications, alongside search, APIs, and bulk data access. It is a good example of the legislative branch racing ahead to meet the public need for authoritative, machine-readable information.

GPO’s Mark Caudill said his office implemented the MCP server both to respond to growing demand for AI-accessible data and to avoid having to choose the “best” AI agent. This is in line with GPO’s mission of being a trusted repository of the official record of the federal government. With a wide range of AI tools in use, from general-purpose ones like ChatGPT and Gemini to more specialized ones geared toward legal research, GPO’s adoption of MCP allows it to be agnostic across that ecosystem.

A user would configure the LLM of their choice to connect to GovInfo’s MCP, allowing it to draw data from GPO publications rather than being limited to its training data. How well the model interprets those publications and returns quality answers to users is beyond GPO’s control.

GPO has also expanded access to its data in ways that don’t involve AI, including an expansion of its customizable RSS feeds for users interested in specific types of documents or in the latest data from specific federal offices or courts. The video and slides for the event are available on the Legislative Branch Innovation Hub…(More)”.

How Congress Is Wiring Its Data for the AI Era

JRC policy brief: “…emphasizes the importance of an inclusive approach to regulatory sandboxing, to ensure digital innovations deliver equitable and impactful outcomes. The document outlines the risks associated with improperly designed sandboxes, such as bias and exclusion, and provides a practical guide to ensuring inclusion in regulatory sandboxes. The main recommendations are grouped under six themes: scoping, regulations and safeguards, stakeholder engagement, data and technology, governance, and reach and impact. Ultimately, it underscores that while inclusive sandboxing may require more resources, it delivers greater value by fostering equitable innovation that benefits all stakeholders…(More)”.

Innovating Together – A Guide to Inclusive Regulatory Sandboxing

Article by Tanjimul Islam: “Starlink, the satellite internet service run by Elon Musk’s SpaceX, launched in 2019. Since then, it has become available in more than 150 markets, with 8 million users. 

Starlink’s expansion has at times struggled against regulatory red tape. But during Elon Musk’s stint in U.S. President Donald Trump’s administration, Musk’s Starlink was activated or approved in at least 13 countries, including India, Vietnam, Pakistan, and Bangladesh. In some of these places, Starlink’s applications had stalled for years until they were suddenly greenlit. Rest of World’s recent investigative feature explores how Musk’s business benefited from his close ties to Trump: “There were [American] government officials, whether authorized or not, who were basically saying, if you want favorable treatment from Trump, you better be good to Musk,” said Blair Levin, who led the Obama administration’s National Broadband Plan and was formerly chief of staff at the Federal Communications Commission.

Part of space-based internet’s appeal is that it reaches areas where traditional internet infrastructure is sparse or nonexistent. Starlink — and the companies trying to compete with it — is now fueling a satellite boom. It’s estimated that in 2025, a SpaceX rocket has, on average, brought Starlink satellites into space every three days. Despite the billions of dollars of investment, and the thousands of satellites zipping around our planet, it’s still hard to really picture the satellite internet industry, which is why Rest of World put together this visualization…(More)”.

How Starlink became the world’s internet alternative

Article by Stefaan Verhulst: “The European Union’s pursuit of a single data market has always been a balancing act between fostering public interest goals and safeguarding private enterprise. The Data Act (Regulation (EU) 2023/2854), which became applicable on September 12, 2025, codified this tension, particularly in its Business-to-Government (B2G) provisions under Chapter V.

Initially, these provisions required data holders to share data with public sector bodies in cases of “exceptional need”, which was divided into two tracks: urgent Public Emergencies and non-emergency Public Interest Tasks.

However, the European Commission’s Digital Omnibus Package, published last month, has signaled a definitive pivot. The core message: B2G data sharing is now being refined and confined as a measure of last resort. This narrowing protects the private sector but simultaneously creates a critical challenge: without designated data stewards in both the public and private sectors, this restrictive, haphazard approach will fail to build the trusted, long-term data ecosystems necessary to address both emergency and non-emergency systemic societal challenges…(More)”.

Is this the end of Business-to-Government (B2G) sharing? How the European Commission’s Digital Omnibus Confines B2G Data Sharing to a ‘Last Resort’ Option

Report by the American Statistical Association (ASA): “…The report documents significant challenges facing the 13 federal statistical agencies and outlines nine new recommendations to strengthen the nation’s statistical infrastructure.

Federal statistics—produced by agencies including the Bureau of Labor Statistics, the Census Bureau and the National Center for Health Statistics—serve as essential infrastructure for economic policy, public health decisions and democratic governance. These data inform everything from interest rate decisions to public health responses and business planning.

“Federal statistics are fundamental infrastructure, similar to roads, bridges and power grids,” said ASA Executive Director Ron Wasserstein. “This report shows that immediate investment and coordination are needed to ensure these agencies can meet current and future information needs.”

Key Findings

The report documents the following concerning trends:

  • Staffing losses: Most agencies have lost 20-30% of their staff, affecting their ability to innovate and meet expanding demands for more timely and granular data.
  • Budget constraints: Eight of 13 agencies have lost at least 16% of purchasing power since FY2009, even as congressional mandates have increased.
  • Declining public trust: The percentage of U.S. adults who trust federal statistics declined from 57% in June 2025 to 52% in September 2025, according to surveys conducted by NORC at the University of Chicago.
  • System coordination challenges: The decentralized structure of the federal statistical system, while promoting subject-matter expertise, lacks dedicated funding for system-wide initiatives such as joint IT upgrades and coordinated data-sharing…(More)”.

The Nation’s Data at Risk: 2025 Report
