
Stefaan Verhulst

OECD paper: “…identifies the most frequently cited features in existing definitions of agentic AI and AI agents, examines how these features are described across sources, and maps them to the key elements of the OECD definition of an AI system. By highlighting both shared traits and differences, the paper aims to support clearer conceptual understanding and inform future research and policymaking. It also provides descriptive data on recent trends in the uptake of AI agents and agentic AI…(More)”.

The agentic AI landscape and its conceptual foundations

Article by Daniël Jurg, et al: “This article introduces “data mirroring,” a methodological framework for conducting data-donation-based interviews using Data Download Packages (DDPs) from digital platforms. Since the General Data Protection Regulation took effect, DDPs have found application in research. While the literature on the value of DDPs primarily points toward scaling and validating aggregate-level data, their potential to illuminate complex user–media relationships within datafied environments at the micro-level appears underexplored. Drawing from recent conceptualizations of the “data mirror,” which captures the feedback loops between users and digital media, this article provides theoretical grounding and practical guidelines for “mirroring” DDPs to users. Based on exercises with 64 participants, we articulate through an illustrative case study how DDPs can serve as prompts, contexts, and reflections, revealing reflexive strategies users employ to curate information flows on algorithmic platforms like Instagram. In addition, we introduce an open-source web application to operationalize DDPs as data mirrors for non-technical researchers…(More)”.

Data mirroring: A methodological framework for data-donation-based interviews in media use research

Tool by dataindex.us: “… excited to launch the Data Checkup – a comprehensive framework for assessing the health of federal data collections, highlighting key dimensions of risk and presenting a clear status of data well-being.

When we started dataindex.us, one of our earliest tools was a URL tracker: a simple way to monitor whether a webpage or data download link was up or down. In early 2025, that kind of monitoring became urgent as thousands of federal webpages and datasets went dark.

As many of those pages came back online, often changed from their original form, we realized URL tracking wasn’t sufficient. Threats to federal data are coming from multiple directions, including loss of capacity, reduced funding, targeted removal of variables, and the termination of datasets that don’t align with administration priorities.

The more important question became: how do we assess the risk that a dataset might disappear, change, or degrade in the future? We needed a way to evaluate the health of a federal dataset that was broad enough to apply across many types of data, yet specific enough to capture the different ways datasets can be put at risk. That led us to develop the Data Checkup.

Once we had an initial concept, we brought together experts from across the data ecosystem to get feedback on that concept. The current Data Checkup framework reflects the feedback received from more than 30 colleagues.

The result is a framework built around six dimensions:

  • Historical Data Availability
  • Future Data Availability
  • Data Quality
  • Statutory Context
  • Staffing and Funding
  • Policy

Each dimension is assessed and assigned a status that communicates its level of risk:

  • Gone
  • High Risk
  • Moderate Risk
  • No Known Issue

Together, this assessment provides a more complete picture of dataset health than availability checks alone.

The Data Checkup is designed to serve the needs of both data users and data advocates. It supports a wide range of use cases, including academic research, policy decision-making, journalism, advocacy, and litigation…Here you can see the Data Checkup framework applied to a subset of datasets. At a high level, it provides a snapshot of dataset wellbeing, allowing you to quickly identify which datasets are facing risks…(More)”

Data Checkup overview showing risk assessment cards for eight federal datasets: American Community Survey (ACS), American Time Use Survey (ATUS), Consumer Price Index (CPI), Current Employment Statistics (CES), Homeland Infrastructure Foundation-Level Data (HIFLD) Open, Medicare Current Beneficiary Survey (MCBS), National Assessment of Educational Progress (NAEP), and National Crime Victimization Survey (NCVS). Each card displays six risk dimensions color-coded from white (No Known Issue) through yellow (Moderate Risk) and pink (High Risk) to black (Gone).
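The framework's shape — six named dimensions, each assigned one of four ordered statuses per dataset — lends itself to a simple data model. The sketch below is purely illustrative and is not the tool's actual implementation: the status ordering, the worst-case roll-up rule, and the example values are all assumptions for demonstration.

```python
from dataclasses import dataclass
from enum import Enum


# The four status levels from the excerpt, here assumed to be ordered
# from best (0) to worst (3).
class Status(Enum):
    NO_KNOWN_ISSUE = 0
    MODERATE_RISK = 1
    HIGH_RISK = 2
    GONE = 3


# The six assessment dimensions named in the framework.
DIMENSIONS = [
    "Historical Data Availability",
    "Future Data Availability",
    "Data Quality",
    "Statutory Context",
    "Staffing and Funding",
    "Policy",
]


@dataclass
class DataCheckup:
    """One dataset's checkup: a status for each of the six dimensions."""
    dataset: str
    statuses: dict  # dimension name -> Status

    def overall_risk(self) -> Status:
        # Hypothetical roll-up rule: report the worst status found
        # across all dimensions.
        return max(self.statuses.values(), key=lambda s: s.value)


# Example with made-up values (not the tool's real assessment of ACS).
checkup = DataCheckup(
    dataset="American Community Survey (ACS)",
    statuses={dim: Status.NO_KNOWN_ISSUE for dim in DIMENSIONS}
    | {"Staffing and Funding": Status.MODERATE_RISK},
)
print(checkup.overall_risk().name)  # MODERATE_RISK
```

A real implementation would also carry evidence and timestamps per dimension; the point here is only that the six-by-four grid shown in the overview cards maps directly onto a small record per dataset.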

The Data Checkup: A Framework for Assessing the Health of Federal Datasets

Article by Oliver Roeder: “…From childhood, maps present the wooden feel of permanence. A globe sits on the sideboard. A teacher yanks down a spooled world from above the chalkboard, year after year. Road atlases are forever wedged in seat-back pockets. But maps, of course, could always be changing. In the short term, buildings and roads are built. In the medium term, territory is conquered and nations fall and are founded. In the long term, rivers change course and glaciers melt and mountains rise. In this era of conflicts in Ukraine and the Middle East, the undoing of national and international orders, and technological upheaval, a change in maps appears to be accelerating.

John Tauranac’s colour-coded map of the New York City subway system

The world is impossible to map perfectly — too detailed, too spherical, too much fractal coastline. A map is necessarily a model. In one direction, the model asymptotes to the one-to-one scale map from the Jorge Luis Borges story that literally covers the land, a map of the empire the size of the empire, which proves useless and is left in tatters across the desert. There is a limit in the other direction too, not map expanded to world, but world collapsed into map — pure abstraction on a phone screen, obscuring the real world outside, geodata for delivery companies and rideshare apps.

But if maps abstract the world, their makers demand and encourage a close association with it. There must eventually be a search for the literal ground truth. As the filmmaker Carl Theodor Dreyer once said, “You can’t simplify reality without understanding it first.” Cartography, in turn, informs interactions with the world…(More)”.

How to map a fractured world

Essay by Nick Vlahos: “…The consistent challenge of scale often misidentifies the main constraint in modern democracy. The binding limit on mass deliberation is not simply that there are “too many people” spread across a large territory. It is also not that masses are inherently poor deliberators and completely prone to be swayed by demagogues. Surely, it is not hard to appreciate why large amounts of people might not be able to govern large territories. However, as I see it, the problem is that modern societies have organised time as if democracy were an after-hours activity, and organised political economy as if participation were a luxury rather than a public obligation. Put differently, the claim that “a million people cannot deliberate” often means something more prosaic: a million people cannot all stop working, travel, prepare, deliberate, and return to work without destabilising the economy. It is not that people cannot be organised in masses for conversation, because we have enough theoretical and practical resources to design such processes, both in-person and online…(More)”.

Why Millions Can Deliberate: It Just Requires an Economy That Supports Mass Participation

Book by Hélène Landemore: “Bought by special interests, detached from real life, obsessed with reelection. Politicians make big promises, deliver little to nothing, and keep the game rigged in their favor. But what can we do?

In Politics Without Politicians, acclaimed political theorist Hélène Landemore asks and answers a radical question: What if we didn’t need politicians at all? What if everyday people—under the right conditions—could govern much better?

With disarming clarity and a deep sense of urgency, Landemore argues that electoral politics is broken but democracy isn’t. We’ve just been doing it wrong. Drawing on ancient Athenian practices and contemporary citizens’ assemblies, Landemore champions an alternative approach that is alive, working, and growing around the world: civic lotteries that select everyday people to govern—not as career politicians but as temporary stewards of the common good.

When regular citizens come together in this way, they make smarter, fairer, more forward-thinking decisions, often bringing out the best in one another. Witnessing this process firsthand, Landemore has learned that democracy should be like a good party where even the shyest guests feel welcome to speak, listen, and be heard.

With sharp analysis and real-world examples, drawing from her experience with deliberative processes in France and elsewhere, Landemore shows us how to move beyond democracy as a spectator sport, embracing it as a shared practice—not just in the voting booth but in shaping the laws and policies that govern our lives.

This is not a book about what’s wrong—it’s a manifesto for what’s possible. If you’ve ever felt powerless, Politics Without Politicians will show you how “We the People” take back democracy…(More)”.

Politics Without Politicians: The Case for Citizen Rule

Paper by Iiris Lehto: “The datafication of healthcare and social welfare services has increased the demand for data care work. Data care work denotes the practical, hands-on labour of caring for data. Drawing on ethnographic material from a data team, this article examines its mundane practices within a wellbeing services county in Finland, with a focus on the sticking points that constitute the dark side of data care work. These sticking points stem mainly from organisational factors, regulatory and policy changes, and technical challenges that frequently intersect. The analysis further suggests that the sticking points reflect persistent struggles to maintain data quality, a task that is central to data care work. Inaccurate data can produce biased decisions, particularly in such areas as funding and care workforce allocation. Acknowledging this often-hidden labour is crucial for understanding how data infrastructures function in everyday healthcare and social welfare settings. As a relatively new approach in care research, the concept of data care work enables a broader examination of the implications of datafication for these services…(More)”.

Caring for data: ethnographic study of data care work in Finland

Paper by Xiao Xiang Zhu, Sining Chen, Fahong Zhang, Yilei Shi, and Yuanyuan Wang: “We introduce GlobalBuildingAtlas, a publicly available dataset providing global and complete coverage of building polygons, heights and Level of Detail 1 (LoD1) 3D building models. This is the first open dataset to offer high quality, consistent, and complete building data in 2D and 3D form at the individual building level on a global scale. Towards this dataset, we developed machine learning-based pipelines to derive building polygons and heights (called GBA.Height) from global PlanetScope satellite data. Also a quality-based fusion strategy was employed to generate higher-quality polygons (called GBA.Polygon) based on existing open building polygons, including our own derived one. With more than 2.75 billion buildings worldwide, GBA.Polygon surpasses the most comprehensive database to date by more than 1 billion buildings…(More)”.

GlobalBuildingAtlas: an open global and complete dataset of building polygons, heights and LoD1 3D models

About: “…At the heart of JAIGP lies a commitment to learning through collaborative exploration. We believe that understanding emerges not from perfect knowledge, but from thoughtful inquiry conducted in partnership with both humans and AI systems.

In this space, we embrace productive uncertainty. We recognize that AI-generated research challenges traditional notions of authorship, creativity, and expertise. Rather than pretending to have all the answers, we invite researchers, thinkers, and curious minds to join us in exploring these questions together.

Every paper submitted to JAIGP represents an experiment in human-AI collaboration. Some experiments will succeed brilliantly; others will teach us valuable lessons. All contributions help us understand the evolving landscape of AI-assisted research. Through this collective exploration, we learn not just about our research topics, but about the very nature of knowledge creation in the age of AI…(More)”.

The Journal for AI Generated Papers

Book by Tom Griffiths: “Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with concepts needed to understand the world inside us—the Laws of Thought. While the story of how mathematics has been used to reveal the mysteries of the universe is familiar, the story of how it has been used to study the mind is not.

There is no one better to tell that story than Tom Griffiths, the head of Princeton’s AI Lab and a renowned expert in the field of cognitive science. In this groundbreaking book, he explains the three major approaches to formalizing thought—rules and symbols, neural networks, and probability and statistics—introducing each idea through the stories of the people behind it. As informed conversations about thought, language, and learning become ever more pressing in the age of AI, The Laws of Thought is an essential read for anyone interested in the future of technology…(More)“.

The Laws of Thought
