
Stefaan Verhulst

Article by Michelle Goldberg: “When I saw “Data,” a zippy Off Broadway play about the ethical crises of employees at a Palantir-like A.I. company, last month, I was struck by its prescience. It’s about a brilliant, conflicted computer programmer pulled into a secret project — stop reading here if you want to avoid spoilers — to win a Department of Homeland Security contract for a database tracking immigrants. A brisk theatrical thriller, the play perfectly captures the slick, grandiose language with which tech titans justify their potentially totalitarian projects to the public and perhaps to themselves.

“Data is the language of our time,” says a data analytics manager named Alex, sounding a lot like the Palantir chief Alex Karp. “And like all languages, its narratives will be written by the victors. So if those fluent in the language don’t help democracy flourish, we hurt it. And if we don’t win this contract, someone else less fluent will.”

I’m always on the lookout for art that tries to make sense of our careening, crisis-ridden political moment, and found the play invigorating. But over the last two weeks, as events in the real world have come to echo some of the plot points in “Data,” it’s started to seem almost prophetic.

Its protagonist, Maneesh, has created an algorithm with frighteningly accurate predictive powers. When I saw the play, I had no idea whether such technology was really on the horizon. But this week, The Atlantic reported on Mantic, a start-up whose A.I. engine outperforms many of the best human forecasters across domains from politics to sports to entertainment…(More)”.

He Studied Cognitive Science at Stanford. Then He Wrote a Startling Play About A.I. Authoritarianism.

Paper by Daron Acemoglu, Dingwen Kong & Asuman Ozdaglar: “We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge ultimately vanishes, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse…(More)”.
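The tipping dynamic the abstract describes can be sketched with a toy simulation. This is not the paper’s actual model: the functional forms and parameters below are illustrative assumptions. The idea is that human learning effort rises with the stock of general knowledge (complementarity) and is crowded out by agentic accuracy (substitution), so once accuracy crosses a threshold, effort shuts down and the knowledge stock decays to zero.

```python
# Toy illustration of the knowledge-collapse tipping point.
# Illustrative functional forms and parameters; NOT the model
# from Acemoglu, Kong & Ozdaglar.
def simulate(agentic_accuracy, steps=200, g0=1.0,
             decay=0.1, complementarity=0.5):
    """Evolve a stock of general knowledge G over time.

    Effort e_t = max(0, min(1, complementarity * G_t) - a):
    it rises with G (complementarity between general knowledge and
    human effort) and is crowded out one-for-one by agentic accuracy
    a (substitution). G accumulates effort and depreciates.
    """
    g = g0
    for _ in range(steps):
        effort = max(0.0, min(1.0, complementarity * g) - agentic_accuracy)
        g = (1 - decay) * g + effort
    return g

# Moderate accuracy: effort stays positive and knowledge settles at a
# high steady state (effort 0.8 / decay 0.1 = 8.0).
print(round(simulate(agentic_accuracy=0.2), 3))

# High accuracy: effort is crowded out entirely from the start, and the
# knowledge stock decays geometrically toward zero -- the collapse
# steady state, despite the high-quality advice.
print(round(simulate(agentic_accuracy=0.6), 6))
```

The welfare non-monotonicity in the abstract follows the same logic: raising accuracy improves today’s decisions, but past the threshold it destroys the knowledge stock that future decisions rely on.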

AI, Human Cognition and Knowledge Collapse

Article by Dashun Wang: “In the early 1980s, Apple co-founder Steve Jobs described the computer as “a bicycle for our minds”. He was inspired by a Scientific American graphic he’d encountered as a boy, showing that a human on a bicycle is more energy-efficient than any animal. The metaphor captured the promise of personal computing: tools that enable people to go further and faster with less effort. But the deeper brilliance of bicycles lies in what they do not do: they do not mimic human biology, nor any form found in nature. The bicycle reimagined motion entirely.

By comparison, I propose that artificial-intelligence agents are aeroplanes for the mind — they can speed things up for humans even more than bicycles do, but they are harder to control and the consequences of mistakes can be huge. And scientists are particularly poised to benefit from these tools. Scientific research is, at its core, a journey into the unknown. Yet working in new terrains brings unexpected challenges and frequent failures…(More)”.

AI agents are ‘aeroplanes for the mind’: five ways to ensure that scientists are responsible pilots

Paper by Haydn Belfield: “The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI talent. Both are crucial to the future of AI activism and worthy of sustained attention…(More)”.

Activism by the AI Community: Analysing Recent Achievements and Future Prospects

Paper by Zachary Catanzaro: “Judges now consult ChatGPT about what statutes mean. The scholarly response treats this as a reliability problem. Reliability is beside the point. LLMs generate text by predicting probable token sequences, manipulating symbols without accessing what those symbols mean. But syntax cannot generate semantics. Computational legal interpretation does not fail because the technology is immature. It fails because it is a category error. A theory that fixes meaning in historical usage and treats interpretation as empirical recovery cannot resist algorithms that measure historical usage patterns. The progression from dictionaries to corpus databases to generative models follows originalism’s empirical commitments to their logical end. AI-generated content saturates the corpora on which future models train, and the resulting degradation eliminates marginal claims first: those upon which life and liberty depend. Computational methods did not contaminate originalist interpretation. Originalism was already a jurisprudence that simulated meaning while discarding the semantic content that interpretation requires. The machines simply made the method hyperreal…(More)”.

The Dead Law Theory: The Perils of Simulated Interpretation

Chapter by Maria Michali, Amalia Kallergi, Eva Paraschou, Laurens Landeweerd, Steffi Friedrichs, Athina Vakali & George Gaskell: “…examines three historical case studies: (1) ‘genetic modification to genome editing’; (2) ‘controversies over climate science’, and (3) ‘artificial intelligence in social media’. On this basis it develops an understanding of how public trust and confidence in science, technology, and innovation (STI) can be gained, maintained, or lost. This leads to practical recommendations for ethical and societally sustainable STI. There are both intuitive and evidenced warrants for trust in science. Intuitive warrants arise when innovation creates an immediate sense of familiarity, making the future feel like a natural continuation of the past. Evidenced warrants occur when science generates new insights or produces technologies that benefit individuals or society. However, trust in science may be undermined by scientific fraud, the dismissal of public concerns about innovations that challenge societal values, and the populist rejection of science, often accompanied by conspiracy theories. Building and maintaining public trust in STI is a multifaceted challenge that requires coordinated efforts from scientists, research institutions, funding bodies, regulators, and democratic governance processes. A commitment to transparency, proactive engagement with public concerns, risk assessment and mitigation, responsible communication, and strong regulatory frameworks is essential for navigating the complexities of technological advancement and ensuring public trust…(More)”.

The Conditions for Trust in Science, Technology and Innovation

Report by Data Quality Campaign: “Statewide longitudinal data systems (SLDSs) have enormous potential to improve education and workforce outcomes, but not every system is designed to work in the same way. Even two “good” SLDSs may not look the same because they can be designed with different goals, or functions, in mind: public reports and dashboards, research and analytics, and support for individuals. All three functions are essential and address different people’s data access needs. Each function requires different considerations for infrastructure, data governance, legal frameworks, and ongoing investments.

This brief explores how policymakers can purposefully shape the design of their state’s SLDS to effectively support any or all of the three functions. When a system’s function is aligned with the intended users, required infrastructure, appropriate governance structure, and intended data uses, the system can effectively enable access to data that people need to make education and workforce decisions…(More)”.

Purpose Drives Design: Functions of a Statewide Longitudinal Data System

Article by Vivian Liu: “A group of teenagers is standing in front of a room of residents, government officials, and organizations. They are presenting findings from data they helped to collect. Their work speaks to urgent challenges in their communities, including displacement, air pollution, extreme heat, and lack of community spaces. For some of the teenagers, this is their first experience collecting data and contributing to solutions in their own communities.

Engaging teenagers and young adults in data collection, analysis, and dissemination improves the quality of the results, provides better information for policy and program responses, and supports the next generation of leaders.

In this fifth blog post in our Equity in Action series, we explore how four local organizations that received grants from the Local Data for Equitable Communities program are training and partnering with youth to be the voices shaping community-informed solutions…(More)”.

Local Strategies for Engaging Youth with Data

Paper by Seth Lazar & Lorenzo Manuali: “LLMs are among the most advanced tools ever devised for understanding and generating natural language. Democratic deliberation and decision-making involve, at several distinct stages, the production and comprehension of language. So it is natural to ask whether our best linguistic tools might prove instrumental to one of our most important tasks involving language. Researchers and practitioners have recently asked whether LLMs can support democratic deliberation by leveraging abilities to summarise content, to aggregate opinions over summarised content, and to represent voters by predicting their preferences over unseen choices. In this paper, we assess whether using LLMs to perform these and related functions really advances the democratic values behind these experiments. We suggest that the record is mixed. In the presence of background inequality of power and resources, as well as deep moral and political disagreement, we should not use LLMs to automate non-instrumentally valuable components of the democratic process, nor should we be tempted to supplant fair and transparent decision-making procedures that are practically necessary to reconcile competing interests and values. However, while LLMs should be kept well clear of formal democratic decision-making processes, we think they can instead strengthen the informal public sphere—the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account…(More)”

Using LLMs to Enhance Democracy

Article by Guglielmo Gnoni et al: “Europe is more reliant than ever on digital services and the infrastructure that fuels them. A prolonged systemic failure would trigger a cascade of crises: cities losing power, emergency services overwhelmed, and financial disruption. Although operators skillfully manage short-term outages and networks are built to be redundant at the core, Europe’s data infrastructure remains fragile.

Infrastructure providers, investors, and policymakers can coordinate various efforts to safeguard society from the impact of a prolonged outage, especially in a time of rising geopolitical tension.

In this article, we have used EU and industry data to model how European infrastructure would degrade in a major outage—from inconvenience in the first few hours to a systemic breakdown as the outage extends for days. We also illustrate how disruption on this scale is worryingly possible. For example, some subsea cables connecting nations to the global economy lack monitoring where they come onshore.

This “resilience gap” between Europe’s reliance on digital infrastructure and the technology’s ability to operate under stress—whether caused by human action or technical accident—is hard to close. Europe’s digital ecosystem is complex, spanning regulated national telcos and distant tech giants in Silicon Valley, India, and China. Nevertheless, Europe can go further and faster than current initiatives. Our analysis helps define the priorities for urgent action. Digital infrastructure operators, investors, and governments all have a role to play in a concerted effort to avoid prolonged outages with disastrous impact…(More)”.

The Day Europe’s Data Stops Flowing
