Stefaan Verhulst
Essay by Henry Farrell & Cosma Rohilla Shalizi: “…LLMs create social relations between their users and the authors of the text in their training corpora. With the right access to the model and the corpus, one can trace the connections from system output back to individual source texts and their authors (Grosse et al., 2023). These social relations are mechanically mediated, giving users the illusion that they are interacting with just the machine and not an assemblage of people. But mediated social relationships and their illusions are a common fact of modern life. The social relations created by LLMs in turn cut across, and interact with, other social relations, including those shaped by other social technologies.
Our goal here is to clear a common space where the social sciences and computer science and engineering can discuss the social consequences of AI. We draw heavily on the ideas of Simon (1996), who saw AI, political science, administration, economics, computer science, and cognitive psychology as so many branches of the “sciences of the artificial,” studying how human beings create “artifacts” that model, and act on, their environment. From this perspective, AI models are another means of “complex information processing” (Newell and Simon, 1956). As Simon emphasizes, such systems encompass both information technologies, as studied and built by computer scientists and engineers, and social information systems such as markets, bureaucracy, and, although Simon himself does not stress this, democracy (Lindblom, 1965). All such systems process information by reducing complex realities into more tractable ‘coarse-grainings’ or abstractions that (hopefully) capture important features of the data. Producing coarse-grainings is not all that large-scale social institutions do, but it is quite important. Economic, administrative, and political coordination simply cannot work at scale if complex social relationships are not compressed into visible, tractable representations…(More)”.
Book by Turi Munthe: “Our opinions – whether we believe in God or in ghosts, our views on sex or animal rights or immigration, our basic sense of what’s good or fair – are shaped by a breathtaking web of hidden forces. The age-old idea that our views are forged by reason and evidence alone is wrong: we are influenced by everything from the quirks of distant history, through the geology of where we grew up, to the lines of our genetic code.
This astounding book takes us through culture, biology, geography, history, psychology and much more to uncover the hidden DNA of our opinions. It reveals:
- why the descendants of rice farmers have different values to the descendants of grain farmers
- how our physical appearance shapes the way we see the world – and why conventionally attractive people tend to support the free market
- why liberals think pineapple should go on pizza, and why conservatives prefer smooth peanut butter to crunchy
- why hot and humid countries favour authoritarian leaders, and drought-prone ones prefer authoritarian gods
Packed with extraordinary stories and counterintuitive discoveries, Why We Think What We Think asks a fundamental question of ourselves. If we are predisposed to our beliefs, how can we escape the bounds of our own perspective? The answer lies in disagreement. Argument is how we reason, how we think our way to a better world. To thrive, as individuals and societies, we need the other side…(More)”.
Book by John Kampfner: “At a time when democracies seem paralyzed by fear and populations are turning inward, award-winning journalist John Kampfner travels to ten countries confronting our shared challenges with bravery and imagination.
– Taiwan’s health system achieves 90% patient satisfaction at a fraction of the cost of the NHS.
– Moroccan solar panels in the Sahara produce enough clean energy to power two million homes.
– Estonia has transformed itself into a digital pioneer in a single generation – becoming the world’s first fully digital government where 97.6% of citizens access state services online.
– Costa Rica has tripled its economy while doubling its forest cover, proving that green policies can pay direct dividends.
What unites these countries, and more, is a refusal to accept that difficult problems are unsolvable. The places showing true innovation are often those with their backs against the wall – not wealthy nations assuming they have all the answers. Braver New World is an urgent reminder that solutions exist. The question is whether we have the courage to learn…(More)”.
Book by Henry Snow: “Whether on Caribbean plantations in the seventeenth century or in Amazon warehouses today, the powerful have constantly developed new techniques to control workers—and new justifications for doing so. Ideas of control perfected on the factory floor have expanded to dictate our personal lives, political rights, national policy, and the global economy.
Seventeenth-century intellectuals such as William Petty and John Locke argued that human beings were selfish machines who had to be controlled for their own good. A century later, Jeremy and Samuel Bentham tried to do exactly that with their infamous Panopticon prison. When nineteenth-century Japanese elites imported European factory technologies, they came up with new theories of political control to justify this development. After the Second World War, the General Electric Corporation created an internal propaganda department to fight unions, then pitched that propaganda to the country with the help of an actor, the future President Ronald Reagan. Billionaires today dream of pushing the algorithmic control of Amazon warehouses into every corner of our lives.
Blending intellectual, economic, and labor history, Control Science is a thrilling and lucid work of history. Henry Snow reveals how common sense about work, the economy, and human nature was fabricated and must now be challenged…(More)”.
White Paper by the Siegel Family Endowment: “We’re living in an era of unprecedented information abundance, yet still struggling to generate real insight. The issue isn’t a lack of data, but a lack of well-formed questions. The way we frame problems—and who gets to frame them—shapes everything that follows.
Better Questions, Better Insights introduces the emerging science of questions: a more rigorous approach to defining, testing, and refining the inquiries that guide our work.
At Siegel Family Endowment, this approach has shaped an inquiry-driven model of philanthropy—one that moves beyond linear solutions toward deeper systems change.
This paper offers a practical framework for embedding inquiry into decision-making, helping organizations move from information to insight—and from insight to impact…
This paper is an invitation: a look under the hood at how we’ve approached inquiry in our own work, and a starting point for shared exploration.
As the complexity of societal challenges grows, our approaches must evolve with it. That means embracing a more rigorous practice of curiosity—asking better questions, together—and expanding who gets to ask them.
If we can do that, we have an opportunity to modernize and democratize philanthropy in ways that better meet this moment…(More)”.
Article at The Economist: “The life of American government beancounters is tough, and not just at cocktail parties. They have a hard time persuading people to talk to them at work, too. A decade ago nearly nine in ten Americans, when approached, agreed to fill out the Current Population Survey, which is administered to about 60,000 households each month and asks about, among other things, employment. Fewer than seven in ten do so now (see chart 1). For the Consumer Expenditure Survey, which tries to capture 3,700 households monthly, the response rate is down from 68% to 40%…(More)”.
Article by Johan Harvard, Kurt McLauchlan, David Milestone, Barbara Ubaldi: “Ministers know they are running out of time and money. A fly on the wall in every minister of health’s office will tell you the same story: an inbox full of complaints, stories about backlogs, budget warnings from the finance ministry and messages from the prime minister’s office asking for “quick wins”.
The problem is that there are very few quick wins in health today. Health systems are understaffed, demand is rising and budgets are failing to keep pace. More than 4 billion people around the world still lack access to essential services. Health systems are expected to do more with less – and to do it now.
AI in Health: Promise and Pressure
Everyone is saying that artificial intelligence could be the answer, but no one knows where to start and there is significant risk in getting it wrong. What ministers need isn’t another one-off sales pitch, but a way to cut through the noise and to identify where AI can actually help. To do that, they must work out what to prioritise politically and how to turn potential into results.
This paper introduces a practical framework that can help governments decide where AI is most usefully applied and outlines the enablers required to implement it at scale…(More)”.
Essay by Nick Carr: “…The construction of telecommunication networks required enormous capital and extensive managerial coordination. In the United States, media became big business, as the rise of Western Union signified. To inventors, entrepreneurs, and corporate executives, the public’s celebration of communication proved a boon. Not only did it reinforce their messianic sense of self-importance; it served their business interests. It guaranteed them eager customers, enthusiastic investors, and indulgent regulators. As the pace of technological progress quickened, each advance in media systems triggered a new burst of millenarian rhetoric. Nikola Tesla, in an 1898 interview about his plan to create a wireless telegraph, said that he would be “remembered as the inventor who succeeded in abolishing war.” Not to be outdone, his rival, Guglielmo Marconi, declared in 1912 that his invention of radio would “make war impossible.”
Such cheery predictions were put to an early test in the summer of 1914. In the immediate aftermath of the June 28 assassination of Austrian Archduke Franz Ferdinand by a Serbian nationalist in Sarajevo, hundreds of urgent diplomatic messages raced between European capitals through recently strung telegraph and telephone wires. As the historian Stephen Kern has described, the rapid-fire dispatches quickly devolved into ultimatums and threats. Rather than calming the crisis, they inflamed it. “Communication technology imparted a breakneck speed to the usually slow pace of traditional diplomacy and seemed to obviate personal diplomacy,” Kern writes. “Diplomats could not cope with the volume and speed of electronic communication.” Diplomacy, a communicative art, had been overwhelmed by communication. By August, World War I was under way…(More)”.
Article by Stefaan G. Verhulst and Roshni Singh: “Artificial intelligence systems are increasingly designed to remember us. Whether answering a question, drafting an email, or recommending a course of action, modern AI systems draw on accumulated knowledge about a user’s preferences, behaviors, goals, and past interactions to function effectively. This capacity for context — persistent memory about who we are and what we do — is not a secondary feature. It is foundational to how these systems generate value.
But context in AI is more than a technical convenience or feature. It is also a source of risk. The accumulation and reuse of personal information introduces privacy vulnerabilities, particularly when data from different domains is aggregated into a single, unified memory. This is, of course, not a new concern: as Helen Nissenbaum argued in Privacy in Context, privacy depends on maintaining appropriate information flows within specific social contexts, and risks emerge when those boundaries are collapsed. What AI changes is the scale, speed, and inferential power of such aggregation, turning what were once discrete data linkages into continuous, dynamic systems capable of generating new insights, predictions, and vulnerabilities far beyond the original contexts in which the data was produced.
And the persistence of context raises deeper questions about cognitive dependence: when AI systems continuously shape the informational environment in which users think, they do not merely respond to us but influence how we understand ourselves and make decisions. In doing so, they risk constraining what we have described as digital self-determination: the ability of individuals and communities to meaningfully shape the conditions under which their data is (re)used and how it, in turn, shapes them — shifting agency from the user to the system in often opaque and difficult-to-contest ways.
These risks are not limited to one category of AI. They apply across AI systems that store and reuse user data — from large language models and recommendation engines to agentic systems that act autonomously on a user’s behalf. What this article examines is not a particular technology, but a structural feature common to many: the use of context as memory, and the tradeoffs that follow.
Context is often treated as the accumulation of user data, but this framing is incomplete. Context is better understood as the relational structure that gives information meaning by situating it within social, temporal, and functional relationships. It is not simply what is stored, but how information is organized, linked, and interpreted within a given frame. Without these relationships, data may remain present but lose meaning or be misapplied across situations. As Jessica Talisman elaborates, relational strength itself spans a spectrum from statistical proximity to formal logical commitment; AI systems that conflate these distinct levels risk treating correlation as meaning.
In what follows, we draw on emerging writing on AI memory, context, and human-AI interaction to explore three interconnected dimensions of this problem. First, we examine why context matters so much for AI performance, and why it is better understood as a relational structure than as simple data storage. Second, we analyze the privacy risks that arise when contextual boundaries collapse. Third, we consider the cognitive risks of persistent memory: the possibility that AI systems come to shape not only what users do, but how they think. Across these dimensions, we also consider the implications for digital self-determination — that is, the extent to which individuals and communities retain meaningful agency over how they are represented, interpreted, and acted upon within context-aware AI systems. These concerns are especially acute for children and young users, for whom both data exposure and cognitive development are at stake…(More)”.
Article by Stefaan Verhulst: “Last week, at Jesus College, Cambridge University, the inaugural cohort of Digital Statecraft Fellows gathered — alongside a diverse group of policymakers, technologists, scholars, and practitioners — to grapple with a deceptively simple yet profound question: how do we govern in the age of AI?
The convening offered a much-needed space where theory met practice, and where geopolitical realities, technical architectures, and governance responses were debated not as abstractions, but as institutional design challenges.
The discussions, grounded in the principles of the Digital Statecraft Manifesto, revealed a field at an inflection point. Digital statecraft is no longer just about digitizing services or regulating platforms at the margins. It is about rethinking the state itself as a coordinator in a world where AI systems, data infrastructures, and global platforms increasingly mediate social, economic, and political life.
Below are my ten high-level takeaways from the convening — signals, perhaps, from the frontier of digital statecraft. In keeping with the spirit of the convening — held under the Chatham House Rule — I will not attribute specific remarks to individuals, but instead reflect some of the collective insights that emerged across the discussions…(More)”.