Stefaan Verhulst
Paper by Michał Klincewicz, Mark Alfano, and Amir Ebrahimi Fard: “At least since Francis Bacon, the slogan “knowledge is power” has been used to capture the relationship between decision-making at a group level and information. We know that being able to shape the informational environment for a group is a way to shape their decisions; it is essentially a way to make decisions for them. This paper focuses on strategies that are intentionally, by design, impactful on the decision-making capacities of groups, effectively shaping their ability to take advantage of information in their environment. Among these, the best known are political rhetoric, propaganda, and misinformation. The phenomenon this paper singles out from among these is a relatively new strategy, which we call slopaganda. According to The Guardian, News Corp Australia is currently churning out 3,000 “local” generative AI (GAI) stories each week. In the coming years, such “generative AI slop” will present multiple knowledge-related (epistemic) challenges. We draw on contemporary research in cognitive science and artificial intelligence to diagnose the problem of slopaganda, describe some recent troubling cases, and then suggest several interventions that may help to counter slopaganda…(More)”.
Article by Rod Schoonover, Daniel P. Aldrich, and Daniel Hoyer: “The emergent reality of complex risk demands a fundamental change in how we conceptualize it. To date, policymakers, risk managers, and insurers—to say nothing of ordinary people—have consistently treated disasters as isolated events. Our mental model imagines a linear progression of unfortunate, unpredictable episodes, unfolding without relation to one another or to their own long-term and widely distributed effects. A hurricane makes landfall, we rebuild, we move on. A pandemic emerges, we develop vaccines, we return to normal.
This outdated model of risk leads to reactive, short-sighted policies rather than proactive prevention and preparedness strategies. Key public programs are designed around discrete, historically bounded events, not today’s cascading and compounding crises. For instance, under the US Stafford Act, the Federal Emergency Management Agency (FEMA) must issue separate declarations for each major disaster, delaying aid and fragmenting coordination when multiple hazards strike. The National Flood Insurance Program still relies on historical floodplain maps that by definition underestimate future risks from climate change. Federal crop insurance supports farmers against crop losses from drought, excess moisture, damaging freezes, hail, wind, and disease, but today diverse stressors such as extreme heat and pollinator loss are converging with other known risks.
Our struggle to grasp complex risk has roots in human psychology. Humans have a well-documented tendency to notice and focus on immediate, visible dangers rather than long-term or abstract ones. Even when we can recognize such longer-term and larger-scale threats, we typically set them aside to focus on more immediate and tangible short-term dangers. As a result, lawmakers and emergency managers, like people in general, often succumb to what psychologists and cognitive scientists call the availability heuristic: Policies are designed to react to whatever is most salient, which tends to be the most recent, most dramatic incidents—those most readily available to the mind.
These habits—and the policies that reflect them—do not account for the slow onset of risks, or their intersection with other sources of hazard, during the time when disaster might be prevented. Additionally, both cognitive biases and financial incentives may lead people to discount future risks, even when their probability and likely impact are well understood, and to struggle with conceptualizing phenomena that operate on global scales. Our mental processes are good at understanding immediate, tangible risk, not complex risk scenarios evolving over time and space…(More)”.
Article by Yuen Yuen Ang: “Every September, world leaders gather in New York City for the United Nations General Assembly. They come weighed down by climate disasters, widening inequality, democratic erosion, trade wars, and threats to multilateralism. They leave with heavier burdens than when they arrived.
We now have a fashionable word for this convergence of problems: polycrisis. Since the Columbia historian Adam Tooze popularized it at the World Economic Forum in 2023, it has become the apocalyptic buzzword of the decade. Tooze himself was disarmingly frank in admitting that he was only giving fear a name.
As Tooze put it: “What the polycrisis concept says is: Relax, this is actually the condition of our current moment. I think that’s useful, giving the sense a name. It’s therapeutic. Here is your fear, here is something that fundamentally distresses you. That is what it might be called.”
Therapy, maybe. Diagnosis, no. Solutions, zero. Yet “polycrisis” has caught on worldwide.
Why? Because it is comfortable. Polycrisis is a descriptor that the establishment can agree on without challenging itself. It abstracts the causes of crises, making them appear as natural convergences rather than the systemic outcomes of extractive and exclusionary orders. And it makes the concept appear global when in fact the voices, experiences, and priorities it reflects are overwhelmingly Eurocentric.
The virality of polycrisis reveals something deeper: the enduring power of elite discourse. Even though the term is empty, its followers amplify it—and the echo reinforces paralysis. If leaders remain content with only naming fear, they will consign themselves to irrelevance.
I see things differently. I call this moment a polytunity—a term I coined in 2024 to reframe disruption not as paralysis but as a once-in-a-generation opportunity for deep transformation. Transformation not only of our institutions, but of our ideas, our paradigm, and the way we think…(More)”.
Book by Eduardo Albrecht: “Governments now routinely use AI-based software to gather information about citizens and determine the level of privacy a person can enjoy, how far they can travel, what public benefits they may receive, and what they can and cannot say publicly. What input do citizens have in how these machines think?
In Political Automation, Eduardo Albrecht explores this question in various domains, including policing, national security, and international peacekeeping. Drawing upon interviews with rights activists, Albrecht examines popular attempts to interact with this novel form of algorithmic governance so far. He then proposes the idea of a Third House, a virtual chamber that legislates exclusively on AI in government decision-making and is based on principles of direct democracy, unlike the existing, representative upper and lower houses. Digital citizens, AI-powered replicas of ourselves, would act as our personal emissaries to this Third House. An in-depth look at how political automation impacts the lives of citizens, this book addresses the challenges at the heart of automation in public policy decision-making and offers a way forward…(More)”.
UNICEF Report: “Young people across the Pacific Islands bring creativity, skills, and insights that can drive social, economic, and political development. Yet research conducted by UNICEF in eight Pacific countries – Fiji, the Federated States of Micronesia, Kiribati, Palau, Samoa, Solomon Islands, Tonga and Vanuatu – shows that many youth have limited opportunities to meaningfully participate in decision-making processes at community and national levels.
The study mapped youth networks and initiatives, while assessing enabling environments for participation. It engaged more than 1,300 stakeholders through interviews, focus group discussions, and surveys, revealing both opportunities and barriers. While youth are often active in church and community groups, their involvement rarely translates into real influence. Marginalized groups, including young women and youth with disabilities, face even greater challenges.
Some promising practices were identified, such as youth groups initiating local projects and governments involving young people in policy consultations. National Youth Councils also play a critical role, although their effectiveness varies depending on resources and government support. NGOs and youth-led initiatives emerged as strong drivers of participation, highlighting the energy and innovation that young people bring when given the space.
The research concludes that meaningful youth participation requires more than ad hoc engagement. It calls for stronger legal and policy frameworks, sustained investment, and platforms that empower young people with the knowledge, skills, and confidence to influence decisions. Key recommendations include resourcing Youth Councils, supporting youth-led initiatives, ensuring representation of marginalized groups, and fostering leadership opportunities…(More)”.
Article by Elsevier: “… An understanding of how UK research and innovation supports the Government’s five missions would encompass many dimensions, such as the people, infrastructure and resources of the research system, as well as its research output.
A new methodology has been developed to map the UK’s research publications to the Government’s five missions. This provides an approach to understanding one dimension of this question and offers insights into others.
Elsevier has created this AI-powered methodology using Large Language Models to classify research papers into mission areas. The methodology is still in development, and we welcome your feedback to help improve it.
For the first time, this approach enables research outputs to be mapped at scale to narrative descriptions of policy priorities, and it has the potential to be applied to other government or policy priorities. This analysis was produced by classifying 20 million articles published between 2019 and 2023…(More)”.
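The excerpt does not spell out the pipeline, but the basic move, asking an LLM to assign each paper to one of the five mission areas, can be sketched as follows. This is a minimal illustration under loose assumptions, not Elsevier’s implementation: the mission labels are paraphrased, and the model name, prompt, and `classify_abstract` function are hypothetical placeholders.

```python
# Illustrative sketch only: not Elsevier's methodology. Mission labels are
# paraphrased, and the model, prompt, and function names are hypothetical.
from openai import OpenAI

MISSIONS = [
    "Kickstart economic growth",
    "Make Britain a clean energy superpower",
    "Take back our streets",
    "Break down barriers to opportunity",
    "Build an NHS fit for the future",
]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_abstract(abstract: str) -> str:
    """Ask an LLM which mission (if any) a paper's abstract maps to."""
    prompt = (
        "Classify the research abstract below against the UK Government's five "
        "missions. Answer with exactly one mission from this list, or 'None':\n"
        + "\n".join(f"- {m}" for m in MISSIONS)
        + f"\n\nAbstract:\n{abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the labelling as repeatable as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_abstract(
        "We evaluate heat-pump adoption incentives and their effect on "
        "household decarbonisation in the United Kingdom."
    ))
```

At the scale described above (20 million articles), a real pipeline would also need batching, caching, and systematic evaluation of label quality against human judgment, none of which is shown here.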

About: “…is a decentralized artist-owned archive…Our mission is to cooperatively maintain artworks in perpetuity, ensuring their preservation and access across generations. We manage and grow the value of digital assets by leveraging decentralized storage and encryption. Backed by a network of care, our model offers a new approach to media art valuation, conservation and governance…
Built on a foundation of more than a decade of mutual support and value exchange, our governance model offers a bold proposition of distributive justice in contemporary art. The model allows artists to own the value of their work and grow the equity of their studio practice through sustainable care…(More)”.
Paper by Aileen Nielsen, Chelse Swoopes and Elena Glassman: “As large language models (LLMs) enter judicial workflows, courts face mounting risks of uncritical reliance, conceptual brittleness, and procedural opacity in the unguided use of these tools. Jurists’ early ventures have attracted both praise and scrutiny, yet they have unfolded without critical attention to the role of interface design. This Essay argues that interface design is not a neutral conduit but rather a critical variable in shaping how judges can and will interact with LLM-generated content. Using Judge Newsom’s recent concurrences in Snell and Deleon as case studies, we show how more thoughtfully designed, AI-resilient interfaces could have mitigated problems of opacity, reproducibility, and conceptual brittleness identified in his explorative LLM-informed adjudication.
We offer a course correction on the legal community’s uncritical acceptance of the chat interface for LLM-assisted work. Proprietary consumer-facing chat interfaces are deeply problematic when used for adjudication. Such interfaces obscure the underlying stochasticity of model outputs and fail to support critical engagement with such outputs. In contrast, we describe existing, open-source interfaces designed to support reproducible workflows, enhance user awareness of LLM limitations, and preserve interpretive agency. Such tools could encourage judges to scrutinize LLM outputs, in part by offering affordances for scaling, archiving, and visualizing LLM outputs that are lacking in proprietary chat interfaces. We particularly caution against the uncritical use of LLMs in “hard cases,” where human uncertainty may perversely increase reliance on AI tools just when those tools may be more likely to fail.
Beyond critique, we chart a path forward by articulating a broader vision for AI-resilient law: a system for incorporating LLMs into law that would support judicial transparency, improve efficiency without compromising legitimacy, and open new possibilities for LLM-augmented legal reading and writing. Interface design is essential to legal AI governance. By foregrounding the design of human-AI interactions, this work proposes to reorient the legal community toward a more principled and truly generative approach to integrating LLMs into legal practice…(More)”.
Paper by Susan Ariel Aaronson: “Following decades of US government-led initiatives to open data and to encourage data flows, it came as a shock to many US trade partners when the Biden administration announced in October 2023 that it would withdraw its support for proposals to encourage cross-border free flow of data being discussed at the World Trade Organization. US policy makers questioned whether supporting these proposals was still in the country’s national interest. The United States followed through on its concerns by issuing executive orders to restrict the sale and transfer of various types of data to China and several other adversary nations. This paper examines how and why the United States became increasingly concerned about the national security risks of cross-border free flow of data and the impact of such restrictions on data…(More)”.
Article by Stefaan Verhulst: “At the turn of the 20th century, Andrew Carnegie was one of the richest men in the world. He was also one of the most reviled, infamous for the harsh labor conditions and occasional violence at his steel mills. Determined to rehabilitate his reputation, Carnegie embarked upon a number of ambitious philanthropic ventures that would redefine his legacy, and leave a lasting impact on the United States and the world.
Among the most ambitious of these were the Carnegie Libraries. Between 1883 and 1929, Carnegie spent almost $60 million (equivalent to around $2.3 billion today) to build a network of 2,509 libraries globally — 1,689 in the United States and the rest in places as diverse as Australia, Fiji, South Africa, and his native Scotland. Carnegie supported these libraries for a number of reasons: to burnish his own reputation and because he thought they would help immigrants integrate into the US, but most of all because he was “dedicated to the diffusion of knowledge.” For Carnegie, greater knowledge was key to fostering all manner of social goods — everything from a healthier democracy to more innovation and better health. Today, many of those libraries still stand in communities across the country, a testament to the lasting impact of Carnegie’s generosity.
The story of Carnegie’s libraries may seem a happy tale from the past, a quaint period piece. But it has resonance in the present.
Today, we are once again presented with a landscape in which information is both abundant and scarce, offering tremendous potential for the public good yet largely accessible and reusable only by a small (corporate) minority. This paradox stems from the fact that while more and more aspects of our lives are captured in digital form, the resulting data is increasingly locked away or otherwise inaccessible.
The centrality of data to public life is now undeniable, particularly with the rise of generative artificial intelligence, which relies on vast troves of high-quality, diverse, and timely datasets. Yet access to such data is being steadily eroded as governments, corporations, and institutions impose new restrictions on what can be accessed and reused. In some cases, open data portals and official statistics once celebrated as milestones of transparency have been defunded or scaled back, with fewer datasets published and those that remain limited to low-risk, non-sensitive material. At the same time, private platforms that once offered public APIs for research — such as Twitter (now X), Meta and Reddit — have closed or heavily monetized access, cutting off academics, civil society groups, and smaller enterprises from vital resources.
The drivers of this shift are varied but interlinked. The rise of generative AI has triggered what some call “generative AI-nxiety,” prompting news organizations, academic institutions, and other data custodians to block crawlers and restrict even non-sensitive repositories, often in (understandable) reaction to unconsented scraping for commercial model training. This is compounded by a broader research data lockdown, in which critical resources such as social media datasets used to study misinformation, political discourse, or mental health, and open environmental data essential for climate modeling, are increasingly subject to paywalls, restrictive licensing, or geopolitical disputes.
Rising calls for digital sovereignty have also led to a proliferation of data localization laws that prevent cross-border flows, undermining collaborative efforts on urgent global challenges like pandemic preparedness, disaster response, and environmental monitoring. Meanwhile, in the private sector, data is increasingly treated as a proprietary asset to be hoarded or sold, rather than a shared resource that can be stewarded responsibly for mutual benefit.
Indeed, we may be entering a new “data winter,” one marked by the emergence of new silos and gatekeepers and by a relentless — and socially corrosive — erosion of the open, interoperable data infrastructures that once seemed to hold so much promise.
This narrowing of the data commons comes precisely at a moment when global challenges demand greater openness, collaboration, and trust. Left unchecked, it risks stalling scientific breakthroughs, weakening evidence-based policymaking, deepening inequities in access to knowledge, and entrenching power in the hands of a few large actors, reshaping not only innovation but our collective capacity to understand and respond to the world.
A Carnegie commitment to the “diffusion of knowledge,” updated for the digital age, can help avert this dire situation. Building modern data libraries that embed principles of the commons could restore openness while safeguarding privacy and security. Without such action, the promise of our data-rich era may curdle into a new form of information scarcity, with deep and lasting societal costs…(More)”.