Stefaan Verhulst
Article by Fionna Saintraint: “Governance decisions shape every facet of a deliberative mini-public: setting the remit, choosing the experts, identifying how, when and about what people deliberate, and ultimately how they reach and validate their final recommendations. Unsurprisingly, the rapid spread of mini-publics has brought with it a patchwork of process designs, each adapting to different political settings, purposes, and actors. Variations in recruitment, deliberation time, facilitation quality, and information provision reflect the distinct choices of the actors behind each process. However, scholars have observed a gap between DMPs’ normative aspirations and their performance in the face of real-world challenges. For instance, the inclusion of marginalised voices is seen by many as a core tenet of DMPs, yet some scholars have found that in practice forums tend to reflect elite preferences. Growing use of mini-publics by public institutions has heightened concerns about co-optation by commissioning bodies or manipulation of processes to drive towards a particular outcome, ethical challenges that scholars of citizen engagement have long warned against.
If we want to understand how the mandates given to deliberative mini-publics become the procedural choices that characterise them, we must examine those calling the shots, to understand how various interests shape the deliberative process beyond its deliberative quality. This matters because it helps ensure that those who hold power are answerable for decisions that either gate-keep or redistribute democratic authority. Studying governance helps us identify when participation looks inclusive but in reality does little to shift power, creating spaces where people can air grievances without the expectation of policy change. As well as looking at quality and impact, an assessment of governance also needs to be a conversation about integrity…(More)”.
Paper by Bogdan Kulynych, Theresa Stadler, Jean Louis Raisaro, and Carmela Troncoso: “Recent advances in generative modelling have led many to see synthetic data as the go-to solution for a range of problems around data access, scarcity, and under-representation. In this paper, we study three prominent use cases: (1) Sharing synthetic data as a proxy for proprietary datasets to enable statistical analyses while protecting privacy, (2) Augmenting machine learning training sets with synthetic data to improve model performance, and (3) Augmenting datasets with synthetic data to reduce variance in statistical estimation. For each use case, we formalise the problem setting and study, through formal analysis and case studies, under which conditions synthetic data can achieve its intended objectives. We identify fundamental and practical limits that constrain when synthetic data can serve as an effective solution for a particular problem. Our analysis reveals that due to these limits many existing or envisioned use cases of synthetic data are a poor problem fit. Our formalisations and classification of synthetic data use cases enable decision makers to assess whether synthetic data is a suitable approach for their specific data availability problem…(More)”.
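The paper's first use case, sharing synthetic data as a privacy-preserving proxy for a proprietary dataset, can be illustrated with a minimal sketch. This is not code from the paper: the Gaussian model below is a hypothetical stand-in for a real generative model, and the datasets are simulated.

```python
# Sketch of use case (1): fit a generative model to a "private" dataset,
# release samples from the model, and check which downstream statistics
# survive the substitution. A toy Gaussian stands in for a real generator.
import random
import statistics

random.seed(0)

# The private dataset that cannot be shared directly.
private = [random.gauss(50.0, 10.0) for _ in range(5000)]

# Fit the toy generative model: estimate mean and standard deviation.
mu = statistics.fmean(private)
sigma = statistics.stdev(private)

# Release synthetic samples drawn from the fitted model instead.
synthetic = [random.gauss(mu, sigma) for _ in range(5000)]

# Statistics the model captures transfer well to the synthetic data...
print(statistics.fmean(private), statistics.fmean(synthetic))

# ...but properties outside the model (here, an extreme value) carry no
# such guarantee, one flavour of the limits the paper formalises.
print(max(private), max(synthetic))
```

Under these assumptions the synthetic mean tracks the private mean closely, while tail statistics can diverge, which echoes the paper's point that synthetic data is only a good problem fit when the generative model captures the quantity the analyst actually needs.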
Article by Stephen Elstub and Oliver Escobar: “This article compares the historical trajectories of democratic innovations across space and time in the UK by analysing the development and impact of collaborative governance, participatory budgeting, referendums, and mini-publics. This is an interesting country for longer-term analysis. First, the UK has been considered an inhospitable environment for democratic innovation. Second, it has experienced asymmetrical decentralisation of legislative and executive powers from national to subnational institutions. Third, these changes have taken place during a period of democratic backsliding. We analyse how these dynamics are interrelated by charting the trajectory of four types of democratic innovations in four different countries of the UK (space) from the 1970s to the present (time). We find that, after years of limited democratic innovation, there has been rapid, although geographically asymmetrical, development in recent decades. We argue that the importance of these differences should not be overstated in relation to democratic deepening. We conclude that, to advance democratic innovations in the UK, a constitutional convention is required…(More)”.
Article by Joe Wilkins: “The machines aren’t just coming for your jobs. Now, they want your bodies as well.
That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
When Liteplo launched RentAHuman on Monday, he boasted that he already had over 130 people listed on the platform, including an OnlyFans model and the CEO of an AI startup, a claim that couldn’t be verified. Two days later, the site boasted over 73,000 rentable meatwads, though only 83 profiles were visible to us on its “browse humans” tab, Liteplo included.
The pitch is simple: “robots need your body.” For humans, it’s as simple as making a profile, advertising skills and location, and setting an hourly rate. Then AI agents — autonomous taskbots ostensibly employed by humans — contract these humans out, depending on the tasks they need to get done. The humans then “do the thing,” taking instructions from the AI bot and submitting proof of completion, before being paid in crypto, namely “stablecoins or other methods,” per the website.
With so many AI agents slithering around the web these days, those tasks could be just about anything. From package pickups and shopping to product testing and event attendance, Liteplo is banking on there being enough demand from AI agents to create a robust gig-work ecosystem…(More)”.
Paper by Alberto Bitonti: “Debates on lobbying regulation have focused overwhelmingly on transparency, yet disclosure alone does little to address the deeper democratic challenges of unequal power, narrow representation and public distrust. This article argues that lobbying regulation should be designed not only to make influence visible, but also to make it fairer and more deliberative. Drawing on deliberative democracy, this article develops the concept of an open lobby democracy, proposing three institutional solutions: a register of interested parties to map the full range of stakeholders, a digital deliberative platform to structure exchanges between groups and policy makers and a policy footprint to document and justify decisions in light of prior deliberation. This framework preserves policy makers’ ultimate authority while ensuring more accountable, reasoned and legitimate decisions. By reframing lobbying regulation as a tool for deliberative renewal, this article contributes to ongoing debates on how to mend democracy in times of distrust and complex policy-making challenges…(More)”.
Paper by Laura Mai & Joshua Philipp Elsässer: “Data play a central role in climate law and governance. They inform decision-making and arise from governance mechanisms, such as reporting and disclosure requirements. Beyond supporting climate law and governance, however, data, in a very real sense, do governing work: they constitute and restructure relations between actors, create and sustain forms of authority, disrupt modes of claiming legitimacy, and ultimately, purport to render the climate governable. Working across legal scholarship, international relations, as well as science and technology and critical data studies, we identify, describe, and analyse four functions of data in climate law and governance: meaning-making, orchestration, engagement, and transparency. Linking these functions to political programme (policy), structure (polity), and process (politics), we uncover the multiple ways in which data are not neutral or apolitical ‘inputs’ into climate law and governance. Rather, drawing on current examples from governance practice, we show how data shape what is to be governed, what it means to govern, how governance is done, and for whom…(More)”.
Article by Christopher Mims: “If social media were a literal ecosystem, it would be about as healthy as Cleveland’s Cuyahoga River in the 1960s—when it was so polluted it repeatedly caught fire.
Those conflagrations inspired the creation of the Environmental Protection Agency and the passage of the Clean Water Act. But in 2026, nothing comparable exists for our befouled media landscape.
Which means it’s up to us, as individuals, to stop ingesting the pink slime of AI slop, the forever chemicals of outrage bait and the microplastics of misinformation-for-profit. In an age in which information on the internet is so abundant and so low-quality that it’s essentially noise, job number one is to fight our evolutionary instinct to absorb all available information, and instead filter out unreliable sources and bad data.
Fortunately, there’s a way: critical ignoring.
“It’s not total ignoring,” says Sam Wineburg, who coined the term in 2021. “It’s ignoring after you’ve checked out some initial signals. We think of it as constant vigilance over our own vulnerability.”
Critical ignoring was born of research that Wineburg, an emeritus professor of education at Stanford University, and others did on how the skills of professional fact-checkers could be taught to young people in school. Kids and adults alike need the ability to quickly evaluate the truth of a statement and the reliability of its source, they argued. Since then, the term has taken on a life of its own. It’s become an umbrella for a whole set of skills, some of which might seem counterintuitive.
Here’s the quick-and-dirty on how to start practicing critical ignoring in the year ahead…(More)”.
Paper by Woodrow Hartzog and Jessica M. Silbey: “Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the machinations of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.
Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such…(More)”.
Article by R. Trebor Scholz & Mark Esposito: “The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. This form of labor has led many to report severe psychological harm, including depression, anxiety, and post-traumatic stress disorder. Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies, but rather symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.
These cases are a stark reminder that when technological systems are designed solely for extraction and efficiency, they isolate and break the people who sustain them. As artificial intelligence (AI) accelerates, we face a similar precipice. Without deliberate intervention, these extractive logics will scale globally, further concentrating power at the top, unless we choose to build a fundamentally different system…(More)”.
Book by Allison Pugh: “With the rapid development of artificial intelligence and labor-saving technologies like self-checkouts and automated factories, the future of work has never been more uncertain, and even jobs requiring high levels of human interaction are no longer safe. The Last Human Job explores the human connections that underlie our work, arguing that what people do for each other in these settings is valuable and worth preserving.
Drawing on in-depth interviews and observations with people in a broad range of professions—from physicians, teachers, and coaches to chaplains, therapists, caregivers, and hairdressers—Allison Pugh develops the concept of “connective labor,” a kind of work that relies on empathy, the spontaneity of human contact, and a mutual recognition of each other’s humanity. The threats to connective labor are not only those posed by advances in AI or apps; Pugh demonstrates how profit-driven campaigns imposing industrial logic shrink the time for workers to connect, enforce new priorities of data and metrics, and introduce standardized practices that hinder our ability to truly see each other. She concludes with profiles of organizations where connective labor thrives, offering practical steps for building a social architecture that works.
Vividly illustrating how connective labor enriches the lives of individuals and binds our communities together, The Last Human Job is a compelling argument for us to recognize, value, and protect humane work in an increasingly automated and disconnected world…(More)”.