Stefaan Verhulst
Article by Stephen Elstub and Oliver Escobar: “This article compares the historical trajectories of democratic innovations across space and time in the UK by analysing the development and impact of collaborative governance, participatory budgeting, referendums, and mini-publics. This is an interesting country for longer-term analysis. First, the UK has been considered an inhospitable environment for democratic innovation. Second, it has experienced asymmetrical decentralisation of legislative and executive powers from national to subnational institutions. Third, these changes have taken place during a period of democratic backsliding. We analyse how these dynamics are interrelated by charting the trajectory of four types of democratic innovations in four different countries of the UK (space) from the 1970s to the present (time). We find that, after years of limited democratic innovation, there has been rapid, although geographically asymmetrical, development in recent decades. We argue that the importance of these differences should not be overstated in relation to democratic deepening. We conclude that, to advance democratic innovations in the UK, a constitutional convention is required…(More)”.
Article by Joe Wilkins: “The machines aren’t just coming for your jobs. Now, they want your bodies as well.
That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
When Liteplo launched RentAHuman on Monday, he boasted that he already had over 130 people listed on the platform, including an OnlyFans model and the CEO of an AI startup, a claim which couldn’t be verified. Two days later, the site boasted over 73,000 rentable meatwads, though only 83 profiles were visible to us on its “browse humans” tab, Liteplo included.
The pitch is simple: “robots need your body.” For humans, it’s as simple as making a profile, advertising skills and location, and setting an hourly rate. Then AI agents — autonomous taskbots ostensibly employed by humans — contract these humans out, depending on the tasks they need to get done. The humans then “do the thing,” taking instructions from the AI bot and submitting proof of completion. The humans are then paid through crypto, namely “stablecoins or other methods,” per the website.
With so many AI agents slithering around the web these days, those tasks could be just about anything. From package pickups and shopping to product testing and event attendance, Liteplo is banking on there being enough demand from AI agents to create a robust gig-work ecosystem…(More)”.
Paper by Alberto Bitonti: “Debates on lobbying regulation have focused overwhelmingly on transparency, yet disclosure alone does little to address the deeper democratic challenges of unequal power, narrow representation and public distrust. This article argues that lobbying regulation should be designed not only to make influence visible, but also to make it fairer and more deliberative. Drawing on deliberative democracy, this article develops the concept of an open lobby democracy, proposing three institutional solutions: a register of interested parties to map the full range of stakeholders, a digital deliberative platform to structure exchanges between groups and policy makers and a policy footprint to document and justify decisions in light of prior deliberation. This framework preserves policy makers’ ultimate authority while ensuring more accountable, reasoned and legitimate decisions. By reframing lobbying regulation as a tool for deliberative renewal, this article contributes to ongoing debates on how to mend democracy in times of distrust and complex policy-making challenges…(More)”.
Paper by Laura Mai & Joshua Philipp Elsässer: “Data play a central role in climate law and governance. They inform decision-making and arise from governance mechanisms, such as reporting and disclosure requirements. Beyond supporting climate law and governance, however, data, in a very real sense, do governing work: they constitute and restructure relations between actors, create and sustain forms of authority, disrupt modes of claiming legitimacy, and ultimately, purport to render the climate governable. Working across legal scholarship, international relations, as well as science and technology and critical data studies, we identify, describe, and analyse four functions of data in climate law and governance: meaning-making, orchestration, engagement, and transparency. Linking these functions to political programme (policy), structure (polity), and process (politics), we uncover the multiple ways in which data are not neutral or apolitical ‘inputs’ into climate law and governance. Rather, drawing on current examples from governance practice, we show how data shape what is to be governed, what it means to govern, how governance is done, and for whom…(More)”.
Article by Christopher Mims: “If social media were a literal ecosystem, it would be about as healthy as Cleveland’s Cuyahoga River in the 1960s—when it was so polluted it repeatedly caught fire.
Those conflagrations inspired the creation of the Environmental Protection Agency and the passage of the Clean Water Act. But in 2026, nothing comparable exists for our befouled media landscape.
Which means it’s up to us, as individuals, to stop ingesting the pink slime of AI slop, the forever chemicals of outrage bait and the microplastics of misinformation-for-profit. In an age in which information on the internet is so abundant and so low-quality that it’s essentially noise, job number one is to fight our evolutionary instinct to absorb all available information, and instead filter out unreliable sources and bad data.
Fortunately, there’s a way: critical ignoring.
“It’s not total ignoring,” says Sam Wineburg, who coined the term in 2021. “It’s ignoring after you’ve checked out some initial signals. We think of it as constant vigilance over our own vulnerability.”
Critical ignoring was born of research that Wineburg, an emeritus professor of education at Stanford University, and others did on how the skills of professional fact-checkers could be taught to young people in school. Kids and adults alike need the ability to quickly evaluate the truth of a statement and the reliability of its source, they argued. Since then, the term has taken on a life of its own. It’s become an umbrella for a whole set of skills, some of which might seem counterintuitive.
Here’s the quick-and-dirty on how to start practicing critical ignoring in the year ahead…(More)”.
Paper by Woodrow Hartzog and Jessica M. Silbey: “Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the machinations of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.
Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such…(More)”.
Article by R. Trebor Scholz & Mark Esposito: “The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. This form of labor has led many to report severe psychological harm, including depression, anxiety, and post-traumatic stress disorder. Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies, but rather symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.
These cases are a stark reminder that when technological systems are designed solely for extraction and efficiency, they isolate and break the people who sustain them. As artificial intelligence (AI) accelerates, we face a similar precipice. Without deliberate intervention, these extractive logics will scale globally, further concentrating power at the top, unless we choose to build a fundamentally different system…(More)”.
Book by Allison Pugh: “With the rapid development of artificial intelligence and labor-saving technologies like self-checkouts and automated factories, the future of work has never been more uncertain, and even jobs requiring high levels of human interaction are no longer safe. The Last Human Job explores the human connections that underlie our work, arguing that what people do for each other in these settings is valuable and worth preserving.
Drawing on in-depth interviews and observations with people in a broad range of professions—from physicians, teachers, and coaches to chaplains, therapists, caregivers, and hairdressers—Allison Pugh develops the concept of “connective labor,” a kind of work that relies on empathy, the spontaneity of human contact, and a mutual recognition of each other’s humanity. The threats to connective labor are not only those posed by advances in AI or apps; Pugh demonstrates how profit-driven campaigns imposing industrial logic shrink the time for workers to connect, enforce new priorities of data and metrics, and introduce standardized practices that hinder our ability to truly see each other. She concludes with profiles of organizations where connective labor thrives, offering practical steps for building a social architecture that works.
Vividly illustrating how connective labor enriches the lives of individuals and binds our communities together, The Last Human Job is a compelling argument for us to recognize, value, and protect humane work in an increasingly automated and disconnected world…(More)”.
Blog by Sarah Hubbard and Darshan Goux: “…Public officials now have a myriad of digital deliberation tools and programs to choose from. Considerations for selecting which tool(s) to use include whether the technology solution is open-source or paid, its data collection and retention policies, the engagement modalities it offers (e.g. video, audio, surveys, written input), as well as the procurement processes, staffing requirements, and the overall objectives or scale of the engagement.
Below are a few examples of technologies being used to support public deliberation processes today:
- Engaged California is an initiative and digital platform that aims to channel input on complex issues from the people directly to leaders in state government. Their first effort, focused on Los Angeles wildfire recovery, turned submitted comments into a policy action plan for the State of California. The project leveraged the Ethelo platform and included multiple rounds of discussion.
- Bowling Green, Kentucky, launched their BG 2050 Project to envision the future of the city. The project leveraged Polis to collect input and cluster areas of consensus, and Google’s Sensemaker to analyze data. They engaged 10% of the Bowling Green population, generated thousands of ideas, and reported in post-surveys that 70% of participants felt more confident that their voice mattered and 83% of participants gained a better understanding of different viewpoints.
- Other platforms facilitate real-time, small-group, guided discussions online and may include automation features to manage speaking time, agendas, and more. The Stanford Online Deliberation Platform, Cortico, and Frankly are all tools that use technology to aid in these deliberative conversations. The Stanford Online Deliberation Platform has been used in more than 40 countries and has hosted over 100,000 hours of deliberation.
- Multi-purpose platforms such as Decidim provide infrastructure to enable everything from participatory budgeting to assemblies. The platform has over three million users and is used by more than 500 organizations around the world.
This is just a small sample of the current ecosystem of tools and their applications. The organization People Powered maintains a larger list of digital participation platforms…(More)”.
A Primer by Adam Zable, Hannah Chafetz, and Stefaan G. Verhulst: “Philanthropic foundations around the world are beginning to experiment with artificial intelligence (AI) to review proposals, stay up-to-date on the latest research, communicate insights to different audiences, and more. However, questions remain around where AI is most valuable across the grantmaking cycle, when it should not be used, and what practices and policies are needed to ensure it is applied responsibly.

To address these questions, DATA4Philanthropy reviewed how AI is being used across the grantmaking cycle. This includes: problem definition, prioritization, strategy development, partner identification, grant management, and evaluation and learning. Drawing on desk research conducted between July and December 2025, the primer highlights several examples where philanthropies are already using AI in their work and how they are incorporating human judgement throughout the process. It concludes with a series of recommendations on how philanthropies might begin experimenting with AI…(More)”.