Stefaan Verhulst
Chapter by Renée Sieber, Ana Brandusescu, and Jonathan van Geuns: “…draws on examples of governance challenges from the AI in Canadian Municipalities Community of Practice to examine how municipalities navigate artificial intelligence adoption, balance in-house development and outsourcing, and face a gap in public participation. It presents four recommendations to strengthen local AI governance: iterative adoption, stronger collaboration, deeper debate on social impacts, and more civic involvement…(More)”.
Article by Mike Kuiken: “…This matters beyond accounting arcana because we’re entering an era where data isn’t just valuable — it’s the essential feedstock for AI. Shouldn’t we be able to measure it?
The government dimension makes this even more urgent. Federal agencies sit on extraordinary data holdings: agricultural yields, geological surveys, anonymised health research. A valuation framework could actually strengthen privacy by forcing explicit accounting for data’s worth and clearer protocols for its protection. Right now, federal data policy is a patchwork of inconsistent practices precisely because we have no systematic way to understand what we’re protecting or why.
Assets that aren’t valued aren’t protected. The Office of Personnel Management breach in 2015 compromised the security clearance records of 21.5mn Americans. We’ve solved harder problems before: governments have auctioned the electromagnetic spectrum for decades — rights to invisible frequencies that drive billions in economic value — because we decided it mattered enough to measure.
None of this requires adopting China’s approach wholesale. Beijing’s data exchanges serve state priorities. American capital markets demand more rigour. But the fact that China is experimenting while America refuses to engage with the question at all reveals something about strategic intent versus strategic indifference.
The Financial Accounting Standards Board should initiate a project to develop data asset recognition standards. The Securities and Exchange Commission should study disclosure requirements for material data holdings. Congress should mandate that federal agencies assess the value of their data assets. State and local governments should do the same…(More)”.
Paper by Paolo Andrich et al: “Accurate and timely population data are essential for disaster response and humanitarian planning, but traditional censuses often cannot capture rapid demographic changes. Social media data offer a promising alternative for dynamic population monitoring, but their representativeness remains poorly understood and stringent privacy requirements limit their reliability. Here, we address these limitations in the context of the Philippines by calibrating Facebook user counts with the country’s 2020 census figures. First, we find that differential privacy techniques commonly applied to social media-based population datasets disproportionately mask low-population areas. To address this, we propose a Bayesian imputation approach to recover missing values, restoring data coverage for 5.5% of rural areas. Further, using the imputed social media data and leveraging predictors such as urbanisation level, demographic composition, and socio-economic status, we develop a statistical model for the proportion of Facebook users in each municipality, which links observed Facebook user numbers to the true population levels. Out-of-sample validation demonstrates strong result generalisability, with errors as low as ≈18% and ≈24% for urban and rural Facebook user proportions, respectively. We further demonstrate that accounting for overdispersion and spatial correlations in the data is crucial to obtain accurate estimates and appropriate credible intervals. Crucially, as predictors change over time, the models can be used to regularly update the population predictions, providing a dynamic complement to census-based estimates. These results have direct implications for humanitarian response in disaster-prone regions and offer a general framework for using biased social media signals to generate reliable and timely population data…(More)”.
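As a rough illustration of the kind of calibration model the abstract above describes (not the authors' implementation, which additionally handles overdispersion and spatial correlation), one could regress the municipal Facebook-user proportion on predictors such as urbanisation via a logistic-binomial model. The sketch below uses simulated data and hypothetical variable names (`population`, `fb_users`, `urbanisation`) with PyMC:

```python
# Minimal, illustrative calibration sketch: link observed Facebook user counts
# to census population through a predictor-driven usage proportion.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_munis = 200
population = rng.integers(2_000, 500_000, n_munis)        # census population per municipality
urbanisation = rng.uniform(0, 1, n_munis)                  # hypothetical predictor
true_p = 1 / (1 + np.exp(-(-1.0 + 2.0 * urbanisation)))    # simulated true usage proportion
fb_users = rng.binomial(population, true_p)                 # "observed" Facebook user counts

with pm.Model():
    alpha = pm.Normal("alpha", 0, 2)
    beta = pm.Normal("beta", 0, 2)
    # Proportion of Facebook users as a function of urbanisation.
    p = pm.Deterministic("p", pm.math.sigmoid(alpha + beta * urbanisation))
    pm.Binomial("obs", n=population, p=p, observed=fb_users)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(idata.posterior["beta"].mean().item())  # posterior mean effect of urbanisation
```

Swapping the Binomial likelihood for a Beta-Binomial would be one straightforward way to capture the overdispersion the paper flags as crucial for well-calibrated credible intervals.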
Article by Fionna Saintraint: “Governance decisions shape every facet of a deliberative mini-public: setting the remit, choosing the experts, identifying how, when and about what people deliberate, and ultimately how they reach and validate their final recommendations. Unsurprisingly, the rapid spread of mini-publics has brought with it a patchwork of process designs, each adapting to different political settings, purposes, and actors. Variations in recruitment, deliberation time, facilitation quality, and information provision reflect the distinct choices of the actors behind each process. However, scholars have observed a gap between DMPs’ normative aspirations and their performance in the face of real-world challenges. For instance, the inclusion of marginalised voices is seen by many as a core tenet of DMPs, yet some scholars have found that in practice forums tend to reflect elite preferences. Growing use of mini-publics by public institutions has heightened concerns about co-optation by commissioning bodies or manipulation of processes to drive towards a particular outcome, ethical challenges that scholars of citizen engagement have long warned against.
If we want to understand how the mandates given to deliberative mini-publics become the procedural choices that characterise them, we must examine those calling the shots, to understand how various interests shape the deliberative process, beyond its deliberative quality. This matters because it helps ensure that those who hold power are answerable for decisions that either gate-keep or redistribute democratic authority. Studying governance helps us identify when participation looks inclusive but in reality does little to shift power, creating spaces where people can air grievances without the expectation of policy change. As well as looking at quality and impact, an assessment of governance needs to be a conversation about integrity…(More)”.
Paper by Bogdan Kulynych, Theresa Stadler, Jean Louis Raisaro, and Carmela Troncoso: “Recent advances in generative modelling have led many to see synthetic data as the go-to solution for a range of problems around data access, scarcity, and under-representation. In this paper, we study three prominent use cases: (1) Sharing synthetic data as a proxy for proprietary datasets to enable statistical analyses while protecting privacy, (2) Augmenting machine learning training sets with synthetic data to improve model performance, and (3) Augmenting datasets with synthetic data to reduce variance in statistical estimation. For each use case, we formalise the problem setting and study, through formal analysis and case studies, under which conditions synthetic data can achieve its intended objectives. We identify fundamental and practical limits that constrain when synthetic data can serve as an effective solution for a particular problem. Our analysis reveals that due to these limits many existing or envisioned use cases of synthetic data are a poor problem fit. Our formalisations and classification of synthetic data use cases enable decision makers to assess whether synthetic data is a suitable approach for their specific data availability problem…(More)”.
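On use case (3) above, a small Monte Carlo check makes the underlying limit concrete (this is an illustration of a well-known statistical point, not the paper's formal analysis): synthetic points drawn from a generator fitted on the same real sample carry no new information about the population, so padding the sample does not shrink the estimator's variance.

```python
# Illustrative Monte Carlo check: does augmenting a real sample with synthetic
# data drawn from a model fitted to that same sample reduce the variance of a
# simple mean estimate?
import numpy as np

rng = np.random.default_rng(1)
true_mean, true_sd = 5.0, 2.0
n_real, n_synth, n_trials = 50, 500, 20_000

err_real, err_aug = [], []
for _ in range(n_trials):
    real = rng.normal(true_mean, true_sd, n_real)
    # "Generator" fitted on the real sample, then used to draw synthetic points.
    synth = rng.normal(real.mean(), real.std(ddof=1), n_synth)
    err_real.append(real.mean() - true_mean)
    err_aug.append(np.concatenate([real, synth]).mean() - true_mean)

print("variance, real only:", np.var(err_real))
# Not smaller than real-only: the generator adds nothing beyond the real sample.
print("variance, augmented:", np.var(err_aug))
```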
Article by Stephen Elstub and Oliver Escobar: “This article compares the historical trajectories of democratic innovations across space and time in the UK by analysing the development and impact of collaborative governance, participatory budgeting, referendums, and mini-publics. This is an interesting country for longer-term analysis. First, the UK has been considered an inhospitable environment for democratic innovation. Second, it has experienced asymmetrical decentralisation of legislative and executive powers from national to subnational institutions. Third, these changes have taken place during a period of democratic backsliding. We analyse how these dynamics are interrelated by charting the trajectory of four types of democratic innovations in four different countries of the UK (space) from the 1970s to the present (time). We find that, after years of limited democratic innovation, there has been rapid, although geographically asymmetrical, development in recent decades. We argue that the importance of these differences should not be overstated in relation to democratic deepening. We conclude that, to advance democratic innovations in the UK, a constitutional convention is required…(More)”.
Article by Joe Wilkins: “The machines aren’t just coming for your jobs. Now, they want your bodies as well.
That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
When Liteplo launched RentAHuman on Monday, he boasted that he already had over 130 people listed on the platform, including an OnlyFans model and the CEO of an AI startup, a claim that couldn’t be verified. Two days later, the site claimed over 73,000 rentable meatwads, though only 83 profiles were visible to us on its “browse humans” tab, Liteplo included.
The pitch is simple: “robots need your body.” For humans, it’s as simple as making a profile, advertising skills and location, and setting an hourly rate. Then AI agents — autonomous taskbots ostensibly employed by humans — contract these humans out, depending on the tasks they need to get done. The humans “do the thing,” taking instructions from the AI bot and submitting proof of completion, and are then paid in crypto, namely “stablecoins or other methods,” per the website.
With so many AI agents slithering around the web these days, those tasks could be just about anything. From package pickups and shopping to product testing and event attendance, Liteplo is banking on there being enough demand from AI agents to create a robust gig-work ecosystem…(More)”.
Paper by Alberto Bitonti: “Debates on lobbying regulation have focused overwhelmingly on transparency, yet disclosure alone does little to address the deeper democratic challenges of unequal power, narrow representation and public distrust. This article argues that lobbying regulation should be designed not only to make influence visible, but also to make it fairer and more deliberative. Drawing on deliberative democracy, this article develops the concept of an open lobby democracy, proposing three institutional solutions: a register of interested parties to map the full range of stakeholders, a digital deliberative platform to structure exchanges between groups and policy makers and a policy footprint to document and justify decisions in light of prior deliberation. This framework preserves policy makers’ ultimate authority while ensuring more accountable, reasoned and legitimate decisions. By reframing lobbying regulation as a tool for deliberative renewal, this article contributes to ongoing debates on how to mend democracy in times of distrust and complex policy-making challenges…(More)”.
Paper by Laura Mai & Joshua Philipp Elsässer: “Data play a central role in climate law and governance. They inform decision-making and arise from governance mechanisms, such as reporting and disclosure requirements. Beyond supporting climate law and governance, however, data, in a very real sense, do governing work: they constitute and restructure relations between actors, create and sustain forms of authority, disrupt modes of claiming legitimacy, and ultimately, purport to render the climate governable. Working across legal scholarship, international relations, science and technology studies, and critical data studies, we identify, describe, and analyse four functions of data in climate law and governance: meaning-making, orchestration, engagement, and transparency. Linking these functions to political programme (policy), structure (polity), and process (politics), we uncover the multiple ways in which data are not neutral or apolitical ‘inputs’ into climate law and governance. Rather, drawing on current examples from governance practice, we show how data shape what is to be governed, what it means to govern, how governance is done, and for whom…(More)”.
Article by Christopher Mims: “If social media were a literal ecosystem, it would be about as healthy as Cleveland’s Cuyahoga River in the 1960s—when it was so polluted it repeatedly caught fire.
Those conflagrations inspired the creation of the Environmental Protection Agency and the passage of the Clean Water Act. But in 2026, nothing comparable exists for our befouled media landscape.
Which means it’s up to us, as individuals, to stop ingesting the pink slime of AI slop, the forever chemicals of outrage bait and the microplastics of misinformation-for-profit. In an age in which information on the internet is so abundant and so low-quality that it’s essentially noise, job number one is to fight our evolutionary instinct to absorb all available information, and instead filter out unreliable sources and bad data.
Fortunately, there’s a way: critical ignoring.
“It’s not total ignoring,” says Sam Wineburg, who coined the term in 2021. “It’s ignoring after you’ve checked out some initial signals. We think of it as constant vigilance over our own vulnerability.”
Critical ignoring was born of research that Wineburg, an emeritus professor of education at Stanford University, and others did on how the skills of professional fact-checkers could be taught to young people in school. Kids and adults alike need the ability to quickly evaluate the truth of a statement and the reliability of its source, they argued. Since then, the term has taken on a life of its own. It’s become an umbrella for a whole set of skills, some of which might seem counterintuitive.
Here’s the quick-and-dirty on how to start practicing critical ignoring in the year ahead…(More)”.