Stefaan Verhulst
Chapter by Renée Sieber, Ana Brandusescu, and Jonathan van Geuns: “…draws on examples of governance challenges from the AI in Canadian Municipalities Community of Practice to examine how municipalities navigate artificial intelligence adoption, balance in-house development and outsourcing, and face a gap in public participation. It presents four recommendations to strengthen local AI governance: iterative adoption, stronger collaboration, deeper debate on social impacts, and more civic involvement…(More)”.
Article by Mike Kuiken: “…This matters beyond accounting arcana because we’re entering an era where data isn’t just valuable — it’s the essential feedstock for AI. Shouldn’t we be able to measure it?
The government dimension makes this even more urgent. Federal agencies sit on extraordinary data holdings: agricultural yields, geological surveys, anonymised health research. A valuation framework could actually strengthen privacy by forcing explicit accounting for data’s worth and clearer protocols for its protection. Right now, federal data policy is a patchwork of inconsistent practices precisely because we have no systematic way to understand what we’re protecting or why.
Assets that aren’t valued aren’t protected. The Office of Personnel Management breach in 2015 compromised the security clearance records of 21.5mn Americans. We’ve solved harder problems before: governments have auctioned the electromagnetic spectrum for decades — rights to invisible frequencies that drive billions in economic value — because we decided it mattered enough to measure.
None of this requires adopting China’s approach wholesale. Beijing’s data exchanges serve state priorities. American capital markets demand more rigour. But the fact that China is experimenting while America refuses to engage with the question at all reveals something about strategic intent versus strategic indifference.
The Financial Accounting Standards Board should initiate a project to develop data asset recognition standards. The Securities and Exchange Commission should study disclosure requirements for material data holdings. Congress should mandate that federal agencies assess the value of their data assets. State and local governments should do the same…(More)”.
Book by Tom Griffiths: “Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with concepts needed to understand the world inside us—the Laws of Thought. While the story of how mathematics has been used to reveal the mysteries of the universe is familiar, the story of how it has been used to study the mind is not.
There is no one better to tell that story than Tom Griffiths, the head of Princeton’s AI Lab and a renowned expert in the field of cognitive science. In this groundbreaking book, he explains the three major approaches to formalizing thought—rules and symbols, neural networks, and probability and statistics—introducing each idea through the stories of the people behind it. As informed conversations about thought, language, and learning become ever more pressing in the age of AI, The Laws of Thought is an essential read for anyone interested in the future of technology…(More)”.
Paper by Paolo Andrich et al: “Accurate and timely population data are essential for disaster response and humanitarian planning, but traditional censuses often cannot capture rapid demographic changes. Social media data offer a promising alternative for dynamic population monitoring, but their representativeness remains poorly understood and stringent privacy requirements limit their reliability. Here, we address these limitations in the context of the Philippines by calibrating Facebook user counts with the country’s 2020 census figures. First, we find that differential privacy techniques commonly applied to social media-based population datasets disproportionately mask low-population areas. To address this, we propose a Bayesian imputation approach to recover missing values, restoring data coverage for 5.5% of rural areas. Further, using the imputed social media data and leveraging predictors such as urbanisation level, demographic composition, and socio-economic status, we develop a statistical model for the proportion of Facebook users in each municipality, which links observed Facebook user numbers to the true population levels. Out-of-sample validation demonstrates strong result generalisability, with errors as low as ≈18% and ≈24% for urban and rural Facebook user proportions, respectively. We further demonstrate that accounting for overdispersion and spatial correlations in the data is crucial to obtain accurate estimates and appropriate credible intervals. Crucially, as predictors change over time, the models can be used to regularly update the population predictions, providing a dynamic complement to census-based estimates. These results have direct implications for humanitarian response in disaster-prone regions and offer a general framework for using biased social media signals to generate reliable and timely population data…(More)”.
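To make the modelling strategy concrete, here is a minimal sketch (not the authors' code) of a covariate-driven, overdispersed model linking observed Facebook user counts to census population: a beta-binomial regression written with PyMC, using hypothetical toy data in place of the Philippine municipalities. It omits the paper's differential-privacy imputation step and spatial-correlation terms, so it should be read as an illustration of the general approach rather than a reproduction of it.

```python
# Illustrative sketch only (not the paper's code): a beta-binomial regression
# relating observed Facebook user counts to true municipal population via a
# covariate-driven user proportion. The paper additionally handles differential
# privacy masking (via Bayesian imputation) and spatial correlation.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Toy data standing in for municipalities: census population and covariates
# (e.g. urbanisation level, demographic composition, socio-economic status),
# standardised. All values here are synthetic placeholders.
n_muni = 200
population = rng.integers(2_000, 500_000, size=n_muni)
X = rng.normal(size=(n_muni, 3))                     # hypothetical covariates
true_p = 1 / (1 + np.exp(-(-1.0 + X @ np.array([0.8, -0.3, 0.5]))))
observed_users = rng.binomial(population, true_p)    # "Facebook user counts"

with pm.Model() as model:
    # Regression of the logit user proportion on the covariates.
    alpha = pm.Normal("alpha", mu=0, sigma=2)
    beta = pm.Normal("beta", mu=0, sigma=1, shape=X.shape[1])
    p = pm.Deterministic("p", pm.math.invlogit(alpha + pm.math.dot(X, beta)))

    # Overdispersion: a beta-binomial likelihood instead of a plain binomial.
    kappa = pm.HalfNormal("kappa", sigma=50)
    pm.BetaBinomial(
        "users",
        alpha=p * kappa,
        beta=(1 - p) * kappa,
        n=population,
        observed=observed_users,
    )

    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# Posterior user proportions can then be inverted to update population
# estimates as covariates (and Facebook counts) change over time.
print(idata.posterior["p"].mean(dim=("chain", "draw")).values[:5])
```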
Paper by Bogdan Kulynych, Theresa Stadler, Jean Louis Raisaro, and Carmela Troncoso: “Recent advances in generative modelling have led many to see synthetic data as the go-to solution for a range of problems around data access, scarcity, and under-representation. In this paper, we study three prominent use cases: (1) Sharing synthetic data as a proxy for proprietary datasets to enable statistical analyses while protecting privacy, (2) Augmenting machine learning training sets with synthetic data to improve model performance, and (3) Augmenting datasets with synthetic data to reduce variance in statistical estimation. For each use case, we formalise the problem setting and study, through formal analysis and case studies, under which conditions synthetic data can achieve its intended objectives. We identify fundamental and practical limits that constrain when synthetic data can serve as an effective solution for a particular problem. Our analysis reveals that due to these limits many existing or envisioned use cases of synthetic data are a poor problem fit. Our formalisations and classification of synthetic data use cases enable decision makers to assess whether synthetic data is a suitable approach for their specific data availability problem…(More)”.
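The paper's third use case, adding synthetic records to reduce the variance of a statistical estimate, is where the limits are easiest to see. The toy simulation below (ours, not the authors') fits a Gaussian "generative model" to a small real sample, pads the sample with synthetic draws, and compares the error of the resulting mean estimate: because the synthetic points only recycle information already in the real data, the augmented estimator is no more accurate.

```python
# Toy illustration (not the paper's analysis) of use case (3): augmenting a
# small real sample with synthetic data drawn from a model fitted to that same
# sample adds no information about the population, so it cannot reduce the
# error of a mean estimate the way extra *real* data would.
import numpy as np

rng = np.random.default_rng(42)
true_mean, true_sd = 10.0, 3.0
n_real, n_synth, n_trials = 50, 5000, 2000

err_real_only, err_augmented = [], []
for _ in range(n_trials):
    real = rng.normal(true_mean, true_sd, size=n_real)

    # "Generative model": a Gaussian fitted to the real sample, then sampled.
    synth = rng.normal(real.mean(), real.std(ddof=1), size=n_synth)

    err_real_only.append((real.mean() - true_mean) ** 2)
    err_augmented.append((np.concatenate([real, synth]).mean() - true_mean) ** 2)

print("MSE, real data only:       ", np.mean(err_real_only))
print("MSE, real + synthetic data:", np.mean(err_augmented))
# The augmented estimator's error is not meaningfully lower (and can be worse),
# because the synthetic points merely recycle the real sample's information.
```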
Article by Stephen Elstub and Oliver Escobar: “This article compares the historical trajectories of democratic innovations across space and time in the UK by analysing the development and impact of collaborative governance, participatory budgeting, referendums, and mini-publics. This is an interesting country for longer-term analysis. First, the UK has been considered an inhospitable environment for democratic innovation. Second, it has experienced asymmetrical decentralisation of legislative and executive powers from national to subnational institutions. Third, these changes have taken place during a period of democratic backsliding. We analyse how these dynamics are interrelated by charting the trajectory of four types of democratic innovations in four different countries of the UK (space) from the 1970s to the present (time). We find that, after years of limited democratic innovation, there has been rapid, although geographically asymmetrical, development in recent decades. We argue that the importance of these differences should not be overstated in relation to democratic deepening. We conclude that, to advance democratic innovations in the UK, a constitutional convention is required…(More)”.
Article by Joe Wilkins: “The machines aren’t just coming for your jobs. Now, they want your bodies as well.
That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
When Liteplo launched RentAHuman on Monday, he boasted that he already had over 130 people listed on the platform, including an OnlyFans model and the CEO of an AI startup, a claim which couldn’t be verified. Two days later, the site boasted over 73,000 rentable meatwads, though only 83 profiles were visible to us on its “browse humans” tab, Liteplo included.
The pitch is simple: “robots need your body.” For humans, it’s as simple as making a profile, advertising skills and location, and setting an hourly rate. AI agents — autonomous taskbots ostensibly employed by humans — then contract these humans for whatever tasks they need done. The humans “do the thing,” taking instructions from the AI bot and submitting proof of completion, and are then paid in crypto, namely “stablecoins or other methods,” per the website.
With so many AI agents slithering around the web these days, those tasks could be just about anything. From package pickups and shopping to product testing and event attendance, Liteplo is banking on there being enough demand from AI agents to create a robust gig-work ecosystem…(More)”.
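For readers who find the mechanics hard to picture, here is a purely hypothetical sketch of the loop the article describes: a human lists a profile and rate, an AI agent books a task, the human submits proof, and payment is released. None of the names or fields below come from RentAHuman.ai; they are placeholders for illustration only.

```python
# Hypothetical sketch of the workflow described above; no names or fields here
# come from RentAHuman.ai itself. It simply models the loop the article
# describes: list a human, let an agent book a task, collect proof, pay out.
from dataclasses import dataclass


@dataclass
class HumanProfile:
    name: str
    location: str
    skills: list[str]
    hourly_rate_usd: float


@dataclass
class TaskBooking:
    agent_id: str            # the AI agent requesting physical-world work
    human: HumanProfile
    description: str
    hours: float
    proof_of_completion: str | None = None
    paid: bool = False

    def submit_proof(self, proof: str) -> None:
        self.proof_of_completion = proof

    def pay(self) -> float:
        """Settle a stablecoin-denominated amount once proof is in."""
        if self.proof_of_completion is None:
            raise ValueError("No proof of completion submitted yet.")
        self.paid = True
        return self.hours * self.human.hourly_rate_usd


# Example: an agent books a package pickup from a listed human.
human = HumanProfile("Alex", "Brooklyn, NY", ["package pickup", "shopping"], 30.0)
booking = TaskBooking("agent-123", human, "Pick up a parcel and photograph it", 1.5)
booking.submit_proof("photo_of_parcel.jpg")
print(booking.pay())  # 45.0
```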
Paper by Alberto Bitonti: “Debates on lobbying regulation have focused overwhelmingly on transparency, yet disclosure alone does little to address the deeper democratic challenges of unequal power, narrow representation and public distrust. This article argues that lobbying regulation should be designed not only to make influence visible, but also to make it fairer and more deliberative. Drawing on deliberative democracy, this article develops the concept of an open lobby democracy, proposing three institutional solutions: a register of interested parties to map the full range of stakeholders, a digital deliberative platform to structure exchanges between groups and policy makers, and a policy footprint to document and justify decisions in light of prior deliberation. This framework preserves policy makers’ ultimate authority while ensuring more accountable, reasoned and legitimate decisions. By reframing lobbying regulation as a tool for deliberative renewal, this article contributes to ongoing debates on how to mend democracy in times of distrust and complex policy-making challenges…(More)”.
Article by Christopher Mims: “If social media were a literal ecosystem, it would be about as healthy as Cleveland’s Cuyahoga River in the 1960s—when it was so polluted it repeatedly caught fire.
Those conflagrations inspired the creation of the Environmental Protection Agency and the passage of the Clean Water Act. But in 2026, nothing comparable exists for our befouled media landscape.
Which means it’s up to us, as individuals, to stop ingesting the pink slime of AI slop, the forever chemicals of outrage bait and the microplastics of misinformation-for-profit. In an age in which information on the internet is so abundant and so low-quality that it’s essentially noise, job number one is to fight our evolutionary instinct to absorb all available information, and instead filter out unreliable sources and bad data.
Fortunately, there’s a way: critical ignoring.
“It’s not total ignoring,” says Sam Wineburg, who coined the term in 2021. “It’s ignoring after you’ve checked out some initial signals. We think of it as constant vigilance over our own vulnerability.”
Critical ignoring was born of research that Wineburg, an emeritus professor of education at Stanford University, and others did on how the skills of professional fact-checkers could be taught to young people in school. Kids and adults alike need the ability to quickly evaluate the truth of a statement and the reliability of its source, they argued. Since then, the term has taken on a life of its own. It’s become an umbrella for a whole set of skills, some of which might seem counterintuitive.
Here’s the quick-and-dirty on how to start practicing critical ignoring in the year ahead…(More)”.
Paper by Woodrow Hartzog and Jessica M. Silbey: “Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies encourage cooperation and stability, while also adapting to changing circumstances. The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo. This happens through the machinations of interpersonal relationships within those institutions, which broaden perspectives and strengthen shared commitment to civic goals.
Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such…(More)”.