
Stefaan Verhulst

Article by Viktor Mayer-Schönberger: “Much is being said about AI governance these days – by experts and pundits, lobbyists, journalists and policymakers. Convictions run high. Fundamental values are said to be at stake. Increasingly, AI governance statutes are passed. But what do we really mean – or ought to mean – when we speak about AI governance? In the West, at least three distinct categories of meaning can be identified.

The first is by far the most popular and it comes in many different variations and flavours. It’s what has been driving recent legislation, such as the European Union AI Act. And perhaps surprisingly, it has quite little to do with artificial intelligence. Its proponents scrutinise output of AI processing, and find the output wanting, for a variety of reasons. …But expecting near perfection from machines while accepting much less from humans does not lead to better outcomes overall. Rather, it keeps us stuck with more flawed, albeit human outputs… Moreover, terms such as ‘fair’ and ‘responsible’, frequently used in such AI governance debates, offer the advantage of vast interpretative flexibility, facilitating their use by many groups in support of their very diverse agendas. These different AI governance voices mean very different things, when they use the same words – and from their vantage point that’s more often a feature than a bug, because it gives them and their cause anchorage in the public debates.

The second flavour of AI governance offers a very different take. By focusing on the current economic landscape of digital and online services, its proponents suggest that AI governance is less novel and rather a continuation of digital and internet governance debates that have been raging for decades (Mueller 2025). They argue that most building blocks of AI have been around for some time – data, processing power and self-learning algorithms – and been utilised quite unevenly in the digital economy, often to the effect that large economic players got larger…

The third flavour of AI governance shifts the focus away from how technology affects fairness or markets, yet again. Instead, the attention is on decision-making. If AI is much about helping humans make better decisions, either by guiding them to the supposedly best choice or by choosing for them, AI governance isn’t so much about technology as about how and to what extent individual decision-making processes are shaped by outside influence. It situates the governance question apart from the specifics of a particular technology and asks: How are others, especially society, shaping and altering individual decision-making processes?…(More)”.

Of forests and trees in AI governance

Essay by Andrew Sorota: “…But a quieter danger lies in wait, one that may ultimately prove more corrosive to the human spirit than any killer robot or bioweapon. The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.

The tragedy is that we are culturally prepared for such abdication. Our political institutions already depend on what might be called a “paradigm of deference,” in which ordinary citizens are invited to voice preferences episodically — through ballots every few years — while day-to-day decisions are made by elected officials, regulators and technical experts.

Many citizens have even come to defer their civic role entirely by abstaining from voting, whether for symbolic meaning or due to sheer apathy. AI slots neatly into this architecture, promising to supercharge the convenience of deferring while further distancing individuals from the levers of power.

Modern representative democracy itself emerged in the 18th century as a solution to the logistical impossibility of assembling the entire citizenry in one place; it scaled the ancient city-state to the continental republic. That solution carried a price: The experience of direct civic agency was replaced by periodic, symbolic acts of consent. Between elections, citizens mostly observe from the sidelines. Legislative committees craft statutes, administrative agencies draft rules, central banks decide the price of money — all with limited direct public involvement.

This arrangement has normalized an expectation that complex questions belong to specialists. In many domains, that reflex is sensible — neurosurgeons really should make neurosurgical calls. But it also primes us to cede judgment even where the stakes are fundamentally moral or distributive. The democratic story we tell ourselves — that sovereignty rests with the people — persists, but the lived reality is an elaborate hierarchy of custodians. Many citizens have internalized that gap as inevitable…(More)”.

Rescuing Democracy From The Quiet Rule Of AI

Paper by Edoardo Loru et al: “Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes…(More)”
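As a rough illustration of the shared procedure the authors describe (every evaluator, model or human, selects criteria, retrieves content, and produces a justified rating), the sketch below shows how such a protocol could be wired up in code. The criteria names and the toy evaluator are illustrative assumptions, not the paper's actual framework.

```python
# A minimal sketch of a shared evaluation protocol: the same three steps
# (select criteria, retrieve content, justify a rating) are applied whether
# the evaluator is an LLM or a non-expert participant. Criteria names and
# the toy evaluator below are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, List, Tuple

CRITERIA = ["sourcing", "transparency", "tone", "track_record"]

@dataclass
class Evaluation:
    domain: str
    criteria_used: List[str]
    rating: float          # 0 = not credible, 1 = credible
    justification: str

def evaluate(domain: str,
             select_criteria: Callable[[str], List[str]],
             retrieve: Callable[[str], str],
             judge: Callable[[str, List[str], str], Tuple[float, str]]) -> Evaluation:
    """Run the shared procedure for any evaluator, human or model."""
    criteria = select_criteria(domain)              # step 1: choose criteria
    content = retrieve(domain)                      # step 2: inspect the source
    rating, why = judge(domain, criteria, content)  # step 3: rate and justify
    return Evaluation(domain, criteria, rating, why)

if __name__ == "__main__":
    # Toy rule-based "evaluator" standing in for a model or a participant.
    result = evaluate(
        "example-news.org",
        select_criteria=lambda d: ["sourcing", "transparency"],
        retrieve=lambda d: "Articles cite named sources and publish corrections.",
        judge=lambda d, c, text: (0.8, "Cites sources; corrections policy is visible."),
    )
    print(result)
```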

The simulation of judgment in LLMs

Article by Samuel Greengard: “Urban planning has always focused on improving the way people, spaces, and objects interact. Yet, translating these complex dynamics into a livable environment is remarkably difficult. Seemingly small differences in design can unleash profound impacts on the people who live in a city.

To better navigate this complexity, planners increasingly are turning to digital technology, including artificial intelligence (AI). While data-driven planning isn’t new, these tools deliver a more sophisticated framework. This evolution, referred to as algorithmic urbanism, blends traditional planning techniques with advanced analytics to address challenges like congestion, health, safety, and quality of life.

“Buildings, streets, trees, and numerous other factors influence how people move about, how economic activity takes place, and how various events unfold,” said Luis Bettencourt, professor of Ecology and Evolution and director of the Mansueto Institute for Urban Innovation at the University of Chicago. “Tools like AI and digital twins spot opportunities to rethink and reinvent things.”

This might include anything from optimizing a network of bicycle lanes to simulating zoning changes and land-use scenarios. It could also incorporate ways to improve recreation, congestion, and energy use. Yet, like other forms of AI, algorithmic urbanism introduces risks, including the potential for perpetuating historical data biases, misuse or abuse of data, and concealing how decisions take place.

The idea of using data and algorithms to design better cities extends back to the 1970s. That’s when computing tools like geographic information systems and business intelligence began to extract insights from data—and to provide more precise methods for managing urban growth.

Satellite imagery, vast databases, and environmental sensors followed. “The technology emerged as a valuable tool for strategic planning,” said Rob Kitchin, Professor of Human Geography at Maynooth University in Ireland. “It allowed planners to run detailed simulations and better understand scenarios, such as if you add a shopping mall, how will it impact traffic flow, congestion, and surrounding infrastructure.”…(More)”

Can AI Help Build Cities Better?

Time Magazine: “The Best Inventions of 2025…Rescuing historical data helps researchers better understand and model climate change, especially in under-resourced regions. Decades-old records documenting daily precipitation and temperature were often handwritten; working alongside human scientists, MeteoSaver’s software can digitize and transcribe these records into machine-readable formats like spreadsheets, speeding up the process…(More)”.
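The general pattern is familiar from other digitisation efforts: OCR a scanned record sheet, write the recognised values to a spreadsheet-friendly file, and flag low-confidence cells for a human to check. The sketch below uses off-the-shelf OCR to show that pattern; the file names and confidence threshold are assumptions, and it is not MeteoSaver's actual pipeline.

```python
# A rough sketch (not MeteoSaver's code) of digitising a scanned daily-record
# sheet: OCR the page, write values to CSV, and flag low-confidence cells so
# a human scientist can review them. Requires the Tesseract OCR engine.
import csv
from PIL import Image
import pytesseract

CONFIDENCE_FLOOR = 60  # assumed threshold for routing a cell to human review

def transcribe_sheet(image_path: str, out_csv: str) -> None:
    data = pytesseract.image_to_data(
        Image.open(image_path), output_type=pytesseract.Output.DICT
    )
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["text", "confidence", "needs_human_review"])
        for text, conf in zip(data["text"], data["conf"]):
            if not text.strip():
                continue
            conf = float(conf)
            writer.writerow([text, conf, conf < CONFIDENCE_FLOOR])

if __name__ == "__main__":
    # Hypothetical example file name.
    transcribe_sheet("1954_precipitation_log.png", "1954_precipitation_log.csv")
```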

MeteoSaver

Paper by Sheikh Kamran Abid et al: “As disasters become more frequent and complex, the integration of artificial intelligence (AI) with crowdsourced data from social media is emerging as a powerful approach to enhance disaster management and community resilience. This study investigates the potential of AI-enhanced crowdsourcing to improve emergency preparedness and response. A systematic review was conducted using both qualitative and quantitative methodologies, guided by the PRISMA framework, to identify and evaluate relevant literature. The findings reveal that AI systems can effectively process real-time social media data to deliver timely alerts, coordinate emergency actions, and engage communities. Key themes explored include the effectiveness of community participation, AI’s capacity to manage large-scale information flows, and the challenges posed by misinformation, data privacy, and infrastructural limitations. The results suggest that when strategically implemented, AI-enhanced crowdsourcing can play a critical role in building adaptive and sustainable disaster management frameworks. The paper concludes with practical and policy-level recommendations for integrating these technologies into Pakistan’s disaster management systems…(More)”.
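To make the "real-time social media data to deliver timely alerts" idea concrete, here is a deliberately simplified sketch of that kind of monitoring loop: score incoming posts for disaster relevance and raise an alert once several reports come from the same area. The keyword model, threshold, and example posts are placeholders, not the system reviewed in the paper.

```python
# Simplified, hypothetical monitoring loop: flag disaster-relevant posts and
# alert once an area accumulates multiple independent reports. Keywords,
# threshold, and data are illustrative placeholders only.
from collections import Counter
from typing import Dict, Iterable

DISASTER_TERMS = {"flood", "flooding", "earthquake", "landslide", "trapped", "evacuate"}
ALERT_THRESHOLD = 3  # distinct relevant reports from one area

def relevant(post: Dict) -> bool:
    words = set(post["text"].lower().split())
    return bool(words & DISASTER_TERMS)

def monitor(posts: Iterable[Dict]) -> None:
    reports_per_area = Counter()
    for post in posts:
        if relevant(post):
            reports_per_area[post["area"]] += 1
            if reports_per_area[post["area"]] == ALERT_THRESHOLD:
                print(f"ALERT: {post['area']} has multiple reports; verify and coordinate response")

if __name__ == "__main__":
    monitor([
        {"area": "Karachi-East", "text": "Street flooding near the market"},
        {"area": "Karachi-East", "text": "Water rising, families need to evacuate"},
        {"area": "Karachi-East", "text": "Flooding has trapped people on rooftops"},
    ])
```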

AI-enhanced crowdsourcing for disaster management: strengthening community resilience through social media

Paper by Joseph E. Stiglitz & Maxim Ventura-Bolet: “We develop a tractable model to study how AI and digital platforms impact the information ecosystem. News producers — who create truthful or untruthful content that becomes a public good or bad — earn revenue from consumer visits. Consumers search for information and differ in their ability to distinguish truthful from untruthful information. AI and digital platforms influence the ecosystem by: improving the efficiency of processing and transmission of information, endangering the producer business model, changing the relative cost of producing misinformation and altering the ability of consumers to screen quality. We find that in the absence of adequate regulation (accountability, content moderation, and intellectual property protection) the quality of the information ecosystem may decline, both because the equilibrium quantity of truthful information declines and the share of misinformation increases; and polarization may intensify. While some of these problems are already evident with digital platforms, AI may have different, and overall more adverse, impacts…(More)”.

The Impact of AI and Digital Platforms on the Information Ecosystem

Paper by Jonathan Proctor et al: “Satellite imagery and machine learning (SIML) are increasingly being combined to remotely measure social and environmental outcomes, yet use of this technology has been limited by insufficient understanding of its strengths and weaknesses. Here, we undertake the most extensive effort yet to characterize the potential and limits of using a SIML technology to measure ground conditions. We conduct 115 standardized large-scale experiments using a composite high-resolution optical image of Earth and a generalizable SIML technology to evaluate what can be accurately measured and where this technology struggles. We find that SIML alone predicts roughly half the variation in ground measurements on average, and that variables describing human society (e.g. female literacy, R²=0.55) are generally as easily measured as natural variables (e.g. bird diversity, R²=0.55). Patterns of performance across measured variable type, space, income and population density indicate that SIML can likely support many new applications and decision-making use cases, although within quantifiable limits…(More)”.
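The headline numbers (R² around 0.55) come from regressing ground measurements on image-derived features and scoring out-of-sample fit. The toy version below reproduces that evaluation logic on synthetic data; the feature matrix stands in for featurised satellite tiles and is not the paper's data or model.

```python
# Toy version of the SIML evaluation logic: regress a ground-measured outcome
# on image-derived features and report cross-validated R^2. Synthetic data
# stands in for real satellite features and survey measurements.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_locations, n_features = 2000, 512            # e.g. featurised image tiles
X = rng.normal(size=(n_locations, n_features))
true_weights = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
y = X @ true_weights + rng.normal(scale=1.0, size=n_locations)  # "ground truth"

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 = {r2:.2f}")
```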

What Can Satellite Imagery and Machine Learning Measure?

Paper by Richard Albert and Kevin Frazier: “Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?…(More)”

Should AI Write Your Constitution?

About: “MzansiXchange is a national data exchange initiative to create an integrated data ecosystem that supports effective planning, policymaking, reporting, and service delivery. At its core, it curates, integrates, and makes accessible a wide range of data for the public good.

The initiative addresses key challenges in South Africa’s data landscape, where information is often siloed, fragmented, not interoperable and difficult to access. MzansiXchange addresses these gaps by enabling secure, structured, and coordinated data sharing across government.

The MzansiXchange Pilot will test four key data-sharing themes through carefully selected use cases that demonstrate both immediate value to government operations and broader citizen impact. These use cases span different technical approaches and policy domains, providing comprehensive testing of the platform’s capabilities.

Real-time Data Exchange for Regulation, Compliance & Verification

Real-time data sharing enables immediate verification and compliance checking across government services. Pilot use cases include partnerships with the South African Social Security Agency, National Student Financial Aid Scheme, and Department of Home Affairs to streamline citizen services and reduce administrative burdens through instant data verification.

Bulk Data Exchange for Evidence-based Policy, Planning & Research

Large-scale de-identified data sharing supports informed policymaking and research initiatives. The National Treasury Secure Data Facility will serve as a key use case, enabling researchers and policymakers to access comprehensive datasets whilst maintaining strict security and privacy controls.

Data Exchange for Operational Analytics

Operational data sharing improves government efficiency and service delivery. Use cases include National Treasury Office of the Accountant General and the Office of the Chief Procurement Officer, focusing on cross-departmental analytics that enhance procurement processes and financial management.

Data Exchange for Open Access Data Products

Open data initiatives support effective resource allocation and service planning, promote transparency, and enable broader societal benefits. Pilot use cases include the Spatial Economic Activity Data – South Africa (SEAD-SA) platform and Statistics South Africa, making valuable datasets accessible to researchers, businesses, and civil society…(More)”.
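The first pilot theme above, real-time verification, follows a common pattern: a service answers a specific yes/no question against an authoritative register without releasing the underlying record. The sketch below illustrates that pattern only; the endpoint, field names, and register are hypothetical and do not describe MzansiXchange's actual interfaces.

```python
# Illustrative sketch of a real-time verification service: answer a narrow
# yes/no check against an authoritative register without exposing the record.
# Endpoint, fields, and data are hypothetical. Run with: uvicorn verify_sketch:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="verification-sketch")

# Stand-in for a lookup against an authoritative register (e.g. a civil registry).
REGISTER = {"8001015009087": {"alive": True, "citizen": True}}

class VerificationRequest(BaseModel):
    id_number: str
    check: str   # e.g. "citizen"

@app.post("/verify")
def verify(req: VerificationRequest) -> dict:
    record = REGISTER.get(req.id_number)
    if record is None:
        return {"verified": False, "reason": "no matching record"}
    # Return only the boolean answer; the record itself never leaves the source.
    return {"verified": bool(record.get(req.check, False))}
```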

MzansiXchange 
