Stefaan Verhulst
Paper by Jorrit de Jong et al: “Over the last decades, scholars and practitioners have focused their attention on the use of data for improving public action, with a renewed interest in the emergence of big data and artificial intelligence. The potential of data is particularly salient in cities, where vast amounts of data are being generated from traditional and novel sources. Despite this growing interest, there is a need for a conceptual and operational understanding of the beneficial uses of data. This article presents a comprehensive and precise account of how cities can use data to address problems more effectively, efficiently, equitably, and in a more accountable manner. It does so by synthesizing and augmenting current research with empirical evidence derived from original research and learnings from a program designed to strengthen city governments’ data capacity. The framework can be used to support longitudinal and comparative analyses as well as explore questions such as how different uses of data employed at various levels of maturity can yield disparate outcomes. Practitioners can use the framework to identify and prioritize areas in which building data capacity might further the goals of their teams and organizations…(More)”.
Book by Blaise Agüera y Arcas: “It has come as a shock to some AI researchers that a large neural net that predicts next words seems to produce a system with general intelligence. Yet this is consistent with a long-held view among some neuroscientists that the brain evolved precisely to predict the future—the “predictive brain” hypothesis.
In What Is Intelligence?, Blaise Agüera y Arcas takes up this idea—that prediction is fundamental not only to intelligence and the brain but to life itself—and explores the wide-ranging implications. These include radical new perspectives on the computational properties of living systems, the evolutionary and social origins of intelligence, the relationship between models and reality, entropy and the nature of time, the meaning of free will, the problem of consciousness, and the ethics of machine intelligence.
The book offers a unified picture of intelligence from molecules to organisms, societies, and AI, drawing from a wide array of literature in many fields, including computer science and machine learning, biology, physics, and neuroscience. It also adds recent and novel findings from the author, his research team, and colleagues. Combining technical rigor, deep and up-to-the-minute knowledge of AI development and the natural sciences (especially neuroscience), and philosophical literacy, What Is Intelligence? argues—quite against the grain—that certain modern AI systems do indeed have a claim to intelligence, consciousness, and free will…(More)”.
Open Access Book by Matthijs M. Maas: “The impacts of artificial intelligence (AI) are often framed as an uncontrollable wave of technological change. But AI’s trajectory is not preordained; its governance is a human choice, one that hinges on global institutions that are effective, coherent, and resilient to AI’s own disruptions.
As AI systems grow more powerful, states and international institutions today face mounting pressure to address their impacts. How can they govern this changing technology, in a rapidly changing world, using tools that may themselves be altered by AI? Architectures of Global AI Governance provides the conceptual and practical tools to tackle this question.
Drawing from technology law, global governance scholarship, and history, the book maps AI’s growing global stakes, traces the trajectory of the global AI regime complex, and sets the scaffolding for new institutions. The book argues that, in crafting a global AI governance architecture, we must reckon with three facets of change: sociotechnical changes in AI systems’ uses and impacts; AI-driven changes in the fabric of international law; and political changes in the global AI regime complex. Many AI governance approaches will be too static unless they adapt to these forces…(More)”.
Blog by Hollie Russon Gilman: “The shocking assassination of conservative activist Charlie Kirk on a university campus last week has jolted the nation into confronting the fragility of public discourse—and the limits of free speech when threats of violence loom over political life. In the shadow of this tragedy, we need to invest in and sustain new institutions that support civic voice and protect democratic engagement. Citizen assemblies, a model for engaging residents through civic lottery, are one such vital institution: a way of dignifying speech and reweaving trust in our democracy.
Unlike the adversarial debates that dominate our media and campuses, citizen assemblies bring together randomly selected, demographically representative residents in small groups to deliberate with experts and produce policy recommendations. In this way, they are a living extension of the First Amendment—they don’t just protect the right to speak, they structure more generative spaces for voices to emerge, be heard, and be acted upon. Especially now, when rhetoric risks becoming the dangerous fuel for violence, assemblies offer a path for communities to reclaim ownership over public life.
Globally and domestically, civic assemblies have already addressed climate policy, electoral reform, and land use. Here in the US, assemblies have been used to tackle long-standing local issues: for example, citizen assemblies in Colorado and California convened to determine the future of specific public land sites. Similarly, civic assemblies in Washington State—a method that does not necessarily involve lottery-based sortition—have played a significant role in setting policy agendas around climate issues. In that case, Washington State provided public dollars for climate resilience, which key civil society and grassroots groups leveraged to organize a people’s assembly, intentionally over-sampling for traditionally underrepresented and marginalized communities. These cases underscore how—for those often relegated to the margins—assemblies provide direct access to democracy…(More)”.

Book by Jonathan W. Y. Gray: “Public data shapes what we know and how we live together. It is often digital, freely available and related to matters of shared concern, from global warming graphs to collaborative spreadsheets documenting mass layoffs. It circulates via maps and apps which enable us to discover, report and rate what is around us.
Public Data Cultures explores the practices and cultures of how data is made public in the age of the Internet. Looking beyond familiar narratives of data as a resource to be liberated or protected, this book offers new perspectives on public data as networked cultural material, as a medium of participation and as a site of transnational politics. To better account for how data makes a difference, the book argues for a more expansive conception of what is involved in making data public. In doing so, it focuses not just on removing restrictions but also on caring for the arrangements involved in making data public in ways that grow shared understanding and solidarity in responding to the many intersecting troubles of our times.
Nurturing critical and creative engagements with data, this book is essential reading for students and scholars of media, communications, Internet studies, science and technology studies and digital humanities, as well as artists, designers, engineers, reporters, public sector workers, community organisers and activists working with data…(More)”.
Article by Iason Gabriel, Geoff Keeling, Arianna Manzini & James Evans: “The rise of more-capable AI agents is likely to have far-reaching political, economic and social consequences. On the positive side, they could unlock economic value: the consultancy McKinsey forecasts an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally, once AI agents are widely deployed (see go.nature.com/4qeqemh). They might also serve as powerful research assistants and accelerate scientific discovery.
But AI agents also introduce risks. People need to know who is responsible for agents operating ‘in the wild’, and what happens if they make mistakes. For example, in November 2022, an Air Canada chatbot mistakenly decided to offer a customer a discounted bereavement fare, leading to a legal dispute over whether the airline was bound by the promise. In February 2024, a tribunal ruled that it was — highlighting the liabilities that corporations could face when handing over tasks to AI agents, and the growing need for clear rules around AI responsibility.
Here, we argue for greater engagement by scientists, scholars, engineers and policymakers with the implications of a world increasingly populated by AI agents. We explore key challenges that must be addressed to ensure that interactions between humans and agents — and among agents themselves — remain broadly beneficial…(More)”.
Article by Louai Alarabi: “In today’s world, almost everything is connected – our smartphones and cars link to our homes and the services around us, and at the core of it all is spatial data.
Spatial data is the digital footprint that pinpoints geographic locations. It is among the most valuable types of information for critical applications such as modern logistics, urban planning, climate monitoring, precision agriculture, disaster management and national security.
This sensitive information not only reveals “where” things are but also answers related questions, including “when”, “how”, “who” and “why.” It illuminates hidden patterns, forecasts events and discloses relationships.
But just how is spatial data transforming our world? What are the cybersecurity risks and how do we mitigate them?…(More)”.
Article by Stefaan Verhulst: “This week, during the United Nations (UN) General Assembly, the UN will launch the Global Dialogue on Artificial Intelligence Governance. The symbolism is powerful: for the first time, UN agencies and member states, along with other stakeholders such as civil society and industry, will gather under UN auspices to express their expectations regarding how AI should be governed.
Symbolism won’t cut it. Unless the Dialogue tackles the widening asymmetries in data and AI with concrete, actionable steps, it risks becoming just another high-level talk shop. These divides are not abstract. They determine who has access to knowledge, who sets standards, and who reaps the benefits of AI and other digital technologies. Left unaddressed, they will leave most of the world dependent on technologies—and decisions—shaped elsewhere.
From an AI governance perspective, there are five asymmetries that need particular attention. Each of these will have a crucial bearing on who benefits from AI, who is left behind, and ultimately what role this powerful technology plays in our lives…(More)”.
Report by the Open Government Partnership: “As local governments embrace digital transformation, they stand at the forefront of addressing critical challenges in digital governance. These challenges involve managing both the opportunities and risks associated with using technology to enhance public services and democratic engagement. Increasingly, local governments are adopting artificial intelligence (AI) to achieve benefits such as greater efficiency, cost savings, improved decision-making, and better services. AI can also strengthen transparency, participation, and accountability.
At the same time, local governments must carefully assess the risks and challenges associated with the safe and responsible adoption of AI. They must ensure that their actions reinforce, rather than undermine, the principles of open government.
This document explores how local governments can embed open government in their AI and digital governance policies, based on insights from civil society and six OGP Local members: Austin, Bogotá, Buenos Aires, Paris, Plateau, and Scotland…(More)”.
Chapter by Mireille Hildebrandt: “… investigates the link between the contestability that is key to constitutional democracies on the one hand and the falsifiability of scientific theories on the other hand, with regard to large language models (LLMs). Legally relevant decision-making that is based on the deployment of applications that involve LLMs must be contestable in a court of law. The current flavour of such contestability is focused on transparency, usually framed in terms of the explainability of the model (explainable AI). In the long run, however, the fairness and reliability of these models should be tested in a more scientific manner, based on the falsifiability of the theoretical framework that should underpin the model. This requires that researchers in the domain of LLMs learn to abduct theoretical frameworks, based on the outputs of LLMs and the real-world patterns they imply, while this abduction should be such that the theory can be inductively tested in a way that allows for falsification. On top of that, researchers need to conduct empirical research to enable such inductive testing. The chapter thus argues that the contestability required under the Rule of Law should move beyond explanations of how the model generates its output, to whether the real-world patterns represented in the model output can falsify the theoretical framework that should inform the model…(More)”.