Stefaan Verhulst
Report by the Office for Statistics Regulation: “This report asks: what is trust? Who trusts? And how do you build trust? These questions help us to understand whether the public trusts those involved in the production and communication of official statistics, and what accounts for different levels of trust. They are answered by synthesising the existing literature, supported by primary analysis (described in Methodology). The report then concludes with a series of practical recommendations that can be adopted to increase levels of trust, improve trustworthiness and contribute to the overall vision of ensuring that official statistics serve the public good.
This report investigates levels of trust and draws together evidence exploring influencing factors. As the literature and existing studies focusing explicitly on the topic of trust in “official statistics” are relatively sparse – with obvious exceptions including the Public Confidence in Official Statistics (PCOS) survey and a small collation of commissioned surveys dedicated to this theme – this review adopts a wider approach which analyses levels of public trust more broadly. It considers studies which explore levels of public trust in actors and objects involved in the production, or communication, of official statistics. This includes the government; the Civil Service; scientists and experts; journalists and the media; research on communication platforms; and evidence more broadly.
From this broader approach to exploring trust, readers are provided with an overall picture of public trust levels. To support this aim, this review adopts a cross-disciplinary outlook drawing on psychological, sociological and political accounts of trust, and considers a range of models developed within these fields…(More)”.
Report by UNU: “…examines how artificial intelligence (AI) can be used to address some of the most pressing challenges facing humanity and the planet, including climate change, humanitarian crises, food insecurity and gaps in access to health and education. Drawing on nearly a decade of work under the International Telecommunication Union (ITU)–led AI for Good platform, the report focuses on three AI domains that have demonstrated particular relevance for the public good: robotics, geospatial artificial intelligence (GeoAI) and AI for communications networks.
Across applications ranging from healthcare and telemedicine to disaster response, biodiversity conservation and energy use optimization, the report documents how AI systems are already being applied to improve early warning, decision-making and service delivery, including in low-resource and crisis-affected settings. These applications illustrate how AI technologies beyond generative AI, when embedded in physical systems, spatial analysis and digital infrastructure, can support human well-being and planetary health.
At the same time, the report emphasizes that the benefits of AI are not automatic. Without appropriate governance, investment and capacity, AI systems risk reinforcing existing inequalities, exacerbating environmental pressures and undermining trust. Based on the analysis of case studies and consultations with experts from the AI for Good community, the report identifies five interrelated pathways that are critical for creating an enabling environment for AI for human and planetary well-being:
- Data quality, access and governance: Strengthening access to high-quality, representative and well-governed data, particularly geospatial data, to reduce bias, exclusion and fragmented decision-making.
- Digital infrastructure and access: Investing in inclusive digital infrastructure, including broadband connectivity, compute capacity and interoperable systems, to address persistent digital divides.
- AI literacy and talent: Expanding digital literacy, skills development and talent pipelines to enable institutions and societies to effectively deploy, interpret and govern AI systems.
- Responsible AI policy: Embedding safeguards related to human rights, privacy, cybersecurity, physical safety, labour impacts and environmental sustainability throughout the AI lifecycle.
- Digital ecosystem development: Fostering partnerships across governments, the United Nations system, industry, academia and civil society to translate innovation into scalable and durable public-interest outcomes.
Together, these pathways provide a practical framework for moving from isolated pilot projects towards the responsible and systemic deployment of AI systems in support of the Sustainable Development Goals. The report concludes that shaping the trajectory of AI through deliberate, inclusive and human-centred policymaking will be essential to ensuring that rapid technological advances translate into meaningful and lasting benefits for people and the planet…(More)”.
Website by Sean Hardesty Lewis: “Every ten years, New York City conducts a massive, manual census of its street trees. Thousands of volunteers walk every block with clipboards, counting and identifying every oak and maple across the five boroughs.
They do it because the digital map does not know the trees exist.
To Google or Apple, the city is a grid of addresses and listings. The rest of the world gets flattened. Not because it is invisible, but because it was never entered into a database. The map can tell you where a pharmacy is. It cannot tell you where the fire escapes are. Where the murals are. Where the awnings begin. Where the street trees actually cast shade. Where the scaffolding still hangs.
That is not a New York problem. It is a mapping problem.
We processed hundreds of thousands of Manhattan street view images with a vision language model (VLM). Instead of asking the model for coordinates, we simply asked it to describe what it saw…(More)”.
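The site does not say which model or API was used, but the approach it describes – prompting a vision language model for a free-text description of each street-view frame rather than for coordinates – is easy to sketch. Below is a minimal, hypothetical illustration assuming an OpenAI-compatible vision endpoint; the model name, prompt wording and file path are placeholders, not details from the project.

```python
import base64
from pathlib import Path

from openai import OpenAI  # assumes an OpenAI-compatible vision endpoint

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a scene description, not coordinates or structured categories.
PROMPT = (
    "Describe everything visible in this street-view image: "
    "street trees, awnings, scaffolding, murals, fire escapes, storefronts."
)

def describe(image_path: str) -> str:
    """Return the VLM's free-text description of one street-view frame."""
    data = base64.b64encode(Path(image_path).read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{data}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical file name; in practice this loop runs over the full corpus.
    print(describe("manhattan_panorama_001.jpg"))
```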
Paper by the Tony Blair Institute: “…A more realistic and grounded understanding of sovereignty that reflects the realities of interdependence rather than the illusion of isolation is therefore needed. Sovereignty in the age of AI is not a binary condition to be achieved or lost. It is fundamentally a question of agency and choice – the ability of a state to make deliberate, future-oriented decisions about how AI is integrated, governed and used in line with its national goals.
Sovereignty is shaped by how well countries configure and negotiate their position within an inherently interdependent technological system. This requires balancing a persistent trilemma: pursuing control by investing in domestic capability, accessing frontier capability through global systems, and ensuring coherence across regulatory, industrial, fiscal and diplomatic strategies. No state can maximise all three simultaneously. The task of modern statecraft is to manage these trade-offs within the layers of the AI stack in ways that preserve strategic autonomy and expand national agency over time.
Effective AI sovereignty therefore cannot be pursued through isolation. It must be deliberately negotiated…(More)”.
Article by Cory Doctorow: “…The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers is having other people take their stuff without permission.
The US Copyright Office’s position means that the only way these companies can get a copyright is to pay humans to do creative work. This is a recipe for centaurhood. If you are a visual artist or writer who uses prompts to come up with ideas or variations, that’s no problem, because the ultimate work comes from you. And if you are a video editor who uses deepfakes to change the eyelines of 200 extras in a crowd scene, then sure, those eyeballs are in the public domain, but the movie stays copyrighted.
But creative workers do not have to rely on the US government to rescue us from AI predators. We can do it ourselves, the way the writers did in their historic writers’ strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also because they are allowed to do something that virtually no other workers are allowed to do: they can engage in “sectoral bargaining”, whereby all the workers in a sector can negotiate a contract with every employer in the sector.
That has been illegal for most workers since the late 1940s, when the Taft-Hartley Act outlawed it. If we are gonna campaign to get a new law passed in hopes of making more money and having more control over our labor, we should campaign to restore sectoral bargaining, not to expand copyright…(More)”.
Book by Ali Cheshmehzangi: “This book presents a bold reimagining of urban futures through the convergence of generative artificial intelligence and digital twin technologies. At a time when cities are dealing with growing climate stresses, strained infrastructure, and profound social inequality, it presents AI-powered digital twins as catalysts for systemic change, civic empowerment, and environmental regeneration rather than merely as planning tools. Across eleven technically sound and conceptually rich chapters, the book traces the development of urban digital twins from data-driven models to intelligent, adaptive systems that learn, simulate, and co-design with their urban settings. It investigates how generative AI may improve climate simulation, manage floods, lessen urban heat, lower emissions, and promote participatory planning, all while posing important questions of ethics, equity, and governance. Moving from fundamental theory to state-of-the-art practice, it covers open data standards, AI-twin integration architectures, and the difficulties of scaling prototypes into citywide systems, and demonstrates how these technologies can be used to depict community-driven urban scenarios, model circular material flows, and build green roofs. Throughout, the book maintains that what truly defines intelligence is not efficiency alone but cities’ ability to restore ecosystems, incorporate a variety of viewpoints, and envision resilient and just futures. It promotes a new urban paradigm in which ethics are ingrained, intelligence is dispersed, and regeneration becomes the design axiom, while keeping a close eye on both potential and responsibility. Drawing on real-world examples, speculative design theory, and systems thinking, it provides scholars, planners, technologists, and policymakers with a visionary yet practical road map for creating cities that are not just intelligent but profoundly alive. This is not a book about managing cities more efficiently; it is a book about reconsidering the basic concept of urban intelligence and co-creating urban futures filled with care, courage, and collective imagination…(More)”.
Book by William Rankin: “Maps are ubiquitous in contemporary life—not just for navigation, but for making sense of our society, our environment, and even ourselves. In an instant, huge datasets can be plotted on command and we can explore faraway places in exacting detail. Yet the new ease and speed of data mapping can often lead to the same results as ever: over-simplified maps used as tools for top-down control.
Cartographer and historian William Rankin argues that it’s time to reimagine what a map can be and how it can be used. Maps are not neutral visualizations of facts. They are innately political, defining how the world is divided, what becomes visible and what stays hidden, and whose voices are heard. What matters isn’t just the topics or the data, but how maps make arguments about how the world works. And the consequences are enormous. A map’s visual argument can change how cities are designed and how rivers flow, how wars are fought and how land claims are settled, how children learn about race and how colonialism becomes a habit of mind. Maps don’t just show us information—they help construct our world.
Brimming with vibrant maps, including many “radical” maps created by Rankin himself and by other cutting-edge mapmakers, Radical Cartography exposes the consequences of how maps represent boundaries, layers, people, projections, color, scale, and time. Challenging the map as a tool of the status quo, Rankin empowers readers to embrace three unexpected values for the future of cartography: uncertainty, multiplicity, and subjectivity. Changing the tools—changing the maps—can change the questions we ask, the answers we accept, and the world we build…(More)”.
European Parliament: “In the context of the wars in Ukraine and other parts of the world, the increasingly global effects – material and political – of war make it more important than ever to measure the level of threats to peace, security and democracy around the world. The Normandy Index has presented an annual measurement of these threats since the 2019 Normandy Peace Forum. The results of the 2025 exercise suggest the level of threats to peace is at its highest since the index was launched, confirming declining trends in global security resulting from conflict, geopolitical rivalry, growing militarisation and hybrid threats. The findings of the 2025 exercise draw on data compiled in 2024 and 2025 to compare peace – defined on the basis of a given country’s performance against a range of predetermined threats – across countries and regions. Derived from the Index, 63 individual country case studies provide a picture of the state of peace in the world today. Designed and prepared by the European Parliamentary Research Service (EPRS), in conjunction with, and on the basis of data provided by, the Institute for Economics and Peace, the Normandy Index is produced in partnership with the Region of Normandy. The paper forms part of the EPRS contribution to the 2026 Normandy World Peace Forum…(More)”.
Paper by Alek Tarkowski and Felix Sieker: “Europe’s AI landscape is dominated by capital-intensive, proprietary systems controlled by a small number of non-European actors. This creates an unprecedented concentration of power and limits European sovereignty in a technology that is rapidly reshaping society and the economy. The European Union has responded with ambitious strategies, but risks reinforcing dependencies without articulating a clear sovereign logic for AI development.
Public AI offers an alternative path: AI systems developed under transparent governance, with public accountability, equitable access to core components, and a clear focus on public-purpose functions…(More)”.
Book edited by Johanna Seibt, Raul Hakli and Marco Nørskov: “Robophilosophy is the philosophical engagement with the phenomena and problems that arise with “social robots”: robots developed for use everywhere in society, at work, in public spaces, or at home. This new area of research is applied philosophy undertaken in close contact with, or even as part of, empirical research in the multidisciplinary areas of human–robot interaction studies and social robotics. It includes, but goes beyond, ethical considerations, offering new research in social ontology, philosophy of mind, metaphysics, and more.
The book explores the wide-ranging questions we currently have about the new class of artificial social agents generated by robotics technology. Written by researchers from philosophy, psychology, and the technical sciences, the book shows how philosophical knowledge can help us to navigate the unprecedented sociocultural risks arising from this technology…(More)”.