Stefaan Verhulst
Article by Iason Gabriel, Geoff Keeling, Arianna Manzini & James Evans: “The rise of more-capable AI agents is likely to have far-reaching political, economic and social consequences. On the positive side, they could unlock economic value: the consultancy McKinsey forecasts an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally, once AI agents are widely deployed (see go.nature.com/4qeqemh). They might also serve as powerful research assistants and accelerate scientific discovery.
But AI agents also introduce risks. People need to know who is responsible for agents operating ‘in the wild’, and what happens if they make mistakes. For example, in November 2022, an Air Canada chatbot mistakenly decided to offer a customer a discounted bereavement fare, leading to a legal dispute over whether the airline was bound by the promise. In February 2024, a tribunal ruled that it was — highlighting the liabilities that corporations could experience when handing over tasks to AI agents, and the growing need for clear rules around AI responsibility.
Here, we argue for greater engagement by scientists, scholars, engineers and policymakers with the implications of a world increasingly populated by AI agents. We explore key challenges that must be addressed to ensure that interactions between humans and agents — and among agents themselves — remain broadly beneficial…(More)”.
Article by Louai Alarabi: In today’s world, almost everything is connected: our smartphones and cars link to our homes and to the services around us, and at the core of it all is spatial data.
Spatial data is the digital footprint that pinpoints geographic locations. Unlike other data, it is the most valuable type of information for critical applications such as modern logistics, urban planning, climate monitoring, precision agriculture, disaster management and national security.
This sensitive information not only reveals “where” things are but also answers tangential questions, including “when”, “how”, “who” and “why.” It illuminates hidden patterns, forecasts events and discloses relationships.
But just how is spatial data transforming our world? What are the cybersecurity risks and how do we mitigate them?…(More)”.
Article by Stefaan Verhulst: “This week, during the United Nations (UN) General Assembly, the UN will launch the Global Dialogue on Artificial Intelligence Governance. The symbolism is powerful: for the first time, UN agencies and member states, along with other stakeholders such as civil society and industry, will gather under UN auspices to express their expectations regarding how AI should be governed.
Symbolism won’t cut it. Unless the Dialogue tackles the widening asymmetries in data and AI with concrete, actionable steps, it risks becoming just another high-level talk shop. These divides are not abstract. They determine who has access to knowledge, who sets standards, and who reaps the benefits of AI and other digital technologies. Left unaddressed, they will leave most of the world dependent on technologies—and decisions—shaped elsewhere.
From an AI governance perspective, there are five asymmetries that need particular attention. Each of these will have a crucial bearing on who benefits from AI, who is left behind, and ultimately what role this powerful technology plays in our lives…(More)”.
Report by the Open Government Partnership: “As local governments embrace digital transformation, they stand at the forefront of addressing critical challenges in digital governance. These challenges involve managing both the opportunities and risks associated with using technology to enhance public services and democratic engagement. Increasingly, local governments are adopting artificial intelligence (AI) to achieve benefits such as greater efficiency, cost savings, improved decision-making, and better services. AI can also strengthen transparency, participation, and accountability.
At the same time, local governments must carefully assess the risks and challenges associated with the safe and responsible adoption of AI. They must ensure that their actions reinforce, rather than undermine, the principles of open government.
This document explores how local governments can embed open government in their AI and digital governance policies, based on insights from civil society and six OGP Local members: Austin, Bogotá, Buenos Aires, Paris, Plateau, and Scotland…(More)”.
Chapter by Mireille Hildebrandt: “… investigates the link between the contestability that is key to constitutional democracies on the one hand and the falsifiability of scientific theories on the other hand, with regard to large language models (LLMs). Legally relevant decision-making that is based on the deployment of applications that involve LLMs must be contestable in a court of law. The current flavour of such contestability is focused on transparency, usually framed in terms of the explainability of the model (explainable AI). In the long run, however, the fairness and reliability of these models should be tested in a more scientific manner, based on the falsifiability of the theoretical framework that should underpin the model. This requires that researchers in the domain of LLMs learn to abduct theoretical frameworks, based on the output models of LLMs and the real-world patterns they imply, while this abduction should be such that the theory can be inductively tested in a way that allows for falsification. On top of that, researchers need to conduct empirical research to enable such inductive testing. The chapter thus argues that the contestability required under the Rule of Law should move beyond explanations of how the model generates its output, to whether the real-world patterns represented in the model output can falsify the theoretical framework that should inform the model…(More)”.
Article by Bryan Paul Robert, Mahidhar Chellamani, and Jyotirmoy Dutta: “For years, some of India’s most valuable geospatial datasets remained scattered across government departments, research institutes, or private organizations. They held immense potential to transform logistics, strengthen climate resilience, and support smarter urban planning, but they remained difficult to access, buried in different formats and lacking interoperability.
Recognizing this challenge, the Government of India through the Department of Science and Technology (DST) tasked the Centre of Data for Public Good (CDPG) at the Indian Institute of Science (IISc) with a bold vision: to develop a standards-based geospatial data exchange platform. The result was the Integrated Geospatial Data Exchange Interface (GDI) – a unified, open-access system built on OGC APIs, designed to make metadata-rich, analysis-ready geospatial data available for application developers, researchers, startups, and policymakers alike…
Designed to treat publicly funded geospatial data as a common good, GDI has established itself as a unified, open-access platform for interoperable, consent-based, analysis-ready, and metadata-rich data exchange using globally accepted standards. As a contribution to the third layer of the India stack, it enables seamless sharing of geospatial data, accelerating applications in data science and empowering developers.

GDI catalogue (catalogue.geospatial.org.in)
Unlike traditional open data portals, GDI emphasizes secure, consent-based data sharing, where data providers retain control over how their datasets are accessed and used. Robust auditing mechanisms ensure transparency and accountability, while an integrated analytics engine allows users to derive insights without always downloading raw datasets. The platform is also capable of onboarding large volumes of data and supports fine-grained access control for different user groups. By adhering to OGC standards, GDI ensures interoperability and fosters collaboration by allowing value-added services like data fusion, visualization, and application development. In doing so, it functions not just as a data exchange platform but as an ecosystem that drives trust, innovation, and responsible use of geospatial data.
To achieve these goals, CDPG developed a new GIS server based on the latest OGC REST APIs with the Intelligent Universal Data Exchange (IUDX) platform at its core – a technology stack developed in-house…(More)”
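The interoperability the article describes comes from the predictable REST layout that OGC API standards prescribe: every collection exposes its features at a standard `/collections/{id}/items` path, returned as GeoJSON. As an illustrative sketch only (the host, collection name, and sample feature below are hypothetical, not actual GDI endpoints or data), a client consuming such a server might build an items request with a bounding-box filter and parse the GeoJSON FeatureCollection the standard specifies:

```python
import json
from urllib.parse import urlencode

def items_url(base, collection, bbox, limit=10):
    """Build an OGC API - Features 'items' request URL.

    The /collections/{id}/items path and the bbox/limit query
    parameters follow the OGC API - Features standard; the base
    host and collection name here are illustrative placeholders.
    """
    query = urlencode({"bbox": ",".join(map(str, bbox)), "limit": limit})
    return f"{base}/collections/{collection}/items?{query}"

# A minimal GeoJSON FeatureCollection, the media type an
# OGC API - Features endpoint returns (sample data, not GDI's).
sample_response = json.loads("""
{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [77.5946, 12.9716]},
     "properties": {"name": "Bengaluru air-quality sensor"}}
  ]
}
""")

url = items_url("https://example-gdi-server.in", "sensors",
                bbox=(77.4, 12.8, 77.8, 13.1))
names = [f["properties"]["name"] for f in sample_response["features"]]
print(url)
print(names)
```

Because every OGC-compliant server follows this same layout, the same few lines of client code work against any collection on the platform — which is precisely what lets value-added services such as data fusion and visualization be built on top without bespoke integration work.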
Paper by ESIR (European Union): “…proposes a new approach to policymaking in the EU, introducing the concept of “policy acupuncture” or “pressure points” to address the complex and interconnected challenges facing Europe. It argues that by identifying and acting on these high-leverage points, the EU can unlock systemic change and achieve greater impact and presents ten policy pressure point interventions to enable Europe’s twin green and digital transitions, increase competitiveness, and build resilience…(More)”.
Report by Perry Hewitt, Ginger Zielinskie, Danil Mikhailov, Ph.D.: “Social impact organizations are already using AI to help solve some of the world’s biggest challenges—but not yet at the speed we need. Informed by our global network of partners and five years of program delivery at data.org, the 2025 Accelerate Report is a roadmap for how data and AI can be used innovatively and intentionally to drive social impact. From fair finance to workforce upskilling, 2025 Accelerate highlights the people, practices, and technology accelerating progress where it’s needed most.
Recommendations to Accelerate
- Stay focused on how AI can assist you and your mission. Don’t lose sight of what you’re trying to accomplish with technology. Incorporate data and AI in ways that fuel both operational excellence and impact strategies.
- Harness the power of partnerships. Partnerships help us all go farther by sharing best practices, curating important information, and facilitating sharing of data, talent, and infrastructure.
- Remember who you’re serving. Building trust and leading with localism are essential to ensuring the tools, technology, and systems you’re building are responsible, inclusive, and sustainable…(More)”.
Paper by Luciano Floridi and Anna Ascani: “Debates on AI governance often focus on regulating risks. This article shifts perspective to examine how AI can augment democratic processes, presenting a critical analysis of the Italian Chamber of Deputies’ pioneering AI initiative. We detail the 2024 project that produced three prototype systems, NORMA (legislative analysis), MSE (drafting assistance), and DepuChat (citizen engagement), which embed principles of transparency, human oversight, and privacy-by-design. We introduce the project’s ‘third way’ development model, a public-academic partnership that contrasts with full in-house or commercial approaches. Using this initiative as a critical case study, we move beyond merely applying the concept of “augmented democracy”. We argue that this real-world implementation reveals key tensions between the goals of efficiency and the preservation of deliberative friction essential to democratic practice. The analysis highlights risks, including staff deskilling and the digitalisation of inequalities, and situates the Italian approach in an international context. We conclude by offering a theoretical refinement of augmented democracy, informed by the practical lessons of its implementation in a complex legislative environment…(More)”.
Essay by Steven Levy: “For decades, Mark Lemley’s life as an intellectual property lawyer was orderly enough. He’s a professor at Stanford University and has consulted for Amazon, Google, and Meta. “I always enjoyed that the area I practice in has largely been apolitical,” Lemley tells me. What’s more, his democratic values neatly aligned with those of the companies that hired him.
But in January, Lemley made a radical move. “I have struggled with how to respond to Mark Zuckerberg and Facebook’s descent into toxic masculinity and Neo-Nazi madness,” he posted on LinkedIn. “I have fired Meta as a client.”
This is the Silicon Valley of 2025. Zuckerberg, now 41, had turned into a MAGA-friendly mixed martial arts fan who didn’t worry so much about hate speech on his platforms and complained that corporate America wasn’t masculine enough. He stopped fact-checking and started hanging out at Mar-a-Lago. And it wasn’t only Zuckerberg. A whole cohort of billionaires seemed to place their companies’ fortunes over the well-being of society…It should be the best of times for the tech world, supercharged by a boom in artificial intelligence. But a shadow has fallen over Silicon Valley. The community still overwhelmingly leans left. But with few exceptions, its leaders are responding to Donald Trump by either keeping quiet or actively courting the government. One indelible image of this capture is from Trump’s second inauguration, where a decisive quorum of tech’s elite, after dutifully kicking in million-dollar checks, occupied front-row seats.
“Everyone in the business world fears repercussions, because this administration is vindictive,” says venture capitalist David Hornik, one of the few outspoken voices of resistance. So Silicon Valley’s elite are engaged in a dangerous dance with a capricious administration—or as Michael Moritz, one of the Valley’s iconic VCs, put it to me, “They’re doing their best to avoid being held up in a protection racket.”…(More)”.