Stefaan Verhulst
Paper by Anna Colom and Marta Poblet: “In our digital world, reusing data to inform decisions, advance science, and improve people’s lives should be easier than ever. However, the reuse of data remains limited, complex, and challenging. Some of this complexity requires rethinking consent and public participation processes around data reuse. First, to ensure the legitimacy of uses, including normative aspects like agency and data sovereignty. Second, to enhance data quality and mitigate risks, especially since data are proxies that can misrepresent realities or be oblivious to the original context or use purpose. Third, because data, both as a good and infrastructure, are the building blocks of both technologies and knowledge of public interest that can help societies work towards the well-being of their people and the environment. Using the case study of the European Health Data Space, we propose a multidimensional, polytopic framework with multiple intersections for democratising decision-making and improving the way in which meaningful participation and consent processes are conducted at various levels and from the point of view of institutions, regulations, and practices…(More)”.
Paper by Yu Zheng et al: “City plans are the product of integrating human creativity with emerging technologies, which continuously evolve and reshape urban morphology and environments. Here we argue that large language models hold large untapped potential in addressing the growing complexities of urban planning and enabling a more holistic, innovative and responsive approach to city design. By harnessing their advanced generation and simulation capabilities, large language models can contribute as an intelligent assistant for human planners in synthesizing conceptual ideas, generating urban designs and evaluating the outcomes of planning efforts…(More)”.
Article by Begoña G. Otero and Stefaan G. Verhulst: “The concept of digital twins has quickly become the new darling of the smart city world. By 2030, more than 500 cities plan to launch some kind of digital twin platform, often wrapped in dazzling promises: immersive 3D models of entire neighborhoods, holographic maps of traffic flows, real-time dashboards of carbon emissions. These visuals capture headlines and the political imagination. But beneath the glossy graphics lies a harder question: what actually makes a digital twin useful, trustworthy, and sustainable?
Having recently worked directly on a U.S. metropolitan digital twin pilot, we know the answer is not just shiny and sophisticated imagery. A genuine twin is a living ecosystem of different stakeholders and diverse datasets — integrating maps, open government data, IoT sensors, predictive AI models, synthetic data, and mobility data into a single responsive platform. Done right, a digital twin becomes a decision-making sandbox where planners can simulate, for example, how pedestrianizing a street shifts congestion, or how a Category 3 hurricane might inundate vulnerable neighborhoods.
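To make the “decision-making sandbox” idea concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the pilot described above; the street names, capacities, traffic volumes, and the naive redistribution rule are all assumptions for illustration, standing in for the map, sensor, and modelling layers a real twin would integrate.

```python
# Toy "digital twin" sandbox: fuse a few invented data layers for a small
# street grid and simulate how pedestrianizing one street shifts traffic
# onto its neighbors. All values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Street:
    name: str
    capacity: int         # vehicles/hour the street can absorb (GIS layer)
    volume: int           # observed vehicles/hour (IoT sensor layer)
    neighbors: list[str]  # adjacent streets (map layer)

    @property
    def congestion(self) -> float:
        return self.volume / self.capacity


def pedestrianize(streets: dict[str, Street], closed: str) -> dict[str, Street]:
    """Close one street and spread its traffic evenly across its neighbors."""
    displaced = streets[closed].volume
    share = displaced // len(streets[closed].neighbors)
    scenario = {}
    for name, s in streets.items():
        if name == closed:
            scenario[name] = Street(name, s.capacity, 0, s.neighbors)
        elif name in streets[closed].neighbors:
            scenario[name] = Street(name, s.capacity, s.volume + share, s.neighbors)
        else:
            scenario[name] = s
    return scenario


# Hypothetical baseline for three streets.
baseline = {
    "Main St": Street("Main St", 1200, 900, ["Oak Ave", "Elm St"]),
    "Oak Ave": Street("Oak Ave", 800, 500, ["Main St"]),
    "Elm St": Street("Elm St", 600, 400, ["Main St"]),
}

scenario = pedestrianize(baseline, "Main St")
for name in baseline:
    print(f"{name}: {baseline[name].congestion:.0%} -> {scenario[name].congestion:.0%}")
```

A real platform would replace these hard-coded values with live feeds and a proper traffic model, but the structure is the same: layered data in, scenario out.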

If the initial rush of digital twin projects has taught us anything, it’s that technology alone is not enough. Building a functional digital twin is as much an institutional and governance challenge as a technical one. The platform must integrate data from multiple sources, including government departments, private firms, utilities, researchers, and other relevant entities. Global leaders in the field, from Singapore’s Virtual Singapore to Orlando’s much-publicized holographic twin, have all discovered the same truth: the long-term value of a twin depends not on its graphics but on its data governance. Singapore’s twin works because its government mandated cross-agency data sharing. Orlando’s flashy prototype only turned serious when planners acknowledged that its future hinges on becoming an open ecosystem where utilities, agencies, and even residents can contribute data.
In practice, however, the necessary data is often scattered and siloed. European pilots have shown this clearly: the obstacle was not imagining use cases but finding and accessing the data to make them possible. In the OASC pilot regions, such as Athens and Pilsen, project teams reported that the biggest hurdle was that much of the relevant data sat in silos — owned by private firms, higher levels of government, or agencies unused to thinking of themselves as data stewards. Even when data existed, municipalities often lacked clear mandates, agreements, or technical workflows to integrate it responsibly.
The same applied in Helsinki, which today runs one of the most advanced city twins in Europe. Before reaching that point, the city had to spend years building a reliable data repository, common standards, and trust agreements with residents to ensure equitable use of information. Similarly, in the UK, the Gemini Principles and the subsequent National Digital Twin programme were born out of recognition that without shared governance, data would remain fragmented across sectors such as energy, transport, and the environment. Both cases show that even resource-rich contexts face governance hurdles first; technology comes later.
The lesson is clear: digital twins will only move beyond hype if we treat them as governance infrastructures, not visual spectacles. That means aligning the concept with frameworks of data governance, collaboration, and digital self-determination — in the process, ensuring that digital twins serve public purposes, respect local contexts, and empower communities to shape how their data is used…(More)”.
Article by Lea Kaspar and Rose Payne: “As digital technologies reshape global power, the rules that govern them are increasingly negotiated in multilateral fora. But if these bodies are to deliver legitimate and future-proof governance frameworks for the Internet, AI, cybersecurity, and data, their processes must adapt – expanding participation to include the wider set of actors whose expertise is indispensable.
Multistakeholderism – the principle that governments, civil society, academia, the technical community, and the private sector should work together on digital policy – is not new. It is foundational to the Internet itself. Stakeholder engagement is both an inherent good – enhancing openness and inclusivity – and an instrumental one, improving outcomes and legitimacy.
Too often, however, multilateralism and multistakeholderism are cast as competing models of legitimacy: states vs. stakeholders. In reality, the two can, and must, complement one another. A helpful way to think about how they can do so is through two tracks of integration:
- State-led openings (vertical integration), where stakeholder perspectives are fed through governments – via national consultations, advisory bodies, or inclusion in official delegations. This makes openness part of a country’s digital foreign policy.
- Institutional openings (horizontal integration), where stakeholders engage independently with multilateral institutions through structured participation channels created by the institutions themselves.
Both tracks already exist in practice, but are applied unevenly. Looking at how each has worked, especially in Internet governance, where multistakeholderism has the deepest roots, shows both the possibilities and the limits…(More)”.
Article by Mark Schmitt: “We’re learning a lot about how government can shape our lives by watching the second Trump Administration dismantle it. One lesson is that government’s capacity to do good runs on information no less than on funding and regulations. From weather and economic forecasts to the census to predictions of other countries’ military capabilities to vaccine monitoring, data and ideas generated inside and outside of the federal government have guided decisions in a world of profound complexity. But as the young men of Elon Musk’s DOGE figured out quickly, information is also a point of vulnerability for the entire workings of government, and it can be exploited by those like Musk and Trump who seek to disable government, concentrate its power, or redirect that power to private profit.
Dozens of small federal agencies devoted to information and ideas have been gutted; expert advisory commissions disbanded; and grants for libraries, museums, and scientific and health research cut off without review. Indicators such as the National Assessment of Educational Progress, which always had strong conservative support, have been canceled, pared back, or delayed, often because contracts were arbitrarily canceled, advisory panels dissolved, and key staff fired.
Much of the loosely connected galaxy of information and data that guides policy falls outside the formal boundaries of government, in a pluralistic set of institutions that are independent of the administration or political parties. Along with universities, independent policy research organizations—think tanks—are key to the system of knowledge production and policy ideas, particularly in the United States. Every think tank, aside from the few that maintain an allegiance to the current Administration, now faces a test: How do they not only survive, but remain relevant when the assumptions and processes under which they were born have been wiped away? How can their capacities be put to good use at a moment when the idea of informed decision-making is itself under attack, when little matters other than the raw and often arbitrary exercise of power?..(More)”.
Article by Adam Zable, Stefaan Verhulst: “Non-Traditional Data (NTD) — data digitally captured, mediated, or observed through instruments such as satellites, social media, mobility apps, and wastewater testing — holds immense potential when re-used responsibly for purposes beyond those for which it was originally collected. If combined with traditional sources and guided by strong governance, NTD can generate entirely new forms of public value — what we call the Third Wave of Open Data.
Yet, there is often little awareness of how these datasets are currently being applied, and even less visibility on the lessons learned. That is why we curate and monitor, on a quarterly basis, emerging developments that provide better insight into the value and risks of NTD.
In previous updates, we focused on how NTD has been applied across domains like financial inclusion, public health, socioeconomic analysis, urban mobility, governance, labor dynamics, and digital behavior, helping to surface hidden inequities, improve decision-making, and build more responsive systems.
In this update, we have curated recent advances where researchers and practitioners are using NTD to close monitoring gaps in climate resilience, track migration flows more effectively, support health surveillance, and strengthen urban planning. Their work demonstrates how satellite imagery can provide missing data, how crowdsourced information can enhance equity and resilience, and how AI can extract insights from underused streams.
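As a toy illustration of the fusion pattern behind many of these cases, the sketch below calibrates a continuously available non-traditional proxy (for example, a satellite-derived index) against a gappy traditional series (for example, an official survey) and uses the calibrated proxy to fill the missing months. All numbers are invented assumptions; the point is the mechanism, not any specific dataset from this update.

```python
# Illustrative sketch: fill gaps in a "traditional" series using a
# non-traditional proxy, via a simple least-squares calibration fitted
# on the months where both are observed. All values are invented.

survey = {"Jan": 100.0, "Feb": None, "Mar": 108.0, "Apr": None, "May": 115.0}
proxy = {"Jan": 0.52, "Feb": 0.55, "Mar": 0.58, "Apr": 0.60, "May": 0.63}

# Fit survey ~ a * proxy + b on the overlapping months.
pairs = [(proxy[m], survey[m]) for m in survey if survey[m] is not None]
mean_x = sum(x for x, _ in pairs) / len(pairs)
mean_y = sum(y for _, y in pairs) / len(pairs)
a = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / sum(
    (x - mean_x) ** 2 for x, _ in pairs
)
b = mean_y - a * mean_x

# Use the calibrated proxy to estimate the months the survey missed.
filled = {m: v if v is not None else round(a * proxy[m] + b, 1) for m, v in survey.items()}
print(filled)  # gaps in Feb and Apr now carry proxy-based estimates
```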
Below we highlight recent cases, organized by public purpose and type of data. We conclude with reflections on the broader patterns and governance lessons emerging across these cases. Taken together, they illustrate both the expanding potential of NTD applications and the collaborative frameworks required to translate data innovation into real-world impact.
Categories
- Public Health & Disease Surveillance
- Environment, Climate & Biodiversity
- Urban Systems, Mobility & Planning
- Migration
- Food Security & Markets
- Information Flows for Risk and Policy…(More)”.
Article by Gideon Lichfield: “Point your browser at publicai.co and you will experience a new kind of artificial intelligence, called Apertus. Superficially, it looks and behaves much like any other generative AI chatbot: a simple webpage with a prompt bar, a blank canvas for your curiosity. But it is also a vision of a possible future.
With generative AI largely in the hands of a few powerful companies, some national governments are attempting to create sovereign versions of the technology that they can control. This is taking various forms. Some build data centres or provide AI infrastructure to academic researchers, like the US’s National AI Research Resource or a proposed “Cern for AI” in Europe. Others offer locally tailored AI models: Saudi-backed Humain has launched a chatbot trained to function in Arabic and respect Middle Eastern cultural norms.
Apertus was built by the Swiss government and two public universities. Like Humain’s chatbot, it is tailored to local languages and cultural references; it should be able to distinguish between regional dialects of Swiss-German, for example. But unlike Humain, Apertus (“open” in Latin) is a rare example of fully fledged “public AI”: not only built and controlled by the public sector but open-source and free to use. It was trained on publicly available data, not copyrighted material. Data sources and underlying code are all public, too.
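One practical consequence of that openness is that, in principle, anyone can download the weights and run the model locally with standard open tooling. Below is a minimal sketch of what that might look like with Hugging Face's transformers library; the model identifier is a placeholder assumption, not a verified repository name, so check the Apertus project's official release for the actual one.

```python
# Hypothetical sketch of running an open-weight public model locally.
# The model ID below is a placeholder assumption, not a real repository name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="swiss-ai/apertus-placeholder",  # replace with the official model ID
)

prompt = "In one sentence, what makes an AI model 'public AI'?"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```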
Although it is notionally limited to Swiss users, there is, at least temporarily, an international portal — the publicai.co site — that was built with support from various government and corporate donors. This also lets you try out a public AI model created by the Singaporean government. Set it to Singaporean English and ask for “the best curry noodles in the city”, and it will reply: “Wah lau eh, best curry noodles issit? Depends lah, you prefer the rich, lemak kind or the more dry, spicy version?”
Apertus is not intended to compete with ChatGPT and its ilk, says Joshua Tan, an American computer scientist who led the creation of publicai.co. It is comparatively tiny in terms of raw power: its largest model has 70bn parameters (a measure of an AI model’s complexity) versus GPT-4’s 1.8tn. And it does not yet have reasoning capabilities. But Tan hopes it will serve as a proof of concept that governments can build high-quality public AI with fairly limited resources. Ultimately, he argues, it shows that AI “can be a form of public infrastructure like highways, water, or electricity”.
This is a big claim. Public infrastructure usually means expensive investments that market forces alone would not deliver. In the case of AI, market forces might appear to be doing just fine. And it is hard to imagine governments summoning up the money and talent needed to compete with the commercial AI industry. Why not regulate it like a utility instead of trying to build alternatives?..(More)”.
Report by Darya Minovi: “The Trump administration is systematically attacking a wide range of public health, environmental, and safety rules. By law, federal agencies must notify the public about potential rule changes and give them the opportunity to make comments on those changes. But in many cases, the Trump administration is evading that legal requirement.
During the administration’s first six months in office, roughly 600 final rules were issued across six key science agencies. For 182 of these rules, the administration bypassed the public notice and comment period, cutting the public out of the process of shaping rules that affect their health and safety and our planet. This undermines the principles of accountability and transparency that should be part of our democracy…(More)”.
Editorial by Christian Fynbo Christiansen, Persephone Doupi, Nienke Schutte, and Damir Ivanković: “The European Health Data Space (EHDS) regulation creates a health-specific ecosystem for both primary and secondary use of health data. HealthData@EU, the novel cross-border technical infrastructure for secondary use of electronic health data, will be crucial for achieving the ambitious goals of the EHDS.
In 2022, the “HealthData@EU pilot project,” co-funded under the EU4Health framework (GA nr 101079839), brought together 17 partners, including candidate Health Data Access Bodies (HDABs), health data sharing infrastructures, and European agencies, in order to build and test a pilot version of the HealthData@EU infrastructure and provide recommendations for metadata standards, data quality, data security, and data transfer to support development of the EHDS cross-border infrastructure.
This editorial and the other manuscripts presented in this Special EJPH Supplement will provide readers with insights from real-life scenarios that follow the research user journey and highlight the challenges of health research, as well as the solutions the EHDS can provide…(More)”.
Paper by Barbara J Evans and Azra Bihorac: “As nations design regulatory frameworks for medical AI, research and pilot projects are urgently needed to harness AI as a tool to enhance today’s regulatory and ethical oversight processes. Under pressure to regulate AI, policy makers may think it expedient to repurpose existing regulatory institutions to tackle the novel challenges AI presents. However, the profusion of new AI applications in biomedicine — combined with the scope, scale, complexity, and pace of innovation — threatens to overwhelm human regulators, diminishing public trust and inviting backlash. This article explores the challenge of protecting privacy while ensuring access to large, inclusive data resources to fuel safe, effective, and equitable medical AI. Informed consent for data use, as conceived in the 1970s, seems dead, and it cannot ensure strong privacy protection in today’s large-scale data environments. Informed consent has an ongoing role but must evolve to nurture privacy, equity, and trust. It is crucial to develop and test alternative solutions, including those using AI itself, to help human regulators oversee safe, ethical use of biomedical AI and give people a voice in co-creating privacy standards that might make them comfortable contributing their data. Biomedical AI demands AI-powered oversight processes that let ethicists and regulators hear directly and at scale from the public they are trying to protect. Nations are not yet investing in AI tools to enhance human oversight of AI. Without such investments, there is a rush toward a future in which AI assists everyone except regulators and bioethicists, leaving them behind…(More)”.