Stefaan Verhulst
Editorial by Christian Fynbo Christiansen, Persephone Doupi, Nienke Schutte, and Damir Ivanković: “The European Health Data Space (EHDS) regulation creates a health-specific ecosystem for both primary and secondary use of health data. HealthData@EU, the novel cross-border technical infrastructure for secondary use of electronic health data, will be crucial for achieving the ambitious goals of the EHDS.
In 2022, the “HealthData@EU pilot project,” co-funded under the EU4Health framework (GA nr 101079839), brought together 17 partners, including potential Health Data Access Body (HDAB) candidates, health data sharing infrastructures, and European agencies, to build and test a pilot version of the HealthData@EU infrastructure and to provide recommendations on metadata standards, data quality, data security, and data transfer to support development of the EHDS cross-border infrastructure.
This editorial and the other manuscripts presented in this Special EJPH Supplement will provide readers with insights from real-life scenarios that follow the research user journey and highlight the challenges of health research, as well as the solutions the EHDS can provide…(More)”.
Paper by Barbara J Evans and Azra Bihorac: “As nations design regulatory frameworks for medical AI, research and pilot projects are urgently needed to harness AI as a tool to enhance today’s regulatory and ethical oversight processes. Under pressure to regulate AI, policy makers may think it expedient to repurpose existing regulatory institutions to tackle the novel challenges AI presents. However, the profusion of new AI applications in biomedicine — combined with the scope, scale, complexity, and pace of innovation — threatens to overwhelm human regulators, diminishing public trust and inviting backlash. This article explores the challenge of protecting privacy while ensuring access to large, inclusive data resources to fuel safe, effective, and equitable medical AI. Informed consent for data use, as conceived in the 1970s, seems dead, and it cannot ensure strong privacy protection in today’s large-scale data environments. Informed consent has an ongoing role but must evolve to nurture privacy, equity, and trust. It is crucial to develop and test alternative solutions, including those using AI itself, to help human regulators oversee safe, ethical use of biomedical AI and give people a voice in co-creating privacy standards that might make them comfortable contributing their data. Biomedical AI demands AI-powered oversight processes that let ethicists and regulators hear directly and at scale from the public they are trying to protect. Nations are not yet investing in AI tools to enhance human oversight of AI. Without such investments, there is a rush toward a future in which AI assists everyone except regulators and bioethicists, leaving them behind…(More)”.
Paper by Yaniv Benhamou & Mélanie Dulong de Rosnay: “The present contribution proposes a novel commons-based copyright licensing model that provides individuals better control over all their data (including copyrightable, personal and technical data) and that covers recent developments in AI technology. The licensing model also proposes restrictions and boundaries (e.g. to authorised users and groups) to protect the commons, allowing communities to define and maintain the political values they choose. Building on the practice of collective management of copyright, it also empowers data trusts to govern and monitor the use and re-use of the concerned data. The model is ultimately meant to address the power imbalance and information asymmetry that characterise today’s economy of data as well as the “data winter” effect that restricts the accessibility of data for public interest, while accommodating and empowering individuals and communities that may have different political values and visions…(More)”.
OECD Report: “The growing demand for high-quality data to inform policy and to enable trustworthy artificial intelligence has increased the relevance of trusted data intermediaries (TDIs). National statistical offices (NSOs) are uniquely positioned to serve as TDIs, given their mandates and public trust. This paper examines practices across 16 NSOs and finds that many are expanding beyond their traditional remit to facilitate data sharing among public administrations, researchers and, in some cases, private actors. These institutions employ robust confidentiality and privacy safeguards, adopt privacy-enhancing technologies (PETs) and operate secure research environments. Oversight mechanisms, trust building and adequate resources are essential for NSOs to succeed in this evolving role. The analysis highlights the importance of NSOs as emerging actors within data ecosystems to support both evidence-based policymaking and the responsible development of AI. It also underscores the need for additional resources and support to ensure NSOs can undertake these expanding roles…(More)”.
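The report stays at the level of practice and policy, but a small sketch can make concrete what one widely used PET does in an NSO setting. The example below implements the Laplace mechanism for a differentially private count; the function name, the survey framing, and the epsilon values are illustrative assumptions of ours, not details from the report.

```python
import numpy as np

def dp_count(records: list[bool], epsilon: float = 1.0) -> float:
    """Release a count under epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1: adding or
    removing a single respondent changes the true count by at most 1,
    so noise drawn from Laplace(0, 1/epsilon) suffices."""
    true_count = sum(records)  # hypothetical per-respondent flags
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative use: a statistical office releasing how many survey
# respondents reported a given condition, without exposing any record.
respondents = [True, False, True, True, False, True]
print(round(dp_count(respondents, epsilon=0.5), 1))
```

Smaller epsilon means more noise and stronger privacy; an NSO would tune that trade-off per release rather than use a single fixed value.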
Article by Kate Hodkinson: “Novel data sources can provide important proxies in data-limited contexts. When combined with traditional humanitarian data, such as needs assessments or displacement tracking, these sources can increase the resilience of the humanitarian data ecosystem. For example, in response to the March 2025 Myanmar earthquake, Microsoft AI for Good Lab provided data on building damage before access to affected areas was possible. In other cases, Meta’s high-resolution population density maps have been used to estimate the number of people living within a 30-metre grid and their demographics, helping organisations identify people in need.
Applications of these novel data sources have been explored through many pilots over the last decade. However, progress has not been linear. Seemingly promising technologies have fallen into obscurity, while others have carved out clear use cases. How should humanitarians use novel data sources to reinforce informational resilience, rather than create new dependencies?
Exploring this question required a structured rubric through which we could assess the integration of novel data sources to date, understand their technosocial context, and consider the factors that may define their future use in the humanitarian sector. We used two tools to do this: the S-Curve and the Technology Axis Model.
The S-Curve
The humanitarian sector’s adoption of novel data sources can be mapped against an S-Curve (adapted from Fisher, 1971), which measures maturity from nascent potential to normative practice. For example, the use of satellite data to create dynamic, high-resolution population estimates has moved from a ‘nascent’ opportunity towards a common practice, proving particularly valuable in contexts with limited or outdated census data. The S-Curve offers a way to plot where novel data sources have been integrated into crisis response analysis and where they appear trapped in a perpetual pilot phase.
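The S-Curve the authors adapt is, at bottom, a logistic function, so adoption over time can be sketched in a few lines of code. The midpoint year and growth rate below are illustrative assumptions of ours, not figures from the article.

```python
import math

def adoption(year: float, midpoint: float, growth: float) -> float:
    """Logistic S-curve (after Fisher, 1971): the fraction of the
    sector that has adopted a data source by a given year. `midpoint`
    is the year of 50% adoption; `growth` sets the curve's steepness."""
    return 1.0 / (1.0 + math.exp(-growth * (year - midpoint)))

# Illustrative: satellite-derived population estimates, assumed here
# to have reached roughly 50% adoption around 2020.
for year in range(2012, 2029, 4):
    print(f"{year}: {adoption(year, midpoint=2020, growth=0.45):.0%}")
```

Plotting real integration milestones against such a curve is, roughly, what distinguishes sources moving toward normative practice from those stuck in the nascent, pilot-bound region.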

The Technology Axis Model

The Technology Axis Model (TAM) positions technology innovation cycles as happening in four inter-linked areas (see the sketch after this list):
- The development of the underlying technology.
- The social norms and values associated with the innovation and its effects.
- The applications that are emerging as entrepreneurs seek to introduce the technology into markets.
- The infrastructure and systems that govern the technology area.
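The model itself is qualitative, but a minimal sketch shows how an assessor might record a novel data source against the four areas. The field names, the 0–5 scale, and the example scores are our own illustrative assumptions, not part of the model.

```python
from dataclasses import dataclass

@dataclass
class TamAssessment:
    """A novel data source scored against the four TAM areas,
    each rated 0 (nascent) to 5 (mature)."""
    source: str
    technology: int      # maturity of the underlying technology
    social_norms: int    # norms and values around the innovation
    applications: int    # market-facing applications emerging
    infrastructure: int  # systems that govern the technology area

    def weakest_axis(self) -> str:
        """The area most likely to hold a data source in pilot phase."""
        axes = {
            "technology": self.technology,
            "social norms": self.social_norms,
            "applications": self.applications,
            "infrastructure": self.infrastructure,
        }
        return min(axes, key=axes.get)

# Illustrative scoring of high-resolution population mapping.
pop_maps = TamAssessment("population density maps", 4, 3, 4, 2)
print(pop_maps.weakest_axis())  # -> "infrastructure"
```

The point of such an exercise is the comparison across axes: a source can be technically mature yet stalled because the governing infrastructure or social norms lag behind.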

The Technology Axis Model was originally developed by Bill Sharpe as a way to help engineers think more widely about the contexts that surround technology innovation. Our definitions are drawn from a forthcoming paper on the Technology Axis Model by Bill Sharpe and Andrew Curry…(More)”.
OECD Report: “…explores the concept of openness in artificial intelligence (AI), including relevant terminology and how different degrees of openness can exist. It explains why the term “open source” – a term rooted in software – does not fully capture the complexities specific to AI. This paper analyses current trends in open-weight foundation models using experimental data, illustrating both their potential benefits and associated risks. It incorporates the concept of marginality to further inform this discussion. By presenting information clearly and concisely, the paper seeks to support policy discussions on how to balance the openness of generative AI foundation models with responsible governance…(More)”.
Book by Cass R. Sunstein: “New technologies are offering companies, politicians, and others unprecedented opportunity to manipulate us. Sometimes we are given the illusion of power – of freedom – through choice, yet the game is rigged, pushing us in specific directions that lead to less wealth, worse health, and weaker democracy. In Manipulation, nudge-theory pioneer and New York Times bestselling author Cass Sunstein offers a new definition of manipulation for the digital age, explains why it is wrong, and shows what we can do about it. He reveals how manipulation compromises freedom and personal agency while threatening to reduce our well-being; he explains the difference between manipulation and unobjectionable forms of influence, including ‘nudges’; and he lifts the lid on online manipulation and manipulation by artificial intelligence, algorithms, and generative AI, as well as threats posed by deepfakes, social media, and ‘dark patterns,’ which can trick people into giving up time and money. Drawing on decades of groundbreaking research in behavioral science, this landmark book outlines steps we can take to counteract manipulation in our daily lives and offers guidance to protect consumers, investors, and workers…(More)”.
Report by Cami Rincon and Jorge Perez: “We find, however, that the adoption of immersive technologies today is most significantly characterised by niche use cases, rather than by widely adopted general-purpose use cases. These uses leverage specialised technical functions to augment particular tasks in distinct sectors.
Despite a decline in attention from regulators, venture capital investors, consumers and the media, alongside growing interest in new advances in generative AI, certain immersive technologies have continued to receive significant enterprise investment and have seen market size growth and improved capabilities reflected in specialised use cases.
Many of these use cases take place in high-impact industries, augment safety-critical tasks and overlap with vulnerable groups, such as children and people receiving mental health care. These factors create significant potential for risk. With these advancements come a host of new regulatory, policy and ethical questions that regulators and policymakers will need to consider.
Rather than treating immersive technologies as ‘general purpose’, regulators and policymakers may need to look to specific use cases in specific sectors in order to govern them effectively.
Our analysis finds several obstacles that hinder immersive technology products from reaching widespread adoption…(More)”.
Paper by Michał Klincewicz, Mark Alfano, and Amir Ebrahimi Fard: “At least since Francis Bacon, the slogan “knowledge is power” has been used to capture the relationship between decision-making at a group level and information. We know that being able to shape the informational environment for a group is a way to shape their decisions; it is essentially a way to make decisions for them. This paper focuses on strategies that are intentionally, by design, impactful on the decision-making capacities of groups, effectively shaping their ability to take advantage of information in their environment. Among these, the best known are political rhetoric, propaganda, and misinformation. The phenomenon this paper singles out is a relatively new strategy, which we call slopaganda. According to The Guardian, News Corp Australia is currently churning out 3000 “local” generative AI (GAI) stories each week. In the coming years, such “generative AI slop” will present multiple knowledge-related (epistemic) challenges. We draw on contemporary research in cognitive science and artificial intelligence to diagnose the problem of slopaganda, describe some recent troubling cases, and then suggest several interventions that may help to counter slopaganda…(More)”.
Article by Rod Schoonover, Daniel P. Aldrich, and Daniel Hoyer: “The emergent reality of complex risk demands a fundamental change in how we conceptualize it. To date, policymakers, risk managers, and insurers—to say nothing of ordinary people—have consistently treated disasters as isolated events. Our mental model imagines a linear progression of unfortunate, unpredictable episodes, unfolding without relation to one another or to their own long-term and widely distributed effects. A hurricane makes landfall, we rebuild, we move on. A pandemic emerges, we develop vaccines, we return to normal.
This outdated model of risk leads to reactive, short-sighted policies rather than proactive prevention and preparedness strategies. Key public programs are designed around discrete, historically bounded events, not today’s cascading and compounding crises. For instance, under the US Stafford Act, the Federal Emergency Management Agency (FEMA) must issue separate declarations for each major disaster, delaying aid and fragmenting coordination when multiple hazards strike. The National Flood Insurance Program still relies on historical floodplain maps that by definition underestimate future risks from climate change. Federal crop insurance supports farmers against crop losses from drought, excess moisture, damaging freezes, hail, wind, and disease, but today diverse stressors such as extreme heat and pollinator loss are converging with other known risks.
Our struggle to grasp complex risk has roots in human psychology. Humans have a well-documented tendency to notice and focus on immediate, visible dangers rather than long-term or abstract ones. Even when we can recognize those longer-term, larger-scale threats, we typically set them aside in favor of more immediate and tangible ones. As a result, lawmakers and emergency managers, like people in general, often succumb to what psychologists and cognitive scientists call the availability heuristic: policies are designed to react to whatever is most salient, which tends to be the most recent, most dramatic incidents—those most readily available to the mind.
These habits—and the policies that reflect them—do not account for the slow onset of risks, or their intersection with other sources of hazard, during the time when disaster might be prevented. Additionally, both cognitive biases and financial incentives may lead people to discount future risks, even when their probability and likely impact are well understood, and to struggle with conceptualizing phenomena that operate on global scales. Our mental processes are good at understanding immediate, tangible risk, not complex risk scenarios evolving over time and space…(More)”.