The Future of Health Is Preventive — If We Get Data Governance Right


Article by Stefaan Verhulst: “After a long gestation period of three years, the European Health Data Space (EHDS) is now coming into effect across the European Union, potentially ushering in a new era of health data access, interoperability, and innovation. As this ambitious initiative enters the implementation phase, it brings with it the opportunity to fundamentally reshape how health systems across Europe operate. More generally, the EHDS contains important lessons (and some cautions) for the rest of the world, suggesting how a fragmented, reactive model of healthcare may transition to one that is more integrated, proactive, and prevention-oriented.

For too long, health systems–in the EU and around the world–have been built around treating diseases rather than preventing them. Now, we have an opportunity to change that paradigm. Data, and especially the advent of AI, give us the tools to predict and intervene before illness takes hold. Data offers the potential for a system that prioritizes prevention–one where individuals receive personalized guidance to stay healthy, policymakers access real-time evidence to address risks before they escalate, and epidemics are predicted weeks in advance, enabling proactive, rapid, and highly effective responses.

But to make AI-powered preventive health care a reality, and to make the EHDS a success, we need a new data governance approach, one that would include two key components:

  • The ability to reuse data collected for other purposes (e.g., mobility, retail sales, workplace trends) to improve health outcomes.
  • The ability to integrate different data sources–clinical records and electronic health records (EHRs), but also environmental, social, and economic data — to build a complete picture of health risks.

In what follows, we outline some critical aspects of this new governance framework, including responsible data access and reuse (so-called secondary use), moving beyond traditional consent models to a social license for reuse, data stewardship, and the need to prioritize high-impact applications. We conclude with some specific recommendations for the EHDS, built from the preceding general discussion about the role of AI and data in preventive health…(More)”.

Unlocking Public Value with Non-Traditional Data: Recent Use Cases and Emerging Trends


Article by Adam Zable and Stefaan Verhulst: “Non-Traditional Data (NTD)—digitally captured, mediated, or observed data such as mobile phone records, online transactions, or satellite imagery—is reshaping how we identify, understand, and respond to public interest challenges. As part of the Third Wave of Open Data, these often privately held datasets are being responsibly re-used through new governance models and cross-sector collaboration to generate public value at scale.

In our previous post, we shared emerging case studies across health, urban planning, the environment, and more. Several months later, the momentum has not only continued but diversified. New projects reaffirm NTD’s potential—especially when linked with traditional data, embedded in interdisciplinary research, and deployed in ways that are privacy-aware and impact-focused.

This update profiles recent initiatives that push the boundaries of what NTD can do. Together, they highlight the evolving domains where this type of data is helping to surface hidden inequities, improve decision-making, and build more responsive systems:

  • Financial Inclusion
  • Public Health and Well-Being
  • Socioeconomic Analysis
  • Transportation and Urban Mobility
  • Data Systems and Governance
  • Economic and Labor Dynamics
  • Digital Behavior and Communication…(More)”.

2025 Technology and innovation report


UNCTAD Report: “Frontier technologies, particularly artificial intelligence (AI), are profoundly transforming our economies and societies, reshaping production processes, labour markets and the ways in which we live and interact. Will AI accelerate progress towards the Sustainable Development Goals, or will it exacerbate existing inequalities, leaving the underprivileged further behind? How can developing countries harness AI for sustainable development? AI is the first technology in history that can make decisions and generate ideas on its own. This sets it apart from traditional technologies and challenges the notion of technological neutrality.

The rapid development of AI has also outpaced the ability of Governments to respond effectively. The Technology and Innovation Report 2025 aims to guide policymakers through the complex AI landscape and support them in designing science, technology and innovation (STI) policies that foster inclusive and equitable technological progress.

The world already has significant digital divides, and with the rise of AI, these could widen even further. In response, the Report argues for AI development based on inclusion and equity, shifting the focus from technology to people. AI technologies should complement rather than displace human workers, and production should be restructured so that the benefits are shared fairly among countries, firms and workers. It is also important to strengthen international collaboration, to enable countries to co-create inclusive AI governance.


The Report examines five core themes:
A. AI at the technological frontier
B. Leveraging AI for productivity and workers’ empowerment
C. Preparing to seize AI opportunities
D. Designing national policies for AI
E. Global collaboration for inclusive and equitable AI…(More)”

Fostering Open Data


Paper by Uri Y. Hacohen: “Data is often heralded as “the world’s most valuable resource,” yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data, defined as data that is available, accessible, and usable, holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.

The Social Biome: How Everyday Communication Connects and Shapes Us


Book by Andy J. Merolla and Jeffrey A. Hall: “We spend much of our waking lives communicating with others. How does each moment of interaction shape not only our relationships but also our worldviews? And how can we create moments of connection that improve our health and well-being, particularly in a world in which people are feeling increasingly isolated?
 
Drawing from their extensive research, Andy J. Merolla and Jeffrey A. Hall establish a new way to think about our relational life: as existing within “social biomes”—complex ecosystems of moments of interaction with others. Each interaction we have, no matter how unimportant or mundane it might seem, is a building block of our identities and beliefs. Consequently, the choices we make about how we interact and who we interact with—and whether we interact at all—matter more than we might know. Merolla and Hall offer a sympathetic, practical guide to our vital yet complicated social lives and propose realistic ways to embrace and enhance connection and hope…(More)”.

How is AI augmenting collective intelligence for the SDGs?


Article by UNDP: “Increasingly, AI techniques such as natural language processing, machine learning and predictive analytics are being used alongside the most common methods in collective intelligence, from citizen science and crowdsourcing to digital democracy platforms.

At its best, AI can be used to augment and scale the intelligence of groups. In this section we describe the potential offered by these new combinations of human and machine intelligence. First we look at the applications that are most common, where AI is being used to enhance efficiency and categorize unstructured data, before turning to the emerging role of AI – where it helps us to better understand complex systems.

These are the three main ways AI and collective intelligence are currently being used together for the SDGs:

1. Efficiency and scale of data processing

AI is being effectively incorporated into collective intelligence projects where timing is paramount and a key insight is buried deep within large volumes of unstructured data. This combination of AI and collective intelligence is most useful when decision makers require an early warning to help them manage risks and distribute public resources more effectively. For example, Dataminr’s First Alert system uses pre-trained machine learning models to sift through text and images scraped from the internet, as well as other data streams, such as audio broadcasts, to isolate early signals that anticipate emergency events…(More)”. (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern).
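To make this first pattern more concrete, the sketch below shows how a pre-trained language model could sift a stream of short texts and surface candidate early-warning signals. It is a minimal, generic illustration rather than a description of Dataminr’s First Alert system; the model name, candidate labels, and confidence threshold are assumptions chosen for demonstration.

```python
from transformers import pipeline

# Zero-shot classifier built on a public NLI model (illustrative choice only;
# assumes the transformers library and a PyTorch backend are installed).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical set of emergency categories a decision maker might monitor.
CRISIS_LABELS = ["flood", "wildfire", "disease outbreak", "power outage", "unrelated"]

def flag_early_signals(posts, threshold=0.7):
    """Return posts whose strongest crisis label clears the confidence threshold."""
    alerts = []
    for post in posts:
        result = classifier(post, candidate_labels=CRISIS_LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_label != "unrelated" and top_score >= threshold:
            alerts.append({"text": post, "label": top_label, "score": round(top_score, 2)})
    return alerts

if __name__ == "__main__":
    sample = [
        "River levels rising fast near the bridge, several streets already under water",
        "Great concert last night, the band was amazing",
    ]
    for alert in flag_early_signals(sample):
        print(alert)
```

In a real early-warning pipeline the classifier would sit downstream of data collection (scraped posts, audio transcripts, sensor feeds) and upstream of human review, which is where the collective-intelligence element comes back in.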

Data Localization: A Global Threat to Human Rights Online


Article by Freedom House: “From Pakistan to Zambia, governments around the world are increasingly proposing and passing data localization legislation. These laws, which refer to the rules governing the storage and transfer of electronic data across jurisdictions, are often justified as addressing concerns such as user privacy, cybersecurity, national security, and monopolistic market practices. Notwithstanding these laudable goals, data localization initiatives cause more harm than good, especially in legal environments with poor rule of law.

Data localization requirements can take many different forms. A government may require all companies collecting and processing certain types of data about local users to store the data on servers located in the country. Authorities may also restrict the foreign transfer of certain types of data or allow it only under narrow circumstances, such as after obtaining the explicit consent of users, receiving a license or permit from a public authority, or conducting a privacy assessment of the country to which the data will be transferred.

While data localization can have significant economic and security implications, the focus of this piece—in line with that of the Global Network Initiative and Freedom House—is on its potential human rights impacts, which are varied. Freedom House’s research shows that the rise in data localization policies worldwide is contributing to the global decline of internet freedom. Without robust transparency and accountability frameworks embedded into these provisions, digital rights are often put on the line. As these types of legislation continue to pop up globally, the need for rights-respecting solutions and norms for cross-border data flows is greater than ever…(More)”.

Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded their potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to overpredict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data is more important for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieve meaningful progress in fire activity forecasting…(More)”.
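The abstract’s central claim, that adding fuel and ignition information to a data-driven model cuts the false alarms produced by weather-only danger thresholds, can be illustrated with a small sketch. The synthetic data, feature set, and gradient-boosting classifier below are assumptions for demonstration, not the authors’ operational configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic grid cells: fire-weather index, fuel load, fuel moisture, recent ignitions.
# (Assumed feature set, for illustration only.)
fwi = rng.uniform(0, 100, n)
fuel_load = rng.uniform(0, 1, n)
fuel_moisture = rng.uniform(0, 1, n)
ignitions = rng.poisson(0.5, n)

# Fires need dangerous weather AND available, dry fuel AND an ignition source.
p_fire = (fwi / 100) * fuel_load * (1 - fuel_moisture) * np.clip(ignitions, 0, 1)
fire = rng.binomial(1, np.clip(p_fire, 0, 1))

X = np.column_stack([fwi, fuel_load, fuel_moisture, ignitions])
X_tr, X_te, y_tr, y_te, fwi_tr, fwi_te = train_test_split(X, fire, fwi, random_state=0)

# Baseline: raise an alarm whenever fire weather alone signals "high danger".
baseline_alarm = fwi_te > 70

# Data-driven model that also sees fuel and ignition information.
model = GradientBoostingClassifier().fit(X_tr, y_tr)
ml_alarm = model.predict_proba(X_te)[:, 1] > 0.5

# Higher precision means fewer false alarms among the alerts actually raised.
print("precision, weather-only alarms:", precision_score(y_te, baseline_alarm, zero_division=0))
print("precision, fuel-aware ML alarms:", precision_score(y_te, ml_alarm, zero_division=0))
```

In this toy setting the fuel-aware model achieves higher precision because it can discount high-danger weather in cells where fuel is scarce or wet, which is the mechanism the paper attributes to its operational gains.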

Developing countries are struggling to achieve their technology aims. Shared digital infrastructure is the answer


Article by Nii Simmonds: “The digital era offers remarkable prospects for both economic advancement and social development. Yet for emerging economies lacking energy, this potential often seems out of reach. The harsh truths of inconsistent electricity supply and scarce resources loom large over their digital ambitions. Nevertheless, a ray of hope shines through a strategy I call shared digital infrastructure (SDI). This cooperative model has the ability to turn these obstacles into opportunities for growth. By collaborating through regional country partnerships and bodies such as the Association of Southeast Asian Nations (ASEAN), the African Union (AU) and the Caribbean Community (CARICOM), these countries can harness the revolutionary power of digital technology, despite the challenges.

The digital economy is a critical driver of global GDP, with innovations in artificial intelligence, e-commerce and financial technology transforming industries at an unprecedented pace. At the heart of this transformation are data centres, which serve as the backbone of digital services, cloud computing and AI-driven applications. Yet many developing nations struggle to establish and maintain such facilities due to high energy costs, inadequate grid reliability and limited investment capital…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies in AI: Enabling Data Use and Operationalizing Privacy by Design and Default


Paper by the Centre for Information Policy Leadership at Hunton (“CIPL”): “provides an in-depth exploration of how privacy-enhancing technologies (“PETs”) are being deployed to address privacy within artificial intelligence (“AI”) systems. It aims to describe how these technologies can help operationalize privacy by design and default and serve as key business enablers, allowing companies and public sector organizations to access, share and use data that would otherwise be unavailable. It also seeks to demonstrate how PETs can address challenges and provide new opportunities across the AI life cycle, from data sourcing to model deployment, and includes real-world case studies…

As further detailed in the Paper, CIPL’s recommendations for boosting the adoption of PETs for AI are as follows:

Stakeholders should adopt a holistic view of the benefits of PETs in AI. PETs deliver value beyond addressing privacy and security concerns, such as fostering trust and enabling data sharing. It is crucial that stakeholders consider all these advantages when making decisions about their use.

Regulators should issue clearer and more practical guidance to reduce regulatory uncertainty in the use of PETs in AI. While regulators increasingly recognize the value of PETs, clearer and more practical guidance is needed to help organizations implement these technologies effectively.

Regulators should adopt a risk-based approach to assess how PETs can meet standards for data anonymization, providing clear guidance to eliminate uncertainty. There is uncertainty around whether various PETs meet legal standards for data anonymization. A risk-based approach to defining anonymization standards could encourage wider adoption of PETs.

Deployers should take steps to provide contextually appropriate transparency to customers and data subjects. Given the complexity of PETs, deployers should ensure customers and data subjects understand how PETs function within AI models…(More)”.
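As one concrete illustration of what a PET can look like in practice, the sketch below applies differential privacy, a commonly cited PET, to a simple aggregate query over a toy dataset. The example, parameter choices, and data are generic assumptions for illustration and are not drawn from the CIPL Paper.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Generic sketch of one widely used PET; the clipping bounds and epsilon
    are illustrative assumptions, not recommendations from the CIPL Paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(values)
    sensitivity = (upper - lower) / n          # max change from altering one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

if __name__ == "__main__":
    ages = [23, 35, 41, 29, 52, 38, 47, 31]    # toy attribute from a hypothetical dataset
    print("true mean:", np.mean(ages))
    print("DP mean (epsilon=1.0):", dp_mean(ages, lower=18, upper=90, epsilon=1.0))
```

The calibrated noise lets an organization publish or share the aggregate while limiting what can be inferred about any single individual, which is the kind of data-sharing enablement the Paper attributes to PETs more broadly.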