Data equity and official statistics in the age of private sector data proliferation


Paper by Pietro Gennari: “Over the last few years, the private sector has become a primary generator of data due to widespread digitisation of the economy and society, the use of social media platforms, and advancements in technologies such as the Internet of Things and AI. Unlike traditional sources, these new data streams often offer real-time information and unique insights into people’s behaviour, social dynamics, and economic trends. However, the proprietary nature of most private sector data presents challenges for public access, transparency, and governance that have led to fragmented, often conflicting, data governance arrangements worldwide. This lack of coherence can exacerbate inequalities, limit data access, and restrict data’s utility as a global asset.

Within this context, data equity has emerged as a key principle underpinning proposals for new data governance frameworks. The term “data equity” refers to the fair and inclusive access, use, and distribution of data so that it benefits all sections of society, regardless of socioeconomic status, race, or geographic location. It involves ensuring that the collection, processing, and use of data do not disproportionately benefit or harm any particular group, and it seeks to address disparities in data access and quality that can perpetuate social and economic inequalities. This is important because data systems significantly influence access to resources and opportunities in society. In this sense, data equity aims to correct imbalances that have historically affected various groups and to ensure that decision-making based on data does not perpetuate these inequities…(More)”.

When forecasting and foresight meet data and innovation: toward a taxonomy of anticipatory methods for migration policy


Paper by Sara Marcucci, Stefaan Verhulst and María Esther Cervantes: “The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures based on a sense of the future. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.

This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies…(More)”.

On the Shoulders of Others: The Importance of Regulatory Learning in the Age of AI


Paper by Urs Gasser and Viktor Mayer-Schönberger: “…International harmonization of regulation is the right strategy when the appropriate regulatory ends and means are sufficiently clear to reap efficiencies of scale and scope. When this is not the case, a push for efficiency through uniformity is premature and may lead to a suboptimal regulatory lock-in: the establishment of a rule framework that is either inefficient in the use of its means to reach the intended goal, or furthers the wrong goal, or both.


A century ago, economist Joseph Schumpeter suggested that companies have two distinct strategies to achieve success. The first is to employ economies of scale and scope to lower their cost. It’s essentially a push for improved efficiency. The other strategy is to invent a new product (or production process) that may not, at least initially, be hugely efficient, but is nevertheless advantageous because demand for the new product is price inelastic. For Schumpeter this was the essence of innovation. But, as Schumpeter also argued, innovation is not a simple, linear, and predictable process. Often, it happens in fits and starts, and can’t be easily commandeered or engineered.


As innovation is hard to foresee and plan, the best way to facilitate it is to enable a wide variety of different approaches and solutions. Public policies to foster startups and entrepreneurship in many countries stem from this view. Take, for instance, the policy of regulatory sandboxing, i.e. the idea that, for a limited time, certain sectors should be regulated only lightly or not at all…(More)”.

Generative AI for data stewards: enhancing accuracy and efficiency in data governance


Paper by Ankush Reddy Sugureddy: “Data quality has become an essential component of organisational success in a world largely shaped by data, where analytics increasingly informs strategic decisions. Failure to improve data quality can lead to undesirable outcomes such as poor decisions, ineffective strategies, dysfunctional operations, lost commercial prospects, and customer attrition. As organisations shift their focus towards transformative methods such as generative artificial intelligence, several use cases emerge that can help improve data quality. Incorporating generative artificial intelligence into data governance frameworks can streamline procedures such as data classification, metadata management, and policy enforcement, which in turn reduces the workload of human data stewards and minimises the possibility of human error. To ensure compliance with legal standards such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), generative artificial intelligence can analyse enormous datasets, using machine learning algorithms to discover patterns, inconsistencies, and compliance issues…(More)”.
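
To make the compliance use case concrete, here is a minimal sketch of the kind of automated scan a data steward might delegate to software. It uses simple rules rather than a generative model, and the record fields, regular expressions, and `flag_pii` helper are illustrative assumptions, not details from the paper:

```python
# Illustrative sketch only: a rule-based PII scanner standing in for the
# kind of compliance checks the paper envisions automating. The field
# names and regexes below are assumptions for demonstration purposes.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

records = [
    {"id": 1, "notes": "Contact jane.doe@example.org for details"},
    {"id": 2, "notes": "Shipment delayed, call +1 (555) 012-3456"},
    {"id": 3, "notes": "No personal data here"},
]

def flag_pii(rows):
    """Yield (record id, field, PII type) for every suspected leak."""
    for row in rows:
        for field, value in row.items():
            if not isinstance(value, str):
                continue
            for pii_type, pattern in PII_PATTERNS.items():
                if pattern.search(value):
                    yield row["id"], field, pii_type

for rec_id, field, pii_type in flag_pii(records):
    print(f"record {rec_id}: possible {pii_type} in field '{field}'")
```

In a real governance pipeline, rule-based flags like these would plausibly be combined with model-based classifiers and routed to a human steward for review, consistent with the paper’s emphasis on reducing rather than replacing steward workloads.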

Data Sovereignty and Open Sharing: Reconceiving Benefit-Sharing and Governance of Digital Sequence Information


Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, the tension between open science principles and data control rights is explained. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.

Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI, such as natural language processing (NLP), decision trees, and basic machine learning algorithms, that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.
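
As a concrete illustration of the transparent, resource-efficient models the paper highlights, here is a minimal decision-tree sketch in Python. The features, synthetic labels, and thresholds are invented for demonstration and are not drawn from the paper:

```python
# Hedged sketch: a small, auditable decision-support model of the kind the
# paper contrasts with black-box generative AI. All data below is synthetic.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per district: [unemployment rate %, clinic access score 0-10]
X = [[12.0, 2], [4.5, 8], [9.8, 3], [3.1, 9], [11.2, 4], [5.0, 7]]
# Synthetic labels: 1 = district was flagged as an intervention priority
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a black-box model, the learned rules can be printed and audited,
# which is one reason such models remain attractive for policymaking.
print(export_text(model, feature_names=["unemployment_pct", "clinic_access"]))
print("new district [8.0, 5] ->", model.predict([[8.0, 5]])[0])
```

The point of the example is not predictive power but explainability: every branch of the fitted tree can be read, challenged, and documented, which supports the human oversight the authors recommend.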

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care… Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

Using human mobility data to quantify experienced urban inequalities


Paper by Fengli Xu et al: “The lived experience of urban life is shaped by personal mobility through dynamic relationships and resources, marked not only by access and opportunity but also by inequality and segregation. The recent availability of fine-grained mobility data and context attributes, ranging from venue type to demographic mixture, offers researchers a deeper understanding of experienced inequalities at scale, and poses many new questions. Here we review emerging uses of urban mobility behaviour data, and propose an analytic framework to represent mobility patterns as a temporal bipartite network between people and places. As this network reconfigures over time, analysts can track experienced inequality along three critical dimensions: social mixing with others from specific demographic backgrounds, access to different types of facilities, and spontaneous adaptation to unexpected events, such as epidemics, conflicts or disasters. This framework traces the dynamic, lived experiences of urban inequality and complements prior work on static inequalities experienced at home and work…(More)”.
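
To sketch how such a temporal bipartite network might be represented in code, here is a minimal Python example. The synthetic visit records and the entropy-based mixing score are illustrative assumptions standing in for the paper’s actual data and metrics:

```python
# Minimal sketch of a temporal bipartite people-place network. The visit
# records are synthetic, and the entropy score is an illustrative proxy
# for "social mixing", not the exact measure proposed in the paper.
import math
from collections import defaultdict

import networkx as nx

# Hypothetical records: (person, place, hour of day, person's demographic group)
visits = [
    ("p1", "cafe", 9, "A"), ("p2", "cafe", 9, "B"),
    ("p3", "gym", 18, "A"), ("p1", "gym", 18, "A"),
    ("p4", "clinic", 11, "B"), ("p2", "clinic", 11, "B"),
]

# One bipartite snapshot per time slice; edges link people to visited places.
snapshots = defaultdict(nx.Graph)
for person, place, hour, group in visits:
    g = snapshots[hour]
    g.add_node(person, bipartite=0, group=group)
    g.add_node(place, bipartite=1)
    g.add_edge(person, place)

def mixing_entropy(g, place):
    """Shannon entropy of visitor demographics at a place (higher = more mixed)."""
    groups = [g.nodes[p]["group"] for p in g.neighbors(place)]
    entropy = 0.0
    for grp in set(groups):
        share = groups.count(grp) / len(groups)
        entropy -= share * math.log2(share)
    return entropy

for hour, g in sorted(snapshots.items()):
    for place in [n for n, d in g.nodes(data=True) if d["bipartite"] == 1]:
        print(f"t={hour:02d}h  {place:<7} mixing entropy = {mixing_entropy(g, place):.2f}")
```

Tracking how these per-place scores shift across time slices, or before and after a shock such as an epidemic, is one simple way to operationalize the dynamic, experienced inequality the framework describes.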

Regulatory Markets: The Future of AI Governance


Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. Regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator, are proposed. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.

Tab the lab: A typology of public sector innovation labs


Paper by Aline Stoll and Kevin C. Andermatt: “Many public sector organizations set up innovation laboratories in response to the pressure to tackle societal problems and the high expectations placed on them to innovate public services. Our understanding of the role public sector innovation laboratories play in enhancing the innovation capacity of administrations is still limited. It is challenging to assess or compare the impact of innovation laboratories because of how they operate and what they do. This paper closes this research gap by offering a typology that organizes the diverse nature of innovation labs and makes it possible to compare various lab settings. The proposed typology identifies possible factors relevant to increasing the innovation capacity of public organizations. The findings are based on a literature review of primarily explorative papers and case studies, which made it possible to identify the relevant criteria. The proposed typology covers three dimensions: (1) value (intended innovation impact of the labs); (2) governance (role of government and financing model); and (3) network (stakeholders in the collaborative arrangements). Comparing European countries and regions with regard to the distribution of labs shows that labs in Nordic and British countries tend to have a broader scope than those in continental European countries…(More)”.