Paper by Ankush Reddy Sugureddy: “The quality of data becomes an essential component for the success of an organisation in a world that is largely influenced by data, where data analytics is becoming increasingly popular in the process of informing strategic decisions. Failure to improve data quality can lead to undesirable outcomes such as poor decisions, ineffective strategies, dysfunctional operations, lost commercial prospects, and customer attrition. As organisations shift their focus towards transformative methods such as generative artificial intelligence, several use cases may emerge that have the potential to aid the improvement of data quality. Incorporating generative artificial intelligence into data governance frameworks can streamline procedures such as data classification, metadata management, and policy enforcement, which in turn reduces the workload of human data stewards and minimises the possibility of human error. To ensure compliance with legal standards such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), generative artificial intelligence may analyse enormous datasets by utilising machine learning algorithms to discover patterns, inconsistencies, and compliance issues…(More)”.
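To make the pattern-and-inconsistency detection described in the abstract concrete, here is a minimal, hypothetical sketch; a generic anomaly detector stands in for the generative models the paper discusses, and the column names, values, and contamination rate are invented for illustration only.

```python
# Illustrative sketch only: flag potentially inconsistent records in a customer
# table with a generic anomaly detector, then route them to a human data steward.
# This is not the paper's method; all data and parameters are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

records = pd.DataFrame({
    "age": [34, 29, 41, -7, 38, 132],               # -7 and 132 are implausible
    "monthly_spend": [120.0, 95.5, 110.2, 87.0, 15000.0, 101.3],
})

model = IsolationForest(contamination=0.3, random_state=0)
records["flagged"] = model.fit_predict(records[["age", "monthly_spend"]]) == -1

# Flagged rows would be reviewed by a human data steward rather than auto-corrected.
print(records[records["flagged"]])
```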
Data Sovereignty and Open Sharing: Reconceiving Benefit-Sharing and Governance of Digital Sequence Information
Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI, and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, the tension between open science principles and data control rights is explained. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.
Reimagining the Policy Cycle in the Age of Artificial Intelligence
Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI, such as natural language processing (NLP), decision trees, and basic machine learning algorithms, that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision-support.
This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.
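To ground the point about transparent, resource-efficient models such as decision trees, here is a minimal, self-contained sketch of a decision-support step: a shallow tree fitted to invented policy indicators and printed as human-readable rules. It is not drawn from the paper; all feature names, values, and labels are hypothetical.

```python
# Hypothetical decision-support sketch: a shallow, interpretable decision tree
# over invented policy indicators, printed as readable rules for policymakers.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [unemployment_rate, median_income_k, prior_uptake_pct]
X = [[7.2, 38, 12], [4.1, 52, 30], [9.5, 31, 8], [5.0, 47, 25],
     [8.8, 33, 10], [3.9, 55, 33], [6.5, 40, 18], [10.1, 29, 6]]
y = [0, 1, 0, 1, 0, 1, 1, 0]   # 1 = pilot programme met its uptake target

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "unemployment_rate", "median_income_k", "prior_uptake_pct"]))
```

The printed rules are the point here: unlike an opaque model, the tree can be read, questioned, and iterated on by policy actors.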

Patients’ Trust in Health Systems to Use Artificial Intelligence
Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.
We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care…. Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.
Using human mobility data to quantify experienced urban inequalities
Paper by Fengli Xu et al: “The lived experience of urban life is shaped by personal mobility through dynamic relationships and resources, marked not only by access and opportunity, but also inequality and segregation. The recent availability of fine-grained mobility data and context attributes ranging from venue type to demographic mixture offers researchers a deeper understanding of experienced inequalities at scale, and poses many new questions. Here we review emerging uses of urban mobility behaviour data, and propose an analytic framework to represent mobility patterns as a temporal bipartite network between people and places. As this network reconfigures over time, analysts can track experienced inequality along three critical dimensions: social mixing with others from specific demographic backgrounds, access to different types of facilities, and spontaneous adaptation to unexpected events, such as epidemics, conflicts or disasters. This framework traces the dynamic, lived experiences of urban inequality and complements prior work on static inequalities experienced at home and work…(More)”.
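As a concrete illustration of the person-place representation described above, here is a minimal sketch using networkx. The visit records, node labels, and the simple access measure are invented for illustration and are not the authors' implementation.

```python
# Sketch of a temporal bipartite network between people and places, built from
# hypothetical visit records (person, place, hour). Illustrative only.
import networkx as nx

visits = [
    ("p1", "grocery_A", 9), ("p2", "grocery_A", 9),
    ("p1", "clinic_B", 14), ("p3", "park_C", 14),
    ("p2", "park_C", 18), ("p3", "grocery_A", 18),
]

G = nx.Graph()
for person, place, hour in visits:
    G.add_node(person, bipartite="people")
    G.add_node(place, bipartite="places")
    # Keeping time on the edge lets the network be sliced into snapshots.
    G.add_edge(person, place, hour=hour)

# A crude "access" view: which facilities each person reaches.
for person in ("p1", "p2", "p3"):
    print(person, sorted(G.neighbors(person)))

# Temporal slice: the evening snapshot of the bipartite network.
evening = [(u, v) for u, v, h in G.edges(data="hour") if h >= 17]
print("evening edges:", evening)
```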
Regulatory Markets: The Future of AI Governance
Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. Regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator, are proposed. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.
Tab the lab: A typology of public sector innovation labs
Paper by Aline Stoll and Kevin C. Andermatt: “Many public sector organizations set up innovation laboratories in response to the pressure to tackle societal problems and the high expectations placed on them to innovate public services. Our understanding of the public sector innovation laboratories’ role in enhancing the innovation capacity of administrations is still limited. It is challenging to assess or compare the impact of innovation laboratories because of how they operate and what they do. This paper closes this research gap by offering a typology that organizes the diverse nature of innovation labs and makes it possible to compare various lab settings. The proposed typology identifies factors that may be relevant to increasing the innovation capacity of public organizations. The findings are based on a literature review of primarily explorative papers and case studies, which made it possible to identify the relevant criteria. The proposed typology covers three dimensions: (1) value (intended innovation impact of the labs); (2) governance (role of government and financing model); and (3) network (stakeholders in the collaborative arrangements). Comparing European countries and regions with regard to the distribution of labs shows that Nordic and British countries tend to have a broader scope than continental European countries…(More)”.
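Purely as an illustration of how the three dimensions could be encoded when comparing lab settings, here is a hypothetical sketch; the category values in the enums are invented examples, not the categories proposed in the paper.

```python
# Hypothetical encoding of the three-dimensional typology (value, governance,
# network) for comparing lab settings. Category values are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Value(Enum):
    SERVICE_IMPROVEMENT = "incremental service innovation"
    SYSTEM_CHANGE = "broader systemic innovation"

class Governance(Enum):
    GOVERNMENT_LED = "government-run, publicly financed"
    HYBRID = "co-financed with external partners"

class Network(Enum):
    INTERNAL = "administration-internal stakeholders"
    OPEN = "citizens, academia and private partners"

@dataclass
class InnovationLab:
    name: str
    value: Value
    governance: Governance
    network: Network

lab = InnovationLab("Example Lab", Value.SYSTEM_CHANGE,
                    Governance.HYBRID, Network.OPEN)
print(lab)
```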
Policy design labs and uncertainty: can they innovate, and retain and circulate learning?
Paper by Jenny Lewis: “Around the world in recent times, numerous policy design labs have been established, related to a rising focus on the need for public sector innovation. These labs are a response to the challenging nature of many societal problems and often have a purpose of navigating uncertainty. They do this by “labbing” ill-structured problems through moving them into an experimental environment, outside of traditional government structures, and using a design-for-policy approach. Labs can, therefore, be considered as a particular type of procedural policy tool, used in attempts to change how policy is formulated and implemented to address uncertainty. This paper considers the role of policy design labs in learning and explores the broader governance context they are embedded within. It examines whether labs have the capacity to innovate and also retain and circulate learning to other policy actors. It argues that labs have considerable potential to change the spaces of policymaking at the micro level and innovate, but for learning to be kept rather than lost, innovation needs to be institutionalized in governing structures at higher levels…(More)”.
It’s just distributed computing: Rethinking AI governance
Paper by Milton L. Mueller: “What we now lump under the unitary label “artificial intelligence” is not a single technology, but a highly varied set of machine learning applications enabled and supported by a globally ubiquitous system of distributed computing. The paper introduces a four-part conceptual framework for analyzing the structure of that system, which it labels the digital ecosystem. What we now call “AI” is then shown to be a general functionality of distributed computing. “AI” has been present in primitive forms from the origins of digital computing in the 1950s. Three short case studies show that large-scale machine learning applications have been present in the digital ecosystem ever since the rise of the Internet and provoked the same public policy concerns that we now associate with “AI.” The governance problems of “AI” are really caused by the development of this digital ecosystem, not by LLMs or other recent applications of machine learning. The paper then examines five recent proposals to “govern AI” and maps them to the constituent elements of the digital ecosystem model. This mapping shows that real-world attempts to assert governance authority over AI capabilities require systemic control of all four elements of the digital ecosystem: data, computing power, networks and software. “Governing AI,” in other words, means total control of distributed computing. A better alternative is to focus governance and regulation upon specific applications of machine learning. An application-specific approach to governance allows for a more decentralized, freer and more effective method of solving policy conflicts…(More)”
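The mapping exercise can be illustrated with a toy sketch: hypothetical governance proposals checked against the four ecosystem elements they would need to control. The proposal labels and mappings below are invented and do not reproduce the five proposals examined in the paper.

```python
# Toy mapping in the spirit of the framework: which of the four digital
# ecosystem elements would each (hypothetical) governance proposal need to
# control? Labels and mappings are illustrative assumptions.
ECOSYSTEM_ELEMENTS = ("data", "computing power", "networks", "software")

proposals = {
    "compute thresholds": {"computing power"},
    "model licensing": {"software"},
    "training-data audits": {"data"},
    "application-specific rules": {"software", "data"},
}

for name, touched in proposals.items():
    coverage = [e for e in ECOSYSTEM_ELEMENTS if e in touched]
    print(f"{name}: touches {coverage} ({len(coverage)}/4 elements)")
```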
Empowering open data sharing for social good: a privacy-aware approach
Paper by Tânia Carvalho et al: “The Covid-19 pandemic has affected the world at multiple levels. Data sharing was pivotal for advancing research to understand the underlying causes and implement effective containment strategies. In response, many countries have facilitated access to daily cases to support research initiatives, fostering collaboration between organisations and making such data available to the public through open data platforms. Despite the many advantages of data sharing, one of the major concerns before releasing health data is its impact on individuals’ privacy. Such a sharing process should adhere to state-of-the-art methods in Data Protection by Design and by Default. In this paper, we use a Covid-19 data set from Portugal’s second-largest hospital to show how it is feasible to ensure data privacy while improving the quality and maintaining the utility of the data. Our goal is to demonstrate how knowledge exchange in multidisciplinary teams of healthcare practitioners, data privacy, and data science experts is crucial to co-developing strategies that ensure high utility in de-identified data…(More).”
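As a minimal illustration of the kind of privacy-aware release the abstract describes, here is a hedged sketch, not the study's pipeline: quasi-identifiers are generalised and group sizes checked, a basic k-anonymity-style test. Column names, values, and the k threshold are assumptions for illustration.

```python
# Minimal privacy-aware sharing sketch (not the study's pipeline): generalise
# quasi-identifiers, then check group sizes before release. Data are invented.
import pandas as pd

cases = pd.DataFrame({
    "age": [23, 27, 31, 36, 38, 41, 44, 45],
    "postcode": ["4000-123", "4000-456", "4050-001", "4050-002",
                 "4100-010", "4100-020", "4100-030", "4100-040"],
    "test_result": ["pos", "neg", "pos", "pos", "neg", "neg", "pos", "neg"],
})

# Generalisation: 10-year age bands and truncated postcodes.
cases["age_band"] = (cases["age"] // 10) * 10
cases["region"] = cases["postcode"].str[:4]
released = cases[["age_band", "region", "test_result"]]

# k-anonymity-style check on the quasi-identifiers (k = 2 assumed here).
group_sizes = released.groupby(["age_band", "region"]).size()
print(released)
print("smallest group:", group_sizes.min(), "(needs >= 2 to release)")
```

If the smallest group falls below the chosen k, further generalisation or suppression would be applied before the data set is shared, trading some utility for privacy.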