Unpacking the B2G data sharing mechanism under the EU Data Act


Paper by Ludovica Paseri and Stefaan G. Verhulst: “The paper proposes an analysis of the business-to-government (B2G) data sharing mechanism envisaged by Regulation (EU) 2023/2854, the so-called Data Act. The Regulation, in force since 11 January 2024, will be applicable from 12 September 2025, requiring the actors involved to put in place a compliance process. The focus of the paper is to present an assessment of the mechanism foreseen by the EU legislators, with the intention of highlighting two bottlenecks: (i) the flexibility of the definitions of “exceptional need”, “public emergency” and “public interest”; (ii) the cumbersome procedure for data holders. The paper discusses the role that could be played by in-house data stewardship structures as a particularly beneficial contact point for complying with B2G data sharing requirements…(More)”.

Data integration and synthesis for pandemic and epidemic intelligence


Paper by Barbara Tornimbene et al.: “The COVID-19 pandemic highlighted substantial obstacles in real-time data generation and management needed for clinical research and epidemiological analysis. Three years after the pandemic, reflection on the difficulties of data integration offers potential to improve emergency preparedness. The fourth session of the WHO Pandemic and Epidemic Intelligence Forum sought to report the experiences of key global institutions in data integration and synthesis, with the aim of identifying solutions for effective integration. Data integration, defined as the combination of heterogeneous sources into a cohesive system, allows for combining epidemiological data with contextual elements such as socioeconomic determinants to create a more complete picture of disease patterns. The approach is critical for predicting outbreaks, determining disease burden, and evaluating interventions. The use of contextual information improves real-time intelligence and risk assessments, allowing for faster outbreak responses. This report captures the growing acknowledgment of the importance of data integration in boosting public health intelligence and readiness, and shows examples of how global institutions are strengthening initiatives to respond to this need. However, obstacles persist, including interoperability, data standardization, and ethical considerations. The success of future data integration efforts will be determined by the development of a common technical and legal framework, the promotion of global collaboration, and the protection of sensitive data. Ultimately, effective data integration can potentially transform public health intelligence and the way we respond to future pandemics…(More)”.
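To make the idea of data integration concrete, here is a minimal sketch (ours, not from the report) that joins hypothetical epidemiological case counts with socioeconomic context using pandas. All district names, columns, and figures are invented for illustration.

```python
# Minimal illustration (not from the report): integrating heterogeneous
# sources into one cohesive table. All names and figures are invented.
import pandas as pd

# Weekly case counts per district (epidemiological source).
cases = pd.DataFrame({
    "district": ["A", "A", "B", "B"],
    "week": ["2023-W01", "2023-W02", "2023-W01", "2023-W02"],
    "cases": [120, 180, 40, 65],
})

# Socioeconomic context from a census-style source (different granularity).
context = pd.DataFrame({
    "district": ["A", "B"],
    "population": [50_000, 20_000],
    "median_income": [28_000, 41_000],
})

# Integration step: align on the shared key so epidemiological and
# contextual variables can be analyzed together.
merged = cases.merge(context, on="district", how="left")
merged["incidence_per_10k"] = merged["cases"] / merged["population"] * 10_000
print(merged)
```

Real pipelines face exactly the obstacles the report lists: keys rarely align this cleanly across systems (interoperability), codes and units differ (standardization), and person-level joins raise ethical questions.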

Trends in AI Supercomputers


Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
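The 2030 projection follows from simple doubling-time arithmetic. The sketch below is our back-of-the-envelope check, not the authors’ code; it assumes the March 2025 leader delivers roughly 2×10²⁰ 16-bit FLOP/s (about 200,000 chips at ~1 PFLOP/s each, an inference on our part) and applies the paper’s nine-month performance doubling time over five years.

```python
# Back-of-the-envelope check (our illustration, not the authors' code).
# Assumption: the leading system in March 2025 delivers ~2e20 16-bit FLOP/s
# (roughly 200,000 chips at ~1 PFLOP/s each), and performance doubles
# every 9 months, as the paper reports.
baseline_flops = 2e20      # FLOP/s, March 2025 (assumed)
doubling_months = 9        # performance doubling time (from the paper)
horizon_months = 60        # March 2025 -> March 2030

projected = baseline_flops * 2 ** (horizon_months / doubling_months)
print(f"Projected 2030 leader: {projected:.1e} FLOP/s")  # ~2e22, matching the paper
```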

Data Collection and Analysis for Policy Evaluation: Not for Duty, but for Knowledge


Paper by Valentina Battiloro: “This paper explores the challenges and methods involved in public policy evaluation, focusing on the role of data collection and use. The term “evaluation” encompasses a variety of analyses and approaches, all united by the intent to provide a judgment on a specific policy, but which, depending on the precise knowledge objective, can translate into entirely different activities. Regardless of the type of evaluation, a brief overview of which is provided, the collection of information represents a priority, often undervalued, under the assumption that it is sufficient to “have the data”. Issues arise concerning the precise definition of the design, the planning of necessary information collection, and the appropriate management of timelines. With regard to administrative data, a potentially valuable source, a number of unresolved challenges remain due to a weak culture of data utilization. Among these are the transition from an administrative data culture to a statistical data culture, and the fundamental issue of microdata accessibility for research purposes, which is currently hindered by significant barriers…(More)”.

Intended, afforded, and experienced serendipity: overcoming the paradox of artificial serendipity


Paper by Annelien Smets: “Designing for serendipity in information technologies presents significant challenges for both scholars and practitioners. This paper presents a theoretical model of serendipity that aims to address this challenge by providing a structured framework for understanding and designing for serendipity. The model delineates between intended, afforded, and experienced serendipity, recognizing the role of design intents and the subjective nature of experiencing serendipity. Central to the model is the recognition that there is no single definition nor a unique operationalization of serendipity, emphasizing the need for a nuanced approach to its conceptualization and design. By delineating between the intentions of designers, the characteristics of the system, and the experiences of end-users, the model offers a pathway to resolve the paradox of artificial serendipity and provides actionable guidelines to design for serendipity in information technologies. However, it also emphasizes the importance of establishing ‘guardrails’ to guide the design process and mitigate potential negative unintended consequences. The model aims to lay the groundwork for advancing both research and the practice of designing for serendipity, leading to more ethical and effective design practices…(More)”.

Further Reflections on the Journey Towards an International Framework for Data Governance


Paper by Steve MacFeely, Angela Me, Rachael Beaven, Joseph Costanzo, David Passarelli, Carolina Rossini, Friederike Schueuer, Malarvizhi Veerappan, and Stefaan Verhulst: “The use of data is paramount both to inform individual decisions and to address major global challenges. Data are the lifeblood of the digital economy: they feed algorithms, currencies, and artificial intelligence; drive international services trade; improve the way we respond to crises; inform logistics; and shape markets, communications, and politics. But data are not just an economic commodity to be traded and harvested; they are also a personal and social artifact. They contain our most personal and sensitive information – our financial and health records, our networks, our memories, and our most intimate secrets and aspirations. With the advent of digitalization and the internet, our data are ubiquitous – we are the sum of our data. Consequently, this powerful treasure trove needs to be protected carefully. This paper presents arguments for an international data governance framework, the barriers to achieving such a framework, and some of the costs of failure. It also articulates why the United Nations is uniquely positioned to host such a framework and, learning from history, the opportunity available to solve a global problem…(More)”.

AI and Social Media: A Political Economy Perspective


Paper by Daron Acemoglu, Asuman Ozdaglar & James Siderius: “We consider the political consequences of the use of artificial intelligence (AI) by online platforms engaged in social media content dissemination, entertainment, or electronic commerce. We identify two distinct but complementary mechanisms, the social media channel and the digital ads channel, which together and separately contribute to the polarization of voters and consequently the polarization of parties. First, AI-driven recommendations aimed at maximizing user engagement on platforms create echo chambers (or “filter bubbles”) that increase the likelihood that individuals are not confronted with counter-attitudinal content. Consequently, social media engagement makes voters more polarized, and then parties respond by becoming more polarized themselves. Second, we show that party competition can encourage platforms to rely more on targeted digital ads for monetization (as opposed to a subscription-based business model), and such ads in turn make the electorate more polarized, further contributing to the polarization of parties. These effects do not arise when one party is dominant, in which case the profit-maximizing business model of the platform is subscription-based. We discuss the impact regulations can have on the polarizing effects of AI-powered online platforms…(More)”.
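To build intuition for the social media channel, consider a toy simulation (ours, not the authors’ formal model): users slightly assimilate their opinions toward the content they are shown. An engagement-maximizing recommender that always serves the most attitude-congruent item lets users settle into like-minded clusters and keeps opinions dispersed, whereas random exposure pulls everyone toward the center. All parameters below are arbitrary.

```python
# Toy simulation (not the authors' model) of the echo-chamber intuition.
import random

def simulate(engagement_maximizing: bool, steps: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    users = [rng.uniform(-1, 1) for _ in range(500)]  # opinions in [-1, 1]
    items = [rng.uniform(-1, 1) for _ in range(50)]   # content slant
    for _ in range(steps):
        for i, opinion in enumerate(users):
            if engagement_maximizing:
                # Serve the item closest to the user's current opinion.
                shown = min(items, key=lambda x: abs(x - opinion))
            else:
                shown = rng.choice(items)  # counter-attitudinal exposure possible
            # Users assimilate slightly toward what they see.
            users[i] = opinion + 0.05 * (shown - opinion)
    mean = sum(users) / len(users)
    return sum((u - mean) ** 2 for u in users) / len(users)  # opinion variance

print("variance under engagement-maximizing feeds:", round(simulate(True), 3))
print("variance under random exposure:           ", round(simulate(False), 3))
```

Under these assumptions, opinion variance stays high when the feed is attitude-congruent and collapses toward the center under random exposure, a crude version of the filter-bubble mechanism the paper formalizes.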

The Data-Informed City: A Conceptual Framework for Advancing Research and Practice


Paper by Jorrit de Jong, Fernando Fernandez-Monge, et al.: “Over the last decades, scholars and practitioners have focused their attention on the use of data for improving public action, with a renewed interest in the emergence of big data and artificial intelligence. The potential of data is particularly salient in cities, where vast amounts of data are being generated from traditional and novel sources. Despite this growing interest, there is a need for a conceptual and operational understanding of the beneficial uses of data. This article presents a comprehensive and precise account of how cities can use data to address problems more effectively, efficiently, equitably, and in a more accountable manner. It does so by synthesizing and augmenting current research with empirical evidence derived from original research and learnings from a program designed to strengthen city governments’ data capacity. The framework can be used to support longitudinal and comparative analyses as well as explore questions such as how different uses of data employed at various levels of maturity can yield disparate outcomes. Practitioners can use the framework to identify and prioritize areas in which building data capacity might further the goals of their teams and organizations…(More)”.

Practitioner perspectives on informing decisions in One Health sectors with predictive models


Paper by Kim M. Pepin: “Every decision a person makes is based on a model. A model is an idea about how a process works based on previous experience, observation, or other data. Models may not be explicit or stated (Johnson-Laird, 2010), but they serve to simplify a complex world. Models vary dramatically, from conceptual (an idea) to statistical (a mathematical expression relating observed data to an assumed process and/or other data) to analytical/computational (a quantitative algorithm describing a process). Predictive models of complex systems describe an understanding of how systems work, often in mathematical or statistical terms, using data, knowledge, and/or expert opinion. They provide a means for predicting outcomes of interest, studying the impacts of different management decisions, and quantifying decision risk and uncertainty (Berger et al. 2021; Li et al. 2017). They can help decision-makers assimilate how multiple pieces of information determine an outcome of interest about a complex system (Berger et al. 2021; Hemming et al. 2022).

People rely daily on system-level models to reach objectives. Choosing the fastest route to a destination is one example. Such a decision may be based on either a mental model of the road system developed from previous experience or a traffic prediction mapping application based on mathematical algorithms and current data. Either way, a system-level model has been applied, and some uncertainty remains. In contrast, predicting outcomes for new and complex phenomena, such as emerging disease spread, biological invasion risk (Chen et al. 2023; Elderd et al. 2006; Pepin et al. 2022), or climatic impacts on ecosystems, is more uncertain. Here, public service decision-makers may turn to mathematical models when expert opinion and experience do not resolve enough uncertainty about decision outcomes. But using models to guide decisions also relies on expert opinion and experience. Moreover, even technical experts must make modeling choices regarding model structure and data inputs that carry uncertainty (Elderd et al. 2006), and these choices might not be completely objective (Bedson et al. 2021). Thus, using models to guide decisions involves subjectivity from both the developer and the end-user, which can lead to apprehension or a lack of trust about using models to inform decisions.
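The route example can be made concrete with a small Monte Carlo sketch (our illustration, not from the paper): each route’s travel time is treated as an uncertain quantity, and the “best” route depends on whether the decision-maker minimizes expected time or the risk of arriving late. Decisions informed by outbreak or invasion models share this structure. All numbers are invented.

```python
# Illustrative sketch (not from the paper): a system-level model of two routes
# with uncertain travel times. The preferred choice depends on the objective.
import random

rng = random.Random(42)

routes = {
    "highway":  (30.0, 10.0),  # faster on average, high variance (congestion)
    "backroad": (35.0, 2.0),   # slower on average, very predictable
}

deadline = 40.0   # minutes until the appointment
n = 10_000        # Monte Carlo draws per route

for name, (mean, sd) in routes.items():
    samples = [max(0.0, rng.gauss(mean, sd)) for _ in range(n)]
    avg = sum(samples) / n
    p_late = sum(t > deadline for t in samples) / n
    print(f"{name}: mean {avg:.1f} min, P(late) = {p_late:.2f}")
```

Under these invented numbers, the highway minimizes expected time while the backroad minimizes the chance of being late; deciding which model output matters is itself a judgment, which is where the subjectivity discussed above enters.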

Models may be particularly advantageous to decision-making in the One Health sectors, namely the health of humans, agriculture, wildlife, and the environment, and their interconnectedness (Adisasmito et al. 2022)…(More)”.

AI-enhanced nudging in public policy: why to worry and how to respond


Paper by Stefano Calboli & Bart Engelen: “What role can artificial intelligence (AI) play in enhancing public policy nudges and the extent to which these help people achieve their own goals? Can it help mitigate or even overcome the challenges that nudgers face in this respect? This paper discusses how AI-enhanced personalization can help make nudges more means-paternalistic and thus more respectful of people’s ends. We explore the potential added value of AI by analyzing to what extent it can (1) help identify individual preferences and (2) tailor different nudging techniques to different people based on variations in their susceptibility to those techniques. However, we also argue that the successes achieved in this respect in the for-profit sector cannot simply be replicated in public policy. While AI can bring benefits to means-paternalist public policy nudging, it also has predictable downsides (lower effectiveness compared to the private sector) and risks (graver consequences compared to the private sector). We discuss the practical implications of all this and propose novel strategies that both consumers and regulators can employ to respond to private AI use in nudging with the aim of safeguarding people’s autonomy and agency…(More)”. See also: Engagement Integrity: Ensuring Legitimacy at a time of AI-Augmented Participation
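To illustrate the tailoring logic the authors analyze, here is a hypothetical sketch (ours, not the authors’ method): given estimates of how susceptible different behavioral profiles are to different nudge techniques, AI-style personalization amounts to assigning each individual the technique with the highest estimated effect for people like them. All profiles, techniques, and effect sizes are invented.

```python
# Hypothetical sketch of AI-tailored nudging (not the authors' method):
# assign each person the nudge technique with the highest estimated effect
# for their behavioral profile. All numbers below are invented.
from typing import Dict

# Estimated effect (e.g., lift in enrollment) of each technique per profile.
SUSCEPTIBILITY: Dict[str, Dict[str, float]] = {
    "procrastinator": {"default_option": 0.12, "social_norm": 0.03, "reminder": 0.08},
    "norm_follower":  {"default_option": 0.05, "social_norm": 0.10, "reminder": 0.02},
    "forgetful":      {"default_option": 0.06, "social_norm": 0.02, "reminder": 0.11},
}

def tailor_nudge(profile: str) -> str:
    """Pick the technique with the highest estimated effect for this profile."""
    effects = SUSCEPTIBILITY[profile]
    return max(effects, key=effects.get)

for profile in SUSCEPTIBILITY:
    print(profile, "->", tailor_nudge(profile))
```

This mirrors the for-profit personalization that, as the paper argues, cannot simply be replicated in public policy given its distinct downsides and risks.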