Paper by Xiaohui Jiang and Masaru Yarime: “The Chinese government has been playing an important role in stimulating innovation among Chinese enterprises. Small and medium-sized enterprises (SMEs), with their limited internal resources, face a particularly severe challenge in implementing innovation activities that depend on data, funding sources, and talent. However, the rapidly developing smart city projects in China, where significant amounts of data are available from sophisticated devices and generous funding opportunities, provide rich opportunities for SMEs to explore data-driven innovation. Chinese governments are actively trying to engage SMEs in the process of smart city construction. When cooperating with the government, the availability of and access to data involved in government contracts, together with the capabilities required by the projects, help SMEs train and improve their innovation ability. In this article, we address how obtaining different types of government contracts (equipment supply, platform building, data analysis) influences firms’ innovation performance. Obtaining a given type of government contract is regarded as receiving a particular type of treatment. The hypothesis is that data analysis contracts have a larger positive influence on innovation ability than platform-building contracts, which in turn have a larger influence than equipment supply contracts. Focusing on the case of SMEs in China, this research aims to shed light on how governments and enterprises collaborate in smart city projects to facilitate innovation. Data on companies’ registered capital, industry, and software products from 1990–2020 are compiled from the Tianyancha website. A panel dataset is established with the key characteristics of the SMEs, their software products, and their record of government contracts. Based on the companies’ basic characteristics, we construct six pairs of treatment and control groups using propensity score matching (PSM) and run a validity test to confirm that the division is reliable. Based on the established treatment and control pairs, we then run a difference-in-differences (DID) model, and the results partially support our original hypotheses. The statistics show mixed results: Hypothesis 1, which states that companies obtaining data analysis contracts experience greater innovation improvements than those with platform-building contracts, is partially confirmed when software copyrights are used as the outcome variable; when patent data are used as the indicator, the estimates are statistically insignificant. Hypothesis 2, which posits that companies with platform-building contracts show greater innovation improvements than those with equipment supply contracts, is not supported. Hypothesis 3, which suggests that companies receiving government contracts have higher innovation outputs than those without, is confirmed. Case studies subsequently reveal the complex mechanisms behind these results…(More)”.
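To make the empirical strategy concrete, the following is a minimal, illustrative Python sketch of a PSM-then-DID pipeline of the kind described above. It is not the authors' code; the column names (`registered_capital`, `firm_age`, `industry_code`, `treated`, `post`, `innovation_output`) are assumptions for illustration.

```python
# Illustrative sketch of propensity score matching (PSM) followed by a
# difference-in-differences (DID) regression; all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_did(df: pd.DataFrame):
    """df: firm-year panel with 'firm_id', 'treated' (1 = obtained the contract
    type of interest), 'post' (1 = after contract award), 'innovation_output'
    (e.g. software copyrights or patents), plus pre-treatment covariates."""
    covariates = ["registered_capital", "firm_age", "industry_code"]

    # 1. Estimate propensity scores from pre-treatment firm characteristics.
    pre = df[df["post"] == 0].groupby("firm_id").first().reset_index()
    logit = LogisticRegression(max_iter=1000).fit(pre[covariates], pre["treated"])
    pre["pscore"] = logit.predict_proba(pre[covariates])[:, 1]

    # 2. Match each treated firm to its nearest control on the propensity score.
    treated = pre[pre["treated"] == 1]
    controls = pre[pre["treated"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_ids = np.concatenate(
        [treated["firm_id"].to_numpy(),
         controls.iloc[idx.ravel()]["firm_id"].to_numpy()]
    )

    # 3. DID on the matched panel: the treated:post coefficient is the effect.
    panel = df[df["firm_id"].isin(matched_ids)]
    return smf.ols("innovation_output ~ treated * post", data=panel).fit()
```

In a sketch of this kind, the coefficient on the `treated:post` interaction term is the DID estimate of a contract type's effect on innovation output for the matched sample.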
Harnessing Wearable Data and Social Sentiment: Designing Proactive Consumer and Patient Engagement Strategies through Integrated AI Systems
Paper by Warren Liang et al: “In the age of ubiquitous computing, the convergence of wearable technologies and social sentiment analysis has opened new frontiers in both consumer engagement and patient care. These technologies generate continuous, high-frequency, multimodal data streams that are increasingly being leveraged by artificial intelligence (AI) systems for predictive analytics and adaptive interventions. This article explores a unified, integrated framework that combines physiological data from wearables and behavioral insights from social media sentiment to drive proactive engagement strategies. By embedding AI-driven systems into these intersecting data domains, healthcare organizations, consumer brands, and public institutions can offer hyper-personalized experiences, predictive health alerts, emotional wellness interventions, and behaviorally aligned communication.
This paper critically evaluates how machine learning models, natural language processing, and real-time stream analytics can synthesize structured and unstructured data for longitudinal engagement, while also exploring the ethical, privacy, and infrastructural implications of such integration. Through cross-sectoral analysis across healthcare, retail, and public health, we illustrate scalable architectures and case studies where real-world deployment of such systems has yielded measurable improvements in satisfaction, retention, and health outcomes. Ultimately, the synthesis of wearable telemetry and social context data through AI systems represents a new paradigm in engagement science — moving from passive data collection to anticipatory, context-aware engagement ecosystems…(More)”.
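As a purely hypothetical illustration of the kind of data fusion the paper describes (not the authors' system), the toy Python sketch below flags a user for proactive outreach when an elevated resting heart rate from a wearable coincides with negative sentiment in recent posts. The thresholds and the keyword-based sentiment proxy are assumptions; a real deployment would use a trained NLP sentiment model and streaming infrastructure.

```python
# Toy sketch (hypothetical, not a production design): fuse a wearable signal
# with a crude social-media sentiment proxy to flag users for proactive outreach.
from dataclasses import dataclass
from statistics import mean

NEGATIVE_CUES = {"exhausted", "anxious", "stressed", "can't sleep", "worried"}

@dataclass
class UserWindow:
    user_id: str
    resting_heart_rates: list  # daily resting heart rate from the wearable
    posts: list                # recent posts the user has consented to share

def negative_sentiment_ratio(posts):
    """Rough proxy: fraction of posts containing a negative cue."""
    if not posts:
        return 0.0
    hits = sum(any(cue in p.lower() for cue in NEGATIVE_CUES) for p in posts)
    return hits / len(posts)

def needs_proactive_outreach(w, hr_threshold=75.0, sentiment_threshold=0.4):
    """Flag only when elevated physiology and negative sentiment co-occur."""
    return (mean(w.resting_heart_rates) > hr_threshold
            and negative_sentiment_ratio(w.posts) > sentiment_threshold)

window = UserWindow("u42", [78, 81, 77, 80, 79, 82, 76],
                    ["so stressed this week", "can't sleep again", "ok day"])
print(needs_proactive_outreach(window))  # True for this example
```

Requiring both signals to co-occur is one simple way to reduce false alarms from either data stream alone, in line with the paper's emphasis on context-aware rather than single-signal engagement.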
Unpacking B2G data sharing mechanism under the EU data act
Paper by Ludovica Paseri and Stefaan G. Verhulst: “The paper proposes an analysis of the business-to-government (B2G) data sharing mechanism envisaged by the Regulation EU 2023/2854, the so-called Data Act. The Regulation, in force since 11 January 2024, will be applicable from 12 September 2025, requiring the actors involved to put in place a compliance process. The focus of the paper is to present an assessment of the mechanism foreseen by the EU legislators, with the intention of highlighting two bottlenecks, represented by: (i) the flexibility of the definition of “exceptional need”, “public emergency” and “public interest”; (ii) the cumbersome procedure for data holders. The paper discusses the role that could be played by in-house data stewardship structures as a particularly beneficial contact point for complying with B2G data sharing requirements…(More)”.
Data integration and synthesis for pandemic and epidemic intelligence
Paper by Barbara Tornimbene et al: “The COVID-19 pandemic highlighted substantial obstacles in real-time data generation and management needed for clinical research and epidemiological analysis. Three years after the pandemic, reflection on the difficulties of data integration offers the potential to improve emergency preparedness. The fourth session of the WHO Pandemic and Epidemic Intelligence Forum sought to report the experiences of key global institutions in data integration and synthesis, with the aim of identifying solutions for effective integration. Data integration, defined as the combination of heterogeneous sources into a cohesive system, allows for combining epidemiological data with contextual elements such as socioeconomic determinants to create a more complete picture of disease patterns. The approach is critical for predicting outbreaks, determining disease burden, and evaluating interventions. The use of contextual information improves real-time intelligence and risk assessments, allowing for faster outbreak responses. This report captures the growing acknowledgment of the importance of data integration in boosting public health intelligence and readiness and shows examples of how global institutions are strengthening initiatives to respond to this need. However, obstacles persist, including interoperability, data standardization, and ethical considerations. The success of future data integration efforts will be determined by the development of a common technical and legal framework, the promotion of global collaboration, and the protection of sensitive data. Ultimately, effective data integration can potentially transform public health intelligence and how we respond to future pandemics…(More)”.
Trends in AI Supercomputers
Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10^22 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
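As a rough sanity check on that extrapolation (our arithmetic, not the authors'), the reported doubling times can be compounded over the five years from March 2025 to 2030; the roughly 2×10^20 FLOP/s starting point assumed for Colossus is our estimate, not a figure from the paper.

```python
# Back-of-the-envelope check of the 2030 extrapolation from the reported
# doubling times, starting from xAI's Colossus in March 2025
# (200,000 chips, ~$7B hardware cost, ~300 MW of power).
years = 5                              # March 2025 -> 2030
perf_doublings = years * 12 / 9        # performance doubles every nine months
cost_power_doublings = years           # cost and power double every year

colossus_flops = 2e20                  # assumed ~2e20 16-bit FLOP/s for Colossus
print(f"2030 performance ~ {colossus_flops * 2 ** perf_doublings:.1e} FLOP/s")  # ~2e22
print(f"2030 hardware cost ~ ${7 * 2 ** cost_power_doublings:.0f}B")            # ~$224B
print(f"2030 power ~ {0.3 * 2 ** cost_power_doublings:.1f} GW")                 # ~9.6 GW
```

Five yearly doublings give roughly a 32-fold increase in cost and power, and about 6.7 nine-month doublings give roughly a 100-fold increase in performance, consistent with the 2×10^22 FLOP/s, $200 billion, and 9 GW figures quoted above.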
Data Collection and Analysis for Policy Evaluation: Not for Duty, but for Knowledge
Paper by Valentina Battiloro: “This paper explores the challenges and methods involved in public policy evaluation, focusing on the role of data collection and use. The term “evaluation” encompasses a variety of analyses and approaches, all united by the intent to provide a judgment on a specific policy, but which, depending on the precise knowledge objective, can translate into entirely different activities. Regardless of the type of evaluation, a brief overview of which is provided, the collection of information represents a priority, often undervalued, under the assumption that it is sufficient to “have the data.” Issues arise concerning the precise definition of the design, the planning of necessary information collection, and the appropriate management of timelines. With regard to administrative data, a potentially valuable source, a number of unresolved challenges remain due to a weak culture of data utilization. Among these are the transition from an administrative data culture to a statistical data culture, and the fundamental issue of microdata accessibility for research purposes, which is currently hindered by significant barriers…(More)”.
Intended, afforded, and experienced serendipity: overcoming the paradox of artificial serendipity
Paper by Annelien Smets: “Designing for serendipity in information technologies presents significant challenges for both scholars and practitioners. This paper presents a theoretical model of serendipity that aims to address this challenge by providing a structured framework for understanding and designing for serendipity. The model delineates between intended, afforded, and experienced serendipity, recognizing the role of design intents and the subjective nature of experiencing serendipity. Central to the model is the recognition that there is neither a single definition nor a unique operationalization of serendipity, emphasizing the need for a nuanced approach to its conceptualization and design. By delineating between the intentions of designers, the characteristics of the system, and the experiences of end-users, the model offers a pathway to resolve the paradox of artificial serendipity and provides actionable guidelines to design for serendipity in information technologies. However, it also emphasizes the importance of establishing ‘guardrails’ to guide the design process and mitigate potential negative unintended consequences. The model aims to lay the groundwork for advancing both research and the practice of designing for serendipity, leading to more ethical and effective design practices…(More)”.
Further Reflections on the Journey Towards an International Framework for Data Governance
Paper by Steve MacFeely, Angela Me, Rachael Beaven, Joseph Costanzo, David Passarelli, Carolina Rossini, Friederike Schueuer, Malarvizhi Veerappan, and Stefaan Verhulst: “The use of data is paramount both to inform individual decisions and to address major global challenges. Data are the lifeblood of the digital economy, feeding algorithms, currencies, artificial intelligence, and driving international services trade, improving the way we respond to crises, informing logistics, shaping markets, communications and politics. But data are not just an economic commodity to be traded and harvested; they are also a personal and social artifact. They contain our most personal and sensitive information – our financial and health records, our networks, our memories, and our most intimate secrets and aspirations. With the advent of digitalization and the internet, our data are ubiquitous – we are the sum of our data. Consequently, this powerful treasure trove needs to be protected carefully. This paper presents arguments for an international data governance framework, the barriers to achieving such a framework and some of the costs of failure. It also articulates why the United Nations is uniquely positioned to host such a framework and, learning from history, the opportunity available to solve a global problem…(More)”.
AI and Social Media: A Political Economy Perspective
Paper by Daron Acemoglu, Asuman Ozdaglar & James Siderius: “We consider the political consequences of the use of artificial intelligence (AI) by online platforms engaged in social media content dissemination, entertainment, or electronic commerce. We identify two distinct but complementary mechanisms, the social media channel and the digital ads channel, which together and separately contribute to the polarization of voters and consequently the polarization of parties. First, AI-driven recommendations aimed at maximizing user engagement on platforms create echo chambers (or “filter bubbles”) that increase the likelihood that individuals are not confronted with counter-attitudinal content. Consequently, social media engagement makes voters more polarized, and then parties respond by becoming more polarized themselves. Second, we show that party competition can encourage platforms to rely more on targeted digital ads for monetization (as opposed to a subscription-based business model), and such ads in turn make the electorate more polarized, further contributing to the polarization of parties. These effects do not arise when one party is dominant, in which case the profit-maximizing business model of the platform is subscription-based. We discuss the impact regulations can have on the polarizing effects of AI-powered online platforms…(More)”.
The Data-Informed City: A Conceptual Framework for Advancing Research and Practice
Paper by Jorrit de Jong, Fernando Fernandez-Monge et al: “Over the last few decades, scholars and practitioners have focused their attention on the use of data for improving public action, with renewed interest following the emergence of big data and artificial intelligence. The potential of data is particularly salient in cities, where vast amounts of data are being generated from traditional and novel sources. Despite this growing interest, there is a need for a conceptual and operational understanding of the beneficial uses of data. This article presents a comprehensive and precise account of how cities can use data to address problems more effectively, efficiently, equitably, and in a more accountable manner. It does so by synthesizing and augmenting current research with empirical evidence derived from original research and learnings from a program designed to strengthen city governments’ data capacity. The framework can be used to support longitudinal and comparative analyses as well as explore questions such as how different uses of data employed at various levels of maturity can yield disparate outcomes. Practitioners can use the framework to identify and prioritize areas in which building data capacity might further the goals of their teams and organizations…(More)”.