Stefaan Verhulst
Book by Michelle A. Amazeen: “We often blame social media for the rampant problem of disinformation, but mainstream news media is also at fault. Not only do news outlets disguise paid content to look like online news articles, a practice called “native advertising,” but new research suggests that this form of advertising even influences the real journalism that appears next to it—both perceptions of the journalism and its actual substance. In Content Confusion, Michelle Amazeen explores the origins and evolution of this mainstream media practice, how it affects audiences and the industry, and what the implications are for an accurately informed public.
For policymakers, in particular, the book highlights the long-standing principles from governmental regulation as well as industry professional codes that support clear identification of the provenance of content, an issue that will no doubt intensify with the release of generative artificial intelligence in the wild…(More)”.
Report by the Tony Blair Institute: “Forests are indispensable global life-support systems: they regulate our climate, purify our air and water, safeguard biodiversity, and sustain the livelihoods of millions. Yet they are vanishing at unprecedented rates. Illegal logging and mining, agricultural expansion, and climate change are degrading ecosystems and biodiversity, threatening rural livelihoods, and undermining climate stability. At the same time, rapid advances in digital technologies, particularly artificial intelligence, are opening new frontiers for conservation. While not a silver bullet, digital solutions can serve as powerful enablers, providing better understanding, faster intelligence and greater effectiveness in forest action.
The Digital Tree framework presented in this report illustrates how components of digital and AI solutions for forestry are interconnected and mutually reinforcing. The roots represent enabling foundations such as connectivity, secure data ecosystems and computing power. The trunk encompasses core technologies that transform the ways in which forest data is captured, including satellites, drones, sensors and robotics. The branches represent analytics powered by AI and machine learning (ML), which convert raw data into actionable insights for understanding current forest conditions, linking changes to their drivers, anticipating future risks and optimising operations. The canopy represents myriad real-life applications being developed to enable stronger forest outcomes. Finally, the nutrients represent just and inclusive forest stewardship – embedding the knowledge systems of indigenous peoples and local communities (IPLC), enabling their participation in technology development and data collection, and ensuring benefits flow back to the communities that safeguard forests. This digital ecosystem is self-reinforcing: improvements in one area strengthen the whole…(More)”.
Figure: The Digital Tree Structure
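To picture how the report’s layers interlock, a minimal, hypothetical sketch in Python follows; the Layer class, the readiness_gaps check and the example entries under each layer are illustrative assumptions rather than anything specified in the report.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One layer of the Digital Tree framework."""
    name: str
    examples: list[str]
    depends_on: list[str] = field(default_factory=list)

# Illustrative encoding of the report's five components; the entries under
# "canopy" are placeholder applications, not taken from the report.
DIGITAL_TREE = {
    "roots": Layer("Enabling foundations",
                   ["connectivity", "secure data ecosystems", "computing power"]),
    "trunk": Layer("Data-capture technologies",
                   ["satellites", "drones", "sensors", "robotics"],
                   depends_on=["roots"]),
    "branches": Layer("AI/ML analytics",
                      ["understand forest conditions", "link changes to drivers",
                       "anticipate risks", "optimise operations"],
                      depends_on=["trunk"]),
    "canopy": Layer("Applications for forest outcomes",
                    ["deforestation alerts", "restoration planning"],
                    depends_on=["branches"]),
    "nutrients": Layer("Just and inclusive stewardship",
                       ["IPLC knowledge systems", "participatory data collection",
                        "benefit sharing"],
                       depends_on=["roots", "trunk", "branches", "canopy"]),
}

def readiness_gaps(in_place: set[str]) -> list[str]:
    """Layers whose prerequisites are not yet in place."""
    return [key for key, layer in DIGITAL_TREE.items()
            if any(dep not in in_place for dep in layer.depends_on)]

print(readiness_gaps({"roots"}))  # ['branches', 'canopy', 'nutrients']
```

Modelling the dependencies this way makes the report’s point explicit: analytics and applications only deliver value once the enabling foundations and data-capture layers are in place.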

Paper by J. Nathan Matias and Megan Price: “As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being called on to provide trustworthy quantitative evaluations of AI safety and reliability. These calls have included demands from affected parties to be given a seat at the table of AI evaluation. What, if anything, can public involvement add to the science of AI? In this perspective, we summarize the sociotechnical challenge of evaluating AI systems, which often adapt to multiple layers of social context that shape their outcomes. We then offer guidance for improving the science of AI by engaging lived-experience experts in the design, data collection, and interpretation of scientific evaluations. This article reviews common models of public engagement in AI research alongside common concerns about participatory methods, including questions about generalizable knowledge, subjectivity, reliability, and practical logistics. To address these questions, we summarize the literature on participatory science, discuss case studies from AI in healthcare, and share our own experience evaluating AI in areas from policing systems to social media algorithms. Overall, we describe five parts of any quantitative evaluation where public participation can improve the science of AI: equipoise, explanation, measurement, inference, and interpretation. We conclude with reflections on the role that participatory science can play in trustworthy AI by supporting trustworthy science…(More)”.
OECD Report: “Personal health data (PHD) are transforming how individuals engage with health systems, creating new opportunities for trust, innovation, and improving access and quality of care. This report examines how OECD countries enable individuals to access, manage, and share their health information across digital platforms and patient portals. Drawing from in-depth interviews with national authorities in Australia, Denmark, Finland, Japan, Korea, and the United Kingdom, the paper analyses policy, technical, and governance enablers that underpin equitable access to personal health data. It identifies leading practices in interoperability, data architecture, privacy, consent, digital identity, and patient engagement. Countries with mature ecosystems demonstrated consistent public trust frameworks, integration across sectors, and strong legislative foundations balancing privacy with data interoperability and sharing. As healthcare becomes increasingly digital and the availability of patient-generated data grows, ensuring that individuals can securely access and use their own health data will be critical for future-ready, data-driven, and person-centred health systems…(More)”.
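To make two of the enablers highlighted in the report – digital identity and consent – more concrete, here is a deliberately simplified Python sketch; the registry, identifiers and checks are hypothetical and do not describe any particular country’s system.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester_id: str   # digital identity of the requester
    patient_id: str
    purpose: str        # e.g. "direct_care", "research"

# Hypothetical consent registry: purposes each patient has approved.
CONSENT_REGISTRY = {"patient-001": {"direct_care"}}

# Hypothetical identity provider: identities that passed verification.
VERIFIED_IDENTITIES = {"clinician-42", "patient-001"}

def may_release_record(req: AccessRequest) -> bool:
    """Release a record only if identity is verified and consent covers the purpose."""
    if req.requester_id not in VERIFIED_IDENTITIES:   # digital-identity check
        return False
    allowed = CONSENT_REGISTRY.get(req.patient_id, set())
    return req.purpose in allowed                      # consent check

print(may_release_record(AccessRequest("clinician-42", "patient-001", "direct_care")))  # True
print(may_release_record(AccessRequest("clinician-42", "patient-001", "research")))     # False
```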
Article by Mansoor Al Mansoori and Noura Al Ghaithi: “For more than a decade, global leaders have recognized the rising burden of chronic, non-communicable diseases (NCDs) such as heart disease, diabetes, obesity and cancer. These conditions are the leading cause of death globally – accounting for 71% of all deaths – and represent the most costly, preventable health challenge of our time.
Abu Dhabi, however, is establishing a new global benchmark for health, demonstrating that prevention at population scale is possible. Advances in digitalization, AI, multimodal data and life sciences technology are making it possible to move decisively from reactive “sick care” to predictive, proactive and personalized healthcare…At the core of its digital health transformation lies a unified strategy: Predict, Prevent and Act to Cure and to Restore. This is powered by Abu Dhabi’s Intelligent Health System, which integrates medical records, insurance data, genomics, environmental and lifestyle data into a fully sovereign, privacy-protected ecosystem.

Figure: Abu Dhabi’s Predict, Prevent and Act to Cure and to Restore health system framework. Image: DOH Abu Dhabi
This infrastructure is powered by platforms like Malaffi, the region’s first health information exchange, connecting 100% of the emirate’s public and private healthcare providers and insurers with real-time access to patient histories, enabling more coordinated, efficient and patient-centred care and reducing system-level costs for diagnostics.
Meanwhile, Sahatna, a mobile app used by more than 800,000 residents, empowers community members to take ownership of their health. It provides secure access to personal health data and enables proactive appointment booking across the ecosystem, instant telehealth consultations, wellness tracking and behavioural nudges toward prevention. This plays a critical role in simplifying personal health management, helping to shift the public’s mindset from reactive treatment to self-led prevention…(More)”.
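As a hedged illustration of what real-time, cross-provider access to patient histories involves, the toy sketch below merges records held by different providers into one chronological history; the data structures and field names are hypothetical and are not Malaffi’s actual interfaces.

```python
from datetime import date

# Hypothetical record stores held by two different providers.
provider_a = [
    {"patient_id": "P1", "date": date(2024, 3, 1), "event": "HbA1c test", "source": "Clinic A"},
    {"patient_id": "P2", "date": date(2024, 4, 2), "event": "X-ray", "source": "Clinic A"},
]
provider_b = [
    {"patient_id": "P1", "date": date(2024, 5, 9), "event": "Cardiology consult", "source": "Hospital B"},
]

def unified_history(patient_id: str, *stores):
    """Merge one patient's records from all connected providers, oldest first."""
    records = [r for store in stores for r in store if r["patient_id"] == patient_id]
    return sorted(records, key=lambda r: r["date"])

for r in unified_history("P1", provider_a, provider_b):
    print(r["date"], r["event"], "-", r["source"])
```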
Article by Kathy Talkington: “…Before the advent of vaccines, antibiotics, and modern sanitation, infectious diseases were the leading killers. But today, chronic diseases such as diabetes, hypertension, and asthma account for 70% of deaths and 86% of health care expenses in the United States. Yet the clinical data that doctors must report is still largely restricted to infectious diseases.
So, if doctors aren’t required to report chronic disease data and cannot feasibly do so, where can public health agencies turn? One major source is insurance providers. They collect information daily on the types of illnesses that patients have, the treatments being recommended, and the medications being prescribed. Insurance providers use this data to determine reimbursement rates, assess the quality of care, and guide treatment, but public health agencies do not have ready access to this information.
To help overcome this gap, The Pew Charitable Trusts recently launched a project to build data-driven partnerships between state public health agencies and their Medicaid counterparts. Why Medicaid? First, as the nation’s largest single payer, it can provide public health agencies with a large pool of claims data. Second, Medicaid serves people who would benefit most by more effective public health programs, including families with low incomes, people with disabilities, and older adults. And, lastly, Medicaid influences the practices of two critical constituencies: private insurers that contract with Medicaid and the 70% of doctors who accept Medicaid payments…(More)”.
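To make concrete how claims data could feed chronic-disease surveillance, here is a minimal sketch using hypothetical, de-identified records and standard ICD-10 diabetes codes; it is an illustration, not Pew’s or any state’s actual pipeline.

```python
import pandas as pd

# Hypothetical, de-identified claims extract; ICD-10 codes beginning with
# "E11" indicate type 2 diabetes.
claims = pd.DataFrame({
    "member_id":      ["A1", "A1", "B2", "C3", "D4"],
    "diagnosis_code": ["E11.9", "I10", "E11.65", "J45.40", "I10"],
    "county":         ["Kent", "Kent", "Kent", "Sussex", "Sussex"],
})

enrolled = claims.groupby("county")["member_id"].nunique()
diabetic = (claims[claims["diagnosis_code"].str.startswith("E11")]
            .groupby("county")["member_id"].nunique())

# Share of members with claims who have a diabetes diagnosis, by county.
prevalence = (diabetic / enrolled).fillna(0)
print(prevalence)
```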
Paper by Chinasa T. Okolo: “The increasing development of machine learning (ML) models and adoption of artificial intelligence (AI) tools, particularly generative AI, has dramatically shifted practices around data, spurring the development of new industries centered around data labeling and revealing new forms of exploitation, including illegal data scraping for AI training datasets. These new complexities around data production, refinement, and use have also impacted African countries, elevating a need for comprehensive regulation and enforcement measures. While 38/55 African Union (AU) Member States have existing data protection regulations, there is a wide disparity in the comprehensiveness and quality of these regulations and in the ability of individual countries to enact sufficient protections against data privacy violations. Thus, to enable effective data governance, AU Member States must enact comprehensive data protection regulations and reform existing data governance measures to cover aspects such as data quality, privacy, responsible data sharing, transparency, and data worker labor protections. This paper analyzes data governance measures in Africa, outlines data privacy violations across the continent, and examines regulatory gaps imposed by a lack of comprehensive data governance to outline the sociopolitical infrastructure required to bolster data governance capacity.
This work introduces the RICE Data Governance Framework, which aims to operationalize comprehensive data governance in Africa by outlining best measures for data governance policy reform, integrating revamped policies, increasing continent-wide cooperation in AI governance, and improving enforcement actions against data privacy violations…(More)”.
Paper by Stefaan Verhulst: “We are entering a data winter – a period marked by the growing inaccessibility of data critical for science, governance, and innovation. Just as previous AI winters saw stagnation in artificial intelligence research, today’s data winter is characterized by the enclosure of data behind proprietary, regulatory, and technological barriers. This contraction threatens the capacity for evidence-based policymaking, scientific discovery, and equitable AI development at a moment when data has become society’s most strategic resource. Eight interrelated forces are driving this decline: government open data cutbacks, shifting institutional priorities toward risk aversion, generative AI-induced data hoarding, research data lockdowns, scarcity of high-quality training data, risks of synthetic data substitution, geopolitical fragmentation, and private-sector closure. Together, these trends risk immobilizing data that could otherwise serve the public good. To counteract this trajectory, the paper proposes five strategic interventions: (1) shifting norms and incentives to treat data as essential infrastructure; (2) translating openness commitments into enforceable action; (3) investing in professional data stewardship across sectors; (4) advancing governance innovations such as digital self-determination and social license mechanisms to restore trust; and (5) developing sustainable data commons as shared infrastructures for equitable reuse. The technical means exist, but the collective will is uncertain. The paper concludes by arguing that the coming years will determine whether society builds an open, collaborative data ecosystem – or succumbs to a fragmented, privatized data order…(More)”.
Paper by Federico Bartolomucci & Gianluca Bresolin: “The use of data for social good has received increasing attention from institutions, practitioners and academics in recent years. Data collaboratives are cross-sectoral partnerships that aim to foster the use of data for societal purposes. However, the proliferation of initiatives on the topic of data sharing has created confusion regarding their nature and scope. To advance research on the topic, using existing literature, this paper offers a refinement of the concept of data collaboratives ten years after their first definition. This enables the distinction between data collaboratives and other forms of initiatives such as open platforms and data ecosystems. Through the analysis of a dataset of 171 data collaboratives, the paper proposes an enhanced categorisation that identifies five clusters of data collaboratives. Each cluster is described with a focus on its individual characteristics and development challenges. The holistic approach adopted and the maturity of the field allowed us to gain valuable insights into the domains and scopes that these types of partnership may serve and their potential impact. The results highlight the heterogeneity of initiatives falling under the concept of data collaboratives and the necessity to address their development challenges by either concentrating on a specific cluster or conducting comparative and horizontal studies. These findings also enable comparability and improve the identification of benchmarks, which is a valuable resource for the development of the field…(More)”.
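The excerpt does not detail how the five clusters were derived, but a generic sketch of how such a categorisation could be produced from descriptive features (hypothetical data, one-hot encoding and standard k-means via scikit-learn 1.2+, not necessarily the authors’ method) might look like this:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

# Hypothetical descriptors of a handful of data collaboratives; the paper's
# own dataset covers 171 initiatives and identifies five clusters.
df = pd.DataFrame({
    "data_provider": ["private", "public", "private", "mixed", "public"],
    "domain":        ["health", "mobility", "environment", "health", "mobility"],
    "access_model":  ["pooled", "api", "trusted_intermediary", "pooled", "api"],
})

features = OneHotEncoder(sparse_output=False).fit_transform(df)
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(df)
```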
Paper by Francis Gassert, et al: “AI for Nature examines the transformative role of artificial intelligence in understanding and protecting the natural world. The paper outlines how AI can be applied to environmental monitoring, biodiversity mapping, and land-use planning, while also identifying the social, ethical, and governance challenges that accompany these technologies. It calls for collaboration across science, technology, and policy to ensure AI benefits both nature and people…(More)”.