Stefaan Verhulst
Article by Sophia Knight: “The development of non-profit, public interest alternatives for accessing and debating information online can contribute to a healthier information ecosystem – the question is what role should public service media play in providing them? We are living within an increasingly volatile and unpredictable democratic landscape in the UK. We face challenges with political polarisation and social cohesion, exacerbated by the decades-long fragmentation of our civic infrastructure.
The shift to an online information ecosystem has disrupted traditional media. Digital public spaces have enabled almost anyone, anywhere, to speak their minds, opening new avenues for connection. Yet, the open internet has become overrun by sprawling platform monopolies, shaped by algorithms and profit-seeking incentives towards attention and outrage.
Policymakers, and to a large extent the media industry, are stuck on one part of the solution: regulating harmful online content. To move forward, we need to identify and build on opportunities to improve the digital information ecosystem, rather than only targeting potential threats…(More)”.
Paper by Marc E. B. Picavet, Peter Maroni, Amardeep Sandhu, and Kevin C. Desouza: “Generating strategic foresight for public organizations is a resource-intensive and non-trivial effort. Strategic foresight is especially important for governments, which are increasingly confronted by complex and unpredictable challenges and wicked problems. With advances in machine learning, information systems can be integrated more creatively into the strategic foresight process. We report on an innovative pilot project conducted by an Australian state government that leveraged generative artificial intelligence (AI), specifically large language models, for strategic foresight using a design science approach. The project demonstrated AI’s potential to enhance scenario generation for strategic foresight, improve data processing efficiency, and support human decision-making. However, the study also found that it is essential to balance AI automation with human expertise for validation and oversight. These findings highlight the importance of iterative design to develop robust AI tools for strategic foresight which, alongside stakeholder engagement and process transparency, build trust and ensure practical relevance…(More)”.
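The pilot described above pairs LLM-generated scenario drafts with human validation. A minimal sketch of that pattern follows; `call_llm` is a placeholder for whatever chat-completion client is in use, and the driver list, prompt wording, and returned fields are illustrative assumptions, not the paper’s actual pilot design.

```python
def build_scenario_prompt(drivers, horizon_years=10, n_scenarios=3):
    """Compose a prompt asking the model for divergent future scenarios."""
    driver_list = "\n".join(f"- {d}" for d in drivers)
    return (
        "You are assisting a government strategic-foresight team.\n"
        f"Key drivers of change:\n{driver_list}\n"
        f"Write {n_scenarios} divergent, internally consistent scenarios "
        f"for the next {horizon_years} years. Label each scenario and note "
        "which drivers dominate it."
    )

def generate_scenarios(drivers, call_llm):
    """Draft scenarios with an LLM, flagged for human review."""
    prompt = build_scenario_prompt(drivers)
    draft = call_llm(prompt)  # raw model output: unvalidated scenario text
    # The step the study emphasises: analysts must validate drafts before
    # they enter the foresight process, balancing automation with oversight.
    return {"prompt": prompt, "draft": draft, "validated": False}
```

The `validated: False` flag encodes the paper’s central finding in miniature: model output is an input to human judgment, not a substitute for it.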
Update by Yossi Matias et al: “When disasters strike, Google products like Search and Maps help billions of people make critical decisions to stay safe. Our flood forecasting information — now covering more than two billion people — provides life-saving forecasts before the most significant river floods. It’s helped organizations like World Vision get drinking water and food to communities when they need it most. And during the devastating 2025 California wildfires, we provided crisis alerts with information from local authorities to 15 million people across Los Angeles while showing them where to find shelter in Google Maps. This is all made possible by our geospatial AI models, not only for floods and wildfires but also for cyclones, air quality and more.
We recently introduced Google Earth AI, bringing together these geospatial models to help tackle the planet’s most critical needs. Earth AI is built on decades of modeling the world, combined with state-of-the-art predictive models and Gemini’s advanced reasoning, letting enterprises, cities and nonprofits achieve deeper understanding in minutes — efforts that previously required complex analytics and years of research.
Today, we’re advancing Earth AI’s innovations and capabilities, and expanding access around the globe. Here’s how…
To solve a complex problem, you need to see the whole picture, not just one piece of it. That’s the idea behind Geospatial Reasoning, a framework powered by Gemini that now lets AI automatically connect different Earth AI models — like weather forecasts, population maps and satellite imagery — to answer complex questions.
Instead of just seeing where a storm might hit, our latest research demonstrates that analysts can use Geospatial Reasoning to identify which communities are most vulnerable and what infrastructure is at risk, all at once. For example, Geospatial Reasoning empowers the nonprofit GiveDirectly to respond to disasters by combining flood and population density information, helping them identify who needs direct aid most…(More)”.
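The kind of overlay this describes — combining a flood layer with population data to target aid — can be sketched simply. The per-area flood probabilities, density figures, and 0.5 threshold below are illustrative assumptions, not Google’s or GiveDirectly’s actual models or data.

```python
def rank_areas_by_exposure(flood_prob, pop_density, threshold=0.5):
    """Rank areas by expected exposed population, highest first."""
    exposure = {}
    for area, p in flood_prob.items():
        if p >= threshold:  # consider only areas likely to flood
            # expected people affected = flood probability x population density
            exposure[area] = p * pop_density.get(area, 0)
    return sorted(exposure, key=exposure.get, reverse=True)

flood_prob  = {"A": 0.9, "B": 0.2, "C": 0.7}     # per-area flood probability
pop_density = {"A": 1200, "B": 8000, "C": 3000}  # people per km^2

print(rank_areas_by_exposure(flood_prob, pop_density))  # ['C', 'A']
```

Area B drops out despite its high density because it is unlikely to flood, while C outranks A because moderate flood risk over a larger population yields greater expected exposure — the “whole picture” logic the framework automates across real model outputs.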
Report by Columbia World Projects: “Citizens cannot make active choices about what they see on social media. Independent regulators cannot hold companies accountable for their obligations under a growing number of national and regional online safety regimes. The research community — made up of academics, civil society groups and the media — cannot highlight potential deficiencies in both platform and regulatory action. Collectively, this represents a deficit in social media platform transparency and accountability that is a direct threat to individuals’ fundamental rights, as well as to wider societal democratic norms. Funders, regulators and researchers must act within the next 6-12 months to establish foundational infrastructure and standards related to social media data access. Without swift action, democratic institutions are vulnerable to the weaponization of social media platforms whose activities remain opaque and subject to potential manipulation by malign actors.
It is within this context that the Columbia-Hertie initiative provides clear funding recommendations, as outlined in the chart below. At its core, this work is based on upholding the highest levels of data protection and security practices so that any form of social media data access protects the privacy rights of individual social media users — no matter where they are located. That is the guiding principle for all recommendations.
The report is divided into three sections:
1. Supporting Underlying Data Access Infrastructure
2. Building Best Practices for the Research Community
3. Fostering Researcher-Regulator Relationships
Each of these sections provides specific recommendations on how public and private funders can meet the existing opportunities within social media data access. The recommendations include which type of funder is most appropriate; how much money is required to meet the objectives; and a timescale for results…(More)”
Report by James P. Cummings: “Humanity stands at the threshold of a new era in biological understanding, disease treatment, and overall wellness. The convergence of evolving patient and caregiver (consumer) behaviors, increased data collection, advancements in health technology and standards, federal policies, and the rise of artificial intelligence (AI) is driving one of the most significant transformations in human history. To achieve transformative health care insights, AI must have access to comprehensive longitudinal health records (LHRs) that span clinical, genomic, nonclinical, wearable, and patient-generated data. Despite the extensive use of electronic medical records and widespread interoperability efforts, current health care organizations, electronic medical record vendors, and public agencies are not incentivized to develop and maintain complete LHRs. This paper explores the new paradigm of consumers as the common provenance and singular custodian of LHRs. With fully aligned intentions and ample time to dedicate to optimizing their health outcomes, patients and caregivers must assume the sole responsibility to manage or delegate aggregation of complete, accurate, and real-time LHRs. Significant gaps persist in empowering consumers to act as primary custodians of their health data and to aggregate their complete LHRs, a foundational requirement for the effective application of AI. Rare disease communities, leaders in participatory care, offer a compelling model for demonstrating how consumer-driven data aggregation can be achieved and underscore the need for improved policy frameworks and technological tools. The convergence of AI and LHRs promises to transform medicine by enhancing clinical decision-making, accelerating accurate diagnoses, and dramatically advancing our ability to understand and treat disease at an unprecedented pace…(More)”.
Article by Stefaan G. Verhulst: “Since 2016, the FAIR principles — specifying that data should be Findable, Accessible, Interoperable, and Reusable — have served as the foundation for responsible open data management. Especially within the open science community, FAIR has shaped how we publish, share, and reuse scientific and public data. It brought a common language to a fragmented ecosystem.
But as artificial intelligence transforms how knowledge is produced and decisions are made, FAIR alone may no longer be enough. We now face a new question:
What does it mean for data to be AI-ready — and ready for what kind of AI?
Earlier this year we sought to provide an answer to that question by proposing the FAIR-R Principles and Framework. Last week, Frontiers released its own FAIR² data management platform. Both seek to extend FAIR, but they diverge in focus and method. FAIR-R introduces a conceptual expansion; FAIR² adds operational guidance. Together, they reveal how our understanding of data readiness is evolving in the age of AI…(More)”
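What FAIR asks of a dataset, and what AI-readiness might add on top, can be made concrete with a minimal metadata record. The field names below follow common Dublin Core / DCAT conventions, but the exact schema, identifiers, and URLs are illustrative assumptions — not the FAIR-R or FAIR² specification.

```python
dataset_record = {
    # Findable: persistent identifier plus rich descriptive metadata
    "identifier": "doi:10.0000/example-dataset",   # hypothetical DOI
    "title": "Example river-gauge readings",
    "keywords": ["hydrology", "open-data"],
    # Accessible: retrievable via a standard, open protocol
    "access_url": "https://example.org/data.csv",  # placeholder URL
    "protocol": "HTTPS",
    # Interoperable: shared formats and vocabularies
    "format": "text/csv",
    "schema": "https://example.org/schema.json",   # placeholder schema
    # Reusable: clear licence and provenance
    "license": "CC-BY-4.0",
    "provenance": "collected 2024 by a hypothetical agency",
}

# An AI-readiness check might require fields FAIR alone does not,
# e.g. documented collection context or known biases in the data.
missing_for_ai = [k for k in ("collection_context", "known_biases")
                  if k not in dataset_record]
print(missing_for_ai)  # ['collection_context', 'known_biases']
```

The gap the final check exposes is the point of the debate above: a record can satisfy all four FAIR letters and still lack the context an AI pipeline needs to use the data responsibly.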
Book by Tim Wu: “Our world is dominated by a handful of tech platforms. They provide great conveniences and entertainment, but also stand as some of the most effective instruments of wealth extraction ever invented, seizing immense amounts of money, data, and attention from all of us. An economy driven by digital platforms and AI influence offers the potential to enrich us, and also threatens to marginalize entire industries, widen the wealth gap, and foster a two-class nation. As technology evolves and our markets adapt, can society cultivate a better life for everyone? Is it possible to balance economic growth and egalitarianism, or are we too far gone?
Tim Wu—the preeminent scholar and former White House official who coined the phrase “net neutrality”—explores the rise of platform power and details the risks and rewards of working within such systems. The Age of Extraction tells the story of an Internet that promised widespread wealth and democracy in the 1990s and 2000s, only to create new economic classes and aid the spread of autocracy instead. Wu frames our current moment with lessons from recent history—from generative AI and predictive social data to the antimonopoly and crypto movements—and envisions a future where technological advances can serve the greatest possible good. Concise and hopeful, The Age of Extraction offers consequential proposals for taking back control in order to achieve a better economic balance and prosperity for all…(More)”.
UNECE Report: “Developed under the Applying Data Science and Modern Methods Group of the High-Level Group for the Modernisation of Official Statistics (HLG-MOS), this framework provides practical guidance on the responsible use of Artificial Intelligence (AI) and Machine Learning (ML) in the production of official statistics. It outlines key principles such as fairness, accountability, transparency, and validity, accompanied by concrete guidelines and examples. The framework aims to help national and international statistical organisations adopt AI technologies in a trustworthy, ethical, and sustainable manner…(More)”.
Paper by Marta Zorrilla and Juan Yebenes: “The growing importance of data as a driver of the digital economy is promoting the creation of data spaces for the secure and controlled exchange of data between organizations. Data governance is emerging as an essential pillar to ensure efficient, ethical and transparent access and use of data in these ecosystems. The article reviews the state of the art to identify the specific requirements that data governance must address in data spaces and proposes a reference enterprise architecture to facilitate the design, development and implementation of a data governance system for a data space scenario. The proposed framework has already been formally defined and validated in the context of Industry 4.0, and is now adapted to the particular characteristics and needs of data spaces. This architecture focuses on key aspects of data governance in data spaces, such as new requirements, principles, organization, roles and responsibilities, and data quality, security and metadata management, as well as the data lifecycle in the data space. This research contributes to guiding data space governance bodies in translating data strategies and high-level governance principles into concrete architectural components that establish the capacities to be implemented within the data ecosystem. To support practical adoption, this work also provides clarifying examples of different blocks of architecture…(More)”.
Press Release by the European Commission: “As of today, new rules under the Digital Services Act (DSA) will allow researchers to gain unprecedented access to very large online platforms’ data to study the societal impact stemming from the platforms’ systems.

Such access is now possible following the entry into force of the delegated act on data access.
The measures will allow qualified researchers to request access to previously unavailable data from very large online platforms and search engines. Platforms’ own data is a key element in understanding the possible systemic risks stemming from, for example, recommender systems. It will also help address risks such as the spread of illegal content and financial scams. This will help ensure a safer online experience for users and, importantly, for minors.
While creating opportunities for new studies, these measures also include safeguards to protect the companies’ interests. To get access to platforms’ data, researchers will have to undergo a strict assessment carried out by Digital Services Coordinators, the national authorities responsible for the implementation of the DSA. If researchers fulfil all the criteria prescribed by the law and if the research projects are relevant for studying systemic risks, including the spread of illegal content or negative effects on mental health, under the DSA, the platforms are legally required to comply with their data requests. Digital Services Coordinators are already working together to ensure that data access applications will be assessed uniformly across Member States and in due time…(More)”.