
Stefaan Verhulst

Article by Robert Cuffe: “Wind and waves are set to be included in calculations of the size of countries’ economies for the first time, as part of changes approved at the United Nations.

Assets like oilfields were already factored in under the rules – last updated in 2008.

This update aims to capture areas that have grown since then, such as the cost of using up natural resources and the value of data.

The changes come into force in 2030 and could mean an increase in estimates of the size of the UK economy, making promises to spend a fixed share of the economy on defence or aid more expensive.

The economic value of wind and waves can be estimated from the price of all the energy that can be generated from the turbines in a country.

The update also treats data as an asset in its own right on top of the assets that house it like servers and cables.

Governments use a common rule book for measuring the size of their economies and how they grow over time.

These changes to the rule book are “tweaks, rather than a rewrite”, according to Prof Diane Coyle of the University of Cambridge.

Ben Zaranko of the Institute for Fiscal Studies (IFS) calls it an “accounting” change, rather than a real change. He explains: “We’d be no better off in a material sense, and tax revenues would be no higher.”

But it could make economies look bigger, creating a possible future spending headache for the UK government…(More)”.

Data, waves and wind to be counted in the economy

Paper by Shayne Longpre et al: “Progress in AI is driven largely by the scale and quality of training data. Despite this, there is a deficit of empirical analysis examining the attributes of well-established datasets beyond text. In this work we conduct the largest and first-of-its-kind longitudinal audit across modalities (popular text, speech, and video datasets), from their detailed sourcing trends and use restrictions to their geographical and linguistic representation. Our manual analysis covers nearly 4000 public datasets between 1990 and 2024, spanning 608 languages, 798 sources, 659 organizations, and 67 countries. We find that multimodal machine learning applications have overwhelmingly turned to web-crawled, synthetic, and social media platforms, such as YouTube, for their training sets, eclipsing all other sources since 2019. Secondly, tracing the chain of dataset derivations, we find that while less than 33% of datasets are restrictively licensed, over 80% of the source content in widely used text, speech, and video datasets carries non-commercial restrictions. Finally, counter to the rising number of languages and geographies represented in public AI training datasets, our audit demonstrates that measures of relative geographical and multilingual representation have failed to significantly improve their coverage since 2013. We believe the breadth of our audit enables us to empirically examine trends in data sourcing, restrictions, and Western-centricity at an ecosystem level, and that visibility into these questions is essential to progress in responsible AI. As a contribution to ongoing improvements in dataset transparency and responsible use, we release our entire multimodal audit, allowing practitioners to trace data provenance across text, speech, and video…(More)”.

Bridging the Data Provenance Gap Across Text, Speech and Video

Paper by C. Huang & L. Soete: “Historically, open science has been effective in facilitating knowledge sharing and in promoting and diffusing innovations. However, as a result of geopolitical tensions, technological sovereignty has recently been increasingly emphasized in various countries’ science and technology policymaking, posing a challenge to open science policy. In this paper, we argue that the European Union significantly benefits from and contributes to open science and should continue to support it. Similarly, China embraced foreign technologies and engaged in open science as its economy developed rapidly over the last 40 years. Today both economies could learn from each other in finding the right balance between open science and technological sovereignty, particularly given their very different policy experiences and the urgency of implementing new technologies to address the grand challenges, such as climate change, faced by humankind…(More)”.

Reconciling open science with technological sovereignty

Paper by Alessandro Narduzzo and Valentina Forrer: “Failure, even in the context of innovation, is primarily conceived and experienced as an inevitable (e.g., innovation funnel) or unintended (e.g., unexpected drawbacks) outcome. This paper aims to provide a more systematic understanding of innovation failure by considering and problematizing the case of “intelligent failures”, namely experiments that are intentionally designed and implemented to explore technological and market uncertainty. We conceptualize intelligent failure through an epistemic perspective that recognizes its contribution to challenging and revising the organizational knowledge system. We also outline an original process model of intelligent failure that fully reveals its potential and distinctiveness in the context of learning from failure (i.e., failure as an outcome vs failure of expectations and initial beliefs), analyzing and comparing intended and unintended innovation failures. By positioning intelligent failure in the context of innovation and explaining its critical role in enhancing the ability of innovative firms to achieve breakthroughs, we identify important landmarks for practitioners in designing an intelligent failure approach to innovation…(More)”.

Nurturing innovation through intelligent failure: The art of failing on purpose

Paper by Moritz U. G. Kraemer et al: “Infectious disease threats to individual and public health are numerous, varied and frequently unexpected. Artificial intelligence (AI) and related technologies, which are already supporting human decision making in economics, medicine and social science, have the potential to transform the scope and power of infectious disease epidemiology. Here we consider the application to infectious disease modelling of AI systems that combine machine learning, computational statistics, information retrieval and data science. We first outline how recent advances in AI can accelerate breakthroughs in answering key epidemiological questions and we discuss specific AI methods that can be applied to routinely collected infectious disease surveillance data. Second, we elaborate on the social context of AI for infectious disease epidemiology, including issues such as explainability, safety, accountability and ethics. Finally, we summarize some limitations of AI applications in this field and provide recommendations for how infectious disease epidemiology can harness most effectively current and future developments in AI…(More)”.

Artificial intelligence for modelling infectious disease epidemics

Book edited by Agnieszka Szpak et al: “…argues that cities are becoming more active participants in international law-making and challenging the previously dominant nation-state approach of recent history.

Chapters explore key literature and legal regulations surrounding cities, providing the latest information on their international normative activities. This book includes multiple interviews conducted with official representatives of cities and various international institutions, such as UN-Habitat, the EU Committee of the Regions, and the Congress of Local and Regional Authorities of the Council of Europe. The authors investigate how, despite their strong role in international relations and the implementation of international law, the importance of cities has still not been adequately reflected in the structures of the Council of Europe, the EU and the UN. Ultimately, the book finds that cities have more impact on policy-making than on decision-making processes…(More)”.

Cities in International Decision-Making

Guide by Global Partnership for Sustainable Development Data: “… introduces milestones on the path to mobile network data access. While it is aimed at stakeholders in national statistical systems and across national governments in general, the lessons should resonate with others seeking to take this route. The steps in this guide are written in the order in which they should be taken, and some readers who have already embarked on this journey may find they have completed some of these steps. 

This roadmap is meant to be followed in steps, and readers may start, stop, and return to points on the path at any time.

The path to mobile network data access has three milestones:

  1. Evaluating the opportunity – setting clear goals for the desired impact of data innovation.
  2. Engaging with stakeholders – getting critical stakeholders to support your cause.
  3. Executing collaboration agreements – signing a written agreement among partners…(More)”

A Roadmap to Accessing Mobile Network Data for Statistics

Paper by Stefaan Verhulst, Andrew Zahuranec and Hannah Chafetz: “In today’s rapidly evolving AI ecosystem, making data ready for AI (optimized for training, fine-tuning, and augmentation) is more critical than ever. While the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) have guided data management and open science, they do not inherently address AI-specific needs. Expanding FAIR to FAIR-R, incorporating Readiness for AI, could accelerate the responsible use of open data in AI applications that serve the public interest. This paper introduces the FAIR-R framework and identifies current efforts for enhancing AI-ready data through improved data labeling, provenance tracking, and new data standards. However, key challenges remain: How can data be structured for AI without compromising ethics? What governance models ensure equitable access? How can AI itself be leveraged to improve data quality? Answering these questions is essential for unlocking the full potential of AI-driven innovation while ensuring responsible and transparent data use…(More)”.

Moving Toward the FAIR-R principles: Advancing AI-Ready Data

Blog by Elena Murray, Moiz Shaikh, and Stefaan G. Verhulst: “Young people seeking essential services — whether mental health support, education, or government benefits — often face a critical challenge: they are asked to share their data without having a say in how it is used or for what purpose. While the responsible use of data can help tailor services to better meet their needs and ensure that vulnerable populations are not overlooked, a lack of trust in data collection and usage can have the opposite effect.

When young people feel uncertain or uneasy about how their data is being handled, they may adopt privacy-protective behaviors — choosing not to seek services at all or withholding critical information out of fear of misuse. This risks deepening existing inequalities rather than addressing them.

To build trust, those designing and delivering services must engage young people meaningfully in shaping data practices. Understanding their concerns, expectations, and values is key to aligning data use with their preferences. But how can this be done effectively?

This question was at the heart of a year-long global collaboration through the NextGenData project, which brought together partners worldwide to explore solutions. Today, we are releasing a key deliverable of that project: The Youth Engagement Toolkit for Responsible Data Reuse:

Based on an approach developed and piloted during the NextGenData project, the Toolkit describes an innovative methodology for engaging young people in responsible data reuse practices, to improve services that matter to them…(More)”.

Announcing the Youth Engagement Toolkit for Responsible Data Reuse: An Innovative Methodology for the Future of Data-Driven Services

UN-Habitat: “…The guidelines aim to support national, regional and local governments, as well as relevant stakeholders, in leveraging digital technology for a better quality of life in cities and human settlements, while mitigating the associated risks, in order to achieve global visions of sustainable urban development in line with the New Urban Agenda, the 2030 Agenda for Sustainable Development and other relevant global agendas.

The aim is to promote a people-centred smart cities approach that is consistent with the purpose and principles of the Charter of the United Nations, including full respect for international law and the Universal Declaration of Human Rights, to ensure that innovation and digital technologies are used to help cities and human settlements achieve the Sustainable Development Goals and the New Urban Agenda.

The guidelines serve as a reference for Member States to implement people-centred smart city approaches in the preparation and implementation of smart city regulations, plans and strategies; to promote equitable access to, and life-long education and training of all people in, the opportunities provided by data, digital infrastructure and digital services in cities and human settlements; and to favour transparency and accountability.

The guidelines recognize local and regional governments (LRGs) as pivotal actors in closing digital divides and in localizing the objectives and principles of these guidelines, as well as the Global Digital Compact, for an open, safe, sustainable and secure digital future. The guidelines are intended to complement existing global principles on digital development through an additional focus on the key role of local and regional governments, and of local action, in advancing people-centred smart city development toward the vision of the Global Digital Compact…(More)”.

International Guidelines on People Centred Smart Cities
