From Safer Cities to Healthier Lives: The Top 10 Emerging Technologies of 2025


World Economic Forum: “As cities become more connected, collaborative sensing is enabling vehicles, traffic systems and emergency services to coordinate in real time – improving safety and easing congestion. This is just one of the World Economic Forum’s Top 10 Emerging Technologies of 2025 that are expected to deliver real-world impact within three to five years and address urgent global challenges…The report outlines what is needed to bring them to scale: investment, infrastructure, standards and responsible governance, and calls on business, government and the scientific community to collaborate to ensure their development serves the public good…(More)”.

Trajectory of emerging technologies in healthcare over time.

This year’s edition highlights a trend towards technology convergence. For example, structural battery composites combine energy storage with structural design, while engineered living therapeutics merge synthetic biology and precision medicine. Such integration signals a shift away from standalone innovations to more integrated, systems-based solutions, reshaping what is possible.

“The path from breakthrough research to tangible societal progress depends on transparency, collaboration, and open science,” said Frederick Fenter, Chief Executive Editor, Frontiers. “Together with the World Economic Forum, we have once again delivered trusted, evidence-based insights on emerging technologies that will shape a better future for all.”

The Top 10 Emerging Technologies of 2025

Trust and safety in a connected world:

1. Collaborative sensing

Networks of connected sensors can help vehicles, cities and emergency services share information in real time. This can improve safety, reduce traffic and speed up crisis response.

2. Generative watermarking

This technology adds invisible tags to AI-generated content, making it easier to tell what is real and what is not. It could help fight misinformation and protect trust online…(More)”.
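The statistical idea behind this kind of watermark can be sketched in a few lines, using a simplified "green list" scheme of the sort studied in the research literature: at each generation step, a pseudo-random half of the vocabulary (seeded by the previous token) is favoured, and a detector later checks whether suspiciously many tokens fall in those halves. Everything below is an illustrative toy (the tiny vocabulary, the function names, the 50/50 split), not any deployed system:

```python
import hashlib
import random

# Toy vocabulary standing in for a language model's token set.
VOCAB = ["the", "a", "cat", "dog", "runs", "sleeps", "quietly", "fast",
         "and", "then", "happily", "softly", "jumps", "walks"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous
    token; the 'green' half is favoured during generation."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(n_tokens: int, seed: int = 0) -> list:
    """Toy 'model': samples uniformly, but only from the green list."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(n_tokens):
        greens = green_list(tokens[-1])
        tokens.append(rng.choice(sorted(greens)))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens drawn from their green list.
    Roughly 0.5 for ordinary text, near 1.0 for watermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / (len(tokens) - 1)

marked = generate(50)
unmarked = ["the"] + random.Random(1).choices(VOCAB, k=50)
print(green_fraction(marked))    # high: text carries the watermark
print(green_fraction(unmarked))  # near chance: no watermark present
```

The key property is that the tag is invisible to a reader (the text is ordinary text) but statistically detectable by anyone who knows the seeding rule, which is what makes such schemes candidates for labeling AI-generated content at scale.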

Introducing CC Signals: A New Social Contract for the Age of AI


Creative Commons: “Creative Commons (CC) today announces the public kickoff of the CC signals project, a new preference signals framework designed to increase reciprocity and sustain a creative commons in the age of AI. The development of CC signals represents a major step forward in building a more equitable, sustainable AI ecosystem rooted in shared benefits. This step is the culmination of years of consultation and analysis. As we enter this new phase of work, we are actively seeking input from the public. 

As artificial intelligence (AI) transforms how knowledge is created, shared, and reused, we are at a fork in the road that will define the future of access to knowledge and shared creativity. One path leads to data extraction and the erosion of openness; the other leads to a walled-off internet guarded by paywalls. CC signals offer another way, grounded in the nuanced values of the commons expressed by the collective.

Based on the same principles that gave rise to the CC licenses and tens of billions of works openly licensed online, CC signals will allow dataset holders to signal their preferences for how their content can be reused by machines based on a set of limited but meaningful options shaped in the public interest. They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models.

“CC signals are designed to sustain the commons in the age of AI,” said Anna Tumadóttir, CEO, Creative Commons. “Just as the CC licenses helped build the open web, we believe CC signals will help shape an open AI ecosystem grounded in reciprocity.”

CC signals recognize that change requires systems-level coordination. They are tools that will be built for machine and human readability, and are flexible across legal, technical, and normative contexts. At their core, however, CC signals are anchored in mobilizing the power of the collective. While CC signals may range in enforceability (legally binding in some cases, normative in others), their application will always carry ethical weight that says: we give, we take, we give again, and we are all in this together.

Now Ready for Feedback 

More information about CC signals and early design decisions is available on the CC website. We are committed to developing CC signals transparently and alongside our partners and community. We are actively seeking public feedback and input over the next few months as we work toward an alpha launch in November 2025…(More)”

Inclusive Rule-Making by International Organisations


Book edited by Rita Guerreiro Teixeira et al.: “…explores the opportunities and challenges of implementing inclusive rule-making processes in international organisations (IOs). Expert authors examine the impact of inclusiveness across a wide range of organisations and policy issues, from climate change and peace and security to energy governance and securities regulation.

Chapters combine novel academic research with insights from IO practitioners to identify ways of making rule-making more inclusive, building on the ongoing work of the Partnership of International Organisations for Effective International Rule-Making. They utilise both qualitative and quantitative research methods to analyse the functions and consequences of inclusive rule-making; mechanisms for citizen participation; and the challenges of engaging with private actors and for-profit stakeholders. Ultimately, the book highlights key strategies for maintaining favourable public perceptions and trust in international institutions, emphasizing the importance of making rule-making more accountable, legitimate and accessible…(More)”.

Practitioner perspectives on informing decisions in One Health sectors with predictive models


Paper by Kim M. Pepin: “Every decision a person makes is based on a model. A model is an idea about how a process works based on previous experience, observation, or other data. Models may not be explicit or stated (Johnson-Laird, 2010), but they serve to simplify a complex world. Models vary dramatically from conceptual (idea) to statistical (mathematical expression relating observed data to an assumed process and/or other data) or analytical/computational (quantitative algorithm describing a process). Predictive models of complex systems describe an understanding of how systems work, often in mathematical or statistical terms, using data, knowledge, and/or expert opinion. They provide means for predicting outcomes of interest, studying different management decision impacts, and quantifying decision risk and uncertainty (Berger et al. 2021; Li et al. 2017). They can help decision-makers assimilate how multiple pieces of information determine an outcome of interest about a complex system (Berger et al. 2021; Hemming et al. 2022).

People rely daily on system-level models to reach objectives. Choosing the fastest route to a destination is one example. Such a decision may be based on either a mental model of the road system developed from previous experience or a traffic prediction mapping application based on mathematical algorithms and current data. Either way, a system-level model has been applied and there is some uncertainty. In contrast, predicting outcomes for new and complex phenomena, such as emerging disease spread, a biological invasion risk (Chen et al. 2023; Elderd et al. 2006; Pepin et al. 2022), or climatic impacts on ecosystems is more uncertain. Here, public service decision-makers may turn to mathematical models when expert opinion and experience do not resolve enough uncertainty about decision outcomes. But using models to guide decisions also relies on expert opinion and experience. Moreover, even technical experts need to make modeling choices regarding model structure and data inputs that carry uncertainty (Elderd et al. 2006), and these might not be completely objective decisions (Bedson et al. 2021). Thus, using models to guide decisions involves subjectivity from both the developer and the end-user, which can lead to apprehension or a lack of trust about using models to inform decisions.

Models may be particularly advantageous to decision-making in One Health sectors, including health of humans, agriculture, wildlife, and the environment (hereafter called One Health sectors) and their interconnectedness (Adisasmito et al. 2022)…(More)”.
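The paper’s distinction between a single point prediction and quantified decision risk can be illustrated with a minimal Monte Carlo sketch. All numbers below (growth rates, the 100-case threshold, the effect of the hypothetical intervention) are invented for illustration; a real One Health model would be far richer:

```python
import random

def simulate_outbreak(r_mean, r_sd, generations=10, initial=10, rng=None):
    """One stochastic run of a toy branching process: each generation,
    cases multiply by a growth rate drawn from a normal distribution."""
    rng = rng or random.Random()
    cases = initial
    for _ in range(generations):
        r = max(0.0, rng.gauss(r_mean, r_sd))
        cases *= r
    return cases

def risk_of_exceeding(threshold, r_mean, r_sd, runs=5000, seed=42):
    """Monte Carlo estimate of P(final cases > threshold)."""
    rng = random.Random(seed)
    hits = sum(simulate_outbreak(r_mean, r_sd, rng=rng) > threshold
               for _ in range(runs))
    return hits / runs

# Hypothetical decision: do nothing (growth rate ~1.3) vs. an
# intervention assumed to bring the growth rate down to ~1.0.
p_no_action = risk_of_exceeding(100, r_mean=1.3, r_sd=0.2)
p_intervene = risk_of_exceeding(100, r_mean=1.0, r_sd=0.2)
print(p_no_action, p_intervene)  # exceedance risk with vs. without action
```

Rather than a single forecast, the decision-maker sees the probability that each action leads cases to exceed the threshold, which is the kind of explicit risk and uncertainty quantification the paper argues such models can contribute.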

AI-enhanced nudging in public policy: why to worry and how to respond


Paper by Stefano Calboli & Bart Engelen: “What role can artificial intelligence (AI) play in enhancing public policy nudges and the extent to which these help people achieve their own goals? Can it help mitigate or even overcome the challenges that nudgers face in this respect? This paper discusses how AI-enhanced personalization can help make nudges more means-paternalistic and thus more respectful of people’s ends. We explore the potential added value of AI by analyzing the extent to which it can (1) help identify individual preferences and (2) tailor different nudging techniques to different people based on variations in their susceptibility to those techniques. However, we also argue that the successes booked in this respect in the for-profit sector cannot simply be replicated in public policy. While AI can bring benefits to means-paternalist public policy nudging, it also has predictable downsides (lower effectiveness compared to the private sector) and risks (graver consequences compared to the private sector). We discuss the practical implications of all this and propose novel strategies that both consumers and regulators can employ to respond to private AI use in nudging with the aim of safeguarding people’s autonomy and agency…(More)”. See also: Engagement Integrity: Ensuring Legitimacy at a time of AI-Augmented Participation

The Global A.I. Divide


Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.

Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.

Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.

“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”

Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.

The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power”…(More)”.

Library Catalogues as Data: Research, Practice and Usage


Book by Paul Gooding, Melissa Terras, and Sarah Ames: “Through the web of library catalogues, library management systems and myriad digital resources, libraries have become repositories not only for physical and digital information resources but also for enormous amounts of data about the interactions between these resources and their users. Bringing together leading practitioners and academic voices, this book considers library catalogue data as a vital research resource.

The book is divided into four sections, each approaching library catalogues, collections and records from a different angle: methods for examining such data; the politics of catalogues and library data; their interdisciplinary potential; and practical uses and applications of catalogues as data. Other topics the volume discusses include:

  • Practical routes to preparing library catalogue data for researchers
  • The ethics of library metadata privacy and reuse
  • Data-driven decision making
  • Data quality and collections bias
  • Preserving, resurrecting and restoring data
  • The uses and potential of historical library data
  • The intersection of catalogue data, AI and Large Language Models (LLMs)

This comprehensive book will be an essential read for practitioners in the GLAM sector, particularly those dealing with collections and catalogue data, and LIS academics and students…(More)”.

Misinformation by Omission: The Need for More Environmental Transparency in AI


Paper by Sasha Luccioni, Boris Gamazaychikov, Theo Alves da Costa, and Emma Strubell: “In recent years, Artificial Intelligence (AI) models have grown in size and complexity, driving greater demand for computational power and natural resources. In parallel to this trend, transparency around the costs and impacts of these models has decreased, meaning that the users of these technologies have little to no information about their resource demands and subsequent impacts on the environment. Despite this dearth of adequate data, escalating demand for figures quantifying AI’s environmental impacts has led to numerous instances of misinformation evolving from inaccurate or de-contextualized best-effort estimates of greenhouse gas emissions. In this article, we explore pervasive myths and misconceptions shaping public understanding of AI’s environmental impacts, tracing their origins and their spread in both the media and scientific publications. We discuss the importance of data transparency in clarifying misconceptions and mitigating these harms, and conclude with a set of recommendations for how AI developers and policymakers can leverage this information to mitigate negative impacts in the future…(More)”.

The Devil’s Advocate: What Happens When Dissent Becomes Digital


Article by Anthea Roberts: “But what if the devil’s advocate wasn’t human at all? What if it was an AI agent—faceless, rank-agnostic, apolitically neutral? A devil without a career to lose. Here’s where the inversion occurs: artificial intelligence enabling more genuine human conversation.

At Dragonfly Thinking, we’ve been experimenting with this concept. We call this Devil’s Advocate your “Critical Friend”: an AI agent designed to do what humans find personally difficult and professionally dangerous, namely providing systematic criticism without career consequences.

The magic isn’t in the AI’s intelligence. It’s in how removing the human face transforms the social dynamics of dissent.

When critical feedback comes from an AI, no one’s promotion is at risk. The criticism can be thorough without being insubordinate. Teams can engage with substance rather than navigating office politics.

The AI might note: “Previous digital transformations show 73% failure rate when legacy system dependencies exceed 40%. This proposal shows significant dependencies.” It’s the AI saying what the tech lead knows but can’t safely voice, at least not alone.

Does criticism from code carry less weight because there’s no skin in the game? Counterintuitively, we’ve found the opposite. Without perceived motives or political agendas, the criticism becomes clearer, more digestible.

Ritualizing Productive Dissent

Imagine every major initiative automatically triggering AI analysis. Not optional. Built in like a financial review.

The ritual unfolds:

Monday, 2 PM: The transformation strategy is pitched. Energy builds. Heads nod. The vision is compelling.

Tuesday, 9 AM: An email arrives: “Devil’s Advocate Analysis – Digital Transformation Initiative.” Sender: DA-System. Twelve pages of systematic critique. People read alone, over coffee. Some sections sting. Others confirm private doubts.

Wednesday, 10 AM: The team reconvenes. Printouts are marked up. The tech lead says, “Section 3.2 about integration dependencies—we need to address this.” The ops head adds, “The adoption curve analysis on page 8 matches what we saw in Phoenix.”

Thursday: A revised strategy goes forward. Not perfect, but honest about assumptions and clear about risks.

When criticism is ritualized and automated, it stops being personal. It becomes data…(More)”.

ChatGPT Has Already Polluted the Internet So Badly That It’s Hobbling Future AI Development


Article by Frank Landymore: “The rapid rise of ChatGPT — and the cavalcade of competitors’ generative models that followed suit — has polluted the internet with so much useless slop that it’s already kneecapping the development of future AI models.

As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. 

Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it’s originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI “model collapse.”

As a consequence, the finite amount of data predating ChatGPT’s rise becomes extremely valuable. In a new feature, The Register likens this to the demand for “low-background steel,” or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US’s Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so too did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what’s old is new: a major source of low-background steel, even today, is WW1- and WW2-era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919…(More)”.