Sharing trustworthy AI models with privacy-enhancing technologies


OECD Report: “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation…(More)”.
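
As a concrete illustration of one of the PETs named above, the following minimal sketch shows the Laplace mechanism of differential privacy applied to a simple count query before release; the dataset, threshold, and epsilon value are illustrative assumptions, not examples from the report.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Return a differentially private count of values above a threshold.

    Laplace mechanism: a counting query has sensitivity 1 (adding or removing
    one record changes the count by at most 1), so noise drawn from
    Laplace(1/epsilon) gives epsilon-differential privacy for the released answer.
    """
    true_count = sum(v > threshold for v in values)        # exact answer, never released
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise                               # noisy answer, safe to share

# Illustrative use: share an aggregate statistic rather than raw records.
ages = [34, 29, 51, 47, 62, 38, 45]   # hypothetical sensitive data
print(dp_count(ages, threshold=40, epsilon=0.5))
```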

Understanding the Impacts of Generative AI Use on Children


Primer by The Alan Turing Institute and LEGO Foundation: “There is a growing body of research looking at the potential positive and negative impacts of generative AI and its associated risks. However, there is a lack of research that considers the potential impacts of these technologies on children, even though generative AI is already being deployed within many products and systems that children engage with, from games to educational platforms. Children have particular needs and rights that must be accounted for when designing, developing, and rolling out new technologies, and more focus on children’s rights is needed. While children are the group that may be most impacted by the widespread deployment of generative AI, they are simultaneously the group least represented in decision-making processes relating to the design, development, deployment or governance of AI. The Alan Turing Institute’s Children and AI and AI for Public Services teams explored the perspectives of children, parents, carers and teachers on generative AI technologies. Their research is guided by the ‘Responsible Innovation in Technology for Children’ (RITEC) framework for digital technology, play and children’s wellbeing, established by UNICEF and funded by the LEGO Foundation, and seeks to examine the potential impacts of generative AI on children’s wellbeing. The utility of the RITEC framework is that it allows for the qualitative analysis of wellbeing to take place by foregrounding more specific factors such as identity and creativity, which are further explored in each of the work packages.

The project provides unique and much needed insights into impacts of generative AI on children through combining quantitative and qualitative research methods…(More)”.

Generative AI Outlook Report


Outlook report, prepared by the European Commission’s Joint Research Centre (JRC): “…examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. GenAI is a disruptive technology due to its capability of producing human-like content at an unprecedented scale. As such, it holds multiple opportunities for advancements across various sectors, including healthcare, education, science, and creative industries. At the same time, GenAI also presents significant challenges, including the risk of amplified misinformation, bias, labour disruption, and privacy concerns. All of these issues are cross-cutting, and the rapid development of GenAI therefore requires a multidisciplinary approach to fully understand its implications. Against this backdrop, the Outlook report begins with an overview of the technological aspects of GenAI, detailing current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate adaptation of skills and strategies. The societal impact of GenAI is also addressed, with a focus on both the opportunities for inclusivity and the risks of bias and over-reliance. Considering these challenges, the regulatory framework section outlines the EU’s current legislative framework, such as the AI Act and horizontal data legislation, which aim to promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents. This section underscores the need for careful management and strategic policy interventions to maximize its potential benefits while mitigating the risks. The report concludes that GenAI has the potential to bring significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate the challenges and opportunities while ensuring that technological developments are fully aligned with democratic values and the EU legal framework…(More)”.

A New Paradigm for Fueling AI for the Public Good


Article by Kevin T. Frazier: “Imagine receiving this email in the near future: “Thank you for sharing data with the American Data Collective on May 22, 2025. After first sharing your workout data with SprintAI, a local startup focused on designing shoes for differently abled athletes, your data donation was also sent to an artificial intelligence research cluster hosted by a regional university. Your donation is on its way to accelerate artificial intelligence innovation and support researchers and innovators addressing pressing public needs!”

That is exactly the sort of message you could expect to receive if we made donations of personal data akin to blood donations—a pro-social behavior that may not immediately serve a donor’s individual needs but may nevertheless benefit the whole of the community. This vision of a future where data flow toward the public good is not science fiction—it is a tangible possibility if we address a critical bottleneck faced by innovators today.

Creating the data equivalent of blood banks may not seem like a pressing need or something that people should voluntarily contribute to, given widespread concerns about a few large artificial intelligence (AI) companies using data for profit-driven and, arguably, socially harmful ends. This narrow conception of the AI ecosystem fails to consider the hundreds of AI research initiatives and startups that have a desperate need for high-quality data. I was fortunate enough to meet leaders of those nascent AI efforts at Meta’s Open Source AI Summit in Austin, Texas. For example, I met with Matt Schwartz, who leads a startup that leans on AI to glean more diagnostic information from colonoscopies. I also connected with Edward Chang, a professor of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences, who relies on AI tools to discover new information on how and why our brains work. I also got to know Corin Wagen, whose startup is helping companies “find better molecules faster.” This is a small sample of the people leveraging AI for objectively good outcomes. They need your help. More specifically, they need your data.

A tragic irony shapes our current data infrastructure. Most of us share mountains of data with massive and profitable private parties—smartwatch companies, diet apps, game developers, and social media companies. Yet, AI labs, academic researchers, and public interest organizations best positioned to leverage our data for the common good are often those facing the most formidable barriers to acquiring the necessary quantity, quality, and diversity of data. Unlike OpenAI, they are not going to use bots to scrape the internet for data. Unlike Google and Meta, they cannot rely on their own social media platforms and search engines to act as perpetual data generators. And, unlike Anthropic, they lack the funds to license data from media outlets. So, while commercial entities amass vast datasets, frequently as a byproduct of consumer services and proprietary data acquisition strategies, mission-driven AI initiatives dedicated to public problems find themselves in a state of chronic data scarcity. This is not merely a hurdle—it is a systemic bottleneck choking off innovation where society needs it most, delaying or even preventing the development of AI tools that could significantly improve lives.

Individuals are, quite rightly, increasingly hesitant to share their personal information, with concerns about privacy, security, and potential misuse being both rampant and frequently justified by past breaches and opaque practices. Yet, in a striking contradiction, troves of deeply personal data are continuously siphoned by app developers, by tech platforms, and, often opaquely, by an extensive network of data brokers. This practice often occurs with minimal transparency and without informed consent concerning the full lifecycle and downstream uses of that data. This lack of transparency extends to how algorithms trained on this data make decisions that can impact individuals’ lives—from loan applications to job prospects—often without clear avenues for recourse or understanding, potentially perpetuating existing societal biases embedded in historical data…(More)”.

Energy and AI Observatory


IEA’s Energy and AI Observatory: “… provides up-to-date data and analysis on the growing links between the energy sector and artificial intelligence (AI). The new and fast-moving field of AI requires a new approach to gathering data and information, and the Observatory aims to provide regularly updated data and a comprehensive view of the implications of AI on energy demand (energy for AI) and of AI applications for efficiency, innovation, resilience and competitiveness in the energy sector (AI for energy). This first-of-a-kind platform is developed and maintained by the IEA, with valuable contributions of data and insights from the IEA’s energy industry and tech sector partners, and complements the IEA’s Special Report on Energy and AI…(More)”.

AI alone cannot solve the productivity puzzle


Article by Carl Benedikt Frey: “Each time fears of AI-driven job losses flare up, optimists reassure us that artificial intelligence is a productivity tool that will help both workers and the economy. Microsoft chief Satya Nadella thinks autonomous AI agents will allow users to name their goal while the software plans, executes and learns across every system. A dream tool — if efficiency alone were enough to solve the productivity problem.

History says it is not. Over the past half-century we have filled offices and pockets with ever-faster computers, yet labour-productivity growth in advanced economies has slowed from roughly 2 per cent a year in the 1990s to about 0.8 per cent in the past decade. Even China’s once-soaring output per worker has stalled.

The shotgun marriage of the computer and the internet promised more than enhanced office efficiency — it envisioned a golden age of discovery. By placing the world’s knowledge in front of everyone and linking global talent, breakthroughs should have multiplied. Yet research productivity has sagged. The average scientist now produces fewer breakthrough ideas per dollar than their 1960s counterpart.

What went wrong? As economist Gary Becker once noted, parents face a quality-versus-quantity trade-off: the more children they have, the less they can invest in each child. The same might be said for innovation.

Large-scale studies of inventive output confirm the result: researchers juggling more projects are less likely to deliver breakthrough innovations. Over recent decades, scientific papers and patents have become increasingly incremental. History’s greats understood why. Isaac Newton kept a single problem “constantly before me . . . till the first dawnings open slowly, by little and little, into a full and clear light”. Steve Jobs concurred: “Innovation is saying no to a thousand things.”

Human ingenuity thrives where precedent is thin. Had the 19th century focused solely on better looms and ploughs, we would enjoy cheap cloth and abundant grain — but there would be no antibiotics, jet engines or rockets. Economic miracles stem from discovery, not repeating tasks at greater speed.

Large language models gravitate towards the statistical consensus. A model trained before Galileo would have parroted a geocentric universe; fed 19th-century texts, it would have proved human flight impossible before the Wright brothers succeeded. A recent Nature review found that while LLMs lightened routine scientific chores, the decisive leaps of insight still belonged to humans. Even Demis Hassabis, whose team at Google DeepMind produced AlphaFold — a model that can predict the shape of a protein and is arguably AI’s most celebrated scientific feat so far — admits that achieving genuine artificial general intelligence (systems that can match or surpass humans across the full spectrum of cognitive tasks) may require “several more innovations”…(More)”.

5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation, in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”

Can AI Agents Be Trusted?


Article by Blair Levin and Larry Downes: “Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a level of programming on top of large language models (LLMs) that allow them to work towards specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal.
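
As a rough illustration of the agent layer described above, the sketch below shows a minimal goal-directed loop in which a model proposes actions, a harness executes them against a small tool registry, and observations feed back into the next decision. The `call_llm` stub, the tool names, and the stopping rule are hypothetical placeholders for illustration, not any vendor's actual API.

```python
# Minimal agent loop: the model proposes an action, the harness executes it,
# and the observation is fed back until the goal is met or the step budget runs out.
# All names here are illustrative assumptions.

def call_llm(prompt: str) -> dict:
    """Stub standing in for a real model call; returns a structured action."""
    return {"type": "finish", "answer": "stubbed answer"}

TOOLS = {
    "search": lambda query: f"results for {query}",   # stub tools the agent may invoke
    "send_email": lambda body: "sent",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history))          # model decides the next step
        if action["type"] == "finish":                 # model signals the goal is met
            return action["answer"]
        tool = TOOLS[action["tool"]]
        observation = tool(action["input"])            # take an action in the world
        history.append(f"Did {action['tool']}, saw: {observation}")  # adapt based on results
    return "stopped: step budget exhausted"

print(run_agent("summarize today's meetings"))
```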

Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries across a wide range of industries and applications and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant, handling calendar management, performing directed research and analysis, finding, negotiating for, and purchasing goods and services, curating content and taking over basic communications, learning and optimizing themselves along the way.

The idea of personal AI agents goes back decades, but the technology finally appears ready for prime time. Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?

The answers to these questions will determine whether and how quickly users embrace personal AI agents, and whether their widespread deployment will enhance or damage business relationships and brand value…(More)”.

AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality


Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine-understandable and be able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including but not limited to accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
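
One hypothetical way to make a statistical series machine-understandable in the sense described above is to publish structured metadata that carries the quality context alongside the data itself. The record below is an invented illustration; its field names, endpoint, and values are assumptions and do not come from any FCSM specification.

```python
# Hypothetical machine-readable metadata record for a published statistical series.
# An AI system consuming the data through an API could also retrieve this quality
# context, so that answers built on the series can cite its limitations and provenance.
series_metadata = {
    "dataset": "monthly_unemployment_rate",
    "publisher": "Example Statistical Agency",                    # hypothetical agency
    "api_endpoint": "https://example.gov/api/v1/unemployment",    # placeholder URL
    "last_updated": "2025-06-01",
    "units": "percent of civilian labor force, seasonally adjusted",
    "quality_dimensions": {
        "accuracy": "estimates subject to sampling error; see methodology note",
        "timeliness": "released about three weeks after the reference month",
        "accessibility": "open API, CSV and JSON formats",
        "credibility": "produced under published statistical standards",
    },
}
```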

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”
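
To make the idea of a controllable puzzle environment concrete, here is a minimal sketch using Tower of Hanoi, one puzzle family whose compositional complexity scales with a single parameter (the number of disks) while the logical structure stays fixed. The interface and function names are an illustration, not the authors' actual evaluation harness.

```python
# Tower of Hanoi as a controllable puzzle environment: difficulty scales with
# n_disks while the rules stay fixed, and any proposed solution (a list of
# (from_peg, to_peg) moves) can be checked exactly rather than judged by a model.

def check_hanoi_solution(n_disks: int, moves: list[tuple[int, int]]) -> bool:
    pegs = {0: list(range(n_disks, 0, -1)), 1: [], 2: []}   # peg 0 holds disks n..1
    for src, dst in moves:
        if not pegs[src]:
            return False                       # illegal: moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                       # illegal: larger disk on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n_disks, 0, -1))            # solved if all disks on peg 2

def optimal_hanoi(n_disks: int, src=0, aux=1, dst=2) -> list[tuple[int, int]]:
    """Reference solver: the optimal solution has 2**n_disks - 1 moves."""
    if n_disks == 0:
        return []
    return (optimal_hanoi(n_disks - 1, src, dst, aux)
            + [(src, dst)]
            + optimal_hanoi(n_disks - 1, aux, src, dst))

# A model's answer for a given n_disks can be scored with check_hanoi_solution;
# sweeping n_disks traces out the accuracy-versus-complexity curve the paper studies.
print(check_hanoi_solution(3, optimal_hanoi(3)))  # True
```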