5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”

Beyond the Checkbox: Upgrading the Right to Opt Out


Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systematic [5]. Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out toward all future sites visited or apps used, if a user so desires [8].
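A browser-level opt-out of this kind already has a concrete counterpart in the Global Privacy Control (GPC) signal, which participating browsers send as the HTTP header `Sec-GPC: 1` on every request. A minimal server-side sketch of honoring such a signal (the function and its use here are illustrative, not part of any specific framework):

```python
# Sketch: honoring a universal opt-out signal such as Global Privacy
# Control (GPC), which browsers transmit as the header "Sec-GPC: 1".
# The function below is a hypothetical server-side check.

def should_share_data(headers: dict) -> bool:
    """Return False when the request carries a universal opt-out."""
    # HTTP header names are case-insensitive; normalize before checking.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") != "1"

# A request from a browser with GPC enabled: do not share.
assert should_share_data({"Sec-GPC": "1"}) is False
# A request without the signal: sharing is not blocked by GPC.
assert should_share_data({"User-Agent": "ExampleBrowser/1.0"}) is True
```

The point of the article is that a check like this should happen once, upstream, before data leaves the user's device or reaches the ad ecosystem, rather than relying on every downstream recipient to honor a propagated flag.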

Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still shared just as if they had not opted out. The only difference is that in the former case the data is accompanied by a privacy flag propagating the opt-out to the data recipient [7]. However, if people opt out, their data should not be shared in the first place! The current system, relying on the propagation of opt-out signals and the deletion of incoming data by the recipient, is complicated, error-prone, violates the principle of data minimization, and is an obstacle to effective privacy enforcement. Changing the ad ecosystem is particularly important as it is not only used on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!..(More)”

Can AI Agents Be Trusted?


Article by Blair Levin and Larry Downes: “Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a level of programming on top of large language models (LLMs) that allow them to work towards specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal.
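The loop the authors describe (collect data, decide, act, adapt) can be sketched schematically. Everything below, from the `Plan` class to the stub planner and the tool names, is illustrative rather than any vendor's actual API:

```python
# Schematic sketch of an agent loop: a thin control layer on top of a
# model that plans toward a goal, acts through tools, observes results,
# and adapts on the next step. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Plan:
    tool: str        # which tool to call next, or "finish"
    argument: str    # input for that tool, or the final answer

def stub_planner(goal, history):
    """Stands in for the LLM: look the order up, then finish."""
    if not history:
        return Plan("lookup_order", goal)
    last_result = history[-1][1]
    return Plan("finish", f"Your order is {last_result}.")

def run_agent(goal, planner, tools, max_steps=10):
    history = []
    for _ in range(max_steps):
        plan = planner(goal, history)             # decide from goal + outcomes so far
        if plan.tool == "finish":
            return plan.argument                  # goal reached
        result = tools[plan.tool](plan.argument)  # act in the world
        history.append((plan, result))            # remember, so the next step adapts
    return None                                   # step budget exhausted

tools = {"lookup_order": lambda order_id: "shipped"}
print(run_agent("order-42", stub_planner, tools))
# prints: Your order is shipped.
```

The trust questions raised later in the article live inside `planner` and `tools`: whoever controls those components controls whose interests the agent actually serves.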

Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries across a wide range of industries and applications, and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant, handling calendar management, performing directed research and analysis, finding, negotiating for, and purchasing goods and services, curating content and taking over basic communications, learning and optimizing themselves along the way.

The idea of personal AI agents goes back decades, but the technology finally appears ready for prime-time. Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?

The answers to these questions will determine whether and how quickly users embrace personal AI agents, and if their widespread deployment will enhance or damage business relationships and brand value…(More)”.

What World Does Bitcoin Want To Build For Itself?


Article by Patrick Redford: “We often talk about baseball games as a metric for where we are, and we’re literally in the first inning,” one of the Winklevoss twins gloats. “And this game’s going to overtime.”

It’s the first day of Bitcoin 2025, industry day here at the largest cryptocurrency conference in the world. This Winklevoss is sharing the stage with the other one, plus Donald Trump’s newly appointed crypto and AI czar David Sacks. They are in the midst of a victory lap, laughing with the free ease of men who know they have it made. The mangled baseball metaphor neither lands nor elicits laughs, but that’s fine. He’s earned, or at any rate acquired, the right to be wrong.

This year’s Bitcoin Conference takes place amid a boom, the same month the price of a single coin stabilized above $100,000 for the first time. More than 35,000 people have descended on Las Vegas in the final week of May for the conference: bitcoin miners, bitcoin dealers, several retired athletes, three U.S. senators, two Trump children, one U.S. vice president, people who describe themselves as “content creators,” people who describe themselves as “founders,” venture capitalists, ex-IDF bodyguards, tax-dodging experts, crypto heretics, evangelists, paladins, Bryan Johnson, Eric Adams, and me, trying to figure out what they were all doing there together. I’m in Vegas talking to as many people as I can in order to conduct an assay of the orange pill. What is the argument for bitcoin, exactly? Who is making it, and why?

Here is the part of the story where I am supposed to tell you it’s all a fraud. I am supposed to point out that nobody has come up with a use case for blockchain technology in 17 years beyond various forms of money laundering; that half of these people have been prosecuted for one financial crime or another; that the game is rigged in favor of the casino and those who got there before you; that this is an onerous use of energy; that all the mystification around bitcoin is a fog intended to draw in suckers where they can be bled. All that stuff is true, but the trick is that being true isn’t quite the same thing as mattering.

The bitcoin people are winning…(More)”

TAPIS: A Simple Web Tool for Analyzing Citizen-Generated Data


Tool by CityObs: “Citizen observatories and communities collect valuable environmental data — but making sense of this data can be tricky, especially if you’re not a data expert. That’s why we created TAPIS: a free, easy-to-use web tool developed within the CitiObs project to help you view, manage, and analyze data collected from sensors and online platforms.

Why We Built TAPIS

The SensorThings API is a standard for sharing sensor data, used by many observatories. However, tools that help people explore this data visually and interactively have been limited. Often, users had to dig into complicated URLs and query parameters such as “$expand”, “$select”, “$orderby”, and “$filter” to extract the data they needed, as illustrated in tutorials and examples such as those collected by SensorUp [1].
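For illustration, a query combining these parameters might be assembled like this (the endpoint URL is hypothetical; the $-prefixed options follow the SensorThings API's OData-style syntax):

```python
# Sketch of a SensorThings API query URL using the OData-style options
# mentioned above. The base URL is a hypothetical endpoint.
from urllib.parse import urlencode

base = "https://example.org/SensorThings/v1.1/Things"
params = {
    "$select": "name,description",           # return only these fields
    "$expand": "Datastreams($select=name)",  # pull in related entities
    "$orderby": "name asc",                  # sort results by name
    "$filter": "name eq 'Air Quality'",      # server-side filtering
    "$top": "5",                             # limit the page size
}
url = base + "?" + urlencode(params)
print(url)
```

This is exactly the kind of hand-built URL that TAPIS's visual interface is designed to spare users from writing.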

TAPIS changes that. It gives you a visual interface to work with sensor data from different API standards (such as SensorThings API, STAplus, OGC API Features/Records, OGC Catalogue Service for the Web, S3 Services, Eclipse Data Connectors, and STAC) and data file formats (such as CSV, JSON, JSON-LD, GeoJSON, and GeoPackage). You can load the data into tables, filter or group it, and view it as maps, bar charts, pie charts, or scatter plots — all in your browser, with no installation required.

Key Features

  • Connects to online data sources (like OGC APIs, STAC, SensorThings, and CSV files)
  • Turns raw data into easy-to-read tables
  • Adds meaning to table columns
  • Visualizes data with different chart types
  • Links with MiraMon to create interactive maps

TAPIS is inspired by the look and feel of Orange Data Mining (a popular data science tool) — but runs entirely in your browser, making it accessible for all users, even those with limited technical skills…(More)”

Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa


Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. We compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution, to quantify accessibility across diverse settings. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
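The entropy-based measure is not spelled out in this excerpt; one plausible sketch, assuming inverse-travel-time weights over the markets reachable from a location, is:

```python
# Hypothetical sketch of an entropy-style accessibility score: convert the
# travel times from one location to its reachable markets into weights
# (closer markets weigh more) and take the Shannon entropy of the
# normalized weights. High entropy = access spread over several comparable
# markets (spatial redundancy); low entropy = dependence on one market.
# The paper's exact metric may differ; this only illustrates the idea.
import math

def access_entropy(travel_times_min: list[float]) -> float:
    weights = [1.0 / t for t in travel_times_min]   # inverse-time weighting
    total = sum(weights)
    shares = [w / total for w in weights]
    return -sum(p * math.log(p) for p in shares if p > 0)

# One dominant nearby market vs. three comparable ones:
print(round(access_entropy([5.0, 60.0, 60.0]), 3))   # low redundancy
print(round(access_entropy([20.0, 22.0, 25.0]), 3))  # high redundancy
```

A location served by a single market scores zero, capturing the paper's point that travel time alone misses whether residents have any fallback when one market fails.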

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”
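A minimal example of the kind of controllable puzzle environment the abstract describes is Tower of Hanoi, where complexity scales precisely with the number of disks while the logical structure stays identical, and a model's move sequence can be checked step by step rather than only at the final answer. (Illustrative; the paper's exact environments and verification may differ.)

```python
# Controllable-complexity puzzle sketch: Tower of Hanoi. The optimal
# solution for n disks has 2**n - 1 moves, so difficulty can be dialed
# up precisely while the rules never change. A reference generator like
# this lets an evaluator verify a model's reasoning trace move-by-move.

def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Optimal move sequence for n disks from peg src to peg dst."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)   # clear the top n-1 disks
            + [(src, dst)]                      # move the largest disk
            + hanoi_moves(n - 1, aux, src, dst))  # restack on top of it

for n in (3, 5, 7):
    moves = hanoi_moves(n)
    assert len(moves) == 2**n - 1   # complexity grows exponentially in n
    print(n, len(moves))
```

Sweeping `n` upward while scoring whole move sequences is one way to expose the accuracy collapse and the declining-effort pattern the authors report.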

Opening code, opening access: The World Bank’s first open source software release


Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.

With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.

This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.

The path for AI in poor nations does not need to be paved with billions


Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China…, as a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.

More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages. That makes these models less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity, and through public engagement.

Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.

The AI Policy Playbook


Playbook by AI Policymaker Network & Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: “Instead of discussing AI ethics in abstract terms, it focuses on building policies that work right away in emerging economies and respond to immediate development priorities. The Playbook emphasises that a one-size-fits-all solution doesn’t work. Rather, it illustrates shared challenges—like limited research capacity, fragmented data ecosystems, and compounding AI risks—while spotlighting national innovations and success stories. From drafting AI strategies to engaging communities and safeguarding rights, it lays out a roadmap grounded in local realities….What can you expect to find in the AI Policy Playbook:

  1. Policymaker Interviews
    Real-world insights from policymakers to understand their challenges and best practices.
  2. Policy Process Analysis
    Key elements from existing policies to extract effective strategies for AI governance, as well as policy mapping.
  3. Case Studies
    Examples of successes and lessons learnt from various countries to provide practical guidance.
  4. Recommendations
    Concrete solutions and recommendations from actors in the field to improve the policy development process, including quick tips for implementation and handling challenges.

What distinguishes this initiative is its commitment to peer learning and co-creation. The Africa-Asia AI Policymaker Network comprises over 30 high-level government partners who anchor the Playbook in real-world policy contexts. This ensures that the frameworks are not only theoretically sound but politically and socially implementable…(More)”