Usability for the World: Building Better Cities and Communities


Book edited by Elizabeth Rosenzweig and Amanda Davis: “Want to build cities that truly work for everyone? Usability for the World: Sustainable Cities and Communities reveals how human-centered design is key to thriving, equitable urban spaces. This isn’t just another urban planning book; it’s a practical guide to transforming cities, offering concrete strategies and real-world examples you can use today.

What if our cities could be both efficient and human-friendly? This book tackles the core challenge of modern urban development: balancing functionality with the well-being of residents. It explores the crucial connection between usability and sustainability, demonstrating how design principles, from universal design to life-centered design, create truly livable cities.

Interested in sustainable urban development? Usability for the World offers a global perspective, showcasing diverse approaches to creating equitable and resilient cities. Through compelling case studies, discover how user-centered design addresses pressing urban challenges. See how these principles connect directly to achieving the UN Sustainable Development Goals, specifically SDG 11: Sustainable Cities and Communities…(More)”.

TAPIS: A Simple Web Tool for Analyzing Citizen-Generated Data


Tool by CitiObs: “Citizen observatories and communities collect valuable environmental data — but making sense of this data can be tricky, especially if you’re not a data expert. That’s why we created TAPIS: a free, easy-to-use web tool developed within the CitiObs project to help you view, manage, and analyze data collected from sensors and online platforms.

Why We Built TAPIS

The SensorThings API is a standard for sharing sensor data, used by many observatories. However, tools that help people explore this data visually and interactively have been limited. Often, users had to dig into complicated URLs and query parameters such as “expand”, “select”, “orderby” and “filter” to extract the data they needed, as illustrated in tutorials and examples such as the ones collected by SensorUp [1].
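The manual workflow described above can be sketched in a few lines. The endpoint URL below is hypothetical, but the `$`-prefixed parameter names (`$expand`, `$select`, `$orderby`, `$filter`) are how the SensorThings API standard spells the query options mentioned in the text:

```python
import urllib.parse

# Hypothetical SensorThings API endpoint; real deployments expose the
# same v1.1 resource paths and "$"-prefixed OData-style parameters.
BASE = "https://sensors.example.org/v1.1"

params = {
    "$select": "result,phenomenonTime",      # keep only two columns
    "$filter": "result gt 25.0",             # observations above 25.0
    "$orderby": "phenomenonTime desc",       # newest first
    "$top": "10",                            # limit the page size
    "$expand": "Datastream($select=name)",   # pull in the stream name
}

url = BASE + "/Observations?" + urllib.parse.urlencode(params)
print(url)
```

Hand-assembling URLs like this for every question is exactly the friction TAPIS is designed to remove.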

TAPIS changes that. It gives you a visual interface to work with sensor data from different API standards (such as SensorThings API, STAplus, OGC API Features/Records, OGC Catalogue Service for the Web, S3 Services, Eclipse Data Connectors, and STAC) and data file formats (such as CSV, JSON, JSON-LD, GeoJSON, and GeoPackage). You can load the data into tables, filter or group it, and view it as maps, bar charts, pie charts, or scatter plots — all in your browser, with no installation required.

Key Features

  • Connects to online data sources (like OGC APIs, STAC, SensorThings, and CSV files)
  • Turns raw data into easy-to-read tables
  • Adds meaning to table columns
  • Visualizes data with different chart types
  • Links with MiraMon to create interactive maps

TAPIS is inspired by the look and feel of Orange Data Mining (a popular data science tool) — but runs entirely in your browser, making it accessible for all users, even those with limited technical skills…(More)”

AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality


Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine understandable and be able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including but not limited to accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
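To make the "machine understandable" goal concrete, a statistical dataset might publish its quality dimensions as a structured metadata record alongside the data. The field names below are illustrative inventions, not an FCSM or agency standard:

```python
import json

# Illustrative only: a machine-readable record exposing the FCSM
# quality dimensions named above. Field names are hypothetical.
record = {
    "dataset": "example-federal-statistic",
    "quality": {
        "accessibility": {"api": True, "bulk_download": True},
        "timeliness": {"reference_period": "2024-Q4",
                       "released": "2025-03-15"},
        "accuracy": {"sampling_error_cv_pct": 1.8},
        "credibility": {"producer": "Example Statistical Agency",
                        "methodology_url": "https://example.gov/methods"},
    },
}
print(json.dumps(record, indent=2))
```

A record like this is something an AI system can parse and weigh when deciding how to present a statistic, rather than quality context living only in prose documentation.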

Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa


Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. We compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution, to quantify accessibility across diverse settings. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
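One way to read the entropy-based metric (our interpretation for illustration; the paper's exact formulation may differ) is as Shannon entropy over a location's access shares to nearby markets, where higher entropy means access is spread across several comparable markets rather than hinging on one:

```python
import math

def access_entropy(travel_times_min):
    """Shannon entropy of a location's access distribution over nearby
    markets, weighting each market by inverse travel time. Higher
    entropy = access spread over several comparable markets (more
    spatial redundancy); lower = dependence on a single market.
    Illustrative sketch, not the paper's implementation."""
    weights = [1.0 / t for t in travel_times_min]
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log(p) for p in probs)

# A village with three markets at similar distances (redundant access)
# versus one dominated by a single nearby market.
redundant = access_entropy([20, 25, 30])
dependent = access_entropy([10, 120, 150])
print(redundant > dependent)  # more even access -> higher entropy
```

Under this reading, the rural areas the paper flags as lacking "spatial redundancy" are those whose entropy is low even when some market is nominally reachable.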

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”

Opening code, opening access: The World Bank’s first open source software release


Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.

With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.

This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.

The path for AI in poor nations does not need to be paved with billions


Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China… As a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.

More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages. This means these models are less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity, and engaging the public.

Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.

Assessing data governance models for smart cities: Benchmarking data governance models on the basis of European urban requirements


Paper by Yusuf Bozkurt, Alexander Rossmann, Zeeshan Pervez, and Naeem Ramzan: “Smart cities aim to improve residents’ quality of life by implementing effective services, infrastructure, and processes through information and communication technologies. However, without robust smart city data governance, much of the urban data potential remains underexploited, resulting in inefficiencies and missed opportunities for city administrations. This study addresses these challenges by establishing specific, actionable requirements for smart city data governance models, derived from expert interviews with representatives of 27 European cities. From these interviews, recurring themes emerged, such as the need for standardized data formats, clear data access guidelines, and stronger cross-departmental collaboration mechanisms. These requirements emphasize technology independence, flexibility to adapt across different urban contexts, and promoting a data-driven culture. By benchmarking existing data governance models against these newly established urban requirements, the study uncovers significant variations in their ability to address the complex, dynamic nature of smart city data systems. This study thus enhances the theoretical understanding of data governance in smart cities and provides municipal decision-makers with actionable insights for improving data governance strategies. In doing so, it directly supports the broader goals of sustainable urban development by helping improve the efficiency and effectiveness of smart city initiatives…(More)”.

Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities


Report by Stefaan Verhulst, Andrew J. Zahuranec, and Oscar Romero: “Trust is foundational to effective governance, yet its inherently abstract nature has made it difficult to measure and operationalize, especially in urban contexts. This report proposes a practical framework for city officials to diagnose and strengthen civic trust through observable indicators and actionable interventions.

Rather than attempting to quantify trust as an abstract concept, the framework distinguishes between the drivers of trust—direct experiences and institutional interventions—and its manifestations, both emotional and behavioral. Drawing on literature reviews, expert workshops, and field engagement with the New York City Civic Engagement Commission (CEC), we present a three-phase approach: (1) baseline assessment of trust indicators, (2) analysis of causal drivers, and (3) design and continuous evaluation of targeted interventions. The report illustrates the framework’s applicability through a hypothetical case involving the NYC Parks Department and a real-world case study of the citywide participatory budgeting initiative, The People’s Money. By providing a structured, context-sensitive, and iterative model for measuring civic trust, this report seeks to equip public institutions and city officials with a framework for meaningful measurement of civic trust…(More)”.

The AI Policy Playbook


Playbook by AI Policymaker Network & Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: “The Playbook moves away from discussing AI ethics in abstract terms; instead, it shows how to build policies that work right away in emerging economies and respond to immediate development priorities. The Playbook emphasises that a one-size-fits-all solution doesn’t work. Rather, it illustrates shared challenges—like limited research capacity, fragmented data ecosystems, and compounding AI risks—while spotlighting national innovations and success stories. From drafting AI strategies to engaging communities and safeguarding rights, it lays out a roadmap grounded in local realities… What can you expect to find in the AI Policy Playbook:

  1. Policymaker Interviews
    Real-world insights from policymakers to understand their challenges and best practices.
  2. Policy Process Analysis
    Key elements from existing policies to extract effective strategies for AI governance, as well as policy mapping.
  3. Case Studies
    Examples of successes and lessons learnt from various countries to provide practical guidance.
  4. Recommendations
    Concrete solutions and recommendations from actors in the field to improve the policy development process, including quick tips for implementation and handling challenges.

What distinguishes this initiative is its commitment to peer learning and co-creation. The Africa-Asia AI Policymaker Network comprises over 30 high-level government partners who anchor the Playbook in real-world policy contexts. This ensures that the frameworks are not only theoretically sound but politically and socially implementable…(More)”