Paper by Ingrid Campo-Ruiz: “Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. It moves beyond the dominant framing of AI and democracy as a question of misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy with the nuanced view of democracy that generative AI tools provide and that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study, in Stockholm, Sweden, AI outputs were combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences. Results show that the urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy, and of the education of individuals in public buildings in the city centre. The results obtained with ChatGPT-4o barely reflect public open spaces, parks, or routes. They also prioritize organized over spontaneous engagement and do not reflect unstructured events, such as demonstrations, or powerful actors, such as political parties and workers’ unions.
The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces like offices that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while perpetuating powerful agents by not making them sufficiently visible to be accountable for their actions. In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding the plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework that adjusts to reality for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors including AI. AI systems should, therefore, be monitored to enhance democratic values through urban space…(More)”.
What World Does Bitcoin Want To Build For Itself?
Article by Patrick Redford: “We often talk about baseball games as a metric for where we are, and we’re literally in the first inning,” one of the Winklevoss twins gloats. “And this game’s going to overtime.”
It’s the first day of Bitcoin 2025, industry day here at the largest cryptocurrency conference in the world. This Winklevoss is sharing the stage with the other one, plus Donald Trump’s newly appointed crypto and AI czar David Sacks. They are in the midst of a victory lap, laughing with the free ease of men who know they have it made. The mangled baseball metaphor neither lands nor elicits laughs, but that’s fine. He’s earned, or at any rate acquired, the right to be wrong.
This year’s Bitcoin Conference takes place amid a boom, the same month the price of a single coin stabilized above $100,000 for the first time. More than 35,000 people have descended on Las Vegas in the final week of May for the conference: bitcoin miners, bitcoin dealers, several retired athletes, three U.S. senators, two Trump children, one U.S. vice president, people who describe themselves as “content creators,” people who describe themselves as “founders,” venture capitalists, ex-IDF bodyguards, tax-dodging experts, crypto heretics, evangelists, paladins, Bryan Johnson, Eric Adams, and me, trying to figure out what they were all doing there together. I’m in Vegas talking to as many people as I can in order to conduct an assay of the orange pill. What is the argument for bitcoin, exactly? Who is making it, and why?
Here is the part of the story where I am supposed to tell you it’s all a fraud. I am supposed to point out that nobody has come up with a use case for blockchain technology in 17 years beyond various forms of money laundering; that half of these people have been prosecuted for one financial crime or another; that the game is rigged in favor of the casino and those who got there before you; that this is an onerous use of energy; that all the mystification around bitcoin is a fog intended to draw in suckers where they can be bled. All that stuff is true, but the trick is that being true isn’t quite the same thing as mattering.
The bitcoin people are winning…(More)”
Usability for the World: Building Better Cities and Communities
Book edited by Elizabeth Rosenzweig and Amanda Davis: “Want to build cities that truly work for everyone? Usability for the World: Sustainable Cities and Communities reveals how human-centered design is key to thriving, equitable urban spaces. This isn’t just another urban planning book; it’s a practical guide to transforming cities, offering concrete strategies and real-world examples you can use today.
What if our cities could be both efficient and human-friendly? This book tackles the core challenge of modern urban development: balancing functionality with the well-being of residents. It explores the crucial connection between usability and sustainability, demonstrating how design principles, from Universal to life-centered, create truly livable cities.
Interested in sustainable urban development? Usability for the World offers a global perspective, showcasing diverse approaches to creating equitable and resilient cities. Through compelling case studies, discover how user-centered design addresses pressing urban challenges. See how these principles connect directly to achieving the UN Sustainable Development Goals, specifically SDG 11: Sustainable Cities and Communities…(More)”.
TAPIS: A Simple Web Tool for Analyzing Citizen-Generated Data
Tool by CitiObs: “Citizen observatories and communities collect valuable environmental data — but making sense of this data can be tricky, especially if you’re not a data expert. That’s why we created TAPIS: a free, easy-to-use web tool developed within the CitiObs project to help you view, manage, and analyze data collected from sensors and online platforms.
Why We Built TAPIS
The SensorThings API is a standard for sharing sensor data, used by many observatories. However, tools that help people explore this data visually and interactively have been limited. Often, users had to dig into complicated URLs and query parameters such as “expand”, “select”, “orderby” and “filter” to extract the data they needed, as illustrated in tutorials and examples such as the ones collected by SensorUp [1].
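To make the query-parameter problem concrete, here is a minimal sketch of what users previously had to assemble by hand. The endpoint URL is a hypothetical placeholder, but the `$select`, `$expand`, `$filter`, and `$orderby` options are standard OData-style query parameters defined by the SensorThings API:

```python
from urllib.parse import urlencode

# Hypothetical SensorThings API endpoint; real deployments expose
# entity collections such as /v1.1/Things and /v1.1/Observations.
BASE = "https://example.org/FROST-Server/v1.1"

def things_query(select=None, expand=None, filter_=None, orderby=None, top=None):
    """Build a SensorThings API request URL from OData-style query options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)      # restrict returned fields
    if expand:
        params["$expand"] = expand                # inline related entities
    if filter_:
        params["$filter"] = filter_               # server-side filtering
    if orderby:
        params["$orderby"] = orderby              # sort order
    if top is not None:
        params["$top"] = str(top)                 # page size
    return f"{BASE}/Things?{urlencode(params)}"

url = things_query(
    select=["name", "description"],
    expand="Datastreams($select=name)",
    filter_="substringof('air', name)",
    orderby="name asc",
    top=10,
)
print(url)
```

TAPIS hides exactly this kind of URL construction behind its visual interface, so users select and filter in the browser instead of hand-editing query strings.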
TAPIS changes that. It gives you a visual interface to work with sensor data from different API standards (such as SensorThings API, STAplus, OGC API Features/Records, OGC Catalogue Service for the Web, S3 Services, Eclipse Data Connectors, and STAC) and data file formats (such as CSV, JSON, JSON-LD, GeoJSON, and GeoPackage). You can load the data into tables, filter or group it, and view it as maps, bar charts, pie charts, or scatter plots — all in your browser, with no installation required.
Key Features
- Connects to online data sources (like OGC APIs, STAC, SensorThings, and CSV files)
- Turns raw data into easy-to-read tables
- Adds meaning to table columns
- Visualizes data with different chart types
- Links with MiraMon to create interactive maps
TAPIS is inspired by the look and feel of Orange Data Mining (a popular data science tool) — but runs entirely in your browser, making it accessible for all users, even those with limited technical skills…(More)”
AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality
Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine understandable and be able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including, but not limited to, accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
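As one illustration of what “machine understandable” could mean in practice (this is a hedged sketch, not an FCSM specification), a statistical agency might publish a metadata record alongside each dataset using the schema.org Dataset vocabulary, so that AI systems can surface provenance, timeliness, and accuracy context with the numbers they quote. All names and URLs below are illustrative:

```python
import json

# Illustrative schema.org-style metadata record; field choices map loosely
# to data quality dimensions (timeliness, accuracy, accessibility, credibility).
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example monthly unemployment rate (illustrative)",
    "publisher": {
        "@type": "GovernmentOrganization",
        "name": "Example Statistical Agency",
    },
    "dateModified": "2025-06-01",                                     # timeliness
    "measurementTechnique": "Household survey, seasonally adjusted",  # accuracy context
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",  # accessibility
    "isBasedOn": "https://example.gov/methodology",                   # credibility
}

jsonld = json.dumps(record, indent=2)
print(jsonld)
```

Records like this are what an API-optimization effort would standardize: a consistent, queryable layer of context that an LLM can retrieve together with the statistic itself.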
Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa
Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. We compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution, to quantify accessibility across diverse settings. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
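The three metrics can be sketched in a few lines. The toy example below is not the paper’s implementation: it uses made-up coordinates and straight-line distance as a stand-in for travel time, and a simple distance-decay weighting for the entropy measure, purely to show how the metrics differ in what they capture:

```python
import math

# Toy data: settlement and market coordinates in km (illustrative only).
settlements = {"rural": (10.0, 4.0), "urban": (0.0, 0.0)}
markets = [(1.0, 1.0), (2.0, 0.5), (9.0, 9.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_market_km(loc):
    """Metric 1 analogue: distance (proxy for travel time) to nearest market."""
    return min(dist(loc, m) for m in markets)

def markets_within(loc, radius_km):
    """Metric 2 analogue: market availability within a reachability threshold."""
    return sum(1 for m in markets if dist(loc, m) <= radius_km)

def entropy_within(loc, radius_km):
    """Metric 3 analogue: Shannon entropy of distance-decay weights over
    reachable markets; higher entropy = more evenly spread (redundant) access."""
    w = [1.0 / (1.0 + dist(loc, m)) for m in markets if dist(loc, m) <= radius_km]
    total = sum(w)
    if total == 0.0:
        return 0.0
    probs = [x / total for x in w]
    return -sum(p * math.log(p) for p in probs)

for name, loc in settlements.items():
    print(name, round(nearest_market_km(loc), 2),
          markets_within(loc, 5.0), round(entropy_within(loc, 5.0), 3))
```

Even in this toy setting the metrics diverge: the “rural” point has both a longer trip to its nearest market and zero redundancy within the threshold, which is exactly the kind of compounded disadvantage the paper reports.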
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”
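The appeal of controllable puzzle environments is easy to see in code. The sketch below (a minimal illustration in the spirit of the paper, not its evaluation harness) uses Tower of Hanoi: a single parameter, the number of disks n, scales compositional complexity while the logical structure stays fixed, and the optimal solution length is known in closed form (2^n − 1 moves), so both final answers and full move traces can be checked exactly:

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)   # move n-1 disks out of the way
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, src, dst))  # move n-1 disks back on top

def verify(n, moves):
    """Step-by-step checker for a candidate trace — the kind of exact,
    contamination-free scoring a puzzle environment makes possible."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        if not pegs[src]:
            return False  # moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))

for n in range(1, 6):
    moves = hanoi(n)
    print(n, len(moves), verify(n, moves))  # optimal length grows as 2**n - 1
```

Because validity is decidable at every step, a model’s reasoning trace can be scored move by move rather than only at the final answer, which is precisely what benchmark-style evaluation cannot offer.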
Opening code, opening access: The World Bank’s first open source software release
Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.
With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.
This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.
The path for AI in poor nations does not need to be paved with billions
Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China…, as a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.
More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages, which makes them less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity, and engaging the public.
Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.
Assessing data governance models for smart cities: Benchmarking data governance models on the basis of European urban requirements
Paper by Yusuf Bozkurt, Alexander Rossmann, Zeeshan Pervez, and Naeem Ramzan: “Smart cities aim to improve residents’ quality of life by implementing effective services, infrastructure, and processes through information and communication technologies. However, without robust smart city data governance, much of the urban data potential remains underexploited, resulting in inefficiencies and missed opportunities for city administrations. This study addresses these challenges by establishing specific, actionable requirements for smart city data governance models, derived from expert interviews with representatives of 27 European cities. From these interviews, recurring themes emerged, such as the need for standardized data formats, clear data access guidelines, and stronger cross-departmental collaboration mechanisms. These requirements emphasize technology independence, flexibility to adapt across different urban contexts, and promoting a data-driven culture. By benchmarking existing data governance models against these newly established urban requirements, the study uncovers significant variations in their ability to address the complex, dynamic nature of smart city data systems. This study thus enhances the theoretical understanding of data governance in smart cities and provides municipal decision-makers with actionable insights for improving data governance strategies. In doing so, it directly supports the broader goals of sustainable urban development by helping improve the efficiency and effectiveness of smart city initiatives…(More)”.