TAPIS: A Simple Web Tool for Analyzing Citizen-Generated Data


Tool by CitiObs: “Citizen observatories and communities collect valuable environmental data — but making sense of this data can be tricky, especially if you’re not a data expert. That’s why we created TAPIS: a free, easy-to-use web tool developed within the CitiObs project to help you view, manage, and analyze data collected from sensors and online platforms.

Why We Built TAPIS

The SensorThings API is a standard for sharing sensor data, used by many observatories. However, tools that help people explore this data visually and interactively have been limited. Often, users had to dig into complicated URLs and query parameters such as “$expand”, “$select”, “$orderby” and “$filter” to extract the data they needed, as illustrated in tutorials and examples such as the ones collected by SensorUp [1].
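To make the pain point concrete, here is a minimal sketch of the kind of query URL a SensorThings user would otherwise assemble by hand. The query options (`$filter`, `$expand`, `$select`, `$orderby`, `$top`) come from the OGC SensorThings API specification; the endpoint and the specific filter expression are hypothetical, chosen only for illustration.

```python
from urllib.parse import urlencode

# Hypothetical SensorThings API endpoint (illustration only).
BASE = "https://example.org/sta/v1.1"

def things_url(filter_expr, expand, select, orderby, top=10):
    """Build the kind of raw SensorThings query URL that TAPIS hides behind its UI."""
    params = {
        "$filter": filter_expr,    # server-side filtering expression
        "$expand": expand,         # inline related entities
        "$select": select,         # restrict returned properties
        "$orderby": orderby,       # sort order
        "$top": top,               # page size
    }
    return f"{BASE}/Things?{urlencode(params)}"

url = things_url(
    filter_expr="Datastreams/ObservedProperty/name eq 'PM2.5'",
    expand="Datastreams($select=name)",
    select="name,description",
    orderby="name asc",
)
print(url)
```

Hand-writing and percent-encoding queries like this is exactly the barrier TAPIS removes for non-experts.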

TAPIS changes that. It gives you a visual interface to work with sensor data from different API standards (such as SensorThings API, STAplus, OGC API Features/Records, OGC Catalogue Service for the Web, S3 Services, Eclipse Data Connectors, and STAC) and data file formats (such as CSV, JSON, JSON-LD, GeoJSON, and GeoPackage). You can load the data into tables, filter or group it, and view it as maps, bar charts, pie charts, or scatter plots — all in your browser, with no installation required.

Key Features

  • Connects to online data sources (like OGC APIs, STAC, SensorThings, and CSV files)
  • Turns raw data into easy-to-read tables
  • Adds semantic meaning to table columns (e.g., linking them to shared definitions)
  • Visualizes data with different chart types
  • Links with MiraMon to create interactive maps

TAPIS is inspired by the look and feel of Orange Data Mining (a popular data science tool) — but runs entirely in your browser, making it accessible for all users, even those with limited technical skills…(More)”

Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa


Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. We compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution, to quantify accessibility across diverse settings. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
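The entropy-based measure of spatial redundancy can be illustrated with a short sketch. This is one plausible reading of such a metric, not the paper’s exact formulation: weight each reachable market by inverse travel time, normalize the weights into a distribution, and take its Shannon entropy, so that higher values mean access is spread over several comparably reachable markets while zero means dependence on a single market.

```python
import math

def market_entropy(travel_times):
    """Entropy-style spatial-redundancy score (a sketch, not the paper's metric).

    travel_times: travel times (e.g., minutes) to each reachable market.
    Returns 0 when access depends on one market; approaches log(n) when
    n markets are equally reachable.
    """
    weights = [1.0 / t for t in travel_times if t > 0]  # nearer markets weigh more
    total = sum(weights)
    probs = [w / total for w in weights]                # normalize to a distribution
    h = -sum(p * math.log(p) for p in probs)            # Shannon entropy
    return h if h > 0 else 0.0

print(market_entropy([10.0]))              # → 0.0 (single market, no redundancy)
print(market_entropy([20.0, 25.0, 30.0]))  # three comparably reachable markets
```

Under this reading, the metric complements travel time to the nearest market (which ignores redundancy) and the 30-minute availability count (which ignores how evenly reachable those markets are).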

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”

Opening code, opening access: The World Bank’s first open source software release


Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.

With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.

This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.

The path for AI in poor nations does not need to be paved with billions


Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China… As a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.

More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages. That means these models are less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity and through public engagement.

Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.

The AI Policy Playbook


Playbook by AI Policymaker Network & Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: “It moves away from talking about AI ethics in abstract terms and instead focuses on building policies that work right away in emerging economies and respond to immediate development priorities. The Playbook emphasises that a one-size-fits-all solution doesn’t work. Rather, it illustrates shared challenges—like limited research capacity, fragmented data ecosystems, and compounding AI risks—while spotlighting national innovations and success stories. From drafting AI strategies to engaging communities and safeguarding rights, it lays out a roadmap grounded in local realities… What can you expect to find in the AI Policy Playbook:

  1. Policymaker Interviews
    Real-world insights from policymakers to understand their challenges and best practices.
  2. Policy Process Analysis
    Key elements from existing policies to extract effective strategies for AI governance, as well as policy mapping.
  3. Case Studies
    Examples of successes and lessons learnt from various countries to provide practical guidance.
  4. Recommendations
    Concrete solutions and recommendations from actors in the field to improve the policy development process, including quick tips for implementation and handling challenges.

What distinguishes this initiative is its commitment to peer learning and co-creation. The Africa-Asia AI Policymaker Network comprises over 30 high-level government partners who anchor the Playbook in real-world policy contexts. This ensures that the frameworks are not only theoretically sound but politically and socially implementable…(More)”

Europe’s dream to wean off US tech gets reality check


Article by Pieter Haeck and Mathieu Pollet: “…As the U.S. continues to up the ante in questioning transatlantic ties, calls are growing in Europe to reduce the continent’s reliance on U.S. technology in critical areas such as cloud services, artificial intelligence and microchips, and to opt for European alternatives instead.

But the European Commission is preparing on Thursday to acknowledge publicly what many have said in private: Europe is nowhere near being able to wean itself off U.S. Big Tech.

In a new International Digital Strategy the EU will instead promote collaboration with the U.S., according to a draft seen by POLITICO, as well as with other tech players including China, Japan, India and South Korea. “Decoupling is unrealistic and cooperation will remain significant across the technological value chain,” the draft reads. 

It’s a reality check after a year that has seen calls for a technologically sovereign Europe gain significant traction. In December the Commission appointed Finland’s Henna Virkkunen as the first-ever commissioner in charge of tech sovereignty. After a few months in office, European Parliament lawmakers embarked on an effort to draft a blueprint for tech sovereignty.

Even more consequential has been the rapid rise of the so-called Eurostack movement, which advocates building out a European tech infrastructure and has brought together effective voices including competition economist Cristina Caffarra and Kai Zenner, an assistant to key European lawmaker Axel Voss.

There’s wide agreement on the problem: U.S. cloud giants capture over two-thirds of the European market, the U.S. outpaces the EU in nurturing companies for artificial intelligence, and Europe’s stake in the global microchips market has crumbled to around 10 percent. Thursday’s strategy will acknowledge the U.S.’s “superior ability to innovate” and “Europe’s failure to capitalise on the digital revolution.”

What’s missing are viable solutions to the complex problem of unwinding deep-rooted dependencies….(More)”

Scientific Publishing: Enough is Enough


Blog by Seemay Chou: “In Abundance, Ezra Klein and Derek Thompson make the case that the biggest barriers to progress today are institutional. They’re not because of physical limitations or intellectual scarcity. They’re the product of legacy systems — systems that were built with one logic in mind, but now operate under another. And until we go back and address them at the root, we won’t get the future we say we want.

I’m a scientist. Over the past five years, I’ve experimented with science outside traditional institutes. From this vantage point, one truth has become inescapable. The journal publishing system — the core of how science is currently shared, evaluated, and rewarded — is fundamentally broken. And I believe it’s one of the legacy systems that prevents science from meeting its true potential for society.

It’s an unpopular moment to critique the scientific enterprise given all the volatility around its funding. But we do have a public trust problem. The best way to increase trust and protect science’s future is for scientists to have the hard conversations about what needs improvement. And to do this transparently. In all my discussions with scientists across every sector, exactly zero think the journal system works well. Yet we all feel trapped in a system that is, by definition, us.

I no longer believe that incremental fixes are enough. Science publishing must be built anew. I help oversee billions of dollars in funding across several science and technology organizations. We are expanding our requirement that all scientific work we fund will not go towards traditional journal publications. Instead, research we support should be released and reviewed more openly, comprehensively, and frequently than the status quo.

This policy is already in effect at Arcadia Science and Astera Institute, and we’re actively funding efforts to build journal alternatives through both Astera and The Navigation Fund. We hope others cross this line with us, and below I explain why every scientist and science funder should strongly consider it…(More)”.

Surveillance pricing: How your data determines what you pay


Article by Douglas Crawford: “Surveillance pricing, also known as personalized or algorithmic pricing, is a practice where companies use your personal data, such as your location, the device you’re using, your browsing history, and even your income, to determine what price to show you. It’s not just about supply and demand — it’s about you as a consumer and how much the system thinks you’re able (or willing) to pay.

Have you ever shopped online for a flight, only to find that the price mysteriously increased the second time you checked? Or have you and a friend searched for the same hotel room on your phones, only to find your friend sees a lower price? This isn’t a glitch — it’s surveillance pricing at work.

In the United States, surveillance pricing is becoming increasingly prevalent across various industries, including airlines, hotels, and e-commerce platforms. It exists elsewhere, but in other parts of the world, such as the European Union, there is a growing recognition of the danger this pricing model presents to citizens’ privacy, resulting in stricter data protection laws aimed at curbing it. The US appears to be moving in the opposite direction…(More)”.

Human rights centered global governance of quantum technologies: advancing information for all


UNESCO Brief: “The integration of quantum technologies into AI systems introduces greater complexity, requiring stronger policy and technical frameworks that uphold human rights protections. Ensuring that these advancements do not widen existing inequalities or cause environmental harm is crucial.

The Brief expands on the “Quantum technologies and their global impact: discussion paper” published by UNESCO. The objective of this Brief is to unpack the multiple dimensions of the quantum ecosystem and broadly explore the human rights and policy implications of quantum technologies, with some key findings:

  • While quantum technologies promise advancements of human rights in the areas of encryption, privacy, and security, they also pose risks to these very domains and related ones such as freedom of expression and access to information.
  • Quantum innovations will reshape security, economic growth, and science, but without a robust human rights-based framework, they risk deepening inequalities and destabilizing global governance.
  • The quantum divide is emerging as a critical issue, with disparities in access to technology, expertise, and infrastructure widening global inequalities. Unchecked, this gap could limit the benefits of quantum advancements for all.
  • The quantum gender divide remains stark—79% of quantum companies have no female senior leaders, and only 1 in 54 quantum job applicants are women.

The Issue Brief provides broad recommendations and targeted actions for stakeholders, emphasizing human rights-centered governance, awareness, capacity building, and inclusivity to bridge global and gender divides. The key recommendations focus on a comprehensive governance model, which must ensure a multistakeholder approach that facilitates state duties, corporate accountability, effective remedies for human rights violations, and open standards for equitable access. Prioritizing human rights in global governance will ensure quantum innovation serves all of humanity while safeguarding fundamental freedoms…(More)”.