IEA’s Energy and AI Observatory: “… provides up-to-date data and analysis on the growing links between the energy sector and artificial intelligence (AI). The new and fast-moving field of AI requires a new approach to gathering data and information, and the Observatory aims to provide regularly updated data and a comprehensive view of the implications of AI for energy demand (energy for AI) and of AI applications for efficiency, innovation, resilience and competitiveness in the energy sector (AI for energy). This first-of-a-kind platform is developed and maintained by the IEA, with valuable contributions of data and insights from the IEA’s energy industry and tech sector partners, and complements the IEA’s Special Report on Energy and AI…(More)”.
Community-Aligned A.I. Benchmarks
White Paper by the Aspen Institute: “…When people develop machine learning models for AI products and services, they iterate to improve performance.
What it means to “improve” a machine learning model depends on what you want the model to do, like correctly transcribe an audio sample or generate a reliable summary of a long document.
Machine learning benchmarks are similar to standardized tests that AI researchers and builders can score their work against. Benchmarks allow us both to see whether different model tweaks improve performance on the intended task and to compare similar models against one another.
Some famous benchmarks in AI include ImageNet and the Stanford Question Answering Dataset (SQuAD).
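To make the “standardized test” analogy concrete, a minimal sketch of a SQuAD-style exact-match scorer might look like the following; the predictions, references, and normalization rules are simplified, hypothetical examples rather than the official evaluation script:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> bool:
    """True if the normalized prediction equals the normalized reference answer."""
    return normalize(prediction) == normalize(reference)

# Hypothetical reference answers and model outputs, for illustration only.
references = ["Paris", "1969"]
predictions = ["paris", "the year 1968"]

score = sum(exact_match(p, r) for p, r in zip(predictions, references)) / len(references)
print(f"Exact-match accuracy: {score:.0%}")  # -> 50%
```

Running the same script over the outputs of different models is what makes head-to-head comparison on a benchmark meaningful.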
Benchmarks are important, but their development and adoption have historically been somewhat arbitrary. The capabilities that benchmarks measure should reflect what the public wants AI tools to be and do.
We can build positive AI futures, ones that emphasize what the public wants out of these emerging technologies. As such, it’s imperative that we build benchmarks worth striving for…(More)”.
Facilitating the secondary use of health data for public interest purposes across borders
OECD Paper: “Recent technological developments create significant opportunities to process health data in the public interest. However, the growing fragmentation of frameworks applied to data has become a structural impediment to fully leveraging these opportunities. Public and private stakeholders suggest that three key areas should be analysed to support this outcome, namely: the convergence of governance frameworks applicable to health data use in the public interest across jurisdictions; the harmonisation of national procedures applicable to secondary health data use; and the public perceptions around the use of health data. This paper explores each of these three key areas and concludes with an overview of collective findings relating specifically to the convergence of legal bases for secondary data use…(More)”.
Blueprint on Prosocial Tech Design Governance
Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.
The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.
Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices—such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.
Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations
Report by the National Academies of Sciences, Engineering, and Medicine: “Planning and operating transportation systems involves the exchange of large volumes of data that must be shared between partnering transportation agencies, private-sector interests, travelers, and intelligent devices such as traffic signals, ramp meters, and connected vehicles.
NCHRP Research Report 1121: Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations, from TRB’s National Cooperative Highway Research Program, presents tools, methods, and guidelines for improving data integration, sharing, and management practices through case studies, proof-of-concept product developments, and deployment assistance…(More)”.
TAPIS: A Simple Web Tool for Analyzing Citizen-Generated Data
Tool by CitiObs: “Citizen observatories and communities collect valuable environmental data — but making sense of this data can be tricky, especially if you’re not a data expert. That’s why we created TAPIS: a free, easy-to-use web tool developed within the CitiObs project to help you view, manage, and analyze data collected from sensors and online platforms.
Why We Built TAPIS
The SensorThings API is a standard for sharing sensor data, used by many observatories. However, tools that help people explore this data visually and interactively have been limited. Often, users had to dig into complicated URLs and query parameters such as “expand”, “select”, “orderby” and “filter” to extract the data they needed, as illustrated in tutorials and examples such as the ones collected by SensorUp [1].
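For example, pulling recent observations out of a SensorThings API service by hand means composing those query options yourself. A rough Python sketch of such a request is shown below; the server URL and datastream ID are hypothetical:

```python
import requests

# Hypothetical SensorThings API endpoint and datastream; replace with a real service.
BASE_URL = "https://example.org/sensorthings/v1.1"

params = {
    "$select": "result,phenomenonTime",           # keep only two fields
    "$filter": "result gt 25",                    # e.g. values above a threshold
    "$orderby": "phenomenonTime desc",            # newest first
    "$top": 10,                                   # limit the page size
    "$expand": "FeatureOfInterest($select=name)", # pull in the sampled feature's name
}

resp = requests.get(f"{BASE_URL}/Datastreams(42)/Observations", params=params, timeout=30)
resp.raise_for_status()

for obs in resp.json().get("value", []):
    print(obs["phenomenonTime"], obs["result"], obs["FeatureOfInterest"]["name"])
```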
TAPIS changes that. It gives you a visual interface to work with sensor data from different API standards (such as SensorThings API, STAplus, OGC API Features/Records, OGC Catalogue Service for the Web, S3 Services, Eclipse Data Connectors, and STAC) and data file formats (such as CSV, JSON, JSON-LD, GeoJSON, and GeoPackage). You can load the data into tables, filter or group it, and view it as maps, bar charts, pie charts, or scatter plots — all in your browser, with no installation required.
Key Features
- Connects to online data sources (like OGC APIs, STAC, SensorThings, and CSV files)
- Turns raw data into easy-to-read tables
- Adds meaning to table columns
- Visualizes data with different chart types
- Links with MiraMon to create interactive maps
TAPIS is inspired by the look and feel of Orange Data Mining (a popular data science tool) — but runs entirely in your browser, making it accessible to all users, even those with limited technical skills…(More)”.
AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality
Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine-understandable and able to support models that measure data quality and provide contextual information.
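As one purely illustrative sketch of what “machine-understandable” could mean in practice, a statistical dataset might ship schema.org-style JSON-LD metadata alongside its tables. The dataset, fields, and quality notes below are hypothetical and are not an FCSM specification:

```python
import json

# Hypothetical example: machine-readable metadata published next to a statistical
# table so that AI systems can surface provenance and quality information.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Monthly Unemployment Rate (illustrative)",
    "publisher": {"@type": "Organization", "name": "Example Statistical Agency"},
    "dateModified": "2025-06-01",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "variableMeasured": [
        {"@type": "PropertyValue", "name": "unemployment_rate", "unitText": "percent"}
    ],
    # Free-text notes echoing the Framework's quality dimensions
    # (accessibility, timeliness, accuracy, credibility) for this sketch.
    "measurementTechnique": "Household survey; seasonally adjusted",
    "description": "Illustrative series with a documented revision policy and sampling error.",
}

with open("dataset_metadata.jsonld", "w") as f:
    json.dump(dataset_metadata, f, indent=2)
```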
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including but not limited to accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
Opening code, opening access: The World Bank’s first open source software release
Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.
With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.
This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.
Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities
Report by Stefaan Verhulst, Andrew J. Zahuranec, and Oscar Romero: “Trust is foundational to effective governance, yet its inherently abstract nature has made it difficult to measure and operationalize, especially in urban contexts. This report proposes a practical framework for city officials to diagnose and strengthen civic trust through observable indicators and actionable interventions.

Rather than attempting to quantify trust as an abstract concept, the framework distinguishes between the drivers of trust—direct experiences and institutional interventions—and its manifestations, both emotional and behavioral. Drawing on literature reviews, expert workshops, and field engagement with the New York City Civic Engagement Commission (CEC), we present a three-phase approach: (1) baseline assessment of trust indicators, (2) analysis of causal drivers, and (3) design and continuous evaluation of targeted interventions. The report illustrates the framework’s applicability through a hypothetical case involving the NYC Parks Department and a real-world case study of the citywide participatory budgeting initiative, The People’s Money. By providing a structured, context-sensitive, and iterative model for measuring civic trust, this report seeks to equip public institutions and city officials with a framework for meaningful measurement of civic trust…(More)”.
The AI Policy Playbook
Playbook by AI Policymaker Network & Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: “It moves away from talking about AI ethics in abstract terms and instead focuses on building policies that work right away in emerging economies and respond to immediate development priorities. The Playbook emphasises that a one-size-fits-all solution doesn’t work. Rather, it illustrates shared challenges—like limited research capacity, fragmented data ecosystems, and compounding AI risks—while spotlighting national innovations and success stories. From drafting AI strategies to engaging communities and safeguarding rights, it lays out a roadmap grounded in local realities… What can you expect to find in the AI Policy Playbook:
- Policymaker Interviews: Real-world insights from policymakers to understand their challenges and best practices.
- Policy Process Analysis: Key elements from existing policies to extract effective strategies for AI governance, as well as policy mapping.
- Case Studies: Examples of successes and lessons learnt from various countries to provide practical guidance.
- Recommendations: Concrete solutions and recommendations from actors in the field to improve the policy development process, including quick tips for implementation and handling challenges.
What distinguishes this initiative is its commitment to peer learning and co-creation. The Africa-Asia AI Policymaker Network comprises over 30 high-level government partners who anchor the Playbook in real-world policy contexts. This ensures that the frameworks are not only theoretically sound but also politically and socially implementable…(More)”.