Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices—such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.

5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation, in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”
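The excerpt does not describe RESPIRA®’s internals, so the following is only a minimal sketch of the general pattern such a system follows: forecast platform conditions from sensor data, then set ventilation proportionally so fans idle when cooling is not needed. All names and coefficients below are illustrative assumptions, not the RESPIRA® API:

```python
from dataclasses import dataclass

@dataclass
class PlatformState:
    temp_c: float         # current platform temperature
    co2_ppm: float        # air-quality proxy
    trains_per_hour: int  # traffic adds heat

def predict_temp(state: PlatformState) -> float:
    """Toy forecast: traffic and CO2 both push temperature upward.
    A real system would use a model learned from sensor history."""
    return state.temp_c + 0.05 * state.trains_per_hour + 0.001 * (state.co2_ppm - 400)

def choose_fan_speed(state: PlatformState, target_c: float = 27.0) -> float:
    """Proportional response: ramp fans up only as the forecast
    exceeds the comfort target, so fans idle when cooling isn't needed."""
    error = max(0.0, predict_temp(state) - target_c)
    return min(1.0, 0.2 + 0.15 * error)  # idle at 20%, cap at 100%

print(choose_fan_speed(PlatformState(temp_c=29.0, co2_ppm=800, trains_per_hour=20)))
```

The energy savings in such a design come from the idle path: a fixed-schedule system runs fans at a constant rate regardless of conditions, while a predictive controller spends energy only when the forecast demands it.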

Beyond the Checkbox: Upgrading the Right to Opt Out


Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systemic [5]. Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out to all future sites visited or apps used, if a user so desires [8].

Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still shared just as if they had not opted out. The only difference is that the data is accompanied by a privacy flag propagating the opt-out to the data recipient [7]. However, if people opt out, their data should not be shared in the first place! The current system, which relies on propagating opt-out signals and on recipients deleting the incoming data, is complicated, error-prone, violates the principle of data minimization, and is an obstacle to effective privacy enforcement. Changing the ad ecosystem is particularly important as it is used not only on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!..(More)”
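The browser-level, generalized opt-out Zimmeck calls for exists in early form as the Global Privacy Control (GPC) signal, which participating browsers send with every request as a `Sec-GPC: 1` header. As a minimal sketch (the Flask app and helper function are illustrative, not from the article), a site honoring the signal suppresses data sharing up front rather than sharing data with a privacy flag attached:

```python
from flask import Flask, request

app = Flask(__name__)

def user_opted_out() -> bool:
    """True if the browser sent the Global Privacy Control signal.
    Per the GPC specification, the request header is `Sec-GPC: 1`."""
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if user_opted_out():
        # Honor the generalized opt-out at the source: no sale or
        # sharing of data, instead of sharing it with a flag attached.
        return "Tracking disabled for this visit."
    return "Default experience."
```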

Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations


Report by the National Academies of Sciences, Engineering, and Medicine: “Planning and operating transportation systems involves the exchange of large volumes of data that must be shared between partnering transportation agencies, private-sector interests, travelers, and intelligent devices such as traffic signals, ramp meters, and connected vehicles.

NCHRP Research Report 1121: Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations, from TRB’s National Cooperative Highway Research Program, presents tools, methods, and guidelines for improving data integration, sharing, and management practices through case studies, proof-of-concept product developments, and deployment assistance…(More)”.

Can AI Agents Be Trusted?


Article by Blair Levin and Larry Downes: “Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a layer of programming on top of large language models (LLMs) that allows them to work toward specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal.
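As a rough illustration of that “extra layer of software,” here is a minimal, hypothetical agent loop in which a stubbed function stands in for a real LLM call; every name here is an assumption for exposition, not any vendor’s API:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a proposed action."""
    return "search('flight prices')"

def agent_loop(goal: str, tools: dict, max_steps: int = 5) -> list:
    """Collect data, decide, act, and adapt based on results."""
    history = []
    for _ in range(max_steps):
        # 1. Decide: ask the model for the next action given the goal
        #    and the results gathered so far.
        action = call_llm(f"Goal: {goal}\nHistory: {history}\nNext action?")
        # 2. Act: dispatch only to a permitted tool (the principal's rules);
        #    an unrecognized action ends the run.
        name = action.split("(")[0]
        if name not in tools:
            break
        result = tools[name](action)
        # 3. Adapt: feed the outcome back into the next decision.
        history.append((action, result))
    return history

# Example: the only permitted tool is a stubbed web search.
print(agent_loop("find a cheap flight", {"search": lambda a: "NYC->LIS $412"}))
```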

Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries across a wide range of industries and applications, and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant: handling calendar management; performing directed research and analysis; finding, negotiating for, and purchasing goods and services; curating content; and taking over basic communications, learning and optimizing themselves along the way.

The idea of personal AI agents goes back decades, but the technology finally appears ready for prime time. Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?

The answers to these questions will determine whether and how quickly users embrace personal AI agents, and whether their widespread deployment will enhance or damage business relationships and brand value…(More)”.

What World Does Bitcoin Want To Build For Itself?


Article by Patrick Redford: ““We often talk about baseball games as a metric for where we are, and we’re literally in the first inning,” one of the Winklevoss twins gloats. “And this game’s going to overtime.”

It’s the first day of Bitcoin 2025, industry day here at the largest cryptocurrency conference in the world. This Winklevoss is sharing the stage with the other one, plus Donald Trump’s newly appointed crypto and AI czar David Sacks. They are in the midst of a victory lap, laughing with the free ease of men who know they have it made. The mangled baseball metaphor neither lands nor elicits laughs, but that’s fine. He’s earned, or at any rate acquired, the right to be wrong.

This year’s Bitcoin Conference takes place amid a boom, the same month the price of a single coin stabilized above $100,000 for the first time. More than 35,000 people have descended on Las Vegas in the final week of May for the conference: bitcoin miners, bitcoin dealers, several retired athletes, three U.S. senators, two Trump children, one U.S. vice president, people who describe themselves as “content creators,” people who describe themselves as “founders,” venture capitalists, ex-IDF bodyguards, tax-dodging experts, crypto heretics, evangelists, paladins, Bryan Johnson, Eric Adams, and me, trying to figure out what they were all doing there together. I’m in Vegas talking to as many people as I can in order to conduct an assay of the orange pill. What is the argument for bitcoin, exactly? Who is making it, and why?

Here is the part of the story where I am supposed to tell you it’s all a fraud. I am supposed to point out that nobody has come up with a use case for blockchain technology in 17 years beyond various forms of money laundering; that half of these people have been prosecuted for one financial crime or another; that the game is rigged in favor of the casino and those who got there before you; that this is an onerous use of energy; that all the mystification around bitcoin is a fog intended to draw in suckers where they can be bled. All that stuff is true, but the trick is that being true isn’t quite the same thing as mattering.

The bitcoin people are winning…(More)”

AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality


Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess the fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine understandable and able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including but not limited to accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
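As one illustration of what “machine understandable” could mean in practice (a hypothetical record format, not an FCSM specification), a dataset could publish its quality dimensions as structured fields that an AI system checks before drawing on the data:

```python
import json

# Hypothetical machine-readable quality metadata for a dataset, echoing
# the FCSM Framework's dimensions; all field names and URLs are illustrative.
record = {
    "dataset": "example-cpi-monthly",
    "publisher": "Example Statistical Agency",
    "quality": {
        "accuracy": {"sampling_error_pct": 0.4},
        "timeliness": {"last_updated": "2025-05-15", "release_lag_days": 14},
        "accessibility": {"api": "https://example.gov/api/v1/cpi", "format": "json"},
        "credibility": {"methodology_url": "https://example.gov/cpi/methods"},
    },
}

def fresh_enough(rec: dict, max_lag_days: int = 30) -> bool:
    """A consuming AI system could gate retrieval on declared quality."""
    return rec["quality"]["timeliness"]["release_lag_days"] <= max_lag_days

print(json.dumps(record, indent=2), fresh_enough(record))
```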

Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa


Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. To quantify accessibility across diverse settings, we compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
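The first two metrics follow directly from a vector of travel times between a location and every market; the entropy measure below uses a generic Shannon form over inverse-travel-time weights, since the paper’s exact formulation is not given in the excerpt:

```python
import numpy as np

def accessibility_metrics(travel_times_min, threshold_min=30.0):
    """travel_times_min: travel time (minutes) from one location to each market."""
    t = np.asarray(travel_times_min, dtype=float)
    nearest = float(t.min())                      # metric 1: time to nearest market
    available = int((t <= threshold_min).sum())   # metric 2: markets within 30 min
    # Metric 3 (illustrative): Shannon entropy over inverse-time weights.
    # Higher entropy = access spread across many markets (more redundancy);
    # near-zero entropy = dependence on a single reachable market.
    w = 1.0 / np.maximum(t, 1e-9)
    p = w / w.sum()
    entropy = float(-(p * np.log(p)).sum())
    return nearest, available, entropy

print(accessibility_metrics([12.0, 25.0, 90.0]))
```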

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”
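A sketch of the kind of evaluation harness the abstract describes: sweep a puzzle generator’s compositional complexity and record both final-answer accuracy and reasoning effort. The `solve` interface is a hypothetical stand-in, not the paper’s code:

```python
def complexity_sweep(solve, make_puzzle, check, max_n=15, trials=20):
    """For each complexity level n, measure accuracy and reasoning effort.
    `solve(puzzle) -> (answer, reasoning_tokens)` is a stand-in interface."""
    results = []
    for n in range(1, max_n + 1):  # compositional complexity of the puzzle
        correct = tokens = 0
        for _ in range(trials):
            puzzle, solution = make_puzzle(n)
            answer, reasoning_tokens = solve(puzzle)
            correct += int(check(answer, solution))
            tokens += reasoning_tokens
        # Collapse shows up as accuracy falling to zero while, counterintuitively,
        # average token use can also decline past a complexity point.
        results.append({"n": n, "accuracy": correct / trials,
                        "avg_reasoning_tokens": tokens / trials})
    return results

# Toy usage with stubs standing in for a model and a puzzle generator:
demo = complexity_sweep(
    solve=lambda p: (p, len(p) * 3),
    make_puzzle=lambda n: ("x" * n, "x" * n),
    check=lambda a, s: a == s,
)
print(demo[:3])
```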

Opening code, opening access: The World Bank’s first open source software release


Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.

With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.

This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.