Protecting young digital citizens


Blog by Pascale Raulin-Serrier: “…As digital tools become more deeply embedded in children’s lives, many young users are unaware of the long-term consequences of sharing personal information online through apps, games, social media platforms and even educational tools. The large-scale collection of data related to their preferences, identity or lifestyle may be used for targeted advertising or profiling. This affects not only their immediate online experiences but can also have lasting consequences, including greater risks of discrimination and exclusion. These concerns underscore the urgent need for stronger safeguards, greater transparency and a child-centered approach to data governance.

CNIL’s initiatives to promote children’s privacy

In response to these challenges, the CNIL introduced eight recommendations in 2021 to provide practical guidance for children, parents and other stakeholders in the digital economy. These are built around several key pillars to promote and protect children’s privacy:

1. Providing specific safeguards

Children have distinct digital rights and must be able to exercise them fully. Under the European General Data Protection Regulation (GDPR), they benefit from special protections, including the right to be forgotten and, in some cases, the ability to consent to the processing of their data. In France, children can only register for social networks or online gaming platforms if they are over 15, or with parental consent if they are younger. The CNIL helps hold platforms accountable by offering clear recommendations on how to present terms of service and collect consent in ways that are accessible and understandable to children.

2. Balancing autonomy and protection

The needs and capacities of a 6-year-old child differ greatly from those of a 16-year-old adolescent. It is essential to consider this diversity in online behaviour, maturity and the evolving ability to make informed decisions. The CNIL emphasizes the importance of offering children a digital environment that strikes a balance between protection and autonomy. It also advocates for digital citizenship education to empower young people with the tools they need to manage their privacy responsibly…(More)”. See also Responsible Data for Children.

Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices— such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.

5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”

Beyond the Checkbox: Upgrading the Right to Opt Out


Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systematic.5 Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out toward all future sites visited or apps used, if a user so desires.8

Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still shared just as if they had not opted out. The only difference is that the opted-out data is accompanied by a privacy flag propagating the opt-out to the data recipient.7 However, if people opt out, their data should not be shared in the first place! The current system, which relies on propagating opt-out signals and on recipients deleting the incoming data, is complicated, error-prone, violates the principle of data minimization, and is an obstacle to effective privacy enforcement. Changing the ad ecosystem is particularly important because it is used not only on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!..(More)”
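One concrete instance of the browser-level opt-out the author advocates is the Global Privacy Control signal, which participating browsers send as the `Sec-GPC: 1` request header. The sketch below is a hypothetical illustration (not code from the article) of the systems approach described above: when the signal is present, the server does not share the data at all, rather than sharing it with a privacy flag attached.

```python
def honor_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out.

    GPC is expressed as the request header "Sec-GPC: 1". HTTP header
    names are case-insensitive, so normalize before looking it up.
    """
    normalized = {k.lower(): str(v).strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


def handle_request(headers: dict, user_data: dict) -> dict:
    # Data-minimizing design: on opt-out, nothing is forwarded to ad
    # partners at all -- there is no downstream flag to propagate and
    # no incoming data for recipients to delete.
    if honor_opt_out(headers):
        return {"shared_with_ad_partners": None}
    return {"shared_with_ad_partners": user_data}
```

A single browser setting then covers every site visited, instead of requiring a per-site opt-out checkbox.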

Spaces for democracy with generative artificial intelligence: public architecture at stake


Paper by Ingrid Campo-Ruiz: “Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. This research moves beyond the dominant framework of discussions on AI and democracy as a form of spreading misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy and the way in which generative AI tools provide a nuanced view of democracy that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study in Stockholm, Sweden, AI outputs were later combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences. Results show that urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy and the education of individuals in public buildings in the city centre. The results obtained with ChatGPT-4o barely reflect public open spaces, parks, or routes. They also prioritize organized rather than spontaneous engagement, and do not reflect unstructured events like demonstrations, nor powerful actors such as political parties or workers’ unions.
The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces like offices that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while perpetuating powerful agents by not making them sufficiently visible to be accountable for their actions. In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding the plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework that adjusts to reality for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors including AI. AI systems should, therefore, be monitored to enhance democratic values through urban space…(More)”.

What World Does Bitcoin Want To Build For Itself?


Article by Patrick Redford: “We often talk about baseball games as a metric for where we are, and we’re literally in the first inning,” one of the Winklevoss twins gloats. “And this game’s going to overtime.”

It’s the first day of Bitcoin 2025, industry day here at the largest cryptocurrency conference in the world. This Winklevoss is sharing the stage with the other one, plus Donald Trump’s newly appointed crypto and AI czar David Sacks. They are in the midst of a victory lap, laughing with the free ease of men who know they have it made. The mangled baseball metaphor neither lands nor elicits laughs, but that’s fine. He’s earned, or at any rate acquired, the right to be wrong.

This year’s Bitcoin Conference takes place amid a boom, the same month the price of a single coin stabilized above $100,000 for the first time. More than 35,000 people have descended on Las Vegas in the final week of May for the conference: bitcoin miners, bitcoin dealers, several retired athletes, three U.S. senators, two Trump children, one U.S. vice president, people who describe themselves as “content creators,” people who describe themselves as “founders,” venture capitalists, ex-IDF bodyguards, tax-dodging experts, crypto heretics, evangelists, paladins, Bryan Johnson, Eric Adams, and me, trying to figure out what they were all doing there together. I’m in Vegas talking to as many people as I can in order to conduct an assay of the orange pill. What is the argument for bitcoin, exactly? Who is making it, and why?

Here is the part of the story where I am supposed to tell you it’s all a fraud. I am supposed to point out that nobody has come up with a use case for blockchain technology in 17 years beyond various forms of money laundering; that half of these people have been prosecuted for one financial crime or another; that the game is rigged in favor of the casino and those who got there before you; that this is an onerous use of energy; that all the mystification around bitcoin is a fog intended to draw in suckers where they can be bled. All that stuff is true, but the trick is that being true isn’t quite the same thing as mattering.

The bitcoin people are winning…(More)”

Usability for the World: Building Better Cities and Communities


Book edited by Elizabeth Rosenzweig, and Amanda Davis: “Want to build cities that truly work for everyone? Usability for the World: Sustainable Cities and Communities reveals how human-centered design is key to thriving, equitable urban spaces. This isn’t just another urban planning book; it’s a practical guide to transforming cities, offering concrete strategies and real-world examples you can use today.

What if our cities could be both efficient and human-friendly? This book tackles the core challenge of modern urban development: balancing functionality with the well-being of residents. It explores the crucial connection between usability and sustainability, demonstrating how design principles, from Universal to life-centered, create truly livable cities.

Interested in sustainable urban development? Usability for the World offers a global perspective, showcasing diverse approaches to creating equitable and resilient cities. Through compelling case studies, discover how user-centered design addresses pressing urban challenges. See how these principles connect directly to achieving the UN Sustainable Development Goals, specifically SDG 11: Sustainable Cities and Communities…(More)”.

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity


Paper by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar: “Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities…(More)”
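The paper's controllable puzzle environments include Tower of Hanoi-style tasks, where complexity scales cleanly with the number of disks and any proposed answer can be graded programmatically. As a hypothetical illustration of how such an environment works (this is not the authors' code), the sketch below pairs a reference solver with a verifier that checks an arbitrary move sequence, so a model's output at any complexity n can be scored without relying on contaminated benchmarks.

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Reference solution: move n disks from src to dst in 2**n - 1 moves."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)   # clear the top n-1 disks
            + [(src, dst)]                      # move the largest disk
            + hanoi_moves(n - 1, aux, src, dst))  # re-stack on top of it


def is_valid_solution(n, moves):
    """Simulate a candidate move sequence (e.g. parsed from a model's
    answer) and check it legally transfers all n disks to peg C."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        if not pegs[src]:
            return False                 # moving from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                 # larger disk placed on smaller
        pegs[dst].append(disk)
    return pegs["C"] == list(range(n, 0, -1))
```

Because problem size is a single knob, an evaluator can sweep n upward and record where a model's accuracy collapses, which is the kind of complexity-scaling curve the paper analyzes.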

Opening code, opening access: The World Bank’s first open source software release


Article by Keongmin Yoon, Olivier Dupriez, Bryan Cahill, and Katie Bannon: “The World Bank has long championed data transparency. Open data platforms, global indicators, and reproducible research have become pillars of the Bank’s knowledge work. But in many operational contexts, access to raw data alone is not enough. Turning data into insight requires tools—software to structure metadata, run models, update systems, and integrate outputs into national platforms.

With this in mind, the World Bank has released its first Open Source Software (OSS) tool under a new institutional licensing framework. The Metadata Editor—a lightweight application for structuring and publishing statistical metadata—is now publicly available on the Bank’s GitHub repository, under the widely used MIT License, supplemented by Bank-specific legal provisions.

This release marks more than a technical milestone. It reflects a structural shift in how the Bank shares its data and knowledge. For the first time, there is a clear institutional framework for making Bank-developed software open, reusable, and legally shareable—advancing the Bank’s commitment to public goods, transparency, Open Science, and long-term development impact, as emphasized in The Knowledge Compact for Action…(More)”.

The path for AI in poor nations does not need to be paved with billions


Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China…, as a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.

More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages. This makes the models less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity and through public engagement.

Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.