Facilitating the secondary use of health data for public interest purposes across borders


OECD Paper: “Recent technological developments create significant opportunities to process health data in the public interest. However, the growing fragmentation of frameworks applied to data has become a structural impediment to fully leveraging these opportunities. Public and private stakeholders suggest that three key areas should be analysed to support this outcome, namely: the convergence of governance frameworks applicable to health data use in the public interest across jurisdictions; the harmonisation of national procedures applicable to secondary health data use; and the public perceptions around the use of health data. This paper explores each of these three key areas and concludes with an overview of collective findings relating specifically to the convergence of legal bases for secondary data use…(More)”.

Protecting young digital citizens


Blog by Pascale Raulin-Serrier: “…As digital tools become more deeply embedded in children’s lives, many young users are unaware of the long-term consequences of sharing personal information online through apps, games, social media platforms and even educational tools. The large-scale collection of data related to their preferences, identity or lifestyle may be used for targeted advertising or profiling. This affects not only their immediate online experiences but can also have lasting consequences, including greater risks of discrimination and exclusion. These concerns underscore the urgent need for stronger safeguards, greater transparency and a child-centered approach to data governance.

CNIL’s initiatives to promote children’s privacy

In response to these challenges, the CNIL introduced eight recommendations in 2021 to provide practical guidance for children, parents and other stakeholders in the digital economy. These are built around several key pillars to promote and protect children’s privacy:

1. Providing specific safeguards

Children have distinct digital rights and must be able to exercise them fully. Under the European General Data Protection Regulation (GDPR), they benefit from special protections, including the right to be forgotten and, in some cases, the ability to consent to the processing of their data. In France, children can only register for social networks or online gaming platforms if they are over 15, or with parental consent if they are younger. The CNIL helps hold platforms accountable by offering clear recommendations on how to present terms of service and collect consent in ways that are accessible and understandable to children.

2. Balancing autonomy and protection

The needs and capacities of a 6-year-old child differ greatly from those of a 16-year-old adolescent. It is essential to consider this diversity in online behaviour, maturity and the evolving ability to make informed decisions. The CNIL emphasizes the importance of offering children a digital environment that strikes a balance between protection and autonomy. It also advocates for digital citizenship education to empower young people with the tools they need to manage their privacy responsibly…(More)”. See also Responsible Data for Children.

5 Ways AI is Boosting Citizen Engagement in Africa’s Democracies


Article by Peter Agbesi Adivor: “Artificial Intelligence (AI) is increasingly influencing democratic participation across Africa. From campaigning to voter education, AI is transforming electoral processes across the continent. While concerns about misinformation and government overreach persist, AI also offers promising avenues to enhance citizen engagement. This article explores five key ways AI is fostering more inclusive and participatory democracies in Africa.

1. AI-Powered Voter Education and Campaigning

AI-driven platforms are revolutionizing voter education by providing accessible, real-time information. These platforms deliver standardized electoral information to citizens’ digital devices regardless of geographical location, significantly reducing costs for political actors as well as for the state and non-state actors who focus on voter education. They also ensure that those who can navigate these tools can easily access the information they need, allowing authorities to focus limited resources on citizens on the other side of the digital divide.

In Nigeria, ChatVE developed CitiBot, an AI-powered chatbot deployed during the 2024 Edo State elections to educate citizens on their civic rights and responsibilities via WhatsApp and Telegram. The bot offered information on voting procedures, eligibility, and the importance of participation.

Similarly, in South Africa, the Rivonia Circle introduced Thoko the Bot, an AI chatbot designed to answer voters’ questions about the electoral process, including where and how to vote, and the significance of participating in elections.

These AI tools enhance voter understanding and engagement by providing personalized, easily accessible information, thereby encouraging greater participation in democratic processes…(More)”.
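The article does not detail how CitiBot or Thoko the Bot are implemented. Purely as an illustration of the kind of FAQ-style voter-education bot described above, the sketch below matches a user's question against a small set of hypothetical questions and answers; all content, names and thresholds are invented, and a real deployment would sit behind the WhatsApp or Telegram messaging APIs.

```python
# Minimal sketch of a voter-education FAQ bot (hypothetical content only;
# not the actual CitiBot or Thoko the Bot implementation).
from difflib import SequenceMatcher

# Hypothetical FAQ entries; a real bot would source these from the
# electoral commission and keep them up to date.
FAQ = {
    "how do i register to vote": "Visit your local registration centre with a valid ID.",
    "where do i vote": "Your polling unit is printed on your voter card.",
    "when are polls open": "Polling stations are open from 08:00 to 17:00 on election day.",
}

def best_answer(question: str, threshold: float = 0.5) -> str:
    """Return the FAQ answer whose stored question most resembles the user's input."""
    question = question.lower().strip()
    scored = [
        (SequenceMatcher(None, question, known).ratio(), answer)
        for known, answer in FAQ.items()
    ]
    score, answer = max(scored)
    if score < threshold:
        return "Sorry, I don't know that yet. Please contact the electoral commission."
    return answer

if __name__ == "__main__":
    print(best_answer("Where do I go to vote?"))  # -> the polling-unit answer
```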

Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices — such as algorithms and interfaces — impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.

European project to make web search more open and ethical


Press Release: “The OpenWebSearch.eu consortium, which includes CERN, has released a pilot of the first federated, pan-European Open Web Index, paving the way for a new generation of unbiased and ethical search engines.

(Image: artistic map of Europe with search bars in different languages overlaid. Credit: openwebsearch.eu, using images by NASA and Unsplash.)

On 6 June, the OpenWebSearch.eu consortium released a pilot of a new infrastructure that aims to make European web search fairer, more transparent and commercially unbiased. With strong participation by CERN, the European Open Web Index (OWI) is now open for use by academic, commercial and independent teams under a general research licence, with commercial options in development on a case-by-case basis.

The OpenWebSearch.eu initiative was launched in 2022, with a consortium made up of 14 leading research institutions from across Europe, including CERN…

The OWI offers a clear alternative based on European values. The project’s cross-disciplinary nature, with continuous dialogue between technical teams and legal, ethical and social experts, ensures that fairness and privacy are built into the OWI from the start. “Over thirty years since the World Wide Web was created at CERN and released to the public, our commitment to openness continues,” says Noor Afshan Fathima, IT research fellow at CERN. “Search is the next logical step in democratising digital access, especially as we enter the AI era.” The OWI facilitates AI capabilities, allowing web search data to be used for training large language models (LLMs), generating embeddings and powering chatbots…(More)”.
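The press release does not describe a public API for the index. Purely as an illustrative sketch of the downstream uses it mentions (embeddings and retrieval for chatbots), and assuming you already hold a local batch of crawled page texts, the snippet below builds hashed bag-of-words embeddings and runs a cosine-similarity lookup; the documents, dimensionality and tokenisation are all placeholder choices, not part of the OWI.

```python
# Illustrative only: hashed bag-of-words embeddings plus cosine-similarity
# retrieval over a local document batch. This is NOT the OWI interface;
# texts and dimensions are placeholders.
import numpy as np

DIM = 256  # embedding dimensionality, an arbitrary choice for the sketch

def embed(text: str) -> np.ndarray:
    """Map a text to a fixed-size unit vector by hashing its tokens."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = [
    "open web index pilot released by european consortium",
    "training large language models on web search data",
    "metro ventilation control reduces energy consumption",
]
matrix = np.vstack([embed(d) for d in docs])

query = embed("european open web index")
scores = matrix @ query  # cosine similarity, since all vectors are unit length
print(docs[int(np.argmax(scores))])  # expected: the open web index document
```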

5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”
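The article reports outcomes rather than implementation details, and the RESPIRA® control logic itself is not public. Purely as a schematic illustration of ventilation control driven by sensor readings, the toy proportional controller below maps temperature and CO2 measurements to a fan speed; all setpoints and readings are invented, and the real system relies on predictive, AI-based optimisation rather than fixed thresholds.

```python
# Schematic only: a naive proportional fan-speed controller keyed to platform
# temperature and CO2. Setpoints and readings are invented; the actual
# RESPIRA system uses AI-based predictive optimisation, not fixed rules.
def fan_speed(temp_c: float, co2_ppm: float,
              temp_target: float = 26.0, co2_target: float = 800.0) -> float:
    """Return a fan speed in [0, 1] based on how far readings exceed targets."""
    temp_demand = max(0.0, (temp_c - temp_target) / 6.0)    # full speed ~6 degC above target
    co2_demand = max(0.0, (co2_ppm - co2_target) / 600.0)   # full speed ~600 ppm above target
    return min(1.0, max(temp_demand, co2_demand))

# Example reading from a hypothetical station sensor
print(fan_speed(temp_c=29.5, co2_ppm=950))  # ~0.58, driven by the temperature term
```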

Beyond the Checkbox: Upgrading the Right to Opt Out


Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systematic [5]. Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out toward all future sites visited or apps used, if a user so desires [8].

Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still being shared just as if they had not opted out. The only difference is that, for those who have opted out, the data is accompanied by a privacy flag propagating the opt-out to the data recipient [7]. However, if people opt out, their data should not be shared in the first place! The current system, which relies on the propagation of opt-out signals and deletion of incoming data by the recipient, is complicated, error-prone, violates the principle of data minimization, and is an obstacle to effective privacy enforcement. Changing the ad ecosystem is particularly important as it is not only used on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!..(More)”
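One concrete instance of the generalized, browser-level opt-out described above is Global Privacy Control (GPC), which is transmitted as a `Sec-GPC: 1` request header. The minimal server-side sketch below honours such a signal by never forwarding the data at all, rather than forwarding it with a flag; the handler and downstream function names are illustrative assumptions, not any particular site's code.

```python
# Minimal sketch: honouring a GPC-style opt-out signal server-side.
# The Sec-GPC header comes from the Global Privacy Control proposal; the
# request handling and downstream call below are simplified assumptions.
def should_share_data(request_headers: dict) -> bool:
    """Treat 'Sec-GPC: 1' as a do-not-share/do-not-sell signal for this request."""
    return request_headers.get("Sec-GPC", "").strip() != "1"

def forward_to_ad_partners(user_data: dict) -> None:
    print("sharing", list(user_data))  # placeholder for a real integration

def handle_request(request_headers: dict, user_data: dict) -> None:
    if should_share_data(request_headers):
        forward_to_ad_partners(user_data)
    # If the user opted out, the data is never sent in the first place --
    # no privacy flag has to propagate and no recipient-side deletion is needed.

handle_request({"Sec-GPC": "1"}, {"id": "abc", "interests": ["cycling"]})  # shares nothing
```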

Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations


Report by the National Academies of Sciences, Engineering, and Medicine: “Planning and operating transportation systems involves the exchange of large volumes of data that must be shared between partnering transportation agencies, private-sector interests, travelers, and intelligent devices such as traffic signals, ramp meters, and connected vehicles.

NCHRP Research Report 1121: Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations, from TRB’s National Cooperative Highway Research Program, presents tools, methods, and guidelines for improving data integration, sharing, and management practices through case studies, proof-of-concept product developments, and deployment assistance…(More)”.

Can AI Agents Be Trusted?


Article by Blair Levin and Larry Downes: “Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a level of programming on top of large language models (LLMs) that allow them to work towards specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal.
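The excerpt above describes the agent layer functionally: collect data, decide, act, and adapt, within rules set by the principal. As a generic illustration of that sense-decide-act loop, the sketch below stubs out the model call and the external actions; every name and rule in it is hypothetical rather than any vendor's framework.

```python
# Generic agent loop: collect data, decide, act, adapt. A bare-bones
# illustration of the "layer on top of an LLM" idea, with the model call
# and side effects stubbed out. All names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    rules: list[str]                      # constraints set by the principal
    memory: list[str] = field(default_factory=list)

    def collect(self) -> str:
        return "inbox: 2 meeting requests for Tuesday"  # stand-in for real data sources

    def decide(self, observation: str) -> str:
        # A real agent would call an LLM here, constrained by self.rules;
        # a canned action keeps the sketch self-contained.
        return f"propose a schedule consistent with goal '{self.goal}' given: {observation}"

    def act(self, action: str) -> str:
        return f"done: {action}"          # stand-in for calendar/API side effects

    def step(self) -> None:
        observation = self.collect()
        action = self.decide(observation)
        result = self.act(action)
        self.memory.append(result)        # adapt: keep outcomes for future decisions

agent = Agent(goal="keep mornings free", rules=["never double-book", "ask before purchases"])
agent.step()
print(agent.memory)
```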

Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries in a wide range of industries and applications and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant, handling calendar management, performing directed research and analysis, finding, negotiating for, and purchasing goods and services, curating content and taking over basic communications, learning and optimizing themselves along the way.

The idea of personal AI agents goes back decades, but the technology finally appears ready for prime time. Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?

The answers to these questions will determine whether and how quickly users embrace personal AI agents, and if their widespread deployment will enhance or damage business relationships and brand value…(More)”.

Spaces for democracy with generative artificial intelligence: public architecture at stake


Paper by Ingrid Campo-Ruiz: “Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. This research moves beyond the dominant framework of discussions on AI and democracy as a form of spreading misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy and the way in which generative AI tools provide a nuanced view of democracy that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study in Stockholm, Sweden, AI outputs were later combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences.

Results show that urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy and the education of individuals in public buildings in the city centre. The results obtained with ChatGPT-4o barely reflect public open spaces, parks, or routes. They also prioritize organized rather than spontaneous engagement and do not reflect unstructured events, such as demonstrations, or powerful actors, such as political parties or workers’ unions. The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces like offices that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while perpetuating powerful agents by not making them sufficiently visible to be accountable for their actions.

In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding the plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework that adjusts to reality for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors, including AI. AI systems should, therefore, be monitored to enhance democratic values through urban space…(More)”.