OECD Report: “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation…(More)”.
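The mechanics of one PET named above, differential privacy, can be sketched in a few lines: adding calibrated Laplace noise to a counting query lets an aggregate be shared while statistically masking any single record. This is an illustrative sketch only, not a mechanism from the OECD report; the function name and data are invented.

```python
import random

def dp_count(values, threshold, epsilon):
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy. Smaller
    epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [34, 51, 29, 62, 45, 38, 70, 23]
print(dp_count(ages, 40, epsilon=0.5))  # noisy answer; the true count is 4
```

Running the query twice yields different answers, which is the point: no single release pins down whether a particular record is present.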
Comparative evaluation of behavioral epidemic models using COVID-19 data
Paper by Nicolò Gozzi, Nicola Perra, and Alessandro Vespignani: “Characterizing the feedback linking human behavior and the transmission of infectious diseases (i.e., behavioral changes) remains a significant challenge in computational and mathematical epidemiology. Existing behavioral epidemic models often lack real-world data calibration and cross-model performance evaluation in both retrospective analysis and forecasting. In this study, we systematically compare the performance of three mechanistic behavioral epidemic models across nine geographies and two modeling tasks during the first wave of COVID-19, using various metrics. The first model, a Data-Driven Behavioral Feedback Model, incorporates behavioral changes by leveraging mobility data to capture variations in contact patterns. The second and third models are Analytical Behavioral Feedback Models, which simulate the feedback loop either through the explicit representation of different behavioral compartments within the population or by utilizing an effective nonlinear force of infection. Our results do not identify a single best model overall, as performance varies based on factors such as data availability, data quality, and the choice of performance metrics. While the Data-Driven Behavioral Feedback Model incorporates substantial real-time behavioral information, the Analytical Compartmental Behavioral Feedback Model often demonstrates superior or equivalent performance in both retrospective fitting and out-of-sample forecasts. Overall, our work offers guidance for future approaches and methodologies to better integrate behavioral changes into the modeling and projection of epidemic dynamics…(More)”.
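The "effective nonlinear force of infection" mentioned above can be illustrated with a toy discrete-time SIR model in which rising prevalence damps transmission, mimicking risk-averse behavioral change. This is a generic sketch, not the authors' calibrated formulation; the exponent `alpha` and all parameter values are arbitrary.

```python
def sir_with_behaviour(beta, gamma, alpha, s0, i0, days, dt=0.1):
    """Discrete-time SIR with a nonlinear force of infection:
    lambda = beta * s * i**(1 + alpha).

    alpha > 0 damps transmission as prevalence i rises (people
    reduce contacts when infections are visible); alpha = 0
    recovers the classic SIR model. State variables are fractions.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    traj = [(s, i, r)]
    for _ in range(int(days / dt)):
        lam = beta * s * i ** (1.0 + alpha)  # nonlinear force of infection
        s, i, r = s - lam * dt, i + (lam - gamma * i) * dt, r + gamma * i * dt
        traj.append((s, i, r))
    return traj

base = sir_with_behaviour(0.4, 0.1, 0.0, 0.99, 0.01, 200)
damped = sir_with_behaviour(0.4, 0.1, 0.2, 0.99, 0.01, 200)
peak = lambda traj: max(state[1] for state in traj)
print(peak(base), peak(damped))  # behavioural damping lowers the epidemic peak
```

Comparing the two runs shows the qualitative feedback effect the paper studies: the behaviorally damped trajectory peaks lower and later than the classic SIR baseline.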
Fixing the US statistical infrastructure
Article by Nancy Potok and Erica L. Groshen: “Official government statistics are critical infrastructure for the information age. Reliable, relevant statistical information helps businesses to invest and flourish; governments at the local, state, and national levels to make critical decisions on policy and public services; and individuals and families to invest in their futures. Yet surrounded by all manner of digitized data, one can still feel inadequately informed. A major driver of this disconnect in the US context is delayed modernization of the federal statistical system. The disconnect will likely worsen in coming months as the administration shrinks statistical agencies’ staffing, terminates programs (notably for health and education statistics), and eliminates unpaid external advisory groups. Amid this upheaval, might the administration’s appetite for disruption be harnessed to modernize federal statistics?
Federal statistics, one of the United States’ premier public goods, differ from privately provided data because they are privacy protected, aggregated to address relevant questions for decision-makers, constructed transparently, and widely available without a subscription. The private sector cannot be expected to adequately supply such statistical infrastructure. Yes, some companies collect and aggregate some economic data, such as credit card purchases and payroll information. But without strong underpinnings of a modern, federal information infrastructure, there would be large gaps in nationally consistent, transparent, trustworthy data. Furthermore, most private providers rely on public statistics for their internal analytics, to improve their products. They are among the many data users asking for more from statistical agencies…(More)”.
Generative AI Outlook Report
Outlook report, prepared by the European Commission’s Joint Research Centre (JRC): “…examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. GenAI is a disruptive technology due to its capability of producing human-like content at an unprecedented scale. As such, it holds multiple opportunities for advancements across various sectors, including healthcare, education, science, and creative industries. At the same time, GenAI also presents significant challenges, including the potential to amplify misinformation and bias, disrupt labour markets, and erode privacy. These issues are cross-cutting, and the rapid development of GenAI therefore requires a multidisciplinary approach to fully understand its implications. Against this backdrop, the Outlook report begins with an overview of the technological aspects of GenAI, detailing current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate the adaptation of skills and strategies. The societal impact of GenAI is also addressed, with a focus on both the opportunities for inclusivity and the risks of bias and over-reliance. In light of these challenges, the regulatory section outlines the EU’s current legislative framework, such as the AI Act and horizontal data legislation, which promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents, underscoring the need for careful management and strategic policy interventions to maximize potential benefits while mitigating risks.
The report concludes that GenAI has the potential to bring significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate its challenges and opportunities while ensuring that technological developments remain fully aligned with democratic values and the EU legal framework…(More)”.
AI alone cannot solve the productivity puzzle
Article by Carl Benedikt Frey: “Each time fears of AI-driven job losses flare up, optimists reassure us that artificial intelligence is a productivity tool that will help both workers and the economy. Microsoft chief Satya Nadella thinks autonomous AI agents will allow users to name their goal while the software plans, executes and learns across every system. A dream tool — if efficiency alone were enough to solve the productivity problem.
History says it is not. Over the past half-century we have filled offices and pockets with ever-faster computers, yet labour-productivity growth in advanced economies has slowed from roughly 2 per cent a year in the 1990s to about 0.8 per cent in the past decade. Even China’s once-soaring output per worker has stalled.
The shotgun marriage of the computer and the internet promised more than enhanced office efficiency — it envisioned a golden age of discovery. By placing the world’s knowledge in front of everyone and linking global talent, breakthroughs should have multiplied. Yet research productivity has sagged. The average scientist now produces fewer breakthrough ideas per dollar than their 1960s counterpart.
What went wrong? As economist Gary Becker once noted, parents face a quality-versus-quantity trade-off: the more children they have, the less they can invest in each child. The same might be said for innovation.
Large-scale studies of inventive output confirm the result: researchers juggling more projects are less likely to deliver breakthrough innovations. Over recent decades, scientific papers and patents have become increasingly incremental. History’s greats understood why. Isaac Newton kept a single problem “constantly before me . . . till the first dawnings open slowly, by little and little, into a full and clear light”. Steve Jobs concurred: “Innovation is saying no to a thousand things.”
Human ingenuity thrives where precedent is thin. Had the 19th century focused solely on better looms and ploughs, we would enjoy cheap cloth and abundant grain — but there would be no antibiotics, jet engines or rockets. Economic miracles stem from discovery, not repeating tasks at greater speed.
Large language models gravitate towards the statistical consensus. A model trained before Galileo would have parroted a geocentric universe; fed 19th-century texts, it would have proved human flight impossible before the Wright brothers succeeded. A recent Nature review found that while LLMs lightened routine scientific chores, the decisive leaps of insight still belonged to humans. Even Demis Hassabis, whose team at Google DeepMind produced AlphaFold — a model that can predict the shape of a protein and is arguably AI’s most celebrated scientific feat so far — admits that achieving genuine artificial general intelligence, systems that can match or surpass humans across the full spectrum of cognitive tasks, may require “several more innovations”…(More)”.
Facilitating the secondary use of health data for public interest purposes across borders
OECD Paper: “Recent technological developments create significant opportunities to process health data in the public interest. However, the growing fragmentation of frameworks applied to data has become a structural impediment to fully leverage these opportunities. Public and private stakeholders suggest that three key areas should be analysed to support this outcome, namely: the convergence of governance frameworks applicable to health data use in the public interest across jurisdictions; the harmonisation of national procedures applicable to secondary health data use; and the public perceptions around the use of health data. This paper explores each of these three key areas and concludes with an overview of collective findings relating specifically to the convergence of legal bases for secondary data use…(More)”.
Protecting young digital citizens
Blog by Pascale Raulin-Serrier: “…As digital tools become more deeply embedded in children’s lives, many young users are unaware of the long-term consequences of sharing personal information online through apps, games, social media platforms and even educational tools. The large-scale collection of data related to their preferences, identity or lifestyle may be used for targeted advertising or profiling. This affects not only their immediate online experiences but can also have lasting consequences, including greater risks of discrimination and exclusion. These concerns underscore the urgent need for stronger safeguards, greater transparency and a child-centered approach to data governance.
CNIL’s initiatives to promote children’s privacy
In response to these challenges, the CNIL introduced eight recommendations in 2021 to provide practical guidance for children, parents and other stakeholders in the digital economy. These are built around several key pillars to promote and protect children’s privacy:
1. Providing specific safeguards
Children have distinct digital rights and must be able to exercise them fully. Under the European General Data Protection Regulation (GDPR), they benefit from special protections, including the right to be forgotten and, in some cases, the ability to consent to the processing of their data. In France, children can only register for social networks or online gaming platforms if they are over 15, or with parental consent if they are younger. The CNIL helps hold platforms accountable by offering clear recommendations on how to present terms of service and collect consent in ways that are accessible and understandable to children.
2. Balancing autonomy and protection
The needs and capacities of a 6-year-old child differ greatly from those of a 16-year-old adolescent. It is essential to consider this diversity in online behaviour, maturity and the evolving ability to make informed decisions. The CNIL emphasizes the importance of offering children a digital environment that strikes a balance between protection and autonomy. It also advocates for digital citizenship education to empower young people with the tools they need to manage their privacy responsibly…(More)”. See also Responsible Data for Children.
Blueprint on Prosocial Tech Design Governance
Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.
The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.
Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices— such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.
5 Ways AI Supports City Adaptation to Extreme Heat
Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season, with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.
This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.
In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.
Cooling the Metro with AI-Driven Ventilation, in Barcelona
With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”
Beyond the Checkbox: Upgrading the Right to Opt Out
Article by Sebastian Zimmeck: “…rights, as currently encoded in privacy laws, put too much onus on individuals when many privacy problems are systematic [5]. Indeed, privacy is a systems property. If we want to make progress toward a more privacy-friendly Web as well as mobile and smart TV platforms, we need to take a systems perspective. For example, instead of requiring people to opt out from individual websites, there should be opt-out settings in browsers and operating systems. If a law requires individual opt-outs, those can be generalized by applying one opt-out toward all future sites visited or apps used, if a user so desires [8].
Another problem is that the ad ecosystem is structured such that if people opt out, in many cases their data is still being shared just as if they had not opted out. The only difference is that in the latter case the data is accompanied by a privacy flag propagating the opt-out to the data recipient [7]. However, if people opt out, their data should not be shared in the first place! The current system, relying on the propagation of opt-out signals and deletion of incoming data by the recipient, is complicated and error-prone, violates the principle of data minimization, and obstructs effective privacy enforcement. Changing the ad ecosystem is particularly important as it is not only used on the web but also on many other platforms. Companies and the online ad industry as a whole need to do better!..(More)”
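The browser-level opt-out the article advocates already has a concrete carrier: the Global Privacy Control (GPC) signal, which a browser sends as the HTTP header `Sec-GPC: 1`. A minimal server-side sketch of honoring it at the source, rather than propagating a privacy flag downstream, might look like this (the function name and the surrounding logic are invented for illustration):

```python
def may_share_data(headers):
    """Decide whether a request's data may be shared with third parties.

    Honours the Global Privacy Control opt-out signal: browsers that
    send `Sec-GPC: 1` have expressed a do-not-sell/share preference.
    The systems-level fix argued in the article is to suppress the
    sharing here, at the source, instead of forwarding the data with
    an attached opt-out flag for the recipient to act on.
    """
    gpc = headers.get("Sec-GPC", "").strip()
    return gpc != "1"  # share only when no opt-out signal is present

print(may_share_data({"Sec-GPC": "1"}))  # False: user opted out, do not share
print(may_share_data({}))                # True: no signal received
```

Because the check runs before any data leaves the server, it sidesteps the error-prone flag-propagation and downstream-deletion machinery the article criticizes.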