Government at a Glance 2025


OECD Report: “Governments face a highly complex operating environment marked by major demographic, environmental, and digital shifts, alongside low trust and constrained fiscal space. 

Responding effectively means concentrating efforts on three fronts: enhancing individuals’ sense of dignity in their interactions with government, restoring a sense of security amid rapid societal and economic change, and improving government efficiency and effectiveness to help boost productivity in the economy while restoring public finances. These priorities converge in the governance of the green transition.

Government at a Glance 2025 offers evidence-based tools to tackle these long-term challenges…

Governments are not yet making the most of digital tools and data to improve effectiveness and efficiency

Data, digital tools and AI all offer the prospect of efficiency gains. OECD countries score, on average, 0.61 on the Digital Government Index (on a 0-1 scale) but could improve their digital policy frameworks, whole-of-government approaches and use of data as a strategic asset. On average, only 47% of OECD governments’ high-value datasets are openly available, falling to just 37% in education and 42% in health and social welfare…(More)”.

Sharing trustworthy AI models with privacy-enhancing technologies


OECD Report: “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation…(More)”.
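One of the PETs the report names, differential privacy, can be illustrated with a minimal sketch: releasing a noisy mean whose Laplace noise is calibrated to the query’s sensitivity. This is an illustrative Python example, not drawn from the report; the function names and parameters are hypothetical.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon, rng):
    """Release an epsilon-differentially-private mean of `values`.

    Each value is clamped to [lower, upper] so a single record can
    change the mean by at most (upper - lower) / n; that bound is the
    sensitivity used to calibrate the Laplace noise.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clamped)
    true_mean = sum(clamped) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

# Smaller epsilon means more noise and a stronger privacy guarantee.
rng = random.Random(0)
noisy = dp_mean(list(range(10)), 0.0, 10.0, epsilon=1.0, rng=rng)
```

The trade-off the report highlights between utility and privacy is visible directly in the `epsilon` parameter: the released statistic degrades as the guarantee strengthens.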

Understanding the Impacts of Generative AI Use on Children


Primer by The Alan Turing Institute and LEGO Foundation: “There is a growing body of research looking at the potential positive and negative impacts of generative AI and its associated risks. However, there is a lack of research that considers the potential impacts of these technologies on children, even though generative AI is already being deployed within many products and systems that children engage with, from games to educational platforms. Children have particular needs and rights that must be accounted for when designing, developing, and rolling out new technologies, and more focus on children’s rights is needed. While children may be the group most impacted by the widespread deployment of generative AI, they are simultaneously the group least represented in decision-making processes relating to the design, development, deployment, or governance of AI.

The Alan Turing Institute’s Children and AI and AI for Public Services teams explored the perspectives of children, parents, carers and teachers on generative AI technologies. Their research is guided by the ‘Responsible Innovation in Technology for Children’ (RITEC) framework for digital technology, play and children’s wellbeing, established by UNICEF and funded by the LEGO Foundation, and seeks to examine the potential impacts of generative AI on children’s wellbeing. The utility of the RITEC framework is that it enables qualitative analysis of wellbeing by foregrounding more specific factors such as identity and creativity, which are explored further in each of the work packages.

The project provides unique and much needed insights into impacts of generative AI on children through combining quantitative and qualitative research methods…(More)”.

2025 State of the Digital Decade


Report by The European Commission: “…assessed the EU’s progress across the four target areas for the EU’s digital transformation by 2030, highlighting achievements and gaps in the areas of digital infrastructure, digitalisation of businesses, digital skills, and digitalisation of public services.


The report shows that although there are certain advancements, the rollout of connectivity infrastructure, such as fibre and 5G stand-alone networks, is still lagging. More companies are adopting Artificial Intelligence (AI), cloud and big data, but adoption needs to accelerate. Just over half of Europeans (55.6%) have a basic level of digital skills, while the availability of ICT specialists with advanced skills remains low and with a stark gender divide, hindering progress in key sectors, such as cybersecurity and AI. In 2024, the EU made steady progress in digitalising key public services, but a substantial portion of governmental digital infrastructure continues to depend on service providers outside the EU.

The data shows persisting challenges, such as fragmented markets, overly complex regulations, security and strategic dependence. Further public and private investment and easier access to venture capital for EU companies would accelerate innovation and scale up…(More)”.

Five dimensions of scaling democratic deliberation: With and beyond AI


Paper by Sammy McKinney and Claudia Chwalisz: “In the study and practice of deliberative democracy, academics and practitioners are increasingly exploring the role that Artificial Intelligence (AI) can play in scaling democratic deliberation. From claims by leading deliberative democracy scholars that AI can bring deliberation to the ‘mass’ or ‘global’ scale, to cutting-edge innovations from technologists aiming to support scalability in practice, AI’s role in scaling deliberation is capturing the energy and imagination of many leading thinkers and practitioners.

There are many reasons why people may be interested in ‘scaling deliberation’. One is the evidence that deliberation has numerous benefits for those who take part – strengthening their individual and collective agency, political efficacy, and trust in one another and in institutions. Another is that the resulting decisions and actions are arguably higher-quality and more legitimate. Because these benefits are so substantial, there is significant interest in how to extend them to as many people and decisions as possible.

Another motivation stems from the view that one weakness of small-scale deliberative processes is precisely their size. For some, increasing the sheer numbers involved is a source of legitimacy; others argue that larger numbers will also improve the quality of the outputs and outcomes.

Finally, deliberative processes that are empowered and/or institutionalised are able to shift political power. Many therefore want to replicate the small-scale model of deliberation in more places, with an emphasis on redistributing power and influencing decision-making.

When we consider how to leverage technology for deliberation, we emphasise that we should not lose sight of the first-order goals of strengthening collective agency. Today there are deep geo-political shifts; in many places, there is a movement towards authoritarian measures, a weakening of civil society, and attacks on basic rights and freedoms. We see the debate about how to ‘scale deliberation’ through this political lens, where our goals are focused on how we can enable a citizenry that is resilient to the forces of autocracy – one that feels and is more powerful and connected, where people feel heard and empathise with others, where citizens have stronger interpersonal and societal trust, and where public decisions have greater legitimacy and better alignment with collective values…(More)”

Generative AI Outlook Report


Outlook report, prepared by the European Commission’s Joint Research Centre (JRC): “…examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. GenAI is a disruptive technology due to its capability of producing human-like content at an unprecedented scale. As such, it holds multiple opportunities for advancements across various sectors, including healthcare, education, science, and creative industries. At the same time, GenAI also presents significant challenges, including the potential to amplify misinformation and bias, disrupt labour markets, and raise privacy concerns. All of these issues are cross-cutting; the rapid development of GenAI therefore requires a multidisciplinary approach to fully understand its implications. Against this backdrop, the Outlook report begins with an overview of the technological aspects of GenAI, detailing current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate adaptation of skills and strategies. The societal impact of GenAI is also addressed, with a focus on both the opportunities for inclusivity and the risks of bias and over-reliance. Considering these challenges, the regulatory framework section outlines the EU’s current legislative framework, such as the AI Act and horizontal data legislation, which promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents. This section underscores the need for careful management and strategic policy interventions to maximize its potential benefits while mitigating the risks.
The report concludes that GenAI has the potential to deliver significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate the challenges and opportunities while ensuring that technological developments are fully aligned with democratic values and the EU legal framework…(More)”.

Energy and AI Observatory


IEA’s Energy and AI Observatory: “… provides up-to-date data and analysis on the growing links between the energy sector and artificial intelligence (AI). The new and fast-moving field of AI requires a new approach to gathering data and information, and the Observatory aims to provide regularly updated data and a comprehensive view of the implications of AI on energy demand (energy for AI) and of AI applications for efficiency, innovation, resilience and competitiveness in the energy sector (AI for energy). This first-of-a-kind platform is developed and maintained by the IEA, with valuable contributions of data and insights from the IEA’s energy industry and tech sector partners, and complements the IEA’s Special Report on Energy and AI…(More)”.

Community-Aligned A.I. Benchmarks


White Paper by the Aspen Institute: “…When people develop machine learning models for AI products and services, they iterate to improve performance. 

What it means to “improve” a machine learning model depends on what you want the model to do, like correctly transcribe an audio sample or generate a reliable summary of a long document.

Machine learning benchmarks are similar to standardized tests that AI researchers and builders can score their work against. Benchmarks allow us to both see if different model tweaks improve the performance for the intended task and compare similar models against one another.

Some famous benchmarks in AI include ImageNet and the Stanford Question Answering Dataset (SQuAD).
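How such a benchmark turns model outputs into a comparable score can be sketched in a few lines. Below is a minimal exact-match accuracy rule of the kind used by QA benchmarks such as SQuAD; it is an illustrative simplification, not SQuAD’s official evaluation script, and the names are hypothetical.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of model predictions that exactly match the reference
    answers after light normalisation (case and surrounding whitespace)."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align one-to-one")
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

# Comparing two model variants against the same benchmark answers lets us
# see whether a tweak helped, which is the core use of any benchmark.
references = ["Paris", "1969", "photosynthesis"]
model_a = ["paris", "1968", "photosynthesis"]
model_b = ["Paris ", "1969", "respiration"]
score_a = exact_match_accuracy(model_a, references)
score_b = exact_match_accuracy(model_b, references)
```

The white paper’s point follows directly: whichever normalisation and scoring rule the benchmark bakes in becomes the de facto definition of “better,” which is why those choices should reflect public priorities rather than be arbitrary.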

Benchmarks are important, but their development and adoption have historically been somewhat arbitrary. The capabilities that benchmarks measure should reflect what the public wants AI tools to be and do.

We can build positive AI futures, ones that emphasize what the public wants out of these emerging technologies. As such, it’s imperative that we build benchmarks worth striving for…(More)”.

Facilitating the secondary use of health data for public interest purposes across borders


OECD Paper: “Recent technological developments create significant opportunities to process health data in the public interest. However, the growing fragmentation of frameworks applied to data has become a structural impediment to fully leverage these opportunities. Public and private stakeholders suggest that three key areas should be analysed to support this outcome, namely: the convergence of governance frameworks applicable to health data use in the public interest across jurisdictions; the harmonisation of national procedures applicable to secondary health data use; and the public perceptions around the use of health data. This paper explores each of these three key areas and concludes with an overview of collective findings relating specifically to the convergence of legal bases for secondary data use…(More)”.

Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices, such as algorithms and interfaces, impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.