AI action plan database


A project by the Institute for Progress: “In January 2025, President Trump tasked the Office of Science and Technology Policy with creating an AI Action Plan to promote American AI Leadership. The government requested input from the public, and received 10,068 submissions. The database below summarizes specific recommendations from these submissions. … We used AI to extract recommendations from each submission, and to tag them with relevant information. Click on a recommendation to learn more about it. See our analysis of common themes and ideas across these recommendations…(More)”.

Technical Tiers: A New Classification Framework for Global AI Workforce Analysis


Report by Siddhi Pal, Catherine Schneider and Ruggero Marino Lazzaroni: “… introduces a novel three-tiered classification system for global AI talent that addresses significant methodological limitations in existing workforce analyses. By distinguishing between non-technical roles (Category 0), technical software development (Category 1), and advanced deep learning specialization (Category 2), our framework enables precise examination of AI workforce dynamics at a pivotal moment in global AI policy.
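The three-tier scheme could be sketched as a toy classifier. The category labels and definitions come from the report; the keyword heuristics and function below are hypothetical illustrations, not the authors' actual methodology.

```python
# Illustrative sketch of the report's three-tier scheme. The tier labels
# (Category 0/1/2) come from the paper; the keyword heuristics are
# hypothetical stand-ins for the authors' actual classification method.

DEEP_LEARNING_SKILLS = {"pytorch", "tensorflow", "transformers", "reinforcement learning"}
SOFTWARE_SKILLS = {"python", "java", "sql", "software engineering"}

def classify_ai_worker(skills: set[str]) -> int:
    """Return 2 for deep learning specialists, 1 for technical software
    development, and 0 for non-technical roles in the AI talent pool."""
    skills = {s.lower() for s in skills}
    if skills & DEEP_LEARNING_SKILLS:
        return 2  # Category 2: advanced deep learning specialization
    if skills & SOFTWARE_SKILLS:
        return 1  # Category 1: technical software development
    return 0      # Category 0: non-technical roles (e.g., AI policy, sales)

pool = [
    {"PyTorch", "Python"},     # -> Category 2
    {"Java", "SQL"},           # -> Category 1
    {"product marketing"},     # -> Category 0
]
tiers = [classify_ai_worker(s) for s in pool]
```

In practice, as the report notes, tiering requires richer signals (job titles, publications, project histories) than a keyword lookup, but the mutually exclusive ordering above captures the framework's key idea: Category 2 is a strict subset of technical talent, not an overlapping label.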

Through our analysis of a sample of 1.6 million individuals in the AI talent pool across 31 countries, we’ve uncovered clear patterns in technical talent distribution that significantly impact Europe’s AI ambitions. Asian nations hold an advantage in specialized AI expertise, with South Korea (27%), Israel (23%), and Japan (20%) maintaining the highest proportions of Category 2 talent. Within Europe, Poland and Germany stand out as leaders in specialized AI talent. This may be connected to their initiatives to attract tech companies and investments in elite research institutions, though further research is needed to confirm these relationships.

Our data also reveals a shifting landscape of global talent flows. Research shows that countries employing points-based immigration systems attract 1.5 times more high-skilled migrants than those using demand-led approaches. This finding takes on new significance in light of recent geopolitical developments affecting scientific research globally. As restrictive policies and funding cuts create uncertainty for researchers in the United States, one of the main destinations for European AI talent, the way nations position their regulatory environments, scientific freedoms, and research infrastructure will increasingly determine their ability to attract and retain specialized AI talent.

The gender analysis in our study illuminates another dimension of competitive advantage. Contrary to the overall AI talent pool, EU countries lead in female representation in highly technical roles (Category 2), occupying seven of the top ten global rankings. Finland, Czechia, and Italy have the highest proportion of female representation in Category 2 roles globally (39%, 31%, and 28%, respectively). This gender diversity represents not merely a social achievement but a potential strategic asset in AI innovation, particularly as global coalitions increasingly emphasize the importance of diverse perspectives in AI development…(More)”

Mini-Publics and Party Ideology: Who Commissioned the Deliberative Wave in Europe?


Paper by Rodrigo Ramis-Moyano et al: “The increasing implementation of deliberative mini-publics (DMPs) such as Citizens’ Assemblies and Citizens’ Juries led the OECD to identify a ‘deliberative wave’. The burgeoning scholarship on DMPs has increased understanding of how they operate and their impact, but less attention has been paid to the drivers behind this diffusion. Existing research on democratic innovations has underlined the role of the governing party’s ideology as a relevant variable in the study of the adoption of other procedures such as participatory budgeting, placing left-wing parties as a prominent actor in this process. Unlike this previous literature, we have little understanding of whether mini-publics appeal equally across the ideological spectrum. This paper draws on the large-N OECD database to analyse the impact of governing party affiliation on the commissioning of DMPs in Europe across the last four decades. Our analysis finds the ideological pattern of adoption is less clear-cut than for other democratic innovations such as participatory budgeting. But stronger ideological differentiation emerges when we pay close attention to the design features of the DMPs implemented…(More)”.

Artificial Intelligence: Generative AI’s Environmental and Human Effects


GAO Report: “Generative artificial intelligence (AI) could revolutionize entire industries. In the nearer term, it may dramatically increase productivity and transform daily tasks in many sectors. However, both its benefits and risks, including its environmental and human effects, are unknown or unclear.

Generative AI uses significant energy and water resources, but companies are generally not reporting details of these uses. Most estimates of the environmental effects of generative AI technologies have focused on quantifying the energy required to train generative AI models, along with the carbon emissions associated with generating that energy. Estimates of water consumption by generative AI are limited. Generative AI is expected to be a driving force for data center demand, but what portion of data center electricity consumption is related to generative AI is unclear. According to the International Energy Agency, U.S. data center electricity consumption was approximately 4 percent of U.S. electricity demand in 2022 and could be 6 percent of demand in 2026.

While generative AI may bring beneficial effects for people, GAO highlights five risks and challenges that could result in negative human effects on society, culture, and people from generative AI (see figure). For example, unsafe systems may produce outputs that compromise safety, such as inaccurate information, undesirable content, or the enabling of malicious behavior. However, definitive statements about these risks and challenges are difficult to make because generative AI is rapidly evolving, and private developers do not disclose some key technical information.

Selected generative artificial intelligence risks and challenges that could result in human effects

GAO identified policy options to consider that could enhance the benefits or address the challenges of environmental and human effects of generative AI. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. In addition, policymakers could choose to maintain the status quo, whereby they would not take additional action beyond current efforts. See below for details on the policy options…(More)”.

Test and learn: a playbook for mission-driven government


Playbook by the Behavioral Insights Team: “…sets out more detailed considerations around embedding test and learn in government, along with a broader range of methods that can be used at different stages of the innovation cycle. These can be combined flexibly, depending on the stage of the policy or service cycle, the available resources, and the nature of the challenge – whether that’s improving services, testing creative new approaches, or navigating uncertainty in new policy areas.

Almost all of the methods set out can be augmented or accelerated by harnessing AI tools – from using AI agents to conduct large-scale qualitative research, to AI-enhanced evidence discovery and analysis, and AI-powered systems mapping and modelling. AI should be treated as a core component of the toolkit at each stage. And the rapid evolution of AI applications is another strong argument for maintaining an agile mindset and regularly updating our ways of working.

We hope this playbook will make test-and-learn more tangible to people who are new to it, and will expand the toolkit of people who have more experience with the approach. And ultimately we hope it will serve as a practical cheatsheet for building and improving the fabric of life…(More)”.

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts


White Paper by the Stanford Institute for Human-Centered AI (HAI), the Asia Foundation and the University of Pretoria: “…maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership…

  • Large language model (LLM) development suffers from a digital divide: Most major LLMs underperform for non-English—and especially low-resource—languages; are not attuned to relevant cultural contexts; and are not accessible in parts of the Global South.
  • Low-resource languages (such as Swahili or Burmese) face two crucial limitations: a scarcity of labeled and unlabeled language data and poor quality data that is not sufficiently representative of the languages and their sociocultural contexts.
  • To bridge these gaps, researchers and developers are exploring different technical approaches to developing LLMs that better perform for and represent low-resource languages but come with different trade-offs:
    • Massively multilingual models, developed primarily by large U.S.-based firms, aim to improve performance for more languages by including a wider range of (100-plus) languages in their training datasets.
    • Regional multilingual models, developed by academics, governments, and nonprofits in the Global South, use smaller training datasets made up of 10-20 low-resource languages to better cater to and represent a smaller group of languages and cultures.
    • Monolingual or monocultural models, developed by a variety of public and private actors, are trained on or fine-tuned for a single low-resource language and thus tailored to perform well for that language…(More)”

Deliberative Approaches to Inclusive Governance


Series edited by Taylor Owen and Sequoia Kim: “Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance…(More)”.

Spaces for Deliberation


Report by Gustav Kjær Vad Nielsen & James MacDonald-Nelson: “As citizens’ assemblies and other forms of citizen deliberation are increasingly implemented in many parts of the world, it is becoming more relevant to explore and question the role of the physical spaces in which these processes take place.

This paper builds on existing literature that considers the relationships between space and democracy. In the literature, this relationship has been studied with a focus on the architecture of parliament buildings, and on the role of urban public spaces and architecture for political culture, both largely within the context of representative democracy and with little or no attention given to spaces for facilitated citizen deliberation. Given this gap, we argue that the spatial qualities of citizen deliberation demand more critical attention.

Through a series of interviews with leading practitioners of citizens’ assemblies from six different countries, we explore what spatial qualities are typically considered in the planning and implementation of these assemblies, what are the recurring challenges related to the physical spaces where they take place, and the opportunities and limitations for a more intentional spatial design. In this paper, we synthesise our findings and formulate a series of considerations for the spatial qualities of citizens’ assemblies aimed at informing future practice and further research…(More)”.

The New Commons Challenge: Advancing AI for Public Good through Data Commons


Press Release: “The Open Data Policy Lab, a collaboration between The GovLab at New York University and Microsoft, has launched the New Commons Challenge, an initiative to advance the responsible reuse of data for AI-driven solutions that enhance local decision-making and humanitarian response. 

The Challenge will award two winning institutions $100,000 each to develop data commons that fuel responsible AI innovation in these critical areas.

With the increasing use of generative AI in crisis management, disaster preparedness, and local decision-making, access to diverse and high-quality data has never been more essential. 

The New Commons Challenge seeks to support organizations—including start-ups, non-profits, NGOs, universities, libraries, and AI developers—to build shared data ecosystems that improve real-world outcomes, from public health to emergency response.

Bridging Research and Real-World Impact

“The New Commons Challenge is about putting data into action,” said Stefaan Verhulst, Co-Founder and Chief Research and Development Officer at The GovLab. “By enabling new models of data stewardship, we aim to support AI applications that save lives, strengthen communities, and enhance local decision-making where it matters most.”

The Challenge builds on the Open Data Policy Lab’s recent report, “Blueprint to Unlock New Data Commons for AI,” which advocates for creating collaboratively governed data ecosystems that support responsible AI development.

How the Challenge Works

The challenge unfolds in two phases:

Phase One: Open Call for Concept Notes (April 14 – June 2, 2025)

Innovators worldwide are invited to submit concept notes outlining their ideas.

Phase Two: Full Proposal Submissions & Expert Review (June 2025)

  • Selected applicants will be invited to submit a full proposal.
  • An interdisciplinary panel will evaluate proposals based on their impact potential, feasibility, and ethical governance.

Winners Announced in Late Summer 2025

Two selected projects will each receive $100,000 in funding, alongside technical support, mentorship, and global recognition…(More)”.

Data Cooperatives: Democratic Models for Ethical Data Stewardship


Paper by Francisco Mendonca, Giovanna DiMarzo, and Nabil Abdennadher: “Data cooperatives offer a new model for fair data governance, enabling individuals to collectively control, manage, and benefit from their information while adhering to cooperative principles such as democratic member control, economic participation, and community concern. This paper reviews data cooperatives, distinguishing them from models like data trusts, data commons, and data unions, and defines them based on member ownership, democratic governance, and data sovereignty. It explores applications in sectors like healthcare, agriculture, and construction. Despite their potential, data cooperatives face challenges in coordination, scalability, and member engagement, requiring innovative governance strategies, robust technical systems, and mechanisms to align member interests with cooperative goals. The paper concludes by advocating for data cooperatives as a sustainable, democratic, and ethical model for the future data economy…(More)”.