Spaces for Deliberation


Report by Gustav Kjær Vad Nielsen & James MacDonald-Nelson: “As citizens’ assemblies and other forms of citizen deliberation are increasingly implemented in many parts of the world, it is becoming more relevant to explore and question the role of the physical spaces in which these processes take place.

This paper builds on existing literature that considers the relationships between space and democracy. That literature has focused on the architecture of parliament buildings and on the role of urban public spaces and architecture in political culture, largely within the context of representative democracy and with little or no attention to spaces for facilitated citizen deliberation. Given how little consideration the spaces of deliberative assemblies have received, we argue in this paper that the spatial qualities of citizen deliberation demand more critical attention.

Through a series of interviews with leading practitioners of citizens’ assemblies from six different countries, we explore what spatial qualities are typically considered in the planning and implementation of these assemblies, what recurring challenges relate to the physical spaces where they take place, and what opportunities and limitations exist for a more intentional spatial design. In this paper, we synthesise our findings and formulate a series of considerations for the spatial qualities of citizens’ assemblies, aimed at informing future practice and further research…(More)”.

The New Commons Challenge: Advancing AI for Public Good through Data Commons


Press Release: “The Open Data Policy Lab, a collaboration between The GovLab at New York University and Microsoft, has launched the New Commons Challenge, an initiative to advance the responsible reuse of data for AI-driven solutions that enhance local decision-making and humanitarian response. 

The Challenge will award two winning institutions $100,000 each to develop data commons that fuel responsible AI innovation in these critical areas.

With the increasing use of generative AI in crisis management, disaster preparedness, and local decision-making, access to diverse and high-quality data has never been more essential. 

The New Commons Challenge seeks to support organizations—including start-ups, non-profits, NGOs, universities, libraries, and AI developers—to build shared data ecosystems that improve real-world outcomes, from public health to emergency response.

Bridging Research and Real-World Impact

“The New Commons Challenge is about putting data into action,” said Stefaan Verhulst, Co-Founder and Chief Research and Development Officer at The GovLab. “By enabling new models of data stewardship, we aim to support AI applications that save lives, strengthen communities, and enhance local decision-making where it matters most.”

The Challenge builds on the Open Data Policy Lab’s recent report, “Blueprint to Unlock New Data Commons for AI,” which advocates for creating collaboratively governed data ecosystems that support responsible AI development.

How the Challenge Works

The Challenge unfolds in two phases:

Phase One: Open Call for Concept Notes (April 14 – June 2, 2025)

  • Innovators worldwide are invited to submit concept notes outlining their ideas.

Phase Two: Full Proposal Submissions & Expert Review (June 2025)

  • Selected applicants will be invited to submit a full proposal.
  • An interdisciplinary panel will evaluate proposals based on their impact potential, feasibility, and ethical governance.

Winners Announced in Late Summer 2025

Two selected projects will each receive $100,000 in funding, alongside technical support, mentorship, and global recognition…(More)”.

Data Cooperatives: Democratic Models for Ethical Data Stewardship


Paper by Francisco Mendonca, Giovanna DiMarzo, and Nabil Abdennadher: “Data cooperatives offer a new model for fair data governance, enabling individuals to collectively control, manage, and benefit from their information while adhering to cooperative principles such as democratic member control, economic participation, and community concern. This paper reviews data cooperatives, distinguishing them from models like data trusts, data commons, and data unions, and defines them based on member ownership, democratic governance, and data sovereignty. It explores applications in sectors like healthcare, agriculture, and construction. Despite their potential, data cooperatives face challenges in coordination, scalability, and member engagement, requiring innovative governance strategies, robust technical systems, and mechanisms to align member interests with cooperative goals. The paper concludes by advocating for data cooperatives as a sustainable, democratic, and ethical model for the future data economy…(More)”.

Artificial Intelligence and the Future of Work


Report by National Academies of Sciences, Engineering, and Medicine: “Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers.

Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.

Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments…(More)”.

Energy and AI


Report by the International Energy Agency (IEA): “The development and uptake of artificial intelligence (AI) has accelerated in recent years – elevating the question of what widespread deployment of the technology will mean for the energy sector. There is no AI without energy – specifically electricity for data centres. At the same time, AI could transform how the energy industry operates if it is adopted at scale. However, until now, policy makers and other stakeholders have often lacked the tools to analyse both sides of this issue due to a lack of comprehensive data. 

This report from the International Energy Agency (IEA) aims to fill this gap based on new global and regional modelling and datasets, as well as extensive consultation with governments and regulators, the tech sector, the energy industry and international experts. It includes projections for how much electricity AI could consume over the next decade, as well as which energy sources are set to help meet that demand. It also analyses what the uptake of AI could mean for energy security, emissions, innovation and affordability…(More)”.
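For a sense of the scale involved in such projections, here is a back-of-envelope sketch (our illustration, not the IEA’s modelling; the capacity and utilization figures are assumptions) converting a hypothetical data-centre power draw into annual electricity consumption:

```python
# Back-of-envelope arithmetic with illustrative assumptions (not IEA figures):
# convert an assumed average data-centre power draw into annual energy use.
def annual_twh(average_power_gw: float, utilization: float = 0.8) -> float:
    """Annual electricity consumption in TWh for a given power draw in GW."""
    hours_per_year = 24 * 365  # 8,760 hours
    return average_power_gw * utilization * hours_per_year / 1000  # GWh -> TWh

# A hypothetical 50 GW of AI data-centre capacity running at 80% utilization:
print(f"{annual_twh(50):.0f} TWh/year")  # -> 350 TWh/year
```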

2025 Technology and Innovation Report


UNCTAD Report: “Frontier technologies, particularly artificial intelligence (AI), are profoundly transforming our economies and societies, reshaping production processes, labour markets and the ways in which we live and interact. Will AI accelerate progress towards the Sustainable Development Goals, or will it exacerbate existing inequalities, leaving the underprivileged further behind? How can developing countries harness AI for sustainable development? AI is the first technology in history that can make decisions and generate ideas on its own. This sets it apart from traditional technologies and challenges the notion of technological neutrality.

The rapid development of AI has also outpaced the ability of Governments to respond effectively. The Technology and Innovation Report 2025 aims to guide policymakers through the complex AI landscape and support them in designing science, technology and innovation (STI) policies that foster inclusive and equitable technological progress.

The world already has significant digital divides, and with the rise of AI, these could widen even further. In response, the Report argues for AI development based on inclusion and equity, shifting the focus from technology to people. AI technologies should complement rather than displace human workers, and production should be restructured so that the benefits are shared fairly among countries, firms and workers. It is also important to strengthen international collaboration, to enable countries to co-create inclusive AI governance.

The Report examines five core themes:
A. AI at the technological frontier
B. Leveraging AI for productivity and workers’ empowerment
C. Preparing to seize AI opportunities
D. Designing national policies for AI
E. Global collaboration for inclusive and equitable AI…(More)”.

How is AI augmenting collective intelligence for the SDGs?


Article by UNDP: “Increasingly, AI techniques like natural language processing, machine learning and predictive analytics are being used alongside the most common methods in collective intelligence, from citizen science and crowdsourcing to digital democracy platforms.

At its best, AI can be used to augment and scale the intelligence of groups. In this section we describe the potential offered by these new combinations of human and machine intelligence. First, we look at the applications that are most common, where AI is being used to enhance efficiency and categorize unstructured data, before turning to the emerging role of AI – where it helps us to better understand complex systems.

These are the three main ways AI and collective intelligence are currently being used together for the SDGs:

1. Efficiency and scale of data processing

AI is being effectively incorporated into collective intelligence projects where timing is paramount and a key insight is buried deep within large volumes of unstructured data. This combination of AI and collective intelligence is most useful when decision makers require an early warning to help them manage risks and distribute public resources more effectively. For example, Dataminr’s First Alert system uses pre-trained machine learning models to sift through text and images scraped from the internet, as well as other data streams, such as audio broadcasts, to isolate early signals that anticipate emergency events…(More)”. (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern).
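To make this first pattern concrete, here is a minimal sketch (our illustration, not Dataminr’s actual system) of how a pre-trained model can triage a stream of short posts into early-warning signals; the model name, candidate labels, and confidence threshold are all assumptions:

```python
# Minimal sketch of AI-assisted early-warning triage (illustrative only).
# Assumes the Hugging Face transformers library and a zero-shot classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical crisis categories a monitoring team might care about.
CRISIS_LABELS = ["flood", "earthquake", "fire", "disease outbreak", "no emergency"]

def triage(posts, threshold=0.7):
    """Flag posts whose top crisis label clears the confidence threshold."""
    alerts = []
    for text in posts:
        result = classifier(text, candidate_labels=CRISIS_LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_label != "no emergency" and top_score >= threshold:
            alerts.append({"text": text, "label": top_label, "score": round(top_score, 2)})
    return alerts

if __name__ == "__main__":
    stream = [
        "River has burst its banks near the market, water is rising fast",
        "Great concert downtown last night!",
    ]
    for alert in triage(stream):
        print(alert)
```

In practice, flagged items would be routed to human analysts or volunteers for verification, which is where the collective-intelligence side of the pairing comes in.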

Privacy-Enhancing and Privacy-Preserving Technologies in AI: Enabling Data Use and Operationalizing Privacy by Design and Default


Paper by the Centre for Information Policy Leadership at Hunton (“CIPL”): “provides an in-depth exploration of how privacy-enhancing technologies (“PETs”) are being deployed to address privacy within artificial intelligence (“AI”) systems. It aims to describe how these technologies can help operationalize privacy by design and default and serve as key business enablers, allowing companies and public sector organizations to access, share and use data that would otherwise be unavailable. It also seeks to demonstrate how PETs can address challenges and provide new opportunities across the AI life cycle, from data sourcing to model deployment, and includes real-world case studies…
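As one concrete illustration of a PET (our sketch, not an example drawn from the Paper), the Laplace mechanism from differential privacy lets an organization release an aggregate statistic, such as a count used to evaluate an AI system, while mathematically bounding what the output reveals about any individual record; the epsilon value and data below are assumptions:

```python
# Illustrative sketch of one PET: the Laplace mechanism from differential
# privacy. The epsilon value and data are assumptions for demonstration.
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one record changes the true count by at most 1,
    so noise drawn from Laplace(1/epsilon) gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 51, 42, 38, 61, 27]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of ages >= 40
```

Smaller epsilon values mean more noise and stronger privacy; tuning that trade-off is the kind of deployment decision organizations face when operationalizing privacy by design.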

As further detailed in the Paper, CIPL’s recommendations for boosting the adoption of PETs for AI are as follows:

Stakeholders should adopt a holistic view of the benefits of PETs in AI. PETs deliver value beyond addressing privacy and security concerns, such as fostering trust and enabling data sharing. It is crucial that stakeholders consider all these advantages when making decisions about their use.

Regulators should issue more clear and practical guidance to reduce regulatory uncertainty in the use of PETs in AI. While regulators increasingly recognize the value of PETs, clearer and more practical guidance is needed to help organizations implement these technologies effectively.

Regulators should adopt a risk-based approach to assess how PETs can meet standards for data anonymization, providing clear guidance to eliminate uncertainty. There is uncertainty around whether various PETs meet legal standards for data anonymization. A risk-based approach to defining anonymization standards could encourage wider adoption of PETs.

Deployers should take steps to provide contextually appropriate transparency to customers and data subjects. Given the complexity of PETs, deployers should ensure customers and data subjects understand how PETs function within AI models…(More)”.

Enabling an Open-Source AI Ecosystem as a Building Block for Public AI


Policy brief by Katarzyna Odrozek, Vidisha Mishra, Anshul Pachouri, and Arnav Nigam: “…informed by insights from 30 open dataset builders convened by Mozilla and EleutherAI and a policy analysis of open-source artificial intelligence (AI) development, outlines four key areas for G7 action: expanding access to open data, supporting sustainable governance, encouraging policy alignment in open-source AI, and building local capacity and identifying use cases. These steps will enhance AI competitiveness, accountability, and innovation, positioning the G7 as a leader in responsible AI development…(More)”.

AI Liability Along the Value Chain


Report by Beatriz Botero Arcila: “…explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm caused along a value chain in which a variety of companies and actors may contribute to the development of any given AI system. This is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, but we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a 3-step analysis to consider how liability should be allocated along the value chain: 1) the choice of liability regime, 2) how liability should be shared amongst actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework…(More)”.