Policy design labs and uncertainty: can they innovate, and retain and circulate learning?


Paper by Jenny Lewis: “Around the world in recent times, numerous policy design labs have been established, related to a rising focus on the need for public sector innovation. These labs are a response to the challenging nature of many societal problems and often have a purpose of navigating uncertainty. They do this by “labbing” ill-structured problems through moving them into an experimental environment, outside of traditional government structures, and using a design-for-policy approach. Labs can, therefore, be considered as a particular type of procedural policy tool, used in attempts to change how policy is formulated and implemented to address uncertainty. This paper considers the role of policy design labs in learning and explores the broader governance context they are embedded within. It examines whether labs have the capacity to innovate and also retain and circulate learning to other policy actors. It argues that labs have considerable potential to change the spaces of policymaking at the micro level and innovate, but for learning to be kept rather than lost, innovation needs to be institutionalized in governing structures at higher levels…(More)”.

Economic Implications of Data Regulation


OECD Report: “Cross-border data flows are the lifeblood of today’s social and economic interactions, but they also raise a range of new challenges, including for privacy and data protection, national security, cybersecurity, digital protectionism and regulatory reach. This has led to a surge in regulation conditioning (or prohibiting) the flow of data, or mandating that data be stored or processed domestically (data localisation). However, the economic implications of these measures are not well understood. This report provides estimates on what is at stake, highlighting that full fragmentation could reduce global GDP by 4.5%. It also underscores the benefits associated with open regimes with safeguards, which could see global GDP increase by 1.7%. In a world where digital fragmentation is growing, global discussions on these issues can help harness the benefits of an open and safeguarded internet…(More)”.

Sandboxes for AI


Report by Datasphere Initiative: “The Sandboxes for AI report explores the role of regulatory sandboxes in the development and governance of artificial intelligence. Originally presented as a working paper at the Global Sandbox Forum Inaugural Meeting in July 2024, the report was further refined through expert consultations and an online roundtable in December 2024. It examines sandboxes that have been announced, are under development, or have been completed, identifying common patterns in their creation, timing, and implementation. By providing insights into why and how regulators and companies should consider AI sandboxes, the report serves as a strategic guide for fostering responsible innovation.

In a rapidly evolving AI landscape, traditional regulatory processes often struggle to keep pace with technological advancements. Sandboxes offer a flexible and iterative approach, allowing policymakers to test and refine AI governance models in a controlled environment. The report identifies 66 AI, data, or technology-related sandboxes globally, with 31 specifically designed for AI innovation across 44 countries. These initiatives focus on areas such as machine learning, data-driven solutions, and AI governance, helping policymakers address emerging challenges while ensuring ethical and transparent AI development…(More)”.

Google-backed public interest AI partnership launches with $400M+ for open ecosystem building


Article by Natasha Lomas: “Make room for yet another partnership on AI. Current AI, a “public interest” initiative focused on fostering and steering development of artificial intelligence in societally beneficial directions, was announced at the French AI Action summit on Monday. It’s kicking off with an initial $400 million in pledges from backers and a plan to pull in $2.5 billion more over the next five years.

Such figures might seem like small beer when it comes to AI investment, with the French president fresh from trumpeting a private support package worth around $112 billion (which itself pales beside U.S. investments of $500 billion aiming to accelerate the tech). But the partnership is not focused on compute, so its backers believe such relatively modest sums can still produce an impact where AI could make a critical difference to advancing the public interest, such as in healthcare and in supporting climate goals.

The initial details are high level. Under the top-line focus on “the enabling environment for public interest AI,” the initiative has a number of stated aims — including pushing to widen access to “high quality” public and private datasets for AI training; support for open source infrastructure and tooling to boost AI transparency and security; and support for developing systems to measure AI’s social and environmental impact. 

Its founder, Martin Tisné, said the goal is to create a financial vehicle “to provide a North Star for public financing of critical efforts,” such as bringing AI to bear on combating cancers or coming up with treatments for long COVID.

“I think what’s happening is you’ve got a data bottleneck coming in artificial intelligence, because we’re running out of road with data on the web, effectively … and here, what we need is to really unlock innovations in how to make data accessible and available,” he told TechCrunch….(More)”

Recommendations for Better Sharing of Climate Data


Creative Commons: “…the culmination of a nine-month research initiative from our Open Climate Data project. These guidelines are a result of collaboration between Creative Commons, government agencies and intergovernmental organizations. They mark a significant milestone in our ongoing effort to enhance the accessibility, sharing, and reuse of open climate data to address the climate crisis. Our goal is to share strategies that align with existing data sharing principles and pave the way for a more interconnected and accessible future for climate data.

Our recommendations offer practical steps and best practices, crafted in collaboration with key stakeholders and organizations dedicated to advancing open practices in climate data. We provide recommendations for 1) legal and licensing terms, 2) using metadata values for attribution and provenance, and 3) management and governance for better sharing.

Opening climate data requires an examination of the public’s legal rights to access and use the climate data, often dictated by copyright and licensing. This legal detail is sometimes missing from climate data sharing and legal interoperability conversations. Our recommendations suggest two options: Option A: CC0 + Attribution Request, in order to maximize reuse by dedicating climate data to the public domain, plus a request for attribution; and Option B: CC BY 4.0, for retaining data ownership and legal enforcement of attribution. We address how to navigate license stacking and attribution stacking for climate data hosts and for users working with multiple climate data sources.

We also propose standardized human- and machine-readable metadata values that enhance transparency, reduce guesswork, and ensure broader accessibility to climate data. We built upon existing model metadata schemas and standards, including those that address license and attribution information. These recommendations address a gap and provide metadata schemas that standardize the inclusion of upfront, clear values related to attribution, licensing and provenance.
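To make the idea of machine-readable licensing and provenance metadata concrete, here is a minimal illustrative sketch in Python. The field names and values are hypothetical and are not drawn from the report’s recommended schema; they simply show how license, attribution, and provenance information can be recorded upfront in a form both humans and machines can read.

```python
# Illustrative only: a minimal machine-readable metadata record for a climate
# dataset, capturing license, attribution, and provenance values. Field names
# are hypothetical and do not reproduce the report's recommended schema.
import json

dataset_metadata = {
    "title": "Example gridded temperature dataset",
    "license": {
        "name": "CC BY 4.0",
        "url": "https://creativecommons.org/licenses/by/4.0/",
    },
    "attribution": {
        "statement": "Example Climate Agency (2024)",
        "required": True,
    },
    "provenance": {
        "publisher": "Example Climate Agency",
        "source_datasets": ["https://example.org/source-dataset"],
        "date_published": "2024-01-01",
    },
}

# Serialize the record so it can travel alongside the data files.
print(json.dumps(dataset_metadata, indent=2))
```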

Lastly, we highlight four key aspects of effective climate data management: designating a dedicated technical managing steward, designating a legal and/or policy steward, encouraging collaborative data sharing, and regularly revisiting and updating data sharing policies in accordance with parallel open data policies and standards…(More)”.

It’s just distributed computing: Rethinking AI governance


Paper by Milton L. Mueller: “What we now lump under the unitary label “artificial intelligence” is not a single technology, but a highly varied set of machine learning applications enabled and supported by a globally ubiquitous system of distributed computing. The paper introduces a four-part conceptual framework for analyzing the structure of that system, which it labels the digital ecosystem. What we now call “AI” is then shown to be a general functionality of distributed computing. “AI” has been present in primitive forms from the origins of digital computing in the 1950s. Three short case studies show that large-scale machine learning applications have been present in the digital ecosystem ever since the rise of the Internet and provoked the same public policy concerns that we now associate with “AI.” The governance problems of “AI” are really caused by the development of this digital ecosystem, not by LLMs or other recent applications of machine learning. The paper then examines five recent proposals to “govern AI” and maps them to the constituent elements of the digital ecosystem model. This mapping shows that real-world attempts to assert governance authority over AI capabilities require systemic control of all four elements of the digital ecosystem: data, computing power, networks and software. “Governing AI,” in other words, means total control of distributed computing. A better alternative is to focus governance and regulation upon specific applications of machine learning. An application-specific approach to governance allows for a more decentralized, freer and more effective method of solving policy conflicts…(More)”

Network architecture for global AI policy


Article by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff: “We see efforts to consolidate international AI governance as premature and ill-suited to respond to the immense, complex, and novel challenges of governing advanced AI, and the current diverse and decentralized efforts as beneficial and the best fit for this complex and rapidly developing technology.

Exploring the vast terra incognita of AI, realizing its opportunities, and managing its risks requires governance that can adapt and respond rapidly to AI risks as they emerge, develop deep understanding of the technology and its implications, and mobilize diverse resources and initiatives to address the growing global demand for access to AI. No one government or body will have the capacity to take on these challenges without building multiple coalitions and working closely with experts and institutions in industry, philanthropy, civil society, and the academy.

A distributed network of networks can more effectively address the challenges and opportunities of AI governance than a centralized system. Like the architecture of the interconnected information technology systems on which AI depends, such a decentralized system can bring to bear redundancy, resiliency, and diversity by channeling the functions of AI governance toward the most timely and effective pathways in iterative and diversified processes, providing agility against setbacks or failures at any single point. These multiple centers of effort can harness the benefit of network effects and parallel processing.

We explore this model of distributed and iterative AI governance below…(More)”.

Citizens’ assemblies in fragile and conflict-affected settings


Article by Nicole Curato, Lucy J Parry, and Melisa Ross: “Citizens’ assemblies have become a popular form of citizen engagement to address complex issues like climate change, electoral reform, and assisted dying. These assemblies bring together randomly selected citizens to learn about an issue, consider diverse perspectives, and develop collective recommendations. Growing evidence highlights their ability to depolarise views, enhance political efficacy, and rebuild trust in institutions. However, the story of citizens’ assemblies is more complicated on closer inspection. This demanding form of political participation is increasingly critiqued for its limited impact, susceptibility to elite influence, and rigid design features unsuitable to local contexts. These challenges are especially pronounced in fragile and conflict-affected settings, where trust is low, expectations for action are high, and local ownership is critical. Well-designed assemblies can foster civic trust and dialogue across difference, but poorly implemented ones risk exacerbating tensions.

This article offers a framework to examine citizens’ assemblies in fragile and conflict-affected settings, focusing on three dimensions: deliberative design, deliberative integrity, and deliberative sustainability. We apply this framework to cases in Bosnia and France to illustrate both the transformative potential and the challenges of citizens’ assemblies when held amidst or in the aftermath of political conflict. This article argues that citizens’ assemblies can be vital mechanisms to manage intractable conflict, provided they are designed with intentionality, administered deliberatively, and oriented towards sustainability…(More)”.

So You’ve Decided To Carry Your Brain Around


Article by Nicholas Clairmont: “If the worry during the Enlightenment, as mathematician Isaac Milner wrote in 1794, was that ‘the great and high’ have ‘forgotten that they have souls,’ then today the worry is that many of us have forgotten that we have bodies.” So writes Christine Rosen, senior fellow at the American Enterprise Institute and senior editor of this journal, in her new book, The Extinction of Experience: Being Human in a Disembodied World.

A sharp articulation of the problem, attributed to Thomas Edison, is that “the chief function of the body is to carry the brain around.” Today, the “brain” can be cast virtually into text or voice communication with just about anyone on Earth, and information and entertainment can be delivered almost immediately to wherever a brain happens to be carried around. But we forget how recently this became possible.

Can it really be less than two decades ago that life started to be revolutionized by the smartphone, the technology that made it possible for people of Edison’s persuasion to render the body seemingly redundant? The iPhone was released in 2007. But even by 2009, according to Pew Research, only a third of American adults “had at some point used the internet on their mobile device.” It wasn’t until 2012 that half did so at least occasionally. And then there is that other technology that took off over the same time period: Facebook and Twitter and Instagram and TikTok and the rest of the social networks that allow us to e-commune and that induce us to see everything we do in light of how it might look to others online.

For such a drastic and recent change, it is one we have largely accepted as just a fact. All the public hand-wringing about it has arguably not made a dent in our actual habits. And maybe that’s because we have underestimated the problem with how it has changed us…(More)”.

Public Policy Evaluation


​Implementation Toolkit by the OECD: “…offers practical guidance for government officials and evaluators seeking to improve their evaluation capacities and systems, by enabling a deeper understanding of their strengths and weaknesses and learning from OECD member country experiences and trends. The toolkit supports the practical implementation of the principles contained in the 2022 OECD Recommendation on Public Policy Evaluation, which is the first international standard aimed at driving the establishment of robust institutions and practices that promote the use of public policy evaluations. Together, the Recommendation and this accompanying toolkit seek to help governments build a culture of continuous learning and evidence-informed policymaking, potentially leading to more impactful policies and greater trust in government action.​..(More)”.