Economic Implications of Data Regulation


OECD Report: “Cross-border data flows are the lifeblood of today’s social and economic interactions, but they also raise a range of new challenges, including for privacy and data protection, national security, cybersecurity, digital protectionism and regulatory reach. This has led to a surge in regulation conditioning (or prohibiting) its flow or mandating that data be stored or processed domestically (data localisation). However, the economic implications of these measures are not well understood. This report provides estimates on what is at stake, highlighting that full fragmentation could reduce global GDP by 4.5%. It also underscores the benefits associated with open regimes with safeguards which could see global GDP increase by 1.7%. In a world where digital fragmentation is growing, global discussions on these issues can help harness the benefits of an open and safeguarded internet…(More)”.

Sandboxes for AI


Report by Datasphere Initiative: “The Sandboxes for AI report explores the role of regulatory sandboxes in the development and governance of artificial intelligence. Originally presented as a working paper at the Global Sandbox Forum Inaugural Meeting in July 2024, the report was further refined through expert consultations and an online roundtable in December 2024. It examines sandboxes that have been announced, are under development, or have been completed, identifying common patterns in their creation, timing, and implementation. By providing insights into why and how regulators and companies should consider AI sandboxes, the report serves as a strategic guide for fostering responsible innovation.

In a rapidly evolving AI landscape, traditional regulatory processes often struggle to keep pace with technological advancements. Sandboxes offer a flexible and iterative approach, allowing policymakers to test and refine AI governance models in a controlled environment. The report identifies 66 AI, data, or technology-related sandboxes globally, with 31 specifically designed for AI innovation across 44 countries. These initiatives focus on areas such as machine learning, data-driven solutions, and AI governance, helping policymakers address emerging challenges while ensuring ethical and transparent AI development…(More)”.

Google-backed public interest AI partnership launches with $400M+ for open ecosystem building


Article by Natasha Lomas: “Make room for yet another partnership on AI. Current AI, a “public interest” initiative focused on fostering and steering development of artificial intelligence in societally beneficial directions, was announced at the French AI Action summit on Monday. It’s kicking off with an initial $400 million in pledges from backers and a plan to pull in $2.5 billion more over the next five years.

Such figures might be small beer when it comes to AI investment, with the French president fresh from trumpeting a private support package worth around $112 billion (which itself pales beside U.S. investments of $500 billion aiming to accelerate the tech). But the partnership is not focused on compute, so its backers believe such relatively modest sums will still be able to produce an impact where AI could make a critical difference to advancing the public interest, such as in healthcare and in supporting climate goals.

The initial details are high level. Under the top-line focus on “the enabling environment for public interest AI,” the initiative has a number of stated aims — including pushing to widen access to “high quality” public and private datasets for AI training; support for open source infrastructure and tooling to boost AI transparency and security; and support for developing systems to measure AI’s social and environmental impact. 

Its founder, Martin Tisné, said the goal is to create a financial vehicle “to provide a North Star for public financing of critical efforts,” such as bringing AI to bear on combating cancers or coming up with treatments for long COVID.

“I think what’s happening is you’ve got a data bottleneck coming in artificial intelligence, because we’re running out of road with data on the web, effectively … and here, what we need is to really unlock innovations in how to make data accessible and available,” he told TechCrunch….(More)”

Trump’s shocking purge of public health data, explained


Article by Dylan Scott: “In the initial days of the Trump administration, officials scoured federal websites for any mention of what they deemed “DEI” keywords — terms as generic as “diverse” and “historically” and even “women.” They soon identified reams of some of the country’s most valuable public health data containing some of the targeted words, including language about LGBTQ+ people, and quickly took down much of it — from surveys on obesity and suicide rates to real-time reports on immediate infectious disease threats like bird flu.

The removal elicited a swift response from public health experts who warned that without this data, the country risked being in the dark about important health trends that shape life-and-death public health decisions made in communities across the country.

Some of this data was restored in a matter of days, but much of it was incomplete. In some cases, the raw data sheets were posted again, but the reference documents that would allow most people to decipher them were not. Meanwhile, health data continues to be taken down: The New York Times reported last week that data from the Centers for Disease Control and Prevention on bird flu transmission between humans and cats had been posted and then promptly removed…

It is difficult to capture the sheer breadth and importance of the public health data that has been affected. Here are a few illustrative examples of reports that have either been tampered with or removed completely, as compiled by KFF.

The Behavioral Risk Factor Surveillance System (BRFSS), which is “one of the most widely used national health surveys and has been ongoing for about 40 years,” per KFF, is an annual survey that contacts 400,000 Americans to ask about everything from their own perception of their general health to exercise, diet, sexual activity, and alcohol and drug use.

That in turn allows experts to track important health trends, like the fluctuations in teen vaping use. One recent study that relied on BRFSS data warned that a recent ban on flavored e-cigarettes (also known as vapes) may be driving more young people to conventional smoking, five years after an earlier Yale study based on the same survey led to the ban being proposed in the first place. The Supreme Court and the Trump administration are currently revisiting the flavored vape ban, and the Yale study was cited in at least one amicus brief for the case.

This survey has also been of particular use in identifying health disparities among LGBTQ+ people, such as higher rates of uninsurance and reported poor health compared to the general population. Those findings have motivated policymakers at the federal, state and local levels to launch new initiatives aimed specifically at that at-risk population.

As of now, most of the BRFSS data has been restored, but the supplemental materials that make it legible to lay people still have not…(More)”.

Digital Data and Advanced AI for Richer Global Intelligence


Report by Danielle Goldfarb: “From collecting millions of online price data to measure inflation, to assessing the economic impact of the COVID-19 pandemic on low-income workers, digital data sets can be used to benefit the public interest. Using these and other examples, this special report explores how digital data sets and advances in artificial intelligence (AI) can provide timely, transparent and detailed insights into global challenges. These experiments illustrate how governments and civil society analysts can reuse digital data to spot emerging problems, analyze specific group impacts, complement traditional metrics or verify data that may be manipulated. AI and data governance should extend beyond addressing harms. International institutions and governments need to actively steward digital data and AI tools to support a step change in our understanding of society’s biggest challenges…(More)”

Recommendations for Better Sharing of Climate Data


Creative Commons: “…the culmination of a nine-month research initiative from our Open Climate Data project. These guidelines are a result of collaboration between Creative Commons, government agencies and intergovernmental organizations. They mark a significant milestone in our ongoing effort to enhance the accessibility, sharing, and reuse of open climate data to address the climate crisis. Our goal is to share strategies that align with existing data sharing principles and pave the way for a more interconnected and accessible future for climate data.

Our recommendations offer practical steps and best practices, crafted in collaboration with key stakeholders and organizations dedicated to advancing open practices in climate data. We provide recommendations for 1) legal and licensing terms, 2) using metadata values for attribution and provenance, and 3) management and governance for better sharing.

Opening climate data requires an examination of the public’s legal rights to access and use the climate data, often dictated by copyright and licensing. This legal detail is sometimes missing from climate data sharing and legal interoperability conversations. Our recommendations suggest two options: Option A: CC0 + Attribution Request, in order to maximize reuse by dedicating climate data to the public domain, plus a request for attribution; and Option B: CC BY 4.0, for retaining data ownership and legal enforcement of attribution. We address how to navigate license stacking and attribution stacking for climate data hosts and for users working with multiple climate data sources.

We also propose standardized human- and machine-readable metadata values that enhance transparency, reduce guesswork, and ensure broader accessibility to climate data. We built upon existing model metadata schemas and standards, including those that address license and attribution information. These recommendations address a gap and provide metadata schema that standardize the inclusion of upfront, clear values related to attribution, licensing and provenance.
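To make the idea of upfront, machine-readable metadata concrete, here is a minimal sketch of the kind of record the recommendations describe. The field names and URLs are illustrative assumptions, not the schema Creative Commons actually proposes; the point is only that license, attribution, and provenance values travel with the data in a form both humans and machines can parse.

```python
import json

def make_metadata_record(title, publisher, license_url, attribution_text, source_url):
    """Build a minimal machine-readable metadata record for a data set.

    Hypothetical field names for illustration only; a real deployment would
    follow an established schema such as the one the report builds on.
    """
    return {
        "title": title,
        "publisher": publisher,
        "license": license_url,           # e.g. a CC0 or CC BY 4.0 deed URL
        "attribution": attribution_text,  # the requested or required credit line
        "provenance": {"source": source_url},
    }

# Example record for a fictional climate data set
record = make_metadata_record(
    title="Daily temperature anomalies",
    publisher="Example Climate Agency",
    license_url="https://creativecommons.org/licenses/by/4.0/",
    attribution_text="Example Climate Agency, Daily temperature anomalies",
    source_url="https://data.example.org/temperature",
)
print(json.dumps(record, indent=2))
```

Publishing values like these upfront is what reduces the guesswork and attribution stacking the recommendations warn about: a user combining several sources can harvest each record's `license` and `attribution` fields mechanically instead of hunting through terms-of-use pages.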

Lastly, we highlight four key aspects of effective climate data management: designating a dedicated technical managing steward, designating a legal and/or policy steward, encouraging collaborative data sharing, and regularly revisiting and updating data sharing policies in accordance with parallel open data policies and standards…(More)”.

Net zero: the role of consumer behaviour


Horizon Scan by the UK Parliament: “According to research from the Centre for Climate Change and Social Transformation, reaching net zero by 2050 will require individual behaviour change, particularly when it comes to aviation, diet and energy use.

The government’s 2023 Powering Up Britain: Net Zero Growth Plan referred to low carbon choices as ‘green choices’, and described them as public and businesses choosing green products, services, and goods. The plan sets out six principles regarding policies to facilitate green choices. Both the Climate Change Committee and the House of Lords Environment and Climate Change Committee have recommended that government strategies should incorporate greater societal and behavioural change policies and guidance.

Contributors to the horizon scan identified managing consumer behaviour and habits to help achieve net zero as a topic of importance for parliament over the next five years. Change in consumer behaviour could result in approximately 60% of required emission reductions to reach net zero.[5] Behaviour change will be needed from the wealthiest in society, who according to Oxfam typically lead higher-carbon lifestyles.

Incorporating behavioural science principles into policy levers is a well-established method of encouraging desired behaviours. Common examples of policies aiming to influence behaviour include subsidies, regulation and information campaigns (see below).

However, others suggest deliberative public engagement approaches, such as the UK Climate Change Assembly,[7] may be needed to determine which pro-environmental policies are acceptable.[8] Repeated public engagement is seen as key to achieve a just transition as different groups will need different support to enable their green choices (PN 706).

Researchers debate the extent to which individuals should be responsible for making green choices as opposed to the regulatory and physical environment facilitating them, or whether markets, businesses and governments should be the main actors responsible for driving action. They highlight the need for different actions based on the context and the different ways individuals act as consumers, citizens, and within organisations and groups. Health, time, comfort and status can strongly influence individual decisions while finance and regulation are typically stronger motivations for organisations (PN 714)…(More)”

It’s just distributed computing: Rethinking AI governance


Paper by Milton L. Mueller: “What we now lump under the unitary label “artificial intelligence” is not a single technology, but a highly varied set of machine learning applications enabled and supported by a globally ubiquitous system of distributed computing. The paper introduces a four-part conceptual framework for analyzing the structure of that system, which it labels the digital ecosystem. What we now call “AI” is then shown to be a general functionality of distributed computing. “AI” has been present in primitive forms from the origins of digital computing in the 1950s. Three short case studies show that large-scale machine learning applications have been present in the digital ecosystem ever since the rise of the Internet, and provoked the same public policy concerns that we now associate with “AI.” The governance problems of “AI” are really caused by the development of this digital ecosystem, not by LLMs or other recent applications of machine learning. The paper then examines five recent proposals to “govern AI” and maps them to the constituent elements of the digital ecosystem model. This mapping shows that real-world attempts to assert governance authority over AI capabilities require systemic control of all four elements of the digital ecosystem: data, computing power, networks and software. “Governing AI,” in other words, means total control of distributed computing. A better alternative is to focus governance and regulation upon specific applications of machine learning. An application-specific approach to governance allows for a more decentralized, freer and more effective method of solving policy conflicts…(More)”

Network architecture for global AI policy


Article by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff: “We see efforts to consolidate international AI governance as premature and ill-suited to respond to the immense, complex, and novel challenges of governing advanced AI, and the current diverse and decentralized efforts as beneficial and the best fit for this complex and rapidly developing technology.

Exploring the vast terra incognita of AI, realizing its opportunities, and managing its risks requires governance that can adapt and respond rapidly to AI risks as they emerge, develop deep understanding of the technology and its implications, and mobilize diverse resources and initiatives to address the growing global demand for access to AI. No one government or body will have the capacity to take on these challenges without building multiple coalitions and working closely with experts and institutions in industry, philanthropy, civil society, and the academy.

A distributed network of networks can more effectively address the challenges and opportunities of AI governance than a centralized system. Like the architecture of the interconnected information technology systems on which AI depends, such a decentralized system can bring to bear redundancy, resiliency, and diversity by channeling the functions of AI governance toward the most timely and effective pathways in iterative and diversified processes, providing agility against setbacks or failures at any single point. These multiple centers of effort can harness the benefit of network effects and parallel processing.

We explore this model of distributed and iterative AI governance below…(More)”.

Empowering open data sharing for social good: a privacy-aware approach


Paper by Tânia Carvalho et al: “The Covid-19 pandemic has affected the world at multiple levels. Data sharing was pivotal for advancing research to understand the underlying causes and implement effective containment strategies. In response, many countries have facilitated access to daily cases to support research initiatives, fostering collaboration between organisations and making such data available to the public through open data platforms. Despite the many advantages of data sharing, one of the major concerns before releasing health data is its impact on individuals’ privacy. Such a sharing process should adhere to state-of-the-art methods in Data Protection by Design and by Default. In this paper, we use a Covid-19 data set from Portugal’s second-largest hospital to show how it is feasible to ensure data privacy while improving the quality and maintaining the utility of the data. Our goal is to demonstrate how knowledge exchange in multidisciplinary teams of healthcare practitioners, data privacy, and data science experts is crucial to co-developing strategies that ensure high utility in de-identified data…(More).”
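The paper's own pipeline is not reproduced here, but the general shape of such de-identification work can be sketched. The following assumes a simple record layout and shows two standard steps: dropping a direct identifier and generalising a quasi-identifier (exact age into ten-year bands), then checking the smallest group size, a k-anonymity-style measure of re-identification risk.

```python
from collections import Counter

def deidentify(records, k=2):
    """Drop direct identifiers, generalise age to bands, and report whether
    every (age_band, region) group contains at least k records.

    Illustrative sketch only; real de-identification involves many more
    attributes, risk measures, and utility checks than shown here.
    """
    released = []
    for r in records:
        decade = (r["age"] // 10) * 10
        released.append({
            "age_band": f"{decade}-{decade + 9}",  # generalised quasi-identifier
            "region": r["region"],                 # quasi-identifier kept coarse
            "outcome": r["outcome"],               # utility-bearing attribute
        })
    groups = Counter((r["age_band"], r["region"]) for r in released)
    return released, min(groups.values()) >= k

# Fictional patient records (no real data)
patients = [
    {"name": "A", "age": 34, "region": "North", "outcome": "recovered"},
    {"name": "B", "age": 37, "region": "North", "outcome": "hospitalised"},
    {"name": "C", "age": 52, "region": "South", "outcome": "recovered"},
    {"name": "D", "age": 58, "region": "South", "outcome": "recovered"},
]
released, satisfies_k = deidentify(patients, k=2)
```

The trade-off the paper emphasises is visible even in this toy: widening the age bands lowers re-identification risk but also lowers the analytic utility of the released data, which is why multidisciplinary teams are needed to choose the balance.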