Data Commons: The Missing Infrastructure for Public Interest Artificial Intelligence


Article by Stefaan Verhulst, Burton Davis and Andrew Schroeder: “Artificial intelligence is celebrated as the defining technology of our time. From ChatGPT to Copilot and beyond, generative AI systems are reshaping how we work, learn, and govern. But behind the headline-grabbing breakthroughs lies a fundamental problem: The data these systems depend on to produce useful results that serve the public interest is increasingly out of reach.

Without access to diverse, high-quality datasets, AI models risk reinforcing bias, deepening inequality, and returning less accurate, less precise results. Yet, access to data remains fragmented, siloed, and increasingly enclosed. What was once open—government records, scientific research, public media—is now locked away by proprietary terms, outdated policies, or simple neglect. We are entering a data winter just as AI’s influence over public life is heating up.

This isn’t just a technical glitch. It’s a structural failure. What we urgently need is new infrastructure: data commons.

A data commons is a shared pool of data resources—responsibly governed, managed using participatory approaches, and made available for reuse in the public interest. Done correctly, commons can ensure that communities and other networks have a say in how their data is used, that public interest organizations can access the data they need, and that the benefits of AI can be applied to meet societal challenges.

Commons offer a practical response to the paradox of data scarcity amid abundance. By pooling datasets across organizations—governments, universities, libraries, and more—they match data supply with real-world demand, making it easier to build AI that responds to public needs.

We’re already seeing early signs of what this future might look like. Projects like Common Corpus, MLCommons, and Harvard’s Institutional Data Initiative show how diverse institutions can collaborate to make data both accessible and accountable. These initiatives emphasize open standards, participatory governance, and responsible reuse. They challenge the idea that data must be either locked up or left unprotected, offering a third way rooted in shared value and public purpose.

But the pace of progress isn’t matching the urgency of the moment. While policymakers debate AI regulation, they often ignore the infrastructure that makes public interest applications possible in the first place. Without better access to high-quality, responsibly governed data, AI for the common good will remain more aspiration than reality.

That’s why we’re launching The New Commons Challenge—a call to action for universities, libraries, civil society, and technologists to build data ecosystems that fuel public-interest AI…(More)”.

Open with care: transparency and data sharing in civically engaged research


Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science, and thus have a range of rights and interests, and that researchers have corresponding obligations to them in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.

Fostering Open Data


Paper by Uri Y. Hacohen: “Data is often heralded as “the world’s most valuable resource,” yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data—defined as data that is available, accessible, and usable—holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.

Enabling an Open-Source AI Ecosystem as a Building Block for Public AI


Policy brief by Katarzyna Odrozek, Vidisha Mishra, Anshul Pachouri, Arnav Nigam: “…informed by insights from 30 open dataset builders convened by Mozilla and EleutherAI and a policy analysis on open-source artificial intelligence (AI) development, outlines four key areas for G7 action: expanding access to open data, supporting sustainable governance, encouraging policy alignment in open-source AI, and building local capacity and identifying use cases. These steps will enhance AI competitiveness, accountability, and innovation, positioning the G7 as a leader in Responsible AI development…(More)”.

Researching data discomfort: The case of Statistics Norway’s quest for billing data


Paper by Lisa Reutter: “National statistics offices are increasingly exploring the possibilities of utilizing new data sources to position themselves in emerging data markets. In 2022, Statistics Norway announced that the national agency will require the biggest grocers in Norway to hand over all collected billing data to produce consumer behavior statistics, which had previously been produced by other sampling methods. An online article discussing this proposal sparked a surprisingly (at least to Statistics Norway) high level of interest among readers, many of whom expressed concerns about this intended change in data practice. This paper focuses on the multifaceted online discussions of the proposal, as these enable us to study citizens’ reactions and feelings towards increased data collection and emerging public-private data flows in a Nordic context. Through an explorative empirical analysis of comment sections, this paper investigates what is discussed by commenters and reflects upon why this case sparked so much interest among citizens in the first place. It therefore contributes to the growing literature on citizens’ voices in data-driven administration and to a wider discussion on how to research public feeling towards datafication. I argue that this presents an interesting case of discomfort voiced by citizens, which demonstrates the contested nature of data practices among citizens, and their ability to regard data as deeply intertwined with power and politics. This case also reminds researchers to pay attention to seemingly benign and small changes in administration beyond artificial intelligence…(More)”

Legal frictions for data openness


Paper by Ramya Chandrasekhar: “investigates legal entanglements of re-use, when data and content from the open web is used to train foundation AI models. Based on conversations with AI researchers and practitioners, an online workshop, and legal analysis of a repository of 41 legal disputes relating to copyright and data protection, this report highlights tensions between legal imaginations of data flows and computational processes involved in training foundation models.

To realise the promise of the open web as open for all, this report argues that efforts oriented solely towards techno-legal openness of training datasets are not enough. Techno-legal openness of datasets facilitates easy re-use of data. But certain well-resourced actors like Big Tech are able to take advantage of data flows on the open web to train proprietary foundation models, while giving little to no value back to either the maintenance of shared informational resources or communities of commoners. At the same time, open licenses no longer accommodate changing community preferences for sharing and re-use of data and content.
In addition to techno-legal openness of training datasets, there is a need for certain limits on the extractive power of well-resourced actors like Big Tech, combined with increased recognition of community data sovereignty. Alternative licensing frameworks, such as the Nwulite Obodo License, Kaitiakitanga Licenses, the Montreal License, the OpenRAIL Licenses, the Open Data Commons License, and the AI2Impact Licenses hold valuable insights in this regard. While these licensing frameworks impose more obligations on re-users and necessitate more collective thinking on interoperability, they are nonetheless necessary for the creation of healthy digital and data commons, to realise the original promise of the open web as open for all…(More)”.

What is a fair exchange for access to public data?


Blog and policy brief by Jeni Tennison: “The most obvious approach to get companies to share value back to the public sector in return for access to data is to charge them. However, there are a number of challenges with a “pay to access” approach: it’s hard to set the right price; it creates access barriers, particularly for cash-poor start-ups; and it creates a public perception that the government is willing to sell its data, and might be tempted to loosen privacy-protecting governance controls in exchange for cash.

Are there other options? The policy brief explores a range of other approaches and assesses these against five goals that a value-sharing framework should ideally meet, to:

  • Encourage use of public data, including by being easy for organisations to understand and administer.
  • Provide a return on investment for the public sector, offsetting at least some of the costs of supporting the NDL infrastructure and minimising administrative costs.
  • Promote equitable innovation and economic growth in the UK, which might mean particularly encouraging smaller, home-grown businesses.
  • Create social value, particularly towards this Government’s other missions, such as achieving Net Zero or unlocking opportunity for all.
  • Build public trust by being easily explainable, avoiding misaligned incentives that encourage the breaking of governance guardrails, and feeling like a fair exchange.

In brief, alternatives to a pay-to-access model that still provide direct financial returns include:

  • Discounts: the public sector could secure discounts on products and services created using public data. However, this could be difficult to administer and enforce.
  • Royalties: taking a percentage of charges for products and services created using public data might be similarly hard to administer and enforce, but would apply to more companies.
  • Equity: taking equity in startups can provide long-term returns and align with public investment goals.
  • Levies: targeted taxes on businesses that use public data can provide predictable revenue and encourage data use.
  • General taxation: general taxation can fund data infrastructure, but it may lack the targeted approach and public visibility of other methods.

It’s also useful to consider non-financial conditions that could be put on organisations accessing public data…(More)”.

A crowd-sourced repository for valuable government data


About: “DataLumos is an ICPSR archive for valuable government data resources. ICPSR has a long commitment to safekeeping and disseminating US government and other social science data. DataLumos accepts deposits of public data resources from the community and recommendations of public data resources that ICPSR itself might add to DataLumos. Please consider making a monetary donation to sustain DataLumos…(More)”.

Elon Musk Also Has a Problem with Wikipedia


Article by Margaret Talbot: “If you have spent time on Wikipedia—and especially if you’ve delved at all into the online encyclopedia’s inner workings—you will know that it is, in almost every aspect, the inverse of Trumpism. That’s not a statement about its politics. The thousands of volunteer editors who write, edit, and fact-check the site manage to adhere remarkably well, over all, to one of its core values: the neutral point of view. Like many of Wikipedia’s principles and procedures, the neutral point of view is the subject of a practical but sophisticated epistemological essay posted on Wikipedia. Among other things, the essay explains, N.P.O.V. means not stating opinions as facts, and also, just as important, not stating facts as opinions. (So, for example, the third sentence of the entry titled “Climate change” states, with no equivocation, that “the current rise in global temperatures is driven by human activities, especially fossil fuel burning since the Industrial Revolution.”)…So maybe it should come as no surprise that Elon Musk has lately taken time from his busy schedule of dismantling the federal government, along with many of its sources of reliable information, to attack Wikipedia. On January 21st, after the site updated its page on Musk to include a reference to the much-debated stiff-armed salute he made at a Trump inaugural event, he posted on X that “since legacy media propaganda is considered a ‘valid’ source by Wikipedia, it naturally simply becomes an extension of legacy media propaganda!” He urged people not to donate to the site: “Defund Wikipedia until balance is restored!” It’s worth taking a look at how the incident is described on Musk’s page, quite far down, and judging for yourself.
What I see is a paragraph that first describes the physical gesture (“Musk thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together”), goes on to say that “some” viewed it as a Nazi or a Roman salute, then quotes Musk disparaging those claims as “politicized,” while noting that he did not explicitly deny them. (There is also now a separate Wikipedia article, “Elon Musk salute controversy,” that goes into detail about the full range of reactions.)

This is not the first time Musk has gone after the site. In December, he posted on X, “Stop donating to Wokepedia.” And that wasn’t even his first bad Wikipedia pun. “I will give them a billion dollars if they change their name to Dickipedia,” he wrote, in an October, 2023, post. It seemed to be an ego thing at first. Musk objected to being described on his page as an “early investor” in Tesla, rather than as a founder, which is how he prefers to be identified, and seemed frustrated that he couldn’t just buy the site. But lately Musk’s beef has merged with a general conviction on the right that Wikipedia—which, like all encyclopedias, is a tertiary source that relies on original reporting and research done by other media and scholars—is biased against conservatives.

The Heritage Foundation, the think tank behind the Project 2025 policy blueprint, has plans to unmask Wikipedia editors who maintain their privacy using pseudonyms (these usernames are displayed in the article history but don’t necessarily make it easy to identify the people behind them) and whose contributions on Israel it deems antisemitic…(More)”.

To Stop Tariffs, Trump Demands Opioid Data That Doesn’t Yet Exist


Article by Josh Katz and Margot Sanger-Katz: “One month ago, President Trump agreed to delay tariffs on Canada and Mexico after the two countries agreed to help stem the flow of fentanyl into the United States. On Tuesday, the Trump administration imposed the tariffs anyway, saying that the countries had failed to do enough — and claiming that tariffs would be lifted only when drug deaths fall.

But the administration has seemingly established an impossible standard. Real-time, national data on fentanyl overdose deaths does not exist, so there is no way to know whether Canada and Mexico were able to “adequately address the situation” since February, as the White House demanded.

“We need to see material reduction in autopsied deaths from opioids,” said Howard Lutnick, the commerce secretary, in an interview on CNBC on Tuesday, indicating that such a decline would be a precondition to lowering tariffs. “But you’ve seen it — it has not been a statistically relevant reduction of deaths in America.”

In a way, Mr. Lutnick is correct that there is no evidence that overdose deaths have fallen in the last month — since there is no such national data yet. His stated goal to measure deaths again in early April will face similar challenges.

But data through September shows that fentanyl deaths had already been falling at a statistically significant rate for months, causing overall drug deaths to drop at a pace unlike any seen in more than 50 years of recorded drug overdose mortality data.

The declines can be seen in provisional data from the Centers for Disease Control and Prevention, which compiles death records from states, which in turn collect data from medical examiners and coroners in cities and towns. Final national data generally takes more than a year to produce. But, as the drug overdose crisis has become a major public health emergency in recent years, the C.D.C. has been publishing monthly data, with some holes, at around a four-month lag…(More)”.