Paper by Uri Y. Hacohen: “Data is often heralded as ‘the world’s most valuable resource,’ yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data, defined as data that is available, accessible, and usable, holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.
Enabling an Open-Source AI Ecosystem as a Building Block for Public AI
Policy brief by Katarzyna Odrozek, Vidisha Mishra, Anshul Pachouri, Arnav Nigam: “…informed by insights from 30 open dataset builders convened by Mozilla and EleutherAI and a policy analysis on open-source artificial intelligence (AI) development, outlines four key areas for G7 action: expand access to open data, support sustainable governance, encourage policy alignment in open-source AI, and support local capacity building and the identification of use cases. These steps will enhance AI competitiveness, accountability, and innovation, positioning the G7 as a leader in Responsible AI development…(More)”.
Researching data discomfort: The case of Statistics Norway’s quest for billing data
Paper by Lisa Reutter: “National statistics offices are increasingly exploring the possibilities of utilizing new data sources to position themselves in emerging data markets. In 2022, Statistics Norway announced that the national agency would require the biggest grocers in Norway to hand over all collected billing data to produce consumer behavior statistics which had previously been produced by other sampling methods. An online article discussing this proposal sparked a surprisingly (at least to Statistics Norway) high level of interest among readers, many of whom expressed concerns about this intended change in data practice. This paper focuses on the multifaceted online discussions of the proposal, as these enable us to study citizens’ reactions and feelings towards increased data collection and emerging public-private data flows in a Nordic context. Through an explorative empirical analysis of comment sections, this paper investigates what is discussed by commenters and reflects upon why this case sparked so much interest among citizens in the first place. It therefore contributes to the growing literature of citizens’ voices in data-driven administration and to a wider discussion on how to research public feeling towards datafication. I argue that this presents an interesting case of discomfort voiced by citizens, which demonstrates the contested nature of data practices among citizens, and their ability to regard data as deeply intertwined with power and politics. This case also reminds researchers to pay attention to seemingly benign and small changes in administration beyond artificial intelligence…(More)”.
Legal frictions for data openness
Paper by Ramya Chandrasekhar: “…investigates legal entanglements of re-use when data and content from the open web are used to train foundation AI models. Based on conversations with AI researchers and practitioners, an online workshop, and legal analysis of a repository of 41 legal disputes relating to copyright and data protection, this report highlights tensions between legal imaginations of data flows and the computational processes involved in training foundation models.
To realise the promise of the open web as open for all, this report argues that efforts oriented solely towards techno-legal openness of training datasets are not enough. Techno-legal openness of datasets facilitates easy re-use of data. But certain well-resourced actors like Big Tech are able to take advantage of data flows on the open web to train proprietary foundation models, while giving little to no value back to either the maintenance of shared informational resources or communities of commoners. At the same time, open licenses no longer accommodate changing community preferences for the sharing and re-use of data and content.
In addition to techno-legal openness of training datasets, there is a need for certain limits on the extractive power of well-resourced actors like Big Tech, combined with increased recognition of community data sovereignty. Alternative licensing frameworks, such as the Nwulite Obodo License, Kaitiakitanga Licenses, the Montreal License, the OpenRAIL Licenses, the Open Data Commons License, and the AI2Impact Licenses, hold valuable insights in this regard. While these licensing frameworks impose more obligations on re-users and necessitate more collective thinking on interoperability, they are nonetheless necessary for the creation of healthy digital and data commons, to realise the original promise of the open web as open for all…(More)”.
What is a fair exchange for access to public data?
Blog and policy brief by Jeni Tennison: “The most obvious approach to get companies to share value back to the public sector in return for access to data is to charge them. However, there are a number of challenges with a “pay to access” approach: it’s hard to set the right price; it creates access barriers, particularly for cash-poor start-ups; and it creates a public perception that the government is willing to sell their data, and might be tempted to loosen privacy-protecting governance controls in exchange for cash.
Are there other options? The policy brief explores a range of other approaches and assesses these against five goals that a value-sharing framework should ideally meet, to:
- Encourage use of public data, including by being easy for organisations to understand and administer.
- Provide a return on investment for the public sector, offsetting at least some of the costs of supporting the NDL infrastructure and minimising administrative costs.
- Promote equitable innovation and economic growth in the UK, which might mean particularly encouraging smaller, home-grown businesses.
- Create social value, particularly towards this Government’s other missions, such as achieving Net Zero or unlocking opportunity for all.
- Build public trust by being easily explainable, avoiding misaligned incentives that encourage the breaking of governance guardrails, and feeling like a fair exchange.
In brief, alternatives to a pay-to-access model that still provide direct financial returns include:
- Discounts: the public sector could secure discounts on products and services created using public data. However, this could be difficult to administer and enforce.
- Royalties: taking a percentage of charges for products and services created using public data might be similarly hard to administer and enforce, but applies to more companies.
- Equity: taking equity in startups can provide long-term returns and align with public investment goals.
- Levies: targeted taxes on businesses that use public data can provide predictable revenue and encourage data use.
- General taxation: general taxation can fund data infrastructure, but it may lack the targeted approach and public visibility of other methods.
It’s also useful to consider non-financial conditions that could be put on organisations accessing public data…(More)”.
A crowd-sourced repository for valuable government data
About: “DataLumos is an ICPSR archive for valuable government data resources. ICPSR has a long-standing commitment to safekeeping and disseminating US government and other social science data. DataLumos accepts deposits of public data resources from the community and recommendations of public data resources that ICPSR itself might add to DataLumos. Please consider making a monetary donation to sustain DataLumos…(More)”.
Elon Musk Also Has a Problem with Wikipedia
Article by Margaret Talbot: “If you have spent time on Wikipedia—and especially if you’ve delved at all into the online encyclopedia’s inner workings—you will know that it is, in almost every aspect, the inverse of Trumpism. That’s not a statement about its politics. The thousands of volunteer editors who write, edit, and fact-check the site manage to adhere remarkably well, over all, to one of its core values: the neutral point of view. Like many of Wikipedia’s principles and procedures, the neutral point of view is the subject of a practical but sophisticated epistemological essay posted on Wikipedia. Among other things, the essay explains, N.P.O.V. means not stating opinions as facts, and also, just as important, not stating facts as opinions. (So, for example, the third sentence of the entry titled “Climate change” states, with no equivocation, that “the current rise in global temperatures is driven by human activities, especially fossil fuel burning since the Industrial Revolution.”)…So maybe it should come as no surprise that Elon Musk has lately taken time from his busy schedule of dismantling the federal government, along with many of its sources of reliable information, to attack Wikipedia. On January 21st, after the site updated its page on Musk to include a reference to the much-debated stiff-armed salute he made at a Trump inaugural event, he posted on X that “since legacy media propaganda is considered a ‘valid’ source by Wikipedia, it naturally simply becomes an extension of legacy media propaganda!” He urged people not to donate to the site: “Defund Wikipedia until balance is restored!” It’s worth taking a look at how the incident is described on Musk’s page, quite far down, and judging for yourself. What I see is a paragraph that first describes the physical gesture (“Musk thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together”), goes on to say that “some” viewed it as a Nazi or a Roman salute, then quotes Musk disparaging those claims as “politicized,” while noting that he did not explicitly deny them. (There is also now a separate Wikipedia article, “Elon Musk salute controversy,” that goes into detail about the full range of reactions.)
This is not the first time Musk has gone after the site. In December, he posted on X, “Stop donating to Wokepedia.” And that wasn’t even his first bad Wikipedia pun. “I will give them a billion dollars if they change their name to Dickipedia,” he wrote, in an October, 2023, post. It seemed to be an ego thing at first. Musk objected to being described on his page as an “early investor” in Tesla, rather than as a founder, which is how he prefers to be identified, and seemed frustrated that he couldn’t just buy the site. But lately Musk’s beef has merged with a general conviction on the right that Wikipedia—which, like all encyclopedias, is a tertiary source that relies on original reporting and research done by other media and scholars—is biased against conservatives.
The Heritage Foundation, the think tank behind the Project 2025 policy blueprint, has plans to unmask Wikipedia editors who maintain their privacy using pseudonyms (these usernames are displayed in the article history but don’t necessarily make it easy to identify the people behind them) and whose contributions on Israel it deems antisemitic…(More)”.
To Stop Tariffs, Trump Demands Opioid Data That Doesn’t Yet Exist
Article by Josh Katz and Margot Sanger-Katz: “One month ago, President Trump agreed to delay tariffs on Canada and Mexico after the two countries agreed to help stem the flow of fentanyl into the United States. On Tuesday, the Trump administration imposed the tariffs anyway, saying that the countries had failed to do enough — and claiming that tariffs would be lifted only when drug deaths fall.
But the administration has seemingly established an impossible standard. Real-time, national data on fentanyl overdose deaths does not exist, so there is no way to know whether Canada and Mexico were able to “adequately address the situation” since February, as the White House demanded.
“We need to see material reduction in autopsied deaths from opioids,” said Howard Lutnick, the commerce secretary, in an interview on CNBC on Tuesday, indicating that such a decline would be a precondition to lowering tariffs. “But you’ve seen it — it has not been a statistically relevant reduction of deaths in America.”
In a way, Mr. Lutnick is correct that there is no evidence that overdose deaths have fallen in the last month — since there is no such national data yet. His stated goal to measure deaths again in early April will face similar challenges.
But data through September shows that fentanyl deaths had already been falling at a statistically significant rate for months, causing overall drug deaths to drop at a pace unlike any seen in more than 50 years of recorded drug overdose mortality data.

The declines can be seen in provisional data from the Centers for Disease Control and Prevention, which compiles death records from states, which in turn collect data from medical examiners and coroners in cities and towns. Final national data generally takes more than a year to produce. But, as the drug overdose crisis has become a major public health emergency in recent years, the C.D.C. has been publishing monthly data, with some holes, at around a four-month lag…(More)”.
Open Data Under Attack: How to Find Data and Why It Is More Important Than Ever
Article by Jessica Hilburn: “This land was made for you and me, and so was the data collected with our taxpayer dollars. Open data is data that is accessible, shareable, and able to be used by anyone. While any person, company, or organization can create and publish open data, the federal and state governments are by far the largest providers of open data.
President Barack Obama codified the importance of government-created open data in his May 9, 2013, executive order as a part of the Open Government Initiative. This initiative was meant to “ensure the public trust and establish a system of transparency, public participation, and collaboration” in furtherance of strengthening democracy and increasing efficiency. The initiative also launched Project Open Data (since replaced by the Resources.data.gov platform), which documented best practices and offered tools so government agencies in every sector could open their data and contribute to the collective public good. As has been made readily apparent, the era of public good through open data is now under attack.
Immediately after his inauguration, President Donald Trump signed a slew of executive orders, many of which targeted diversity, equity, and inclusion (DEI) for removal in federal government operations. Unsurprisingly, a large number of federal datasets include information dealing with diverse populations, equitable services, and inclusion of marginalized groups. Other datasets deal with information on topics targeted by those with nefarious agendas—vaccination rates, HIV/AIDS, and global warming, just to name a few. In the wake of these executive orders, datasets and website pages with blacklisted topics, tags, or keywords suddenly disappeared—more than 8,000 of them. In addition, President Trump fired the National Archivist, and top National Archives and Records Administration officials are being ousted, putting the future of our collective history at enormous risk.
While it is common practice to archive websites and information in the transition between administrations, it is unprecedented for the incoming administration to cull data altogether. In response, unaffiliated organizations are ramping up efforts to separately archive data and information for future preservation and access. Web scrapers are being used to grab as much data as possible, but since this method is automated, data requiring a login or bot challenge (like a captcha) is left behind. The future information gap that researchers will be left to grapple with could be catastrophic for progress in crucial areas, including weather, natural disasters, and public health. Though there are efforts to put out the fire, such as the federal order to restore certain resources, the people’s library is burning. The losses will be permanently felt…Data is a weapon, whether we like it or not. Free and open access to information—about democracy, history, our communities, and even ourselves—is the foundation of library service. It is time for anyone who continues to claim that libraries are not political to wake up before it is too late. Are libraries still not political when the Pentagon barred library access for tens of thousands of American children attending Pentagon schools on military bases while it examined and removed supposed “radical indoctrination” books? Are libraries still not political when more than 1,000 unique titles are being targeted for censorship annually, and soft censorship through preemptive restriction to avoid controversy is surely occurring and impossible to track? It is time for librarians and library workers to embrace being political.
In a country where the federal government now denies that certain people even exist, claims that children are being indoctrinated because they are being taught the good and bad of our nation’s history, and rescinds support for the arts, humanities, museums, and libraries, there is no such thing as neutrality. When compassion and inclusion are labeled the enemy and the diversity created by our great American experiment is lambasted as a social ill, claiming that libraries are neutral or apolitical is not only incorrect, it’s complicit. To update the quote, information is the weapon in the war of ideas. Librarians are the stewards of information. We don’t want to be the Americans who protested in 1933 at the first Nazi book burnings and then, despite seeing the early warning signs of catastrophe, retreated into the isolation of their own concerns. The people’s library is on fire. We must react before all that is left of our profession is ash…(More)”.
Data Sovereignty and Open Sharing: Reconceiving Benefit-Sharing and Governance of Digital Sequence Information
Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI, and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, the tension between open science principles and data control rights is explained. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.