To Stop Tariffs, Trump Demands Opioid Data That Doesn’t Yet Exist


Article by Josh Katz and Margot Sanger-Katz: “One month ago, President Trump agreed to delay tariffs on Canada and Mexico after the two countries agreed to help stem the flow of fentanyl into the United States. On Tuesday, the Trump administration imposed the tariffs anyway, saying that the countries had failed to do enough — and claiming that tariffs would be lifted only when drug deaths fall.

But the administration has seemingly established an impossible standard. Real-time, national data on fentanyl overdose deaths does not exist, so there is no way to know whether Canada and Mexico were able to “adequately address the situation” since February, as the White House demanded.

“We need to see material reduction in autopsied deaths from opioids,” said Howard Lutnick, the commerce secretary, in an interview on CNBC on Tuesday, indicating that such a decline would be a precondition to lowering tariffs. “But you’ve seen it — it has not been a statistically relevant reduction of deaths in America.”

In a way, Mr. Lutnick is correct that there is no evidence that overdose deaths have fallen in the last month — since there is no such national data yet. His stated goal to measure deaths again in early April will face similar challenges.

But data through September shows that fentanyl deaths had already been falling at a statistically significant rate for months, causing overall drug deaths to drop at a pace unlike any seen in more than 50 years of recorded drug overdose mortality data.

The declines can be seen in provisional data from the Centers for Disease Control and Prevention, which compiles death records from the states; the states, in turn, collect data from medical examiners and coroners in cities and towns. Final national data generally takes more than a year to produce. But, as the drug overdose crisis has become a major public health emergency in recent years, the C.D.C. has been publishing monthly data, with some holes, at around a four-month lag…(More)”.

Open Data Under Attack: How to Find Data and Why It Is More Important Than Ever


Article by Jessica Hilburn: “This land was made for you and me, and so was the data collected with our taxpayer dollars. Open data is data that is accessible, shareable, and able to be used by anyone. While any person, company, or organization can create and publish open data, the federal and state governments are by far the largest providers of open data.

President Barack Obama codified the importance of government-created open data in his May 9, 2013, executive order as a part of the Open Government Initiative. This initiative was meant to “ensure the public trust and establish a system of transparency, public participation, and collaboration” in furtherance of strengthening democracy and increasing efficiency. The initiative also launched Project Open Data (since replaced by the Resources.data.gov platform), which documented best practices and offered tools so government agencies in every sector could open their data and contribute to the collective public good. As has been made readily apparent, the era of public good through open data is now under attack.

Immediately after his inauguration, President Donald Trump signed a slew of executive orders, many of which targeted diversity, equity, and inclusion (DEI) for removal in federal government operations. Unsurprisingly, a large number of federal datasets include information dealing with diverse populations, equitable services, and inclusion of marginalized groups. Other datasets deal with information on topics targeted by those with nefarious agendas—vaccination rates, HIV/AIDS, and global warming, just to name a few. In the wake of these executive orders, datasets and website pages with blacklisted topics, tags, or keywords suddenly disappeared—more than 8,000 of them. In addition, President Trump fired the National Archivist, and top National Archives and Records Administration officials are being ousted, putting the future of our collective history at enormous risk.

While it is common practice to archive websites and information in the transition between administrations, it is unprecedented for the incoming administration to cull data altogether. In response, unaffiliated organizations are ramping up efforts to separately archive data and information for future preservation and access. Web scrapers are being used to grab as much data as possible, but since this method is automated, data requiring a login or a bot challenge (like a CAPTCHA) is left behind. The future information gap that researchers will be left to grapple with could be catastrophic for progress in crucial areas, including weather, natural disasters, and public health. Though there are efforts to put out the fire, such as the federal order to restore certain resources, the people’s library is burning. The losses will be permanently felt…

Data is a weapon, whether we like it or not. Free and open access to information—about democracy, history, our communities, and even ourselves—is the foundation of library service. It is time for anyone who continues to claim that libraries are not political to wake up before it is too late. Are libraries still not political when the Pentagon barred library access for tens of thousands of American children attending Pentagon schools on military bases while it examined and removed supposed “radical indoctrination” books? Are libraries still not political when more than 1,000 unique titles are targeted for censorship annually, and soft censorship through preemptive restriction to avoid controversy is surely occurring yet impossible to track? It is time for librarians and library workers to embrace being political.

In a country where the federal government now denies that certain people even exist, claims that children are being indoctrinated because they are being taught the good and bad of our nation’s history, and rescinds support for the arts, humanities, museums, and libraries, there is no such thing as neutrality. When compassion and inclusion are labeled the enemy and the diversity created by our great American experiment is lambasted as a social ill, claiming that libraries are neutral or apolitical is not only incorrect, it’s complicit. To update the quote, information is the weapon in the war of ideas. Librarians are the stewards of information. We don’t want to be the Americans who protested in 1933 at the first Nazi book burnings and then, despite seeing the early warning signs of catastrophe, retreated into the isolation of their own concerns. The people’s library is on fire. We must react before all that is left of our profession is ash…(More)”.

Data Sovereignty and Open Sharing: Reconceiving Benefit-Sharing and Governance of Digital Sequence Information


Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, the tension between open science principles and data control rights is explained. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.

AI crawler wars threaten to make the web more closed for everyone


Article by Shayne Longpre: “We often take the internet for granted. It’s an ocean of information at our fingertips—and it simply works. But this system relies on swarms of “crawlers”—bots that roam the web, visit millions of websites every day, and report what they see. This is how Google powers its search engine, how Amazon sets competitive prices, and how Kayak aggregates travel listings. Beyond the world of commerce, crawlers are essential for monitoring web security, enabling accessibility tools, and preserving historical archives. Academics, journalists, and civil society groups also rely on them to conduct crucial investigative research.

Crawlers are endemic. Now representing half of all internet traffic, they will soon outpace human traffic. This unseen subway of the web ferries information from site to site, day and night. And as of late, they serve one more purpose: Companies such as OpenAI use web-crawled data to train their artificial intelligence systems, like ChatGPT. 

Understandably, websites are now fighting back for fear that this invasive species—AI crawlers—will help displace them. But there’s a problem: this pushback also threatens the transparency and open borders of the web that allow non-AI applications to flourish. Unless we are thoughtful about how we fix this, the web will increasingly be fortified with logins, paywalls, and access tolls that inhibit not just AI but the biodiversity of real users and useful crawlers…(More)”.
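Much of that pushback runs through a fragile convention: the robots.txt file, which well-behaved crawlers consult before fetching pages. As a rough illustration only (the domain is a placeholder; GPTBot is the user agent OpenAI publishes for its training-data crawler), a minimal sketch of that check using Python's standard library:

```python
from urllib import robotparser

# Placeholder site; GPTBot is OpenAI's published crawler user agent,
# which a growing number of sites now disallow in robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the file; a missing file means "allow all"

for agent in ("Googlebot", "GPTBot"):
    ok = rp.can_fetch(agent, "https://example.com/articles/")
    print(f"{agent}: {'allowed' if ok else 'blocked'}")
```

Nothing forces a crawler to honor the answer; robots.txt is a request, not an access control, which is part of why sites escalate to the logins, paywalls, and access tolls the article warns about.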

Recommendations for Better Sharing of Climate Data


Creative Commons: “…the culmination of a nine-month research initiative from our Open Climate Data project. These guidelines are a result of collaboration between Creative Commons, government agencies and intergovernmental organizations. They mark a significant milestone in our ongoing effort to enhance the accessibility, sharing, and reuse of open climate data to address the climate crisis. Our goal is to share strategies that align with existing data sharing principles and pave the way for a more interconnected and accessible future for climate data.

Our recommendations offer practical steps and best practices, crafted in collaboration with key stakeholders and organizations dedicated to advancing open practices in climate data. We provide recommendations for 1) legal and licensing terms, 2) using metadata values for attribution and provenance, and 3) management and governance for better sharing.

Opening climate data requires an examination of the public’s legal rights to access and use the climate data, often dictated by copyright and licensing. This legal detail is sometimes missing from climate data sharing and legal interoperability conversations. Our recommendations suggest two options: Option A, CC0 + Attribution Request, which maximizes reuse by dedicating climate data to the public domain while requesting attribution; and Option B, CC BY 4.0, which retains data ownership and allows legal enforcement of attribution. We address how to navigate license stacking and attribution stacking for climate data hosts and for users working with multiple climate data sources.

We also propose standardized human- and machine-readable metadata values that enhance transparency, reduce guesswork, and ensure broader accessibility to climate data. We built upon existing model metadata schemas and standards, including those that address license and attribution information. These recommendations address a gap by providing a metadata schema that standardizes the inclusion of upfront, clear values for attribution, licensing, and provenance.
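As a rough illustration of what such upfront values might look like, here is a hypothetical record sketched in Python; the field names are loosely modeled on existing vocabularies like DCAT and schema.org, and are ours rather than the report's:

```python
import json

# A hypothetical dataset record; every value below is invented for
# illustration, including the agency name and URLs.
dataset_metadata = {
    "name": "Gridded Surface Temperature Anomalies",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "attributionText": "Example Climate Agency (2024)",
    "attributionURL": "https://climate.example.org/datasets/42",
    "provenance": {
        "source": "https://climate.example.org/raw/station-records",
        "derivedFrom": ["https://upstream.example.net/observations"],
    },
}

# Publishing these values as JSON alongside the data files lets both
# humans and machines read the terms without guesswork.
print(json.dumps(dataset_metadata, indent=2))
```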

Lastly, we highlight four key aspects of effective climate data management: designating a dedicated technical managing steward, designating a legal and/or policy steward, encouraging collaborative data sharing, and regularly revisiting and updating data sharing policies in accordance with parallel open data policies and standards…(More)”.

Empowering open data sharing for social good: a privacy-aware approach


Paper by Tânia Carvalho et al: “The Covid-19 pandemic has affected the world at multiple levels. Data sharing was pivotal for advancing research to understand the underlying causes and implement effective containment strategies. In response, many countries have facilitated access to daily case data to support research initiatives, fostering collaboration between organisations and making such data available to the public through open data platforms. Despite the several advantages of data sharing, one of the major concerns before releasing health data is its impact on individuals’ privacy. Such a sharing process should adhere to state-of-the-art methods in Data Protection by Design and by Default. In this paper, we use a Covid-19 data set from Portugal’s second-largest hospital to show how it is feasible to ensure data privacy while improving the quality and maintaining the utility of the data. Our goal is to demonstrate how knowledge exchange in multidisciplinary teams of healthcare practitioners and data privacy and data science experts is crucial to co-developing strategies that ensure high utility in de-identified data…(More).”
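The paper's actual pipeline is not reproduced here, but a minimal sketch of the general technique it draws on, generalizing quasi-identifiers and then checking k-anonymity, might look like the following; the records and the choice of quasi-identifiers are invented for illustration:

```python
import pandas as pd

# Toy records standing in for hospital data; the quasi-identifiers and
# generalization choices here are assumptions, not the paper's pipeline.
df = pd.DataFrame({
    "age": [34, 37, 52, 55, 58, 61],
    "postcode": ["4050-313", "4050-421", "4200-072",
                 "4200-319", "4200-450", "4250-101"],
    "test_result": ["pos", "neg", "pos", "pos", "neg", "neg"],
})

# Generalize quasi-identifiers: bucket exact ages, truncate postcodes.
df["age"] = pd.cut(df["age"], bins=[0, 40, 60, 120],
                   labels=["<40", "40-60", ">60"])
df["postcode"] = df["postcode"].str[:4]

# k-anonymity check: every quasi-identifier combination must cover
# at least k records, or it needs further generalization/suppression.
k = 2
sizes = df.groupby(["age", "postcode"], observed=True).size()
print(sizes)
print(f"k-anonymous (k={k}):", bool((sizes >= k).all()))
```

In this toy run the lone record in the oldest bucket fails the check, which is exactly the kind of case that forces further generalization or suppression at a cost to utility, the trade-off the paper's multidisciplinary teams negotiate.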

Why Digital Public Goods, including AI, Should Depend on Open Data


Article by Cable Green: “Acknowledging that some data should not be shared (for moral, ethical and/or privacy reasons) and some cannot be shared (for legal or other reasons), Creative Commons (CC) thinks there is value in incentivizing the creation, sharing, and use of open data to advance knowledge production. As open communities continue to imagine, design, and build digital public goods and public infrastructure services for education, science, and culture, these goods and services – whenever possible and appropriate – should produce, share, and/or build upon open data.

Open Data and Digital Public Goods (DPGs)

CC is a member of the Digital Public Goods Alliance (DPGA) and CC’s legal tools have been recognized as digital public goods (DPGs). DPGs are “open-source software, open standards, open data, open AI systems, and open content collections that adhere to privacy and other applicable best practices, do no harm, and are of high relevance for attainment of the United Nations 2030 Sustainable Development Goals (SDGs).” If we want to solve the world’s greatest challenges, governments and other funders will need to invest in, develop, openly license, share, and use DPGs.

Open data is important to DPGs because data is a key driver of economic vitality with demonstrated potential to serve the public good. In the public sector, data informs policymaking and public service delivery by helping to channel scarce resources to those most in need, providing the means to hold governments accountable, and fostering social innovation. In short, data has the potential to improve people’s lives. When data is closed or otherwise unavailable, the public does not accrue these benefits.

CC was recently part of a DPGA sub-committee working to preserve the integrity of open data as part of the DPG Standard. This important update to the DPG Standard was introduced to ensure only open datasets and content collections with open licenses are eligible for recognition as DPGs. This new requirement means open datasets and content collections must meet the following criteria to be recognized as a digital public good:

  1. Comprehensive Open Licensing: The entire dataset/content collection must be under an acceptable open license. Mixed-licensed collections will no longer be accepted.
  2. Accessible and Discoverable: All datasets and content collection DPGs must be openly licensed and easily accessible from a distinct, single location, such as a unique URL.
  3. Permitted Access Restrictions: Certain access restrictions – such as logins, registrations, API keys, and throttling – are permitted as long as they do not discriminate against users or restrict usage based on geography or any other factors…(More)”.
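Read together, the three criteria lend themselves to a simple automated pre-check. The sketch below is hypothetical: the accepted-license list is illustrative, and the DPGA's actual review process is more thorough than any such script.

```python
# Hypothetical pre-check against the three criteria above; the license
# list is illustrative, and the DPGA maintains the authoritative one.
ACCEPTED_OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "ODbL-1.0"}

def dpg_eligible(dataset: dict) -> bool:
    # 1. Comprehensive open licensing: one acceptable license, no mixing.
    licenses = set(dataset["licenses"])
    if len(licenses) != 1 or not licenses <= ACCEPTED_OPEN_LICENSES:
        return False
    # 2. Accessible and discoverable: a single canonical URL.
    if not dataset.get("canonical_url"):
        return False
    # 3. Permitted access restrictions: logins and API keys are fine,
    #    but geography-based restrictions are disqualifying.
    return not dataset.get("geo_restricted", False)

print(dpg_eligible({
    "licenses": ["CC-BY-4.0"],
    "canonical_url": "https://example.org/dataset",
    "geo_restricted": False,
}))  # -> True
```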

Reimagining data for Open Source AI: A call to action


Report by Open Source Initiative: “Artificial intelligence (AI) is changing the world at a remarkable pace, with Open Source AI playing a pivotal role in shaping its trajectory. Yet, as AI advances, a fundamental challenge emerges: How do we create a data ecosystem that is not only robust but also equitable and sustainable?

The Open Source Initiative (OSI) and Open Future have taken a significant step toward addressing this challenge by releasing a white paper: “Data Governance in Open Source AI: Enabling Responsible and Systematic Access.” This document is the culmination of a global co-design process, enriched by insights from a vibrant two-day workshop held in Paris in October 2024….

The white paper offers a blueprint for a data ecosystem rooted in fairness, inclusivity and sustainability. It calls for two transformative shifts:

  1. From Open Data to Data Commons: Moving beyond the notion of unrestricted data to a model that balances openness with the rights and needs of all stakeholders.
  2. Broadening the stakeholder universe: Creating collaborative frameworks that unite communities, stewards and creators in equitable data-sharing practices.

To bring these shifts to life, the white paper delves into six critical focus areas:

  • Data preparation
  • Preference signaling and licensing
  • Data stewards and custodians
  • Environmental sustainability
  • Reciprocity and compensation
  • Policy interventions…(More)”

Towards Best Practices for Open Datasets for LLM Training


Paper by Stefan Baack et al: “Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in countries like the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend in limiting data information causes harm by hindering transparency, accountability, and innovation in the broader ecosystem, denying researchers, auditors, and impacted individuals access to the information needed to understand AI models.

While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness…(More)”.
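One concrete curation step implied by the paper's framing is filtering a corpus on license metadata before training. A toy sketch with invented records follows; real collections rarely carry metadata this clean, which is precisely the challenge the authors describe:

```python
# Toy corpus where each document carries license metadata; the records,
# field names, and license identifiers are invented for illustration.
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "public-domain"}

corpus = [
    {"id": "doc1", "license": "CC0-1.0", "text": "..."},
    {"id": "doc2", "license": "all-rights-reserved", "text": "..."},
    {"id": "doc3", "license": None, "text": "..."},  # unreliable metadata
]

# Keep only documents with a known open license; drop unknowns rather
# than guessing, trading corpus size for provenance confidence.
train_set = [doc for doc in corpus if doc["license"] in OPEN_LICENSES]
print([doc["id"] for doc in train_set])  # -> ['doc1']
```

The conservative drop-unknowns rule is the crux: with incomplete metadata, most of a collection may be excluded, which is why the paper calls for investment in metadata standards and digitization before openly trained models can reach meaningful scale.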

Generative Artificial Intelligence and Open Data: Guidelines and Best Practices


US Department of Commerce: “…This guidance provides actionable guidelines and best practices for publishing open data optimized for generative AI systems. While it is designed for use by the Department of Commerce and its bureaus, this guidance has been made publicly available to benefit open data publishers globally…(More)”. See also: A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI