A New Social Contract for AI? Comparing CC Signals and the Social License for Data Reuse


Article by Stefaan Verhulst: “Last week, Creative Commons — the global nonprofit best known for its open copyright licenses — released “CC Signals: A New Social Contract for the Age of AI.” This framework seeks to offer creators a means to signal their preferences for how their works are used in machine learning, including training Artificial Intelligence systems. It marks an important step toward integrating re-use preferences and shared benefits directly into the AI development lifecycle….

From a responsible AI perspective, the CC Signals framework is an important development. It demonstrates how soft governance mechanisms — declarations, usage expressions, and social signaling — can supplement or even fill gaps left by inconsistent global copyright regimes in the context of AI. At the same time, this initiative provides an interesting point of comparison with our ongoing work to develop a Social License for Data Reuse. A social license for data reuse is a participatory governance framework that allows communities to collectively define, signal, and enforce the conditions under which data about them can be reused — including for training AI. Unlike traditional consent-based mechanisms, which focus on individual permissions at the point of collection, a social license introduces a community-centered, continuous process of engagement — ensuring that data practices align with shared values, ethical norms, and contextual realities. It provides a complementary layer to legal compliance, emphasizing trust, legitimacy, and accountability in data governance.

While both frameworks are designed to signal preferences and expectations for data or content reuse, they differ meaningfully in scope, method, and theory of change.

Below, we offer a comparative analysis of the two frameworks — highlighting how each approaches the challenge of embedding legitimacy and trust into AI and data ecosystems…(More)”.

Beyond AI and Copyright


White Paper by Paul Keller: “…argues for interventions to ensure the sustainability of the information ecosystem in the age of generative AI. Authored by Paul Keller, the paper builds on Open Future’s ongoing work on Public AI and on AI and creative labour, and proposes measures aimed at ensuring a healthy and equitable digital knowledge commons.

Rather than focusing on the rights of individual creators or the infringement debates that dominate current policy discourse, the paper frames generative AI as a new cultural and social technology—one that is rapidly reshaping how societies access, produce, and value information. It identifies two major structural risks: the growing concentration of control over knowledge, and the hollowing out of the institutions and economies that sustain human information production.

To counter these risks, the paper calls for the development of public AI infrastructures and a redistributive mechanism based on a levy on commercial AI systems trained on publicly available information. The proceeds would support not only creators and rightholders, but also public service media, cultural heritage institutions, open content platforms, and the development of Public AI systems…(More)”.

5 Ways Cooperatives Can Shape the Future of AI


Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.

This domination brings problems: privacy violations and cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.

Yet, as companies consider using this technology, it can seem that there are few other options and that they are locked into these compromises.

A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors ranging from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.

A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.

There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap that DeepSeek triggered shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.

Trends in AI Supercomputers


Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
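
The paper’s 2030 projection follows from compounding the reported doubling times (performance every nine months, cost and power every year) over roughly five years. The sketch below is ours rather than the authors’: it reproduces that arithmetic from the Colossus baseline, where the aggregate performance of about 2×10²⁰ 16-bit FLOP/s is an assumption inferred from the chip count, not a figure stated in the excerpt.

```python
# Illustrative extrapolation from the reported doubling times (not code from the paper).
# Baseline: xAI's Colossus, March 2025. The ~2e20 FLOP/s aggregate performance is an
# assumption (200,000 chips at roughly 1e15 16-bit FLOP/s each); the $7B cost and
# 300 MW power figures are quoted above.

def extrapolate(value: float, doubling_months: float, horizon_months: float) -> float:
    """Project a quantity forward assuming a fixed doubling time."""
    return value * 2 ** (horizon_months / doubling_months)

horizon = 12 * 5  # March 2025 to roughly 2030, in months

performance = extrapolate(2e20, 9, horizon)   # 16-bit FLOP/s, doubling every 9 months
cost = extrapolate(7e9, 12, horizon)          # hardware cost in USD, doubling yearly
power_mw = extrapolate(300, 12, horizon)      # power in MW, doubling yearly

print(f"~{performance:.1e} FLOP/s, ~${cost / 1e9:.0f}B hardware, ~{power_mw / 1e3:.1f} GW")
# Prints roughly 2.0e+22 FLOP/s, ~$224B, ~9.6 GW, consistent with the paper's 2030 figures.
```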

AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future


Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.

In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware and within limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of such applications and products, tailored and designed for specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.

What Counts as Discovery?


Essay by Nisheeth Vishnoi: “Long before there were “scientists,” there was science. Across every continent, humans developed knowledge systems grounded in experience, abstraction, and prediction—driven not merely by curiosity, but by a desire to transform patterns into principles, and observation into discovery. Farmers tracked solstices, sailors read stars, artisans perfected metallurgy, and physicians documented plant remedies. They built calendars, mapped cycles, and tested interventions—turning empirical insight into reliable knowledge.

From the oral sciences of Africa, which encoded botanical, medical, and ecological knowledge across generations, to the astronomical observatories of Mesoamerica, where priests tracked solstices, eclipses, and planetary motion with remarkable accuracy, early human civilizations sought more than survival. In Babylon, scribes logged celestial movements and built predictive models; in India, the architects of Vedic altars designed ritual structures whose proportions mirrored cosmic rhythms, embedding arithmetic and geometry into sacred form. Across these diverse cultures, discovery was not a separate enterprise—it was entwined with ritual, survival, and meaning. Yet the tools were recognizably scientific: systematic observation, abstraction, and the search for hidden order.

This was science before the name. And it reminds us that discovery has never belonged to any one civilization or era. Discovery is not intelligence itself, but one of its sharpest expressions—an act that turns perception into principle through a conceptual leap. While intelligence is broader and encompasses adaptation, inference, and learning in various forms (biological, cultural, and even mechanical), discovery marks those moments when something new is framed, not just found. 

Life forms learn, adapt, and even innovate. But it is humans who turned observation into explanation, explanation into abstraction, and abstraction into method. The rise of formal science brought mathematical structure and experiment, but it did not invent the impulse to understand—it gave it form, language, and reach.

And today, we stand at the edge of something unfamiliar: the possibility of lifeless discoveries. Artificial Intelligence machines, built without awareness or curiosity, are beginning to surface patterns and propose explanations, sometimes without our full understanding. If science has long been a dialogue between the world and living minds, we are now entering a strange new phase: abstraction without awareness, discovery without a discoverer.

AI systems now assist in everything from understanding black holes to predicting protein folds and even symbolic equation discovery. They parse vast datasets, detect regularities, and generate increasingly sophisticated outputs. Some claim they’re not just accelerating research, but beginning to reshape science itself—perhaps even to discover.

But what truly counts as a scientific discovery? This essay examines that question…(More)”

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums


Article by Emanuel Maiberg: “The report, titled “Are AI Bots Knocking Cultural Heritage Offline?”, was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.

Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase. 

“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”…(More)”

AI and Social Media: A Political Economy Perspective


Paper by Daron Acemoglu, Asuman Ozdaglar & James Siderius: “We consider the political consequences of the use of artificial intelligence (AI) by online platforms engaged in social media content dissemination, entertainment, or electronic commerce. We identify two distinct but complementary mechanisms, the social media channel and the digital ads channel, which together and separately contribute to the polarization of voters and consequently the polarization of parties. First, AI-driven recommendations aimed at maximizing user engagement on platforms create echo chambers (or “filter bubbles”) that increase the likelihood that individuals are not confronted with counter-attitudinal content. Consequently, social media engagement makes voters more polarized, and then parties respond by becoming more polarized themselves. Second, we show that party competition can encourage platforms to rely more on targeted digital ads for monetization (as opposed to a subscription-based business model), and such ads in turn make the electorate more polarized, further contributing to the polarization of parties. These effects do not arise when one party is dominant, in which case the profit-maximizing business model of the platform is subscription-based. We discuss the impact regulations can have on the polarizing effects of AI-powered online platforms…(More)”.

Introducing CC Signals: A New Social Contract for the Age of AI


Creative Commons: “Creative Commons (CC) today announces the public kickoff of the CC signals project, a new preference signals framework designed to increase reciprocity and sustain a creative commons in the age of AI. The development of CC signals represents a major step forward in building a more equitable, sustainable AI ecosystem rooted in shared benefits. This step is the culmination of years of consultation and analysis. As we enter this new phase of work, we are actively seeking input from the public. 

As artificial intelligence (AI) transforms how knowledge is created, shared, and reused, we are at a fork in the road that will define the future of access to knowledge and shared creativity. One path leads to data extraction and the erosion of openness; the other leads to a walled-off internet guarded by paywalls. CC signals offer another way, grounded in the nuanced values of the commons expressed by the collective.

Based on the same principles that gave rise to the CC licenses and tens of billions of works openly licensed online, CC signals will allow dataset holders to signal their preferences for how their content can be reused by machines based on a set of limited but meaningful options shaped in the public interest. They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models.

“CC signals are designed to sustain the commons in the age of AI,” said Anna Tumadóttir, CEO, Creative Commons. “Just as the CC licenses helped build the open web, we believe CC signals will help shape an open AI ecosystem grounded in reciprocity.”

CC signals recognize that change requires systems-level coordination. They are tools that will be built for machine and human readability, and are flexible across legal, technical, and normative contexts. However, at their core, CC signals are anchored in mobilizing the power of the collective. While CC signals may range in enforceability, legally binding in some cases and normative in others, their application will always carry ethical weight that says we give, we take, we give again, and we are all in this together.

Now Ready for Feedback 

More information about CC signals and early design decisions is available on the CC website. We are committed to developing CC signals transparently and alongside our partners and community. We are actively seeking public feedback and input over the next few months as we work toward an alpha launch in November 2025…(More)”

Robodebt: When automation fails


Article by Don Moynihan: “From 2016 to 2020, the Australian government operated an automated debt assessment and recovery system, known as “Robodebt,” to recover fraudulent or overpaid welfare benefits. The goal was to save $4.77 billion through debt recovery and reduced public service costs. However, the algorithm and policies at the heart of Robodebt caused wildly inaccurate assessments and administrative burdens that disproportionately impacted those with the least resources. After a federal court ruled the policy unlawful, the government was forced to terminate Robodebt and agree to a $1.8 billion settlement.

Robodebt is important because it is an example of a costly failure with automation. By automation, I mean the use of data to create digital defaults for decisions. This could involve the use of AI, or it could mean the use of algorithms reading administrative data. Cases like Robodebt serve as canaries in the coalmine for policymakers interested in using AI or algorithms as a means to downsize public services on the hazy notion that automation will pick up the slack. But I think they are missing the very real risks involved.

To be clear, the lesson is not “all automation is bad.” Indeed, it offers real benefits in potentially reducing administrative costs and hassles and increasing access to public services (e.g., the use of automated or “ex parte” renewals for Medicaid, which Republicans are considering limiting in their new budget bill). It is this promise that makes automation so attractive to policymakers. But it is also the case that automation can be used to deny access to services, and to put people into digital cages that are burdensome to escape from. This is why we need to learn from cases where it has been deployed.

The experience of Robodebt underlines the dangers of using citizens as lab rats to adopt AI on a broad scale before it has been proven to work. Alongside the parallel collapse of the Dutch government childcare system, Robodebt provides an extraordinarily rich text to understand how automated decision processes can go wrong.

I recently wrote about Robodebt (with co-authors Morten Hybschmann, Kathryn Gimborys, Scott Loudin, Will McClellan), both in the journal Perspectives on Public Management and Governance and as a teaching case study at the Better Government Lab...(More)”.