A New Social Contract for AI? Comparing CC Signals and the Social License for Data Reuse


Article by Stefaan Verhulst: “Last week, Creative Commons — the global nonprofit best known for its open copyright licenses — released “CC Signals: A New Social Contract for the Age of AI.” This framework seeks to offer creators a means to signal their preferences for how their works are used in machine learning, including training Artificial Intelligence systems. It marks an important step toward integrating re-use preferences and shared benefits directly into the AI development lifecycle….

From a responsible AI perspective, the CC Signals framework is an important development. It demonstrates how soft governance mechanisms — declarations, usage expressions, and social signaling — can supplement or even fill gaps left by inconsistent global copyright regimes in the context of AI. At the same time, this initiative provides an interesting point of comparison with our ongoing work to develop a Social License for Data Reuse. A social license for data reuse is a participatory governance framework that allows communities to collectively define, signal, and enforce the conditions under which data about them can be reused — including for training AI. Unlike traditional consent-based mechanisms, which focus on individual permissions at the point of collection, a social license introduces a community-centered, continuous process of engagement — ensuring that data practices align with shared values, ethical norms, and contextual realities. It provides a complementary layer to legal compliance, emphasizing trust, legitimacy, and accountability in data governance.

While both frameworks are designed to signal preferences and expectations for data or content reuse, they differ meaningfully in scope, method, and theory of change.

Below, we offer a comparative analysis of the two frameworks — highlighting how each approaches the challenge of embedding legitimacy and trust into AI and data ecosystems…(More)”.

Unpacking the B2G Data Sharing Mechanism Under the EU Data Act


Paper by Ludovica Paseri and Stefaan G. Verhulst: “The paper proposes an analysis of the business-to-government (B2G) data sharing mechanism envisaged by Regulation EU 2023/2854, the so-called Data Act. The Regulation, in force since 11 January 2024, will be applicable from 12 September 2025, requiring the actors involved to put in place a compliance process. The focus of the paper is to present an assessment of the mechanism foreseen by the EU legislators, with the intention of highlighting two bottlenecks: (i) the flexibility of the definitions of “exceptional need”, “public emergency”, and “public interest”; (ii) the cumbersome procedure for data holders. The paper discusses the role that could be played by in-house data stewardship structures as a particularly beneficial contact point for complying with B2G data sharing requirements…(More)“.

Commission facilitates data access for researchers under the Digital Services Act


Press Release: “On 2 July 2025, the Commission published a delegated act outlining rules granting access to data for qualified researchers under the Digital Services Act (DSA). This delegated act enables access to the internal data of very large online platforms (VLOPs) and search engines (VLOSEs) for research on systemic risks and on mitigation measures in the European Union.

The delegated act on data access clarifies the procedures for VLOPs and VLOSEs to share data with vetted researchers, including data formats and requirements for data documentation. Moreover, the delegated act sets out which information Digital Services Coordinators (DSCs), VLOPs and VLOSEs must make public to facilitate vetted researchers’ applications to access relevant datasets.

With the adoption of the delegated act, the Commission will launch the DSA data access portal where researchers interested in accessing data under the new mechanism can find information and exchange with VLOPs, VLOSEs and DSCs on their data access applications. 

Before accessing internal data, researchers must be vetted by a DSC

For this vetting process, researchers must submit a data access application demonstrating their affiliation to a research organisation, their independence from commercial interests, and their ability to manage the requested data in line with security, confidentiality and privacy rules. In addition, researchers need to disclose the funding of the research project for which the data is requested and commit to publishing the results of their research. Only data that is necessary to perform research on systemic risks in the EU can be requested.

To complement the rules in the delegated act, on 27 June 2025 the Board of Digital Services endorsed a proposal for further cooperation among DSCs in the vetting process of researchers…(More)”.

Cloudflare Introduces Default Blocking of A.I. Data Scrapers


Article by Natallie Rocha: “Data for A.I. systems has become an increasingly contentious issue. OpenAI, Anthropic, Google and other companies building A.I. systems have amassed reams of information from across the internet to train their A.I. models. High-quality data is particularly prized because it helps A.I. models become more proficient in generating accurate answers, videos and images.

But website publishers, authors, news organizations and other content creators have accused A.I. companies of using their material without permission and payment. Last month, Reddit sued Anthropic, saying the start-up had unlawfully used the data of its more than 100 million daily users to train its A.I. systems. In 2023, The New York Times sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.

Some publishers have struck licensing deals with A.I. companies to receive compensation for their content. In May, The Times agreed to license its editorial content to Amazon for use in the tech giant’s A.I. platforms. Axel Springer, Condé Nast and News Corp have also entered into agreements with A.I. companies to receive revenue for the use of their material.

Mark Howard, the chief operating officer of Time, said he welcomed Cloudflare’s move. Data scraping by A.I. companies threatens anyone who creates content, he said, adding that news publishers like Time deserved fair compensation for what they published…(More)”.

AI and Assembly: Coming Together and Apart in a Datafied World


Book edited by Toussaint Nothias and Lucy Bernholz: “Artificial intelligence has moved from the lab into everyday life and is now seemingly everywhere. As AI creeps into every aspect of our lives, the data grab required to power AI also expands. People worldwide are tracked, analyzed, and influenced, whether on or off their screens, inside their homes or outside in public, still or in transit, alone or together. What does this mean for our ability to assemble with others for collective action, including protesting, holding community meetings and organizing rallies? In this context, where and how does assembly take place, and who participates by choice and who by coercion? AI and Assembly explores these questions and offers global perspectives on the present and future of assembly in a world taken over by AI.

The contributors analyze how AI threatens free assembly by clustering people without consent, amplifying social biases, and empowering authoritarian surveillance. But they also explore new forms of associational life that emerge in response to these harms, from communities in the US conducting algorithmic audits to human rights activists in East Africa calling for biometric data protection and rideshare drivers in London advocating for fair pay. Ultimately, AI and Assembly is a rallying cry for those committed to a digital future beyond the narrow horizon of corporate extraction and state surveillance…(More)”.

Enjoy TikTok Explainers? These Old-Fashioned Diagrams Are A Whole Lot Smarter


Article by Jonathon Keats: “In the aftermath of Hiroshima, many of the scientists who built the atomic bomb changed the way they reckoned time. Their conception of the future was published on the cover of The Bulletin of the Atomic Scientists, which portrayed a clock set at seven minutes to midnight. In subsequent months and years, the clock sometimes advanced. Other times, the hands fell back. With this simple indication, the timepiece tracked the likelihood of nuclear annihilation.

Although few of the scientists who worked on the Manhattan Project are still alive, the Doomsday Clock remains operational, steadfastly translating risk into units of hours and minutes. Over time, the diagram has become iconic, and not only for subscribers to The Bulletin. It’s now so broadly recognizable that we may no longer recognize what makes it radical.

[Image] John Auldjo, Map of Vesuvius showing the direction of the streams of lava in the eruptions from 1631 to 1831, 1832. From John Auldjo, Sketches of Vesuvius: with Short Accounts of Its Principal Eruptions from the Commencement of the Christian Era to the Present Time (Napoli: George Glass, 1832). Courtesy Ministero della Cultura – Biblioteca Nazionale Centrale di Firenze.

A thrilling new exhibition at the Fondazione Prada brings the Doomsday Clock back into focus. Featuring hundreds of diagrams from the past millennium, ranging from financial charts to maps of volcanic eruptions, the exhibition provides the kind of survey that brings definition to an entire category of visual communication. Each work benefits from its association with others that are manifestly different in form and function…(More)”.

Beyond AI and Copyright


White Paper by Paul Keller: “…argues for interventions to ensure the sustainability of the information ecosystem in the age of generative AI. Authored by Paul Keller, the paper builds on Open Future’s ongoing work on Public AI and on AI and creative labour, and proposes measures aimed at ensuring a healthy and equitable digital knowledge commons.

Rather than focusing on the rights of individual creators or the infringement debates that dominate current policy discourse, the paper frames generative AI as a new cultural and social technology—one that is rapidly reshaping how societies access, produce, and value information. It identifies two major structural risks: the growing concentration of control over knowledge, and the hollowing out of the institutions and economies that sustain human information production.

To counter these risks, the paper calls for the development of public AI infrastructures and a redistributive mechanism based on a levy on commercial AI systems trained on publicly available information. The proceeds would support not only creators and rightholders, but also public service media, cultural heritage institutions, open content platforms, and the development of Public AI systems…(More)”.

Community Engagement Is Crucial for Successful State Data Efforts


Resource by the Data Quality Campaign: “Engaging communities is a critical step toward ensuring that data efforts work for their intended audiences. People, including state policymakers, school leaders, families, college administrators, employers, and the public, should have a say in how their state provides access to education and workforce data. And as state leaders build robust statewide longitudinal data systems (SLDSs) or move other data efforts forward, they must deliberately create consistent opportunities for communities to weigh in. This resource explores how states can meaningfully engage with communities to build trust and improve data efforts by ensuring that systems, tools, and resources are valuable to the people who use them…(More)”.

Data integration and synthesis for pandemic and epidemic intelligence


Paper by Barbara Tornimbene et al: “The COVID-19 pandemic highlighted substantial obstacles in real-time data generation and management needed for clinical research and epidemiological analysis. Three years after the pandemic, reflection on the difficulties of data integration offers potential to improve emergency preparedness. The fourth session of the WHO Pandemic and Epidemic Intelligence Forum sought to report the experiences of key global institutions in data integration and synthesis, with the aim of identifying solutions for effective integration. Data integration, defined as the combination of heterogeneous sources into a cohesive system, allows for combining epidemiological data with contextual elements such as socioeconomic determinants to create a more complete picture of disease patterns. The approach is critical for predicting outbreaks, determining disease burden, and evaluating interventions. The use of contextual information improves real-time intelligence and risk assessments, allowing for faster outbreak responses. This report captures the growing acknowledgment of the importance of data integration in boosting public health intelligence and readiness, and shows examples of how global institutions are strengthening initiatives to respond to this need. However, obstacles persist, including interoperability, data standardization, and ethical considerations. The success of future data integration efforts will be determined by the development of a common technical and legal framework, the promotion of global collaboration, and the protection of sensitive data. Ultimately, effective data integration can potentially transform public health intelligence and the way we respond to future pandemics…(More)”.

Why AI hardware needs to be open


Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, and wearables that are going to track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence?

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”