AI and Assembly: Coming Together and Apart in a Datafied World


Book edited by Toussaint Nothias and Lucy Bernholz: “Artificial intelligence has moved from the lab into everyday life and is now seemingly everywhere. As AI creeps into every aspect of our lives, the data grab required to power AI also expands. People worldwide are tracked, analyzed, and influenced, whether on or off their screens, inside their homes or outside in public, still or in transit, alone or together. What does this mean for our ability to assemble with others for collective action, including protesting, holding community meetings and organizing rallies? In this context, where and how does assembly take place, and who participates by choice and who by coercion? AI and Assembly explores these questions and offers global perspectives on the present and future of assembly in a world taken over by AI.

The contributors analyze how AI threatens free assembly by clustering people without consent, amplifying social biases, and empowering authoritarian surveillance. But they also explore new forms of associational life that emerge in response to these harms, from communities in the US conducting algorithmic audits to human rights activists in East Africa calling for biometric data protection and rideshare drivers in London advocating for fair pay. Ultimately, AI and Assembly is a rallying cry for those committed to a digital future beyond the narrow horizon of corporate extraction and state surveillance…(More)”.

Enjoy TikTok Explainers? These Old-Fashioned Diagrams Are A Whole Lot Smarter


Article by Jonathon Keats: “In the aftermath of Hiroshima, many of the scientists who built the atomic bomb changed the way they reckoned time. Their conception of the future was published on the cover of The Bulletin of the Atomic Scientists, which portrayed a clock set at seven minutes to midnight. In subsequent months and years, the clock sometimes advanced. Other times, the hands fell back. With this simple indication, the timepiece tracked the likelihood of nuclear annihilation.

Although few of the scientists who worked on the Manhattan Project are still alive, the Doomsday Clock remains operational, steadfastly translating risk into units of hours and minutes. Over time, the diagram has become iconic, and not only for subscribers to The Bulletin. It’s now so broadly recognizable that we may no longer recognize what makes it radical.

John Auldjo. Map of Vesuvius showing the direction of the streams of lava in the eruptions from 1631 to 1831, 1832. Exhibition copy from a printed book in John Auldjo, Sketches of Vesuvius: with Short Accounts of Its Principal Eruptions from the Commencement of the Christian Era to the Present Time (Napoli: George Glass, 1832). Olschki 53, plate before p. 27, Biblioteca Nazionale Centrale di Firenze, Firenze. Courtesy Ministero della Cultura – Biblioteca Nazionale Centrale di Firenze. Any unauthorized reproduction by any means whatsoever is prohibited.

A thrilling new exhibition at the Fondazione Prada brings the Doomsday Clock back into focus. Featuring hundreds of diagrams from the past millennium, ranging from financial charts to maps of volcanic eruptions, the exhibition provides the kind of survey that brings definition to an entire category of visual communication. Each work benefits from its association with others that are manifestly different in form and function…(More)”.

Beyond AI and Copyright


White Paper by Paul Keller: “…argues for interventions to ensure the sustainability of the information ecosystem in the age of generative AI. Authored by Paul Keller, the paper builds on Open Future’s ongoing work on Public AI and on AI and creative labour, and proposes measures aimed at ensuring a healthy and equitable digital knowledge commons.

Rather than focusing on the rights of individual creators or the infringement debates that dominate current policy discourse, the paper frames generative AI as a new cultural and social technology—one that is rapidly reshaping how societies access, produce, and value information. It identifies two major structural risks: the growing concentration of control over knowledge, and the hollowing out of the institutions and economies that sustain human information production.

To counter these risks, the paper calls for the development of public AI infrastructures and a redistributive mechanism based on a levy on commercial AI systems trained on publicly available information. The proceeds would support not only creators and rightholders, but also public service media, cultural heritage institutions, open content platforms, and the development of Public AI systems…(More)”.

Community Engagement Is Crucial for Successful State Data Efforts


Resource by the Data Quality Campaign: “Engaging communities is a critical step toward ensuring that data efforts work for their intended audiences. People, including state policymakers, school leaders, families, college administrators, employers, and the public, should have a say in how their state provides access to education and workforce data. And as state leaders build robust statewide longitudinal data systems (SLDSs) or move other data efforts forward, they must deliberately create consistent opportunities for communities to weigh in. This resource explores how states can meaningfully engage with communities to build trust and improve data efforts by ensuring that systems, tools, and resources are valuable to the people who use them…(More)”.

Data integration and synthesis for pandemic and epidemic intelligence


Paper by Barbara Tornimbene et al: “The COVID-19 pandemic highlighted substantial obstacles in real-time data generation and management needed for clinical research and epidemiological analysis. Three years after the pandemic, reflection on the difficulties of data integration offers potential to improve emergency preparedness. The fourth session of the WHO Pandemic and Epidemic Intelligence Forum sought to report the experiences of key global institutions in data integration and synthesis, with the aim of identifying solutions for effective integration. Data integration, defined as the combination of heterogeneous sources into a cohesive system, allows for combining epidemiological data with contextual elements such as socioeconomic determinants to create a more complete picture of disease patterns. The approach is critical for predicting outbreaks, determining disease burden, and evaluating interventions. The use of contextual information improves real-time intelligence and risk assessments, allowing for faster outbreak responses. This report captures the growing acknowledgment of the importance of data integration in boosting public health intelligence and readiness, and it shows examples of how global institutions are strengthening initiatives to respond to this need. However, obstacles persist, including interoperability, data standardization, and ethical considerations. The success of future data integration efforts will be determined by the development of a common technical and legal framework, the promotion of global collaboration, and the protection of sensitive data. Ultimately, effective data integration can potentially transform public health intelligence and our way to successfully respond to future pandemics…(More)”.

Why AI hardware needs to be open


Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, wearables that are going to track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence? 

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”

5 Ways Cooperatives Can Shape the Future of AI


Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.

This domination brings problems: privacy violation and cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.

Yet, as companies consider using this technology, there can seem to be few other options, leaving them feeling locked into these compromises.

A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.

A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.

There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap triggered by DeepSeek shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.

Trends in AI Supercomputers


Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
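The paper's 2030 projection follows directly from compounding its reported doubling times over the five years from March 2025. A minimal sketch of that arithmetic, using only the figures quoted above (9-month doubling for performance, 12-month doubling for cost and power, and Colossus's $7B / 300 MW baseline); the function name is illustrative, not from the paper:

```python
def extrapolate(value_2025: float, doubling_months: float, years: float = 5) -> float:
    """Project a quantity forward by compounding its doubling time."""
    return value_2025 * 2 ** (years * 12 / doubling_months)

# Hardware cost and power double every 12 months (per the paper).
cost_2030 = extrapolate(7e9, 12)     # dollars: 7e9 * 2**5 = $224B (paper rounds to ~$200B)
power_2030 = extrapolate(300e6, 12)  # watts:   300 MW * 32 = 9.6 GW (paper: ~9 GW)

# Performance doubles every 9 months, so it grows ~2**(60/9) ≈ 100x by 2030.
perf_growth = extrapolate(1.0, 9)

print(f"2030 cost:  ${cost_2030 / 1e9:.0f}B")
print(f"2030 power: {power_2030 / 1e9:.1f} GW")
print(f"performance growth: {perf_growth:.0f}x")
```

The small gap between $224B and the paper's $200B headline figure is consistent with rounding in the abstract; the power projection matches almost exactly.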

Data Collection and Analysis for Policy Evaluation: Not for Duty, but for Knowledge


Paper by Valentina Battiloro: “This paper explores the challenges and methods involved in public policy evaluation, focusing on the role of data collection and use. The term “evaluation” encompasses a variety of analyses and approaches, all united by the intent to provide a judgment on a specific policy, but which, depending on the precise knowledge objective, can translate into entirely different activities. Regardless of the type of evaluation, a brief overview of which is provided, the collection of information represents a priority, often undervalued, under the assumption that it is sufficient to “have the data.” Issues arise concerning the precise definition of the design, the planning of necessary information collection, and the appropriate management of timelines. With regard to administrative data, a potentially valuable source, a number of unresolved challenges remain due to a weak culture of data utilization. Among these are the transition from an administrative data culture to a statistical data culture, and the fundamental issue of microdata accessibility for research purposes, which is currently hindered by significant barriers…(More)”.

Digital Methods: A Short Introduction


Book by Tommaso Venturini and Richard Rogers: “In a direct and accessible way, the authors provide hands-on advice to equip readers with the knowledge they need to understand which digital methods are best suited to their research goals and how to use them. Cutting through theoretical and technical complications, they focus on the different practices associated with digital methods to skillfully provide a quick-start guide to the art of querying, prompting, API calling, scraping, mining, wrangling, visualizing, crawling, plotting networks, and scripting. While embracing the capacity of digital methods to rekindle sociological imagination, this book also delves into their limits and biases and reveals the hard labor of digital fieldwork. It also touches upon the epistemic and political consequences of these methods, but with the purpose of providing practical advice for their usage…(More)”.