5 Ways Cooperatives Can Shape the Future of AI


Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.

This domination brings problems: privacy violations, cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.

Yet, as companies consider using this technology, there can seem to be few other options, leaving them feeling locked into these compromises.

A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors ranging from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.

A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.

There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap that DeepSeek triggered shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.

Trends in AI Supercomputers


Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
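To make the reported growth rates concrete, the sketch below applies the stated doubling times (hardware cost and power roughly every year) to the March 2025 baseline quoted above. This is a minimal back-of-the-envelope illustration, not the authors' model: the roughly five-year horizon and the helper function are assumptions chosen to show how the quoted 2030 figures follow from the doubling times.

```python
# Back-of-the-envelope extrapolation of the doubling times reported in the paper,
# applied to the March 2025 leading system (xAI's Colossus: $7B hardware, 300 MW).
# The ~5-year horizon and the helper below are illustrative assumptions, not the
# authors' methodology.

def project(value: float, doubling_time_years: float, horizon_years: float) -> float:
    """Grow `value` by a factor of 2 every `doubling_time_years` over `horizon_years`."""
    return value * 2 ** (horizon_years / doubling_time_years)

HORIZON_YEARS = 5.0  # roughly March 2025 to 2030 (assumption)

cost_2030 = project(7e9, 1.0, HORIZON_YEARS)     # hardware cost, doubling yearly
power_2030 = project(300e6, 1.0, HORIZON_YEARS)  # power draw in watts, doubling yearly

print(f"Projected hardware cost: ~${cost_2030 / 1e9:.0f}B")  # same order as the paper's ~$200 billion
print(f"Projected power draw:   ~{power_2030 / 1e9:.1f} GW")  # same order as the paper's ~9 GW
```

Performance is left out of the sketch because the excerpt does not give the March 2025 system's FLOP/s baseline; the paper's own projection for 2030 is 2×10²² 16-bit FLOP/s.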

Data Collection and Analysis for Policy Evaluation: Not for Duty, but for Knowledge


Paper by Valentina Battiloro: “This paper explores the challenges and methods involved in public policy evaluation, focusing on the role of data collection and use. The term “evaluation” encompasses a variety of analyses and approaches, all united by the intent to provide a judgment on a specific policy, but which, depending on the precise knowledge objective, can translate into entirely different activities. Regardless of the type of evaluation, a brief overview of which is provided, the collection of information represents a priority, often undervalued, under the assumption that it is sufficient to “have the data.” Issues arise concerning the precise definition of the design, the planning of necessary information collection, and the appropriate management of timelines. With regard to administrative data, a potentially valuable source, a number of unresolved challenges remain due to a weak culture of data utilization. Among these are the transition from an administrative data culture to a statistical data culture, and the fundamental issue of microdata accessibility for research purposes, which is currently hindered by significant barriers…(More)”.

Digital Methods: A Short Introduction


Book by Tommaso Venturini and Richard Rogers: “In a direct and accessible way, the authors provide hands-on advice to equip readers with the knowledge they need to understand which digital methods are best suited to their research goals and how to use them. Cutting through theoretical and technical complications, they focus on the different practices associated with digital methods to skillfully provide a quick-start guide to the art of querying, prompting, API calling, scraping, mining, wrangling, visualizing, crawling, plotting networks, and scripting. While embracing the capacity of digital methods to rekindle sociological imagination, this book also delves into their limits and biases and reveals the hard labor of digital fieldwork. The book also touches upon the epistemic and political consequences of these methods, but with the purpose of providing practical advice for their usage…(More)”.
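By way of illustration only (this example is not from the book), the snippet below gestures at two of the practices the authors survey, API calling and plotting networks. The Wikipedia endpoint, the page title, and the toy network of methods are assumptions chosen for brevity.

```python
# A toy illustration of two "digital methods": calling a public API and building a
# small network. The endpoint, page title, and edges are illustrative assumptions.
import json
import urllib.request

import networkx as nx

# API calling: fetch a page summary from Wikipedia's public REST API.
url = "https://en.wikipedia.org/api/rest_v1/page/summary/Web_scraping"
req = urllib.request.Request(url, headers={"User-Agent": "digital-methods-example/0.1"})
with urllib.request.urlopen(req) as resp:
    summary = json.load(resp)
print(summary.get("extract", "")[:200])

# Plotting networks: a tiny co-occurrence network of the methods named above.
g = nx.Graph()
g.add_edges_from([
    ("querying", "scraping"),
    ("scraping", "wrangling"),
    ("wrangling", "visualizing"),
    ("crawling", "scraping"),
])
print(nx.degree_centrality(g))  # which method sits most central in this toy graph
```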

AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future


Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.

In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware and within limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of such applications and products, tailored and designed for specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.

Tech: When Silicon Valley Remakes the World


Book by Olivier Alexandre: “Sometimes only an outsider can show how an industry works—and how that industry works upon the world. In Tech, sociologist Olivier Alexandre takes us on a revealing tour of Silicon Valley’s prominent personalities and vibrant networks to capture the way its denizens live, think, relate, and innovate, and how they shape the very code and conduct of business itself.
 
Even seasoned observers will gain insight into the industry’s singular milieu from Alexandre’s piercing eye. He spends as much time with Silicon Valley’s major players as with those who fight daily to survive within a system engineered for disruption. Embedded deep within the community, Alexandre accesses rooms shut tight to the public and reports back on the motivations, ambitions, and radical vision guiding tech companies. From the conquest of space to quantum computing, engineers have recast the infinitely large and small. Some scientists predict the end of death and the replacement of human beings with machines. But at what cost? Alexandre sees a shadow hanging over the Valley, jeopardizing its future and the economy made in its image. Critical yet fair, Tech illuminates anew a world of perpetual revolution…(More)”.

Intended, afforded, and experienced serendipity: overcoming the paradox of artificial serendipity


Paper by Annelien Smets: “Designing for serendipity in information technologies presents significant challenges for both scholars and practitioners. This paper presents a theoretical model of serendipity that aims to address this challenge by providing a structured framework for understanding and designing for serendipity. The model delineates between intended, afforded, and experienced serendipity, recognizing the role of design intents and the subjective nature of experiencing serendipity. Central to the model is the recognition that there is neither a single definition nor a unique operationalization of serendipity, emphasizing the need for a nuanced approach to its conceptualization and design. By delineating between the intentions of designers, the characteristics of the system, and the experiences of end-users, the model offers a pathway to resolve the paradox of artificial serendipity and provides actionable guidelines to design for serendipity in information technologies. However, it also emphasizes the importance of establishing ‘guardrails’ to guide the design process and mitigate potential negative unintended consequences. The model aims to lay the groundwork for advancing both research and the practice of designing for serendipity, leading to more ethical and effective design practices…(More)”.

Companies Are Missing The Chance To Improve The World With Their Data


Article by Nino Letteriello: “This September will mark two years since the Data Governance Act officially became applicable across the European Union. This regulation, part of the broader European data strategy, focuses primarily on data sharing between public and private entities and the overall development of a data-driven economy.

Although less known than its high-profile counterparts—the Data Act and especially the Artificial Intelligence Act—the Data Governance Act introduces a particularly compelling concept: data altruism.

Data altruism refers to the voluntary sharing of data—by individuals or companies—without expecting any reward, for purposes of general interest. Such data has immense potential to advance research and drive innovation in areas like healthcare, environmental sustainability and mobility…The absence of structured research into corporate resistance to data donation suggests that the topic remains niche—mostly embraced by tech giants with strong data capabilities and CSR programs, like Meta for Good and Google AI for Good—but still virtually unknown to most companies.

Before we talk about resistance to data donation, perhaps we should explore the level of awareness companies have about the impact such donations could have.

And so, in trying to answer the question I posed at the beginning of this article, perhaps the most appropriate response is yet another question: Do companies even realize that the data they collect, generate and manage could be a vital resource for building a better world?

And if they were more aware of the different ways they could do good with data—would they be more inclined to act?

Despite the existence of the Data Governance Act and the Data Act, these questions remain largely unanswered. But the hope is that, as data becomes more democratized within organizations and as social responsibility and sustainability take center stage, “Data for Good” will become a standard theme in corporate agendas.

After all, private companies are the most valuable and essential data providers and partners for this kind of transformation—and it is often we, the people, who provide them with the very data that could help change our world…(More)”.

What Counts as Discovery?


Essay by Nisheeth Vishnoi: “Long before there were “scientists,” there was science. Across every continent, humans developed knowledge systems grounded in experience, abstraction, and prediction—driven not merely by curiosity, but by a desire to transform patterns into principles, and observation into discovery. Farmers tracked solstices, sailors read stars, artisans perfected metallurgy, and physicians documented plant remedies. They built calendars, mapped cycles, and tested interventions—turning empirical insight into reliable knowledge.

From the oral sciences of Africa, which encoded botanical, medical, and ecological knowledge across generations, to the astronomical observatories of Mesoamerica, where priests tracked solstices, eclipses, and planetary motion with remarkable accuracy, early human civilizations sought more than survival. In Babylon, scribes logged celestial movements and built predictive models; in India, the architects of Vedic altars designed ritual structures whose proportions mirrored cosmic rhythms, embedding arithmetic and geometry into sacred form. Across these diverse cultures, discovery was not a separate enterprise—it was entwined with ritual, survival, and meaning. Yet the tools were recognizably scientific: systematic observation, abstraction, and the search for hidden order.

This was science before the name. And it reminds us that discovery has never belonged to any one civilization or era. Discovery is not intelligence itself, but one of its sharpest expressions—an act that turns perception into principle through a conceptual leap. While intelligence is broader and encompasses adaptation, inference, and learning in various forms (biological, cultural, and even mechanical), discovery marks those moments when something new is framed, not just found. 

Life forms learn, adapt, and even innovate. But it is humans who turned observation into explanation, explanation into abstraction, and abstraction into method. The rise of formal science brought mathematical structure and experiment, but it did not invent the impulse to understand—it gave it form, language, and reach.

And today, we stand at the edge of something unfamiliar: the possibility of lifeless discoveries. Artificial Intelligence machines, built without awareness or curiosity, are beginning to surface patterns and propose explanations, sometimes without our full understanding. If science has long been a dialogue between the world and living minds, we are now entering a strange new phase: abstraction without awareness, discovery without a discoverer.

AI systems now assist in everything from understanding black holes to predicting protein folds and even discovering symbolic equations. They parse vast datasets, detect regularities, and generate increasingly sophisticated outputs. Some claim they’re not just accelerating research, but beginning to reshape science itself—perhaps even to discover.

But what truly counts as a scientific discovery? This essay examines that question…(More)”

A.I. Is Starting to Wear Down Democracy


Article by Steven Lee Myers and Stuart A. Thompson: “Since the explosion of generative artificial intelligence over the last two years, the technology has been used to demean or defame opponents and, for the first time, officials and experts said, has begun to have an impact on election results.

Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.

In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.

“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”

In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.

It documented 215 instances of A.I. in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia…(More)”.