Stefaan Verhulst
Paper by Barbara Tornimbene et al: “The COVID-19 pandemic highlighted substantial obstacles in real-time data generation and management needed for clinical research and epidemiological analysis. Three years after the pandemic, reflection on the difficulties of data integration offers potential to improve emergency preparedness. The fourth session of the WHO Pandemic and Epidemic Intelligence Forum sought to report the experiences of key global institutions in data integration and synthesis, with the aim of identifying solutions for effective integration. Data integration, defined as the combination of heterogeneous sources into a cohesive system, allows for combining epidemiological data with contextual elements such as socioeconomic determinants to create a more complete picture of disease patterns. The approach is critical for predicting outbreaks, determining disease burden, and evaluating interventions. The use of contextual information improves real-time intelligence and risk assessments, allowing for faster outbreak responses. This report captures the growing acknowledgment of data integration importance in boosting public health intelligence and readiness and shows examples of how global institutions are strengthening initiatives to respond to this need. However, obstacles persist, including interoperability, data standardization, and ethical considerations. The success of future data integration efforts will be determined by the development of a common technical and legal framework, the promotion of global collaboration, and the protection of sensitive data. Ultimately, effective data integration can potentially transform public health intelligence and our ability to respond successfully to future pandemics…(More)”.
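As a rough illustration (not taken from the paper) of the kind of integration described above, the sketch below joins hypothetical epidemiological case counts with socioeconomic indicators on a shared district key using pandas; the sample values, column names, and derived incidence measure are all assumptions made for this example.

```python
# Minimal sketch (not from the paper): joining epidemiological and
# contextual data on a shared geographic key. All values and column
# names are hypothetical.
import pandas as pd

# Weekly reported cases per district (hypothetical surveillance extract)
cases = pd.DataFrame({
    "district": ["A", "B", "C"],
    "week": ["2024-W01"] * 3,
    "cases": [120, 45, 310],
})

# Contextual socioeconomic determinants (hypothetical census extract)
context = pd.DataFrame({
    "district": ["A", "B", "C"],
    "population": [50_000, 20_000, 80_000],
    "poverty_rate": [0.18, 0.09, 0.31],
})

# Integrate the heterogeneous sources into one table and derive a
# population-adjusted incidence rate that can feed risk assessment.
merged = cases.merge(context, on="district", how="left")
merged["incidence_per_10k"] = merged["cases"] / merged["population"] * 10_000
print(merged)
```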
Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us.
This is a problem. And not just for tinkerers and technologists, but for all of us.
We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space.
The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, wearables that are going to track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence?
This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers.
In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”
Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.
This domination brings problems: privacy violations and cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.
Yet, as companies consider using this technology, there can seem to be few other options, leaving them feeling locked into these compromises.
A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors ranging from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.
A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.
There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap triggered by DeepSeek shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.
Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
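Taking the reported doubling times at face value, a quick back-of-the-envelope extrapolation from the March 2025 Colossus figures lands close to the stated 2030 projection. The sketch below is illustrative only and assumes smooth exponential growth from a single data point, whereas the paper projects from fitted trends.

```python
# Back-of-the-envelope check of the 2030 projection, assuming smooth
# exponential growth from the March 2025 leading system (Colossus).
# Illustrative only; the paper's own projection uses fitted trends.

months = 12 * (2030 - 2025)                 # ~60 months ahead

perf_multiplier = 2 ** (months / 9)         # performance doubles every 9 months
cost_power_multiplier = 2 ** (months / 12)  # cost and power double every year

colossus_cost_usd_b = 7                     # $7B hardware cost (March 2025)
colossus_power_mw = 300                     # 300 MW power draw (March 2025)

print(f"performance growth:  ~{perf_multiplier:.0f}x")
print(f"2030 hardware cost:  ~${colossus_cost_usd_b * cost_power_multiplier:.0f}B")
print(f"2030 power draw:     ~{colossus_power_mw * cost_power_multiplier / 1000:.1f} GW")
```

Run as written, this yields roughly a 100x performance multiplier, a hardware cost around $224B, and a power draw around 9.6 GW, in the same ballpark as the paper’s ~$200 billion and 9 GW figures.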
Paper by Valentina Battiloro: “This paper explores the challenges and methods involved in public policy evaluation, focusing on the role of data collection and use. The term “evaluation” encompasses a variety of analyses and approaches, all united by the intent to provide a judgment on a specific policy, but which, depending on the precise knowledge objective, can translate into entirely different activities. Regardless of the type of evaluation, a brief overview of which is provided, the collection of information represents a priority, often undervalued, under the assumption that it is sufficient to “have the data.” Issues arise concerning the precise definition of the design, the planning of necessary information collection, and the appropriate management of timelines. With regard to administrative data, a potentially valuable source, a number of unresolved challenges remain due to a weak culture of data utilization. Among these are the transition from an administrative data culture to a statistical data culture, and the fundamental issue of microdata accessibility for research purposes, which is currently hindered by significant barriers…(More)”.
Book by Tommaso Venturini and Richard Rogers: “In a direct and accessible way, the authors provide hands-on advice to equip readers with the knowledge they need to understand which digital methods are best suited to their research goals and how to use them. Cutting through theoretical and technical complications, they focus on the different practices associated with digital methods to skillfully provide a quick-start guide to the art of querying, prompting, API calling, scraping, mining, wrangling, visualizing, crawling, plotting networks, and scripting. While embracing the capacity of digital methods to rekindle sociological imagination, this book also delves into their limits and biases and reveals the hard labor of digital fieldwork. The book also touches upon the epistemic and political consequences of these methods, but with the purpose of providing practical advice for their usage…(More)”.
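For a flavor of the practices listed above, here is a minimal sketch (not drawn from the book) of one such workflow: querying a public API, wrangling the response into a table, and summarizing it. Wikipedia’s search API is used purely as a convenient example; the query term and fields shown are arbitrary choices.

```python
# Minimal sketch of a digital-methods workflow: API calling, wrangling,
# and a first summary of the result set. Wikipedia's public search API
# serves as one of many possible entry points.
import requests
import pandas as pd

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "list": "search",
        "srsearch": "digital methods",
        "srlimit": 20,
        "format": "json",
    },
    timeout=30,
)
resp.raise_for_status()

# Wrangle the JSON payload into a flat table
hits = pd.DataFrame(resp.json()["query"]["search"])

# A simple first look at what the query returned
print(hits[["title", "wordcount", "timestamp"]].head(10))
```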
Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.
In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware and within limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?
One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of such applications and products, tailored to specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.
Book by Olivier Alexandre: “Sometimes only an outsider can show how an industry works—and how that industry works upon the world. In Tech, sociologist Olivier Alexandre takes us on a revealing tour of Silicon Valley’s prominent personalities and vibrant networks to capture the way its denizens live, think, relate, and innovate, and how they shape the very code and conduct of business itself.
Even seasoned observers will gain insight into the industry’s singular milieu from Alexandre’s piercing eye. He spends as much time with Silicon Valley’s major players as with those who fight daily to survive within a system engineered for disruption. Embedded deep within the community, Alexandre accesses rooms shut tight to the public and reports back on the motivations, ambitions, and radical vision guiding tech companies. From the conquest of space to quantum computing, engineers have recast the infinitely large and small. Some scientists predict the end of death and the replacement of human beings with machines. But at what cost? Alexandre sees a shadow hanging over the Valley, jeopardizing its future and the economy made in its image. Critical yet fair, Tech illuminates anew a world of perpetual revolution…(More)”.
Paper by Annelien Smets: “Designing for serendipity in information technologies presents significant challenges for both scholars and practitioners. This paper presents a theoretical model of serendipity that aims to address this challenge by providing a structured framework for understanding and designing for serendipity. The model delineates between intended, afforded, and experienced serendipity, recognizing the role of design intents and the subjective nature of experiencing serendipity. Central to the model is the recognition that there is no single definition nor a unique operationalization of serendipity, emphasizing the need for a nuanced approach to its conceptualization and design. By distinguishing between the intentions of designers, the characteristics of the system, and the experiences of end-users, the model offers a pathway to resolve the paradox of artificial serendipity and provides actionable guidelines to design for serendipity in information technologies. However, it also emphasizes the importance of establishing ‘guardrails’ to guide the design process and mitigate potential negative unintended consequences. The model aims to lay the groundwork for advancing both research and the practice of designing for serendipity, leading to more ethical and effective design practices…(More)”.
Article by Nino Letteriello: “This September will mark two years since the Data Governance Act officially became applicable across the European Union. This regulation, part of the broader European data strategy, focuses primarily on data sharing between public and private entities and the overall development of a data-driven economy.
Although less known than its high-profile counterparts—the Data Act and especially the Artificial Intelligence Act—the Data Governance Act introduces a particularly compelling concept: data altruism.
Data altruism refers to the voluntary sharing of data—by individuals or companies—without expecting any reward, for purposes of general interest. Such data has immense potential to advance research and drive innovation in areas like healthcare, environmental sustainability and mobility…The absence of structured research into corporate resistance to data donation suggests that the topic remains niche—mostly embraced by tech giants with strong data capabilities and CSR programs, like Meta for Good and Google AI for Good—but still virtually unknown to most companies.
Before we talk about resistance to data donation, perhaps we should explore the level of awareness companies have about the impact such donations could have.
And so, in trying to answer the question I posed at the beginning of this article, perhaps the most appropriate response is yet another question: Do companies even realize that the data they collect, generate and manage could be a vital resource for building a better world?
And if they were more aware of the different ways they could do good with data—would they be more inclined to act?
Despite the existence of the Data Governance Act and the Data Act, these questions remain largely unanswered. But the hope is that, as data becomes more democratized within organizations and as social responsibility and sustainability take center stage, “Data for Good” will become a standard theme in corporate agendas.
After all, private companies are the most valuable and essential data providers and partners for this kind of transformation—and it is often we, the people, who provide them with the very data that could help change our world…(More)”.