
Stefaan Verhulst

Book by Ben Green: “The field of data science faces a moral crisis. Despite the desires of data scientists to develop algorithms for good, algorithms regularly produce injustice in practice. Given these persistent harms, the field must reflect on difficult questions about its identity and future. Can data science be a force for promoting social justice in the world? What practices should data scientists follow to achieve this goal?

In Algorithmic Realism, Ben Green presents a bold and interdisciplinary approach to data science. Drawing on his experience practicing data science in the public interest, he argues that improving society with algorithms requires transforming data science from a formalist methodology focused on mathematical models into a practical methodology focused on addressing real-world problems. By providing an expanded framework for the “data science workflow”—the steps that characterize the algorithm development process—he offers a practical, step-by-step guide describing how data scientists can apply their skills in service of social justice. Through these contributions, the book reveals a vision for a renewed, but realistic, optimism about data science’s potential to foster a more equitable world…(More)”.

Algorithmic Realism: Data Science Practices to Promote Social Justice

Scan by Stefaan Verhulst and Begoña G. Otero: “As organizations work with more complex, real-time, and AI-enabled data environments, data governance can no longer be treated as a downstream compliance exercise. It needs to be designed across the full data life cycle: from planning and collection to processing, sharing, analysis, and use. That is the premise behind our new scan released today: Data Governance Innovations: Emerging Practices and Trends Across the Data Life Cycle.

Developed as a companion to our Q&A, What is Data Governance, this scan curates recent developments and maps them to the stages of the data life cycle where they are most relevant in practice: planning, collecting, processing, sharing, analyzing, and using data.

It curates innovations along the following interrelated dimensions:

Practices: new methods, tools, and governance arrangements that can be embedded into organizational operating models—such as privacy-enhancing technologies, data commons, agentic AI for discovery, social license, model cards, policy as code, data spaces, data collaboratives, data sandboxes, digital twins and benefit-sharing mechanisms, among others.

Structural Forces: dynamics shaping the environment in which data governance decisions are made. These include the rapid deployment of AI and the emergence of agentic systems, increasing regulatory complexity (“regulatory densification”), evolving data sovereignty and cross-border constraints, and the growing “data winter” affecting access to and reuse of data for public interest purposes.

Cross-Cutting Issues: system-wide considerations that influence governance across the entire data lifecycle, including the integration of AI and data governance, the development of social license for data reuse, and alignment with digital public infrastructure.

Designed as a living document, the scan will continue to evolve as governance practices change…(More)”

How Data Governance Is Evolving: Mapping Innovations Across the Data Lifecycle

Paper by Dimitrios Kalogeropoulos, Paul Barach, Andrea Downing, Stefaan Verhulst and Maryam B. Lustberg: “Healthcare services and data ecosystems remain fragmented, inequitable, and misaligned with the real-world needs of patients, clinicians, and public health systems. Existing pathways to patient-centred AI often lack contextual sensitivity and perpetuate disparities, limiting the transformative potential of AI to create personalised and inclusive care. This non-systematic narrative review examines three suitable pathways for integrating Artificial Intelligence (AI) into healthcare and identifies their limitations in realising patient-centred care. We propose a fourth pathway: Adaptive Machine Learning (AML). AML strategically integrates AI into learning health systems, allowing continuous model updates using population-level, context-sensitive real-world data. This quintuple-aim-based approach enhances personalisation, promotes quality and equity, and strengthens system resilience. We identify three critical enablers of AML: integrative data governance, adaptive study designs, and regulatory evidence sandbox facilities. Taken together, these elements can advance the goal of sustainable digital health autonomy and responsible, collaborative data use. The aim of this study is to define a practical and ethically grounded framework for operationalising AML as a fourth pathway to patient-centred AI that aligns with international standards for responsible healthcare innovation, equitable governance, and digital transformation.
Realising the full potential of AI in patient-centred healthcare requires urgent and coordinated actions across three priority areas to: (1) develop high-priority clinical use cases that demonstrate how AI can safely learn from real-world data and improve patient outcomes; (2) advance adaptive evaluation frameworks that reflect the lived experiences of diverse and underserved populations; and (3) establish regulatory evidence sandboxes to foster transparent, participatory, and multistakeholder innovation. Future research should prioritise integration of collective consent models and alignment of AI and medical device regulations with international governance toolkits to promote safe, patient-centred, inclusive, and trusted AI adoption in health ecosystems…(More)”.

Advancing Patient-Centred AI with Adaptive Machine Learning

Open Access Book edited by Alessandra Micalizzi: “…explores one of the most pressing transformations of contemporary knowledge production: the integration of artificial intelligence into the practices, methods, and epistemologies of the social sciences. Moving beyond simplistic narratives of automation and efficiency, this volume investigates AI as object, tool, context, and partner of research. Bringing together interdisciplinary perspectives, the contributions examine how algorithmic systems reshape inquiry, interpretation, and representation, while also raising fundamental methodological and ethical questions. From AI-assisted qualitative analysis and ethnography to digital imaginaries, bias, and futures thinking, the chapters reveal the complex co-production between technological systems and social knowledge. Rather than offering definitive answers, the book provides conceptual tools, empirical cases, and methodological reflections for navigating a rapidly evolving research landscape. It invites scholars to engage critically and creatively with artificial intelligence—not as a distant technology, but as an active participant in the construction of contemporary social understanding…(More)”.

Artificial Intelligence and Social Research. Methods, Contexts, Imaginaries

Annual Report by Freedom House: “Global freedom declined for the 20th consecutive year in 2025. A total of 54 countries experienced deterioration in their political rights and civil liberties, while only 35 countries registered improvements.

The largest declines in freedom for the calendar year were caused by military coups and efforts by incumbent leaders to crush peaceful dissent or change constitutional rules in their favor. Guinea-Bissau received the year’s single largest score change, losing 8 points on Freedom in the World’s 100-point scale after the November general elections were disrupted by a coup in which armed men stormed the election commission’s office and destroyed ballots. Military officers also ousted the elected government in Madagascar, bringing the total number of African countries to have experienced a coup since 2019 to nine. In Burkina Faso, which has been under military rule since a 2022 coup, the score declined by 5 points as state security forces and junta-sponsored militias engaged in mass killings and forced displacement of Fulani civilians, while Islamist insurgents attacked people of other faiths and imposed their own religious practices in areas under their control.

Tanzania registered the second most significant deterioration in rights and liberties in 2025, losing 7 points and sinking further into the Not Free category. The incumbent president, Samia Suluhu Hassan, was declared the winner of an election marred by the exclusion of opposition candidates, restrictions on the media, a campaign of forced disappearances of political opponents, and widespread violence against protesters that resulted in at least 1,000 deaths. El Salvador tied with Madagascar for the third largest decline in the world, losing 5 points. Salvadoran authorities persecuted high-profile academics who were critical of the government, threats against the media drove journalists into exile, and the government seized land without providing compensation. The Legislative Assembly, dominated by President Nayib Bukele’s Nuevas Ideas party, passed a constitutional reform that abolished presidential term limits and extended the terms from five to six years, clearing the way for Bukele to seek reelection indefinitely…(More)”.

The Growing Shadow of Autocracy

Article by Ioannes Chountis de Fabbri: “Jürgen Habermas’ enduring work began in the coffee-houses of Georgian London. His deepest insight was, in the end, a conservative one.

Georgian London had around 3,000 coffee houses. For a penny a cup, a merchant, a shopkeeper or a gentleman could sit down together, read the newspapers spread before them, and argue about the affairs of Parliament, the conduct of the war against France, or the merits of the latest edition of Joseph Addison and Richard Steele’s Spectator. The most powerful account of this world, and of its destruction, was perhaps unexpectedly written not by an English historian but by a German philosopher born into a provincial middle-class household in North Rhine-Westphalia, who died on Saturday in the Bavarian town of Starnberg at the age of 96.

Jürgen Habermas was born in 1929 into the kind of provincial, educated German household that had neither resisted nor much assisted Hitler’s regime. At 15 he was sent to the Western Front in the last chaotic months of the war. What followed shaped his cultural and political outlook: the Nuremberg revelations, the slow reckoning with what had been done in Germany’s name against the Jews, and a conviction, arrived at young and never abandoned, that the liberal constitutional state as it had developed in the English-speaking world represented a genuine civilisational achievement. Years later he described himself, with characteristic precision, as a ‘product of re-education’…(More)”.

Jürgen Habermas’ lost world: the coffee-house and the public sphere

Article by Arthur Mensch: “Europe is a land of creators. The continent has nurtured ideas that have enriched, and continue to enrich, the world’s intellectual and creative landscape. Its diverse and multilingual heritage remains one of its greatest strengths, central not only to its identity and soft power but also to its economic vitality.

All this is at risk as AI reshapes the global knowledge economy.

Major AI companies in the US and China are developing their models under permissive or non-existent copyright rules, training them domestically on vast amounts of content — including from European sources.

European AI developers, by contrast, operate in a fragmented legal environment that places them at a competitive disadvantage. The current opt-out framework, designed to let rights holders withhold their content from AI training on request, has proven unworkable in practice. Copyrighted works continue to spread uncontrollably online, while the legal mechanisms designed to protect them remain patchy, inconsistently applied and overly complex.

The result is a framework that satisfies no one. Rights holders correctly fear for their livelihoods yet see no clear path to protection. AI developers face legal uncertainty that hampers investment and growth.

Europe needs to explore a new approach.

At Mistral, we are proposing a revenue-based levy that would be applied to all commercial providers placing AI models on the market or putting them into service in Europe, reflecting their use of content publicly available online…(More)”.

AI companies should pay a content levy in Europe

Article by Barrett and Greene: “Since GenAI first appeared on the scene in late 2022, both its benefits and hazards have been chronicled in multiple places, including this website. Advantages of AI play out on a daily basis, providing cities and counties with quicker results, increased staff efficiency, and improved government-resident communications.

But as generative AI use took off, media reports surfaced of fabrications delivered in response to prompts (known as hallucinations) and factual errors that were embarrassing and sometimes costly for governments and their vendors.  

“If you don’t have a strategy or plan in place for how you deal with AI hazards, you’re going to get in trouble very fast,” says Brian Funderburk, an advocate for the responsible use of AI in government, and a retired city manager in Texas with 40 years of experience in local government.

The litany of problematic uses of AI seems to grow every day as its use expands. Just for starters, there have been fictitious precedents cited in legal cases. Chatbot errors have also surfaced with some frequency, notably in the much-heralded chatbot for businesses developed by New York City in the fall of 2023, which was roundly criticized the following spring for giving business callers incorrect information and sometimes advising them to engage in illegal behavior.

Multiple companies have had to deal with the consequences of AI mistakes, including Deloitte, which agreed to refund the equivalent of $290,000 in U.S. dollars to the Australian government for a report “that was littered with apparent AI-generated errors,” according to an AP News report.

Although the hallucinations that AI can conjure have diminished to some extent, the continuing threat of errors requires extensive double-checking and triple-checking by the humans who bear responsibility for what’s produced. “It will be a while before we can trust AI unconditionally,” says Funderburk, who is currently Vice President and AI Safety Officer at Civic Marketplace…(More)”.

AI Hazards and Guard Rails

Book edited by Crystal Chokshi and Robin Mansell: “This book is about words that fool us into thinking that the digital technologies we use every day are beautiful, benign, and consequence-free. The collection shows how metaphors used by Big Tech to promote digital technologies are reductive or misleading. With a commitment to social justice, the contributors rename digital technologies in order to subvert Big Tech’s branding. Each chapter discusses a specific technology, rechristening it in a way that points explicitly to the social and political harms it is associated with. The alternative vocabularies that are proposed draw attention to what these technologies bring about, providing a means of resisting Silicon Valley’s claims about what people and organisations should buy and experience…(More)”.

The Need to Rename Tech

Paper by Ricardo Coelho Da Silva, Leid Zejnilović, Marco Berti, Miguel Pina e Cunha and Pedro Oliveira: “When crises strike, new forms of emergent organizing often arise to address urgent societal needs that formal institutions struggle to meet. Among these, emergent response groups (ERGs)—self-organized communities that form to respond to unexpected and extreme events—offer a particularly salient example of decentralized and nonhierarchical organizing. This multicase study investigates eight ERGs that formed during the COVID-19 pandemic to design and distribute critical medical supplies. Drawing on sensemaking theory, we show how bricolage—making do with at-hand resources—supports coordination and community structuring by reducing equivocality caused by distributed actors. Our findings describe how these ERGs grew rapidly by using bricolage to reduce action, goal, and resource equivocality, enabling coordinated and scalable crisis response efforts. We contribute to research on emergent organizing in crisis contexts by revealing how bricolage fosters coherence and rapid scaling in the absence of formal hierarchies. Our study also challenges the dominant assumption that bricolage is inherently limiting to organizational growth, showing that—in the context of self-organizing collectives—it offers a novel solution to the problem of coordinating action among distributed agents…(More)”.

Bricolage as Enacted Sensemaking in Emergent Response Groups: Organizing in Conditions of Extreme Equivocality
