Stefaan Verhulst
Paper by Oliver Escobar and Adrian Bua: “The world faces social, political, economic, and ecological crises, and there is doubt that democratic governance can cope. Democracies rely on a narrow set of institutions and processes anchored in dominant forms of political organisation and imagination. Power inequalities sustain the (re)production of current ills in democratic life. In this context, what does the field of democratic innovation offer to the task of sociopolitical reimagining and change? The field has advanced since the turn of the century, building foundations for democratic renewal. It draws from various traditions of democracy, including participatory and deliberative streams. But there is concern that a non-critical version of deliberative democracy is becoming hegemonic. Deliberative theory generated useful correctives to participatory democracy – that is, a deeper understanding of the communicative fabric of the public sphere as worthy of democratisation; public reasoning as a bridge-builder between streets and institutions and a key precursor to democratic collective action. However, we argue that democratic innovation now needs a participatory corrective to strengthen its potential to mobilise capacity for change. We review emerging critiques in conversation with participatory ideas and practices, illustrating our argument with four gaps in democratic innovation that can become field-expanding dimensions to deliver emancipatory change more effectively: pluriversality, policy, political economy, and empowerment…(More)”.
Paper by Matthew Liao et al: “As AI and digital technologies advance rapidly, governance frameworks struggle to keep pace with emerging applications and risks. This paper introduces a “5W1H” framework to systematically analyze AI governance proposals through six key questions: What should be regulated (data, algorithms, sectors, or risk levels), Why regulate (ethics, legal compliance, market failures, or national interests), Who should regulate (industry, government, or public stakeholders), When regulation should occur (upstream, downstream, or lifecycle approaches), Where it should take place (local, national, or international levels), and How it should be enacted (hard versus soft regulation). The framework is applied to compare the European Union’s AI Act with the current US regulatory landscape, revealing the EU’s comprehensive, risk-based approach versus America’s fragmented, sector-specific strategy. By providing a structured analytical tool, the 5W1H framework helps policymakers, researchers, and stakeholders navigate complex AI governance decisions and identify areas for improvement in existing regulatory approaches…(More)”.
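For readers who think in code, the 5W1H questions can be treated as a simple comparative schema. The sketch below is illustrative only: the field values for the EU AI Act and the US landscape are paraphrased from the abstract above, not drawn from the paper itself.

```python
from dataclasses import dataclass

@dataclass
class GovernanceProfile:
    """One row of a 5W1H comparison of AI governance proposals."""
    what: str    # object of regulation: data, algorithms, sectors, or risk levels
    why: str     # rationale: ethics, legal compliance, market failures, or national interests
    who: str     # regulator: industry, government, or public stakeholders
    when: str    # point of intervention: upstream, downstream, or lifecycle
    where: str   # level: local, national, or international
    how: str     # instrument: hard versus soft regulation

# Illustrative encodings, paraphrasing the abstract's EU/US comparison
eu_ai_act = GovernanceProfile(
    what="risk levels", why="ethics and legal compliance", who="government",
    when="lifecycle", where="EU-wide", how="hard regulation",
)
us_landscape = GovernanceProfile(
    what="sectors", why="market failures and national interests", who="mixed, fragmented",
    when="downstream", where="national and state", how="mostly soft regulation",
)
```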
Paper by Amit Misra, Kevin White, Simone Fobi Nsutezo, William Straka III & Juan Lavista: “Floods cause extensive global damage annually, making effective monitoring essential. While satellite observations have proven invaluable for flood detection and tracking, comprehensive global flood datasets spanning extended time periods remain scarce. In this study, we introduce a deep learning flood detection model that leverages the cloud-penetrating capabilities of Sentinel-1 Synthetic Aperture Radar (SAR) satellite imagery, enabling consistent flood extent mapping through cloud cover and in both day and night conditions. By applying this model to 10 years of SAR data, we create a unique, longitudinal global flood extent dataset with predictions unaffected by cloud coverage, offering comprehensive and consistent insights into historically flood-prone areas over the past decade. We use our model predictions to identify historically flood-prone areas in Ethiopia and demonstrate real-time disaster response capabilities during the May 2024 floods in Kenya. Additionally, our longitudinal analysis reveals potential increasing trends in global flood extent over time, although further validation is required to explore links to climate change. To maximize impact, we provide public access to both our model predictions and a code repository, empowering researchers and practitioners worldwide to advance flood monitoring and enhance disaster response strategies…(More)”.
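The core pipeline the abstract describes, running a segmentation model over Sentinel-1 backscatter tiles to obtain per-pixel flood probabilities, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the tile path, the trained model, and the normalization are placeholders, not the authors' released code.

```python
# Minimal sketch of SAR-based flood mapping with a trained binary
# segmentation network; the tile path, model weights, and normalization
# are placeholders rather than the authors' released artifacts.
import numpy as np
import rasterio
import torch

def flood_mask(sar_tile_path: str, model: torch.nn.Module, threshold: float = 0.5) -> np.ndarray:
    """Return a boolean flood-extent mask for one Sentinel-1 backscatter tile."""
    with rasterio.open(sar_tile_path) as src:
        vv = src.read(1).astype("float32")            # VV backscatter band
    x = (vv - vv.mean()) / (vv.std() + 1e-6)          # simple per-tile normalization
    x = torch.from_numpy(x)[None, None, ...]          # shape (1, 1, H, W)
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()  # per-pixel flood probability
    return prob > threshold                           # boolean flood extent

# Applying the same call across a decade of tiles, unaffected by cloud cover,
# yields the kind of longitudinal flood-extent stack the paper describes.
```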
Paper by Lucy van Eck et al: “Public sector innovation labs (PSI-labs) are emerging as experimental spaces where governments attempt to generate knowledge for navigating uncertain, technology-driven futures. However, the knowledge they produce often remains “liquid”: relational and difficult to embed in traditional bureaucratic structures. This paper investigates these tensions through an ethnographic study of Vonk, Rotterdam’s digital innovation lab, which prepares the municipality for emerging digital technologies in policymaking and service delivery.
Based on over 200 hours of participant observation and 15 interviews, it examines how knowledge is created, shared, and embedded – or fails to be. Employing Hans Christian Andersen’s The Emperor’s New Clothes as a metaphor, the analysis highlights the relational and processual nature of knowledge in PSI-labs.
The findings reveal that PSI-labs hold potential for future-oriented governance, but face challenges in translating and embedding their “liquid” knowledge. We argue that knowledge becomes actionable through enactment within dynamic actor-networks. Knowledge is thus not merely a product of PSI-labs, but a shared accomplishment that materialises in the “doing”. This paper argues for strategic mechanisms to ensure the visibility and usability of such knowledge. By combining ethnographic insights with creative storytelling, it offers fresh perspectives on the governance of public sector innovation…(More)”.
Blog by Rainer Kattel: “The UK government published last week the Public Design Evidence Review (PDER), an ambitious attempt to answer a deceptively simple question: How do we create better public policies and services that consistently achieve their intended outcomes? One of the answers, the report argues, lies in public design — a term the report introduces… public design fundamentally challenges modernist assumptions about how governments should work: it questions and expands the idea that politics is about representation and that bureaucracy is about neutral expertise. Instead, it imagines governance as a dynamic, participatory, and creative process, as summarised in the figure below from the PDER report.

Despite these promising ideas and examples, public design remains underdeveloped as a system-wide public practice. Evidence is often limited to individual case studies, with few robust measures of impact, especially on systemic change. There are brilliant cases like Dan Hill’s work at the Swedish innovation agency, Vinnova. But design roles are mostly still not embedded across the civil service. Toolkits are scattered. Teams often lack shared job descriptions or metrics to evaluate success.
That’s why the Public Design Evidence Review is so important. It systematises the scattered evidence, identifies promising practices, and points toward what needs to change.
To make public design transformative, we need to learn from the digital transformation journey. That means:
- Standardising design roles in government job descriptions and team structures
- Scaling access to design toolkits across departments and agencies
- Measuring impact not just in outputs but in terms of systemic change, dynamic capabilities, and long-term value creation…(More)”.
Article by Eric Mosley: “Every organization wants better people data. This information about employee satisfaction and engagement is often used by organizations to assess and improve company culture. But how does the way we collect people data affect its ultimate value to the organization?
In the race to use artificial intelligence (AI), many organizations have defaulted to a familiar mindset around data: Collect everything and sort it out later. But most Americans are uneasy about how companies use their data and are resigned to feeling that they’ve lost control, according to a Pew Research Center survey. And nearly 68% of consumers globally say they are either somewhat or very concerned about their privacy online. These kinds of feelings are dangerous because trust evaporates when people feel watched rather than respected.
From quiet monitoring to inferred behaviour, the rise of passive data mining is triggering a backlash. Some people are setting their own boundaries by asking companies not to track their clicks, mine their Slack or email messages, or make their data part of the company’s algorithm without consent.
If we want people to trust AI systems – or the organizations building them – we need to start with data practices that earn that trust. That means moving from pure extraction to something more cooperative, human and voluntary…(More)”.
Book by David W. Galenson: “When in their lives are innovators most creative and why? This book summarizes more than two decades of research prompted by this question. The result is an authoritative statement of a new unified theory of creativity that overturns both popular and scholarly beliefs about the sources of human inventiveness. David Galenson shows that there are two distinctly different kinds of creativity in virtually every discipline. They result from very different goals and methods, and each produces a specific pattern of discovery over the life cycle. Conceptual innovators make bold leaps to formulate new ideas. The most radical conceptual innovations are made by brash young geniuses, who often lose their creativity thereafter. Great conceptual innovators analyzed in this book include Pablo Picasso, Albert Einstein, Orson Welles, Sylvia Plath, Andy Warhol, Bob Dylan, and Steve Jobs. Experimental innovators make discoveries gradually and unobtrusively, through careful observation and generalization. They gain knowledge over time and make their greatest contributions late in their lives. Great experimental innovators considered in this book include Paul Cézanne, Charles Darwin, Virginia Woolf, Robert Frost, Alfred Hitchcock, John Coltrane, and Warren Buffett. From analysis of the careers of scores of artists, scholars, and entrepreneurs, this book provides a new understanding of the creative processes of great innovators and reveals the systematic patterns that underlie the two life cycles of creativity. It will be of interest to anyone who seeks a deeper understanding of the sources of human creativity…(More)”.
Blog by Qhala: “…In AI, benchmarks are the gold standard for evaluation. They are used to test whether large language models (LLMs) can reason, diagnose, and communicate effectively. In healthcare, LLMs are tested against benchmarks before they’re considered “safe” for clinical use.
But here’s the problem: These benchmarks are primarily built for Western settings. They reflect English-language health systems, Western disease burdens, and datasets scraped from journals and exams thousands of kilometres away from the real-world clinics of Kisumu, Kano, or Kigali.
A study in Kenya found over 90 different clinical guidelines used by frontline health workers in primary care. That’s not chaos, it’s context. Medicine in Africa is deeply localised, shaped by resource availability, epidemiology, and culture. When a mother arrives with a feverish child, a community nurse doesn’t consult the United States Medical Licensing Examination (USMLE). She consults the local Ministry of Health protocol and speaks in Luo, Hausa, or Amharic.
In practice, human medical doctors have to go through various levels of rigorous, context-based, localised assessment before they can practise in a country and in a specific specialisation. These licensing exams aren’t arbitrary; they’re tailored to national priorities, clinical practices, and patient populations. They acknowledge that even great doctors must be assessed in context. These assessments are mandatory and treated as obvious logic when it comes to human clinicians. A Kenyan-trained doctor must pass the United States Medical Licensing Examination (USMLE) to practise in the United States; in the United Kingdom, the equivalent is the Professional and Linguistic Assessments Board (PLAB) test; in Australia, it is the Australian Medical Council (AMC) examination.
However, unlike these nationally ratified assessments for humans, LLM benchmarks (and, by extension, the LLMs and health AI tools evaluated against them) are neither created for local realities nor reflective of the local context.
…Amidst the limitations of global benchmarks, a wave of important African-led innovations is starting to reshape the landscape. Projects like AfriMedQA represent some of the first structured attempts to evaluate large language models (LLMs) using African health contexts. These benchmarks thoughtfully align with the continent’s disease burden, such as malaria, HIV, and maternal health. Crucially, they also attempt to account for cultural nuances that are often overlooked in Western-designed benchmarks.
But even these fall short. They remain Anglophone…(More)”.
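To make concrete what “testing an LLM against a benchmark” involves in the discussion above, here is a minimal sketch of how multiple-choice clinical items, whether USMLE-style or AfriMedQA-style, are typically scored. The items and the `ask_llm` function are hypothetical placeholders, not any specific dataset or model API.

```python
from typing import Callable

def score_benchmark(items: list[dict], ask_llm: Callable[[str], str]) -> float:
    """Fraction of multiple-choice items where the model picks the keyed answer."""
    correct = 0
    for item in items:
        prompt = item["question"] + "\n" + "\n".join(
            f"{label}. {text}" for label, text in item["options"].items()
        )
        reply = ask_llm(prompt).strip().upper()
        correct += reply[:1] == item["answer"]   # compare the first letter returned
    return correct / len(items)

# The scoring loop is the easy part; what makes a benchmark locally grounded
# is the item set itself: disease burden, language, and national guidelines.
```

The design point sits in the data rather than the loop: swapping the item set, its languages, and its reference guidelines is what turns a generic benchmark into a contextually valid one.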
British Council: “From telling stories that seed future breakthroughs to diversifying AI datasets, artists reimagine what technologies can be, and who they can be for. This publication creates an international evidence base for this argument. 56 leaders in art and technology have offered 40 statements, spanning 20 countries and 5 continents. As a collection, they articulate artists, the cultural sector and creative industries as catalysing progressive innovation with cultural diversity, human values, and community at its core.
Responses include research leads from Adobe, Lelapa AI and Google, who detail the contribution artists make to the human-centric development of high-growth technologies. UK institutions like Serpentine and FACT, and LAS Art Foundation in Germany show cultural organisations are essential spaces for progressive artist-led R&D. Directors of TUMO Centre for Creative Technologies in Armenia, and Diriyah Art Futures in Saudi Arabia highlight education across art and technology as a source of skills for the future. Leaders of African Digital Heritage in Kenya and the Centre for Historical Memory in Colombia demonstrate how community ownership of technologies for heritage preservation increases network resilience. Artists such as Xu Bing in China and Libby Heaney in the UK present art as a site for public demystification of complex technologies, from space satellites to quantum computing.
The perspectives presented in this publication serve as a resource for policy making and programme development spanning art and technology. Global in scope, they offer case studies that highlight why innovation needs artists, on both a national and international scale…(More)”.
Paper by Alex Fischer et al: “While the Sustainable Development Goals (SDGs) were being negotiated, global policymakers assumed that advances in data technology and statistical capabilities, what was dubbed the “data revolution”, would accelerate development outcomes by improving policy efficiency and accountability. The 2014 report to the United Nations Secretary General, “A World That Counts” framed the data-for-development agenda, and proposed four pathways to impact: measuring for accountability, generating disaggregated and real-time data supplies, improving policymaking, and implementing efficiency. The subsequent experience suggests that while many recommendations were implemented globally to advance the production of data and statistics, the impact on SDG outcomes has been inconsistent. Progress towards SDG targets has stalled despite advances in statistical systems capability, data production, and data analytics. The coherence of the SDG policy agenda has undoubtedly improved aspects of data collection and supply, with SDG frameworks standardizing greater indicator reporting. However, other events, including the response to COVID-19, have played catalytic roles in statistical system innovation. Overall, increased financing for statistical systems has not materialized, though planning and monitoring of these national systems may have longer-term impacts. This article reviews how assumptions about the data revolution have evolved and where new assumptions are necessary to advance the impact across the data value chain. These include focusing on measuring what matters most for decision-making needs across polycentric institutions, leveraging the SDGs for global data standardization and strategic financial mobilization, closing data gaps while enhancing policymaker analytic capabilities, and fostering collective intelligence to drive data innovation, credible information, and sustainable development outcomes…(More)”.