Stefaan Verhulst
Book edited by Aleksi Aaltonen, Marta Stelmaszak, and Kalle Lyytinen: “…explores the function and impact of digital data on various spheres of organizational and social life. It examines essential research across disciplines, including management, sociology, and economics, establishing a foundational understanding of the increasing importance of digital data in contemporary society.
By situating its chapters within the layers of a digital data stack, this unique Research Handbook not only offers a diversity of perspectives and approaches, but also provides a structure for cumulative insight. Leading scholars analyse and interpret the creation, governance, and utilization of data, covering key topics such as machine learning, data heterogeneity, temporal fragilities in data sharing, and blockchain finance. Ultimately, this Research Handbook highlights how the kaleidoscopic nature of digital data gives rise to multiple competing realities, making it a reference point for future scholarship…(More)”.
Article by Jeffrey Parsons, Roman Lukyanenko, Brad N. Greenwood, and Caren B. Cooper: “We live in an age of unprecedented opportunities to use existing data for tasks not anticipated when those data were collected, resulting in widespread data repurposing. This commentary defines and maps the scope of data repurposing to highlight its importance for organizations and society and the need to study data repurposing as a frontier of data management. We explain how repurposing differs from original data use and data reuse and then develop a framework for data repurposing consisting of concepts and activities for adapting existing data to new tasks. The framework and its implications are illustrated using two examples of repurposing, one in healthcare and one in citizen science. We conclude by suggesting opportunities for research to better understand data repurposing and enable more effective data repurposing practices…(More)”.
Article by Rebecca Mbaya: “What happens when AI reads African data through the wrong frame and no one in the room knows enough to notice?
The output was clean. Structured. Confident. The generative AI tool had processed survey responses from 191 respondents and returned a set of neatly labelled themes. One of them appeared repeatedly across the data: “Misinformation Resistance.”
I stared at it for a long time.
The survey was about perceptions of Fourth Industrial Revolution technologies (AI, IoT, blockchain) in a specific Congolese context. I had collected the data, and I understood the political and historical texture of the community being studied. So what the AI tool (ChatGPT) had labelled “Misinformation Resistance” was not that. Not even close.
What the responses actually reflected was something more specific, more historically grounded, and entirely rational: a deep, politically informed distrust of institutions. A community whose relationship with governance (colonial administration, post-independence instability, extractive foreign intervention, cycles of conflict) gave them every reason to be skeptical of new technologies promising transformation. This was a coherent epistemic posture developed over generations of having good reasons not to trust. The AI tool had taken a political trust phenomenon and filed it under cognitive bias. It had done this cleanly, confidently, and without any visible indication that something had gone wrong.
That gap between what the model produced and what the data actually meant was only visible to me because I knew the context. Which raises a question that I have not been able to stop thinking about: what happens in all the cases where no one in the room does?…(More)”.
Article by Nicola Jones: “The escalating conflict between the United States, Israel and Iran has thrown a spotlight on the use of artificial intelligence in warfare. Just one day before the US–Israeli offensive began on 28 February, the US government sidelined one of its main AI suppliers as part of a disagreement that underlines ethical concerns about AI’s use.
And this week, academics and legal experts are meeting in Geneva, Switzerland, to discuss lethal autonomous weapons systems and the procurement of AI in the military, as part of long-running efforts to arrive at an international agreement on the ethical or legal uses of AI in warfare.
Rapid technological development is outpacing slow international discussions, says political scientist Michael Horowitz at the University of Pennsylvania in Philadelphia.
“The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent,” says Craig Jones, a political geographer at Newcastle University, UK, who researches military targeting…
The US military uses AI based on large language models (LLMs) for logistical and office support, intelligence gathering and analysis, and decision support on the battlefield, says Horowitz. The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets, for example. The system has been used in previous conflicts and in the attacks on Iran, according to reports from the Washington Post and other news outlets. “The details are not publicly known,” Horowitz says…(More)”.
Paper by DemNext: “Africa faces a paradox. Most people continue to support democratic institutions, even as their satisfaction with those institutions’ ability to deliver inclusive economic prosperity and accountable, responsive governance declines. Citizens’ assemblies offer a way forward, providing the opportunity to draw on indigenous traditions of sustained deliberation and consensus-building to tackle complex policy problems.
In this paper, we explore how citizens’ assemblies can be adapted to Africa’s diverse contexts by drawing on real-world experiences across the continent. We begin by outlining the civic strengths and cultural traditions that underpin deliberative democracy in Africa, before reviewing emerging deliberative experiments – including citizens’ assemblies – that illustrate their potential. We introduce an analytical framework to assess the strengths and limitations of citizens’ assemblies and apply it to case studies from Mali, Malawi, and The Gambia. Finally, we highlight insights from an upcoming citizens’ assembly in South Africa.
The paper serves two purposes: advancing theoretical frameworks for evaluating deliberative processes in the Global South, and offering practical guidance to foster experimentation and collaboration in democratic innovation across these contexts. Rather than proposing a single model, we identify context-sensitive strategies that help citizens’ assemblies bridge Africa’s democratic delivery gap, while building on longstanding traditions of collective decision making…(More)”.
Article by Stefaan Verhulst: “The world has become more complex, more dynamic and more interconnected than ever before. The challenges we face – from health to climate, from democratic resilience to economic transformation – are deeply intertwined. And we need new ideas to meet these challenges.
Europe has never lacked intellectual ambition, but ideas alone aren’t enough. To make real progress, we need breakthrough discoveries. We need evidence of what works. And we need the institutional capacity to test, validate and scale solutions across borders and disciplines.
That’s where science comes in. Yet good science depends on data. And if we want AI to supercharge discovery and transform science, then data becomes even more important.
The ‘datafication’ of society
Digitalisation has led to an unprecedented datafication of society. When citizens engage with government services, visit a doctor, use a mobility platform, shop online or measure their steps and/or sleep through wearable devices, data are generated.
But this datafication doesn’t stop with individual behaviour. It extends deep into the productive fabric of our economies. Manufacturing systems, industrial supply chains, logistics networks, energy grids and robotic production lines are now embedded with sensors, connected devices and intelligent control systems. The implication is profound – data is no longer a by-product of digital services alone. It’s a structural feature of both our digital and physical infrastructures.
The remarkable feature of digital data isn’t merely its volume. It’s its reusability. When done responsibly, data created for one purpose can often be reused for entirely different objectives – including scientific research.
But there’s a fundamental constraint: access. Much of today’s most valuable data remains locked away in institutional stovepipes – within government agencies, universities and private companies. Despite its public value potential, it often remains inaccessible to scientists and public interest actors.
Europe has taken important steps to address this data asymmetry. Open data policies have expanded transparency. The Data Governance Act and the Data Act seek to facilitate data sharing and rebalance power in data markets. Article 40 of the Digital Services Act creates pathways for vetted researchers to access platform data. The European Open Science Cloud seeks to enable the sharing of scientific data. Sectoral data spaces – including those envisioned under the European Health Data Space – and Data Labs aim to provide structured, interoperable infrastructures for data access and use.
Yet instead of a steady expansion of access, we’re now witnessing a ‘data winter.’ Access to private sector data for research has declined in several domains. Open government data initiatives have slowed or been rolled back. Scientific datasets have become restricted or have disappeared. Open science has struggled to scale beyond pilot projects. And broader political retrenchment risks weakening some of the very infrastructures designed to enable responsible reuse.
Generative AI’s rapid expansion has also triggered backlash. Large-scale data scraping for AI training has blurred the line between openness and extraction. Consequently, institutions and content creators have become more protective, sometimes closing access altogether. And without reliable access to diverse, high-quality data, scientific progress risks stagnation.
What should Europe do? Three priorities stand out.
Access shouldn’t be only supply-driven
For too long, data policy has focused on releasing datasets without clearly articulating the questions they’re meant to answer. But the value of data – and increasingly the value of AI – depends directly on the value of the question.
In short, better questions define better discovery.
If we want to unlock meaningful access, we must invest in what might be called ‘question science’ – the systematic identification of high-priority societal questions; the structuring of those questions so they are researchable and actionable; the mapping of those questions to existing or potential data sources; and the embedding of those questions in funding frameworks, governance mandates and institutional strategies.
When demand is vague, access debates remain abstract. When questions are clear, access becomes purposeful. Researchers, policymakers and data holders can align around concrete objectives. This requires structured, participatory processes that bring scientists, communities, funders and regulators together to define and prioritise the questions that matter most…(More)”.
Article by Melissa A. Haendel et al: “It can take many years for evidence generated in research to influence health care guidelines. Meanwhile, the vast data collected during everyday life, particularly during engagement with the health care system, remain largely untapped for public health, precision medicine, postmarket safety, and real-time decision-making. These “real-world data” (RWD) remain fragmented, proprietary, noninteroperable, and inconsistently governed. Although some approaches to RWD have demonstrated value in limited settings, their impact has remained constrained by uneven incentives, voluntary compliance, and the absence of routine auditability of data access and use. To address this, health data should be governed through federated, standards-based, community-driven models that reflect their public benefit, empower patients and communities, and foster public trust and participation. To help achieve these goals, we propose governing health data as essential infrastructure by using public utility models, defined by their public good, distributed stewardship, and public oversight.
Recently, the US Advanced Research Projects Agency for Health (ARPA-H) requested information on economic models that could lower barriers to data access, enable research, and compensate vendors to realize a health data public utility. Existing infrastructure already includes distributed data networks, publicly funded research enclaves, and privately funded platforms, and recent public and private investments have created new entrants that are improving access to health data. We argue, however, that these efforts remain limited by fragmented incentives and governance and must be complemented by reimagining legal, regulatory, and economic policies under a public utility model. Although most challenges and examples described here are drawn from the United States, the underlying lessons and opportunities are globally applicable. In this context, a public utility model addresses persistent barriers to integration, investment, and governance by converting voluntary participation into enforceable obligations, aligning financial incentives with interoperability, and embedding accountability within continuous public oversight…(More)”.
Book by Benjamin Recht: “In the 1940s, mathematicians set out to design computers that could act as ideal rational agents in the face of uncertainty. The Irrational Decision tells the story of how they settled on a peculiar mathematical definition of rationality in which every decision is a statistical question of risk. Benjamin Recht traces how this quantitative standard came to define our understanding of rationality, looking at the history of optimization, game theory, statistical testing, and machine learning. He explains why, now more than ever, we need to resist efforts by powerful tech interests to drive public policy and essentially rule our lives.
While mathematical rationality has proven valuable in accelerating computers, regulating pharmaceuticals, and deploying electronic commerce, it fails to solve messy human problems and has given rise to a view of a rational world that is not only overquantified but surprisingly limited. Recht shows how these mathematical methods emerged from wartime research and influenced fields ranging from economics to health care, drawing on illuminating examples ranging from diet planning to chess to self-driving cars.
Highlighting both the power and limitations of mathematical rationality, The Irrational Decision reveals why only humans can resolve fundamentally political or value-based questions and proposes a more expansive approach to decision making that is appropriately supported by computational tools yet firmly rooted in human intuition, morality, and judgment…(More)”.
Blog by Cosima Lenz, Stefaan Verhulst, and Roshni Singh: “In February, The Governance Lab and CEPS convened researchers, policymakers, funders and advocates to advance the next phase of the 100 Questions Initiative: shifting from identifying priorities to operationalising and institutionalising them within the EU.
Below are twelve takeaways for EU stakeholders.
1. Institutionalise question-driven research
Questions determine what’s measured, funded and prioritised. Questions about women’s health must be embedded upstream within EU research frameworks. This could include requiring funded proposals to outline a clear ‘question statement’, alongside establishing a public European Women’s Health Question Catalogue to guide calls, policy design and investment.
2. Frame women’s health as a competitiveness priority
Even with over EUR 2 billion invested across 1,000+ projects under Horizon 2020 and Horizon Europe, there are still gaps. Positioning women’s health innovation as a competitiveness driver would align political, financial and private-sector incentives while strengthening Europe’s global leadership.
3. Embed women’s health in the Multiannual Financial Framework (MFF)
Women’s health should be explicitly integrated across EU instruments, including the upcoming MFF. Ring-fenced funding, targeted calls and dedicated innovation challenges, potentially via the European Innovation Council, would improve accountability and reduce fragmentation…(More)”.
Paper by Ivan Decostanzi, Yelena Mejova and Kyriaki Kalimeri: “Timely and accurate situational reports are essential for humanitarian decision-making, yet current workflows remain largely manual, resource-intensive, and inconsistent. We present a fully automated framework that uses large language models (LLMs) to transform heterogeneous humanitarian documents into structured and evidence-grounded reports. The system integrates semantic text clustering, automatic question generation, retrieval-augmented answer extraction with citations, multi-level summarization, and executive summary generation, supported by internal evaluation metrics that emulate expert reasoning. We evaluated the framework across 13 humanitarian events, including natural disasters and conflicts, using more than 1,100 documents from verified sources such as ReliefWeb. The generated questions achieved 84.7 percent relevance, 84.0 percent importance, and 76.4 percent urgency. The extracted answers reached 86.3 percent relevance, with citation precision and recall both exceeding 76 percent. Agreement between human and LLM-based evaluations surpassed an F1 score of 0.80. Comparative analysis shows that the proposed framework produces reports that are more structured, interpretable, and actionable than existing baselines. By combining LLM reasoning with transparent citation linking and multi-level evaluation, this study demonstrates that generative AI can autonomously produce accurate, verifiable, and operationally useful humanitarian situation reports…(More)”.