Stefaan Verhulst
Article by Thomas B. Edsall: “Sixteen years ago, Peter Thiel, the multibillionaire co-founder of PayPal and Palantir Technologies, was strikingly prescient. Speaking at the 2010 Libertopia conference in San Diego, Thiel, who would go on to bankroll JD Vance’s entry into politics, told the gathering:
We could never win an election on getting certain things because we were in such a small minority, but maybe you could actually unilaterally change the world without having to constantly convince people and beg people and plead with people who are never going to agree with you through technological means, and this is where I think technology is this incredible alternative to politics.
Sometime in the not-too-distant future, Thiel and his tech allies may well have no need to win an election to exert control of the United States and other nations.
As artificial intelligence — led by Nvidia, Microsoft, Alphabet, Meta, Amazon, OpenAI and Anthropic — drives to become the nation’s dominant industry, one of the most pressing questions is how technology is affecting, if not supplanting, politics, potentially diminishing the centrality of elections.
Even more important: Will A.I. continue to increase the concentration of market, political and cultural power, undermining democratic control of the economic and social order? To what degree will A.I. exacerbate inequality?
And will A.I., empowered to operate beyond the reach of public institutions and the electorate, in effect transfer government control and regulatory authority to private corporations, political cadres or both?…(More)”.
Article by Anna Desmarais: “Experts are sounding the alarm over fresh threats to Middle Eastern data centres, warning that this month’s inaugural reported strikes signal a dangerous new trend.
Amazon said two of its data centres in the United Arab Emirates were hit by drone strikes on March 1 and a third centre in Bahrain was damaged by debris from a nearby strike.
Iran’s Islamic Revolutionary Guard Corps (IRGC) claimed responsibility for the attacks, telling state media that the attacks were aimed at identifying the role of these centres in supporting the enemy’s military and intelligence activities.
Analysts say these may be some of the first known physical attacks on data centres, the buildings that hold the infrastructure powering everything from banking apps to cloud services and artificial intelligence (AI) platforms.
Amazon declined to comment further on the attacks in the Middle East, referring Euronews Next to a health dashboard. As of March 11, several Amazon services are still unavailable or disrupted for customers in the UAE and Bahrain.
Why are data centres a target?
“It’s very likely that data centres will be targeted in the future,” said Vincent Boulanin, director of the governance of AI programme at the Stockholm International Peace Research Institute (SIPRI).
Boulanin said he was not surprised that Iran had mounted attacks against data centres in the United Arab Emirates and Bahrain. Data centres power AI by providing the computing power, storage and high-speed internet needed to train the models.
“Data centres are a critical building block of AI capabilities at the national level,” Boulanin said. “From that perspective, data centres can be considered a very critical infrastructure.”..(More)”.
Report by James Tebrake, El Bachir Boukherouaa, Jeff Danforth, and Nivashini Harikrishnan: “National statistical systems generate the statistics that underpin policy, economic analysis, and public trust. Yet, despite decades of investment in statistical capacity, two persistent challenges, data accessibility and interpretability, limit the impact of these official statistics. The rise of large language models (LLMs) and GenAI applications such as ChatGPT and Gemini appeared to offer a solution by enabling users to retrieve statistics using natural language. However, testing demonstrates that while the GenAI applications excel at synthesizing text, they perform poorly at delivering official statistics: they frequently provide dangerously “reasonable” but incorrect figures. This paper introduces StatGPT, an initiative by the IMF Statistics Department that leverages LLMs not to generate statistics, but to generate structured queries against APIs of official statistical agencies. StatGPT ensures that users receive the exact published figures, every time, while benefiting from natural language interaction. This paper examines the limitations of off-the-shelf GenAI applications, outlines how StatGPT overcomes these limitations, and proposes a roadmap for making official statistics AI-ready through open data access, enriched metadata standards, and strengthened data governance. By aligning technological innovation with statistical rigor, StatGPT represents a critical step toward a future where official statistics remain authoritative, trusted, and universally accessible in an AI-driven world…(More)”.
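The division of labour the paper describes can be sketched in a few lines: the language model is confined to producing a structured query, and the figure itself is always read back verbatim from the official source. The sketch below is purely illustrative and assumes hypothetical names throughout; the SDMX-style dataset code, series key, and in-memory stand-in for a statistical agency's API are not from the paper.

```python
# Illustrative sketch of the StatGPT pattern: the LLM's output is a
# structured query (not a number); the figure comes verbatim from the
# published store. Dataset code, key, and figure are hypothetical.

def build_query(country: str, indicator: str, year: int) -> dict:
    """What an LLM would be prompted to emit: a query, never a figure."""
    return {"dataset": "IFS", "key": f"{country}.{indicator}", "period": str(year)}

# Stand-in for the statistical agency's API: the only source of numbers.
PUBLISHED = {
    ("IFS", "DE.GDP", "2023"): 4121.6,
}

def answer(query: dict) -> float:
    """Resolve the query against published data; raises if no official figure exists."""
    return PUBLISHED[(query["dataset"], query["key"], query["period"])]

query = build_query("DE", "GDP", 2023)
print(answer(query))  # the exact stored figure, 4121.6 in this toy example
```

Because the model never interpolates a value, a missing series surfaces as an explicit error rather than a plausible-sounding hallucinated number, which is the failure mode the paper attributes to off-the-shelf GenAI applications.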
Book edited by Aleksi Aaltonen, Marta Stelmaszak, and Kalle Lyytinen: “…explores the function and impact of digital data on various spheres of organizational and social life. It examines essential research across disciplines, including management, sociology, and economics, establishing a foundational understanding of the increasing importance of digital data in contemporary society.
By situating its chapters within the layers of a digital data stack, this unique Research Handbook not only offers a variety of diverse perspectives and approaches, but it also provides a structure for cumulative insight. Leading scholars analyse and interpret the creation, governance, and utilization of data, covering key topics such as machine learning, data heterogeneity, temporal fragilities in data sharing, and blockchain finance. Ultimately, this Research Handbook highlights how the kaleidoscopic nature of digital data gives rise to multiple competing realities, making it a reference point for future scholarship…(More)”.
Article by Jeffrey Parsons; Roman Lukyanenko; Brad N. Greenwood; and Caren B. Cooper: “We live in an age of unprecedented opportunities to use existing data for tasks not anticipated when those data were collected, resulting in widespread data repurposing. This commentary defines and maps the scope of data repurposing to highlight its importance for organizations and society and the need to study data repurposing as a frontier of data management. We explain how repurposing differs from original data use and data reuse and then develop a framework for data repurposing consisting of concepts and activities for adapting existing data to new tasks. The framework and its implications are illustrated using two examples of repurposing, one in healthcare and one in citizen science. We conclude by suggesting opportunities for research to better understand data repurposing and enable more effective data repurposing practices…(More)”.
Article by Rebecca Mbaya: “What happens when AI reads African data through the wrong frame and no one in the room knows enough to notice.
The output was clean. Structured. Confident. The generative AI tool had processed survey responses from 191 respondents and returned a set of neatly labelled themes. One of them appeared repeatedly across the data: “Misinformation Resistance.”
I stared at it for a long time.
The survey was about perceptions of Fourth Industrial Revolution technologies (AI, IoT, blockchain) in a specific Congolese context. I had collected the data, and I understood the political and historical texture of the community being studied. So what the AI tool (ChatGPT) had labelled “Misinformation Resistance” was not that. Not even close.

What the responses actually reflected was something more specific, more historically grounded, and entirely rational: a deep, politically informed distrust of institutions. A community whose relationship with governance (colonial administration, post-independence instability, extractive foreign intervention, cycles of conflict) gave them every reason to be skeptical of new technologies promising transformation. This was a coherent epistemic posture developed over generations of having good reasons not to trust. The AI tool had taken a political trust phenomenon and filed it under cognitive bias. It had done this cleanly, confidently, and without any visible indication that something had gone wrong.
That gap between what the model produced and what the data actually meant was only visible to me because I knew the context. Which raises a question that I have not been able to stop thinking about: what happens in all the cases where no one in the room does?…(More)”.
Article by Daniel Castro: “Whether nations permit AI systems to learn from publicly available information will shape global leadership in artificial intelligence (AI). Restrictive rules on the use of public web data risk shifting AI development to more permissive jurisdictions, undermining a country’s ability to build, deploy, and benefit from next-generation AI systems. A more effective approach emphasizes technical opt-outs, transparency, and safeguards that prevent harmful outputs. Policies that preserve responsible access to the digital commons will better support the next generation of AI capabilities and economic growth…(More)”
Article by Nicola Jones: “The escalating conflict between the United States, Israel and Iran has thrown a spotlight on the use of artificial intelligence in warfare. Just one day before the US–Israeli offensive began on 28 February, the US government sidelined one of its main AI suppliers as part of a disagreement that underlines ethical concerns about AI’s use.
And this week, academics and legal experts are meeting in Geneva, Switzerland, to discuss lethal autonomous weapons systems and the procurement of AI in the military, as part of long-running efforts to arrive at an international agreement on the ethical or legal uses of AI in warfare.
Rapid technological development is outpacing slow international discussions, says political scientist Michael Horowitz at the University of Pennsylvania in Philadelphia.
“The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent,” says Craig Jones, a political geographer at Newcastle University, UK, who researches military targeting…
The US military uses AI based on large language models (LLMs) for logistical and office support, intelligence gathering and analysis, and decision support on the battlefield, says Horowitz. The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets, for example. The system has been used in previous conflicts and in the attacks on Iran, according to reports from the Washington Post and other news outlets. “The details are not publicly known,” Horowitz says…(More)”.
Paper by DemNext: “Africa faces a paradox. Most people continue to support democratic institutions, even though their satisfaction is declining with institutions’ ability to deliver inclusive economic prosperity and accountable, responsive governance. Citizens’ assemblies offer a way forward by offering the opportunity to draw on indigenous traditions of sustained deliberation and consensus-building to tackle complex policy problems.
In this paper, we explore how citizens’ assemblies can be adapted to Africa’s diverse contexts by drawing on real-world experiences across the continent. We begin by outlining the civic strengths and cultural traditions that underpin deliberative democracy in Africa, before reviewing emerging deliberative experiments – including citizens’ assemblies – that illustrate their potential. We introduce an analytical framework to assess the strengths and limitations of citizens’ assemblies and apply it to case studies from Mali, Malawi, and The Gambia. Finally, we highlight insights from an upcoming citizens’ assembly in South Africa.
The paper serves two purposes: advancing theoretical frameworks for evaluating deliberative processes in the Global South, and offering practical guidance to foster experimentation and collaboration in democratic innovation across these contexts. Rather than proposing a single model, we identify context-sensitive strategies that help citizens’ assemblies bridge Africa’s democratic delivery gap, while building on longstanding traditions of collective decision making…(More)”.
Article by Stefaan Verhulst: “The world has become more complex, more dynamic and more interconnected than ever before. The challenges we face – from health to climate, from democratic resilience to economic transformation – are deeply intertwined. And we need new ideas to meet these challenges.
Europe has never lacked intellectual ambition, but ideas alone aren’t enough. To make real progress, we need breakthrough discoveries. We need evidence of what works. And we need the institutional capacity to test, validate and scale solutions across borders and disciplines.
That’s where science comes in. Yet good science depends on data. And if we want AI to supercharge discovery and transform science, then data becomes even more important.
The ‘datafication’ of society
Digitalisation has led to an unprecedented datafication of society. When citizens engage with government services, visit a doctor, use a mobility platform, shop online or measure their steps and/or sleep through wearable devices, data are generated.
But this datafication doesn’t stop with individual behaviour. It extends deep into the productive fabric of our economies. Manufacturing systems, industrial supply chains, logistics networks, energy grids and robotic production lines are now embedded with sensors, connected devices and intelligent control systems. The implication is profound – data is no longer a by-product of digital services alone. It’s a structural feature of both our digital and physical infrastructures.
The remarkable feature of digital data isn’t merely its volume. It’s its reusability. When done responsibly, data created for one purpose can often be reused for entirely different objectives – including scientific research.
But there’s a fundamental constraint: access. Much of today’s most valuable data remains locked away in institutional stovepipes – within government agencies, universities and private companies. Despite its public value potential, it often remains inaccessible to scientists and public interest actors.
Europe has taken important steps to address this data asymmetry. Open data policies have expanded transparency. The Data Governance Act and the Data Act seek to facilitate data sharing and rebalance power in data markets. Article 40 of the Digital Services Act creates pathways for vetted researchers to access platform data. The European Open Science Cloud seeks to enable the sharing of scientific data. Sectoral data spaces – including those envisioned under the European Health Data Space – and Data Labs aim to provide structured, interoperable infrastructures for data access and use.
Yet instead of a steady expansion of access, we’re now witnessing a ‘data winter.’ Access to private sector data for research has declined in several domains. Open government data initiatives have slowed or been rolled back. Scientific datasets have become restricted or have disappeared. Open science has struggled to scale beyond pilot projects. And broader political retrenchment risks weakening some of the very infrastructures designed to enable responsible reuse.
Generative AI’s rapid expansion has also triggered backlash. Large-scale data scraping for AI training has blurred the line between openness and extraction. Consequently, institutions and content creators have become more protective, sometimes closing access altogether. And without reliable access to diverse, high-quality data, scientific progress risks stagnation.
What should Europe do? Three priorities stand out.
Access shouldn’t be only supply-driven
For too long, data policy has focused on releasing datasets without clearly articulating the questions they’re meant to answer. But the value of data – and increasingly the value of AI – depends directly on the value of the question.
In short, better questions define better discovery.
If we want to unlock meaningful access, we must invest in what might be called ‘question science’ – the systematic identification of high-priority societal questions; the structuring of those questions so they are researchable and actionable; the mapping of those questions to existing or potential data sources; and embedding them into funding frameworks, governance mandates, and institutional strategies.
When demand is vague, access debates remain abstract. When questions are clear, access becomes purposeful. Researchers, policymakers and data holders can align around concrete objectives. This requires structured, participatory processes that bring scientists, communities, funders and regulators together to define and prioritise the questions that matter most…(More)”.