Stefaan Verhulst
Article by Ananya Bhattacharya: “Already, seven in 10 social media images are AI-generated using tools like Midjourney or DALL-E, and eight in 10 content recommendations rely on AI. Nearly half of all social media content by businesses will be AI-generated in 2026. Meanwhile, AI tools are leaving behind non-English speakers.
The key question now is whether the Oversight Board has the capacity and regional reach to identify systemic harms at scale and create precedents that actually shift product and policy decisions, Rachel Adams, founder and CEO of the Global Center on AI Governance and author of The New Empire of AI: The Future of Global Inequality, told Rest of World.
“What won’t work is if you see some of the early AI safety boards that some of the big majors set up — they’ve got all American boards. That is not going to work.”
“With AI-generated content and AI-driven enforcement and moderation, the volume, velocity, and cross-language nature of problems the board was established to monitor and conduct oversight over have exploded,” Adams said. “That would require either a larger board, or a stronger surrounding capacity, in terms of research, regional advisory mechanisms, and faster procedures for urgent situations.”
The Oversight Board is not the first line of defense. Moderation across Meta’s platforms happens at both a machine and a human level (and sometimes both). Users who are unhappy with the moderation outcomes can appeal to the independent, external board. Not all of the appeals will be addressed — the board takes on only the cases it believes will have the biggest lasting impact. In addition to the 21 board members, the Oversight Board has staff members from around the world, and it leans on professional translation services and country context briefings to deliver decisions…(More)”.
Handbook by Centre for Strategic Futures: “articulates the CSF’s updated understanding of foresight—what we find true in theory and useful in practice. Here, we share what we have learned, from both our own experience and others’.
What is in this publication?
This publication has three sections:
1. Foundations—explaining what foresight is, the value it brings, and the dispositions that its work requires.
2. Forms—describing different kinds of foresight projects and offering examples.
3. Footholds—providing heuristics and ideas for putting foresight to work.
Who is this publication for?
This publication is written mainly for public sector foresight practitioners, which is who we are.
But if you find anything written so far interesting, then this handbook is for you, too. We wrote it to be readable by avoiding jargon and writing plainly.
How should one treat this publication?
You can read it linearly or modularly. To encourage you to meander through the handbook, we made wayfinding easy and left breadcrumbs along the way.
There is no single right way to do foresight. Throughout, we invite you to treat what you read as a starting point. Whether you are new to foresight or have practised it for a long time, we hope this handbook will be a useful and enjoyable companion on your journey into the future we are all heading into.
Article by Thomas B. Edsall: “Sixteen years ago, Peter Thiel, the multibillionaire co-founder of PayPal and Palantir Technologies, was strikingly prescient. Speaking at the 2010 Libertopia conference in San Diego, Thiel, who would go on to bankroll JD Vance’s entry into politics, told the gathering:
We could never win an election on getting certain things because we were in such a small minority, but maybe you could actually unilaterally change the world without having to constantly convince people and beg people and plead with people who are never going to agree with you through technological means, and this is where I think technology is this incredible alternative to politics.
Sometime in the not-too-distant future, Thiel and his tech allies may well have no need to win an election to exert control of the United States and other nations.
As artificial intelligence — led by Nvidia, Microsoft, Alphabet, Meta, Amazon, OpenAI and Anthropic — drives to become the nation’s dominant industry, one of the most pressing questions is how technology is affecting, if not supplanting, politics, potentially diminishing the centrality of elections.
Even more important: Will A.I. continue to increase the concentration of market, political and cultural power, undermining democratic control of the economic and social order? To what degree will A.I. exacerbate inequality?
And will A.I., empowered to operate beyond the reach of public institutions and the electorate, in effect transfer government control and regulatory authority to private corporations, political cadres or both?..(More)”.
Article by Anna Desmarais: “Experts are sounding the alarm over fresh threats to Middle Eastern data centres, warning that the strikes reported this month, the first of their kind, signal a dangerous new trend.
Amazon said two of its data centres in the United Arab Emirates were hit by drone strikes on March 1 and a third centre in Bahrain was damaged by debris from a nearby strike.
Iran’s Islamic Revolutionary Guard Corps (IRGC) claimed responsibility for the attacks, telling state media that the attacks were aimed at identifying the role of these centres in supporting the enemy’s military and intelligence activities.
Analysts say these may be some of the first known physical attacks on data centres, the buildings that hold the infrastructure powering everything from banking apps to cloud services and artificial intelligence (AI) platforms.
Amazon declined to comment further on the attacks in the Middle East, referring Euronews Next to a health dashboard. As of March 11, several Amazon services are still unavailable or disrupted for customers in the UAE and Bahrain.
Why are data centres a target?
“It’s very likely that data centres will be targeted in the future,” said Vincent Boulanin, director of the governance of AI programme at the Stockholm International Peace Research Institute (SIPRI).
Boulanin said he was not surprised that Iran had mounted attacks against data centres in the United Arab Emirates and Bahrain. Data centres power AI by providing the computing power, storage and high-speed internet needed to train the models.
“Data centres are a critical building block of AI capabilities at the national level,” Boulanin said. “From that perspective, data centres can be considered a very critical infrastructure.”..(More)”.
Report by James Tebrake, El Bachir Boukherouaa, Jeff Danforth, and Nivashini Harikrishnan: “National statistical systems generate the statistics that underpin policy, economic analysis, and public trust. Yet, despite decades of investment in statistical capacity, two persistent challenges, data accessibility and interpretability, limit the impact of these official statistics. The rise of large language models (LLMs) and GenAI applications such as ChatGPT and Gemini appeared to offer a solution by enabling users to retrieve statistics using natural language. However, testing demonstrates that while the GenAI applications excel at synthesizing text, they perform poorly at delivering official statistics: they frequently provide dangerously “reasonable” but incorrect figures. This paper introduces StatGPT, an initiative by the IMF Statistics Department that leverages LLMs not to generate statistics, but to generate structured queries against APIs of official statistical agencies. StatGPT ensures that users receive the exact published figures, every time, while benefiting from natural language interaction. This paper examines the limitations of off-the-shelf GenAI applications, outlines how StatGPT overcomes these limitations, and proposes a roadmap for making official statistics AI-ready through open data access, enriched metadata standards, and strengthened data governance. By aligning technological innovation with statistical rigor, StatGPT represents a critical step toward a future where official statistics remain authoritative, trusted, and universally accessible in an AI-driven world…(More)”.
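The "generate queries, not statistics" pattern the report describes can be sketched in a few lines. This is a minimal illustration, not StatGPT's actual implementation: the endpoint URL, dataset and series codes, and the stubbed parser and fetcher are all hypothetical placeholders. The point is the division of labour — the LLM only produces a structured, auditable query, and the figure itself always comes from the official API.

```python
# Sketch of the query-generation pattern: the LLM never produces a number,
# only a structured query; the published figure comes from the agency's API.
from dataclasses import dataclass


@dataclass
class StatQuery:
    dataset: str    # dataflow identifier (illustrative)
    country: str    # ISO country code
    indicator: str  # series code within the dataset (illustrative)
    year: int


def to_api_url(q: StatQuery, base: str = "https://stats.example.org/api") -> str:
    """Render the structured query as a REST call to a hypothetical
    statistical-agency API; only published figures can come back."""
    return f"{base}/{q.dataset}/{q.country}.{q.indicator}?year={q.year}"


def answer(question: str, llm_parse, fetch) -> str:
    """Pipeline: the LLM parses the question into a StatQuery (structured,
    inspectable), then the exact published value is fetched. The model
    itself never supplies the figure."""
    query = llm_parse(question)        # LLM output constrained to a schema
    value = fetch(to_api_url(query))   # authoritative source of the number
    return f"{query.indicator} for {query.country}, {query.year}: {value}"


# Stubbed components, standing in for a real LLM call and HTTP client:
def fake_llm_parse(question: str) -> StatQuery:
    return StatQuery(dataset="IFS", country="FRA", indicator="NGDP", year=2023)


def fake_fetch(url: str) -> str:
    return "<published figure>"  # stands in for the API's exact response
```

Because the LLM's output is a schema-bound query rather than free text, a wrong answer surfaces as a visibly wrong query (wrong country code, wrong series), which is far easier to audit than a plausible-sounding fabricated number.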
Book edited by Aleksi Aaltonen, Marta Stelmaszak, and Kalle Lyytinen: “…explores the function and impact of digital data on various spheres of organizational and social life. It examines essential research across disciplines, including management, sociology, and economics, establishing a foundational understanding of the increasing importance of digital data in contemporary society.
By situating its chapters within the layers of a digital data stack, this unique Research Handbook not only offers a variety of diverse perspectives and approaches, but it also provides a structure for cumulative insight. Leading scholars analyse and interpret the creation, governance, and utilization of data, covering key topics such as machine learning, data heterogeneity, temporal fragilities in data sharing, and blockchain finance. Ultimately, this Research Handbook highlights how the kaleidoscopic nature of digital data gives rise to multiple competing realities, making it a reference point for future scholarship…(More)”.
Article by Jeffrey Parsons; Roman Lukyanenko; Brad N. Greenwood; and Caren B. Cooper: “We live in an age of unprecedented opportunities to use existing data for tasks not anticipated when those data were collected, resulting in widespread data repurposing. This commentary defines and maps the scope of data repurposing to highlight its importance for organizations and society and the need to study data repurposing as a frontier of data management. We explain how repurposing differs from original data use and data reuse and then develop a framework for data repurposing consisting of concepts and activities for adapting existing data to new tasks. The framework and its implications are illustrated using two examples of repurposing, one in healthcare and one in citizen science. We conclude by suggesting opportunities for research to better understand data repurposing and enable more effective data repurposing practices…(More)”.
Article by Rebecca Mbaya: “What happens when AI reads African data through the wrong frame and no one in the room knows enough to notice?
The output was clean. Structured. Confident. The generative AI tool had processed survey responses from 191 respondents and returned a set of neatly labelled themes. One of them appeared repeatedly across the data: “Misinformation Resistance.”
I stared at it for a long time.
The survey was about perceptions of Fourth Industrial Revolution technologies (AI, IoT, blockchain) in a specific Congolese context. I had collected the data, and I understood the political and historical texture of the community being studied. So what the AI tool (ChatGPT) had labelled “Misinformation Resistance” was not that. Not even close.

What the responses actually reflected was something more specific, more historically grounded, and entirely rational: a deep, politically informed distrust of institutions. A community whose relationship with governance (colonial administration, post-independence instability, extractive foreign intervention, cycles of conflict) gave them every reason to be skeptical of new technologies promising transformation. This was a coherent epistemic posture developed over generations of having good reasons not to trust. The AI tool had taken a political trust phenomenon and filed it under cognitive bias. It had done this cleanly, confidently, and without any visible indication that something had gone wrong.
That gap between what the model produced and what the data actually meant was only visible to me because I knew the context. Which raises a question that I have not been able to stop thinking about: what happens in all the cases where no one in the room does?…(More)”.
Article by Daniel Castro: “Whether nations permit AI systems to learn from publicly available information will shape global leadership in artificial intelligence (AI). Restrictive rules on the use of public web data risk shifting AI development to more permissive jurisdictions, undermining a country’s ability to build, deploy, and benefit from next-generation AI systems. A more effective approach emphasizes technical opt-outs, transparency, and safeguards that prevent harmful outputs. Policies that preserve responsible access to the digital commons will better support the next generation of AI capabilities and economic growth…(More)”
Article by Nicola Jones: “The escalating conflict between the United States, Israel and Iran has thrown a spotlight on the use of artificial intelligence in warfare. Just one day before the US–Israeli offensive began on 28 February, the US government sidelined one of its main AI suppliers as part of a disagreement that underlines ethical concerns about AI’s use.
And this week, academics and legal experts are meeting in Geneva, Switzerland, to discuss lethal autonomous weapons systems and the procurement of AI in the military, as part of long-running efforts to arrive at an international agreement on the ethical or legal uses of AI in warfare.
Rapid technological development is outpacing slow international discussions, says political scientist Michael Horowitz at the University of Pennsylvania in Philadelphia.
“The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent,” says Craig Jones, a political geographer at Newcastle University, UK, who researches military targeting….
The US military uses AI based on large language models (LLMs) for logistical and office support, intelligence gathering and analysis, and decision support on the battlefield, says Horowitz. The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets, for example. The system has been used in previous conflicts and in the attacks on Iran, according to reports from the Washington Post and other news outlets. “The details are not publicly known,” Horowitz says…(More)”.