Stefaan Verhulst
Blog and Paper by the Royal Statistical Society: “Artificial intelligence is often talked about as if it can think like a person. We hear that it understands, reasons and even creates. But AIs think quite differently to how people think: they are fundamentally statistical. This is a fact that is not widely understood – but I believe that it is an essential point that needs far greater recognition for AIs to be used effectively, safely and ethically.
Large language models (LLMs), the systems behind many chatbots and search tools, are trained on vast amounts of text and data. They look for patterns in that data and use those patterns to predict what is most likely to come next. When they produce an answer, they are not thinking about it in a human sense. They are generating the most likely response based on what they have seen before.
This is what makes them so impressive. It is also why they sometimes go wrong.
Because these systems are statistical, their outputs depend on the data they have been trained on. If that data contains gaps or biases, the results will reflect that. If the system is used in situations that differ from its training data, its performance can change. And even when an answer sounds confident, it is still based on probability rather than certainty. Understanding this helps us use AI more wisely.
It encourages simple but important questions. Where did the data come from? How representative is it? How reliable is the output? How might results differ for different groups of people? What happens when circumstances change?
These questions matter when AI is used to support decisions about jobs, loans, healthcare, education or public services. As AI becomes more common in everyday systems, basic statistical awareness becomes part of digital knowledge.
This is why, led by its AI Task Force, the RSS has published a landmark paper on the statistical nature of AI. Our core argument is clear: AI systems are built on statistical pattern recognition. They need to be developed, evaluated and governed with rigorous statistical precision…(More)”.
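The RSS's core point — that a language model's output is a probability distribution over likely continuations, not reasoning — can be made concrete with a toy sketch. The snippet below is purely illustrative and not from the RSS paper: it "trains" on word-pair counts from a tiny corpus and predicts the next word by relative frequency, which is the statistical idea at the heart of LLMs, stripped of the neural network.

```python
from collections import Counter, defaultdict

# Toy illustration of the statistical core of language models:
# count which word follows which in a tiny corpus, then "predict"
# by relative frequency. Real LLMs use neural networks over tokens,
# but the output is likewise a probability distribution over what
# comes next, not a reasoned answer.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the probability of each observed continuation of `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Even in this miniature form, the RSS's questions apply: the "model" is only as good as its corpus, and a confident-looking prediction is still just the most probable pattern seen so far.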
Blog by PUBLIC: “Across the M&E lifecycle, we are already seeing real, deployable applications of AI tools.
- AI-assisted evidence synthesis is probably the most mature area. Tools can now search, screen, and summarise bodies of literature at a scale that would take human teams weeks. For evaluation teams scoping a new programme area, or interested in exploring what some other field could say about their topic, this is genuinely useful today.
A recent example of this is the development of systems like InsightAgent, a multi-agent framework designed for complex systematic reviews. Researchers demonstrated that this tool could partition a massive amount of literature, read and synthesise findings, and draft a rigorous review in just 1.5 hours – a process that traditionally takes months to complete manually. Researchers could also visually monitor the AI’s reading trajectory, adjust its inclusion criteria, and verify its sources in real-time.
- AI-led qualitative interviews – including voice – have been shown to generate substantially richer responses than conventional open text fields. For public sector evaluations, the possibility of running qualitative research at a fraction of the cost is a meaningful shift. These approaches are also valuable where evaluations span multiple layers of governance – for example, in evaluation framework development and in qualitative evaluations of ‘unmonetisable’ outcomes, as set out in the Green Book.
For example, PUBLIC recently utilised Salomo to conduct user research for a major public sector project. Traditionally, gathering and synthesising user research at this scale would take a team of multiple researchers many months to complete. However, by leveraging Salomo’s agentic capabilities, a team of just two researchers was able to process, code, and extract insights from 100 interviews in less than a week.
- Getting to concrete outputs and models more quickly. Analysis and reporting workflows are starting to allow evaluators to go from a research question to a documented, reproducible output – with code, findings, and visualisations – in a fraction of the time previously required.
For example, AI Scientist-V2 is a system capable of automating the scientific research lifecycle. Given a high-level prompt, the agent autonomously formulates hypotheses, writes and debugs experiment code, visualises data, and drafts a complete manuscript in under 15 hours. It also recently produced a research paper that successfully passed a double-blind peer review.
While public sector policy evaluation has its own unique complexities and stakeholder dynamics, the implication is clear. These are tools that can handle the heavy mechanical execution – running the econometrics, generating charts, and drafting technical annexes – freeing up evaluators to focus on the harder interpretive questions and policy implications…(More)”.
NASCIO Report: “Generative AI helped states draft, summarize and analyze. Now, AI is starting to act.
In this new NASCIO report, we explore the rise of agentic AI — systems that can plan, take limited actions and manage multi-step workflows with human oversight. From routing approvals and detecting anomalies to guiding citizen services from start to finish, agentic AI represents the next phase of AI maturity in state government.
This report helps state technology leaders:
- Understand the difference between generative AI and agentic AI
- Recognize five phases states might go through from generative to agentic AI
- Anticipate governance, security and workforce risks
- Identify practical guardrails and incremental next steps
If your state is exploring agentic AI, this publication is a primer for what is coming next.
Agentic AI is here, and the time to review policies, strengthen guardrails and build trust is now…(More)”.
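The human-oversight pattern the NASCIO report describes — agents that plan multi-step workflows but take only limited actions without approval — can be sketched as a simple approval gate. Everything below is a hypothetical illustration, not code or terminology from the report: an agent proposes each step, and high-risk steps execute only when a human reviewer signs off.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A single step an agent proposes as part of a workflow (illustrative)."""
    description: str
    risk: str  # "low" or "high"

def run_workflow(actions, approve):
    """Execute proposed steps, pausing for human approval on high-risk ones.

    `approve` is a callback standing in for the human reviewer.
    """
    log = []
    for action in actions:
        if action.risk == "high" and not approve(action):
            log.append(f"SKIPPED (denied): {action.description}")
            continue
        log.append(f"EXECUTED: {action.description}")
    return log

steps = [
    ProposedAction("Summarise benefits application", "low"),
    ProposedAction("Approve payment disbursement", "high"),
]
# With a reviewer who denies everything, only the low-risk step runs.
print(run_workflow(steps, approve=lambda a: False))
```

The guardrail here is structural rather than behavioural: the agent cannot reach the consequential action at all without the human gate, which is one way to make "limited actions with human oversight" enforceable in code.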
Article by Mexico News Daily: “Turning to artificial intelligence to keep Mexico’s more than 125,000 missing people from being forgotten, a collective in the state of Jalisco has been crafting “living” videos of the missing that talk to the public.
In the state with the highest number of missing persons, the Luz de Esperanza Collective creates Fichas Vivas de Búsqueda, or Living Search Cards — short AI-generated videos that animate photos and recreate the voices of the disappeared for social media.
The clips circulate online, seeking to cut through the noise and force viewers to confront a national human rights crisis.
Using image, facial animation and speech synthesis tools, families script what their relatives would say and work with technologists to produce videos that resemble digital search posters — with a “photo” of the missing person actually “speaking.”
In one 110-second video, the photo of the missing person declares, “I am Carlos Maximiliano Romera Meza. I was 18 years old when I disappeared, and I want to tell you my story.”…(More)”.
Book by Ben Green: “The field of data science faces a moral crisis. Despite the desires of data scientists to develop algorithms for good, algorithms regularly produce injustice in practice. Given these persistent harms, the field must reflect on difficult questions about its identity and future. Can data science be a force for promoting social justice in the world? What practices should data scientists follow to achieve this goal?
In Algorithmic Realism, Ben Green presents a bold and interdisciplinary approach to data science. Drawing on his experience practicing data science in the public interest, he argues that improving society with algorithms requires transforming data science from a formalist methodology focused on mathematical models into a practical methodology focused on addressing real-world problems. By providing an expanded framework for the “data science workflow”—the steps that characterize the algorithm development process—he offers a practical, step-by-step guide describing how data scientists can apply their skills in service of social justice. Through these contributions, the book reveals a vision for a renewed, but realistic, optimism about data science’s potential to foster a more equitable world…(More)”.
Paper by John Rountree & John Gastil: “Generative artificial intelligence (GAI) interfaces, such as ChatGPT, Bloom, and Gemini, may offer transformative possibilities for deliberative democracy. We argue that deliberation scholars and practitioners should use GAI software to run simulations to complement existing deliberative processes. By simulations, we mean the use of GAI software to run hypothetical deliberations, either with or on behalf of human participants, to support rather than replace human judgment. We expand on the notion of a GAI deliberation simulation and showcase an example using GPT-4o. To illustrate the practical advantages of simulations, we showcase two use cases: training facilitators and providing time-sensitive policy consultation. We also address potential cautions and limitations surrounding GAI simulations, such as concerns about transparency and bias. We conclude by exploring the theoretical implications of GAI simulations for developing and refining models of deliberation dynamics…(More)”.
UK government: “Local AI is a new team in the Ministry of Housing, Communities and Local Government (MHCLG), working with councils to support the responsible use of artificial intelligence (AI) across local government. We focus on the real pressures councils face and work in partnership with the sector to tackle shared problems…Over the past year, we’ve focused on understanding where AI can add the most value in local government. This has included:
- discovery work on pressures in temporary accommodation services and the potential to scale an AI transcription and summarisation tool developed by the Incubator for AI (i.AI)
- close collaboration with partners across the sector, including Local Digital, GDS Local, i.AI and the Local Government Association
- participation in the Local Government Innovation Hackathon on Homelessness and Rough Sleeping
- establishing the Mayoral Data Council to improve data quality and availability
This work has shown that well targeted AI can reduce administrative burden, free up frontline time and support earlier, more consistent interventions.
What we’re doing now
We are moving into delivery and expanding our work in areas where AI can make a practical difference. This includes:
- incubating high impact use cases in temporary accommodation
- developing a new transcription tool for public-facing workers based on i.AI’s Minute tool
- running further discovery work in other service areas
- learning from local innovation and identifying ideas that can be reused
- supporting councils to strengthen data quality, ethics and safety practices
This helps ensure we are solving the right problems and that councils help shape our roadmap from the start…(More)” – See also AI Localism
Paper by Sofie Illemann Jæger & Julian Iñaki Goñi: “In this article, we assert that public participation events are best understood as scaffoldings that provide temporal infrastructures to citizens and that encourage “preferred ways” of acting within the participatory space. To exemplify this approach, we explore the citizens’ climate assembly (2022–2023) in Aarhus, Denmark as a case study of highly professionalised participation. Through a multi-method qualitative approach incorporating participant observation, focus groups, interviews, and surveys, we examine how both organisers and citizen members of the assembly perceive its design and implementation. In our results, we highlight two dimensions of the assembly’s scaffolding. Firstly, we analyse the guidelines that assembly organisers provided to signal what good interaction is within the assembly, namely, the Observation, Assessment, and Recommendation (OVA) method. This method shaped the affordances of members but also defined the epistemic hierarchies of contributions. Secondly, we analyse how the purpose of involving citizens in decision-making was constructed by assembly organisers and how it was communicated during the assembly, leading to uncertainties in process design and facilitation. Finally, we draw methodological and theoretical lessons from this case study and conclude that by treating participatory events as both structured and structuring, we can move beyond evaluations of participation as an ideal to be realised and instead investigate its situated practices and contestations…(More)”.
Open Access Book edited by Alessandra Micalizzi: “…explores one of the most pressing transformations of contemporary knowledge production: the integration of artificial intelligence into the practices, methods, and epistemologies of the social sciences. Moving beyond simplistic narratives of automation and efficiency, this volume investigates AI as object, tool, context, and partner of research. Bringing together interdisciplinary perspectives, the contributions examine how algorithmic systems reshape inquiry, interpretation, and representation, while also raising fundamental methodological and ethical questions. From AI-assisted qualitative analysis and ethnography to digital imaginaries, bias, and futures thinking, the chapters reveal the complex co-production between technological systems and social knowledge. Rather than offering definitive answers, the book provides conceptual tools, empirical cases, and methodological reflections for navigating a rapidly evolving research landscape. It invites scholars to engage critically and creatively with artificial intelligence—not as a distant technology, but as an active participant in the construction of contemporary social understanding…(More)”.
Annual Report by Freedom House: “Global freedom declined for the 20th consecutive year in 2025. A total of 54 countries experienced deterioration in their political rights and civil liberties, while only 35 countries registered improvements.
The largest declines in freedom for the calendar year were caused by military coups and efforts by incumbent leaders to crush peaceful dissent or change constitutional rules in their favor. Guinea-Bissau received the year’s single largest score change, losing 8 points on Freedom in the World’s 100-point scale after the November general elections were disrupted by a coup in which armed men stormed the election commission’s office and destroyed ballots. Military officers also ousted the elected government in Madagascar, bringing the total number of African countries to have experienced a coup since 2019 to nine. In Burkina Faso, which has been under military rule since a 2022 coup, the score declined by 5 points as state security forces and junta-sponsored militias engaged in mass killings and forced displacement of Fulani civilians, while Islamist insurgents attacked people of other faiths and imposed their own religious practices in areas under their control.
Tanzania registered the second most significant deterioration in rights and liberties in 2025, losing 7 points and sinking further into the Not Free category. The incumbent president, Samia Suluhu Hassan, was declared the winner of an election marred by the exclusion of opposition candidates, restrictions on the media, a campaign of forced disappearances of political opponents, and widespread violence against protesters that resulted in at least 1,000 deaths. El Salvador tied with Madagascar for the third largest decline in the world, losing 5 points. Salvadoran authorities persecuted high-profile academics who were critical of the government, threats against the media drove journalists into exile, and the government seized land without providing compensation. The Legislative Assembly, dominated by President Nayib Bukele’s Nuevas Ideas party, passed a constitutional reform that abolished presidential term limits and extended the terms from five to six years, clearing the way for Bukele to seek reelection indefinitely…(More)”.
