Stefaan Verhulst
The GovLab: “…we are launching the Observatory of Public Sector AI, a research initiative of InnovateUS, and a project of The Governance Lab. With data from more than 150,000 public servants, the Observatory represents one of the most comprehensive empirical efforts to date to understand awareness, attitudes, and adoption of AI as well as the impact of AI on work and workers.
Our goal is not simply to document learning, but to translate these insights into a clearer understanding of which investments in upskilling lead to better services, more effective policies, and stronger government capacity.
Our core hypothesis is straightforward: the right investments in public sector human capital can produce measurable improvements in government capability and performance, and ultimately better outcomes for residents. Skill-building is not peripheral to how the government works. It is central to creating institutions that are more effective, more responsive, and better equipped to deliver public value.
We are currently cleaning, analyzing, and expanding this dataset and will publish the Observatory’s first research report later this spring.
The Research Agenda
The Observatory is organized around a set of interconnected research questions that trace the full pathway from learning to impact.
We begin with baseline capacity, mapping where public servants start across core AI competencies, identifying where skill gaps are largest, and distinguishing individual limitations from structural constraints such as unclear policies or restricted access to tools.
We then examine task-level use, documenting what public servants are actually doing with AI.
Our data also surface organizational obstacles that shape adoption far more than skill alone. Across agencies, respondents cite inconsistent guidance, uncertainty about permissions, and limited access as primary barriers.
Through matched pre- and post-training assessments, we measure gains in technical proficiency, confidence, and ethical reasoning. We plan to track persistence through three- to six-month follow-ups to assess whether skills endure, reshape workflows, and diffuse across teams.
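As a rough illustration of what a matched pre/post design measures, the sketch below computes the mean gain and a paired t statistic from hypothetical assessment scores. The scores and the 0–100 scale are invented for illustration; the Observatory's actual instruments and data are not reproduced here.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical matched pre/post scores (0-100) for the same six respondents.
pre = [42, 55, 38, 60, 47, 51]
post = [58, 61, 52, 66, 55, 63]

# Per-respondent gain: matching lets each person serve as their own control.
gains = [b - a for a, b in zip(pre, post)]
mean_gain = mean(gains)

# Paired t statistic: mean gain divided by its standard error.
t_stat = mean_gain / (stdev(gains) / sqrt(len(gains)))
print(round(mean_gain, 1), round(t_stat, 2))
```

The paired design is what makes the follow-up comparisons meaningful: persistence at three to six months can be assessed by re-running the same computation against the later scores.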
We analyze how training shifts confidence and perceived value, both of which are essential precursors to behavior change. We collect indicators of effectiveness through self-reported workflow improvements that can later be paired with administrative performance data.
Finally, we examine variation across roles, agencies, and geographies; how workers exercise judgment when evaluating accuracy, bias, and reliability in AI outputs; and how different training modalities compare in producing durable learning outcomes…(More)”
Article by David Oks: “Here’s the story of a remarkable scandal from a few years ago.
In the South Pacific, just north of Australia, there is a small, impoverished, and remote country called Papua New Guinea. It’s a country that I’ve always found absolutely fascinating. If there’s any outpost of true remoteness in the world, I think it’s either in the outer mountains of Afghanistan, in the deepest jungles of central Africa, or in the highlands of Papua New Guinea. (PNG, we call it.) Here’s my favorite fact: Papua New Guinea, with about 0.1 percent of the world’s population, hosts more than 10 percent of the world’s languages. Two villages, separated perhaps only by a few miles, will speak languages that are not mutually intelligible. And if you go into rural PNG, far into rural PNG, you’ll find yourself in places that time forgot.
But here’s a question about Papua New Guinea: how many people live there?
The answer should be pretty simple. National governments are supposed to provide annual estimates for their populations. And the PNG government does just that. In 2022, it said that there were 9.4 million people in Papua New Guinea. So 9.4 million people was the official number.
But how did the PNG government reach that number?
The PNG government conducts a census about every ten years. When the PNG government provided its 2022 estimate, the previous census had been done in 2011. But that census was a disaster, and the PNG government didn’t consider its own findings credible. So the PNG government took the 2000 census, which found that the country had 5.5 million people, and worked off of that one. So the 2022 population estimate was an extrapolation from the 2000 census, and the number that the PNG government arrived at was 9.4 million.
But this, even the PNG government would admit, was a hazy guess.
About 80 percent of people in Papua New Guinea live in the countryside. And this is not a countryside of flat plains and paved roads: PNG is a country of mountain highlands and remote islands. Many places, probably most places, don’t have roads leading to them; and the roads that do exist are almost never paved. People speak different languages and have little trust in the central government, which simply isn’t a force in most of the country. So traveling across PNG is extraordinarily treacherous. It’s not a country where you can send people to survey the countryside with much ease. And so the PNG government really had no idea how many people lived in the country.
Late in 2022, word leaked of a report that the UN had commissioned. The report found that PNG’s population was not 9.4 million people, as the government maintained, but closer to 17 million people—roughly double the official number. Researchers had used satellite imagery and household surveys to find that the population in rural areas had been dramatically undercounted.
This was a huge embarrassment for the PNG government. It suggested, first of all, that they were completely incompetent and had no idea what was going on in the country that they claimed to govern. And it also meant that all the economic statistics about PNG—which presented a fairly happy picture—were entirely false. Papua New Guinea had been ranked as a “lower-middle income” country, along with India and Egypt; but if the report was correct then it was simply a “lower-income” country, like Afghanistan or Mali. Any economic progress that the government could have cited was instantly wiped away.
But it wasn’t as though the government could point to census figures of its own. So the country’s prime minister had to admit that he didn’t know what the population was: he didn’t know, he said, whether the population was “17 million, or 13 million, or 10 million.” It basically didn’t matter, he said, because no matter what the population was, “I cannot adequately educate, provide health cover, build infrastructures and create the enabling law and order environment” for the country’s people to succeed…(More)”.
Article by Phillip Olla: “For the past few years, artificial intelligence has felt almost miraculously accessible. Nonprofits, schools, public agencies, and social enterprises have been able to use advanced AI tools at little or no cost. Grant proposals, impact evaluations, program curricula, community outreach campaigns, and policy briefs are now routinely “co-written” with AI. This accessibility has been widely described as the “democratization” of AI. But it rests on a fragile foundation.
The reality is that the current era of “free” or heavily subsidized AI is a temporary phase, not a stable feature of the technology. As AI shifts from experimental tool to core infrastructure, its underlying economics, such as energy, hardware, privacy, and market power, are beginning to assert themselves. That will have serious consequences for equity, public interest work, and the organizations that serve communities most affected by social and economic inequality.
The question is no longer whether AI will become a paid, utility-like service. It is whether social sector institutions will help design that future or simply be forced to adapt to it on unfavorable terms…(More)”.
Paper by Yuval Rymon: “As artificial intelligence becomes embedded in democratic governance, a fundamental question emerges: how does AI transform the role of political representatives? This review analyzes AI’s impact across two channels: input representation (aggregating citizen preferences) and output representation (implementing policy decisions). It employs five democratic criteria to evaluate impacts, and examines the case studies of Taiwan’s vTaiwan platform and Austria’s AMS algorithmic profiling system. The analysis reveals AI transforms representatives’ roles along both channels: from interpreters of obscure public will to facilitators who reconcile clearly expressed preferences with practical constraints (input side), and from direct decision-makers to architects of algorithmic decision-making (ADM) systems (output side). Six institutional conditions determining whether AI enhances or undermines representation are derived: explicit democratic authorization of objectives, transparency extending to the system design stage, accountability mechanisms enabling challenge of system premises by operators, platform independence with institutional integration, active reduction of participation barriers, and clear authority frameworks preventing selective implementation of citizen consensus…(More)”.
Article by Ruchika Joshi and Miranda Bogen: “The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.
Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company’s Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini “more personal, proactive, and powerful.” It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people’s personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies.
Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. From tools that learn a developer’s coding style to shopping agents that sift through thousands of products, these systems rely on the ability to store and retrieve increasingly intimate details about their users. But doing so over time introduces alarming, and all-too-familiar, privacy vulnerabilities, many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities.
Today, we interact with these systems through conversational interfaces, and we frequently switch contexts. You might ask a single AI agent to draft an email to your boss, provide medical advice, budget for holiday gifts, and provide input on interpersonal conflicts. Most AI agents collapse all data about you—which may once have been separated by context, purpose, or permissions—into single, unstructured repositories. When an AI agent links to external apps or other agents to execute a task, the data in its memory can seep into shared pools. This technical reality creates the potential for unprecedented privacy breaches that expose not only isolated data points, but the entire mosaic of people’s lives…(More)”.
Article by Sarah Wray: “The UK Government Digital Service (GDS) has published new guidelines to help public sector organisations prepare their datasets for use with artificial intelligence. Alongside a four-pillar framework, the guidance includes an AI-ready data action plan and a self-assessment checklist.
The document states: “The United Kingdom is at a critical inflection point in its adoption of artificial intelligence across sectors. While advances in machine learning, generative AI capabilities, and agentic AI capabilities continue at pace, the effectiveness, safety, and legitimacy of AI adoption remain fundamentally constrained by the quality, structure, and governance of underlying data.”
The guidelines, which were shaped by input from public sector bodies, departments and expert organisations, set out four pillars of AI-ready datasets to address these issues: technical optimisation; data and metadata quality; organisational and infrastructure context; and legal, security and ethical compliance.
The document states: “AI readiness is inherently socio-technical. Infrastructure modernisation, metadata fitness, and unstructured data pipelines are essential, but insufficient without clear accountability, sustained skills, and explicit legal and ethical decisioning at dataset level.”…The Department for Science, Innovation and Technology (DSIT) has also published a progress update on the National Data Library (NDL).
The forthcoming NDL is envisaged as a tool to make it “easier to find and reuse data across public sector organisations”. Its goal is to support “better prevention, intervention and detection, [and open] up data to industry, the voluntary sector, start-ups and academics to accelerate AI-driven innovation and boost growth”.
The creation of the NDL is backed by over £100m (US$138m) as part of a £1.9bn (US$2.6bn) total investment allocated to DSIT for cross-cutting digital priorities…(More)”.
Paper by Bruno Botas et al: “The increasing use of social media, particularly X (formerly Twitter), has enabled citizens to openly share their views, making it a valuable arena for examining public perceptions of immigration and its intersections with racial discrimination and xenophobia. This study analyzes Spanish digital debates from January 2020 to January 2023 through a mixed methodology that combines text pre-processing, semantic filtering of keywords, topic modeling, and sentiment analysis. A five-topic solution obtained through Latent Dirichlet Allocation (LDA) captured the main dimensions of the discourse: (1) economic and political debates on immigration, (2) international migration and refugee contexts, (3) racism and social discrimination, (4) insults, stereotypes, and xenophobic framings, and (5) small boat arrivals and maritime management. Sentiment analysis using a transformer-based model (roBERTuito) revealed a strong predominance of negativity across all topics, with sharp spikes linked to major migration crises, humanitarian emergencies, and highly mediatized cultural events. Qualitative readings of representative posts further showed that negativity was often articulated through invasion metaphors, securitarian framings, satire, and ridicule, indicating that hostility was not merely reactive but embedded in broader economic, political, and cultural registers. These findings demonstrate that discriminatory discourse in Spain is event-driven, becoming particularly salient during crises and symbolic moments, and underline the persistent role of social media in amplifying racialized exclusion and partisan polarization…(More)”.
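The pipeline the paper describes (pre-processing, semantic keyword filtering, then sentiment aggregation) can be sketched in miniature. This is an illustrative stand-in only: the keyword set, lexicon, and example posts are invented, and the paper's actual LDA topic model and roBERTuito transformer are replaced here by simple keyword matching.

```python
import re

# Invented stand-ins for illustration; the paper uses LDA topic modeling
# and the roBERTuito transformer, which are not reproduced here.
MIGRATION_KEYWORDS = {"immigration", "migrant", "refugee", "border", "boat"}
NEGATIVE_LEXICON = {"invasion", "crisis", "illegal", "threat"}

def preprocess(post: str) -> list[str]:
    """Lowercase, strip URLs and @mentions, and tokenize a post."""
    post = re.sub(r"https?://\S+|@\w+", "", post.lower())
    return re.findall(r"[a-záéíóúñü]+", post)

def semantic_filter(posts: list[str]) -> list[str]:
    """Keep only posts mentioning at least one migration-related keyword."""
    return [p for p in posts if MIGRATION_KEYWORDS & set(preprocess(p))]

def negative_share(posts: list[str]) -> float:
    """Fraction of posts containing at least one negative-lexicon term."""
    hits = sum(1 for p in posts if NEGATIVE_LEXICON & set(preprocess(p)))
    return hits / len(posts) if posts else 0.0

posts = [
    "Stop the invasion: illegal immigration is a threat! https://t.co/x",
    "Welcoming refugee families at the border today.",
    "Great match last night, what a game.",
]
relevant = semantic_filter(posts)
print(len(relevant), negative_share(relevant))
```

The same shape, with LDA assigning each filtered post to one of five topics and a transformer scoring sentiment per post, would let negativity be tracked per topic over time, which is how the paper links spikes to specific migration crises and mediatized events.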
Book by Roger Kreuz: “Plagiarism and appropriation are hot topics when they appear in the news. A politician copies a section of a speech, a section of music sounds familiar, the plot of a novel follows the same pattern as an older story, a piece of scientific research is attributed to the wrong researcher… The list is endless. Allegations and convictions of such incidents can easily ruin a career and inspire gossip. People report worrying about unconsciously appropriating someone else’s work. But why do people plagiarise? How many claims of unconscious plagiarism are truthful? How is plagiarism detected, and what are the outcomes for the perpetrators and victims? Strikingly Similar uncovers the deeper psychology behind this controversial human behavior, as well as a cultural history that is far wider and more interesting than sensationalised news stories…(More)”.
Paper by Federico Bartolomucci, Edoardo Ramalli and Valeria Maria Urbano: “The potential benefits deriving from inter-organizational data sharing have increased over time, leading to an intensified interest in data ecosystems. The governance of these endeavors depends on both collaborative and data governance dimensions. However, previous research has often treated these dimensions separately, creating silos that hinder the capacity to deliver value given their socio-technical nature. Addressing this gap, this study investigates the intertwined relationship between these two dimensions within data ecosystems, asking which relationships exist between them, which are most relevant, and what their nature is. To this end, we adopt a multiple case study approach, analyzing five data ecosystems. The research led to the development of a conceptual framework for Integrated Governance, highlighting the need for a holistic socio-technical approach that addresses collaborative and data governance dimensions as intertwined. The framework unveils 24 core relationships between these dimensions in data ecosystems and provides insights on the nature of the relationships, distinguishing among causal, explanatory, concurrent, chronological, and overlapping ones. This work introduces a new perspective in the academic discourse on data sharing, providing actionable insights for practitioners and enabling them to design and manage data ecosystems more effectively…(More)”.

Article by Julia Angwin: “We are in a phone war. Ever since cameras became embedded in cellphones, people have been using their devices to bear witness to state violence. But now, the state is striking back.
I don’t think it is any coincidence that Alex Pretti was holding his phone when he was shot to death by federal agents in Minneapolis. Or that Renee Good’s partner was filming a federal agent seconds before he killed Ms. Good. Agents have repeatedly knocked phones out of the hands of observers. They have beaten people filming them and followed them to their homes and threatened them. Of the 19 shootings by federal agents in the past year identified by The Trace, a news outlet that investigates gun violence, at least four involved people who were observing or documenting federal agents’ actions.
Courts have long granted citizens a First Amendment right to film in public. But this right on paper is now being increasingly contested on the streets as federal agents try to stop citizens from recording their activities…
Government officials have openly equated filming an agent with violence in statements and in court testimony. In July, Homeland Security Secretary Kristi Noem said that violence against agents includes “videotaping them where they are at, when they are out on operations.”
The nation’s founders worried that if the state had a monopoly on weapons, its citizens could be oppressed. Their answer was the Second Amendment. Now that our phones are the primary weapons of today’s information war, we should be as zealous about our right to bear phones as we are about our right to bear arms. To adopt the language of Second Amendment enthusiasts, perhaps the only thing that can eventually stop a bad guy with a gun is a good guy with a camera…(More)”