Engagement Integrity: Ensuring Legitimacy at a Time of AI-Augmented Participation


Article by Stefaan G. Verhulst: “As participatory practices are increasingly tech-enabled, ensuring engagement integrity is becoming more urgent. While considerable scholarly and policy attention has been paid to information integrity (OECD, 2024; Gillwald et al., 2024; Wardle & Derakhshan, 2017; Ghosh & Scott, 2018), including concerns about disinformation, misinformation, and computational propaganda, the integrity of engagement itself — how to ensure that collective decision-making is not manipulated through technology — remains comparatively under-theorized and under-protected. I define engagement integrity as the procedural fairness and resistance to manipulation of tech-enabled deliberative and participatory processes.

My definition differs from prior discussions of engagement integrity, which mainly emphasized ethical standards when scientists engage with the public (e.g., in advisory roles, communication, or co-research). The concept is particularly salient in light of recent innovations that aim to lower the transaction costs of engagement using artificial intelligence (AI) (Verhulst, 2018). From AI-facilitated citizen assemblies (Simon et al., 2023) to natural language processing (NLP)-enhanced policy proposal platforms and automated analysis of unstructured direct democracy proposals (Grobbink & Peach, 2020) to large-scale deliberative polls augmented with agentic AI (Mulgan, 2022), these developments promise to enhance inclusion, scalability, and sense-making. However, they also create new attack surfaces and vectors of influence that could undermine legitimacy.

This concern is not speculative…(More)”.

How Canada Needs to Respond to the US Data Crisis


Article by Danielle Goldfarb: “The United States is cutting and undermining official US data across a wide range of domains, eroding the foundations of evidence-based policy making. This is happening mostly under the radar here in Canada, buried by news about US President Donald Trump’s barrage of tariffs and many other alarming actions. Doing nothing in response means Canada accepts blind spots in critical areas. Instead, this country should respond by investing in essential data and building the next generation of trusted public intelligence.

The United States has cut or altered more than 2,000 official data sets across the science, health, climate and development sectors, according to the National Security Archive. Deep staff cuts across all program areas effectively cancel or severely erode many other statistical programs….

Even before this data purge, official US data methods were becoming less relevant and reliable. Traditional government surveys lag by weeks or months and face declining participation. This lag proved particularly problematic during the COVID-19 pandemic, and it remains so now, when economic data with a one- or two-month lag is largely irrelevant for tracking the real-time impact of constantly shifting Trump tariffs….

With deep ties to the United States, Canada needs to take action to reduce these critical blind spots. Meeting this challenge plays to a major strength: Canada’s statistical agencies have strong reputations as trusted, transparent information sources.

First, Canada should strengthen its data infrastructure. Official Canadian data suffers from delays and declining response rates similar to those in the United States. Statistics Canada needs a renewed mandate and stable resources to produce timelier, policy-relevant indicators, especially in areas where US data has been cut or compromised.

Second, Canada could also act as a trusted place to store vulnerable indicators — inventorying missing data sets, archiving those at risk and coordinating global efforts to reconstruct essential metrics.

Third, Canada has an opportunity to lead in shaping the next generation of trusted and better public-interest intelligence…(More)”.

The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Public Administration


Whitepaper by the Global Government Technology Centre Berlin: “…explores how agentic AI will transform ten functional layers of government and public administration. The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value. Its impact on key functional layers of government will be as follows…(More)”.

Data as Policy


Paper by Janet Freilich and W. Nicholson Price II: “A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool for both positive regulation and deregulation in contexts ranging from healthcare discrimination to the automation of legal practice to smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join—and diverge from—the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox…(More)”.

How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our minds, shaping how we think. And now a new study reveals how being watched affects unconscious processing in the brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

Who Is Government?


Book edited by Michael Lewis: “The government is a vast, complex system that Americans pay for, rebel against, rely upon, dismiss, and celebrate. It’s also our shared resource for addressing the biggest problems of society. And it’s made up of people, mostly unrecognized and uncelebrated, doing work that can be deeply consequential and beneficial to everyone.

Michael Lewis invited his favorite writers, including Casey Cep, Dave Eggers, John Lanchester, Geraldine Brooks, Sarah Vowell, and W. Kamau Bell, to join him in finding someone doing an interesting job for the government and writing about them. The stories they found are unexpected, riveting, and inspiring, including a former coal miner devoted to making mine roofs less likely to collapse, saving thousands of lives; an IRS agent straight out of a crime thriller; and the manager who made the National Cemetery Administration the best-run organization, public or private, in the entire country. Each essay shines a spotlight on the essential behind-the-scenes work of exemplary federal employees.

Whether they’re digitizing archives, chasing down cybercriminals, or discovering new planets, these public servants are committed to their work and universally reluctant to take credit. Expanding on the Washington Post series, the vivid profiles in Who Is Government? blow up the stereotype of the irrelevant bureaucrat. They show how the essential business of government makes our lives possible, and how much it matters…(More)”.

Citizen Centricity in Public Policy Making


Book by Naci Karkin and Volkan Göçoğlu: “The book explores and positions citizen centricity within conventional public administration and public policy analysis theories and approaches. It seeks to define an appropriate perspective while drawing on popular, independent, and standalone concepts from the literature that support citizen centricity, and it illustrates implementation with practical cases. Ultimately, it presents a novel, descriptive approach that offers insights into how citizen centricity can be applied in practice. This approach has three essential components: a foundation and two pillars. The foundation comprises new-age public policy making approaches and complexity theory. The first pillar is the conceptual dimension, consisting of concepts from the literature that support citizen centricity. The second pillar is the practical dimension, a structure backed by academic research that provides practical cases and inspiration for future applications. The approach treats citizen centricity as fundamental to public policy making and aims to raise new awareness of the subject in the academic community. Additionally, the book provides refreshed conceptual and theoretical backgrounds, along with tangible participatory models and frameworks, benefiting academics, professionals, and graduate students…(More)”.

Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It


Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.

That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.

Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, such estimates risk becoming abstractions that overestimate feasibility.

This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.

In the public sector, the gap between capability and impact is not only wide but also structural…(More)”.

When data disappear: public health pays as US policy strays


Paper by Thomas McAndrew, Andrew A Lover, Garrik Hoyt, and Maimuna S Majumder: “Actions taken by US President Donald Trump on Jan 20, 2025, including executive orders, have delayed access to, or led to the removal of, crucial public health data sources in the USA. The continuous collection and maintenance of health data support public health, safety, and security associated with diseases such as seasonal influenza. To show how public health data surveillance enhances public health practice, we analysed data from seven US Government-maintained sources associated with seasonal influenza. We fit two models that forecast the number of national incident influenza hospitalisations in the USA: (1) a data-rich model incorporating data from all seven Government data sources; and (2) a data-poor model built using a single Government hospitalisation data source, representing the minimal information required to produce a forecast of influenza hospitalisations. The data-rich model generated reliable forecasts useful for public health decision making, whereas the predictions from the data-poor model were highly uncertain, rendering them impractical. Health data can thus serve as a transparent and standardised foundation for improving domestic and global health, and a plan should be developed to safeguard public health data as a public good…(More)”.
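
To make the data-rich versus data-poor contrast concrete, here is a minimal, self-contained sketch in Python. It is not the paper's method: the surveillance signals, coefficients, ordinary-least-squares fit, and crude residual-based intervals are all illustrative assumptions. It simply shows how adding informative data streams tends to narrow a forecast's prediction interval, while a single lagged series leaves it wide.

```python
# Toy comparison of a "data-rich" vs a "data-poor" influenza-hospitalisation
# forecaster on simulated weekly data. Illustrative assumptions throughout;
# not the models or data sources used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 150

# Simulated surveillance signals standing in for multiple government sources.
ili_visits = 5 + 2 * np.sin(np.arange(n_weeks) * 2 * np.pi / 52) + rng.normal(0, 0.3, n_weeks)
lab_positivity = 0.5 * ili_visits + rng.normal(0, 0.3, n_weeks)
hosp = 10 + 3 * ili_visits + 2 * lab_positivity + rng.normal(0, 1.0, n_weeks)

lagged_hosp = hosp[:-1]   # last week's hospitalisations (the single "data-poor" input)
y = hosp[1:]              # this week's hospitalisations (forecast target)

def interval_width(features, target, train=120):
    """Fit OLS on a training window; return the width of a crude 95%
    prediction interval implied by the training residuals."""
    X = np.column_stack([np.ones(len(target)), features])
    beta, *_ = np.linalg.lstsq(X[:train], target[:train], rcond=None)
    resid_sd = np.std(target[:train] - X[:train] @ beta)
    return 2 * 1.96 * resid_sd

rich = interval_width(np.column_stack([lagged_hosp, ili_visits[1:], lab_positivity[1:]]), y)
poor = interval_width(lagged_hosp[:, np.newaxis], y)

print(f"data-rich 95% interval width: {rich:.2f}")
print(f"data-poor 95% interval width: {poor:.2f}")  # wider interval, less actionable
```

Under these assumptions the single-source interval comes out markedly wider, mirroring the qualitative gap the paper reports between its data-rich and data-poor forecasts.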

Public AI White Paper – A Public Alternative to Private AI Dominance


White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.

The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.