Governing in the Age of AI: Reimagining Local Government


Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.

Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.

These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.

Co-Designing AI Systems with Value-Sensitive Citizen Science


Paper by Sachit Mahajan and Dirk Helbing: “As artificial intelligence (AI) systems increasingly shape everyday life, integrating diverse community values into their development becomes both an ethical imperative and a practical necessity. This paper introduces Value Sensitive Citizen Science (VSCS), a systematic framework combining Value Sensitive Design (VSD) principles with citizen science methods to foster meaningful public participation in AI. Addressing critical gaps in existing approaches, VSCS integrates culturally grounded participatory methods and structured cognitive scaffolding through the Participatory Value-Cognition Taxonomy (PVCT). Through iterative value-sensitive participation cycles guided by an extended scenario logic (What-if, If-then, Then-what, What-now), community members act as genuine co-researchers: identifying, translating, and operationalizing local values into concrete technical requirements. The framework also institutionalizes governance structures for ongoing oversight, adaptability, and accountability across the AI lifecycle. By explicitly bridging participatory design with algorithmic accountability, VSCS ensures that AI systems reflect evolving community priorities rather than reinforcing top-down or monocultural perspectives. Critical discussions highlight VSCS’s practical implications, addressing challenges such as power dynamics, scalability, and epistemic justice. The paper concludes by outlining actionable strategies for policymakers and practitioners, alongside future research directions aimed at advancing participatory, value-driven AI development across diverse technical and sociocultural contexts…(More)”.
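
The four-phase scenario logic reads naturally as an iterative loop. As a purely illustrative sketch (the phase prompts, function names, and example proposal below are our assumptions, not the authors' implementation), one round of the cycle might look like this in Python:

```python
# Hedged sketch of the What-if / If-then / Then-what / What-now cycle.
# Phase prompts and names are illustrative assumptions, not the paper's code.
SCENARIO_LOGIC = [
    ("What-if", "Which community values could this AI feature affect?"),
    ("If-then", "If built this way, what follows for those values?"),
    ("Then-what", "What consequences must be monitored or mitigated?"),
    ("What-now", "Which concrete technical requirements do we adopt?"),
]

def participation_round(proposal: str, round_no: int) -> list[str]:
    # One iteration of the value-sensitive participation cycle:
    # co-researchers walk a proposal through all four phases in order.
    return [
        f"Round {round_no} | {phase}: {prompt}  [proposal: {proposal}]"
        for phase, prompt in SCENARIO_LOGIC
    ]

# Iteration is the point: each round's answers reshape the next round's What-if.
for line in participation_round("automated benefits triage", round_no=1):
    print(line)
```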


Balancing Data Sharing and Privacy to Enhance Integrity and Trust in Government Programs


Paper by the National Academy of Public Administration: “Improper payments and fraud cost the federal government hundreds of billions of dollars each year, wasting taxpayer money and eroding public trust. At the same time, agencies are increasingly expected to do more with less. Finding better ways to share data, without compromising privacy, is critical for ensuring program integrity in a resource-constrained environment.

Key Takeaways

  • Data sharing strengthens program integrity and fraud prevention. Agencies and oversight bodies like GAO and OIGs have uncovered large-scale fraud by using shared data.
  • Opportunities exist to streamline and expedite the compliance processes required by privacy laws and reduce systemic barriers to sharing data across federal agencies.
  • Targeted reforms can address these barriers while protecting privacy:
    1. OMB could issue guidance to authorize fraud prevention as a routine use in System of Records Notices.
    2. Congress could enact special authorities or exemptions for data sharing that supports program integrity and fraud prevention.
    3. A centralized data platform could help to drive cultural change and support secure, responsible data sharing…(More)”

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile), or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
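
To make the pattern concrete, here is a minimal, self-contained sketch of a RAG pipeline in Python. It is a sketch under stated assumptions, not a production recipe: the toy corpus, names, and prompt wording are ours, and a simple word-overlap retriever stands in for the embedding models and vector stores a real system would use.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for an organisation's knowledge base (illustrative).
DOCUMENTS = [
    "Volunteers can claim travel expenses by submitting form T-1 to the office.",
    "The food bank is open Tuesday and Thursday from 9am to 1pm.",
    "Donations of canned goods are accepted at the rear entrance.",
]

def tokenize(text: str) -> Counter:
    # Lowercase bag-of-words; a production system would use embeddings instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # "Retrieval": rank documents by similarity to the query and keep the top k.
    q = tokenize(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # "Augmented generation": the retrieved passages become guardrails; the
    # model is told to answer only from them, not from open-ended training data.
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The prompt would be sent to whichever LLM the organisation trusts;
    # printing it keeps this sketch self-contained and runnable.
    print(build_prompt("When is the food bank open?"))
```

The guardrail Fruchterman points to lives in the prompt construction: because the model is instructed to answer only from the organisation's own retrieved documents, RAG narrows the space for fabricated answers, which is what makes it a comparatively safer fit for the social sector than open-ended generation.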

Understanding and Addressing Misinformation About Science


Report by National Academies of Sciences, Engineering, and Medicine: “Our current information ecosystem makes it easier for misinformation about science to spread and harder for people to figure out what is scientifically accurate. Proactive solutions are needed to address misinformation about science, an issue of public concern given its potential to cause harm at individual, community, and societal levels. Improving access to high-quality scientific information can fill information voids that exist for topics of interest to people, reducing the likelihood of exposure to and uptake of misinformation about science. Misinformation is commonly perceived as a matter of bad actors maliciously misleading the public, but misinformation about science arises both intentionally and inadvertently, and from a wide range of sources…(More)”.

Bad Public Policy: Malignity, Volatility and the Inherent Vices of Policymaking


Book excerpt: “Policy studies assume the existence of baseline parameters – such as honest governments doing their best to create public value, publics responding in good faith, and both parties relying on a policy-making process which aligns with the public interest. In such circumstances, policy goals are expected to be produced through mechanisms in which the public can articulate its preferences and policy-makers are expected to listen to what has been said in determining their governments’ courses of action. While these conditions are found in some governments, there is evidence from around the world that much policy-making occurs without these pre-conditions and processes. Unlike situations which produce what can be thought of as ‘good’ public policy, ‘bad’ public policy is a more common outcome. How this happens and what makes for bad public policy are the subjects of this Element…(More)”.

Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies


Paper by Siddharth Peter de Souza and Linnet Taylor: “The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies’ collective effects. Rather than focusing on state actors’ ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – ‘nothing about us without us’ – asserts civil society’s agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data’s connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)”.

Mini-Publics and Party Ideology: Who Commissioned the Deliberative Wave in Europe?


Paper by Rodrigo Ramis-Moyano et al.: “The increasing implementation of deliberative mini-publics (DMPs) such as Citizens’ Assemblies and Citizens’ Juries led the OECD to identify a ‘deliberative wave’. The burgeoning scholarship on DMPs has increased understanding of how they operate and their impact, but less attention has been paid to the drivers behind this diffusion. Existing research on democratic innovations has underlined the role of the governing party’s ideology as a relevant variable in the study of the adoption of other procedures such as participatory budgeting, placing left-wing parties as prominent actors in this process. Unlike this previous literature, we have little understanding of whether mini-publics appeal equally across the ideological spectrum. This paper draws on the large-N OECD database to analyse the impact of governing party affiliation on the commissioning of DMPs in Europe across the last four decades. Our analysis finds the ideological pattern of adoption is less clear-cut compared to other democratic innovations such as participatory budgeting. But stronger ideological differentiation emerges when we pay close attention to the design features of the DMPs implemented…(More)”.

The Weaponization of Expertise


Book by Jacob Hale Russell and Dennis Patterson: “Experts are not infallible. Treating them as such has done us all a grave disservice and, as The Weaponization of Expertise makes painfully clear, given rise to the very populism that all-knowing experts and their elite coterie decry. Jacob Hale Russell and Dennis Patterson use the devastating example of the COVID-19 pandemic to illustrate their case, revealing how the hubris of all-too-human experts undermined—perhaps irreparably—public faith in elite policymaking. Paradoxically, by turning science into dogmatism, the overweening elite response has also proved deeply corrosive to expertise itself—in effect, doing exactly what elite policymakers accuse their critics of doing.

A much-needed corrective to a dangerous blind faith in expertise, The Weaponization of Expertise identifies a cluster of pathologies that have enveloped many institutions meant to help referee expert knowledge, in particular a disavowal of the doubt, uncertainty, and counterarguments that are crucial to the accumulation of knowledge. At a time when trust in expertise and faith in institutions are most needed and most lacking, this work issues a stark reminder that a crisis of misinformation may well begin at the top…(More)”.

Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI


Paper by Stefaan Verhulst: “The most consequential failures in data-driven policymaking and AI deployment often stem not from poor models or inadequate datasets but from poorly framed questions. This paper centers question literacy as a critical yet underdeveloped competency in the data and policy landscape. Arguing for a “new science of questions,” it explores what constitutes a good question: one that is not only technically feasible but also ethically grounded, socially legitimate, and aligned with real-world needs. Drawing on insights from The GovLab’s 100 Questions Initiative, the paper develops a taxonomy of question types (descriptive, diagnostic, predictive, and prescriptive) and identifies five essential criteria for question quality: questions must be general yet concrete, co-designed with affected communities and domain experts, purpose-driven and ethically sound, grounded in data and technical realities, and capable of evolving through iterative refinement. The paper also outlines common pathologies of bad questions, such as vague formulation, biased framing, and solution-first thinking. Rather than treating questions as incidental to analysis, it argues for institutionalizing deliberate question design through tools like Q-Labs, question maturity models, and new professional roles for data stewards. Ultimately, the paper contends that questions are infrastructures of meaning. What we ask shapes not only what data we collect or what models we build but also what values we uphold and what futures we make possible…(More)”.
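
The taxonomy and the five criteria lend themselves to a simple review checklist. As a hedged sketch (the class, field, and method names below are our assumptions, not an artifact of the paper or the 100 Questions Initiative), they might be encoded as:

```python
# Illustrative sketch: the paper's four question types and five quality
# criteria as a checklist. Names are our assumptions, not the paper's.
from dataclasses import dataclass
from enum import Enum

class QuestionType(Enum):
    DESCRIPTIVE = "what is happening"
    DIAGNOSTIC = "why is it happening"
    PREDICTIVE = "what is likely to happen"
    PRESCRIPTIVE = "what should be done"

@dataclass
class PolicyQuestion:
    text: str
    qtype: QuestionType
    # The five quality criteria, recorded as review flags.
    general_yet_concrete: bool = False
    co_designed: bool = False
    purposeful_and_ethical: bool = False
    grounded_in_data: bool = False
    iteratively_refined: bool = False

    def quality_gaps(self) -> list[str]:
        # Return the criteria a draft question has not yet met.
        flags = {
            "general yet concrete": self.general_yet_concrete,
            "co-designed with communities and experts": self.co_designed,
            "purpose-driven and ethically sound": self.purposeful_and_ethical,
            "grounded in data and technical realities": self.grounded_in_data,
            "open to iterative refinement": self.iteratively_refined,
        }
        return [name for name, ok in flags.items() if not ok]

q = PolicyQuestion(
    "Which neighbourhoods wait longest for care assessments?",
    QuestionType.DESCRIPTIVE,
    grounded_in_data=True,
)
print(q.quality_gaps())  # the four criteria this draft question still fails
```

Treating the criteria as explicit flags mirrors the paper's point that question design should be a deliberate, reviewable step, the kind of check a Q-Lab or data steward could institutionalize, rather than something settled implicitly once analysis begins.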