Glorious RAGs : A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
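The excerpt names Retrieval-Augmented Generation without unpacking it. The following is a minimal sketch of the idea, with a hypothetical nonprofit knowledge base; real deployments use embedding models and a vector store for retrieval, whereas this sketch substitutes simple keyword overlap to stay self-contained, and stops at prompt construction rather than calling an actual LLM.

```python
# Minimal RAG sketch (illustrative only): retrieve relevant passages from an
# organization's own documents, then ground the model's answer in them.
# The "guardrail" Fruchterman describes is the instruction to answer only
# from the retrieved context, which limits hallucination.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k.
    A stand-in for embedding-based similarity search."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt from the retrieved passages."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical knowledge base for a nonprofit's help desk.
kb = [
    "Our food bank is open Monday through Friday, 9am to 5pm.",
    "Volunteers must complete a one-hour safety orientation.",
    "Donations of canned goods are accepted at the rear entrance.",
]
prompt = build_prompt("When is the food bank open?", kb)
```

The resulting `prompt` would then be sent to any LLM; because the model is told to rely only on the organization's own documents, its answers stay within material the organization has vetted.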

Understanding and Addressing Misinformation About Science


Report by National Academies of Sciences, Engineering, and Medicine: “Our current information ecosystem makes it easier for misinformation about science to spread and harder for people to figure out what is scientifically accurate. Proactive solutions are needed to address misinformation about science, an issue of public concern given its potential to cause harm at individual, community, and societal levels. Improving access to high-quality scientific information can fill information voids that exist for topics of interest to people, reducing the likelihood of exposure to and uptake of misinformation about science. Misinformation is commonly perceived as a matter of bad actors maliciously misleading the public, but misinformation about science arises both intentionally and inadvertently and from a wide range of sources…(More)”.

Bad Public Policy: Malignity, Volatility and the Inherent Vices of Policymaking


Book: "Policy studies assume the existence of baseline parameters – such as honest governments doing their best to create public value, publics responding in good faith, and both parties relying on a policy-making process which aligns with the public interest. In such circumstances, policy goals are expected to be produced through mechanisms in which the public can articulate its preferences and policy-makers are expected to listen to what has been said in determining their governments' courses of action. While these conditions are found in some governments, there is evidence from around the world that much policy-making occurs without these pre-conditions and processes. Unlike situations which produce what can be thought of as 'good' public policy, 'bad' public policy is a more common outcome. How this happens and what makes for bad public policy are the subjects of this Element…(More)".

Rebooting the global consensus: Norm entrepreneurship, data governance and the inalienability of digital bodies


Paper by Siddharth Peter de Souza and Linnet Taylor: "The establishment of norms among states is a common way of governing international actions. This article analyses the potential of norm-building for governing data and artificial intelligence technologies' collective effects. Rather than focusing on state actors' ability to establish and enforce norms, however, we identify a contrasting process taking place among civil society organisations in response to the international neoliberal consensus on the commodification of data. The norm we identify – 'nothing about us without us' – asserts civil society's agency, and specifically the right of those represented in datasets to give or refuse permission through structures of democratic representation. We argue that this represents a form of norm-building that should be taken as seriously as that of states, and analyse how it is constructing the political power, relations, and resources to engage in governing technology at scale. We first outline how this counter-norming is anchored in data's connections to bodies, land, community, and labour. We explore the history of formal international norm-making and the current norm-making work being done by civil society organisations internationally, and argue that these, although very different in their configurations and strategies, are comparable in scale and scope. Based on this, we make two assertions: first, that a norm-making lens is a useful way for both civil society and research to frame challenges to the primacy of market logics in law and governance, and second, that the conceptual exclusion of civil society actors as norm-makers is an obstacle to the recognition of counter-power in those spheres…(More)".

Mini-Publics and Party Ideology: Who Commissioned the Deliberative Wave in Europe?


Paper by Rodrigo Ramis-Moyano et al: "The increasing implementation of deliberative mini-publics (DMPs) such as Citizens' Assemblies and Citizens' Juries led the OECD to identify a 'deliberative wave'. The burgeoning scholarship on DMPs has increased understanding of how they operate and their impact, but less attention has been paid to the drivers behind this diffusion. Existing research on democratic innovations has underlined the role of the governing party's ideology as a relevant variable in the study of the adoption of other procedures such as participatory budgeting, placing left-wing parties as a prominent actor in this process. Unlike this previous literature, we have little understanding of whether mini-publics appeal equally across the ideological spectrum. This paper draws on the large-N OECD database to analyse the impact of governing party affiliation on the commissioning of DMPs in Europe across the last four decades. Our analysis finds the ideological pattern of adoption is less clear-cut compared to other democratic innovations such as participatory budgeting. But stronger ideological differentiation emerges when we pay close attention to the design features of the DMPs implemented…(More)".

The Weaponization of Expertise


Book by Jacob Hale Russell and Dennis Patterson: “Experts are not infallible. Treating them as such has done us all a grave disservice and, as The Weaponization of Expertise makes painfully clear, given rise to the very populism that all-knowing experts and their elite coterie decry. Jacob Hale Russell and Dennis Patterson use the devastating example of the COVID-19 pandemic to illustrate their case, revealing how the hubris of all-too-human experts undermined—perhaps irreparably—public faith in elite policymaking. Paradoxically, by turning science into dogmatism, the overweening elite response has also proved deeply corrosive to expertise itself—in effect, doing exactly what elite policymakers accuse their critics of doing.

A much-needed corrective to a dangerous blind faith in expertise, The Weaponization of Expertise identifies a cluster of pathologies that have enveloped many institutions meant to help referee expert knowledge, in particular a disavowal of the doubt, uncertainty, and counterarguments that are crucial to the accumulation of knowledge. At a time when trust in expertise and faith in institutions are most needed and most lacking, this work issues a stark reminder that a crisis of misinformation may well begin at the top…(More)”.

Inquiry as Infrastructure: Defining Good Questions in the Age of Data and AI


Paper by Stefaan Verhulst: "The most consequential failures in data-driven policymaking and AI deployment often stem not from poor models or inadequate datasets but from poorly framed questions. This paper centers question literacy as a critical yet underdeveloped competency in the data and policy landscape. Arguing for a "new science of questions," it explores what constitutes a good question: one that is not only technically feasible but also ethically grounded, socially legitimate, and aligned with real-world needs. Drawing on insights from The GovLab's 100 Questions Initiative, the paper develops a taxonomy of question types (descriptive, diagnostic, predictive, and prescriptive) and identifies five essential criteria for question quality: questions must be general yet concrete, co-designed with affected communities and domain experts, purpose-driven and ethically sound, grounded in data and technical realities, and capable of evolving through iterative refinement. The paper also outlines common pathologies of bad questions, such as vague formulation, biased framing, and solution-first thinking. Rather than treating questions as incidental to analysis, it argues for institutionalizing deliberate question design through tools like Q-Labs, question maturity models, and new professional roles for data stewards. Ultimately, the paper contends that questions are infrastructures of meaning. What we ask shapes not only what data we collect or what models we build but also what values we uphold and what futures we make possible…(More)".

Guiding the provision of quality policy advice: the 5D model


Paper by Christopher Walker and Sally Washington: “… presents a process model to guide the production of quality policy advice. The work draws on engagement with both public sector practitioners and academics to design a process model for the development of policy advice that works in practice (can be used by policy professionals in their day-to-day work) and aligns with theory (can be taught as part of explaining the dynamics of a wider policy advisory system). The 5D Model defines five key domains of inquiry: understanding Demand, being open to Discovery, undertaking Design, identifying critical Decision points, and shaping advice to enable Delivery. Our goal is a ‘repeatable, scalable’ model for supporting policy practitioners to provide quality advice to decision makers. The model was developed and tested through an extensive process of engagement with senior policy practitioners who noted the heuristic gave structure to practices that determine how policy advice is organized and formulated. Academic colleagues confirmed the utility of the model for explaining and teaching how policy is designed and delivered within the context of a wider policy advisory system (PAS). A unique aspect of this work was the collaboration and shared interest amongst academics and practitioners to define a model that is ‘useful for teaching’ and ‘useful for doing’…(More)”.

Brazil’s AI-powered social security app is wrongly rejecting claims


Article by Gabriel Daros: “Brazil’s social security institute, known as INSS, added AI to its app in 2018 in an effort to cut red tape and speed up claims. The office, known for its long lines and wait times, had around 2 million pending requests for everything from doctor’s appointments to sick pay to pensions to retirement benefits at the time. While the AI-powered tool has since helped process thousands of basic claims, it has also rejected requests from hundreds of people like de Brito — who live in remote areas and have little digital literacy — for minor errors.

The government is right to digitize its systems to improve efficiency, but that has come at a cost, Edjane Rodrigues, secretary for social policies at the National Confederation of Workers in Agriculture, told Rest of World.

“If the government adopts this kind of service to speed up benefits for the people, this is good. We are not against it,” she said. But, particularly among farm workers, claims can be complex because of the nature of their work, she said, referring to cases that require additional paperwork, such as when a piece of land is owned by one individual but worked by a group of families. “There are many peculiarities in agriculture, and rural workers are being especially harmed” by the app, according to Rodrigues.

“Each automated decision is based on specified legal criteria, ensuring that the standards set by the social security legislation are respected,” a spokesperson for INSS told Rest of World. “Automation does not work in an arbitrary manner. Instead, it follows clear rules and regulations, mirroring the expected standards applied in conventional analysis.”

Governments across Latin America have been introducing AI to improve their processes. Last year, Argentina began using ChatGPT to draft court rulings, a move that officials said helped cut legal costs and reduce processing times. Costa Rica has partnered with Microsoft to launch an AI tool to optimize tax data collection and check for fraud in digital tax receipts. El Salvador recently set up an AI lab to develop tools for government services.

But while some of these efforts have delivered promising results, experts have raised concerns about the risk of officials with little tech know-how applying these tools with no transparency or workarounds…(More)”.

Exit to Open


Article by Jim Fruchterman and Steve Francis: “What happens when a nonprofit program or an entire organization needs to shut down? The communities being served, and often society as a whole, are the losers. What if it were possible to mitigate some of that damage by sharing valuable intellectual property assets of the closing effort for longer term benefit? Organizations in these tough circumstances must give serious thought to a responsible exit for their intangible assets.

At the present moment of unparalleled disruption, the entire nonprofit sector is rethinking everything: language to describe their work, funding sources, partnerships, and even their continued existence. Nonprofit programs and entire charities will be closing, or being merged out of existence. Difficult choices are being made. Who will fill the role of witness and archivist to preserve the knowledge of these organizations, their writings, media, software, and data, for those who carry on, either now or in the future?

We believe leaders in these tough days should consider a model we’re calling Exit to Open (E2O) and related exit concepts to safeguard these assets going forward…

Exit to Open (E2O) exploits three elements:

  1. We are in an era where the cost of digital preservation is low; storing a few more bytes for a long time is cheap.
  2. It’s far more effective for an organization’s staff to isolate and archive critical content than for an outsider with limited knowledge to attempt to do so later.
  3. These resources are of greatest use if there is a human available to interpret them, and a deliberate archival process allows for the identification of these potential interpreters…(More)”.