Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good


Book by Nathan Chappell and Scott Rosenkrans: “…an insightful and practical overview of how purpose-driven organizations can use AI to increase their impact and advance their missions. The authors offer an all-encompassing guide to understanding the promise and peril of implementing AI in the nonprofit sector, addressing both the theoretical and hands-on aspects of this necessary transformation.

The book provides you with case studies, practical tools, ethical frameworks and templates you can use to address the challenges of AI adoption – including ethical limitations – head-on. It draws on the authors’ thirty years of combined experience in the nonprofit industry to help you equip your nonprofit stakeholders with the knowledge and tools they need to successfully navigate the AI revolution.

You’ll also find:

  • Innovative and proven approaches to responsible and beneficial AI implementation taken by real-world organizations that will inspire and guide you as you move forward
  • Strategic planning, project management, and data governance templates and resources you can use immediately in your own nonprofit
  • Information on available AI training programs and resources to build AI fluency and capacity within nonprofit organizations
  • Best practices for ensuring AI systems are transparent, accountable, and aligned with the mission and values of nonprofit organizations…(More)”.

The Dangers of AI Nationalism and Beggar-Thy-Neighbour Policies


Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:

– What are nations doing to nurture AI capacity within their borders?

– Are some of these actions trade-distorting?

– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?

– What are the implications of such trade-distorting actions?

The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors, for example through large investment sums or beggar-thy-neighbour policies, are not a good way to build trust…(More)”.

Balancing Data Sharing and Privacy to Enhance Integrity and Trust in Government Programs


Paper by National Academy of Public Administration: “Improper payments and fraud cost the federal government hundreds of billions of dollars each year, wasting taxpayer money and eroding public trust. At the same time, agencies are increasingly expected to do more with less. Finding better ways to share data, without compromising privacy, is critical for ensuring program integrity in a resource-constrained environment.

Key Takeaways

  • Data sharing strengthens program integrity and fraud prevention. Agencies and oversight bodies like GAO and OIGs have uncovered large-scale fraud by using shared data.
  • Opportunities exist to streamline and expedite the compliance processes required by privacy laws and reduce systemic barriers to sharing data across federal agencies.
  • Targeted reforms can address these barriers while protecting privacy:
    1. OMB could issue guidance to authorize fraud prevention as a routine use in System of Records Notices.
    2. Congress could enact special authorities or exemptions for data sharing that supports program integrity and fraud prevention.
    3. A centralized data platform could help to drive cultural change and support secure, responsible data sharing…(More)”

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
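The post names Retrieval-Augmented Generation without spelling out the mechanics, so here is a much-simplified sketch of the pattern: retrieve the organization's own documents that are relevant to a question, then constrain the model to answer only from them. The toy word-overlap retriever and all names below are illustrative, not from the post; in practice the final prompt would be sent to whichever LLM API the organization uses.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch.
# 1) Retrieve the documents most relevant to a question.
# 2) Paste them into the prompt so the model answers from the
#    organization's own material -- the "guardrail" the post describes.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Constrain the model to the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n"
        f"Context:\n{joined}\nQuestion: {question}"
    )

# Hypothetical nonprofit knowledge base.
docs = [
    "Our food bank served 12,000 households in 2023.",
    "Volunteer onboarding requires a background check.",
    "Donations are tax-deductible under section 501(c)(3).",
]
question = "How many households did the food bank serve?"
prompt = build_prompt(question, retrieve(question, docs))
# `prompt` is then passed to any LLM; grounding answers in retrieved
# text is what makes RAG safer than free-form generation.
```

Real deployments replace the word-overlap scoring with embedding-based search, but the grounding step is the same.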

AI Agents in Global Governance: Digital Representation for Unheard Voices


Book by Eduardo Albrecht: “Governments now routinely use AI-based software to gather information about citizens and determine the level of privacy a person can enjoy, how far they can travel, what public benefits they may receive, and what they can and cannot say publicly. What input do citizens have in how these machines think?

In Political Automation, Eduardo Albrecht explores this question in various domains, including policing, national security, and international peacekeeping. Drawing upon interviews with rights activists, Albrecht examines popular attempts to interact with this novel form of algorithmic governance so far. He then proposes the idea of a Third House, a virtual chamber that legislates exclusively on AI in government decision-making and is based on principles of direct democracy, unlike existing upper and lower houses that are representative. Digital citizens, AI-powered replicas of ourselves, would act as our personal emissaries to this Third House. An in-depth look at how political automation impacts the lives of citizens, this book addresses the challenges at the heart of automation in public policy decision-making and offers a way forward…(More)”.

A matter of choice: People and possibilities in the age of AI


UNDP Human Development Report 2025: “Artificial intelligence (AI) has broken into a dizzying gallop. While AI feats grab headlines, they privilege technology in a make-believe vacuum, obscuring what really matters: people’s choices.

The choices that people have and can realize, within ever expanding freedoms, are essential to human development, whose goal is for people to live lives they value and have reason to value. A world with AI is flush with choices the exercise of which is both a matter of human development and a means to advance it.

Going forward, development depends less on what AI can do—not on how human-like it is perceived to be—and more on mobilizing people’s imaginations to reshape economies and societies to make the most of it. Instead of trying vainly to predict what will happen, this year’s Human Development Report asks what choices can be made so that new development pathways for all countries dot the horizon, helping everyone have a shot at thriving in a world with AI…(More)”.

Charting the AI for Good Landscape – A New Look


Article by Perry Hewitt and Jake Porway: “More than 50% of nonprofits report that their organization uses generative AI in day-to-day operations. We’ve also seen an explosion of AI tools and investments. 10% of all the AI companies that exist in the US were founded in 2022, and that number has likely grown in subsequent years.  With investors funneling over $300B into AI and machine learning startups, it’s unlikely this trend will reverse any time soon.

Not surprisingly, the conversation about Artificial Intelligence (AI) is now everywhere, spanning from commercial uses such as virtual assistants and consumer AI to public goods, like AI-driven drug discovery and chatbots for education. The dizzying amount of new AI programs and initiatives – over 5000 new tools listed in 2023 on AI directories like TheresAnAI alone – can make the AI landscape challenging to navigate in general, much less for social impact. Luckily, four years ago, we surveyed the Data and AI for Good landscape and mapped out distinct families of initiatives based on their core goals. Today, we are revisiting that landscape to help folks get a handle on the AI for Good landscape today and to reflect on how the field has expanded, diversified, and matured…(More)”.

Smart Cities: Technologies and Policy Options to Enhance Services and Transparency


GAO Report: “Cities across the nation are using “smart city” technologies like traffic cameras and gunshot detectors to improve public services. In this technology assessment, we looked at their use in transportation and law enforcement.

Experts and city officials reported multiple benefits. For example, Houston uses cameras and Bluetooth sensors to measure traffic flow and adjust signal timing. Other cities use license plate readers to find stolen vehicles.

But the technologies can be costly and the benefits unclear. The data they collect may be sold, raising privacy and civil liberties concerns. We offer three policy options to address such challenges…(More)”.

Data Commons: The Missing Infrastructure for Public Interest Artificial Intelligence


Article by Stefaan Verhulst, Burton Davis and Andrew Schroeder: “Artificial intelligence is celebrated as the defining technology of our time. From ChatGPT to Copilot and beyond, generative AI systems are reshaping how we work, learn, and govern. But behind the headline-grabbing breakthroughs lies a fundamental problem: The data these systems depend on to produce useful results that serve the public interest is increasingly out of reach.

Without access to diverse, high-quality datasets, AI models risk reinforcing bias, deepening inequality, and returning less accurate, less precise results. Yet, access to data remains fragmented, siloed, and increasingly enclosed. What was once open—government records, scientific research, public media—is now locked away by proprietary terms, outdated policies, or simple neglect. We are entering a data winter just as AI’s influence over public life is heating up.

This isn’t just a technical glitch. It’s a structural failure. What we urgently need is new infrastructure: data commons.

A data commons is a shared pool of data resources—responsibly governed, managed using participatory approaches, and made available for reuse in the public interest. Done correctly, commons can ensure that communities and other networks have a say in how their data is used, that public interest organizations can access the data they need, and that the benefits of AI can be applied to meet societal challenges.

Commons offer a practical response to the paradox of data scarcity amid abundance. By pooling datasets across organizations—governments, universities, libraries, and more—they match data supply with real-world demand, making it easier to build AI that responds to public needs.

We’re already seeing early signs of what this future might look like. Projects like Common Corpus, MLCommons, and Harvard’s Institutional Data Initiative show how diverse institutions can collaborate to make data both accessible and accountable. These initiatives emphasize open standards, participatory governance, and responsible reuse. They challenge the idea that data must be either locked up or left unprotected, offering a third way rooted in shared value and public purpose.

But the pace of progress isn’t matching the urgency of the moment. While policymakers debate AI regulation, they often ignore the infrastructure that makes public interest applications possible in the first place. Without better access to high-quality, responsibly governed data, AI for the common good will remain more aspiration than reality.

That’s why we’re launching The New Commons Challenge—a call to action for universities, libraries, civil society, and technologists to build data ecosystems that fuel public-interest AI…(More)”.

Real-time prices, real results: comparing crowdsourcing, AI, and traditional data collection


Article by Julius Adewopo, Bo Andree, Zacharey Carmichael, Steve Penson, Kamwoo Lee: “Timely, high-quality food price data is essential for shock-responsive decision-making. However, in many low- and middle-income countries, such data is often delayed, limited in geographic coverage, or unavailable due to operational constraints. Traditional price monitoring, which relies on structured surveys conducted by trained enumerators, is often constrained by challenges related to cost, frequency, and reach.

To help overcome these limitations, the World Bank launched the Real-Time Prices (RTP) data platform. This effort provides monthly price data using a machine learning framework. The models combine survey results with predictions derived from observations in nearby markets and related commodities. This approach helps fill gaps in local price data across a basket of goods, enabling real-time monitoring of inflation dynamics even when survey data is incomplete or irregular.
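The gap-filling idea described above can be illustrated with a toy sketch: when a market's price is missing, estimate it from observed prices in related markets or commodities, weighted by how similar those series are. The RTP platform uses a trained machine learning framework; the weighted average and all names below are only an illustration of the underlying intuition, not the Bank's actual model.

```python
# Toy illustration of price-gap imputation: when a market's price for a
# month is missing, estimate it from related series that were observed.
# (A simplification of the ML approach described in the article.)

def impute_price(
    target: str,
    observed: dict[str, float],
    weights: dict[tuple[str, str], float],
) -> float:
    """Weighted average of observed related series.

    `weights` encodes similarity between the target series and each
    observed series (e.g. geographic proximity, same commodity).
    """
    num = den = 0.0
    for series, price in observed.items():
        w = weights.get((target, series), 0.0)
        num += w * price
        den += w
    if den == 0:
        raise ValueError(f"no related series observed for {target}")
    return num / den

# Maize price missing in market A; nearby markets B and C reported.
observed = {"maize@B": 102.0, "maize@C": 98.0}
weights = {("maize@A", "maize@B"): 0.7, ("maize@A", "maize@C"): 0.3}
estimate = impute_price("maize@A", observed, weights)
# 0.7 * 102.0 + 0.3 * 98.0 = 100.8
```

A production system would learn the weights from historical co-movement of prices rather than fixing them by hand.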

In parallel, new approaches—such as citizen-submitted (crowdsourced) data—are being explored to complement conventional data collection methods. These crowdsourced data were recently published in a Nature Scientific Data paper. While the adoption of these innovations is accelerating, maintaining trust requires rigorous validation.

A newly published study in PLOS compares the two emerging methods with the traditional, enumerator-led gold standard, providing new evidence that both crowdsourced and AI-imputed prices can serve as credible, timely alternatives to traditional ground-truth data collection, especially in contexts where conventional methods face limitations…(More)”.