Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good


Book by Nathan Chappell and Scott Rosenkrans: “…an insightful and practical overview of how purpose-driven organizations can use AI to increase their impact and advance their missions. The authors offer an all-encompassing guide to understanding the promise and peril of implementing AI in the nonprofit sector, addressing both the theoretical and hands-on aspects of this necessary transformation.

The book provides you with case studies, practical tools, ethical frameworks and templates you can use to address the challenges of AI adoption – including ethical limitations – head-on. It draws on the authors’ thirty years of combined experience in the nonprofit industry to help you equip your nonprofit stakeholders with the knowledge and tools they need to successfully navigate the AI revolution.

You’ll also find:

  • Innovative and proven approaches to responsible and beneficial AI implementation taken by real-world organizations that will inspire and guide you as you move forward
  • Strategic planning, project management, and data governance templates and resources you can use immediately in your own nonprofit
  • Information on available AI training programs and resources to build AI fluency and capacity within nonprofit organizations
  • Best practices for ensuring AI systems are transparent, accountable, and aligned with the mission and values of nonprofit organizations…(More)”.

The Dangers of AI Nationalism and Beggar-Thy-Neighbour Policies


Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:

– What are nations doing to nurture AI capacity within their borders?

– Are some of these actions trade-distorting?

– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?

– What are the implications of such trade-distorting actions?

The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors, for example through large investment sums or beggar-thy-neighbour policies, are not a good way to build trust…(More)”.

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
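The core RAG pattern the blog describes can be sketched in a few lines: retrieve the passages most relevant to a question from an organization’s own documents, then ground the model’s answer in those passages. The corpus, the word-overlap scoring, and the prompt format below are all illustrative stand-ins, assuming a toy setup rather than a real vector store or LLM API.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# The corpus, scoring function, and prompt template are illustrative
# stand-ins for a real vector store and language model.

def retrieve(query, corpus, k=1):
    """Rank documents by simple word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Ground the model by pasting retrieved passages into the prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Our food bank served 12,000 families in 2023.",
    "Volunteer orientation happens every first Monday.",
]
query = "How many families did the food bank serve?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
print(prompt)
```

The guardrail is in the prompt: the model is instructed to answer only from retrieved organizational content, which is what makes RAG a safer fit for nonprofits than open-ended generation.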

AI Agents in Global Governance: Digital Representation for Unheard Voices


Book by Eduardo Albrecht: “Governments now routinely use AI-based software to gather information about citizens and determine the level of privacy a person can enjoy, how far they can travel, what public benefits they may receive, and what they can and cannot say publicly. What input do citizens have in how these machines think?

In Political Automation, Eduardo Albrecht explores this question in various domains, including policing, national security, and international peacekeeping. Drawing upon interviews with rights activists, Albrecht examines popular attempts to interact with this novel form of algorithmic governance so far. He then proposes the idea of a Third House, a virtual chamber that legislates exclusively on AI in government decision-making and is based on principles of direct democracy, unlike existing upper and lower houses that are representative. Digital citizens, AI-powered replicas of ourselves, would act as our personal emissaries to this Third House. An in-depth look at how political automation impacts the lives of citizens, this book addresses the challenges at the heart of automation in public policy decision-making and offers a way forward…(More)”.

A matter of choice: People and possibilities in the age of AI


UNDP Human Development Report 2025: “Artificial intelligence (AI) has broken into a dizzying gallop. While AI feats grab headlines, they privilege technology in a make-believe vacuum, obscuring what really matters: people’s choices.

The choices that people have and can realize, within ever expanding freedoms, are essential to human development, whose goal is for people to live lives they value and have reason to value. A world with AI is flush with choices the exercise of which is both a matter of human development and a means to advance it.

Going forward, development depends less on what AI can do—not on how human-like it is perceived to be—and more on mobilizing people’s imaginations to reshape economies and societies to make the most of it. Instead of trying vainly to predict what will happen, this year’s Human Development Report asks what choices can be made so that new development pathways for all countries dot the horizon, helping everyone have a shot at thriving in a world with AI…(More)”.

Charting the AI for Good Landscape – A New Look


Article by Perry Hewitt and Jake Porway: “More than 50% of nonprofits report that their organization uses generative AI in day-to-day operations. We’ve also seen an explosion of AI tools and investments. Ten percent of all the AI companies that exist in the US were founded in 2022, and that number has likely grown in subsequent years. With investors funneling over $300B into AI and machine learning startups, it’s unlikely this trend will reverse any time soon.

Not surprisingly, the conversation about Artificial Intelligence (AI) is now everywhere, spanning from commercial uses such as virtual assistants and consumer AI to public goods, like AI-driven drug discovery and chatbots for education. The dizzying amount of new AI programs and initiatives – over 5000 new tools listed in 2023 on AI directories like TheresAnAI alone – can make the AI landscape challenging to navigate in general, much less for social impact. Luckily, four years ago, we surveyed the Data and AI for Good landscape and mapped out distinct families of initiatives based on their core goals. Today, we are revisiting that landscape to help folks get a handle on the AI for Good landscape today and to reflect on how the field has expanded, diversified, and matured…(More)”.

Smart Cities: Technologies and Policy Options to Enhance Services and Transparency


GAO Report: “Cities across the nation are using “smart city” technologies like traffic cameras and gunshot detectors to improve public services. In this technology assessment, we looked at their use in transportation and law enforcement.

Experts and city officials reported multiple benefits. For example, Houston uses cameras and Bluetooth sensors to measure traffic flow and adjust signal timing. Other cities use license plate readers to find stolen vehicles.

But the technologies can be costly and the benefits unclear. The data they collect may be sold, raising privacy and civil liberties concerns. We offer three policy options to address such challenges…(More)”.

Data Commons: The Missing Infrastructure for Public Interest Artificial Intelligence


Article by Stefaan Verhulst, Burton Davis and Andrew Schroeder: “Artificial intelligence is celebrated as the defining technology of our time. From ChatGPT to Copilot and beyond, generative AI systems are reshaping how we work, learn, and govern. But behind the headline-grabbing breakthroughs lies a fundamental problem: The data these systems depend on to produce useful results that serve the public interest is increasingly out of reach.

Without access to diverse, high-quality datasets, AI models risk reinforcing bias, deepening inequality, and returning less accurate, less reliable results. Yet, access to data remains fragmented, siloed, and increasingly enclosed. What was once open—government records, scientific research, public media—is now locked away by proprietary terms, outdated policies, or simple neglect. We are entering a data winter just as AI’s influence over public life is heating up.

This isn’t just a technical glitch. It’s a structural failure. What we urgently need is new infrastructure: data commons.

A data commons is a shared pool of data resources—responsibly governed, managed using participatory approaches, and made available for reuse in the public interest. Done correctly, commons can ensure that communities and other networks have a say in how their data is used, that public interest organizations can access the data they need, and that the benefits of AI can be applied to meet societal challenges.

Commons offer a practical response to the paradox of data scarcity amid abundance. By pooling datasets across organizations—governments, universities, libraries, and more—they match data supply with real-world demand, making it easier to build AI that responds to public needs.

We’re already seeing early signs of what this future might look like. Projects like Common Corpus, MLCommons, and Harvard’s Institutional Data Initiative show how diverse institutions can collaborate to make data both accessible and accountable. These initiatives emphasize open standards, participatory governance, and responsible reuse. They challenge the idea that data must be either locked up or left unprotected, offering a third way rooted in shared value and public purpose.

But the pace of progress isn’t matching the urgency of the moment. While policymakers debate AI regulation, they often ignore the infrastructure that makes public interest applications possible in the first place. Without better access to high-quality, responsibly governed data, AI for the common good will remain more aspiration than reality.

That’s why we’re launching The New Commons Challenge—a call to action for universities, libraries, civil society, and technologists to build data ecosystems that fuel public-interest AI…(More)”.

These Startups Are Building Advanced AI Models Without Data Centers


Article by Will Knight: “Researchers have trained a new kind of large language model (LLM) using GPUs dotted across the world and fed private as well as public data—a move that suggests that the dominant way of building artificial intelligence could be disrupted.

Flower AI and Vana, two startups pursuing unconventional approaches to building AI, worked together to create the new model, called Collective-1.

Flower created techniques that allow training to be spread across hundreds of computers connected over the internet. The company’s technology is already used by some firms to train AI models without needing to pool compute resources or data. Vana provided sources of data including private messages from X, Reddit, and Telegram.

Collective-1 is small by modern standards, with 7 billion parameters—values that combine to give the model its abilities—compared to hundreds of billions for today’s most advanced models, such as those that power programs like ChatGPT, Claude, and Gemini.

Nic Lane, a computer scientist at the University of Cambridge and cofounder of Flower AI, says that the distributed approach promises to scale far beyond the size of Collective-1. Lane adds that Flower AI is partway through training a model with 30 billion parameters using conventional data, and plans to train another model with 100 billion parameters—close to the size offered by industry leaders—later this year. “It could really change the way everyone thinks about AI, so we’re chasing this pretty hard,” Lane says. He says the startup is also incorporating images and audio into training to create multimodal models.
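The idea of training across machines that never pool their data can be sketched with a toy federated-averaging loop: each worker computes a gradient on its own local slice, and only the gradients are shared and averaged. This is a minimal illustration of the general principle, assuming a one-parameter linear model; Flower’s actual techniques and APIs are far more sophisticated.

```python
# Toy sketch of federated averaging, the idea behind training models
# across computers that keep their data local: each worker computes a
# gradient on its own data, and only gradients (never raw data) are
# pooled and averaged. Illustrative only; not Flower AI's real API.

def local_gradient(w, data):
    """Gradient of mean squared error for a 1-D linear model y = w*x."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def federated_step(w, workers, lr=0.05):
    grads = [local_gradient(w, data) for data in workers]  # computed locally
    avg = sum(grads) / len(grads)                          # only grads shared
    return w - lr * avg

# Two workers hold disjoint slices of data generated by y = 2x.
workers = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
w = 0.0
for _ in range(200):
    w = federated_step(w, workers)
print(round(w, 2))
```

Scaling this pattern to billions of parameters over the public internet is the hard engineering problem the article describes: the averaging step is cheap, but synchronizing large gradient updates across hundreds of distant machines is not.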

Distributed model-building could also unsettle the power dynamics that have shaped the AI industry…(More)”

AI action plan database


A project by the Institute for Progress: “In January 2025, President Trump tasked the Office of Science and Technology Policy with creating an AI Action Plan to promote American AI Leadership. The government requested input from the public, and received 10,068 submissions. The database below summarizes specific recommendations from these submissions. … We used AI to extract recommendations from each submission, and to tag them with relevant information. Click on a recommendation to learn more about it. See our analysis of common themes and ideas across these recommendations…(More)”.