Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.
The Importance of Co-Designing Questions: 10 Lessons from Inquiry-Driven Grantmaking
Article by Hannah Chafetz and Stefaan Verhulst: “How can a question-based approach to philanthropy enable better learning and deeper evaluation across both sides of the partnership and help make progress towards long-term systemic change? That’s what Siegel Family Endowment (Siegel), a family foundation based in New York City, sought to answer by creating an Inquiry-Driven Grantmaking approach.
While many philanthropies continue to follow traditional practices that focus on achieving a set of strategic objectives, Siegel employs an inquiry-driven approach, which focuses on answering questions that can accelerate insights and iteration across the systems they seek to change. By framing their goal as “learning” rather than an “outcome” or “metric,” they aim to generate knowledge that can be shared across the whole field and unlock impact beyond the work on individual grants.
The Siegel approach centers on co-designing and iteratively refining questions with grantees to address evolving strategic priorities, using rapid iteration and stakeholder engagement to generate insights that inform both grantee efforts and the foundation’s decision-making.
Their approach was piloted in 2020, and refined and operationalized in the years that followed. As of 2024, it was applied across the vast majority of their grantmaking portfolio. Laura Maher, Chief of Staff and Director of External Engagement at Siegel Family Endowment, notes: “Before our Inquiry-Driven Grantmaking approach we spent roughly 90% of our time on the grant writing process and 10% checking in with grantees, and now that’s balancing out more.”

Image of the Inquiry-Driven Grantmaking Process from the Siegel Family Endowment
Earlier this year, the DATA4Philanthropy team held two in-depth discussions with Siegel’s Knowledge and Impact team about their Inquiry-Driven Grantmaking approach and what they have learned thus far from applying the new methodology. While the Siegel team notes that there is still much to learn, several takeaways can be applied by others looking to initiate a questions-led approach.
Below we provide 10 emerging lessons from these discussions…(More)”.
A World of Unintended Consequences
Essay by Edward Tenner: “One of the great, underappreciated facts about our technology-driven age is that unintended consequences tend to outnumber intended ones. As much as we would like to believe that we are in control, scholars who have studied catastrophic failures have shown that humility is ultimately the only justifiable attitude…
Here’s a story about a revolution that never happened. Nearly 90 years ago, a 26-year-old newly credentialed Harvard sociology PhD and future American Philosophical Society member, Robert K. Merton, published a paper in the American Sociological Review that would become one of the most frequently cited in his discipline: “The Unanticipated Consequences of Purposive Social Action.” While the language of the paper was modest, it offered an obvious but revolutionary insight: many or most phenomena in the social world are unintended – for better or worse. Today, even management gurus like Tom Peters acknowledge that “Unintended consequences outnumber intended consequences. … Strategies rarely unfold as we imagined. Intended consequences are rare.”
Merton had promised a monograph on the history and analysis of the problem, with its “vast scope and manifold implications.” Somewhere along the way, however, he abandoned the project, perhaps because it risked becoming a book about everything. Moreover, his apparent retreat may have discouraged other social scientists from attempting it, revealing one of the paradoxes of the subject’s study: because it is so universal and important, it may be best suited for case studies rather than grand theories.
Ironically, while unintentionality-centered analysis might have produced a Copernican revolution in social science, it is more likely that it would have unleashed adverse unintended consequences for any scholar attempting it – just as Thomas Kuhn’s idea of scientific paradigms embroiled him in decades of controversies. Besides, there are also ideological barriers to the study of unintended consequences. For every enthusiast there seems to be a hater, and dwelling on the unintended consequences of an opponent’s policies invites retaliation in kind.
This was economist Albert O. Hirschman’s point in his own critique of the theme. Hirschman himself had formidable credentials as a student of unintended consequences. One of his most celebrated and controversial ideas, the “hiding hand,” was a spin-off of Adam Smith’s famous metaphor for the market (the invisible hand). In Development Projects Observed, Hirschman noted that many successful programs might never have been launched had all the difficulties been known; but once a commitment was made, human ingenuity prevailed, and new and unforeseen solutions were found. The Sydney Opera House, for example, exceeded its budget by 1,300%, but it turned out to be a bargain once it became Australia’s unofficial icon…(More)”
Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good
Book by Nathan Chappell and Scott Rosenkrans: “…an insightful and practical overview of how purpose-driven organizations can use AI to increase their impact and advance their missions. The authors offer an all-encompassing guide to understanding the promise and peril of implementing AI in the nonprofit sector, addressing both the theoretical and hands-on aspects of this necessary transformation.
The book provides you with case studies, practical tools, ethical frameworks and templates you can use to address the challenges of AI adoption – including ethical limitations – head-on. It draws on the authors’ thirty years of combined experience in the nonprofit industry to help you equip your nonprofit stakeholders with the knowledge and tools they need to successfully navigate the AI revolution.
You’ll also find:
- Innovative and proven approaches to responsible and beneficial AI implementation taken by real-world organizations that will inspire and guide you as you move forward
- Strategic planning, project management, and data governance templates and resources you can use immediately in your own nonprofit
- Information on available AI training programs and resources to build AI fluency and capacity within nonprofit organizations
- Best practices for ensuring AI systems are transparent, accountable, and aligned with the mission and values of nonprofit organizations…(More)”.
AI Agents in Global Governance: Digital Representation for Unheard Voices
Book by Eduardo Albrecht: “Governments now routinely use AI-based software to gather information about citizens and determine the level of privacy a person can enjoy, how far they can travel, what public benefits they may receive, and what they can and cannot say publicly. What input do citizens have in how these machines think?
In Political Automation, Eduardo Albrecht explores this question in various domains, including policing, national security, and international peacekeeping. Drawing upon interviews with rights activists, Albrecht examines popular attempts to interact with this novel form of algorithmic governance so far. He then proposes the idea of a Third House, a virtual chamber that legislates exclusively on AI in government decision-making and is based on principles of direct democracy, unlike existing upper and lower houses that are representative. Digital citizens, AI-powered replicas of ourselves, would act as our personal emissaries to this Third House. An in-depth look at how political automation impacts the lives of citizens, this book addresses the challenges at the heart of automation in public policy decision-making and offers a way forward…(More)”.
A matter of choice: People and possibilities in the age of AI
UNDP Human Development Report 2025: “Artificial intelligence (AI) has broken into a dizzying gallop. While AI feats grab headlines, they privilege technology in a make-believe vacuum, obscuring what really matters: people’s choices.
The choices that people have and can realize, within ever expanding freedoms, are essential to human development, whose goal is for people to live lives they value and have reason to value. A world with AI is flush with choices the exercise of which is both a matter of human development and a means to advance it.
Going forward, development depends less on what AI can do—not on how human-like it is perceived to be—and more on mobilizing people’s imaginations to reshape economies and societies to make the most of it. Instead of trying vainly to predict what will happen, this year’s Human Development Report asks what choices can be made so that new development pathways for all countries dot the horizon, helping everyone have a shot at thriving in a world with AI…(More)”.
Charting the AI for Good Landscape – A New Look
Article by Perry Hewitt and Jake Porway: “More than 50% of nonprofits report that their organizations use generative AI in day-to-day operations. We’ve also seen an explosion of AI tools and investments. 10% of all AI companies in the US were founded in 2022, and that number has likely grown in subsequent years. With investors funneling over $300B into AI and machine learning startups, it’s unlikely this trend will reverse any time soon.
Not surprisingly, the conversation about Artificial Intelligence (AI) is now everywhere, spanning from commercial uses such as virtual assistants and consumer AI to public goods, like AI-driven drug discovery and chatbots for education. The dizzying number of new AI programs and initiatives – over 5,000 new tools listed in 2023 on AI directories like TheresAnAI alone – can make the AI landscape challenging to navigate in general, much less for social impact. Luckily, four years ago, we surveyed the Data and AI for Good landscape and mapped out distinct families of initiatives based on their core goals. Today, we are revisiting that landscape to help folks get a handle on the field as it stands and to reflect on how it has expanded, diversified, and matured…(More)”.
The RRI Citizen Review Panel: a public engagement method for supporting responsible territorial policymaking
Paper by Maya Vestergaard Bidstrup et al: “Responsible Territorial Policymaking incorporates the main principles of Responsible Research and Innovation (RRI) into the policymaking process, making it well-suited for guiding the development of sustainable and resilient territorial policies that prioritise societal needs. As a cornerstone of RRI, public engagement plays a central role in this process, underscoring the importance of involving all societal actors to align outcomes with the needs, expectations, and values of society. In the absence of existing methods for sufficiently and effectively gathering citizens’ reviews of multiple policies at the territorial level, the RRI Citizen Review Panel was developed as a new public engagement method to facilitate citizens’ review and validation of territorial policies. Using RRI as an analytical framework, this paper examines whether the RRI Citizen Review Panel can support Responsible Territorial Policymaking, not only by incorporating citizens’ perspectives into territorial policymaking, but also by making policies more responsible. The paper demonstrates that, in reviewing territorial policies, citizens add elements of RRI to a wide range of policies across different policy areas, helping to make those policies more responsible. Consequently, the RRI Citizen Review Panel emerges as a valuable tool for policymakers, enabling them to gather citizen perspectives and imbue policies with a heightened sense of responsibility…(More)”.
Playing for science: Designing science games
Paper by Claudio M Radaelli: “How can science have more impact on policy decisions? The P-Cube Project has approached this question by creating five pedagogical computer games based on missions given to a policy entrepreneur (the player) advocating for science-informed policy decisions. The player explores simplified strategies for policy change rooted in a small number of variables, making it possible to learn without a prior background in political science or public administration. The games evolved from the intuition that, instead of making additional efforts to explain science to decision-makers, we should directly empower would-be scientists (our primary audience for the games), post-graduates in public policy and administration, and activists for science. The two design principles of the games revolve around learning how policy decisions are made (a learning-about-content principle) and reflection. Indeed, the presence of science in the policy process raises ethical and normative questions, especially when we consider controversial strategies like civil disobedience and alliances with industry. To be on the side of science does not mean to be outside society and politics. I show the motivation, principles, scripts and pilots of the science games, reflecting on how they can be used and for what reasons…(More)”
Data Commons: The Missing Infrastructure for Public Interest Artificial Intelligence
Article by Stefaan Verhulst, Burton Davis and Andrew Schroeder: “Artificial intelligence is celebrated as the defining technology of our time. From ChatGPT to Copilot and beyond, generative AI systems are reshaping how we work, learn, and govern. But behind the headline-grabbing breakthroughs lies a fundamental problem: The data these systems depend on to produce useful results that serve the public interest is increasingly out of reach.
Without access to diverse, high-quality datasets, AI models risk reinforcing bias, deepening inequality, and returning less accurate, more imprecise results. Yet, access to data remains fragmented, siloed, and increasingly enclosed. What was once open—government records, scientific research, public media—is now locked away by proprietary terms, outdated policies, or simple neglect. We are entering a data winter just as AI’s influence over public life is heating up.
This isn’t just a technical glitch. It’s a structural failure. What we urgently need is new infrastructure: data commons.
A data commons is a shared pool of data resources—responsibly governed, managed using participatory approaches, and made available for reuse in the public interest. Done correctly, commons can ensure that communities and other networks have a say in how their data is used, that public interest organizations can access the data they need, and that the benefits of AI can be applied to meet societal challenges.
Commons offer a practical response to the paradox of data scarcity amid abundance. By pooling datasets across organizations—governments, universities, libraries, and more—they match data supply with real-world demand, making it easier to build AI that responds to public needs.
We’re already seeing early signs of what this future might look like. Projects like Common Corpus, MLCommons, and Harvard’s Institutional Data Initiative show how diverse institutions can collaborate to make data both accessible and accountable. These initiatives emphasize open standards, participatory governance, and responsible reuse. They challenge the idea that data must be either locked up or left unprotected, offering a third way rooted in shared value and public purpose.
But the pace of progress isn’t matching the urgency of the moment. While policymakers debate AI regulation, they often ignore the infrastructure that makes public interest applications possible in the first place. Without better access to high-quality, responsibly governed data, AI for the common good will remain more aspiration than reality.
That’s why we’re launching The New Commons Challenge—a call to action for universities, libraries, civil society, and technologists to build data ecosystems that fuel public-interest AI…(More)”.