The Importance of Co-Designing Questions: 10 Lessons from Inquiry-Driven Grantmaking


Article by Hannah Chafetz and Stefaan Verhulst: “How can a question-based approach to philanthropy enable better learning and deeper evaluation across both sides of the partnership and help make progress towards long-term systemic change? That’s what Siegel Family Endowment (Siegel), a family foundation based in New York City, sought to answer by creating an Inquiry-Driven Grantmaking approach

While many philanthropies continue to follow traditional practices that focus on achieving a set of strategic objectives, Siegel employs an inquiry-driven approach, which focuses on answering questions that can accelerate insights and iteration across the systems they seek to change. By framing their goal as “learning” rather than an “outcome” or “metric,” they aim to generate knowledge that can be shared across the whole field and unlock impact beyond the work on individual grants. 

The Siegel approach centers on co-designing and iteratively refining questions with grantees to address evolving strategic priorities, using rapid iteration and stakeholder engagement to generate insights that inform both grantee efforts and the foundation’s decision-making.

Their approach was piloted in 2020, then refined and operationalized in the years that followed. As of 2024, it was applied across the vast majority of their grantmaking portfolio. Laura Maher, Chief of Staff and Director of External Engagement at Siegel Family Endowment, notes: “Before our Inquiry-Driven Grantmaking approach we spent roughly 90% of our time on the grant writing process and 10% checking in with grantees, and now that’s balancing out more.”


Image of the Inquiry-Driven Grantmaking Process from the Siegel Family Endowment

Earlier this year, the DATA4Philanthropy team conducted two in-depth discussions with Siegel’s Knowledge and Impact team about their Inquiry-Driven Grantmaking approach and what they have learned thus far from applying the new methodology. While the Siegel team notes that there is still much to learn, several takeaways emerged that others looking to initiate a questions-led approach can apply. 

Below we provide 10 emerging lessons from these discussions…(More)”.

The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.

Building Community-Centered AI Collaborations


Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:

  1. Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
  2. Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
  3. Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools that don’t fit the nonprofit context.

We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?

Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:

  • What problems do we truly need AI to solve?
  • Whose voices must be centered in AI decision-making?
  • How do we ensure AI remains a tool for empowerment rather than control?…(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.

How Bad Is China’s Economy? The Data Needed to Answer Is Vanishing


Article by Rebecca Feng and Jason Douglas: “Not long ago, anyone could comb through a wide range of official data from China. Then it started to disappear. 

Land sales measures, foreign investment data and unemployment indicators have gone dark in recent years. Data on cremations and a business confidence index have been cut off. Even official soy sauce production reports are gone.

In all, Chinese officials have stopped publishing hundreds of data points once used by researchers and investors, according to a Wall Street Journal analysis. 

In most cases, Chinese authorities haven’t given any reason for ending or withholding data. But the missing numbers have come as the world’s second biggest economy has stumbled under the weight of excessive debt, a crumbling real-estate market and other troubles—spurring heavy-handed efforts by authorities to control the narrative.

China’s National Bureau of Statistics stopped publishing some numbers related to unemployment in urban areas in recent years. After an anonymous user on the bureau’s website asked why one of those data points had disappeared, the bureau said only that the ministry that provided it stopped sharing the data.

The disappearing data have made it harder for people to know what’s going on in China at a pivotal time, with the trade war between Washington and Beijing expected to hit China hard and weaken global growth. Plunging trade with the U.S. has already led to production shutdowns and job cuts.

Getting a true read on China’s growth has always been tricky. Many economists have long questioned the reliability of China’s headline gross domestic product data, and concerns have intensified recently. Official figures put GDP growth at 5% last year and 5.2% in 2023, but some have estimated that Beijing overstated its numbers by as much as 2 to 3 percentage points. 

To get what they consider to be more realistic assessments of China’s growth, economists have turned to alternative sources such as movie box office revenues, satellite data on the intensity of nighttime lights, the operating rates of cement factories and electricity generation by major power companies. Some parse location data from mapping services run by private companies such as Chinese tech giant Baidu to gauge business activity. 

One economist said he has been assessing the health of China’s services sector by counting news stories about owners of gyms and beauty salons who abruptly close up and skip town with users’ membership fees…(More)”.

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
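The RAG pattern the post endorses can be sketched in a few lines. The idea: instead of asking an LLM to answer from its own (possibly hallucinated) knowledge, first retrieve the most relevant passages from the organization's own documents, then hand only those passages to the model as grounding context. The sketch below is illustrative, not Fruchterman's implementation: the toy document store, the bag-of-words retriever, and the prompt wording are all assumptions, and a real deployment would use embeddings and an actual LLM call.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# (1) retrieve the passages most similar to the query,
# (2) build a prompt that restricts the LLM to that context.
from collections import Counter
import math

# Hypothetical document store for a small nonprofit.
DOCS = [
    "Our food bank distributes groceries every Saturday morning.",
    "Volunteers must complete a one-hour safety orientation.",
    "Donations of canned goods are accepted at the main office.",
]

def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The guardrail: the model is told to answer ONLY from retrieved
    # context, which is what limits hallucination.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("When are groceries distributed?"))
```

The prompt built here would then be sent to whatever LLM the organization uses; the guardrail lives in the retrieval step and the instruction, not in the model itself.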

The RRI Citizen Review Panel: a public engagement method for supporting responsible territorial policymaking


Paper by Maya Vestergaard Bidstrup et al: “Responsible Territorial Policymaking incorporates the main principles of Responsible Research and Innovation (RRI) into the policymaking process, making it well-suited for guiding the development of sustainable and resilient territorial policies that prioritise societal needs. As a cornerstone of RRI, public engagement plays a central role in this process, underscoring the importance of involving all societal actors to align outcomes with the needs, expectations, and values of society. In the absence of existing methods to sufficiently and effectively gather citizens’ reviews of multiple policies at a territorial level, the RRI Citizen Review Panel is a new public engagement method developed to facilitate citizens’ review and validation of territorial policies. By using RRI as an analytical framework, this paper examines whether the RRI Citizen Review Panel can support Responsible Territorial Policymaking, not only by incorporating citizens’ perspectives into territorial policymaking, but also by making policies more responsible. The paper demonstrates that in the review of territorial policies, citizens are adding elements of RRI to a wide range of policies within different policy areas, contributing to making policies more responsible. Consequently, the RRI Citizen Review Panel emerges as a valuable tool for policymakers, enabling them to gather citizen perspectives and imbue policies with a heightened sense of responsibility…(More)”.

Real-time prices, real results: comparing crowdsourcing, AI, and traditional data collection


Article by Julius Adewopo, Bo Andree, Zacharey Carmichael, Steve Penson, Kamwoo Lee: “Timely, high-quality food price data is essential for shock responsive decision-making. However, in many low- and middle-income countries, such data is often delayed, limited in geographic coverage, or unavailable due to operational constraints. Traditional price monitoring, which relies on structured surveys conducted by trained enumerators, is often constrained by challenges related to cost, frequency, and reach.

To help overcome these limitations, the World Bank launched the Real-Time Prices (RTP) data platform. This effort provides monthly price data using a machine learning framework. The models combine survey results with predictions derived from observations in nearby markets and related commodities. This approach helps fill gaps in local price data across a basket of goods, enabling real-time monitoring of inflation dynamics even when survey data is incomplete or irregular.
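The gap-filling idea described above can be illustrated with a toy example. This is a sketch of the general principle only, not the World Bank's actual model: the RTP platform uses a machine-learning framework over many markets and commodities, whereas the markets, distances, and inverse-distance weighting below are all hypothetical.

```python
# Toy illustration: estimate a missing survey price for one market from
# observed prices in nearby markets, weighted by inverse distance, so
# closer markets count for more. (Hypothetical data and weighting scheme.)

def impute_price(observed: dict[str, float], distances: dict[str, float]) -> float:
    """Inverse-distance-weighted average of observed nearby prices."""
    weights = {m: 1.0 / d for m, d in distances.items() if m in observed}
    total = sum(weights.values())
    return sum(observed[m] * w for m, w in weights.items()) / total

# Market C's survey price is missing this month; estimate it from A and B.
observed = {"A": 10.0, "B": 14.0}         # recent survey prices
distances = {"A": 1.0, "B": 3.0}          # distance (e.g., km) from market C
print(impute_price(observed, distances))  # estimate sits nearer A's price
```

The same logic extends to "related commodities" by treating price series for similar goods as additional neighbors with their own weights.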

In parallel, new approaches—such as citizen-submitted (crowdsourced) data—are being explored to complement conventional data collection methods. These crowdsourced data were recently published in a Nature Scientific Data paper. While the adoption of these innovations is accelerating, maintaining trust requires rigorous validation.

A newly published study in PLOS compares the two emerging methods with the traditional, enumerator-led gold standard, providing new evidence that both crowdsourced and AI-imputed prices can serve as credible, timely alternatives to traditional ground-truth data collection—especially in contexts where conventional methods face limitations…(More)”.

Understanding and Addressing Misinformation About Science


Report by National Academies of Sciences, Engineering, and Medicine: “Our current information ecosystem makes it easier for misinformation about science to spread and harder for people to figure out what is scientifically accurate. Proactive solutions are needed to address misinformation about science, an issue of public concern given its potential to cause harm at individual, community, and societal levels. Improving access to high-quality scientific information can fill information voids that exist for topics of interest to people, reducing the likelihood of exposure to and uptake of misinformation about science. Misinformation is commonly perceived as a matter of bad actors maliciously misleading the public, but misinformation about science arises both intentionally and inadvertently and from a wide range of sources…(More)”.

Bad Public Policy: Malignity, Volatility and the Inherent Vices of Policymaking


Book: “Policy studies assume the existence of baseline parameters – such as honest governments doing their best to create public value, publics responding in good faith, and both parties relying on a policy-making process which aligns with the public interest. In such circumstances, policy goals are expected to be produced through mechanisms in which the public can articulate its preferences and policy-makers are expected to listen to what has been said in determining their governments’ courses of action. While these conditions are found in some governments, there is evidence from around the world that much policy-making occurs without these pre-conditions and processes. Unlike situations which produce what can be thought of as ‘good’ public policy, ‘bad’ public policy is a more common outcome. How this happens and what makes for bad public policy are the subjects of this Element…(More)”.