Marc Gunther at The Chronicle of Philanthropy: “Can pregnant women in Zambia be persuaded to deliver their babies in hospitals or clinics rather than at home? How much are villagers in Cambodia willing to pay for a simple latrine? What qualities predict success for a small-scale entrepreneur who advises farmers?
Governments, foundations, and nonprofits that want to help the world’s poor regularly face questions like these. Answers are elusive. While an estimated $135 billion in government aid and another $15 billion in charitable giving flow annually to developing countries, surprisingly few projects benefit from rigorous evaluations. Those that do get scrutinized in academic studies often don’t see the results for years, long after the projects have ended.
IDinsight puts data-driven research on speed. Its goal is to produce useful, low-cost research results fast enough that nonprofits can use it to make midcourse corrections to their programs….
IDinsight calls this kind of research “decision-focused evaluation,” which sets it apart from traditional monitoring and evaluation (M&E) and academic research. M&E, experts say, is mostly about accountability and outputs — how many training sessions were held, how much food was distributed, and so on. Usually, it occurs after a program is complete. Academic studies are typically shaped by researchers’ desire to break new ground and publish on topics of broad interest. The IDinsight approach aims instead “for contemporaneous decision-making rather than for publication in the American Economic Review,” says Ruth Levine, who directs the global development program at the William and Flora Hewlett Foundation.
A decade ago, Ms. Levine and William Savedoff, a senior fellow at the Center for Global Development, wrote an influential paper entitled “When Will We Ever Learn? Improving Lives Through Impact Evaluation.” They lamented that an “absence of evidence” for the effectiveness of global development programs “not only wastes money but denies poor people crucial support to improve their lives.”
Since then, impact evaluation has come a “huge distance,” Ms. Levine says….
Actually, others are. Innovations for Poverty Action recently created the Goldilocks Initiative to do what it calls “right fit” evaluations leading to better policy and programs, according to Thoai Ngo, who leads the effort. Its first clients include GiveDirectly, which facilitates cash transfers to the extreme poor, and Splash, a water charity….
All this focus on data has generated pushback. Many nonprofits don’t have the resources to do rigorous research, according to Debra Allcock Tyler, chief executive at Directory of Social Change, a British charity that provides training, data, and other resources for social enterprises.
“A great deal of the time, data is pointless,” Allcock Tyler said last year at a London seminar on data and nonprofits. “Very often it is dangerous and can be used against us, and sometimes it takes away precious resources from other things that we might more usefully do.”
A bigger problem may be that the accumulation of knowledge does not necessarily lead to better policies or practices.
“People often trust their experience more than a systematic review,” says Ms. Levine of the Hewlett Foundation. IDinsight’s Esther Wang agrees. “A lot of our frustration is looking at the development world and asking why are we not accountable for the money that we are spending,” she says. “That’s a waste that none of us really feels is justifiable.”…(More)”