ODI Researcher Philipp Krause at BeyondBudgets: “Randomized control trials (RCTs) have had a great decade. The stunning line-up of speakers who celebrated J-PAL’s tenth anniversary in Boston last December gives some indication of just how great. They are the shiny new tool of development policy, and a lot of them are pretty cool. Browsing through J-PAL’s library of projects, it’s easy to see how so many of them end up in top-notch academic journals.
So far, so good. But the ambition of RCTs is not just to provide a gold-standard measurement of impact. They aim to actually have an impact on the real world themselves. The scenario goes something like this: researchers investigate the effect of an intervention and use the findings to either get out of that mess quickly (if the intervention doesn’t work) or scale it up quickly (if it does). In the pursuit of this impact-seeker’s Nirvana, it’s easy to conflate a couple of things, notably that an RCT is not the only way to evaluate impact, and that evaluating impact is not the only way to use evidence for policy. Unfortunately, it is now surprisingly common to hear RCTs conflated with evidence-use, and evidence-use equated with the key ingredient for better public services in developing countries. The reality of evidence use is different.
Today’s rich countries didn’t get rich by using evidence systematically. This is a point that we recently discussed at a big World Bank – ODI conference on the (coincidental?) tenth anniversary of the WDR 2004. Lant Pritchett put it best when describing Randomistas as engaging in faith-based activity: nobody could accuse the likes of Germany, Switzerland, Sweden or the US of achieving human development by systematically scaling up what works.
What these countries do have in spades is people noisily demanding stuff, and governments giving it to them. In fact, some of the greatest innovations in providing health, unemployment benefits and pensions to poor people (and taking them to scale) happened because citizens seemed to want them, and giving them stuff seemed like a good way to shut them up. Ask Otto Bismarck. It’s not too much of a stretch to call this the history of public spending in a nutshell….
The bottom line is that governments that care about impact have plenty of cheaper, timelier and more appropriate tools and options available to them than RCTs. That doesn’t mean RCTs shouldn’t be done, of course. And the evaluation of aid is a different matter altogether, where donors are free to be as inefficient about evidence-basing as they wish without burdening poor countries.
But for governments, the choice of how to go about using systematic evidence is theirs to make. And it’s a tough capability to pick up. Many governments choose not to do it, and there’s no evidence that they suffer for it. It would be wrong for donors to suggest to low-income countries that RCTs are in any way critical for their public service capability. Better to call them what they are: interesting, but marginal.”