Iterative A/B Testing for Social Impact: Rigorous, Rapid, Regular 

Article by Noam Angrist, Amanda Beatty, Claire Cullen & Tendekai Mukoyi Nkwane: “Many nonprofits in low- and middle-income countries face a critical mismatch: urgent social problems demand rapid program iteration, yet organizations often wait years for externally-produced evaluation results. When they do conduct rigorous evaluations, these are typically one-off studies that rarely keep pace with evolving implementation contexts or inform real-time decisions.

This tension between problem urgency and evidence generation speed is familiar to many implementers. After our organization, Youth Impact, ran an initial Randomized Controlled Trial (RCT) in Botswana on an HIV and teen pregnancy prevention program, we faced new questions relevant for government scale-up. The RCT showed near-peer educators effectively changed risky teen behavior while other messengers like public school teachers did not, but government partners needed ongoing answers about cost-effectiveness, implementation variations, and program adaptations. Waiting years between evaluation cycles meant missing the window to influence program design and consequential government reforms.

We needed an approach that maintained rigorous standards but operated at implementation speed. The technology sector offered a model: Microsoft alone runs approximately 100,000 A/B tests each year to continuously optimize products. A famous Gmail experiment, testing different advertising link colors, generated $200 million annually for Google and showed how small, rigorously tested variations can have outsized impact.

While social impact programs present unique complexities, we have found that a similar underlying approach can translate well to the social sector. Iterative A/B testing uses randomization to compare multiple program variations to answer questions about efficiency and cost-effectiveness, in addition to questions about general effectiveness (as in a traditional RCT). A/B testing also produces causal evidence in weeks or months, instead of years as in traditional randomized trials. Iterative A/B testing has a critical role to play to unlock social impact: causal evidence delivered rapidly enough to optimize programs during implementation and scale-up…(More)”.
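
To illustrate the basic mechanics described in the excerpt, here is a minimal sketch of a single A/B iteration: participants are randomized across two program variations and outcomes are compared across arms. The arm labels, sample size, and outcome rates below are entirely hypothetical and invented for illustration; Youth Impact's actual designs and analyses are more involved than this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical setup: 400 participants randomly assigned to two program
# variations ("A" = current delivery model, "B" = a candidate adaptation).
n = 400
arm = rng.permutation(np.repeat(["A", "B"], n // 2))

# Placeholder binary outcome (e.g. a routinely monitored behavior indicator);
# the arm-level rates below are invented purely for illustration.
outcome = rng.binomial(1, np.where(arm == "A", 0.35, 0.42))

# Because assignment is randomized, a simple difference in means across arms
# gives a quick causal estimate of the adaptation's effect within one cycle.
effect = outcome[arm == "B"].mean() - outcome[arm == "A"].mean()
t_stat, p_value = stats.ttest_ind(outcome[arm == "B"], outcome[arm == "A"])
print(f"Estimated effect of B vs A: {effect:.3f} (p = {p_value:.3f})")
```

Run at the end of each implementation cycle, a comparison like this can feed directly into the next round of program adaptation, which is the "iterative" part of the approach the authors describe.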
