Article by Marco Di Natale: “In the past two decades, impact evaluation has become an unavoidable topic in the social sector. Yet beyond the discourse on how to measure social impact lies a structural problem: We are not producing reliable knowledge about what works, regardless of the method used. While part of the debate gravitates toward Randomized Controlled Trials (RCTs), the real gap lies in the absence of standards, capacities, and institutional structures that enable civil society and philanthropy to learn systematically. This article aims to reframe the conversation around what matters: rigor.
Drawing on my experience leading impact evaluations within government (including experimental and non-experimental studies) and later advising civil society organizations and philanthropic funders, I have seen how this gap is reinforced from different directions. On one side, reporting requirements often prioritize speed, volume, and compliance over understanding. On the other, critiques of experimental and quantitative approaches have sometimes been used to legitimize evaluations that abandon basic scientific logic altogether, as if complexity or social purpose exempted the sector from the standards of credible inference. This article examines how these dynamics have converged and what a more rigorous, learning-oriented approach to evaluation would require from philanthropy, organizations, and evaluators alike…(More)”.