“Why don’t most programs work?” Practitioners, decision-makers, and program funders have asked us this question many times. In our decades of experience, we have rarely come across programs that are effective, let alone ones that deliver a reasonable return on investment. We have observed organizations spend tens of millions of dollars on a program and several million more on its evaluation, only for the evaluators to conclude, after years of tireless effort, that the program did not generate a statistically significant effect on the outcomes of interest. It is heartbreaking to see such results after so much money has been spent.
Broadly speaking, there are three major reasons that many programs generate disappointing results. First, there is a lack of understanding of the problem at hand. Without a thorough understanding of the issue we are trying to address, the solution we come up with is unlikely to work. Second, the program is not designed with the right components to solve the problem, which may be traced back to the first reason. Lastly, the program is not well implemented, possibly because of insufficient planning for the likely behavioral responses of program implementers and other stakeholders. These three explanations for program failure are interconnected: the second is associated with the first, and the last with the first two.
To avoid disappointing results and wasting millions of dollars, we take a different approach. Instead of evaluating for the sake of evaluation, we propose to our clients a process of understanding the problem at hand, designing a program to address it, and testing and re-testing the program until we see promising signs of effectiveness. Only then should the program be expanded and a formal evaluation conducted. Ronald Fisher, a biologist and statistician, once said, “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” The same can be said of a program at the time of a formal evaluation, when it is too late to change course.
This is why we offer our assistance at the early stage of program development.
If our approach is adopted, then by the time of a formal evaluation we will have already figured out the nature of the problem, the rationale for the program design, and the behavioral responses of different stakeholders. What remains is to collect the right data, assess the implementation process, measure the outcomes of interest, and confirm the findings from testing and re-testing the program. We will select the right tools for the evaluation, drawing on both quantitative and qualitative methods.