By Benita Williams
Those who work in education are acutely aware of the complexity of the problems in the South African education system. No silver bullet solution exists. Despite this, donors like the Zenex Foundation continue to implement initiatives to improve education with the expectation that evaluation should help us learn.
We use ex-ante studies, process evaluations, economic evaluations, outcome evaluations and impact evaluations to answer a wide array of questions.
End-of-programme evaluations frequently aim to answer questions such as:
- “Was the project implemented as planned?”
- “What results were achieved by the intervention?”
- “What is the likelihood that intervention effects will be sustained?”
They may also ask:
- “Why did the intervention achieve good results in context A, and poor results in context B?”
To answer these questions, different evaluation approaches, designs and methods may be appropriate. It is not reasonable to expect that one type of evaluation design can adequately address all evaluation questions in all contexts.
A randomised control trial (RCT) or another quantitatively oriented design, such as a regression discontinuity design or propensity score matching, might be the best method to answer: “What magnitude of change was achieved by the intervention that would not have been achieved otherwise?” in a specific case. It will likely have to be supplemented with qualitative methods, such as descriptive case studies and ethnographic observation, to answer the important “why” questions.
The NONIE guidance on Impact Evaluation recognises that an RCT is not always required to answer impact questions. Alternative impact evaluation designs could be used when it is not necessary to quantify the effects of an intervention, but the effects still need to be attributed to the intervention. The General Elimination Method and Causal Contribution Analysis are two such designs that rely on mixed-method data collection. Theory-based evaluation is also a widely used alternative.
The ILO pokes fun at the idea that RCTs are some kind of “gold standard” and concludes: “The only standard that does exist is one of methodological appropriateness.”
In selecting an evaluation approach, we need to strive for rigour. Michael Quinn Patton, the author of classic evaluation texts such as Developmental Evaluation, says: “Rigour does not reside in methods, but in rigorous thinking.” If this is true, evaluators and those who commission evaluations are invited to critically engage with more than just the evaluation design. We are invited to embrace evaluative thinking. Only then will we learn from our efforts to solve the education problems.
Source: Lessons and Reflections about Implementing M&E in South Africa: An anthology of articles by Zenex staff, M&E experts and NGOs. http://www.zenexfoundation.