By Gail Campbell and Thabisile Zuma
Any organisation that is committed to improving the quality of evaluations understands that one of the most critical steps is getting the design right. Our own experience in M&E, particularly in designing evaluations, has evolved over the years. We see evaluation design as the structure of the evaluation that will provide the information and data needed to answer the evaluation questions. The design is determined by the purpose of the evaluation, the programme theory of change, the evaluation questions and, of course, the budget.
This is overlaid with various contextual elements, as well as existing monitoring and performance data. Over time, the design of evaluations at Zenex has progressively improved. Some of the notable improvements include:
- clearer Terms of Reference that specify the project outcomes and evaluation questions;
- detailed upfront engagement with evaluation teams to ensure that the evaluation can adequately respond to the questions that Zenex wants answered; and
- clarificatory workshops with all partners in a programme to co-develop/refine the Theory of Change and logic models.
Based on our experiences, here are some examples of ways in which the design of evaluations can be improved.
Quality Terms of Reference
The biggest change in our design process has been developing our capacity to draft Terms of Reference. In the past, we appointed an evaluator and relied heavily on them to develop the evaluation questions, theory of change and design.
This changed as our knowledge improved. In 2008, this hands-off approach came back to haunt us, so to speak: we had to commission a review of an evaluation mid-way through its implementation, and the review led to a revised design. We realised that we needed to build more in-house M&E capacity. We have learnt on the job through engaging with our evaluation partners and through participating in M&E training programmes.
As mentioned by Dr Fatima Adam in her article on commissioning evaluations, the process is as follows:
- Zenex develops the scope of work, which informs the call for proposals;
- we put out a call for proposals and interested evaluators submit proposals to undertake the evaluation;
- we then short-list and select a preferred provider based on an assessment of their proposals against a set of criteria; and
- once the evaluator is appointed, Zenex engages with the evaluation team in a robust discussion about sampling, design and methods, and the trade-offs that need to be made based on cost and feasibility.
Quality Assurance mechanisms
While our own M&E competencies have improved, we realised that we still needed assistance from M&E experts. As needed, we appoint reviewers to comment on the evaluation design, include experts on the panels that select service providers, and ask experts to review evaluation reports. This has been especially valuable for evaluations of complex and/or large-scale projects. The quality reviewers we appoint are experts in M&E and/or the education context, and they serve as an additional quality assurance mechanism that supports both us and the evaluators.
Impact Evaluation Designs
The complexity of the education change process and varying contextual factors pose a challenge for impact designs. As Benita Williams put it in her article on evaluation designs, the South African education system is complex and no silver-bullet solution exists. The same applies to evaluation designs. A Randomised Control Trial (RCT) might be the best method to answer a question about the magnitude of change achieved by an intervention, but it may need to be supplemented with qualitative methods if it is to answer the important “why” questions.
Zenex has experienced two problems in designing impact evaluations: (i) finding matched control schools and (ii) having a sufficient number of control schools. In our earlier forays into evaluations, we did not apply statistical rigour to determine the number of control schools and tended to have too few of them. While we did this with the good intention of managing budgets, we realised that it affected the rigour of the evaluations and meant we could not make conclusive claims about whether or not we achieved the desired impact. This was a major critique of our evaluation portfolio in our ten-year review (2007: 32-47).
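To make the idea of “statistical rigour” concrete, the sketch below shows one common way of estimating how many control schools a design would need: a standard power calculation with an adjustment for learners being clustered within schools. All of the parameter values (detectable effect size, learners per school, intra-class correlation) are illustrative assumptions, not figures from any Zenex evaluation, and statsmodels is simply one tool that supports this kind of calculation.

```python
# A minimal sketch of a sample-size ("power") calculation for deciding how
# many control schools an impact evaluation needs. All values below are
# illustrative assumptions, not figures from any Zenex evaluation.
import math
from statsmodels.stats.power import TTestIndPower

# Assumed design parameters (hypothetical):
effect_size = 0.3   # smallest learning gain we want to detect (Cohen's d)
alpha = 0.05        # significance level
power = 0.80        # chance of detecting the effect if it is really there
m = 30              # learners tested per school
icc = 0.20          # intra-class correlation: how alike learners in a school are

# Learners needed per arm if learners could be sampled independently.
learners_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power
)

# Learners are clustered within schools, so inflate by the design effect
# DEFF = 1 + (m - 1) * ICC, then convert to a number of schools per arm.
deff = 1 + (m - 1) * icc
schools_per_arm = math.ceil(learners_per_arm * deff / m)

print(f"Learners per arm (ignoring clustering): {learners_per_arm:.0f}")
print(f"Design effect: {deff:.1f}")
print(f"Schools needed per arm: {schools_per_arm}")
```

A calculation like this also makes the budget trade-off explicit: lowering the detectable effect size or the intra-class correlation assumption changes the number of schools required, which is exactly the kind of choice that has to be negotiated during inception.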
We realised that financial constraints can seriously compromise an evaluation design. Evaluators want to ensure absolute rigour when designing evaluations, but the cost of this can often exceed the budget for the evaluation. During the inception phase, we therefore have to balance the need for rigour with the reality of our resource constraints. Without compromising quality, we must find the most cost-efficient design given the available resources. In Gail Campbell’s article on evaluation costs, she talks about balancing costs with the need for evaluation rigour.
Mixed method approaches
In our earlier evaluations, we relied mainly on qualitative data and analysis. As we progressed and gained more experience, we turned to quantitative data to determine whether programmes had the desired impact. This shift was aligned with the evolution in the field of M&E towards combining quantitative and qualitative data in outcomes and impact evaluations.
We are strong proponents of mixed-methods designs that allow us to explain the quantitative data (we made a determined effort not to commission “black box” evaluations). In many of our evaluations, quantitative data is triangulated with qualitative data from observations, interviews, surveys and document reviews.
Our learning continues through our Continued Education Professional Development (CEPD) and through engaging with evaluation experts in the education sector.
– Gail Campbell is CEO at the Zenex Foundation
– Thabisile Zuma is Knowledge and Information Manager at the Zenex Foundation
Source: Lessons and Reflections about Implementing M&E in South Africa: An anthology of articles by Zenex staff, M&E experts and NGOs. http://www.zenexfoundation.org.za