By Benita Reddi-Williams and Fatima Mathivha
The increase in calls for proposals for evaluations in recent years is likely an indication of a growing understanding of the importance of evaluation. More specifically, there appears to be growing interest in understanding how programmes are actually implemented and what ‘impact’ particular programmes have. And whilst in theory this bodes well for ensuring accountability and improving programming to maximise benefits, the process of responding to calls for proposals brings several recurring challenges to the surface.
Jumping the gun
Poor internal understanding of M&E, or a lack of planning for an evaluation, means that some programmes are not yet ready to be evaluated when they put out a call. However, this often only becomes apparent after contracts have been signed and evaluators have been given access to programme documentation and data. At that point, one of three options is chosen:
- dropping the evaluation;
- using the evaluation budget to develop and embed M&E within the organisation or programme; or
- conducting an evaluation with weaker methodologies, predominantly gathering perceptions of observable change, with the understanding that the reliability of the findings will be compromised by limited evidence.
Unclear Terms of Reference
Project implementers develop Terms of Reference wearing their implementer hats. And although they bring a wealth of knowledge to evaluation, the information most critical for developing a good and realistic proposal is sometimes missing. Here is an external evaluator’s wish list, if we’re not being too fussy:
- a clear summary description of the intervention with respect to details which affect evaluation budgets and sampling (e.g. the number of beneficiaries, the number of implementation years, location of sites);
- a problem statement;
- a theory of change or (even better) a logframe/logic model;
- an indication of the monitoring data already collected, and of any external databases the project uses when evaluating progress internally; and
- the specific unit of analysis required for the evaluation (e.g. a policy, plan, programme, project or system).
The “I” word
Impact has been the black sheep of the evaluation world – with some evaluators doing away with academic rigour to avoid having to explain our field’s seemingly niche interpretation of ‘impact’. Although impact is generally taken to mean “what difference did we make?”, conducting an impact evaluation is more of a science: it requires specific data and documentation to determine causal relations between the programme and the changes observed (for example, baseline measurements or a credible comparison group), some of which can be collected as part of the evaluation – but at a cost.
The true cost of evaluation
More is always more, so whether clients budget enough for evaluations is a whole conversation on its own. What is more useful to understand is that allocated evaluation budgets are often hard for an external evaluator to navigate, because external evaluators usually don’t have the luxury of collaborating with programme managers to weigh methodological alternatives against their budgetary implications. As a result, the possibilities that are actually available go unexplored and an opportunity is missed.
What’s more valuable than money itself?
Time – an evaluator never has enough of it. This is usually because organisational deadlines require evaluation results in time to inform strategic and financial planning. But something has got to give, and it is usually the quality or the scope of the evaluation that is compromised. Reliable and useful evaluations are still possible – even in the context of limited resources.
– Benita Reddi-Williams is Specialist Research, Monitoring and Evaluation Manager at JET Education Services
– Fatima Mathivha is Monitoring & Evaluation Officer at JET Education Services
Source: Lessons and Reflections about Implementing M&E in South Africa: An anthology of articles by Zenex staff, M&E experts and NGOs. http://www.zenexfoundation.