
Demystifying Monitoring and Evaluation

For most development practitioners, the term ‘monitoring and evaluation’ (M&E) doesn’t require an extensive introduction. The importance of organisational measurement cannot be overstated in the world of CSI, in which millions of rands are spent annually on social development programmes that yield little systemic change or sustained impact. Drawing from her extensive experience, Jennifer Bisgard, co-founder of M&E consultancy Khulisa Management Services, writes about why M&E needs more rigorous design and consistent implementation.

The business rationale behind better M&E practice cuts across all spheres of CSI initiatives. Managers and directors want data-driven decisions. Stakeholders want to collaborate with projects that work. Programme managers want to publicise their successes, reach and impact, and demonstrate accountability. Companies want to invest money in projects that produce good results. Everybody wants to learn from the process, with the goal of improving their programme.

Theoretically, M&E offers the ultimate organisational win-win. However, as CSI managers move from the boardroom decision to monitor and evaluate, to the reality on the ground, they often find themselves navigating through unfamiliar territory and unforeseen obstacles.

The most important difference between monitoring and evaluation lies in the underlying processes. Monitoring is a routine job that entails the diligent collection and analysis of data on a project’s inputs, activities, outputs and short-term outcomes. Monitoring data is most useful when compared against specific targets and expectations.

Evaluation, on the other hand, is often retrospective. It is the process during which the proof of concept is assessed, or a project’s achievements are measured, against its objectives and intended impact. It’s the 'catch-your-breath-and-reflect' moment in a programme’s lifecycle, during which you must interrogate whether you are achieving your objectives.

Moving from faith to facts

Although the corporate sector has made inroads over the past ten years, many CSI projects and programmes still have weak M&E systems that do not provide ongoing feedback and that fall prey to obvious pitfalls. Private companies are lagging behind donors, especially international foundations and bilateral and multilateral funders.

In the corporate sector, Khulisa Management Services often comes across ‘faith-based’ development: people may have a deep conviction that what they are doing is making a difference. However, if you do not put some distance between yourself and the cause, and punch holes in the concept, in all likelihood you will have a project that looks great on paper but has no impact on the ground.

Evaluators should be thought of as critical friends – people who understand that CSI managers are wholeheartedly trying to make a difference, and that this is not what is being questioned. M&E is not a fault-finding mission, and it does not question an organisation’s altruism or integrity.

Rather, M&E is about adapting the approach – using what works, discarding what doesn’t – to improve the project’s outcomes and impact. Since money is scarce, you want to ensure targeted social interventions that align with your organisation’s business strategy and give the board some indication of return on investment; but where do you start?

Standing in front of a blank canvas, ready to paint the first shapes, lines and colours of your M&E strategy, may seem as overwhelming as it is exciting. The best foundation for planning and implementing M&E is to begin with the tried and tested ‘theory of change’.

Unpacking your programme’s ‘so what?’

A theory of change depicts how a programme’s inputs and activities are understood to produce outputs, a series of short- and medium-term outcomes and, ultimately, long-term impacts. Evaluation theorists refer to the theory of change as ‘the missing middle’: filling the gaps between what goes into and what comes out of a project.

This is usually represented in a logic model, which graphically depicts the desired outcome of your project. The logic model includes inputs (what is invested), outputs (activities and participants reached) and outcomes. It also documents the key assumptions in each step.
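
To make this structure concrete, the sketch below captures a logic model as a simple data structure in Python. It is purely illustrative – the field names and entries are hypothetical assumptions for demonstration only, and do not represent a prescribed format or the Ekukhanyeni model discussed below.

  # A hypothetical sketch of a logic model captured as a simple data structure.
  # Field names and entries are illustrative only, not a prescribed format.
  logic_model = {
      "inputs": ["funding", "staff time", "training materials"],      # what is invested
      "activities": ["teacher training workshops", "classroom support"],
      "outputs": ["teachers trained", "learners reached"],            # activities and participants reached
      "outcomes": {
          "short_term": ["improved teaching practice"],
          "medium_term": ["improved learner performance"],
          "long_term": ["higher school completion rates"],            # ultimate impact
      },
      "assumptions": ["trained teachers remain at their schools"],    # key assumptions at each step
  }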

In 2014, Khulisa developed an M&E framework for the urban permaculture project Ekukhanyeni, taking the project team through a collaborative process of developing a theory of change and logic model.

One of the important criteria for developing a theory of change is that the process must be participatory and consult all stakeholders. Khulisa invites funders, project designers, managers and beneficiaries to talk about what they want to accomplish and what they believe will foster change.

With the Ekukhanyeni project, the gardeners were actively involved in developing the theory of change, which built their capacity and helped them to understand why M&E is important for the project’s future.

When resources are invested in an education programme, outcomes are anticipated. However, there are associated assumptions that need to be uncovered, and a chain of events that must take place, for those outcomes to be realised.

An indicator is a simple way to measure a complex scenario. For example, people often judge the quality of a high school by the indicator of its matric results. A good theory of change would add several other indicators to judge the quality of the high school and might include the turnover of teaching staff and the throughput (for example, is a student who starts grade 8 likely to complete grade 12 in the prescribed five years?).
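
As a purely hypothetical illustration of how such a throughput indicator might be calculated, the short Python sketch below works from invented cohort figures; the numbers are for demonstration only.

  # Illustrative throughput indicator: of the learners who started grade 8,
  # what share completed grade 12 within the prescribed five years?
  # All figures are hypothetical.
  grade8_cohort = 200        # learners who entered grade 8 in a given year
  completed_on_time = 142    # of that cohort, completed grade 12 five years later

  throughput_rate = completed_on_time / grade8_cohort
  print(f"Cohort throughput: {throughput_rate:.0%}")   # prints "Cohort throughput: 71%"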

Before a project starts, people usually have an idea of what they want to do and achieve. The theory of change tests the logic behind those ideas and, through the process, indicators are developed, which become the core of an M&E framework.

Data 101: If you can’t use it, don’t collect it

When a client starts doing M&E, they collect data on everything. The golden rule about data collection is that less is more. There needs to be clarity regarding which indicators to choose and the reporting process has to be as painless as possible.

Once the indicators have been identified and the measures to collect the data have been put in place, feedback loops should be created to inform those who are collecting the data. An M&E framework requires feedback mechanisms that show your data collectors how the data is being used to improve the project in specific ways. This closes the loop and makes the data collectors feel that what they are doing is valued.

It is important that everyone involved in the project knows what each piece of data is going to be used for. If the data collector does not understand the big picture, there is less motivation to provide accurate data. The answers lie in fewer but more relevant data points, and this goes back to proving the theory of change.

Take an early grade reading programme as an example: data should be collected at each link of the chain. If additional teaching and learning materials are introduced into the classroom, the first data point would be whether the books have been delivered and are in use. Only then can measurement occur as to whether any learning has taken place. There should be a logical link, or skip pattern, between data collection points.
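
One way to picture this is as an ordered chain of data points, where each link is only measured once the previous one holds. The Python sketch below is a hypothetical illustration of such a skip pattern; the indicator names and values are invented.

  # Hypothetical monitoring chain for an early grade reading intervention.
  # Each data point is only worth measuring once the previous link holds.
  chain = [
      ("books_delivered", True),          # were the materials delivered to the classroom?
      ("books_in_use", True),             # are the books actually being used?
      ("reading_scores_improved", None),  # not yet measured
  ]

  for indicator, observed in chain:
      if observed is None:
          print(f"{indicator}: not yet measured")
          break
      if not observed:
          print(f"{indicator}: not achieved - investigate before measuring further links")
          break
      print(f"{indicator}: confirmed")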

Is your project ready for evaluation?

Once relevant data has been collected over time, it is possible to answer specific questions about the success or failure of a project. An evaluation is only as good as the person managing it. If the evaluation process is outsourced, it is critical for the evaluation questions to be explicit.

The best way for programme managers to keep a tight grip on this process is to develop a thorough, highly specific terms of reference (ToR). This is a useful exercise, even if an evaluation is undertaken internally.

It is necessary to critically interrogate the data and monitoring system when developing a ToR. If the monitoring system does not produce good routine data, the evaluation could become extremely expensive, since the evidence will need to be gathered from scratch, rather than merely being validated.

BetterEvaluation.org, an international collaboration which provides excellent resources on how to improve evaluation practice, has developed a reference guide called GeneraToR. This tool helps programme managers to think through what they want, before commencing or commissioning an evaluation.

It’s not uncommon for an evaluator to discover, during the evaluation itself, that the programme is not ready to be evaluated. To prevent this, a school of thought now emphasises the importance of doing an evaluability assessment. This allows you to assess how far implementation has progressed, whether there is a clear theory of change, and whether routine monitoring data exists for use during the evaluation. The evaluability assessment may point to a need to strengthen a project’s monitoring systems, allow more time for the project to deliver, or give the thumbs-up for the evaluation to commence.

An evaluability assessment asks:

  • Is the evaluation feasible, given the theory of change and its operational status and location?
  • Is there sufficient routine and relevant data about the project, including sufficient management systems to provide it?
  • Is there both utility and practicality in conducting the evaluation, given the views and availability of relevant stakeholders?

Which methodologies are most suitable?

M&E is a dynamic, ever-evolving discipline and new methodologies constantly emerge at international conferences.

Two things are important to note:

1. Evaluation questions could be answered by adopting multiple methodologies.
2. Clarity of outcomes is needed when choosing the most suitable method(s) for an evaluation. If the theory of change has been approached in the right manner, these outcomes should already be clear.

Source: Trialogue Business in Society Handbook 2017

 

 
