By Brahm Fleisch
Over the past decade, there has been a resurgence of interest in evidence-informed education policies and programmes in South Africa. Specifically, there is growing recognition that rigorous research designs, particularly designs that include ‘counterfactuals’, can provide robust evidence about what works to improve learning outcomes.
What exactly is a rigorous research design that includes a counterfactual and why should we trust such studies?
There are a number of different ‘counterfactual’ research designs, including:
- propensity score matching studies;
- natural experiments;
- regression discontinuity designs; and
- randomised control trials.
What they have in common is the comparison of at least two groups (of learners, classrooms or schools) that are as close to identical as possible in both observed and unobserved characteristics. In these studies, one group receives the intervention and the other (the control group) does not.
In randomised control trials, the allocation to groups is done randomly. When we measure the difference in average outcomes between the two groups (intervention and control) at the end of the intervention, or at any point thereafter, we can determine whether the intervention actually improves learning outcomes and estimate the size of its impact. For example, we might find that learners in the intervention group are, on average, six months ahead of children in the control group at the end of Grade 2 on an outcome such as the number of words read correctly per minute.
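To make this logic concrete, here is a minimal simulation sketch in Python. The group sizes, the outcome measure, and the assumed effect of eight words per minute are all invented purely for illustration; this is not data from any actual study.

```python
# Minimal simulation of the logic of a randomised control trial.
# All numbers here are invented for illustration, not data from
# any actual study.
import random
import statistics

random.seed(42)

N = 200  # learners per group

# Randomly allocate 2*N learners to intervention or control.
learners = list(range(2 * N))
random.shuffle(learners)
intervention, control = learners[:N], learners[N:]

# Hypothetical end-of-year outcome: words read correctly per minute.
# We assume the programme adds ~8 words on average (a made-up effect).
def outcome(gets_intervention: bool) -> float:
    base = random.gauss(40, 12)             # underlying ability
    effect = 8 if gets_intervention else 0  # assumed true effect
    return base + effect

scores_intervention = [outcome(True) for _ in intervention]
scores_control = [outcome(False) for _ in control]

# Because allocation was random, the two groups are comparable, so
# the difference in mean outcomes estimates the programme's impact.
impact = statistics.mean(scores_intervention) - statistics.mean(scores_control)
print(f"Estimated impact: {impact:.1f} words per minute")
```

Random allocation is what makes the simple difference in means meaningful: it ensures that, on average, the two groups differ only in whether they received the intervention.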
Why is it important to have a counterfactual?
The key idea behind the ‘counterfactual’ is that it allows us to isolate the effect of the intervention itself from other factors that might cause improvement. Suppose a standard study measures correct answers on simple mathematics operations, and average scores rise from 35% to 50% between the pre-test and the post-test. Could we be absolutely certain that the improvement was caused by the intervention? The answer is no: learners might have improved anyway through ordinary teaching, maturation, or growing familiarity with the test.
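A short simulation makes the point. In the hypothetical scenario below, scores rise from roughly 35% to 50% through ordinary schooling alone, with no intervention at all; the growth figures are assumptions chosen purely for illustration.

```python
# Why a pre/post gain alone cannot prove an intervention worked.
# Hypothetical numbers: learners improve through ordinary teaching
# and maturation, with no intervention at all.
import random
import statistics

random.seed(1)

N = 100
pre = [random.gauss(35, 8) for _ in range(N)]  # pre-test scores (%)

# Assume every learner gains ~15 points from a term of normal
# schooling, whether or not they receive any special programme.
post = [p + random.gauss(15, 5) for p in pre]

print(f"Mean pre-test:  {statistics.mean(pre):.0f}%")
print(f"Mean post-test: {statistics.mean(post):.0f}%")
# Scores rise from roughly 35% to 50% without any intervention.
# Only a control group can separate a programme's impact from
# this background improvement.
```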
An example from the Reading Catch-Up Programme
The experience of the Reading Catch-Up Programme, undertaken in the Pinetown District of the KwaZulu-Natal Department of Education in 2014 with funding from the Zenex Foundation, showed very strong gains in one term for learners doing an English remedial programme. Had we not had an equivalent control group of learners (and schools), which improved by almost exactly the same amount, we would have incorrectly attributed these gains to the intervention.
The mean difference between the intervention group and the control group was very small and not statistically significant. The findings of the Reading Catch-Up Programme randomised control trial helped policymakers avoid a costly investment in a programme that looked good in theory but did not work effectively in practice in this context.
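For readers curious about what ‘not statistically significant’ means in practice, the sketch below runs a conventional two-sample test (Welch’s t-test) on invented gain scores. These numbers are hypothetical and are not the Reading Catch-Up Programme’s actual results.

```python
# A conventional significance test on invented gain scores. These
# are hypothetical numbers, not the Reading Catch-Up Programme's data.
import random
from scipy import stats

random.seed(7)

# In this scenario both groups improve by roughly the same amount.
gains_intervention = [random.gauss(15.0, 6.0) for _ in range(120)]
gains_control = [random.gauss(14.5, 6.0) for _ in range(120)]

# Welch's t-test asks: is the difference in mean gains larger than
# we would expect from chance alone?
t_stat, p_value = stats.ttest_ind(gains_intervention, gains_control,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# When p exceeds the conventional 0.05 threshold, the observed
# difference is consistent with chance, i.e. there is no evidence
# that the programme outperformed normal schooling.
```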
Conclusion
While randomised control trials and other counterfactual research designs are an important part of the toolkit that policymakers can use to improve decision-making, they are not perfect and need to be interpreted with care. Even the best of such studies need to be complemented with other research, particularly qualitative case studies that provide insight into the actual mechanisms that make programmes work.
Source: Lessons and Reflections about Implementing M&E in South Africa: An anthology of articles by Zenex staff, M&E experts and NGOs. www.zenexfoundation.org.za