If nations are to meet their commitments to offer better education to their citizens, as expressed in the United Nations’ Sustainable Development Goals 2030, there is an urgent need to improve the quality of education offered in schools. Nowhere is this truer than in South Africa. Well over R16 000 is spent each year to educate a South African child in a public school. The efficacy of this investment leaves much to be desired, with fewer than two out of every 10 children in grade four able to read for meaning in any language. Education economist Dr Gabrielle Wills makes the case for the critical value of research in informing quality improvements: from experimental research such as randomised controlled trials (RCTs) that explore new models to improve teaching in classrooms, to identifying the cost-effectiveness of competing models.
Just as important as experimental research is the use of existing administrative data to assess how well the system is functioning, and the development of new tests to monitor learning improvements over time. Yet most education systems in the world spend far too little on high-quality education research. Even in wealthier countries, education research receives less attention than it should, with startling differences in how much is spent on research in education relative to other sectors. For example, in 2010/11 spending on medical research in the United Kingdom was about 33 times higher than spending on education research.ii This is absurd when the lion’s share of national budgets is usually allocated to education.
While there is no official data on how much the South African public sector spends on education research, what is clear is that it is minimal. If one just considers experimental research studies in education in South Africa, these have mostly been paid for by private philanthropists or development agencies such as USAID, UNICEF or the Department for International Development.
Furthermore, despite the majority of learners performing at or below lower-income country standards, South Africa’s middle-income country status often disqualifies researchers from accessing development funding for research projects. It is no surprise, then, that few experimental research studies in education have been implemented in South Africa.
In a recent review, Taylor (2019)iii identified just six published RCTs investigating primary or secondary school learning improvements in South Africa over the span of a decade (2009 to 2018).
Why is so little spent on education research?
If a private company were to spend a significant share of its budget on one piece of equipment, it would go to great lengths to ensure that the most appropriate product was purchased for its needs, that it worked properly, and that it invested in research and development to sustain the best possible yields from the equipment. Despite the fact that the largest proportion of South African taxpayers’ money is spent on education (16% to 17% of total annual government allocations)iv, we invest very little in ensuring that it serves citizens in the most effective way. There are various possible explanations for this irrationality.
First, the benefit of education research is usually viewed as a public good – the private collection of knowledge in this field is difficult and the possibility of private returns less apparent.v Contrast this with medical research, where experimental trials of new drugs yield significant private benefits for pharmaceutical companies. With private funds channelled into these studies, medical research has made great strides. The collective work of many small yet well-executed experimental studies has contributed to breakthroughs in medical research, the effective treatment of disease and ultimately to saving lives.
Or consider the agricultural industry, in which research into plant hybrids that are disease-resistant and produce greater yields has resulted in significant returns for farmers and private providers of agricultural goods. Without a broader and longer-term view on how educational progress prospers a society, the private sector is unlikely to invest in research.
The lack of spending on education research may also be attributed to the perceived low value of research outputs relative to their cost. This is a legitimate concern in the context of too many low-quality studies in education that are of little use to funders, government or practitioners. However, even when empirical evaluations of new programmes or interventions are well executed, the impact of these programmes on learning is often disappointing.vi Funders get frustrated and implementers become disheartened. But we need to continue trialling new approaches, iterating on versions of those that show some promise, and learning from existing studies.vii
The history of the commercialisation of the light bulb is a good analogy. Edison was not the first person to invent the light bulb: at least 22 inventors came before him. He was building on existing knowledge and is widely quoted as failing 1 000 times. His real success, relative to his predecessors, is attributed to developing an entire integrated system of electric lighting in which the lamp was just one component of a wider system of generators, mains and feeders that powered it.viii In the same way, the research process eventually strikes gold. After learning from others and iterating on versions of what may work, we find the most effective programme components – and the system components that support their success – to positively alter the learning trajectories of children.ix
Finally, and most importantly, too little is invested in research (and, closely related to this, monitoring and evaluation) because the national value of quality education is often not fully understood. Education is often viewed as a luxury – something that we spend on when times are good. Viewing education through this lens is incorrect. Education is primarily an investment, where the economic value of building human capital is confirmed time and again across studies.x Economists Hanushek and Woessmannxi used 40 years of data from 50 countries to show that having a better educated population, as measured by cognitive skills on international tests of learning, has significant economic impact.
When a country’s test scores are one standard deviation higher than the average, that country’s annual economic growth rate (in GDP per capita) will be about two percentage points higher than the cross-country average. Furthermore, in South Africa, with the highest levels of income inequality in the world, improving the quality of education in no-fee schools is likely to have significant implications for reducing this inequality. Roughly 60% of the differences in wages (the largest component of incomes earned) across historically privileged and disadvantaged South Africans are attributed to differences in the quality of education they received.xii
Investing in education research in a slow growth environment
If differences in cognitive skills lead to economically significant differences in economic growth, then expenditure on education should certainly not be reduced in difficult times. In South Africa, we are observing declining trends in the purchasing power of every rand spent in public education and rising learner-educator ratios.xiii In this context, investment in research, monitoring and evaluation is critical for enhancing efficiencies within the sector; in other words, increasing how much is learnt in school for every rand spent per child.
There is a desperate need for the South African education system to run ‘leaner’ per learner. Public expenditure per child in South Africa was at least three times as much as in Kenya in 2007xiv, yet in the same year grade six South African learners (even when limited to those in our two most functional provincial administrations, Gauteng and the Western Cape) did worse than similarly poor Kenyan grade six students on the same mathematics test. This result holds at every level of learners’ socioeconomic status.xv
How to support educational efficiency through research
Monitoring and accountability systems are imperative for ensuring that funds are effectively allocated and used. We need better monitoring systems, including data management, tracking of per-learner spending, as well as accountability processes for redress. Building institutional capacity in these areas could contribute to significant savings and improved functionality.
While monitoring and tracking are the responsibility of the state, funded research using existing datasets can support these efforts, offer solutions and reveal areas for improvement. For example, new research on the cost of repetition in South Africaxvi shows that if learner repetition were better managed, there could be large savings for the public system. Spending on repeaters accounts for at least 8% of the national government allocation to education. The study suggested that just halving grade ten repetition rates – where at a minimum one learner in every five repeats each year – could free up roughly R2 billion that provinces could use towards alternative programmes such as early remediation.
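The order of magnitude of that saving can be checked with a back-of-envelope calculation. The sketch below uses the article’s own figures (roughly one in five learners repeating, and per-learner spending of around R16 000 per year) plus one assumption not stated in the text – a grade ten enrolment of roughly one million learners:

```python
# Back-of-envelope check of the repetition-savings claim.
# grade_ten_enrolment is an assumed figure for illustration only;
# the repetition rate and per-learner cost come from the article.
grade_ten_enrolment = 1_000_000   # assumed, not from the text
repetition_rate = 0.20            # "one learner in every five repeats"
cost_per_learner = 16_000         # rand per year, "well over R16 000"

repeaters = grade_ten_enrolment * repetition_rate
# Halving repetition means half of those repeaters no longer need
# to be funded for an extra year.
savings = (repeaters / 2) * cost_per_learner
print(f"Roughly R{savings / 1e9:.1f} billion freed up")
```

Under these assumptions the calculation lands in the same ballpark as the roughly R2 billion cited by the study.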
This study also highlighted the poor quality of available school data on repeaters. At a minimum, realising improvements in the monitoring and implementation of repetition policy – and realising large savings – requires significant improvements in the quality of data collected from schools. This is just one example of many possible studies that could be funded using existing data to guide improved monitoring efforts.
Supporting experimental research to identify programmes or policies that yield the best possible learning gains in context-specific settings is another way to improve system efficiency. In the face of political pressure, policy is too often promulgated and implemented well before an evidence base has identified whether favoured programmes foster learning gains or, in the worst case, erode opportunities to learn.xvii
Unfortunately, South Africa has a history of spending on education programmes that were piloted or evaluated insufficiently, or not at all, in our schools. The failure of outcomes-based education (OBE or Curriculum 2005) – arguably the worst mistake made in post-apartheid education – is a case in point. We owe it to children to rigorously test curricula, programmes and new approaches to teaching before making an entire generation the guinea pigs of new fads or favoured ideals. Even if something works in another context, it remains untested until it has been trialled and evaluated in South African schools, across a variety of socioeconomic settings. Business, non-profit organisations and philanthropic funders can play a significant role here.
As Stephen Taylor, current director of Research, Monitoring and Evaluation in the national Department of Basic Education, described in a recent book:
“… the basic education sector faces an environment of increasingly tight budget pressures. Therefore, more than ever, public and corporate social investment (CSI) spending must be directed to those programmes and policies that we know actually impact on the most important education outcomes. And if we are to know what works, then experimental research will have a valuable role to play in figuring out South Africa’s education policy questions.” xvii
Identifying the causal effects of programmes or policies
If children’s test scores are higher after participating in an intervention programme, this is not evidence enough that the intervention has worked. The Reading Catch-Up Programme (RCUP)xviii – evaluated through a randomised controlled trial in the KwaZulu-Natal district of Pinetown in 2014 – is a good example of why an experiment is needed. RCUP was aimed at supporting grade four learners with English as an additional language. It had been reported to be highly effective in Gauteng as part of the Gauteng Primary Literacy and Mathematics Strategy, with learner test scores increasing notably after the programme intervention. However, a programme is only effective if children learn more than they normally would. This can be tested using experiments.
A sample is selected and the programme is randomly allocated to some groups (the treatment group) and not others (the control group), but all groups are tested. The RCUP was randomly assigned to some schools and not others in the district. The results showed that learners in both control and treatment schools experienced learning gains, but by the same amount. In other words, the programme was not effective in its current form. Had it not been evaluated, extra spending could have gone to a programme that yielded no additional improvements in learning. The programme was subsequently revised and built upon, culminating in the success of the Early Grade Reading Study.xix
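The logic of this comparison can be sketched in a few lines of code. Everything below is illustrative: the number of schools, the distribution of learning gains and the zero programme effect are invented numbers, not figures from the RCUP evaluation – the point is only that the estimated impact is the difference in average gains between randomly assigned groups:

```python
import random
import statistics

random.seed(1)  # for a reproducible illustration

# Schools are randomly split into treatment and control; all are
# tested before and after the programme.
schools = list(range(100))
random.shuffle(schools)
treatment, control = schools[:50], schools[50:]

PROGRAMME_EFFECT = 0.0  # extra gain from the programme; zero mirrors
                        # a finding of no additional impact

def year_gain(treated):
    # Learners in both groups gain over the year regardless of the
    # programme (illustrative numbers: mean 5, sd 2 test-score points).
    gain = random.gauss(5.0, 2.0)
    return gain + (PROGRAMME_EFFECT if treated else 0.0)

treatment_gains = [year_gain(True) for _ in treatment]
control_gains = [year_gain(False) for _ in control]

# The estimated impact: difference in average gains between groups.
impact = statistics.mean(treatment_gains) - statistics.mean(control_gains)
print(f"Estimated programme impact: {impact:+.2f} test-score points")
```

With the programme effect set to zero, the estimated impact hovers around zero up to sampling noise – both groups learn, but by roughly the same amount, which is precisely the RCUP pattern described above.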
On rare occasions there are also opportunities to explore whether programmes or policies have causal effects on learning using existing data, rather than implementing experiments. For example, Gustafsson and Taylor (2018) xx took advantage of a unique event where provincial boundary lines were redrawn. They could evaluate provincial effectiveness by comparing how schools performed in the matric examinations before and after they had been shifted to a new provincial administration. It turns out that provincial functionality matters a lot for matriculation outcomes.
Developing research capacity improves the quality of research
As important as funding research projects and experimental programmes is building research capacity within the country. Implementing the kind of analyses described here requires researchers with technical quantitative skills who are well acquainted with available administrative datasets, understand the policy context and have the experience to ask the right questions. Qualitative researchers are also critical for exploring why programmes do or do not work.
Ensuring both a sustainable stream of high-quality research outputs and the effective monitoring and evaluation of spending within provincial departments of education will depend on building capacity for this work. Corporates could support these efforts by offering postgraduate funding for study in relevant fields, targeted at promising young students or officials within public institutions. Beyond building technical quantitative skills, we need civil servants who understand the value of research and can discern its quality. Without this, research uptake will be slow and ineffective.
In conclusion, there are both direct and indirect routes for funders to positively shape the research landscape in education: funding research and investing in a future stream of highly capable civil servants. This will lead the way for improved service delivery in basic education.
Practical tips for effective research for development
When should research be conducted?
If funders want to know whether a programme directly results in improved outcomes for its beneficiaries, then the research process should start well before the programme is implemented. Before implementation starts, the sample size necessary to detect programme impact must be determined; the sample must be randomly assigned into the group receiving the programme (treatment) and the one that does not (control); and baseline data must be collected. Often, programmes charge ahead without the necessary research preparations having been conducted, making it impossible to identify causal impacts. Alternatively, if the intention is to pilot a programme before a scale-up, then a study of the programme’s feasibility and potential for impact could be initiated during the piloting process.
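The sample-size step can be sketched with the standard normal-approximation formula for comparing two group means. The function name and the default thresholds (5% significance, 80% power) are illustrative conventions, not requirements from the text, and real school-based trials that randomise whole schools would need to inflate this figure by a design effect for clustering:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect_size_sd, alpha=0.05, power=0.80):
    """Approximate learners needed per arm to detect a given effect.

    effect_size_sd: minimum detectable effect in standard-deviation
    units. Uses the textbook two-arm normal-approximation formula:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / effect_size_sd^2.
    Ignores clustering, so it understates the need for school-level
    randomisation.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided critical value
    z_beta = z(power)           # value for the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size_sd ** 2
    return math.ceil(n)

# Detecting a 0.3 standard-deviation gain at conventional thresholds:
print(sample_size_per_arm(0.3))  # 175 learners per arm
```

Note how quickly the required sample grows as the detectable effect shrinks – halving the effect size roughly quadruples the sample – which is one reason this calculation must happen before, not after, implementation begins.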
At what stage should a research budget be decided on, and what are the unexpected/hidden factors in research processes that could be overlooked when budgeting?
The approach used for costing a research process should be informed by the complexity of the research questions to be answered. Sometimes, the amount required to adequately answer the proposed research questions will be clear to the research team. At other times, a research scoping exercise is needed to determine if and how the research question can be answered. Prior questions will need to be asked such as ‘does the necessary data exist?’, ‘is it of sufficient quality?’, and ‘what new data needs to be collected?’ In this case, budgeting for a scoping study may, in turn, be necessary to establish a reasonable research budget.
The importance of quality data for quality research outputs should not be underestimated, and this requires a sufficient budget. If answering the research questions requires collecting new data, this is usually the largest research cost component. Alternatively, if existing administrative data is required to meet project objectives, the time and resources required to adequately prepare the data for analysis should not be underestimated. Funders should also ensure that a budget is allocated for the dissemination of research findings to maximise research uptake, as well as allocating resources for the preparation of new data to be shared through online data repositories.
Who should lead the commissioning of research and how can confirmation bias be guarded against?
The question of who should lead the commissioning of research involves a prior question: Who will use the research? Increasingly funders require that strategies for research uptake or impact are embedded into the entire research process. The best strategy for uptake, however, is to ensure that the key stakeholders using the research, or adopting intended policies or practices, are involved in the research project from design to dissemination. This may mean that the intended user commissions the work. For example, if research recommendations (or programmes or policies that are proven to work) are to be adopted by public institutions, then the relevant public institution should commission the research with the backing of funders. This can also be used as an opportunity to build research capacity within government departments.
Steering committees and governance processes can be used to manage power dynamics with respect to who has the largest influence on how the research is undertaken. These measures can also limit possibilities for confirmation bias that may be driven by stakeholders that would like research findings to be positioned in their interests.
How can implementing partners ensure that research is integrated into their work, thereby also securing research funding?
Time pressures and limited research capacity generally constrain the ability of implementers to conduct ‘in-house’ research on their specific programmes. It is also difficult for implementers to be objective about the impact and efficacy of their programmes when they are fully invested in day-to-day operations. For this reason, it is recommended that the research process be outsourced. While it may be infeasible for implementing partners to fund fully established researchers, many PhD and master’s students are looking for exciting possibilities for their dissertations. Promising students and their supervisors may also be able to connect the research to wider projects that have secured funding.
Which platforms can be used to share research findings?
Academics should publish the results of funded evaluations or research projects in good journals. This can help validate study findings and foster analytical rigour which, in turn, supports funders to get value for money. Briefs and summary reports that articulate results, lessons learnt and recommendations for next steps are also needed to disseminate research to a wider audience. If there are key policy messages or issues emerging of relevance to the public, it is highly recommended that the research team write editorials for the media. Recommended platforms for publicly sharing outputs include the websites of research institutions or government departments, but preferred repositories for information sharing are often sector specific.
- Howie, S. et al. (2017) Progress in International Reading Literacy Study 2016: South African Children’s Reading Literacy Achievement. Summary Report. Pretoria: Centre for Evaluation and Assessment and the University of Pretoria. https://www.up.ac.za/media/shared/164/ZP_Files/pirls-literacy-2016_grade-4_15-dec-2017_low-quality.zp137684.pdf
- Davies, N. (2018) Advocates of RCTs in education should look more closely at the differences between medical research and education research. Blog for the London School of Economics. https://blogs.lse.ac.uk/politicsandpolicy/theres-no-such-thing-as-a-free-rct-a-response-to-goldacre-and-gove/
- Taylor, S. (2019) How Can Learning Inequalities be Reduced? Lessons Learnt from Experimental Research in South Africa. In: Spaull, N. and Jansen, J. (eds) South African Schooling: The Enigma of Inequality. A Study of the Present Situation and Future Possibilities. Springer.
- UNICEF (2018) South Africa Education Budget Brief 2018/19. Paris: UNICEF https://www.unicef.org/southafrica/resources_21937.html
- Davies, N. (2018) see endnote ii.
- Taylor, S. (2019) see endnote iii.
- Haynes, L. et al. (2012) Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. Cabinet Office. pp. 32. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/62529/TLA-1906126.pdf
- Hughes, T. P. (1977) “Edison’s method”. In: Pickett, W. B. (ed.). Technology at the Turning Point. San Francisco: San Francisco Press. pp. 5–22.
- Fleisch, B. (2018). Education triple cocktail: The system-wide instructional reform in South Africa. Sandton: Juta.
- Psacharopoulos, G. and Patrinos, H. A. (2004) Returns to investment in education: A further update. Education Economics. 12 (2), pp. 111–134.
- Hanushek, E. and Woessmann, L. (2012) Do better schools lead to more growth? Cognitive skills, economic outcomes, and causation. Journal of Economic Growth 17, pp. 267–321. https://hanushek.stanford.edu/sites/default/files/publications/Hanushek%2BWoessmann%202012%20JEconGrowth%2017%284%29.pdf
- Burger, J. and Van der Berg, S. (2011) Modelling cognitive skills, ability and school quality to explain labour market earnings differentials. Stellenbosch Economics Department Working Paper 08/2011. Stellenbosch University. https://www.ekon.sun.ac.za/wpapers/2011/wp082011/wp-08-2011.pdf
- Spaull, N. (2018) Basic education thrown under the bus – and it shows up in test results. 16 April 2018. Business Day. https://www.businesslive.co.za/bd/opinion/2018-04-16-basic-education-thrown-under-the-bus–and-it-shows-up-in-test-results/
- This gap in spending has subsequently widened between the two countries. Spending per primary school learner in 2014, in international dollars, was $291 in Kenya and $2 280 in South Africa.
- Wills, G., Shepherd, D. and Kotze, J. (2018) Chapter 6: Explaining the Western Cape performance paradox: An econometric analysis. In: Levy, B. et al. (2018) The Politics and Governance of Basic Education – A Tale of Two South African Provinces. Oxford University Press.
- Van der Berg, S. et al. (2019) The Cost of Repetition in South Africa. Report for the DG Murray Trust. Research on Socio-Economic Policy (ReSEP), Stellenbosch University. https://www.ekon.sun.ac.za/wpapers/2019/wp132019
- Taylor, S. (2019) see endnote iii.
- Fleisch, B. et al. (2017) Failing to catch up in reading in the middle years: The findings of the impact evaluation of the Reading Catch-Up Programme in South Africa. International Journal of Educational Development. 53, pp. 36–47. http://dx.doi.org/10.1016/j.ijedudev.2016.11.008
- DBE (2017) Policy Summary – Results of Year 2 Impact Evaluation of the Early Grade Reading Study (August 2017). Pretoria: Department of Basic Education.
- Gustafsson, M. and Taylor, S. (2018) Treating Schools to a New Administration: Evidence of the Impact of Better Practices in the System-Level Administration of Schools. Journal of African Economies. 27 (5), November 2018, pp. 515–537.