By Nompumelelo Mohohlwane
Evidence-based research, monitoring and evaluation are foundational to rational decision-making in the design of policies, programmes and interventions. This is imperative if public policy is to have an impact on service delivery, especially in the education sector. The information derived from this evidence base determines the value or merit of programmes or policy options by identifying standards, performing empirical investigations using various techniques, and integrating the findings into conclusions and recommendations for the sector. In the short term, this helps programme managers improve performance and accountability. In the long term, the knowledge generated could inform broader programme and policy design, and practices beyond the programme being evaluated.
The Department of Basic Education (DBE) leads and conducts several national research, monitoring and evaluation efforts to support the medium- to long-term performance of the education system. These include:
- comparative analyses using existing data such as the General Household Survey data;
- collecting new data to evaluate progress against sector indicators and goals where such data is not regularly collected;
- conducting research on policy options or prospective programmes and interventions; and
- conducting programme-specific evaluations, such as the implementation evaluation of the National School Nutrition Programme.
System-level challenges associated with implementing evaluations include uncertainty about the value of evaluations, concerns about repercussions, and perceived underperformance. A programme manager may be held accountable for the programme being evaluated, yet may not have control over all the underlying processes, owing to complexities in the structure, resourcing and scale of programmes.
These complexities include concurrent functions between national departments, as well as between the national and provincial education departments; funding that is received directly from National Treasury or Provincial Treasury by nine different Provincial Education Departments but accounted for nationally; and the number of schools in a programme. These concerns are addressed by highlighting the benefits of undertaking evaluations: the opportunity to collect reliable data on issues that concern programme managers; objectivity in documenting both successes and challenges; the identification of ways to improve efficiency; and recommendations targeted at the various levels of delivery, including district and provincial levels in addition to the national office.
Despite continuous efforts to address these concerns, some challenges persist. Collecting school-level data requires communication with provinces, districts, circuits and schools. The contact details of schools are not always up to date, and information may not be cascaded on time. A further complication arises when an unannounced visit is the most appropriate evaluation strategy: the unanticipated disruption to the school may result in denial of access and the unavailability of key personnel and information. In some instances the province, district or the school itself may have scheduled other activities on the data-collection day.
Once schools have been accessed, the school day may be too short for the planned data collection, either because there are many data-collection instruments or because logistical delays postpone the start of data collection. Finally, the school calendar and curriculum activities, such as examinations, affect when schools may be accessed and which grades are available. Substantial planning and mitigation strategies are necessary to manage these data-collection processes and unforeseen circumstances.
Although conducting research, monitoring and evaluation in a system as large as ours has its challenges, it is both possible and important. Moving forward, the recommendations are for the DBE, firstly, to adopt existing guidelines and develop new ones for research, monitoring and evaluation in both new and existing programmes; secondly, to systematically build capacity among programme managers and advocate for high-quality evaluations internally and externally; and thirdly, to ensure that organisational structures such as evaluation steering committees include provincial representatives and programme managers as well as evaluation officials.
Source: Lessons and Reflections about Implementing M&E in South Africa: An anthology of articles by Zenex staff, M&E experts and NGOs. www.zenexfoundation.org.za