South African companies spent over R13 billion on corporate social investment (CSI) in 2025. There is a need to account for this spend and assess its social impact. Monitoring and evaluation (M&E) systems provide the mechanism to do so. While most companies and nonprofits invest in M&E processes, many struggle to capture outcomes and insights at a level that serves the interests of multiple stakeholders. In this article, Nick Rockey takes a look at how companies in South Africa are implementing M&E and what can be done to enhance the substantial investment in this process.

Why M&E matters
Unlike procurement of company goods or services, CSI funding does not deliver a benefit with an easily quantifiable return on investment. Without M&E, funds are disbursed into a void with no direct measurable deliverables or associated value. M&E is thus first and foremost required to ensure accountability by those receiving funds, as well as to demonstrate how social value is generated in return for the funds provided. M&E findings form the basis of good reporting to internal and external stakeholders. In fact, according to Trialogue research, reporting to boards is the most common use of evaluation data by companies and nonprofit organisations. M&E advances reports from project descriptions and anecdotal accounts of success to hard evidence that shows results. For most companies with rigorous governance and risk management processes, this is often a condition of providing funding.
M&E is also required as a basis for learning, with most companies and nonprofits using it to make decisions around strategy and projects. It helps identify project elements that are sub-optimal and should be discontinued or improved, as well as elements that are working well and can be amplified, repeated, or scaled. A measure of social benefit enables analysis of cost efficiency, which is particularly important if there is an intent to replicate or scale the project.
Data that is benchmarked or tracked over time provides valuable insights. It shows relative performance, which allows for deeper analysis and refinement of the approach. It creates a body of knowledge that can be shared more broadly in the developmental ecosystem and used to inform ongoing research and inputs to policy, thereby contributing to advances in developmental practice.

Monitoring and evaluation explained
Monitoring is the process of regularly tracking progress. Good practice is to build this into the implementation, with output and outcome indicators – or key performance indicators (KPIs) – defined upfront and measured on an ongoing basis. Data can be extracted from multiple sources, such as attendance registers, exam scores, survey responses, or qualitative feedback forms that are recorded as activities take place. Tracked over time, these records provide an excellent account of how the project is being implemented. They allow for transparency and prompt feedback so that problems with implementation can be identified immediately, not at the end of the year or funding cycle when it is too late to take corrective action.
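The monitoring logic described above can be sketched in a few lines. Everything here is hypothetical (the programme, records and KPI target are illustrative only); the point is that comparing captured records against an agreed output indicator surfaces problems immediately:

```python
# Hypothetical monthly records, as might be captured from attendance
# registers at a tutoring programme (all names and figures illustrative).
records = [
    {"month": "Jan", "sessions_held": 8, "learners_attending": 212},
    {"month": "Feb", "sessions_held": 8, "learners_attending": 198},
    {"month": "Mar", "sessions_held": 6, "learners_attending": 140},
]

target_sessions = 8  # agreed output KPI: sessions per month

# Flag months that fall short of the agreed output target so that
# implementation problems surface immediately, not at year end.
shortfalls = [r["month"] for r in records if r["sessions_held"] < target_sessions]
print(shortfalls)  # ['Mar']
```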
Evaluations are typically undertaken at a point in time to review a particular aspect of a project. Various forms of evaluation exist. In an ideal world, a diagnostic evaluation would be done before embarking on a project to understand the context and the issues that need to be addressed. A design evaluation provides insights into how best to address those needs. There are often many ways to address a social issue: some solutions are more holistic, longer term and more expensive, while others are more symptomatic and confined in nature. The design evaluation determines the approach most suitable for the project context. Other forms include implementation evaluations, which assess the effectiveness of the process; outcome or impact evaluations, which measure results over the short and long term; and cost-effectiveness evaluations, which incorporate the cost of achieving the results.
The robustness of M&E processes varies considerably, from rapid reviews that include a project site visit to extensive evaluations that gather input from involved and affected stakeholder groups through surveys, interviews or focus group discussions. Baseline and endline evaluations track shifts in outcomes. Control groups compare the results of those benefiting from an intervention with similar groups that have not received it.
M&E can be an involved and resource-intensive process, so companies need to tailor their M&E solutions to the project’s scale and the level of support they provide. A one-off reactive grant may simply require a letter from the implementing partner outlining the project’s activities and outputs. In contrast, a long-term flagship programme will require a robust M&E framework that includes ongoing monitoring and periodic evaluation.
Tailoring M&E to different CSI approaches
Corporate interventions in development often form part of a CSI programme. Different corporate strategies influence the scale and duration of support. Trialogue maps CSI approaches on an evolution spectrum, and where programmes or projects sit on this spectrum influences how M&E should be done. Most companies will have a mix of approaches in their CSI portfolio – with some charitable grants or projects (CSI 1.0), some flagship strategic CSI projects (CSI 2.0) and, more rarely, some leveraged CSI programmes (CSI 3.0) – each requiring a different approach to M&E.
CSI 1.0 (charitable CSI) is characterised by smaller, one-off grants, largely made reactively to nonprofits. At this level, the cost of setting up a robust monitoring framework and undertaking evaluations is unlikely to be justified by the scale of investment. The relative share of funding provided by the company is also a consideration: a company that contributes only a small share of the funding will have less influence over the M&E process. For grants of this nature, there should be robust upfront vetting of the implementing partner and the cause being supported. When investing in an established nonprofit or project, there should be a degree of monitoring in place that can serve as the basis of a feedback report.
In CSI 2.0 (strategic CSI), companies invest in flagship programmes. These are often aligned to the nature of the business, supported for three to five years or longer, and funded at a more substantial level. In this instance, developing and agreeing on an M&E framework and reporting protocols is fully justified. Companies should work with their implementing partners to develop a theory of change and a monitoring framework. If multiple funders are involved, it is fair that the costs are apportioned accordingly. In some instances, this framework may already exist in full or in part, reducing the time and effort needed to achieve a standard that works for both the funder(s) and the implementer.
Evaluations should also be included for flagship programmes and may be initiated and funded by a corporate funder, particularly when the company is a major or sole contributing funder. An evaluation may be initiated in response to a board querying the impact of years of funding. In such cases, if the theory of change and baseline metrics are not in place, it will be difficult to assess impact credibly. Alternatively, an evaluation may be called for simply to satisfy internal governance requirements for an independent assessment of the project, without much thought given to its nature and purpose. In either case, the rigour and purpose of the evaluation must be fully considered. In most cases, companies are not sufficiently invested in projects to justify a series of evaluations, from diagnostic to impact. Therefore, elements of these types of evaluation may be incorporated into a single process that sense-checks project design, reviews implementation processes against the implementation plan and assesses evidence of outcomes.
For CSI 3.0 (leveraged CSI), robust M&E becomes critical. Any project that is run with the aim of establishing a lead practice blueprint to be used for thought leadership, replication and scale, or to influence policy, needs to be defensible. Robust M&E is required to provide confidence in stated results, ideally with comparative analysis over time and against a control group. Beyond the numbers, M&E processes should identify challenges and conditions on which successful outcomes may be contingent. Other objectives may include analysing the scalability of the project or identifying elements that could be applied more broadly, findings that could be considered by policymakers, or guidance that can be applied by others funding or implementing projects of a similar nature.

1.0 Charitable CSI
Where a charitable CSI approach works
- A medium-sized company with a limited budget operates in a rural area alongside a large community facing multiple needs.
- A large company with multiple regional offices wants to support local communities, but each office has only a small share of the budget and limited CSI expertise.
- A national retailer allocates a small discretionary budget to individual stores, enabling them to respond to customer requests for donations to local projects and thereby build customer loyalty.
- A company allocates a portion of its CSI budget – Trialogue research shows this to be about 10% – to charitable CSI so that it can respond to unexpected crises or requests that invariably arise.
2.0 Strategic CSI
Santam P4RR
Santam’s Partnership for Risk and Resilience (P4RR) programme supports municipalities with capacity building and resources to better manage disaster risks. The initiative was developed and is run by Santam as a flagship project. A theory of change (ToC) was created for the programme, outlining the intended developmental outcomes and the business benefits associated with improved risk management. Based on the ToC, an indicator framework was established and implemented. Municipal partners were involved in this process to ensure that the metrics used are relevant and measurable. Partners now contribute data to track outputs (in the form of deliverables) and outcomes. Given the wide variety of services Santam supports across municipalities, a dashboard has been developed to highlight the nature and scale of support provided, along with qualitative feedback. The dashboard allows users to filter by time period, municipality and specific service offerings, enabling deeper analysis and insight.
3.0 Leveraged CSI
Standard Bank Tutuwa Community Foundation
Tutuwa aims for leveraged impact in line with CSI 3.0, and as such, M&E forms a key aspect of its investments. The foundation, therefore, sets out the structure of feedback it requires as a condition for providing funding.
For example, Tutuwa funded a pilot project to assess if and how the WeThinkCode initiative could be institutionalised within TVET colleges. M&E processes used for existing WeThinkCode projects were applied to the pilot to assess differences in outcomes, with initial indications that the outcomes would likely be comparable over time. Importantly, assessments also considered the viability of achieving scale, where the extent of independence and integration was a critical consideration.
Another example is Tutuwa’s support of the Teacher Internship Collaboration South Africa (TICZA), where a variety of M&E processes were initiated and funded as part of the collaborative. This included an initial mapping of the sector, monitoring data collected from implementing partners, a survey of newly qualified teachers, a cost-effectiveness analysis and a summative evaluation of the programme. The M&E findings are intended to inform long-term decision-making on how extended teacher internships could be institutionalised more broadly.
Measuring cost effectiveness
In the course of regular business, companies are familiar with cost structures, and it is common to assess the returns of an investment against the costs incurred. In the developmental space, this is not straightforward. For instance, what is the rand value of an improvement in maths marks, of achieving school readiness at early childhood development (ECD) level, or of improved mental health?
The Social Return on Investment (SROI) model addresses this by deriving a return ratio. However, while this provides a number, the real value lies in the process of deriving the number, which is both qualitative and complex, being dependent on stakeholder perceptions of value and a host of assumptions. The return ratio is simply a numerical value that can be tracked over time for a single project and is not directly comparable to other projects, which have distinct sets of assumptions and stakeholder perceptions. For further information on SROI, read the viewpoint on page 99.
Other cost comparison processes can be used. A simple exercise is to track the total cost of the intervention and divide that by the number of people reached, yielding a cost per beneficiary. This analysis focuses on the cost of delivery, rather than the cost of achieving outcomes. However, even at this basic level, the analysis can be helpful if benchmarked against similar programmes or sites.
A more insightful approach is to consider the cost-effectiveness of achieving a particular outcome. This is more readily achieved if the outcomes are clearly defined, for instance, earning a bachelor’s degree, setting up a viable small business or creating employment. There may also be staggered outcomes where the costs per beneficiary of achieving a particular threshold are measured.
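The difference between cost per beneficiary (cost of delivery) and cost per outcome can be shown with a small worked example. All figures below are illustrative and not drawn from any programme discussed here:

```python
# Hypothetical figures to illustrate cost per beneficiary versus
# cost per outcome (all values illustrative).
total_cost = 1_200_000  # total programme cost in rand
enrolled = 300          # beneficiaries reached by the programme
graduated = 180         # beneficiaries achieving the defined outcome

cost_per_beneficiary = total_cost / enrolled   # cost of delivery
cost_per_outcome = total_cost / graduated      # cost of achieving the outcome

print(f"Cost per beneficiary: R{cost_per_beneficiary:,.0f}")  # R4,000
print(f"Cost per outcome:     R{cost_per_outcome:,.0f}")      # R6,667
```

A programme with a lower cost per beneficiary can still have a higher cost per outcome if fewer participants reach the defined threshold, which is why the choice of denominator matters when benchmarking.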
Cost-effectiveness is typically limited to a comparison of numbers and excludes more qualitative aspects. A cost-benefit analysis would seek to incorporate these qualitative elements to provide a more rounded sense of benefits, both direct and indirect, from the intervention. Benefits may be expressed in hard numbers (e.g. improved attendance at a school) or through perceptions and attitudes (e.g. feelings of wellbeing). The cost-benefit analysis may be more challenging to benchmark, but it can be useful for gauging the less tangible benefits derived from an intervention.
Strengthening M&E through participation
In the development field, the need for support invariably outweighs the availability of funds. This can create a power imbalance, with funders dictating terms and nonprofits having to absorb demands, sometimes in an unreasonable manner. While it is fair to request feedback on how funds are applied and the results achieved, a one-sided approach to M&E not only creates an environment of distrust and anxiety but is also unlikely to yield the best outcomes.
Both the funder and the implementer are mutually accountable and invested in achieving the best return on the funds spent. There is a natural balance of power, with the implementer contributing developmental know-how and the funder providing the necessary financial resources. Such a balance should ensure an open and honest relationship, whereby the implementer is comfortable sharing challenges and drawbacks, and funders are prepared to work with implementers to overcome these. In this instance, M&E becomes a co-created process, with the emphasis falling as much on discovery and learning as on accountability.
Nonprofits are knowledgeable about the community’s circumstances and the reality of project implementation on the ground, including what can be measured, how to obtain information effectively and the limitations of data collection. In most cases, they will bear responsibility for collecting data as the programme is implemented. Their involvement in the design of the M&E framework and processes is critical, not only for ensuring buy-in but for a constructive, mutually beneficial learning process.
Thankfully, Trialogue’s 2025 primary research found that over 80% of companies collaborated with implementing partners on all M&E processes – including the design of programming and M&E methodology, data collection and feedback on findings. Similarly, 89% of nonprofits reported that they participated in M&E methodology design with corporate donors and 91% in programme design. These results suggest increased collaboration: in 2021, only 35% of companies involved implementing partners in designing M&E methodologies and only a fifth (21%) of nonprofits reported being involved in these processes with corporate donors.
Even more importantly, and often overlooked, is the involvement of beneficiaries and stakeholders in programme and M&E design and implementation. These parties have a wealth of knowledge to contribute, which can be sourced through focus groups, storytelling and built-in appraisal mechanisms. Not only can these stakeholders provide valuable insights, but their involvement can also secure better buy-in, which is critical to achieving successful outcomes.
Unfortunately, fewer than half of companies (41%) involved beneficiaries in the design of M&E, and even fewer nonprofits (21%) reported that their corporate partners did so. Beneficiaries were most often involved in data collection and providing feedback on the findings.
A developmental intervention needs to work for all parties involved. It should not be a high-handed approach to improving lives without the participation and support of those whose lives are being impacted. This applies as much to the M&E process as it does to project implementation.
Terminology
Monitoring: the continuous collection and analysis of project data to track progress, support ongoing decision-making and ensure accountability.
Evaluation: a formal assessment of a project, or elements of a project, to provide evidence-based insights, inform decision-making, ensure accountability and guide future improvement.
Monitoring and evaluation (M&E): combines the ongoing monitoring of activities with the evaluation of project elements to provide a more complete understanding of project progress and results.
Monitoring, evaluation and learning (MEL): builds on the M&E process to extract and apply lessons learnt to support decision-making and improve processes, outcomes and impact.
Monitoring, evaluation, research and learning (MERL): incorporates the additional element of research to generate new knowledge and provide deeper evidence to support decision-making and improvement in process, outcomes and impact.
Theory of change: a structured outline that maps the pathway from the current state to a desired future state, detailing the activities, expected outcomes and assumptions necessary to achieve lasting change (for more information, read ‘Troubleshooting the theory of change’ on page 112).
Indicator framework: a tool, aligned with a programme’s theory of change, that links objectives and expected outcomes to measurable indicators, specifying baselines, targets, data sources, frequency and responsible parties.
Cost-effectiveness example: TICZA
A cost-effectiveness analysis was applied to an extended student-teacher internship model, facilitated by TICZA. The costs consisted of initial teacher training fees plus internship support costs. Comparisons were made between distance and contact study models, as well as the internship model, which places students studying at a distance learning facility in a school environment and provides these students with mentorship and wraparound support.
Costs per beneficiary were measured at different phases of the programme to account for student attrition during their study period, as well as the numbers that graduated and those who went on to work as teachers. Although the data was not sufficiently robust to yield definitive findings, the exercise was useful in demonstrating that, while the initial costs of supporting the internship model were higher, the cost-effectiveness was significantly improved when adjusted for the higher graduation and employment levels.
M&E systems example: Tiger Brands Foundation
The Tiger Brands Foundation (TBF) uses a virtual monitoring platform that displays real-time data on the number of meals provided by each school every day. The tool is app-based, features geo-tracking capability, allows for narrative and photo input, and is free for users (the Foundation covers the data costs). On-site users enter the data via their phones every morning. The system tracks stock and programme delivery and is monitored by TBF Regional Coordinators. It is managed by two dedicated TBF national office staff members.
From paper to platform: the shift in M&E practices
Data systems are integral to the M&E process. M&E requires systems to gather, analyse and report on data. There is understandably variation in levels of system sophistication, with some correlation to the size and maturity of the programmes involved. A highly sophisticated system will be integrated within company systems, pulling financial and non-financial data and reporting in accordance with budget cycles. By contrast, some companies establish monitoring criteria but leave the format, details of feedback and systems used to the implementing partner. The downside of this is that there are significant variations in the quality and consistency of feedback, often making it difficult to accurately assess how projects are performing.
It is recommended that a process be convened between the funder and implementer to agree on the theory of change and indicator framework at the outset of the partnership. This ensures expectations are aligned and enables the implementing partner to use current data and reporting processes.
It is tempting to use freely available software and existing desktop applications, particularly when internal IT departments resist new software. However, one can quickly encounter limitations in functionality or find oneself recapturing data into an alternative system. Most companies and nonprofits reported using either manual processes or Microsoft suite programmes (such as Excel or Power BI) for M&E in 2025. More than half of surveyed companies used Microsoft programmes for data collection (54%), reporting (52%) and analysis (54%). Nonprofits also used Microsoft programmes for the full range of M&E processes, but at slightly lower rates than companies, ranging from 36% to 45%.
Specialised software or app development for data gathering not only comes at a cost, but once committed, it is difficult to change if it proves to be overly complex or not fit for purpose. For smaller, bespoke systems, there is also the risk that the service provider may no longer be able to support the system. Perhaps that explains why neither companies nor nonprofits made much use of specialised systems in 2025. Usage remained low overall, though nonprofits used such systems more than companies did, particularly for data collection (26%).
The benefits of automating and embedding technology into the M&E process far outweigh the barriers. Information can be collected from beneficiaries in real time using simple tools that enhance accuracy through built-in quality controls, enabling early diagnosis and response. A simple design enables the submission of data and feedback online or via mobile phone using pre-agreed templates. Information is stored in a database that can be queried, with automated reporting and high-level dashboards that offer the option to drill down into greater detail as needed.
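A built-in quality control of the kind described above can be as simple as validating each submitted record against agreed rules before it enters the database. This is a minimal sketch; the field names, site identifier and plausibility threshold are all hypothetical:

```python
# Minimal sketch of a built-in quality control on a submitted monitoring
# record (field names and rules are hypothetical).
REQUIRED_FIELDS = {"site_id", "date", "meals_served"}

def validate(record: dict) -> list[str]:
    """Return a list of data-quality problems; an empty list means the record passes."""
    # Check that every agreed field has been captured.
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    # Flag implausible values so errors are caught at the point of entry.
    meals = record.get("meals_served")
    if isinstance(meals, int) and not 0 <= meals <= 2000:
        problems.append("meals_served out of plausible range")
    return problems

print(validate({"site_id": "SCH-014", "date": "2025-03-03", "meals_served": 350}))  # []
```

Running checks like this at the point of submission, rather than during year-end analysis, is what enables the early diagnosis and response described above.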
Data and information systems are also increasingly being used as a platform for delivery. Platforms such as those used for online schooling content or skills courses are not only effective in achieving cost-effective reach but also enable monitoring data, such as usage patterns, to be extracted and assessed as the programme is being implemented.

The value of ‘closing the loop’
In theory, setting up a monitoring and evaluation process is not complex. Yet, in practice, we see a limited return on the time and effort invested in M&E processes. Reasons for this include being over-zealous in the depth and complexity of indicators, lack of consultation with partners, insufficient consistency in applying measurement protocols, lack of clarity around processes and roles, insufficient capacity dedicated to M&E, lack of scrutiny of data quality, lack of data systems and finally, insufficient attention paid to analysing results.
Analysis of both qualitative and quantitative data is a vital part of the M&E process. The resources and effort applied to M&E are wasted if users are unable to extract findings and generate insights. There is little value in simply compiling, tabling and filing reports. Yet there is a wealth of knowledge that comes from effective M&E processes, which provides a true perspective on outcomes, particularly when tracked over time, across sites, or in comparison to similar programmes. This knowledge can support all parties in adjusting their strategies and projects to achieve better results, and can be shared to influence the practices of others.
The M&E oversight role should come with a curiosity about what is working and what is not, as well as where efficiencies are being realised. The knowledge gained can be used for further investigation, to adapt practices, or to share insights with others. This is the true value of M&E.

