Nuts & Bolts: Closing the Circle for Results-Based Management in Social Policy
The term “results-based management” has become increasingly popular in social policy circles in recent years. The idea is that social policy decisions will be guided by information on the extent to which different programs improve the well-being of the intended beneficiaries.
The figure below illustrates a results-based decision-making cycle: it starts with diagnosing the problem, uses the diagnosis to set policy targets and objectives, and then designs specific programs or policies to accomplish them. Once financing is secured, implementation can begin, including monitoring mechanisms to ensure that the program is executed according to plan.
When the program is complete, it is important to measure outcomes and evaluate whether the original objectives have been achieved. The process becomes a results-based cycle when measurement and evaluation results are continuously and systematically fed back into policy improvement.
Over the last decade, the availability of evidence on what works in social policy has expanded significantly. However, for this evidence to play its expected role as a key input in results-based management systems, it has to be used systematically. This is one of the challenges faced by Latin American and Caribbean countries. Although the region is often viewed as having relatively advanced monitoring and evaluation systems compared to other developing countries, in fact fewer than half of the region’s countries have adopted all of the elements of the results-based management cycle.
Among the underlying reasons for the slow uptake in the public sector, two stand out. The first is a set of political, technical, bureaucratic, and operational factors that discourage the systematic internalization of evidence. The second relates to how the evaluation function is organized within governments, where each arrangement carries a characteristic risk: a tendency to highlight positive outcomes under centralized and sectoral models; tensions among academic rigor, practical use, and timeliness when evaluation and implementation units belong to the same sector; and difficulty influencing program managers in other agencies under an independent model.
The note discusses the advantages and limitations of each of the three models for organizing the evaluation function (centralized, sectoral, and independent) and highlights the examples of South Africa, Peru, and Mexico to show that progress in evaluation use is possible under any of the three arrangements.
Read the full note here.