The World Bank’s Strategic Impact Evaluation Fund (SIEF) supports scientifically rigorous research that measures the impact of programs and policies to improve education, health, access to quality water and sanitation, and early childhood development in low- and middle-income countries. The majority of the evaluations are randomized controlled trials (RCTs), chosen through a competitive process open to researchers worldwide.
Evidence about programs’ impacts and cost-effectiveness allows governments and others to better focus future efforts and investments. Projects evaluated include both World Bank projects and projects designed, implemented, and funded by other groups. We ensure active engagement with key stakeholders and support teams in making evidence accessible and policy relevant. Research teams can be made up of experts anywhere in the world, regardless of affiliation. Each team is headed by a World Bank impact evaluation expert, who provides fiduciary oversight and ensures that the research is relevant to World Bank dialogue and country teams.
SIEF was launched with the support of the British government’s Department for International Development (DFID). Other donors include the London-based Children’s Investment Fund Foundation (CIFF), which seeks catalytic change for children including promoting early childhood development and evidence-based solutions.
What is impact evaluation?
Knowing what works, what doesn’t, and why is essential for crafting effective human development programs. Impact evaluation is a tool for measuring a program’s effectiveness. If policymakers know which programs help kids do better in school, improve maternal and child health, or boost employment, for example, they can build on those successes to create more—and more effective—programs to help the world’s poor.
Impact evaluations provide this evidence by comparing a program’s outcomes with what would have happened without the program, often referred to as the “counterfactual.” Specifically, evaluations compare beneficiaries of the program being evaluated with a comparison group of people who share similar characteristics, such as poverty level and education, but who didn’t receive the program. This is usually done by randomly assigning the program intervention to treatment and control groups before the program is launched, and then comparing differences in outcomes once the program is implemented.
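The random-assignment logic above can be sketched in a small simulation. This is purely a hypothetical illustration: the outcome distribution (a test-score-like measure), group size, and treatment effect are all invented for the sketch, not drawn from any SIEF evaluation.

```python
import random

random.seed(42)

def simulate_rct(n=10_000, true_effect=5.0):
    """Simulate a randomized trial with a known treatment effect.

    Each person gets a baseline outcome (e.g. a test score); a coin
    flip assigns them to treatment or control, and treated people
    receive the (invented) program effect on top of their baseline.
    """
    treatment, control = [], []
    for _ in range(n):
        baseline = random.gauss(50, 10)   # hypothetical outcome scale
        if random.random() < 0.5:         # random assignment
            treatment.append(baseline + true_effect)
        else:
            control.append(baseline)
    # Because randomization balances other characteristics on average,
    # the difference in mean outcomes estimates the program's impact.
    mean_t = sum(treatment) / len(treatment)
    mean_c = sum(control) / len(control)
    return mean_t - mean_c

print(simulate_rct())  # should land close to the true effect of 5.0
```

The point of the sketch is that the estimated impact recovers the built-in effect without the researcher ever observing the counterfactual for any individual person; randomization makes the control group stand in for it.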
By isolating the impact of the program, impact evaluations provide policymakers and practitioners with valuable information for deciding whether to scale up a program, change it, or even cancel it.