Ending extreme poverty and building shared prosperity requires evidence to identify the programs and policies that will have a real impact. The World Bank’s Strategic Impact Evaluation Fund (SIEF) makes this happen by investing in impact evaluations of innovative human development programs in low- and middle-income countries, and by working directly with policymakers and other key stakeholders to use the results to build better policies and programs that successfully improve people’s lives.
SIEF is a multi-donor trust fund created in 2012 with the support of the British government’s Department for International Development (DFID). It currently also receives support from the London-based Children’s Investment Fund Foundation (CIFF), which seeks catalytic change for children, including promoting early childhood development and evidence-based solutions. SIEF focuses on four human development areas that are crucial to improving the lives of the world’s poorest and most vulnerable: Early Childhood Development and Nutrition, Basic Education, Health Systems and Service Delivery, and Water Supply, Sanitation, and Hygiene.
SIEF partners with leading impact evaluation researchers and those who develop and implement innovative programs — both within governments and within non-governmental and other organizations. In the process, we ensure active engagement with key stakeholders and we support teams to make evidence accessible and policy relevant.
What is impact evaluation?
Knowing what works, what doesn’t, and why is essential for crafting effective human development programs. Impact evaluation is a tool for measuring a program’s effectiveness. If policymakers know which programs help children do better in school, improve maternal and child health, or boost employment, for example, they can build on those successes to create more—and more effective—programs to help the world’s poor.
Impact evaluations provide this evidence by comparing a program’s outcomes with what would have happened without the program, often referred to as the “counterfactual.” Specifically, evaluations compare beneficiaries of the program being evaluated with a comparison group of people who share the same characteristics, such as poverty level and education, but who did not receive the program. This is usually done by randomly assigning people to treatment and control groups before the program is launched, and then comparing differences in outcomes once the program has been implemented.
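The logic of random assignment described above can be illustrated with a small simulation. This is a hypothetical sketch, not a SIEF evaluation: the program name, sample size, and the assumed +5-point "true effect" are all invented for illustration. Because assignment is random, the treatment and control groups are comparable on average, so the difference in mean outcomes estimates the program's impact.

```python
import random
import statistics

random.seed(0)

# Hypothetical example: evaluating a tutoring program whose true
# effect is assumed to be +5 test-score points (an invented figure).
TRUE_EFFECT = 5.0

# Baseline test scores for 1,000 hypothetical participants.
baselines = [random.gauss(60, 10) for _ in range(1000)]

# Randomly assign each person to treatment or control BEFORE the
# program starts, so the two groups share the same characteristics
# on average.
treatment, control = [], []
for baseline in baselines:
    if random.random() < 0.5:
        # Treated outcome = baseline + program effect + noise.
        treatment.append(baseline + TRUE_EFFECT + random.gauss(0, 2))
    else:
        # Control outcome = baseline + noise; this group stands in
        # for the counterfactual (what would have happened anyway).
        control.append(baseline + random.gauss(0, 2))

# The difference in mean outcomes estimates the program's impact.
estimated_impact = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated impact: {estimated_impact:.2f} points "
      f"(true effect: {TRUE_EFFECT})")
```

With a sample this size, the estimated impact lands close to the assumed true effect; without the randomly assigned control group, the estimate would instead be confounded by whatever differences already existed between participants and non-participants.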
By isolating the impact of the program, impact evaluations give policymakers and practitioners valuable information for deciding whether to scale up a program, change it, or even cancel it.