- Surprising lack of consistent, reliable data on development effectiveness: Across the various sectoral interventions, we have no uniformly reliable data on the effectiveness of every dollar spent. For example, of every dollar spent on infrastructure programs in sub-Saharan Africa, how many cents are effective? Based on the same assumptions, do we have a comparable number for Southeast Asia? In other words, why don't we have more data on possible development investments and their associated costs, benefits/returns and risks?
- Failure to look at development effectiveness evidence at the planning stage: Very few development programs examine the effectiveness evidence before selecting a particular intervention. Say a sectoral intervention A in a particular region has a history of positive outcomes (due to attributable factors such as well-performing implementation agencies), whereas for another intervention B the chances of improved outcomes are foggy. Given roughly the same needs, why shouldn't we route funds to A instead of B at the planning stage? Why should we give equal preference to both based purely on need?
Our Top Ten Blog Posts by Readership in 2012
Originally published on April 3, 2012
Knowledge, or the lack of it, is often associated with the success or failure of development initiatives. For decades, communication's main role was to fill the knowledge gap between what audiences knew and what they needed to know, with the assumption that this would induce change. We now know that this is seldom the case. In the modernization paradigm, media were expected to provide needed knowledge through messages that could fill knowledge gaps, build modern attitudes, and eventually shape behaviours. After years of under-delivering on those promises, development managers and decision-makers are increasingly realizing that sound technical solutions and information dissemination are not enough to get audiences to adopt innovations.
Guest post from ace evaluator Dr Karl Hughes (right, in the field. Literally.)
Just over a year ago now, I wrote a blog featured on FP2P – Can we demonstrate effectiveness without bankrupting our NGO and/or becoming a randomista? – about Oxfam’s attempt to up its game in understanding and demonstrating its effectiveness. Here, I outlined our ambitious plan of ‘randomly selecting and then evaluating, using relatively rigorous methods by NGO standards, 40-ish mature interventions in various thematic areas’. We have dubbed these ‘effectiveness reviews’. Given that most NGOs are currently grappling with how to credibly demonstrate their effectiveness, our ‘global experiment’ has grabbed the attention of some eminent bloggers (see William Savedoff’s post for a recent example). Now I’m back with an update.
Let us go back to the main theme of this blog: why sound technical solutions devised by top-ranking technical experts and supported by plenty of resources from the richest countries have failed to deliver the expected results. A review of past experiences identified a number of causes for the failures of past approaches, but most of them appear to be traceable to one directly linked to communication/dialogue, or the lack thereof: the limited involvement of the so-called 'beneficiaries' in the decisions and the design of activities that concerned their lives. To sum up, the lack of results in development initiatives, with people failing to adopt the prescribed behaviours, was largely due to the neglect of the voices of those who were expected to adopt and live with such innovations and technical solutions.
Recently I was invited to deliver the XI Raushni Deshpande Oration at the Lady Irwin College in New Delhi, India. This blog is a summary of, and a reflection on, that presentation. As can be inferred from the title, the focus is on why so many development initiatives have failed in the past and why many are still failing in the present. Why, after all these years, after all the money poured in, all the construction carried out and all the resources dedicated to the issue, are latrines still not being used in many places? Or used, but not for the intended purpose? And why are bed nets aimed at preventing malaria not adopted even when they are easily available? Many more 'why's' such as these could be added to the list.
The newly launched IEG Annual Review of Development Effectiveness 2009 attests to a significant increase in the World Bank's development effectiveness from financial year 2007 to 2008. After a somewhat disappointing result last year, 81% of the development projects that closed in fiscal 2008 were rated satisfactory with regard to the extent to which the operation's major relevant objectives were achieved efficiently.
One crux remains: the measurement of impact. Monitoring and evaluation components in development projects are far less frequent than IEG would wish: two thirds of the projects in 2008 had marginal or negligible M&E components. Isabel Guerrero, World Bank Vice President of the South Asia Region, listed several reasons at the launch of the IEG report this week: the lack of integrative indicators, the Bank's tradition of measuring outputs instead of outcomes, the lack of baseline assessments in most projects, and reluctance on the clients' side to implement M&E in projects.
“Effectiveness in aid is also effectiveness in governance,” said Mark Nelson, senior operations officer at the World Bank Institute (WBI), during a recent panel discussion on the progress-to-date of the