What does it mean to do policy-relevant research and evaluation? How does it differ from policy-adjacent research and evaluation? Heather Lanthorn explores these questions and offers some food for thought on intention and decision-making.
This post is really a conversation with myself, which I started here, but I would be happy if everyone conversed on it a bit more: what does it mean to do research that is ‘policy relevant’? From my vantage point in impact evaluation and applied political-economy and stakeholder analyses, ‘policy relevant’ is a glossy label that researchers or organizations can apply to their own work at their own discretion. This is confusing, slightly unsettling, and probably takes some of the gloss off the label.
The main thrust of the discussion is this: we (researchers, donors, folks who have generally bought into the goal of evidence- and evaluation-informed decision-making) should be clearer (and more humble) about what is meant by ‘policy relevant’ research and evaluation. I don’t have an answer to this, but I try to lay out some of the key facets below.
Overall, we need more thought and clarity – as well as humility – around what it means to be doing policy-relevant work. As a start, we may try to distinguish work that is ‘policy adjacent’ (done on a policy) from work that is ‘decision-relevant’ or ‘policymaker-relevant’ (done with the explicit, ex ante purpose of informing a policy or practice decision, and therefore with an intent to be actionable).
I believe the distinction I am trying to draw echoes what Tom Pepinsky wrestled with when he blogged that it was the “murky and quirky” questions and research (a delightful turn of phrase that Tom borrowed from Don Emmerson) “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own idiosyncratic settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change) when compared to a grander, paradigmatic question.
Throughout, my claim is not that one type of work is more important or that one type will always inform better decision-making. I am, however, asking that, as “policy-relevant” becomes an increasingly popular buzzword, we pause and think about what it means.
Evidence-informed policymaking is gaining importance in several African countries. Networks of researchers and policymakers in Malawi, Uganda, Cameroon, South Africa, Kenya, Ghana, Benin and Zimbabwe are working assiduously to ensure credible evidence reaches government officials in time and are also building the capacity of policymakers to use the evidence effectively. The Africa Evidence Network (AEN) is one such body working with governments in South Africa and Malawi. It held its first colloquium in November 2014 in Johannesburg.
Africa Evidence Network, the beginning
A network of over 300 policymakers, researchers and practitioners, AEN is now emerging as a regional body in its own right. The network began in December 2012 with a meeting of 20 African representatives at 3ie’s Dhaka Colloquium of Systematic Reviews in International Development.
- First, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
- Second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report, “What is the evidence on evidence-informed policy-making”, as well as here).
- Third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
- Fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into the debates and decisions. This even includes bringing policy champions (say, bureaucrats) on as research PIs.
There seems to be a sleight of hand at work in the above formulation, and it is somewhat worrying in terms of equipoise and the possible use of the range of results that can emerge from an impact evaluation study. Said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at the start of the evaluation.
How can we better design ICT programs for development and evaluate their impact on improving people’s well-being? A new approach, the Alternative Evaluation Framework (AEF), takes into account multiple dimensions of people’s economic, social and political lives rather than simply focusing on access, expenditure and infrastructure of ICT tools. This new approach is presented in a How-To Note, Valuing Information: A Framework for Evaluating the Impact of ICT Programs, authored by Bjorn-Soren Gigler, a Senior Governance Specialist at the World Bank Institute’s Innovation Practice.
Guest post from ace evaluator Dr Karl Hughes (right, in the field. Literally.)
Just over a year ago now, I wrote a blog post featured on FP2P – Can we demonstrate effectiveness without bankrupting our NGO and/or becoming a randomista? – about Oxfam’s attempt to up its game in understanding and demonstrating its effectiveness. There, I outlined our ambitious plan of ‘randomly selecting and then evaluating, using relatively rigorous methods by NGO standards, 40-ish mature interventions in various thematic areas’. We have dubbed these ‘effectiveness reviews’. Given that most NGOs are currently grappling with how to credibly demonstrate their effectiveness, our ‘global experiment’ has grabbed the attention of some eminent bloggers (see William Savedoff’s post for a recent example). Now I’m back with an update.