March 20, 2013
Niels Dabelstein discusses the major challenges facing evaluators today, the ingredients of a high quality humanitarian evaluation, improving learning and accountability, and the future of humanitarian aid evaluations.
Mr. Dabelstein is the Former Head of Evaluations at DANIDA.
What are the major challenges facing evaluators today?
The main challenge, in one sense, is the capacity of the evaluation community to cope with the demand for evaluations. It's really a challenge to provide high quality evaluations in the face of increasing demand, particularly in view of the usually rather limited resources made available for evaluation. Sometimes people think you can change the world for $100,000, but you can't.
What are the ingredients of a high quality humanitarian evaluation?
There are many elements of a good evaluation. First of all, thorough field research, which you can't do from home base. Not relying on just one or two sources of evidence, but really triangulating. Focus on utilization: the evaluation process itself is a learning exercise, both for the evaluator and for those being evaluated, and the evaluator should focus on utilization from the very beginning, from the design of the evaluation. How do you involve stakeholders in a way that they learn throughout the evaluation process, rather than just getting a report which they can put on the shelf afterwards? Provide clear conclusions. A lot of reports have what I call "wool in the mouth": they don't speak clear language, and they take so many precautions that in the end it's not quite clear what they are actually concluding; then the recommendations are convoluted or unclear. And since the evaluation report is only read by a few people, you have to communicate the results of the evaluation in as many different ways as you can find.
How can we improve learning and accountability?
Evaluators have to communicate the results and lessons to field staff, to operational staff at headquarters, and not least to policy makers, because a lot of the constraints on humanitarian agencies are political constraints. Very often, it's the political environment that sets the limits on what humanitarian agencies can do, so the evaluation must also be able to speak to policy makers.
How do you see the future of humanitarian aid evaluations?
The increased number and complexity of humanitarian disasters, first of all, and then the responses to them, require us, I think, to be more comprehensive in our evaluations. Single-project, single-intervention evaluations give us very little knowledge. You have to see the totality of interventions in order to see whether it works, what works, and what doesn't work. So that is one of the challenges for the humanitarian sector: to get its act together and commission system-wide evaluations, so that you evaluate the event, not the individual interventions. I think that is the future of evaluation.