How does the HRI work?

The HRI process starts with field teams visiting 9-13 crisis-affected countries to interview senior representatives of the humanitarian organizations working there. These include the majority of the operational response agencies that receive donor government funding for the crisis, as well as government officials, local authorities and civil society organizations. For last year’s edition, over 2,000 responses were gathered to a survey questionnaire asking respondents for their opinions and perceptions – based on their direct experience liaising with the donors who support their work – of how well donors are applying good practice. The results of the field research are then complemented by quantitative data on government donor funding from sources such as the UN, the World Bank and the Red Cross/Red Crescent, and by other indicators that attempt to assess the quality of each donor’s response.

Once the relevant data is collected, the HRI assesses and benchmarks donors against 35 indicators aligned with the main concepts contained in the Good Humanitarian Donorship Principles. The indicators are organized into the five pillars of donor practice. Each indicator has a qualitative component derived from field survey responses and a quantitative component based on publicly available data, equally weighted within the pillar to ensure a fair and objective overview of donor government performance. The scores for each indicator and pillar are used to generate a comparative overall ranking of the OECD/DAC donors. This allows governments to benchmark their humanitarian assistance against peers and to use the analysis to work with their stakeholders to improve it.
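The weighting and ranking scheme described above can be sketched as follows. This is a minimal illustration, not the HRI's actual computation: the donor names, pillar scores and 0-10 scale are all hypothetical, and only the equal weighting of qualitative and quantitative components and the averaging into an overall ranking reflect the text.

```python
# Hypothetical sketch of the HRI scoring scheme: each pillar combines a
# qualitative (survey) and a quantitative (public data) component with
# equal weight, and pillar scores average into an overall ranking.
# All donor names and numbers below are illustrative, not real HRI data.

def pillar_score(qual, quant):
    """Equal weighting of the qualitative and quantitative components."""
    return 0.5 * qual + 0.5 * quant

def overall_score(pillars):
    """Overall score as the mean of the pillar scores."""
    return sum(pillars) / len(pillars)

donors = {
    # donor: [(qualitative, quantitative) per pillar], scores on a 0-10 scale
    "Donor A": [(6.0, 7.0), (5.5, 6.5), (7.0, 6.0), (6.5, 7.5), (5.0, 6.0)],
    "Donor B": [(7.0, 6.0), (6.0, 7.0), (6.5, 6.5), (7.5, 6.5), (6.0, 7.0)],
}

overall = {
    name: overall_score([pillar_score(q, t) for q, t in components])
    for name, components in donors.items()
}

# Comparative ranking: highest overall score first
ranking = sorted(overall, key=overall.get, reverse=True)
```

With these made-up inputs, Donor B edges ahead of Donor A; in the real Index the same aggregation would run over all 35 indicators across the five pillars.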

An innovation in the 2010 Index is a multi-dimensional analysis that classifies and groups donors according to patterns of similarity and difference in their performance. While the ranking provides a useful synthesis of donors’ overall performance, there is a risk that the results are over-simplified or misinterpreted, and that the relationship between individual indicators and overall donor practice is lost. The advantage of this new approach is that it analyses donors more holistically. It can also offer more detail on a donor’s strengths and areas for improvement relative to its peers, which in turn may help decision makers refine and improve their humanitarian strategies.
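Grouping donors by patterns of similarity, as the multi-dimensional analysis does, can be illustrated with a simple distance-based clustering. This is a hypothetical sketch: the profiles, the Euclidean distance measure and the threshold are all assumptions for illustration, not the Index's actual method or data.

```python
# Illustrative grouping of donors by similarity of their pillar-score
# profiles. Profiles, distance measure and threshold are hypothetical,
# not real HRI results or methodology.
import math

profiles = {
    # donor: five pillar scores (0-10 scale, made up for illustration)
    "Donor A": [6.5, 6.0, 6.5, 7.0, 5.5],
    "Donor B": [6.5, 6.5, 6.5, 7.0, 6.5],
    "Donor C": [3.0, 4.0, 3.5, 4.5, 3.0],
}

def distance(p, q):
    """Euclidean distance between two pillar-score profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def group(profiles, threshold=2.0):
    """Greedy grouping: a donor joins the first group all of whose members
    lie within the threshold distance, otherwise it starts a new group."""
    groups = []
    for name, profile in profiles.items():
        for g in groups:
            if all(distance(profile, profiles[m]) <= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Here Donors A and B, whose profiles are close, fall into one group while Donor C stands apart, mirroring how the analysis reveals peers with similar strengths and weaknesses rather than a single ordered list.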