FAQ

How does the HRI measure the adherence of donors to the GHD Principles?

The HRI uses both quantitative and qualitative data to build 35 indicators, organised into five pillars, which capture the essence of the Principles of Good Humanitarian Donorship. The pillars correspond to five basic questions:

Pillar 1: Are donor responses based on the needs of affected populations and not subordinated to political, strategic or other interests?

Pillar 2: Do donors support strengthening local capacity, prevention of future crises and long-term recovery?

Pillar 3: Do donor policies and practices effectively support the work of humanitarian organisations?

Pillar 4: Do donors respect international humanitarian law and actively promote humanitarian access to enable protection of civilians affected by crises?

Pillar 5: Do donors contribute to accountability and learning in humanitarian action?

The qualitative data comes from field research in 14 crises (Afghanistan, the Central African Republic, Colombia, the Democratic Republic of the Congo, Haiti, Indonesia, the occupied Palestinian territories, Pakistan, the Philippines, Somalia, Sri Lanka, Sudan, Yemen and Zimbabwe), where DARA teams interviewed senior representatives of humanitarian organisations working in these crises about their experiences with the donor governments that fund their programmes. Teams also conducted a survey questionnaire on donor practice, collecting nearly 2,000 responses (1,384 about the OECD/DAC donors ranked in the HRI and 565 about other donors and funding sources).

This valuable information is complemented by extensive quantitative data from several published sources, such as the OECD/DAC, the World Bank, the UN, and information published by governments.

What trends has the index identified?

Over the past four years, the HRI shows that, overall, donors continue to do reasonably well in Pillar 1 (Responding to needs). However, there is a significant range between the highest- and lowest-scoring donors, reflecting differences in the way donors understand and apply core humanitarian principles and GHD concepts of neutrality, impartiality and independence of aid. Over the same period, donors have uniformly done less well in Pillar 2 (Prevention, risk reduction and recovery), showing that this is an area all donors need to prioritise. Pillar 3 (Working with humanitarian partners) shows a high degree of variance in donors’ scores, reflecting different approaches to engaging with and supporting humanitarian actors, and pointing to opportunities for significant improvement in the way many donors interact with the international humanitarian system. Pillar 4 (Protection and international law) shows reasonably consistent donor behaviour, with a smaller range of scores and generally the second-highest average scores of the five pillars. However, there are still significant differences among donors on core indicators related to compliance with the international laws and conventions that support humanitarian action, indicating room for improvement. Finally, Pillar 5 (Learning and accountability) shows both the highest variance in donor scores and the lowest average scores, indicating vast differences in the way donors are performing in this area and the reality that, for several donors, this is simply not a priority.

How have donors responded to the HRI?

Some donors have expressed concerns about the HRI rankings and aspects of the methodology. DARA has taken these concerns into consideration and has continually refined and improved the methodology. DARA is committed to working with donors in a constructive dialogue on how to use the HRI as a tool for identifying and promoting good practice, and has engaged with several donor governments to discuss specific measures to improve their performance. More and more staff of donor agencies have told DARA that the HRI provides them with information that can be used for internal lobbying within their agencies and with their governments to encourage and apply good practice. This suggests that the HRI can be compatible with donors’ own efforts to measure and improve their performance.

Is there any evidence that the HRI is being used to drive change in donors’ policy and practices?

Several donor agencies and political representatives have used the HRI as background information to prepare for parliamentary debates on humanitarian assistance. For example, the Norwegian Auditor General’s office cited the HRI several times in its review of the country’s humanitarian assistance. Denmark and Italy both acknowledged the HRI findings on their websites. Other donor agencies have used the HRI to help identify gaps in their organisational set-up and procedures. Many donor representatives have also requested the crisis-level analysis that DARA produces for the HRI, to see how their response compares with overall donor responses and where they can improve. In addition, DARA has provided briefings to dozens of humanitarian organisations at the international and national level to discuss the HRI findings and how they can be used to leverage changes in donor policy and practice. The HRI data and findings have been cited and used by several organisations, research institutions and policy think tanks.

While not directly referencing the HRI, some governments have acknowledged the importance of the themes and systemic issues identified through the HRI research over the past four years and have now incorporated them as priorities in their humanitarian assistance. For instance, several donors are now emphasising the need to improve the quality and use of needs assessments, reviewing funding and engagement with humanitarian NGOs, improving pooled funds like CERF, reinforcing evidence-based approaches to assessing the quality and effectiveness of aid, addressing issues of protection, etc. – all themes explored in past editions of the HRI.

How is the HRI financed?

The HRI is privately funded and receives no funding from donor governments, in order to protect the independence and integrity of the research process. The 2010 report was financed in part by generous contributions from the Avina Foundation, the Dutch Postcode Lottery, and private philanthropists. Many different humanitarian agencies also support the HRI by providing in-kind goods, services and logistics support during the HRI missions, and by helping to promote and disseminate the findings to a wider audience.

Has the methodology used in the HRI 2009 changed compared with previous years?

DARA takes every edition of the HRI as an opportunity to continue to refine and improve the methodology based on feedback and lessons learnt from the previous year. This year’s changes include:

• The number of indicators has been consolidated from 60 to 35 in order to simplify the presentation of results and focus more clearly on key aspects of donor practice.

• Indicators have been distributed evenly among the HRI’s five pillars. Within pillars, qualitative and quantitative indicators each account for 50 percent of the overall pillar score. This helps to ensure that donors’ pillar scores reflect a more balanced view of their performance.

• The survey design has also been revised, and a comprehensive statistical analysis of responses was conducted to adjust for any social or cultural factors that could affect the pattern of responses. This helps to reduce the effect of possible biases that could favour or penalise particular donors, and to convert the survey responses into more comparable donor scores.

• Statistical calculations and optimal values were revised and improved, and all scores have been harmonized to a 0–10 scale for better presentation and comparability among indicators, pillars and the overall final scores.

• Sophisticated multidimensional statistical techniques were used to test and validate the data and indicator scores, and to allow for a deeper analysis of the interrelations among donors’ performance and the different principles that make up the GHD. This analysis was used to place the donors into three groups, based on the patterns of their scores.

• Finally, a new quantitative indicator has been added to Pillar 2 (Prevention, risk reduction and recovery) as a proxy measure for donor governments’ efforts to reduce climate-related vulnerability. This is in line with DARA’s commitment to track and measure the human consequences of climate change.
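To make the scoring changes above concrete, the following is a minimal sketch of how indicator harmonisation and pillar weighting could work. It assumes min-max scaling as the 0–10 harmonisation method and an equal 50/50 split between qualitative and quantitative indicators within a pillar; the HRI’s actual formulas are not detailed here, so the function names and the scaling choice are illustrative assumptions only.

```python
def scale_0_10(values):
    """Min-max scale a list of raw indicator values onto a 0-10 scale.

    Illustrative assumption: the HRI report does not specify the exact
    harmonisation formula, so min-max scaling is used here as a stand-in.
    """
    lo, hi = min(values), max(values)
    if hi == lo:                      # all donors equal: avoid division by zero
        return [5.0 for _ in values]
    return [10.0 * (v - lo) / (hi - lo) for v in values]


def pillar_score(qualitative, quantitative):
    """Combine already-harmonised indicator scores into one pillar score,
    weighting the qualitative and quantitative halves 50/50 each."""
    qual_avg = sum(qualitative) / len(qualitative)
    quant_avg = sum(quantitative) / len(quantitative)
    return 0.5 * qual_avg + 0.5 * quant_avg


# Hypothetical donor with two qualitative and two quantitative indicators,
# already on the 0-10 scale:
score = pillar_score([6.0, 8.0], [4.0, 5.0])
print(round(score, 2))  # 0.5 * 7.0 + 0.5 * 4.5 = 5.75
```

The 50/50 weighting means a donor cannot compensate for poor field-survey results with strong statistics alone (or vice versa), which is the "more balanced view of performance" the revised methodology aims for.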

DARA continues to work with other stakeholders to build a common approach to defining standard data sources and performance indicators, and has initiated a wide consultation process to understand concepts of good donor practice beyond what is stated in the GHD Principles, in order to incorporate them into the HRI’s future analysis.

Given these changes, can this year’s findings be compared with previous years?

Yes and no. The overall performance of donors, the main findings and key messages can be compared over the years, and the basic foundation of the HRI remains the same, as do many of the indicators. However, we are not yet at the stage where we can compare individual donor performance from one year to the next, although it is our intention to conduct a trend analysis after five years of the HRI. It is also important to note that the ranking is based on data on donors’ humanitarian assistance in a specific year, compared against the other GHD donors; a change in a country’s ranking may therefore be partially influenced by changes in another country’s performance, or by factors specific to the crises assessed that year.

How reliable are the HRI results?

One of the reasons DARA developed the HRI was to provide the humanitarian sector with an empirical evidence base for assessing donor performance, rather than informal and anecdotal information on donor governments’ performance. The HRI methodology and process represent the “state of the art” in performance measurement in the humanitarian sector. The indicators capturing the GHD Principles and the research methodology used to construct the HRI were developed in wide consultation with experts from the humanitarian sector and other sectors, with the aim of ensuring transparency, consistency and rigor in the approach. The HRI attempts to ensure the findings are as accurate and reliable as possible through the following measures:

• a representative sample of humanitarian crises (by geography, type of crisis and funding levels)

• use of a standardized survey design targeting informed actors

• use of internationally comparable quantitative data from published sources

• the application of standard statistical analysis

• regular peer review

Representative crises:

• The crises studied each year are a representative sample of the different types of humanitarian crises – disasters, conflicts and complex emergencies – and of the response by the international community. The HRI also attempts to ensure wide geographic coverage in order to assess how well donors responded in different situations. The large number of crises studied each year (the largest exercise of its kind in the sector) ensures a solid basis from which to generate analysis of how donors and humanitarian agencies are doing in general, as well as specific information from each crisis. This year the crises studied received over 60 percent of the funding mobilized to respond to crises in 2009, and over 50 percent of OECD/DAC humanitarian funding as recorded by the Financial Tracking Service.

Standardized survey design:

• The same standard survey questionnaire is used with all the people interviewed in all crises studied, and the results are analyzed using the same methods. This means that the results from the survey in one crisis are comparable with those from other crises. The standard interview questions are supplemented with open-ended questions and interviews with other key actors, which helps to validate the consistency of responses. The survey is also administered in the working language of the crisis response, which avoids language bias and possible misinterpretation of the questions.

Survey respondents:

• The field survey deliberately targets representatives of all the different humanitarian organizations engaged in the response (UN, Red Cross Red Crescent, NGOs), and responses are gathered only from those with a working relationship with their funders and donors. This ensures that responses are based on the person’s actual knowledge of the donor’s practices, not conjecture or unfounded information. The high number of organizations interviewed in each crisis context (up to 90 percent of the organizations engaged in the response) allows for a more complete picture of perceptions of how donors are performing.

High survey response rate:

• The very high number of responses (nearly 2,000 this year) exceeds the number required to conduct adequate statistical analysis of the data. This also means the results are more likely to represent the actual range of views of the majority of respondents than the opinions of just a few individuals.

Application of standard statistical analysis:

• The entire set of data (both quantitative and qualitative) is analysed using standard statistical methods common in academic research, such as variance and multivariate analysis. This means the HRI data for each indicator is tested and reviewed before the final results and scores are generated. This provides a level of assurance that the results are based on accurate and reliable information and, accordingly, that the HRI is able to draw valid conclusions about how well donors are performing individually and collectively with respect to the GHD Principles.

Peer review:

• The HRI process and results are shared with a Peer Review Committee and other stakeholders to ensure that the findings are based on objective analysis and interpretation of the data and indicators.