Tuesday, March 30, 2010

Unhealthy Evaluation Practices!?

Winston Churchill said, "Criticism may not be agreeable, but it is necessary. It fulfils the same function as pain in the human body. It calls attention to an unhealthy state of things." (No, I did not quote this from hearing him.) The following article calls attention to some unhealthy things in the (non)use of evaluation within International NGOs (INGOs), especially in trying to convince the public that they are accomplishing their mission statements through effective strategies and interventions. The article is titled "Measuring Performance versus Impact: Evaluation Practices and their Implications on Governance and Accountability of Humanitarian NGOs," by Claude Bruderlein and MaryAnn Dakkak (June 30, 2009, SSRN).

The authors say that their study "confirms also a growing frustration among humanitarian professionals themselves that, while much is measured and evaluated, it is rarely the actual impact of their work. Instead it is apparent that evaluation as it mostly takes place today reflects primarily the needs of donors; is irrelevant for serious organizational learning and programming efforts; adds considerably to the burden of local staff and partners; and does little to shed light on the roles, influence and impact of INGOs as central actors in humanitarian action and protection."

One quote in the article, from a high-ranking person in an INGO: "Evaluation as it is used today is the worst way to learn: It is done post-program (often after the new program has started), it is unhelpful, doesn't address what produces good programming, focuses on attribution and doesn't delve into the ambiguities of relationships; they are largely unused and a waste of resources and time."

The main criticisms of evaluations in INGOs (the "pains" Churchill mentioned) are:
  1. While organizations want evaluations for moral reasons, they only do what is actually required by donors.
  2. Evaluations are often not useful.
  3. Evaluations are often not used.
  4. New evaluation materials will help little as existing ones are not enforced.
  5. Evaluation criteria are often inappropriate.
  6. Impact evaluation, the one truly meaningful approach, is almost never done and is still at the beginning of its development.
In order to treat these unhealthy evaluation practices, the authors recommend:
  1. Ensure that evaluations have leverage on programming, including through the direct involvement of evaluators, e.g. by scoring INGOs based on their resolution of identified problems and their integration of evaluator recommendations. Incidentally, these measures are also likely to have implications on the overall quality of evaluations.
  2. Clarify and separate competing organizational accountabilities, by effectively dividing INGO operations into for-profit and non-profit activities, or by partnering with outside for-profit entities. As they exist, most INGOs examined adequately fulfill neither their internal governance accountability nor their external business accountability.
  3. Develop and invest in dedicated evaluation research capacity, in-house or through partnerships with academic institutions that provide a rigorous basis and feedback mechanism to INGOs, their donors and the general public.
  4. Increase collaboration among INGOs and donors, based on existing efforts to consolidate, integrate and simplify evaluation methodologies in the interest of less time-consuming yet more meaningful and outcome-focused approaches.
  5. Develop a common approach towards donors and the public on what good humanitarian practice requires, in terms of minimum organizational overheads for rigorous and professional standards of evaluation, programming and organizational learning.
  6. Create a consortium of advocacy organizations, similar to those that exist in other areas, as an effective way of creating space for dialogue and inter-agency collaboration towards the definition of shared standards in advocacy.
  7. Share evaluations and learn collaboratively, in particular from failures and problems presently not included (or well hidden) in evaluation reports – primarily by fostering collective approaches for open evaluation dialogue.
  8. Experiment with a system of peer-reviewed evaluations, initially internal and confidential to each organization, allowing for rigorous and open reviews of evaluation methods – similar to the methods applied by ALNAP as an effective collaborative of evaluators, but with more effective ways to actually enforce and ensure good practice.
  9. Agree on standardized quantitative and qualitative metrics of impact that would allow for a sufficiently practical and pertinent measurement of impact – as part and priority focus of an improved dialogue, even if it involved superseding existing collaboration successes in consolidating agency methods and indicators.
  10. Ensure that timelines and resources for evaluations are flexible and sufficient, including to undertake meaningful qualitative research of impact over the long-term and to ensure that evaluations on advocacy and policy can be adjusted to affect relevant processes.
  11. Preserve flexibility and check for unintended consequences, especially in advocacy and policy programming to take into account the dynamics of relevant political contexts.
  12. Agree on a simple but shared evaluation language, integrated into all stages of evaluation and programming that allows for the effective involvement of professionals and beneficiaries at and across all levels of humanitarian assistance.
Of all the criticisms, from my experience I agree that organizational learning from evaluation findings is quite rare. All too often, we (myself included) are too busy searching for the next round of funding to apply evaluation findings to current or future programs and projects; most evaluations focus on whether results were achieved but rarely assess the "operational" aspects of how those results were (or were not) achieved; and unintended consequences are rarely investigated.

1 comment:

  1. Thank you for referring to our work. It truly is a frustrating world to work in!