The authors say that their study "confirms also a growing frustration among humanitarian professionals themselves that, while much is measured and evaluated, it is rarely the actual impact of their work. Instead it is apparent that evaluation as it mostly takes place today reflects primarily the needs of donors; is irrelevant for serious organizational learning and programming efforts; adds considerably to the burden of local staff and partners; and does little to shed light on the roles, influence and impact of INGOs as central actors in humanitarian action and protection."
One quote in the article, from a high-ranking person in an INGO, reads: "Evaluation as it is used today is the worst way to learn: It is done post-program (often after the new program has started), it is unhelpful, doesn’t address what produces good programming, focuses on attribution and doesn’t delve into the ambiguities of relationships; They are largely unused and a waste of resources and time."
The main criticisms of evaluations in INGOs (the "pains" Churchill mentioned) are:
- While organizations want evaluations for moral reasons, they only do what is actually required by donors.
- Evaluations are often not useful.
- Evaluations are often not used.
- New evaluation materials will help little as existing ones are not enforced.
- Evaluation criteria are often inappropriate.
- Impact evaluation, the one really meaningful approach, is almost never done and is still only at the beginning of its development.
The corresponding recommendations are:
- Ensure that evaluations have leverage on programming, including through the direct involvement of evaluators, e.g. by scoring INGOs based on their resolution of identified problems and their integration of evaluator recommendations. Incidentally, these measures are also likely to have implications for the overall quality of evaluations.
- Clarify and separate competing organizational accountabilities, by effectively dividing INGO operations into for-profit and non-profit activities, or by partnering with outside for-profit entities. As they exist, most INGOs examined adequately fulfill neither their internal governance accountability nor their external business accountability.
- Develop and invest in dedicated evaluation research capacity, in-house or through partnerships with academic institutions that provide a rigorous basis and feedback mechanism to INGOs, their donors and the general public.
- Increase collaboration among INGOs and donors, based on existing efforts to consolidate, integrate and simplify evaluation methodologies in the interest of less time-consuming yet more meaningful and outcome-focused approaches.
- Develop a common approach towards donors and the public on what good humanitarian practice requires, in terms of minimum organizational overheads for rigorous and professional standards of evaluation, programming and organizational learning.
- Create a consortium of advocacy organizations, similar to those that exist in other areas, as an effective way of creating space for dialogue and inter-agency collaboration towards the definition of shared standards in advocacy.
- Share evaluations and learn collaboratively, in particular from failures and problems presently not included (or well hidden) in evaluation reports – primarily by fostering collective approaches for open evaluation dialogue.
- Experiment with a system of peer-reviewed evaluations, initially internal and confidential to each organization, allowing for rigorous and open reviews of evaluation methods – similar to the methods applied by ALNAP as an effective collaborative of evaluators, but with more effective ways to actually enforce and ensure good practice.
- Agree on standardized quantitative and qualitative metrics of impact that would allow for a sufficiently practical and pertinent measurement of impact – as part and priority focus of an improved dialogue, even if it involved superseding existing collaboration successes in consolidating agency methods and indicators.
- Ensure that timelines and resources for evaluations are flexible and sufficient, including to undertake meaningful qualitative research of impact over the long term and to ensure that evaluations on advocacy and policy can be adjusted to affect relevant processes.
- Preserve flexibility and check for unintended consequences, especially in advocacy and policy programming to take into account the dynamics of relevant political contexts.
- Agree on a simple but shared evaluation language, integrated into all stages of evaluation and programming, that allows for the effective involvement of professionals and beneficiaries at and across all levels of humanitarian assistance.
Of all the criticisms, my own experience leads me to agree that organizational learning from evaluation findings is quite rare. All too often we (myself included) are too busy searching for the next round of funding to apply evaluation findings to current or future programs and projects; most evaluations focus on achieving results but rarely assess the "operational" aspects of how those results were (or were not) achieved; and unintended consequences are rarely investigated.
Thank you for referring to our work. It truly is a frustrating world to work in!