Distance learning refers to students who learn from a teacher without being physically present in a classroom; they are some distance away and connect with the teacher and lesson via technology, such as an internet browser or other software. Distance surveying works on the same principle: it refers to surveying individuals from a distance via technology, such as telephone, email or the internet.
I'm currently involved with a child protection research project in Iraq. One component of the research is to survey key informants on child protection mechanisms, regulations and policies. These key informants include people from Iraqi ministries and agencies, lawyers, United Nations officials, local and international NGOs, and child protection professionals inside Iraq, in both the Arabic- and Kurdish-speaking governorates. The distances to travel as well as security issues make conducting face-to-face interviews costly and risky.
A wonderful tool for distance surveying is Google Forms. It is a flexible tool for survey development that includes a built-in data collection system, requires no coding, and is free of charge! The only "catch" is that you must have an internet connection and a Google Gmail account. Once you have one, go to Google Documents and select Form. Developing a survey tool is quite intuitive, but for those who would like some help, go here: Google Forms Tutorial
Also, you may have noticed that to use this type of distance survey, respondents need access to an internet connection and an internet browser. Most of the key informants mentioned above have both.
The survey process involves finding a "survey champion" in each of the groups, for example one person in the UN who will promote the survey among colleagues. The survey champion calls or sends an email telling colleagues about the survey and the importance of their participation. A follow-up email is then sent with the link to the Google Form survey, and each key informant completes the questionnaire. Key informants who do not complete the online survey within the requested time receive a follow-up from the "survey champion" in their group.
Once completed, all responses are automatically compiled into a Google Spreadsheet, which can then be analyzed online using Google's built-in tools or downloaded into other data analysis software such as SPSS.
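For those who want to go a step further than the online summaries, here is a minimal sketch, in Python, of tidying up and summarizing responses exported from the Google Spreadsheet as a CSV file; the file name responses.csv and the column name Governorate are hypothetical examples, not part of any actual survey.

```python
# Minimal sketch: summarizing Google Form responses exported as a CSV file.
# "responses.csv" and the "Governorate" column are hypothetical examples.
import pandas as pd

responses = pd.read_csv("responses.csv")          # the exported responses

# Frequency counts for one (hypothetical) closed-ended question.
print(responses["Governorate"].value_counts())

# Save a cleaned copy for import into SPSS or other analysis software.
responses.to_csv("responses_clean.csv", index=False)
```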
Discussion of the design, monitoring and evaluation of international development projects and programs.
Design, Monitoring and Evaluation - Save the Children (SC)
- LARRY DERSHEM - Tbilisi, Georgia
- Welcome to my (our) blog about project design, monitoring and evaluation in the Middle East, Caucasus & Central Asia Region. This blog is a forum to discuss designs for project evaluation, monitoring systems, and how to conduct evaluations. In addition, I will occasionally highlight new methodologies and techniques in research, data collection methodologies and data analysis. If you have any questions regarding DME, don't hesitate to ask. Feel free to use this blog to pose questions, highlight various projects, dilemmas and challenges in DME you confront in your projects or to share successes. Just use the Comments link below.
Wednesday, November 24, 2010
Software for Network Visualization and Analysis
On 27 March 2010 I posted a blog entry called Network Analysis and Visualization of Qualitative Data. In response to that post I received several requests to suggest software programs that can be used for network visualization and analysis.
More software for network visualization is developed each year, but all network visualization tools require some degree of knowledge about how to structure network data and databases PRIOR to visualization. The only network visualization tool I know of (at this time) that requires a minimum of knowledge about network data structure (or theory), is free of charge, and has some degree of documentation is NodeXL.
NodeXL is an MS Excel add-on template for visually analyzing networks that also provides basic network statistics (in-degree, density, etc.). Most network visualization software requires network data to be entered in one format (e.g., txt, DL, or other) and then imported into a separate program (NetDraw, Pajek) for visualization. NodeXL allows you to enter data directly in a familiar spreadsheet format, visualize it, and easily modify various characteristics of the network.
NodeXL is updated frequently and can provide great visual illustrations of ties, links, clusters, communities, and networks for a better understanding of your data.
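NodeXL itself lives inside Excel, but for readers who prefer a scripted route, here is a rough equivalent of the basic statistics it reports, using Python's networkx library on a small, invented edge list.

```python
# Illustrative sketch (not NodeXL itself): computing in-degree and density,
# two of the basic network statistics mentioned above, with networkx.
import networkx as nx

# Hypothetical directed edge list (e.g., who refers cases to whom).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("D", "C")]
G = nx.DiGraph()
G.add_edges_from(edges)

print("In-degree of each node:", dict(G.in_degree()))
print("Network density:", nx.density(G))
```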
Tuesday, November 23, 2010
Measuring Empowerment
A new handbook was brought to my attention recently, which I have not had a chance to review yet. It is called, Measuring Empowerment? Ask Them: Quantifying qualitative outcomes from people's own analysis (2010), published by the Swedish International Development Cooperation Agency (Sida). The authors are Dee Jupp and Shoel Ibn Ali. The Preface is written by Robert Chambers.
I have taken parts from the Preamble to provide a quick summary of this handbook:
"Quantitative analyses of qualitative assessments of outcomes and impacts can be undertaken with relative ease and at low cost. It is possible to measure what many regard as unmeasurable. This publication suggests that steps in the process of attainment of rights and the process of empowerment are easy to identify and measure for those active in the struggle to achieve them.....This paper presents the experience of one social movement in Bangladesh, which managed to find a way to measure empowerment by letting the members themselves explain what benefits they acquired from the Movement and by developing a means to measure change over time. These measures, which are primarily of use to the members, have then been subjected to numerical analysis outside of the village environment to provide convincing quantitative data, which satisfies the demands of results-based management."
Participatory Impact Assessment
The Feinstein International Center at Tufts University has a rather comprehensive guide on participatory assessment titled Participatory Impact Assessment: A Guide for Practitioners (2007). The authors are Andrew Catley, John Burns, Dawit Abebe and Omeno Suji. This guide focuses on measuring the impact of livelihood projects.
The guide outlines an eight-stage approach and aims to:
- Provide a framework for assessing the impact of livelihoods interventions.
- Clarify the differences between measuring process and real impact.
- Demonstrate how Participatory Impact Assessment (PIA) can be used to measure the impact of different projects in different contexts using community-identified impact indicators.
- Demonstrate how participatory methods can be used to measure impact where no baseline data exists.
- Demonstrate how participatory methods can be used to attribute impact to a project.
- Demonstrate how qualitative data from participatory tools can be systematically collected and numerically presented to give representative results of project impact.
The three fundamental questions the PIA attempts to answer are: a) what changes have there been in the community since the start of the project?, b) which of these changes are attributable to the project?, and c) what differences have these changes made to people's lives?
Not all donors are open to participatory methods, especially if they entail the development of indicators by community members during or even after a project. Most donors prefer a list of result and impact indicators prior to project implementation. But for those projects with greater donor flexibility to use participatory assessment techniques, this guide presents some basic steps and methods.
Sunday, November 21, 2010
Visualization Methods
Two guys (Ralph Lengler & Martin J. Eppler) from the Institute of Corporate Communication have developed a "Periodic Table" of visualization methods that shows examples of about 100 visualization methods in six categories, which are:
- Data Visualization - representation of quantitative data in schematic form (either with or without axes).
- Information Visualization - the use of interactive visual representations of data to amplify knowledge...that is, data is transformed into an image.
- Concept Visualization - methods to elaborate (mostly) qualitative concepts, ideas, plans, and analyses.
- Strategy Visualization - the systematic use of complementary visual representations in the analysis, development, formulation, communication, and implementation of strategies in organizations.
- Metaphor Visualization - displays information graphically to organize and structure information.
- Compound Visualization - complementary use of different graphic representation formats in one single schema or frame.
Examples from the table include SM (Stakeholder Map) and FF (Force Field).
This is a great resource for thinking about how to illustrate your ideas and data.
Saturday, November 20, 2010
Participatory Photo Mapping
Participatory Photo Mapping (PPM) is a tool for exploring the "experience of place" and for communicating this experience to community stakeholders and decision-makers. Using Participatory Photo Mapping helps uncover supports and barriers to well-being, especially those related to the built environment. The PPM approach combines photography, narrative stories, and mapping.
The PPM process has four steps:
Step 1: Provide participants with digital cameras and GPS units and have them take pictures of their neighborhood, documenting routine use of community and recreation environments.
Step 2: These photos become the objects of focus group sessions in which open dialogue creates emerging themes that are attached to particular images. Conduct focus group and narrative sessions where the photographs are projected onto a wall and community people talk about the images and are engaged in exploring perceptions of their neighborhood environment.
Step 3: The images are then geocoded as part of a neighborhood-level geographic information system that includes other demographic and spatial data, such as population, household characteristics and crime statistics, to create a qualitative GIS focused on the experience of community and recreation environments (see the sketch after these steps).
Step 4: Use learned knowledge to communicate the information to local decision-makers, such as health professionals, business owners, community organizations, and policy makers.
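As a purely hypothetical illustration of Step 3, here is a small Python sketch that attaches coordinates and focus-group themes to photos and writes them out as GeoJSON for use in a GIS; the file names, coordinates, and themes are all invented.

```python
# Hypothetical sketch of Step 3: geocoded photos with their focus-group themes,
# exported as GeoJSON so they can be layered with other spatial data in a GIS.
import json

photos = [
    {"file": "IMG_001.jpg", "lat": 43.0731, "lon": -89.4012,
     "themes": ["safe play area", "well-lit street"]},
    {"file": "IMG_002.jpg", "lat": 43.0705, "lon": -89.3990,
     "themes": ["broken sidewalk"]},
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [p["lon"], p["lat"]]},
        "properties": {"photo": p["file"], "themes": p["themes"]},
    }
    for p in photos
]

with open("photo_map.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)
```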
Below are some links to videos by Dr. , who originally designed PPM to develop collaborative projects and networks that improve the health and well-being of communities by strengthening health information systems and sharing that information with community stakeholders and public health decision-makers. Watch the videos (links below) to learn more about this tool.
PPM allows you to:
- assess the community and environmental contributions to health, safety and well-being,
- address peoples’ perceptions of their neighborhood environments,
- identify environmental factors that impact health and well-being,
- identify community supports and barriers to health and well-being, and
- present this information to stakeholders and decision-makers.
With the cost of digital cameras declining each day and the ability to instantly print photos, this technique lets community members (from youth to adults) participate in identifying community problems and issues, allows for multiple interpretations of what each photo is about and why it is important, and starts the discussion on how the problem can be resolved.
Tuesday, November 2, 2010
Livelihood Strategy Videos from Gaza, Palestine
The third video focuses on everyday Gazan life through Abd, a 28-year-old from Beit Lahya in the Northern Gaza Strip. He is a young farmer confronting the challenge of earning an income from his farm to take care of his children while helping friends overcome many personal and economic challenges.
Livelihood Strategy Videos from Gaza, Palestine
The second livelihoods video from Gaza, Palestine, is about Mohammed, who is 30 years of age and works in Save the Children's Livelihood Department. This video presents a young man trying to use his education and commitment to help youth organizations and communities in Gaza.
Livelihood Strategy Videos from Gaza, Palestine
Save the Children's office in Gaza, Palestine, has produced three short videos on livelihood strategies. The first video is about Torfa, a 56-year-old woman from Khan Younis, Southern Gaza Strip, who is trying to support her family. I will be featuring the other two videos shortly.
Saturday, October 30, 2010
Networks of Self-Categorised Stories
Rick Davies, developer and manager of the MandE website, discusses an interesting technique to analyze stories generated from his Most Significant Change method for program/project evaluation. He calls the technique Networks of Self-Categoris(z)ed Stories.
For most people in the field, the most challenging aspect of this technique will be obtaining the UCINET software (by Analytic Technologies), which must be purchased and takes some time to use comfortably, but which can quickly and easily generate two-mode data. However, the software NetDraw, which graphs the data, can be downloaded, installed and used for free.
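For readers wondering what two-mode data actually looks like, here is a rough sketch in Python (networkx) rather than UCINET/NetDraw; the stories and category labels are invented for illustration only.

```python
# Sketch of two-mode (story x category) data: each story is linked to the
# categories its teller assigned to it. All names are invented.
import networkx as nx

story_categories = {
    "story_1": ["livelihoods", "health"],
    "story_2": ["health", "education"],
    "story_3": ["livelihoods"],
}

B = nx.Graph()  # the two-mode (bipartite) network
for story, cats in story_categories.items():
    for category in cats:
        B.add_edge(story, category)

# Categories that share stories become linked in the one-mode projection.
category_nodes = {c for cats in story_categories.values() for c in cats}
projection = nx.bipartite.projected_graph(B, category_nodes)
print(list(projection.edges()))
```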
This technique illustrates several of the future trends in evaluation highlighted by Michael Patton in a webinar that I mentioned in an earlier post: transdiscipline (graph theory, content analysis), systems thinking (interdependence), and complex concepts (emergent categories). An illustration of networks of self-categorisation by Rick is below.
Michael Quinn Patton: Future Trends in Evaluation
On the website My M&E, on the Webinars page, under the title Developing Capacities for Country M&E Systems, you can find an informative webinar by Michael Quinn Patton, who discusses future trends in program and/or project evaluation.
The webinar was recorded using Elluminate; when you click on the link, if your computer does not already have the most up-to-date version of Java, it will be downloaded automatically and the Elluminate interface will open.
If you are not interested in the opening remarks, you can move the playback slider forward to the 6:30 minute point, which is about where Michael's presentation begins. Also, at the end of the presentation Michael takes questions from an online, international audience.
In his presentation, Michael highlights 6 new trends in evaluation:
- Globalization of the evaluation profession.
- Evaluation as a transdiscipline.
- Increased political intent in accountability, performance, indicators and transparency.
- Growing interest on evaluation capacity-building and skill development.
- Debates about methods.
- Using systems thinking and complex concepts.
Sunday, September 19, 2010
The Road to Results: Designing and Conducting Effective Development Evaluations
A very practical handbook, which also includes practical exercises, is the 2009 World Bank publication by Linda G. Morra Imas and Ray C. Rist titled The Road to Results: Designing and Conducting Effective Development Evaluations. It is a 585-page book that covers a wide variety of topics on program evaluation. The nice thing is that you can read it online for FREE. The detailed table of contents is below. I have also added the link to the Documents section of this blog.
FOUNDATIONS
Chapter 1. Introducing Development Evaluation
Evaluation: What is it?
The Origins and History of the Evaluation Discipline
The Development Evaluation Context
Principles and Standards for Development Evaluation
Examples of Development Evaluations
Chapter 2. Understanding the Issues Driving Development Evaluation
Overview of Evaluation in Developed and Developing Countries
Implications of Emerging Development Issues
PREPARING AND CONDUCTING EFFECTIVE DEVELOPMENT EVALUATIONS
Chapter 3. Building a Results-Based Monitoring and Evaluation System
Importance of Results-Based Monitoring and Evaluation
What is Results-Based Monitoring and Evaluation?
Traditional Versus Results-Based Monitoring and Evaluation
Ten Steps to Building a Results-Based Monitoring and Evaluation System
Chapter 4. Understanding the Evaluation Context and the Program Theory of Change
Front-End Analysis
Identifying the Main Client and Key Stakeholders
Understanding the Context
Tapping Existing Knowledge
Constructing, Using, and Assessing a Theory of Change
Chapter 5. Considering the Evaluation Approach
General Approaches to Evaluation
DESIGNING AND CONDUCTING
Chapter 6. Developing Evaluation Questions and Starting the Design Matrix
Sources of Questions
Types of Questions
Identifying and Selecting Questions
Developing Good Questions
Designing the Evaluation
Chapter 7. Selecting Designs for Cause-and-Effect, Descriptive, and Normative Evaluation Questions
Connecting Questions to Design
Designs for Cause-and-Effect Questions
Designs for Descriptive Questions
Designs for Normative Questions
The Need for More Rigorous Evaluation Designs
Chapter 8. Selecting and Constructing Data Collection Instruments
Data Collection Strategies
Characteristics of Good Measures
Quantitative and Qualitative Data
Tools for Collecting Data
Chapter 9. Choosing the Sampling Strategy
Introduction to Sampling
Types of Samples: Random and Nonrandom
Determining the Sample Size
Chapter 10. Planning for and Conducting Data Analysis
Data Analysis Strategy
Analyzing Qualitative Data
Analyzing Quantitative Data
Linking Qualitative Data and Quantitative Data
MEETING CHALLENGES
Chapter 11. Evaluating Complex Interventions
Big-Picture Views of Development Evaluation
Joint Evaluations
Country Program Evaluations
Sector Program Evaluations
Thematic Evaluations
Evaluation of Global and Regional Partnership Programs
LEADING
Chapter 12. Managing an Evaluation
Managing the Design Matrix
Contracting the Evaluation
Roles and Responsibilities of Different Players
Managing People, Tasks, and Budgets
Chapter 13. Presenting Results
Crafting a Communication Strategy
Writing an Evaluation Report
Displaying Information Visually
Making an Oral Presentation
Chapter 14. Guiding the Evaluator: Evaluation Ethics, Politics, Standards, and Guiding Principles
Ethical Behavior
Politics and Evaluation
Evaluation Standards and Guiding Principles
Chapter 15. Looking to the Future
Past to Present
The Future
Monday, September 13, 2010
Pareto's 80/20 Principle in Development Projects
In the early 1900s an Italian economist, Vilfredo Pareto, observed that in most countries 20% of the people owned 80% of the wealth. This observation soon became known as the Pareto Principle. In the 1930s and 1940s, Dr. Joseph Juran, who studied quality management, began noticing a similar pattern in organizations: 20% of something in an organization accounted for 80% of the results. Dr. Juran began referring to the "vital few and the trivial many."
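As a quick, made-up illustration of how you might check whether your own project data follow a Pareto-like pattern, here is a small Python sketch; the activity names and result figures are invented.

```python
# Sketch: what share of total results do the top 20% of activities account for?
# The figures below are invented for illustration only.
results_by_activity = {
    "activity_A": 520, "activity_B": 310, "activity_C": 90, "activity_D": 40,
    "activity_E": 25, "activity_F": 10, "activity_G": 3, "activity_H": 1,
    "activity_I": 1, "activity_J": 0,
}

values = sorted(results_by_activity.values(), reverse=True)
top_20_percent = values[: max(1, round(len(values) * 0.2))]
share = sum(top_20_percent) / sum(values)
print(f"The top 20% of activities account for {share:.0%} of results")
```
With these invented figures, the top two activities account for 83% of the results, which is close to the 80:20 pattern.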
Other examples of the 80:20 ratio are:
- 80% of all deaths from sickness result from 20% of diseases.
- 80% of the nutrition you acquire comes from 20% of the foodstuff you eat.
- 80% and above marks are scored by only 20% of children in examinations.
- 80% of the work in an office is done by 20% of the staff.
- 80% market share of a product is owned by 20% of business houses.
- 80% of what a presenter presents is understood by only 20% of the audience.
- 80% of the people browsing the Internet go to 20% of the web-sites.
- 80% of the most listened music will be from 20% of the albums produced.
What are some examples from projects that might follow Pareto's Principle? For example:
- 80% of project results come from 20% of the project activities.
- 80% of project results come from the efforts of 20% of the project staff.
- 80% of project results come from 20% of project funds.
- 80% of project beneficiaries benefit from 20% of the project activities.
Can you think of other aspects of projects that do or may follow Pareto's Principle?
Sunday, September 12, 2010
Using Excel to Create a Gantt Chart
Every project or program proposal must include a Gantt chart illustrating its list of activities and completion dates. After being awarded, the Gantt chart must be revised and updated.
Not only does every project need a Gantt chart, but virtually all project staff use the MS Excel spreadsheet application. Conveniently, several people have published helpful instructions on how to use Excel to quickly create useful and easy-to-understand Gantt charts.
Michelle McDonough has published her set of instructions here: Gantt Charts from MS Excel
The site, Techblissonline, presents a downloadable Excel Gantt Chart template. An example is shown below.
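For those who would rather script a chart than build it in Excel, here is a comparable do-it-yourself sketch in Python using matplotlib; the activities and dates are invented for illustration.

```python
# Sketch of a simple Gantt chart with matplotlib; activities and timings invented.
import matplotlib.pyplot as plt

activities = ["Baseline survey", "Training", "Implementation", "Evaluation"]
start_day = [0, 15, 30, 150]   # days from project start
duration = [15, 15, 120, 30]   # length of each activity in days

fig, ax = plt.subplots()
ax.barh(activities, duration, left=start_day)
ax.set_xlabel("Days from project start")
ax.invert_yaxis()              # first activity at the top, as in a Gantt chart
plt.tight_layout()
plt.savefig("gantt_chart.png")
```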
Monday, September 6, 2010
Guide to Statistical Charts
For those who produce reports that include various types of statistical data, you know the importance of graphically illustrating data. Most people understand data graphs more readily than data tables.
The UK government has published a nice little guide that sets out some principles and conventions for making statistical charts. The first part sets out some general principles to follow for any chart and then covers the default formatting used by the Social & General Statistics Section of the Library in the UK.
A chart that works on visual and different analytical levels is open to greater interpretation. There are many options for making a good graph or chart by blending chart type, colour, size, dimensions, labelling, scales, etc. There are few rules, but the general principles highlighted in this paper help improve any graph or chart presentation and thus understanding.
Saturday, September 4, 2010
Google Maps for Project Description
Google offers many great services. One great service is My Maps in Google Maps. Once you have an account in Google Maps, you can geographically locate any aspect of your project for others to view. For example, Save the Children is implementing the Youth Empowerment Program (YEP) in Yemen, primarily in schools in the south.
Using Google Maps, via the Satellite View, the project staff located all the schools in the YEP project. Once located, the schools were tagged with an icon (GPS units can be used to help locate project activities). Not only did the staff locate the schools and place icons where they are located in Yemen, but they also created pop-up windows so that when various stakeholders (donors, Ministry of Education officials, SC HQ staff) click on an icon, a "balloon window" appears providing specific details about the school. It is also possible to include pictures and video besides text.
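For anyone who wants to prepare placemarks outside the Google Maps interface, here is a hypothetical sketch that writes a single KML placemark, which Google Maps and Google Earth can import; the school name, description, and coordinates are invented.

```python
# Hypothetical sketch: one KML placemark for a school (all details invented).
# KML coordinates are listed as longitude,latitude,altitude.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example YEP School</name>
    <description>Grades 1-9; 420 students; YEP activities since 2009.</description>
    <Point><coordinates>45.0367,12.7855,0</coordinates></Point>
  </Placemark>
</kml>"""

with open("yep_school.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```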
The YEP project schools can be viewed here: Schools Save the Children is working with in Yemen. Click on any of the icons and the school will be immediately identified via a balloon window showing various details about the school (not all schools have details entered yet). If you are interested, keep zooming in and you will actually see the school building.
Also, an organization can add its logo to Google Maps, and the map can be integrated into your organization's website or your project/program website. So, when appropriate, use Google Maps to show others where your projects or programs are being implemented.
Thursday, September 2, 2010
Understanding the Use of Focus Group Discussions (FGDs)
Recently, I was asked to assist with a research project. The main aim of the research is to obtain prevalence rates for violence against children as well as the contexts in which it occurs. The methodology proposed was the use of focus group discussions (FGDs).
The original design estimated the number of interviews (sample size) that would be needed to be representative of the larger population of children and parents. These sample sizes were then divided by 10 (the average number of people in an FGD), and the research project determined that a total of 980 FGDs would be needed to obtain prevalence rates. Yes, 980 FGDs among children and parents on violence against children.
In addition, the research protocols stated that the FGDs would:
- recruit participants randomly
- cover a wide range of issues
- include confidentiality
- not attempt to get individual accounts of violence
- would solicit consensus on type and contexts of violence
- data analysis would be completed in a short period of time
My concerns with this design were the following:
- The sampling was based on individuals, not groups. The sampling parameters (confidence interval, margin-of-error, etc.) would not be applicable once you took individuals and formed them into groups.
- FGD findings cannot provide prevalence rates, nor can they be generalized to a larger population.
- Random sampling does not apply to FGDs; instead, FGDs use either purposive or convenience samples. FGD participants are selected because of some common characteristic(s), not randomly.
- Generally, FGDs should focus on a few issues with sufficient time to "dig deeper" into these few issues rather than discussing a wide range of issues lightly.
- FGDs are not good for discussing sensitive issues such as child violence or exploitation, which is better handled in private interviews or in-depth interviews.
- FGDs cannot ensure confidentiality of what is discussed. FGDs organizers and the moderator cannot control what participants may tell others what was discussed in the FGDs afterwards.
- FGDs should solicit as many diverse opinions and views and NOT attempt to impose consensus.
- Data generated by FGDs are not cheap and easy to enter, analyze and interpret. Despite what many project staff think and/or believe, qualitative data entry and analysis must be systematic and rigorous and is as challenging as quantitative data analysis.
- A good FGD requires an experienced moderator and consistency across FGDs, which would be next to impossible with 980 FGDs.
Anyone interested in the best practices of using FGDs should read International Focus Group Research: A Handbook for the Health and Social Sciences, by Monique Hennink.
Tuesday, August 31, 2010
New Monitoring & Evaluation Site by UNICEF's Evaluation Office
UNICEF's Evaluation Office has developed a website called MyMandE. As stated on the website's homepage, MyMandE "is an interactive WEB 2.0 platform to share knowledge on country-led M&E systems worldwide. In addition to being a learning resource, My M&E facilitates the strengthening of a global community, while identifying good practices and lessons learned about program monitoring and evaluation in general, and on country-led M&E systems in particular."
"While My M&E was founded by IOCE, UNICEF and DevInfo, it is managed by a consortium of partner organizations including IDEAS, IPDET, WHO/PAHO, UNIFEM, ReLAC, Preval, Agencia brasileira de Avaliacao, SLEvA and IPEN. If your organization wishes to join the consortium as a partner, please send an email to Marco Segone, UNICEF Evaluation Office, at msegone@unicef.org."
"My M&E is a collaborative website whose content can be modified continuously by users. To develop and strengthen a global community on country-led M&E systems, registered users have the facility to complete their own social profile and exchange experiences and knowledge through blogs, discussion forums, documents, webinars and videos."
"While My M&E was founded by IOCE, UNICEF and DevInfo, it is managed by a consortium of partner organizations including IDEAS, IPDET, WHO/PAHO, UNIFEM, ReLAC, Preval, Agencia brasileira de Avaliacao, SLEvA and IPEN. If your organization wishes to join the consortium as a partner, please send an email to Marco Segone, UNICEF Evaluation Office, at msegone@unicef.org."
"My M&E is a collaborative website whose content can be modified continuously by users. To develop and strengthen a global community on country-led M&E systems, registered users have the facility to complete their own social profile and exchange experiences and knowledge through blogs, discussion forums, documents, webinars and videos."
Marco Segone discusses MyMandE in a presentation available online. MyMandE has the following sections:
- Wiki
- Community
- Webinars
- Videos
- How to
- Trainings
- Virtual Library
- Jobs
- Roster
Under Videos, my 3-part series on impact evaluation is listed! So, for those interested in international program/project evaluation this site provides many resources (and hopefully more will be added) and allows the international community of M&E people to contribute.
Thursday, August 26, 2010
Standardized Tools for Measuring Child Abuse and Violence Against Children
In 2006, the World Report on Violence Against Children was published by the United Nations, authored by Paulo Sergio Pinheiro. This report was the first comprehensive global attempt to describe the scale of all forms of violence against children, and it highlighted the widespread violence against children at home, in schools, in their communities, and by the state, despite most countries having signed the Convention on the Rights of the Child.
From a monitoring & evaluation perspective, why should I discuss child abuse and violence against children? One of the major findings of this study was that too many complicated and different tools were being used across countries which made it difficult to assess and compare child abuse rates and trends.
In response to this finding, recommendations were made to develop standardized approaches and ways to measure violence against children that would allow for cross-national comparisons. A set of tools was developed by 122 experts and is discussed in an article published in 2009 in the journal Child Abuse & Neglect. These tools are called the International Child Abuse Screening Tools (or ICAST). The ICAST tools were piloted in convenience samples in 7 countries representing all regions of the world. As stated in the journal article mentioned above, these tools were specifically designed to be used in cross-cultural, multi-national, multi-cultural studies so that comparisons could be made across time and countries.
There are 4 ICAST tools to measure child abuse: a) the parent report for young children, b) the child report, c) the institutionalized child report, and d) the young adult retrospective report. These 4 ICAST tools can be found on the International Child Abuse and Neglect website. They have been translated into the following languages: Spanish, Arabic, Icelandic, Hindi and Russian.
If your project or program is considering a baseline assessment of child abuse or violence, these tools may be useful.
Wednesday, August 25, 2010
Personal Digital Assistants (PDAs) and Child Protection
Recently, Save the Children (SC) received a grant from Google to test some innovative ideas. One of the innovative projects that SC is testing is the use of personal digital assistants (PDAs) in child protection. A PDA is a hand-held computer that serves as a mobile information manager.
The use of PDAs for child protection is being field tested in the country of Azerbaijan. Once part of the Soviet Union, Azerbaijan, like many other Soviet states, institutionalized children for even minor mental or physical disabilities, or even if the parents could not afford to support them. Since independence, international organizations have encouraged the Azeri government to pass de-institutionalization reforms; that is, as much as possible, returning children to their families or foster homes and supporting both the children and the families. One approach to de-institutionalization of children in Azerbaijan is the community case management approach. Case management involves assessing, monitoring and evaluating each child and family. Community case management involves ensuring local referral and support services are available, accessible and used by the children and their families.
Currently, child case management involves the use of paper-based intake assessment forms, child and family monitoring forms, and referral service follow-up. Collecting all these data requires time and money. In addition, the time needed to process all these data (data entry, data cleaning, data analysis, case reports) means that children and families may not receive the immediate care or support they need.
With a current project in Azerbaijan, SC is conducting a comparative study of paper-based vs. PDA child case management. In this study, 10 of 60 child case management social workers have been randomly selected and assigned to two groups: 5 social workers will use paper-based child case management forms and 5 will use PDAs loaded with digital versions of the same forms.
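As a side note, here is a minimal sketch of the kind of random selection and assignment described above, assuming (purely for illustration) that the 60 social workers are identified by simple IDs.

```python
# Sketch of randomly selecting 10 of 60 social workers and splitting them
# into a paper-based group and a PDA group; the IDs are hypothetical.
import random

social_workers = [f"SW-{i:02d}" for i in range(1, 61)]  # 60 social workers

random.seed(42)                  # fixed seed so the draw can be reproduced
selected = random.sample(social_workers, 10)
paper_group, pda_group = selected[:5], selected[5:]

print("Paper-based group:", paper_group)
print("PDA group:", pda_group)
```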
After social workers in each group have completed case management forms for approximately 10 children and families, SC will examine the following:
- The time and cost differences between paper-based vs. PDAs for each child and family, which will determine the cost effectiveness of PDAs. That is, do PDAs reduce costs and if so, how much;
- Data quality, which will determine if the error rates are different. That is, does direct data entry and transmission to the central database reduce errors and if so which errors and by how much; and
- User satisfaction, which will determine whether social workers are more satisfied with paper-based forms or PDAs as a case management data tool.
The pilot testing of PDAs for child protection is occurring in Azerbaijan as I write (end of Aug 2010) but should be completed in late September. Once the results are in I'll share them on this blog.
Admitting Project Failure
Buckminster Fuller said, "If I ran a school, I'd give the average grade to the ones who gave me all the right answers, for being good parrots. I'd give the top grades to those who made a lot of mistakes and told me about them, and then told me what they learned from them."
This quote characterizes the new website FAILFaire, which provides an online forum to report and discuss project failures. As I have mentioned in a previous blog, all too often project evaluations do not mention any project shortcomings or failures. The international development community often highlights success and files away failures. But this is a mistake!!! "Talking openly and seeing where we have failed may help us learn, make better decisions, and avoid making the same mistakes again." I believe it is no secret that many projects just don't work...for various reasons.
Hopefully, most projects are based on sound evidence that they will work prior to implementation. But for those occasions when projects do fail, they should be talked about for lessons learned.
Besides the website, FAILFaire holds conferences. The 1st was held in New York by MobileActive and focused on technology and the 2nd was held in Washington DC in July by the World Bank.
One example reported at a FAILFaire conference was by UNICEF: the 5 Million Stories by 2010 Project. UNICEF Innovations' Chris Fabian and Erica Kochi co-presented what they jokingly referred to as a "zombie project", because despite the fact that the project couldn't get off the ground, it kept being half-heartedly restarted over the years. "Our Stories" was designed to give children around the world the chance to tell their stories to be published online as part of a look at the global experience of childhood, with the ultimate goal of having 5 million stories posted by 2010.
Launched in 2007, Kochi and Fabian estimate the project had a 0.008% success rate, since it only gathered 400 stories. They say that this project was a failure of real-world application: although the idea was good, there was no real desire for it among the community it targeted. As Kochi explained, "No one asked for this." Other problems included using proprietary, non-open source code, so that they couldn't adjust when there were problems; a lack of ownership and commitment to the project by key stakeholders; and a long timeline that meant resources never aligned with needs - in 2007 there was money for PR, in 2008 pro bono design resources, and in 2009 the software development. In 2010, they finally shelved the project.
So, if you have a failed project, please proudly post it on FAILFaire.
Friday, May 21, 2010
On Home Leave (to the US) Until July 2010
I will be taking my annual Home Leave to the US during the month of June, 2010. When I return home to Tbilisi, Georgia, in July 2010 I will resume my program/project design, monitoring and evaluation blog.
Friday, May 14, 2010
Competencies Needed by Evaluators
In 2001, the American Journal of Evaluation published an article by Jean A. King, Laurie Stevahn, Gail Ghere and Jane Minnema titled, "Toward a Taxonomy of Essential Evaluator Competencies," (22; 229). These authors conducted an exploratory study to determine the extent to which 31 evaluation professionals from diverse backgrounds and approaches could reach agreement on a proposed taxonomy of essential evaluator competencies.
Using the weighted scores, I entered the competencies into Wordle to get a graphic representation of these competencies. Those competencies in large font size were weighted as more important than those in small font, though all were considered important.
The most important competencies were Framing a Research Question, Research Methods and Research Design. Of comparatively lesser importance were the competencies of Training Others, Supervising, and Responding to RFPs (Requests for Proposals).
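Wordle is a web tool, but a rough equivalent can be produced with the Python wordcloud package; the competency weights below are placeholders for illustration, not the study's actual weighted scores.

```python
# Sketch of a Wordle-style graphic from weighted competencies. The weights
# here are invented placeholders, not the weights from the King et al. study.
from wordcloud import WordCloud

weights = {
    "Framing a Research Question": 10,
    "Research Methods": 9,
    "Research Design": 9,
    "Reporting Results": 6,
    "Training Others": 2,
    "Supervising": 2,
    "Responding to RFPs": 1,
}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(weights)
cloud.to_file("competencies_wordcloud.png")
```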
Quotes Related to Evaluation
- True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information. Winston Churchill
- The only man who behaves sensibly is my tailor; he takes my measurements anew every time he sees me, while all the rest go on with their old measurements and expect me to fit them. George Bernard Shaw
- Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted. Albert Einstein
- We cannot discover what ought to be the case by examining what is the case. We must decide what ought to be the case. Paul Taylor
- The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong question. Peter Drucker
- One of the great mistakes is to judge policies and programs by their intentions rather than their results. Milton Friedman
- The pure and simple truth is rarely pure and never simple! Oscar Wilde
- First get your facts; then you can distort them at your leisure. Mark Twain
- My mind is made up, don't confuse me with the facts! Unknown
- A funeral eulogy is a belated plea for the defense delivered after the evidence is all in. Irvin S. Cobb
- There's a world of difference between truth and facts. Facts can obscure the truth. Maya Angelou
- It is easier to believe a lie that one has heard a thousand times than to believe a fact that no one has heard before. Unknown
- We want the facts to fit the preconceptions. When they don't it is easier to ignore the facts than to change the preconceptions. Jessamyn West
- If at first you don't succeed, destroy all evidence that you tried. Unknown
- Absence of evidence is not evidence of absence. Carl Sagan
- Evaluate what you want -- because what gets measured, gets produced. James Belasco
- Price is what you pay. Value is what you get. Warren Buffett
- For changes to be of any true value, they've got to be lasting and consistent. Tony Robbins
- Extraordinary claims require extraordinary evidence. Carl Sagan
Friday, May 7, 2010
Country-Level Monitoring & Evaluation Dashboard
This past week (2-7 May 2010) I worked with a great team in SC's Cairo office to sketch out a system that would provide the Senior Management Team (SMT) a consolidated "dashboard" report of key project indicators on a quarterly basis.
The goal of the dashboard report is to give the SMT an at-a-glance view, for every project, of 1) progress against quarterly benchmarks on key indicators, 2) successes and challenges for that quarter, 3) the burn rate of project funds, and 4) the number of beneficiaries reached, thus allowing better-informed decision-making by the SMT on project improvement BEFORE the end of the reporting year or the project.
Like the dashboard of a car, this Dashboard Report will carry only those indicators that the SMT considers "key" to measuring the accomplishments of the project, NOT ALL indicators.
Example of a Dashboard Report
A Dashboard Report is primarily based on comparing quarterly achievements with quarterly benchmarks for activity-level indicators; result-level indicators are presented on an annual basis. Therefore, if projects do not have end-of-project targets and established quarterly (or semi-annual) benchmarks, then developing a meaningful Dashboard Report to show progress is more difficult.
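To make the core calculation concrete, here is an illustrative sketch of comparing quarterly achievements against quarterly benchmarks and flagging under-achievement, using Python (pandas); the indicator names and figures are invented.

```python
# Sketch of the core dashboard calculation: achievement vs. benchmark per
# indicator, with a simple status flag. All figures are invented.
import pandas as pd

data = pd.DataFrame({
    "indicator": ["Teachers trained", "Children enrolled", "Kits distributed"],
    "benchmark_q1": [50, 400, 1000],
    "achieved_q1": [45, 410, 620],
})

data["pct_of_benchmark"] = 100 * data["achieved_q1"] / data["benchmark_q1"]
data["status"] = data["pct_of_benchmark"].apply(
    lambda p: "on track" if p >= 90 else "needs review"
)
print(data)
```
The 90% threshold is an arbitrary example; each SMT would set its own rule for what counts as acceptable progress.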
In addition, identifying over- or under-achievement of quarterly benchmarks, or annual targets, is only the beginning of the process. If, for example, there is under-achievement of a quarterly benchmark, the SMT can "drill down" with the Project Director into possible reasons, which can be discussed and hopefully resolved before the next quarter.
A Dashboard Report should be simple and easy to read; however, this takes a lot of work! Behind a good Dashboard Report are consensus on key indicators, systems to ensure data quality, adequate staff resources, long-term SMT commitment, user-friendly IT tools, clearly defined roles and data flows, and projects with M&E plans that clearly state the end-of-project targets to be achieved and the quarterly or semi-annual benchmarks used to track progress toward those targets.
The Egypt Country Office will be working on developing these systems and plans to produce the first draft of a Dashboard Report this summer. Once it is developed I will present it here.
Wednesday, April 21, 2010
Barriers to Learning within NGOs...as well as Regional & Country Offices
Just as individuals must learn in order to survive and grow in the new complexities of a global environment, so must organizations. There are different motives for learning: profit (maximize monetary gain), value (emphasis on ethics and reconciling interests), and altruism (doing good for others).
I would say that most people working in NGOs are motivated by altruism. Employing staff who are primarily motivated by altruism is positive, but it can negatively affect organizational learning in specific ways. The organization Networking for International Development conducted a survey of NGOs regarding organizational learning and published a report called "Working with Barriers to Organisational Learning."
The focus of the report is on barriers that seem to limit learning within NGOs. Ten barriers are discussed:
- Bias for Action - altruistic motives can often lead to an activist tendency in which staff feel there is no time to slow down, clarify issues, and reflect on what is happening.
- The Undiscussables - altruism to do good can lead to the tendency to avoid issues because of fear of upsetting others or to avoid conflict.
- Commitment to 'the Cause' - similar to the Bias for Action above, this is a sense that the altruistic 'cause' has to be achieved, and that taking time to reflect may lead to questioning whether what is being done will actually achieve the ultimate goal.
- A Cultural Bias - many people working in international NGOs come from the US or western cultures. The dominant culture of the organization may ignore other means and methods of interaction, discussion, reflection and learning.
- Advocacy at the Expense of Inquiry - the altruistic urge can lead to emphasis on advocating and defending a position at the expense of learning about other views.
- The Role of Leadership - combining most or many of the points above, organizational leaders often set the tone for what is acceptable forms of questioning, inquiry, interaction all of which affects overall learning.
- Learning to Unlearn - surrounded by others who have similar altruistic motives, staff can fall into the trap of doing what is easy, relying on habits and assumptions used for years rather than taking on the challenge of learning new ones.
- Practicing What We Preach - as part of the altruistic zeal there can be a tendency to promote processes, methods and practices that the organization itself does not do.
- The Funding Environment - altruism often relies on outside funding, and all too often funders (i.e., donors) limit the extent of innovating and testing as well as enacting change based on learning.
- Thinking Strategically About Learning - even though many NGO staff are motivated by altruism, there can still be a tendency to be competitive, which results in learning being treated as an "internal" activity with little priority placed on learning from peer organizations.
Thursday, April 8, 2010
Responding to Evaluation Findings
John Scott Bayley, an Evaluation Specialist at the Independent Evaluation Department in the Asian Development Bank published an article in the Evaluation Journal of Australasia about Handy Hints for Program Managers.
Though meant to be both lighthearted and serious, one of his handy hints for Program Managers is that if they feel threatened by the results of an evaluation study, they can consider responding with one of the following strategies:
- Attack the evaluation’s methodology;
- Attack the data's interpretation and resulting conclusions;
- Attack the evaluation’s assumptions;
- Attack the recommendations;
- Substitute previously unstated goals for the official program goals;
- Attack the evaluators personally, claim they are biased or unfamiliar with the program;
- Attack the evaluation’s key issues and research questions;
- Do not participate in the evaluation, but argue that the findings lack an adequate contextual background;
- Rally together those who are threatened by the findings;
- Indicate the findings are reasonable, but unable to be implemented due to a lack of resources, political opposition, the staff need training etc;
- Complain about a lack of consultation;
- Argue that the evaluators did not appreciate the subtleties of the program;
- Simply pretend that the evaluation never occurred, ignore it;
- State that the program’s environment has changed, and the findings are no longer relevant;
- Stall for time until the evaluation is forgotten about;
- Argue that the union and staff will not accept the recommendations;
- Argue that while the program has not achieved its goals, it does achieve other important things that are too subtle to be easily measured;
- Say that the evaluation leaves important questions unanswered and its significance is questionable;
- Argue that the data are open to alternative interpretations and the evaluation's conclusions have been questioned by others;
- Attack the steering committee;
- Claim that the results contradict commonsense experience, and testimonials from clients;
- Claim that the findings are contradicted by other research conducted by various experts in the field;
- Agree with the findings and indicate that you have known about this for some time, and you started making changes months ago;
- Argue that the findings contradict the spirit and philosophy of the dept/program;
- Make up quotations that support your case and attribute them to knowledgeable sources; and
- Argue about definitions and interpretations.
Some others that I have heard that are not listed above are:
- Argue that the project did not have a sufficient budget to monitor the results and thus cannot be held responsible for not achieving them.
- Argue that the results the project was trying to achieve are so unique that they are not measurable.
- Argue that the "real" results of the project will occur years after the evaluation.
Thursday, April 1, 2010
Development Assets of Yemeni Youth
Save the Children (SC) has been working in Yemen since 1963 with programming in education, health, child protection and civil society. In 2008, SC was awarded a grant from the United States Agency for International Development (USAID) for a Youth Empowerment Program (YEP), which operates in the governorates of Sana'a, Ibb, Aden and Abyan.
In late 2008 to early 2009, I was involved in an assessment to better understand how youth in Yemen are faring personally and socially, which would help inform YEP programming. One of the measures we used to assess how youth were doing was the well-regarded Search Institute's Developmental Assets Profile (DAP).
The DAP consists of 58 questions that comprise two domains, External and Internal Assets. The External Assets domain comprises four sub-scales: Support, Empowerment, Boundaries & Expectations, and Constructive Use of Time. The Internal Assets domain also comprises four sub-scales: Commitment to Learning, Positive Values, Social Competencies, and Positive Identity.
It took about 3 months to adapt the tool to the Yemen context. Adapting the DAP involved an initial translation from an Egyptian Arabic version (from another project) by Yemen project staff, followed by pilot-testing this version with youth through individual interviews and focus group discussions, which highlighted issues and wording that required further refinement. Once we had an Arabic version adapted for youth in Yemen, a pilot-test was conducted with youth who completed the entire DAP. The results were analyzed for internal reliability (Cronbach's alpha) and temporal stability over a 1-week period (Pearson correlation). Since the internal and temporal reliability tests were satisfactory, a random sample of 600 youth in the four governorates were interviewed in their homes, away from parents and siblings as much as possible. Each question is answered by youth on a scale ranging from "not at all" (=0) to "almost always" (=3) regarding the presence of various situations and conditions in their life in the previous 3 months. The scores are totaled and the developmental assets are categorized as low, fair, good, or excellent.
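As an aside, for readers who want to run similar reliability checks on their own pilot data, here is a minimal sketch in Python. The file names and column names ("item_..." and "total_score") are hypothetical, and it assumes the first and second administrations are stored row-aligned by respondent; it is not the actual analysis script used for the Yemen data.

```python
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: one row per youth, one column per DAP item.
pilot = pd.read_csv("dap_pilot_time1.csv")    # first administration
retest = pd.read_csv("dap_pilot_time2.csv")   # same youth, one week later

alpha = cronbach_alpha(pilot.filter(like="item_"))
r, p = pearsonr(pilot["total_score"], retest["total_score"])

print(f"Internal consistency (Cronbach's alpha): {alpha:.2f}")
print(f"One-week test-retest stability (Pearson r): {r:.2f} (p = {p:.3f})")
```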
The graph below presents the results: the average scores for each of the eight sub-scales that make up the External and Internal Assets domains. The average scores for each sub-scale have been connected with a line to provide a profile.
Several findings are quickly apparent about how Yemeni youth are faring personally and socially. First, overall, Yemeni youth have few constructive opportunities, as indicated by the sub-scale Constructive Use of Time having the lowest scores regardless of location. Second, where a youth lives influences their level of developmental assets, as shown by the substantial differences on these sub-scales depending on the governorate in which the youth lives (each of the lines represents a governorate). Third, Yemeni youth are faring well on Internal Assets but not so well on External Assets.
The implications of these findings for programming are that youth need more constructive opportunities and outlets. Currently, there are few opportunities in schools, neighborhoods or communities for youth to be involved in structured activities such as sports, music, mentoring, drop-in centers, social groups, or camps. This is particularly the case for youth living in the interior (Sana'a and Ibb), whereas youth living along the coast are more likely to be involved in activities such as boating or fishing. Being involved in constructive activities has been shown to improve empowerment by increasing self-esteem, providing a sense of belonging, developing cognitive, physical and social skills, enhancing a sense of self-worth, and developing relationships.
Although not shown in the graph, further analysis of the DAP shows that Yemeni youth reported low scores for schooling and neighborhood safety. For schooling, these youth did not feel that the schools they attend enforce rules fairly, care about them, or encourage them to do their best. Thus, empowering youth requires improving the quality of schools. For neighborhood safety, youth felt that neighbors do not help watch out for them, which means working with neighbors is also needed to empower youth in Yemen.
Tuesday, March 30, 2010
Unhealthy Evaluation Practices!?
Winston Churchill said, "Criticism may not be agreeable, but it is necessary. It fulfils the same function as pain in the human body. It calls attention to an unhealthy state of things." (No, I did not hear him say this in person.) The following article calls attention to some unhealthy things in the (non)use of evaluation within International NGOs (INGOs), especially in trying to convince the public they are accomplishing their mission statements through effective strategies and interventions. The article is titled, "Measuring Performance versus Impact: Evaluation Practices and their Implications on Governance and Accountability of Humanitarian NGOs," by Claude Bruderlein and MaryAnn Dakkak (June 30, 2009, SSRN).
The authors say that their study "confirms also a growing frustration among humanitarian professionals themselves that, while much is measured and evaluated, it is rarely the actual impact of their work. Instead it is apparent that evaluation as it mostly takes place today reflects primarily the needs of donors; is irrelevant for serious organizational learning and programming efforts; adds considerably to the burden of local staff and partners; and does little to shed light on the roles, influence and impact of INGOs as central actors in humanitarian action and protection."
One quote in the article comes from a high-ranking person in an INGO: "Evaluation as it is used today is the worst way to learn: It is done post-program (often after the new program has started), it is unhelpful, doesn’t address what produces good programming, focuses on attribution and doesn’t delve into the ambiguities of relationships; They are largely unused and a waste of resources and time."
The main criticisms of evaluations in INGOs (the "pains" Churchill mentioned) are:
- While organizations want evaluations for moral reasons, they only do what is actually required by donors.
- Evaluations are often not useful.
- Evaluations are often not used.
- New evaluation materials will help little as existing ones are not enforced.
- Evaluation criteria are often inappropriate.
- Impact evaluation as the one really meaningful approach is almost never done, and is just at the beginning of its development.
The article also offers recommendations for addressing these problems:
- Ensure that evaluations have leverage on programming, including through the direct involvement of evaluators, e.g. by scoring INGOs based on their resolution of identified problems and their integration of evaluator recommendations. Incidentally, these measures are also likely to have implications for the overall quality of evaluations.
- Clarify and separate competing organizational accountabilities, by effectively dividing INGO operations into for-profit and non-profit activities, or by partnering with outside for-profit entities. As they exist, most INGOs examined neither adequately fulfill their internal governance accountability nor their external business accountability.
- Develop and invest in dedicated evaluation research capacity, in-house or through partnerships with academic institutions that provide a rigorous basis and feedback mechanism to INGOs, their donors and the general public.
- Increase collaboration among INGOs and donors, based on existing efforts to consolidate, integrate and simplify evaluation methodologies in the interest of less time-consuming yet more meaningful and outcome-focused approaches.
- Develop a common approach towards donors and the public on what good humanitarian practice requires, in terms of minimum organizational overheads for rigorous and professional standards of evaluation, programming and organizational learning.
- Create a consortium of advocacy organizations, similar to those that exist in other areas, as an effective way of creating space for dialogue and inter-agency collaboration towards the definition of shared standards in advocacy.
- Share evaluations and learn collaboratively, in particular from failures and problems presently not included (or well hidden) in evaluation reports – primarily by fostering collective approaches for open evaluation dialogue.
- Experiment with a system of peer-reviewed evaluations, initially internal and confidential to each organization allowing for rigorous and open reviews of evaluation methods – similar to methods applied by ALNAP as an effective collaborative of evaluators but with more effective ways to actually enforce and ensure good practice.
- Agree on standardized quantitative and qualitative metrics of impact that would allow for a sufficiently practical and pertinent measurement of impact – as part and priority focus of an improved dialogue, even if it involved superseding existing collaboration successes in consolidating agency methods and indicators.
- Ensure that timelines and resources for evaluations are flexible and sufficient, including to undertake meaningful qualitative research of impact over the long-term and to ensure that evaluations on advocacy and policy can be adjusted to affect relevant processes.
- Preserve flexibility and check for unintended consequences, especially in advocacy and policy programming to take into account the dynamics of relevant political contexts.
- Agree on a simple but shared evaluation language, integrated into all stages of evaluation and programming that allows for the effective involvement of professionals and beneficiaries at and across all levels of humanitarian assistance.
Of all the criticisms, from my experience I agree that organizational learning from evaluation findings is quite rare. All too often, we (myself included) are too busy in search of the next funding to apply evaluation findings to current or future programs and projects; most evaluations focus on achieving results but rarely assess the "operational" aspects of how those results were (or were not) achieved; and unintended consequences are rarely investigated.
Sunday, March 28, 2010
Demonstrating Project "Impact"
When I conduct workshops in monitoring and evaluation, one of the topics discussed is "impact." When impact is defined in a workshop as, "the net change directly attributed to the project interventions," then it requires using and explaining its related terminology, such as "randomization," "selection bias," "attribution," "counter-factual," "double-difference," and "net-change." Attempting to define each of these terms and have them understood by workshop participants who may be unfamiliar with experimental design is challenging.
To help illustrate these concepts and terms I use a game on the first and last days of the workshop. On the first day of the workshop, as just an ice-breaker, a sheet of paper with a number is placed on the notebook of each workshop participant. Using a random number generator on my computer, I choose two numbers, and the two workshop participants who have these numbers form one team. Then I randomly generate two more numbers and these two participants form the second team.
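A minimal sketch of this kind of random team selection in Python (the participant count of 20 is a hypothetical example, not the actual workshop size):

```python
import random

# Hypothetical: 20 participants, each holding a numbered sheet of paper (1-20).
participants = list(range(1, 21))

# Randomly draw two numbers for Team 1, then two more for Team 2.
team_1 = random.sample(participants, 2)
remaining = [p for p in participants if p not in team_1]
team_2 = random.sample(remaining, 2)

print("Team 1:", team_1)
print("Team 2:", team_2)
```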
On a table in the workshop, I have the game Perfection, by Milton Bradley (see picture below). (For those unfamiliar with Perfection, it is a plastic box with holes of 16 different shapes in a 4x4 arrangement. The goal is to take the 16 plastic shapes and place them in their matching holes in the least amount of time.) The rest of the workshop participants are on the other side of the table, either cheering or jeering. One person is chosen to be the timekeeper. All 16 plastic pieces are placed in a pile on the table in front of Team 1, and when the timekeeper says "go" Team 1 starts putting the pieces in their matching holes in the Perfection box. Once all pieces are in, the timekeeper shouts how much time it took them. For example, "1 minute, 45 seconds!"
Perfection, a game by Milton Bradley.
Then Team 2 gets their chance to place all 16 pieces in their matching holes, with the timekeeper shouting out the time it took them. (Of course, there are the usual arguments about whether the timekeeper's count is correct.)
On the first day that is all I do... just use the game as an energizer. HOWEVER, at the end of the first day of the workshop I randomly select one of the teams (in this case Team 2), give them the Perfection game, and ask them to SECRETLY practice the game until the last day of the workshop.
On the last day of the workshop, again as an energizer, I ask both teams to come to the table and redo the Perfection game, with the timekeeper recording their times, to see which team is faster. After both teams have redone the Perfection game, I, along with the secretly chosen team (Team 2), tell the other workshop participants that Team 2 has been practicing the Perfection game since the first day of the workshop.
After Team 1 settles down from being upset that they were not allowed to practice too, we all gather at a flip chart with the timekeeper and the list of impact evaluation terminology I mentioned above. We discuss why I randomized the team members, how this was meant to reduce selection bias (the most coordinated participants were not necessarily selected, nor were people who had played games together before), and how Team 2 formed the factual (the effect of practicing) and Team 1 formed the counter-factual (no practicing).
Next, I have the timekeeper calculate the single differences and the double-difference in the change in time for each team to complete the Perfection game. So, on the flip chart paper, the timekeeper calculates:
Single Differences (absolute change):
Team 1: 90 secs (Time 2) - 120 secs (Time 1) = -30 secs
Team 2: 125 secs (Time 2) - 180 secs (Time 1) = -55 secs
Double-Difference:
55 secs (Team 2: factual) - 30 secs (Team 1: counter-factual) = 25 secs
Net Change: 25 secs
Attribution: Even without practicing, having played the Perfection game at least once can decrease the amount of time it takes to complete it a second time. However, practicing for about an hour each day of the 3-day workshop results in an even greater decrease in completion time. In this case, of the 55-second decrease in time for Team 2, 25 seconds can be attributed to practicing (the intervention).
Thus, if this were a project that had a training activity that conducted a baseline and end-line of training participants, without the counter-factual a project would report that its training reduced the amount of time to complete the Perfection game by 30.6% (55 secs/180 secs); however, the counter-factual shows that the training had only a 13.9% effect (25 secs/180 secs) on reducing time.
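For those who like to see the arithmetic spelled out, here is the same double-difference calculation as a short Python sketch, using the times from the exercise above:

```python
# Times (in seconds) from the workshop exercise.
team1_before, team1_after = 120, 90   # counter-factual: played once, did not practice
team2_before, team2_after = 180, 125  # factual: played once, then practiced

# Single differences (improvement for each team).
single_diff_team1 = team1_before - team1_after   # 30 seconds
single_diff_team2 = team2_before - team2_after   # 55 seconds

# Double-difference = net change attributable to practicing.
double_difference = single_diff_team2 - single_diff_team1   # 25 seconds

naive_effect = single_diff_team2 / team2_before   # 30.6% if the counter-factual is ignored
net_effect = double_difference / team2_before     # 13.9% attributable to practicing

print(f"Net change attributable to practicing: {double_difference} seconds")
print(f"Naive effect: {naive_effect:.1%}, net effect: {net_effect:.1%}")
```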
And, as you may have already thought, after this blog I will have to change my "impact" exercise for future workshops!