A common approach to project evaluation is to use a pre- and post-test among beneficiaries. Although this type of design is not very rigorous (I will write about why in a later post), some training projects start so quickly that only at the end of the training do staff realize they never conducted a pretest, and therefore have no baseline against which to measure a change in knowledge from the training course.
In an article in the American Journal of Evaluation (2009), Debra Moore and Cynthia Tananis discuss not only how to reconstruct a baseline but also the validity and reliability of the data when doing so. The method is called a retrospective pretest design.
Now, the authors clearly state that this method is best used with short-term, intensive training programs and may be less reliable for other types of activities and interventions.
The authors mention that in both a pretest-and-posttest design and a retrospective pretest design, one of the primary concerns is something called response-shift bias. Response-shift bias occurs when a participant understands the concept being measured differently at the pretest than at the posttest. For example, youth asked at pretest to answer questions about empowerment (the concept) before the training begins may answer the same questions differently at the posttest, because after taking the training course they understand the concept of empowerment differently. The authors therefore wanted to compare the degree of response-shift bias when a pretest and posttest were conducted for a training course with the degree when a pretest was NOT done and a retrospective pretest was used instead.
The basic research question was: do participants' responses more accurately represent their level of knowledge/awareness at the beginning of the training or after it?
For example, before taking a training course on DME I may think I know a lot and would happily report high levels of knowledge and ability in DME on a pretest questionnaire. Then, after taking the DME course and being exposed to more detailed and complex issues I was not previously aware of, I may reassess and realize that my level of knowledge and ability was not as great as I thought. But, sadly, by then it is too late to change my pretest responses.
The authors conclude that pretest scores tend to overestimate a given level of knowledge or ability (a larger response-shift bias) compared with retrospective pretest scores. They also report that other studies have found self-report retrospective pretest scores to be more highly correlated with scores on objective pretest measures of skill development or knowledge than self-report pretest scores are.
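The pattern the authors report can be illustrated with a small simulation. To be clear, the numbers below are entirely synthetic and are not the paper's data; the simulation simply builds in the two assumptions described above (an inflated self-report pretest, and a retrospective pretest that tracks an objective measure more closely) and then checks what falls out.

```python
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 200
# True pre-training skill, as an objective test would score it (0-10 scale).
objective_pre = [random.uniform(2, 6) for _ in range(n)]

# Standard self-report pretest: inflated and erratic, because participants
# do not yet know what they do not know (response-shift bias).
pretest = [min(10.0, s + random.uniform(0.5, 4.5)) for s in objective_pre]

# Retrospective pretest: rated after training, with a shared understanding
# of the concept, so centered on the objective score with modest noise.
retro = [min(10.0, s + random.gauss(0, 0.8)) for s in objective_pre]

print(f"mean objective pre-score:    {statistics.mean(objective_pre):.2f}")
print(f"mean self-report pretest:    {statistics.mean(pretest):.2f}")
print(f"mean retrospective pretest:  {statistics.mean(retro):.2f}")
print(f"r(objective, pretest):       {pearson(objective_pre, pretest):.2f}")
print(f"r(objective, retrospective): {pearson(objective_pre, retro):.2f}")
```

Because the inflation and noise levels are assumptions built into the simulation, the output reproduces the reported pattern by construction: the self-report pretest mean sits well above the objective mean, and the retrospective pretest correlates more strongly with the objective measure. The sketch is only meant to make the two findings concrete, not to validate them.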
The basic message: In projects that include short, intensive training courses, reconstructing a baseline through the use of a retrospective pretest conducted at the end of the training may provide more accurate results than a pretest at the beginning of the training course.
I wish I had known this 10 years ago when I was managing a large, complex education program in Pakistan with many short, intensive training courses and no good baseline to convince the EC external evaluators of the great achievements in teacher performance that pupils, parents and teachers all reported.
Look forward to your notes on why a pre- and post-test are not good evaluation methodology. If it is response-shift bias, then how do we measure it, and how much is a good thing anyway?
Hello Larry, this is very interesting. I would also like to see your thoughts on pre/post testing - have I missed it? I am planning my project's pretest as we speak! Thanks!
Some important criteria to keep in mind when considering a "retrospective baseline": it is good for a) self-report of learning or level of knowledge and b) short-term (~3 months), intensive training.
When the pre- and post-test have established correct responses or answers, such as "Please name the 5 stages of policy development," then the standard pre- and post-test works well.