When I conduct workshops on monitoring and evaluation, one of the topics discussed is "impact." When impact is defined in a workshop as "the net change directly attributable to the project interventions," it requires introducing and explaining related terminology, such as "randomization," "selection bias," "attribution," "counter-factual," "double-difference," and "net change." Defining each of these terms so that they are understood by workshop participants who may be unfamiliar with experimental design is challenging.
To help illustrate these concepts and terms, I use a game on the first and last days of the workshop. On the first day, as just an ice-breaker, a sheet of paper with a number is placed on the notebook of each workshop participant. Using a random number generator on my computer, I choose two numbers, and the two workshop participants who have those numbers form one team. Then I randomly generate two more numbers, and those two participants form the second team.
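The random draw above can be sketched in a few lines of Python. This is a hypothetical illustration, not the author's actual tool; the participant count and the fixed seed are assumptions made only so the sketch is reproducible.

```python
import random

# Illustrative sketch of the ice-breaker draw: each participant holds a
# number, and four distinct numbers are drawn to form two teams of two.
participants = list(range(1, 21))  # assume 20 numbered participants

random.seed(42)  # fixed seed only so the example is reproducible
drawn = random.sample(participants, 4)  # four distinct numbers, no repeats
team_1, team_2 = drawn[:2], drawn[2:]

print("Team 1:", team_1)
print("Team 2:", team_2)
```

Using `random.sample` (rather than repeated `random.choice`) guarantees that no participant is drawn twice, which keeps the two teams disjoint.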
On a table in the workshop, I have the game Perfection, by Milton Bradley (see picture below). (For those unfamiliar with Perfection, it is a plastic box with holes of 16 different shapes in a 4x4 arrangement. The goal is to place the 16 plastic shapes into their matching holes in the least amount of time.) The rest of the workshop participants stand on the other side of the table, either cheering or jeering. One person is chosen to be the timekeeper. All 16 plastic pieces are placed in a pile on the table in front of Team 1, and when the timekeeper says "go," Team 1 starts putting the pieces into their matching holes in the Perfection box. Once all the pieces are in, the timekeeper shouts out how much time it took them, for example, "1 minute, 45 seconds!"
Perfection, a game by Milton Bradley.
Then Team 2 gets its chance to place all 16 pieces into their matching holes, with the timekeeper shouting out the time it took them. (Of course, there are the usual arguments about whether the timekeeper is correct.)
On the first day, that is all I do: just use the game as an energizer. HOWEVER, at the end of the first day of the workshop, I randomly select one of the teams (in this case, Team 2), give them the Perfection game, and ask them to SECRETLY practice it until the last day of the workshop.
On the last day of the workshop, again as an energizer, I ask both teams to come to the table and redo the Perfection game, with the timekeeper recording their times, to see which team is faster. After both teams have redone the game, I, along with the secretly chosen team (Team 2), tell the other workshop participants that Team 2 had been practicing the Perfection game since the first day of the workshop.
After Team 1 settles down from being upset that they were not allowed to practice too, we all gather at a flip chart with the timekeeper and a list of the impact evaluation terminology mentioned above. We discuss why I randomized the team members, how this reduces selection bias (the most coordinated participants, or people who had played games together before, were not deliberately selected onto one team), and how Team 2 forms the factual (the effect of practicing) and Team 1 the counter-factual (not practicing).
Next, I have the timekeeper calculate the single differences and the double-difference in the change in time for each team to complete the Perfection game. So, on the flip chart paper, the timekeeper calculates:
Single Differences (change in time):
Team 1 (counter-factual): 90 secs (Time 2) - 120 secs (Time 1) = -30 secs
Team 2 (factual): 125 secs (Time 2) - 180 secs (Time 1) = -55 secs
Double-Difference:
-55 secs (Team 2: factual) - (-30 secs) (Team 1: counter-factual) = -25 secs
Net Change: a 25-second greater decrease for Team 2
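The flip-chart arithmetic can also be written out in a short Python sketch, using the times reported above:

```python
# Double-difference calculation from the workshop times.
# Team 1 (counter-factual): did not practice; Team 2 (factual): practiced.
team1_before, team1_after = 120, 90
team2_before, team2_after = 180, 125

single_diff_1 = team1_after - team1_before   # -30 secs (Team 1)
single_diff_2 = team2_after - team2_before   # -55 secs (Team 2)

# Net change attributable to practicing (the intervention):
double_diff = single_diff_2 - single_diff_1  # -25 secs

print(single_diff_1, single_diff_2, double_diff)  # -30 -55 -25
```

The negative sign simply means times went down; the 25-second magnitude is the net change attributed to practice.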
Attribution: Even without practicing, having played the Perfection game once can decrease the amount of time it takes to complete it a second time. However, practicing for about an hour on each day of the 3-day workshop results in an even greater decrease in completion time. In this case, of Team 2's 55-second decrease in time, 25 seconds can be attributed to practicing (the intervention).
Thus, if this were a project with a training activity that conducted a baseline and an end-line of training participants, then without the counter-factual the project would report that its training reduced the time to complete the Perfection game by 30.6% (55 secs/180 secs); the counter-factual, however, shows that the training accounted for only a 13.9% reduction (25 secs/180 secs).
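The contrast between the naive before/after estimate and the counter-factual-adjusted estimate can be checked with the figures in the paragraph above:

```python
# Naive before/after effect versus counter-factual-adjusted effect.
team2_baseline = 180  # Team 2's Time 1, in seconds
gross_change = 55     # Team 2's total improvement (before minus after)
net_change = 25       # double-difference: the part attributable to practice

naive_effect = gross_change / team2_baseline  # what a baseline/end-line alone would report
true_effect = net_change / team2_baseline     # what the counter-factual reveals

print(f"naive: {naive_effect:.1%}, counter-factual-adjusted: {true_effect:.1%}")
```

Running this reproduces the 30.6% versus 13.9% comparison: more than half of the apparent "impact" is explained by the counter-factual (having played once before), not by the practice intervention.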
And, as you may have already thought, after this blog I will have to change my "impact" exercise for future workshops!