Off-topic: The survey bias

Bad designer, one of my favorite devil’s advocates, asked an interesting question about post-course survey results:

Once in a course or similar and you get to know someone, it becomes very difficult to give bad results, particularly if it is life-affecting in some way, such as bonuses or future work. In fact, one can argue that you should get high results just for high effort with integrity.

The bias toward higher scores is definitely present and is in fact so strong that 4.0 usually represents a barely acceptable result; sometimes the minimum acceptable average score for an instructor is set to 4.3 – 4.5. It’s also very important to understand how the questions are phrased and what the results actually mean.

For example, suppose you ask the question “How would you compare this course to other courses you’ve attended?” with answer “3” representing “average”. Setting the acceptable score higher than 3.5 (with 4.0 being a stretch goal) is clearly stupid; not every course can be way above average. As expected, that question also got the lowest score in my results (4.4).

On the other hand, when you ask questions like “The instructor demonstrated a sound understanding of the material” with 5 = strongly agree, 4 = somewhat agree, and 3 = neutral, you should expect the average score to be way above 4.0; otherwise you have a huge problem.
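
To make the arithmetic behind that expectation concrete, here’s a minimal back-of-the-envelope sketch in Python (the response distributions are invented purely for illustration): with 4 meaning “somewhat agree”, even a class where a third of the students stayed neutral still averages exactly 4.0, while a genuinely happy class lands around 4.5.

    # Likert scale for an agree/disagree question:
    # 5 = strongly agree, 4 = somewhat agree, 3 = neutral, 2/1 = disagree
    def average_score(distribution):
        """Weighted average of a {score: fraction-of-responses} distribution."""
        return sum(score * fraction for score, fraction in distribution.items())

    # Hypothetical class where most students are genuinely happy
    happy_class = {5: 0.6, 4: 0.3, 3: 0.1}
    print(average_score(happy_class))     # 4.5

    # Hypothetical lukewarm class: a third of the students merely neutral
    lukewarm_class = {5: 0.3, 4: 0.4, 3: 0.3}
    print(average_score(lukewarm_class))  # 4.0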

There’s also cultural bias: it’s impossible to get excellent scores in some countries or with some audiences. If you’re teaching a global audience (or travelling across ¾ of the globe like our instructors do), it’s not a problem; the results average out. Instructors tied to a particularly tough market do have a problem, even more so if their goals are set by an organization with a global perspective (I don’t want to get more specific than this, but you can probably guess what I have in mind).
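
As a rough illustration of the “results average out” claim (the regional averages below are made up, not real data): an instructor who teaches everywhere ends up with a blended global score, while one tied to the toughest market never gets near a 4.3–4.5 target no matter how well they teach.

    # Hypothetical regional scoring habits for the same instructor and course
    regional_average = {"region_A": 4.7, "region_B": 4.4, "region_C": 3.9}

    # Travelling instructor: regional quirks blend into one global number
    global_average = sum(regional_average.values()) / len(regional_average)
    print(round(global_average, 2))       # 4.33

    # Instructor stuck in the toughest market is pinned to its bias
    print(regional_average["region_C"])   # 3.9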

Last but definitely not least, the survey results tend to represent the opinions of the evangelists and the disappointed. When we were working with paper forms filled out in the classroom, we collected over 90% of the feedback ... but you could argue that those results were heavily biased (and the presence of the instructor definitely was an influence). With the online forms sent to the students after the event, the response rate is way lower; most students who don’t have strong feelings simply ignore the invitation unless (as Bad designer pointed out) they’re aware that someone’s performance goals or bonus is tied to the results, in which case the surveys become pretty useless.

Based on all this, you might wonder why everyone is still doing post-course surveys. For some people it’s definitely a cargo cult; others have to do it to satisfy external requirements or ISO 9001 auditors. Both are the wrong reasons; you should use the surveys as a health check – if anyone feels strongly enough about the course to fill in the survey form and tell you how good (or bad) you were, that’s very valuable feedback that should be respected and (most importantly) acted upon.
