Last week, a group of ugly annual visitors arrived: my course evaluation forms.
A few professors are just crazy about student evaluations and wax obscure about “summative” and “formative” evaluation and so on. Others hate them passionately, some even refusing to hand them out (contrary to their contractual obligations, I might add). I hand them out, and I generally get pretty good scores. But I hate doing it, because I know it’s a pretty useless exercise.
The argument for doing them is straightforward. Course evaluations are used to evaluate an instructor’s performance in class, and who better to judge than those who witness that performance every day? At my university, there is no requirement that any faculty members or administrators observe faculty teaching, and though my department does require new faculty to have a tenured member come in, observe, and write a report, the newbie gets to choose who comes to visit, and tends to choose someone they think will be sympathetic. Besides, anyone can get his act together for one performance. But students see the whole run.
The argument above has more or less won the day at every university in Canada, so far as I know. Student evaluations are as ubiquitous as labs, term papers, and baseball caps. But think a bit more about them, and evaluations start to seem less and less worthwhile.
For one thing, it’s pretty clear that students don’t take them very seriously. I remember one student in a hiring committee meeting being astonished at how gravely other committee members were talking about student evaluations, given, she said, that students fill them out as fast as they can in order to get out of class as quickly as possible. This is borne out by the fact that at my school, evaluations are suspiciously consistent among faculty, with most scores ranging between 4.2 and 4.5 out of 5. Only a small fraction of students take the time to include written comments, and those are never long.
Further, it seems likely to me that students are not really evaluating the quality of instruction, but rather their level of satisfaction with the course, and the two are often very different. An excellent instructor may also be a hard grader, and a student earning Ds in that course is not likely to give Dr Tenacious a five out of five, especially if the question asks whether Dr T is a fair marker or not. More generally, is an undergraduate student actually in a position to know whether Professor Genial is up on the latest scholarship in particle physics? Or is she really just evaluating whether Professor G is funny? Or cool? Or not? All of which tends to promote grade inflation and an emphasis on entertaining students, not educating them.
If we were really serious about evaluating teaching, we would do it the same way that we evaluate research: anonymous peer review. Universities could make video recordings of professors’ classes throughout the year and send a random sample to three tenured professors at other universities, who would then send back their evaluations.
Such a system would be much more expensive than student evaluations, but any university that cared deeply about teaching would find the money. But of course, they don’t. They prefer a system that is cheap, and easy, and, well, pretty useless.
If we are to keep student evaluations, they should not be done at the end of the course. They should be done five years later. After graduation, a student may come to see that Dr Tenacious’s high standards were actually the best thing for him, while Professor Genial’s pleasant smile didn’t actually come in very handy after all.