
You like me! You really, sort of, like me!

Student evaluations are probably here to stay. For worse, not better.


 

Last week, a group of ugly annual visitors arrived: my course evaluation forms.

A few professors are just crazy about student evaluations and wax obscure about “summative” and “formative” evaluation and so on. Others hate them passionately, some even refusing to hand them out (contrary to their contractual obligations, I might add). I hand them out, and I generally get pretty good scores. But I hate doing it, because I know it’s a pretty useless exercise.

The argument for doing them is straightforward. Course evaluations are used to evaluate an instructor’s performance in class, and who better to judge than those who witness that performance every day? At my university, there is no requirement that any faculty members or administrators observe faculty teaching, and though my department does require new faculty to have a tenured member come in, observe, and write a report, the newbie gets to choose who comes to visit, and tends to choose someone likely to be sympathetic. Besides, anyone can get his act together for one performance. But students see the whole run.

The argument above has more or less won the day at every university in Canada, so far as I know. Student evaluations are as ubiquitous as labs, term papers, and baseball caps. But think a bit more about them, and evaluations start to seem less and less worthwhile.

For one thing, it’s pretty clear that students don’t take them very seriously. I remember one student in a hiring committee meeting being astonished at how gravely other committee members were talking about student evaluations, given, she said, that students fill them out as fast as they can in order to get out of class. This is borne out by the fact that at my school, evaluations are suspiciously consistent among faculty, with most scores ranging between 4.2 and 4.5 out of 5. Only a small fraction of students take the time to include written comments, and those are never long.

Further, it seems likely to me that students are not really evaluating the quality of instruction but rather their satisfaction with the course, and the two are often very different. An excellent instructor may also be a hard grader, and a student earning Ds in that course is not likely to give Dr Tenacious a five out of five, especially if the question asks whether Dr T is a fair marker. More generally, is an undergraduate student actually in a position to know whether Professor Genial is up on the latest scholarship in particle physics? Or is she really just evaluating whether Professor G is funny? Or cool? Or not? All of which tends to promote grade inflation and an emphasis on entertaining students, not educating them.

If we were really serious about evaluating teaching, we would do it the same way that we evaluate research: anonymous peer review. Universities could make video recordings of professors’ classes throughout the year and send a random sample to three tenured professors at other universities, who would, in turn, send back their evaluations.

Such a system would be much more expensive than student evaluations, but any university that cared deeply about teaching would find the money. But of course, they don’t. They prefer a system that is cheap, and easy, and, well, pretty useless.

If we are to keep student evaluations, they should not be done at the end of the course. They should be done five years later. After graduation, a student may come to see that Dr Tenacious’s high standards were actually the best thing for him, while Professor Genial’s pleasant smile didn’t actually come in very handy after all.


 


  1. Certainly student evaluations in their current incarnation need to be taken with a good helping of salt, yet I think reforms can be made to address the concerns brought up in this article.

    If there’s a problem (and there likely is) with students rating their experience rather than the quality of the professor, then ask them about the two issues separately. In all the evaluations I’ve completed, not one has asked how I feel about my own performance in the class, so I’m forced to factor that sentiment into other responses.

    Also, the article asks, “is an undergraduate student actually in a position to know whether Professor Genial is up on the latest scholarship in particle physics?” Well, obviously not, but while tenure should be based on both teaching and research, teaching evaluations should target only the former; faculty can assess the latter through number of publications, prestige of journals, number of citations, etc. This is the status quo.

    Another problem stated is that students do the evaluations as fast as possible so that they can leave class. Well, at my institution, evaluations are done at the beginning of class: the professor leaves the room and returns in 10 minutes. Problem solved; there is no incentive to speed through the evaluation.

    I just don’t see the inherent problem, only a failure of innovative thinking.

  2. I agree that the widespread use of course evaluation forms needs to be reconsidered – there is much room for improvement. I have been that same student on a promotions committee, surprised that the student evaluations were given any weight at all. In my experience, attitudes toward the practice are lackadaisical among faculty members and students alike.

    I am uncertain if my experience is replicated at other Canadian institutions, but what commonly occurred for me was that in the last session a stack of forms was left behind while the prof stepped out of the room for 5–10 minutes (at the beginning or end of class; I have seen both). If students chose to fill out the form, they would do so quickly. If any instructions were given regarding the forms, they were usually along the lines of “here’s the forms guys, you know the drill”.

    Not once, in any orientation session or class, was I informed that the evaluations were considered in the promotions and tenure process. No one ever explained to me that I was evaluating the quality of the teaching and not the quality of the course, and since the forms are generally referred to as “course evaluations,” there is potential for confusion. If memory serves, written instructions at the outset of the questionnaire were limited to concerns of anonymity and such.

    I have often wondered why there are not ongoing opportunities for students to submit feedback throughout their course in some kind of online format. It may have something to do with attitudes towards the reliability of student responses (see Pettigrew).

    These attitudes may be well deserved, even for conscientious students. At the end of term I was generally frazzled, underslept, and study-weary. I tried to take surveys and evaluations seriously, but must admit that I often filled the forms out hastily, perhaps being a little too generous as a result.

    If the student evaluations are to be valued in the context of a broader quality assurance initiative then there should be effective informational campaigns to make students aware of the importance of the process. Students should know, when their opinions are being solicited for the umpteenth time in a semester, that their feedback actually counts. The survey instrument should be carefully designed and administered. Otherwise, student course evaluations have little value other than as fodder for professorial debate.

  3. You know, a lot of this critique of student evaluations amounts to a set of claims I’ve found amazing since day one in university. Scores are similar. Other factors affect students’ impressions. The information may be misused or misinterpreted. To all of this my reaction has always been “huh!?!” In an academic environment where researchers routinely grapple with the challenge of teasing good information out of difficult data sets, with evaluating bias, distinguishing between good and bad comparisons, etc., why is it suddenly so impossible to apply those skills to student evaluations? I’ve found those objections to be incredibly disingenuous for years, and I still do.

    I’ll add, however, that I 100% agree comments are more useful than numerical scores. As an instructor, now, I’ve found a simple way to increase the proportion of comments I receive. When I hand out the evaluations, I tell students that I actually want their comments. Seems basic, I know, but the assurance that I actually care what they have to say (their doubt is understandable, given many instructors’ attitudes towards the process) appears to encourage them to make the effort.

    A colleague of mine adds the following advice. Rather than leave students entirely without direction for comments, he suggests a “stop, start, continue” approach. That is, students are encouraged to comment on one thing he’s doing that he should stop doing, one thing he isn’t doing that he should start doing, and one thing he’s doing that he should continue to do. This direction on where to start gets students going. It’s darn good advice, and I’m going to adopt it myself this term.

  4. I think there are two sides to this story. Some students may not care about the surveys, but some universities also have a culture in which it is clear that student opinion is not valued, and that there is no intention to either recognize or correct problematic situations. I have always taken the time to fill out evaluations comprehensively. Last semester, I gave up my anonymity and wrote my home phone number on a Simon Fraser University evaluation in hopes of further discussing my concerns about a professor with a real person. I never did receive a call. Consequently, I will not take the time to fill out any more SFU surveys, because clearly, they do not honour that effort by taking the time to address the issues raised.
