Students vs. academics in right to rate professors

At issue: Students want the right to view professor ratings while teachers fear misleading data

by Josh Dehaas

Jemma Wolfe had been dreaming about taking McMaster University’s fairy-tales course ever since she read about it in a brochure when she was in high school. But this arts graduate’s story does not have a happy ending. She says the professor wasted time sorting out technical problems, only to show the class irrelevant YouTube videos. The professor let students ramble on, and showed up late. At least three times, hundreds of students sat in a lecture hall until a notice reading, “Sorry, can’t make it,” popped up on their class website. Wolfe stuck with it, but avoided a course on children’s literature because it was taught by the same prof.

The teacher, whom she won’t name publicly, had a low score on RateMyProfessors.com, a U.S.-based website where anyone can comment on a professor’s teaching, grading practices—even “hotness.” But the site has a reputation as a place where disgruntled students unload, so she decided to express her dissatisfaction on the official teacher evaluation so the professor might benefit from the feedback and the prof’s superiors would know about the problems.

Wolfe thinks that data, often seen only by professors and sometimes administrators, should be shared so that students can avoid poor instructors and seek out the best. It’s not the first time it has come up. Back in 1997, McMaster created a policy to make some of the information available if the instructors agreed to share it.

But when economics professor Martin Dooley went looking for his results at the library last year, they weren’t available, and hadn’t been for years. After he suggested the policy needed updating, McMaster’s senate committed to posting the results online, but only after the computer system is upgraded. He expects the evaluations will be available by April, 17 years after the first policy was adopted.

Even then, as with most schools that share evaluation data with students, disclosure is voluntary. In fact, many contracts negotiated by teachers’ unions stipulate that the comments section can only be seen by the instructor (not even administrators are privy to it) and the numerical scores are seen only by those who hire and fire. The main argument against transparency is that students may, consciously or unconsciously, give low scores because they are biased, as revenge for poor marks, or even because they don’t like what the instructor wears. Teachers say evaluations are only part of the picture, and students might be misled if that’s all they see.

Nonetheless, students say that, because they are paying thousands of dollars for their education, they have a right to see the scores before deciding which courses to take. Last year, Jordan Sherbino, a vice-president of the University of Saskatchewan Students’ Union, discovered that his school never implemented a 2002 policy to make some information from the evaluations available to students. He’s now lobbying for online access similar to what students have at the University of Toronto and the University of British Columbia. At UBC, they can log in to see an average score, on a scale of one to five, for degree of agreement with six statements, including, “Overall, the instructor was an effective teacher,” and, “The instructor communicated the subject matter effectively.” At U of T, students in the faculty of arts and sciences have, for decades, had access to the “anti-calendar.” Today, students can log in to see average scores on parameters such as, “I would recommend this course,” and, “Assignments improved understanding,” along with response rates, which are often high. Other departments are in the process of implementing a 2011 policy to share course evaluation data with students, but only if the professor agrees to release it.

“It’s about finding the best instructor for the student and not having to rely on RateMyProfessors.com,” says Sherbino. “I don’t know if I’ve ever talked to a student who doesn’t use RateMyProfessors, but it’s not accurate information.” Official student evaluations, he argues, are more reliable because they’re based on a bigger sample size. Sherbino would love to have the scores for all teachers, but at this point, he’s only lobbying for a voluntary system. The U of S professors’ contract says course evaluations are part of personal files, and are only shared with those who decide tenure, promotions and dismissals.

Professors aren’t opposed, in principle, to sharing numerical averages of evaluations (comments are another story), but Michael MacGregor, vice-chair of the U of S faculty association, worries about them going online without enough context for students to understand what they mean. Will they realize when the sample size is too small to be meaningful or that the spread between a 4.0 and 4.5 on a five-point scale might be statistically insignificant? Will they understand that a psychology instructor might score better than an art history teacher because students, on average, prefer psychology? Hiring committees buffer these problems by looking beyond numerical data, by sitting in on a lecture, for example. Students, however, are unlikely to do that. He also worries that teachers might avoid experimentation out of fear that bad results one semester could mean students avoid their classes the next.

And numerous studies have shown that evaluation data are not as objective an indicator of teacher quality as students may think, says Linda Rose-Krasnor, president of the Brock University faculty association. “For example, students are biased against minority women,” she says, adding that they also sometimes judge teachers on unrelated factors such as clothing, age and attractiveness. “It’s not that we’re against sharing information about how we teach,” she says. “It’s just that we have some difficulties with teacher ratings as a single measure.”

Not to mention there are ways to manipulate the system. In his first year of teaching, a seasoned instructor gave John Young some advice: “You give the students good grades on the midterm and then, after the student evaluations are through, you hit them hard on the final [so the] grades are acceptable to the chair of the department, but you still get good student evaluations.” Those evaluations are most useful over time, says Young, a former dean who is now acting vice-president academic at the University of Northern British Columbia in Prince George. Having a few semesters of feedback means instructors can learn how to teach without worrying about the next contract, job, tenure or promotion. “You teach a course once, get some evaluations back, make some adjustments, take the next iteration, make some adjustments,” Young says. “So, over the long haul, they can have a tremendous impact on teachers addressing their weaknesses.” The problem with such an incremental approach is that students can be so turned off by a single experience that they change programs. And despite an increasing recognition of the importance of teaching by administrators, universities require no formal training for instructors, making it easier for poor teachers to end up in front of classrooms.

“You don’t really get any guidance,” says Karalee MacDonald, a master’s student at UNBC who is teaching first-year students a writing course for the first time. Last semester, she took an optional three-day workshop on how to teach, but says some of her peers didn’t even have that before stepping into the classroom. MacDonald supports the idea of giving students access to course-evaluation data. “It’s hard to get information on professors, and that’s an important determinant in whether you’ll excel at school.” She had a couple of poor teachers in her undergraduate years and she’s determined to do better.

Last year, Mary Perino, who is pursuing a combined arts and education degree at Brock University, had two English classes with the same professor. She says she couldn’t follow the lectures and her ideas were shot down. She sought extra help—from the instructor and the campus writing centre—and even had a high school teacher review her work, yet she still got poor grades and blistering comments on her essays, including one that said she might never make a career out of English. “Every time I saw her, I felt so anxious and so stupid,” she says of the teacher. “It was hard when she even looked at me, because I felt like she was judging me, which was stupid, because I paid so much to be in her courses.”

Perino thought about dropping English, but decided to stick it out. Now she’s glad she did. Despite her experience, she is now considering a career—as an English teacher.

  1. We make it as difficult as possible to get an education in this country, right at a time when we need educated people the most.

    It’s like a hazing ritual before you’re actually allowed to learn anything….gawd knows why.

    On edit: well actually we do know why….it’s high priests hoarding knowledge.

  2. I attended the University of Saskatchewan a long time ago. At that time professors did not need any education classes before entering the lecture hall. The result was some memorably bad teaching practices, although I was fortunate to have mainly good teachers. In later years I heard that an education program was provided for professors but when I talked to an education student he told me that the professor who taught the course was the worst instructor in the College of Education. Blind leading the blind I guess.

  3. Back in my university days, I had the opportunity to participate on faculty hiring and I found the process quite good.

    1. The hiring committee was made up of a cross-section of the department: (grad student, undergrad and a non-majoring undergrad + Chair of the department + full-time faculty + one non-faculty member from another department).

    2. The process was diverse. It was a combination of sitting in on a guest lecture, reviewing their CV, faculty-only interview, student-only interview.

    3. The process was meaningful. When our student component gave our impressions of the potential candidates it felt like our voices were truly being heard by the Chair of the department.

    This was not a one-day operation. To put together something as meaningful and transparent as this takes time, takes effort and takes buy-in from both faculty and students. I guess I count myself lucky to have been part of something like this.