Getting Familiar With Faculty Evaluations
Students always know when it’s about to happen. With less than two weeks of classes left, the professor wraps up one class a few minutes early and reaches for a manila envelope on the desk. After a brief speech about something called the SIRs, a questionnaire is distributed to you and your peers. The professor quietly leaves, giving you a few minutes to look it over and answer the questions laid out before you.
For once, you are the grader scoring your professor. Think hard about the last 16 weeks, because the “grade” you give your instructor and course overall turns out to be more significant than you thought.
According to Provost Thomas Pearson, there was no formal student evaluation system when he came to then-Monmouth College in 1978. However, there were ways students could leave feedback, such as a book of comments that he believes was kept in the Registrar’s Office. “Students could come by and see what other students thought about faculty, courses, things like that. But it was very random, rather informal, and the comments to the best of my knowledge were not used in any formal way by faculty,” Pearson says.
Pearson says it wasn’t until the mid-1980s that a formal student evaluation system was introduced as part of negotiations with the Faculty Association. The University saw this as important for evaluating faculty in order to make decisions on tenure, candidacy, continuance and promotion. “Having a more formalized process was felt by the administration to be at the advantage of everybody.”
The instrument agreed upon was Educational Testing Service’s (ETS) Student Instructional Report, or as most call it, the SIR. It is a nationally normed instrument; Pearson has heard that using one “consistently over time provides the best results in terms of evaluating faculty performance and curricular development.”
The questionnaire handed to students is designed by ETS, so the University has no control over the content or questions that appear on it. The Provost’s Office explained that the University has a contract with ETS; the annual costs for the SIRs are $9,500 for processing and $4,500 for materials.
Pearson says the reports are intended to serve two purposes. “One, to serve the faculty in terms of giving them the sense of student feedback on the course on the various aspects from the organization and planning, communication, the interaction between faculty and students, assignments, course outcomes, workloads and overall performance. It’s meant to be helpful to faculty in order to review it in order to think of ways to improve their instruction.” The second purpose it serves, Pearson continues, directly feeds into the evaluation processes for continuance, tenure and promotion.
Susan O’Keefe, Associate Vice President for Academic Administration, explains the administration process of SIRs. “We know which faculty are expected to complete the SIRs in a given year. We notify them, and they are asked to select which of their classes they will have the SIR administered in,” she explains, adding that one restriction is a class must have at least five or six students in it.
Pearson adds that tenured professors distribute SIRs to half of their classes every other year, as required under the faculty contract. For the faculty who are on probationary status, if they teach eight courses a year, they have to distribute SIRs to six of them.
Materials are ordered and received in the Provost’s Office. Once faculty notify the office which classes they would like to be evaluated in, the materials are distributed to the dean’s offices in each of the schools and made into packets for each participating faculty member. Instructions are included for each professor, such as picking a student assistant to return the materials to the Registrar’s Office, leaving the room while the SIRs are being filled out, etc.
Once they are returned to the Registrar’s Office, they are repackaged and mailed to ETS where they are analyzed and then sent back to the University and distributed for the faculty to review.
One of the reasons to stay with the SIR and not use a different evaluation system is because it’s a nationally standardized instrument, Pearson explains. “It really helps us not only see how the faculty member does over a period of time in various courses, but it’s indexed against the national average for faculty teaching those courses. So, we actually have yardsticks we can use and consequently we can make a determination if the faculty is consistently in the bottom 10 percent nationally…that’s not a good place to be as opposed to be more towards the center or above the center.”
As for the faculty who dislike the SIR, Pearson says, “They don’t see it as especially helpful in terms of developing their teaching. They see the categories as rather fixed, unless the faculty have written questions that students fill out, supplemental questions on the form. Sometimes they feel the instrument is not especially valuable. I think others of us have the perspective that it is valuable and that one needs really to know how to interpret the results.”
Ajda Dotday, a senior finance major, likes the idea of faculty providing supplemental questions. “If they handed out more questionnaires that asked what students would change about the class, then professors could understand what students are looking for,” Dotday says.
When asked to comment on the SIRs, some faculty declined.
Harvey Allen, assistant professor of education, feels his students understand the importance of the SIR. “Since I teach only graduate students who are training to be principals or supervisors, I think they take the evaluation process very seriously. Perhaps putting the whole process online might be a better way to collect the data,” Allen says.
Once the SIRs get returned, faculty members have the option of getting them interpreted from the Center for Excellence in Teaching and Learning (CETL). “The director of that center is particularly good in terms of working with faculty who request to sit down and have their SIRs explained. In addition, each department chair is supposed to meet with their faculty after the reports have come in and talk about how the faculty member has done and how improvements might be made,” Pearson says.
In the last 10 years, the SIR was replaced by the SIR II, which provides more information. However, Pearson says these reports are not the only method the University uses to evaluate faculty. Besides SIRs, there are observations that faculty do of their colleagues and that deans do of faculty; when faculty come up for tenure, department chairs phone students to get their opinions on the faculty member, and there are also alumni surveys of faculty.
Several students complain about filling out the reports each semester; many of them feel they do not see changes made after academically rating their professors. However, Pearson says, “I think that is a misperception because they are heavily used in terms of evaluating teaching. Because of the nature of the information and the fact that they’re collected over a period of time, they’re often given considerable weight among the various factors in determining teaching. There are faculty who have said that SIRs appear to be the only instrument that are used for evaluating teaching, which is not true. But I would say just because of the nature that they tend to be pretty significant weight, and so I think faculty would say they hope that students fill them out conscientiously, and try to dissociate their performance in the course with their evaluation of the instructor and the course.”
George Mena, a senior psychology major, believes the SIR is an appropriate method. “I think a teacher is more than just a person who is hired to provide a lecture, and there is a lot of planning that should be put into course meetings. SIRs give us, the students, a chance to criticize the extent to which the professor does this,” Mena says.
There have been some rare cases, Pearson explains, where the packets have not been delivered or certain information has not come back from ETS. “But when faculty apply for continuance and so forth, they write self-evaluation statements where they are asked to really reflect on the results of the SIRs and if there are missing SIRs to explain what happened in that semester, and if they did not perform particularly well they have the opportunity to explain what they think are the factors,” Pearson says.
If a faculty member is repeatedly receiving low scores on the reports, Pearson says the University will look into the situation. “If it’s a pattern and a significant problem, even with a tenured faculty, that would be the basis for bringing together a committee to review the faculty’s performance in terms of continuation.”
Greg Cenicola, a junior criminal justice major, believes that the evaluations are good, but biased. “Most kids don’t care and don’t take them seriously,” he says, “so kids just fill them out with one number; or you get the kid who’s not doing good in class because they never show up or do anything and give the teacher ‘1’s.’ So I think as an idea they’re awesome, but how practical are they is the question.”
Other students have heard that if they fill out an entire SIR with all one number (‘5’ being the highest score, ‘1’ the lowest), their scores are automatically not counted. Pearson says he does not know how ETS scores each report, but both he and O’Keefe do not believe that is the case. “The worst thing that students can do in the evaluation process is not take them seriously or put things down that are not correct,” he says.
Pearson talks of a common misperception among faculty and even students: that only faculty who are popular, easy and give high grades get good SIRs. “I’ve been looking at them for over 20 years as Provost and, in fact, that’s not true. It may happen in some cases, but in many cases I found that faculty who are really good, who are demanding but are also accessible and supportive of students, do very well on SIRs.”
Something the University is considering is adding an evaluation instrument for online classes. “That issue is being negotiated now. That’s a deficiency we need to fix because more and more faculty are teaching online. We’ll definitely want to use a national online instrument, what that will be remains unseen,” Pearson said.
Other ideas include evaluations for adjunct professors, as well as summer evaluations. Pearson says evaluating adjuncts with SIRs has historically been a cost issue, but with more adjuncts teaching courses at the University, “Having the SIRs used by all of the elements of the faculty would give us a better read on performance, as opposed to a department using a separate instrument for adjuncts, and something else for online, and so forth, so we’ll definitely be looking into that.”