Student Evaluations of Teaching

Student evaluations of teaching (SETs) have long been controversial. Countless studies have investigated whether SETs are valid measures of effective teaching, with the results often simplified into attention-grabbing headlines. This page provides some resources for those interested in learning more about the literature and in developing SET items for their own departments.

  • Betsy Barre at Rice University has compiled an excellent review of the literature. She captures the highlights in this post (and further expands on those ideas in a follow-up post). The main points can be summarized as:
  1. Yes, there are studies that have shown no correlation (or even inverse correlations) between the results of student evaluations and student learning. Yet there are just as many, and in fact many more, that show just the opposite.[2]
  2. As with all social science, this research question is incredibly complex. And insofar as the research literature reflects this complexity, there are few straightforward answers to any questions. If you read anything that suggests otherwise (in either direction), be suspicious.
  3. Despite this complexity, there is wide agreement that a number of independent factors, easily, but rarely, controlled for, will bias the numerical results of an evaluation. These include, but are not limited to, student motivation, student effort, class size, and discipline (note that gender, grades, and workload are NOT included in this list).
  4. Even when we control for these known biases, the relationship between scores and student learning is not one-to-one. Most studies have found correlations of around .5. This is a relatively strong positive correlation in the social sciences, but it is important to understand that it means there are still many factors influencing the outcome that we don’t yet understand. Put differently, student evaluations are a useful, but ultimately imperfect, measure of teaching effectiveness.
  5. Despite this recognition, we have not yet been able to find an alternative measure of teaching effectiveness that correlates as strongly with student learning. In other words, student evaluations may be imperfect measures, but they are also the best we have.
  6. Finally, if scholars of evaluations agree on anything, they agree that however useful student evaluations might be, they will be made more useful when used in conjunction with other measures of teaching effectiveness.
  • San Diego State requires all departments to ask some common questions on their SET forms and to use a common rating scale (see the Policies page for the exact language from the Senate Policy File). In addition, departments can add up to ten additional quantitative and two additional qualitative questions. In the slides for a CTL event on Feb 18, 2016, there are several suggestions for those additional items (note that questions must be phrased so that the common rating scale can be used for responses). Also, the original Senate memo shows the marked up policy language changes and contains several suggestions for reporting of scores.
  • IDEA is a nonprofit organization that has long assisted higher education institutions with their student ratings of instruction. The items on their forms emphasize teaching behaviors known to be correlated with effective teaching and student learning.
  • Students’ Evaluation of Educational Quality (SEEQ) is one of the oldest standardized SETs and has been studied extensively. The items focus on nine factors associated with effective teaching and student learning.
  • The Teacher Behaviors Checklist is another tool that focuses on teaching behaviors associated with effective teaching and student learning. Also see the page on Evaluating Teaching Effectiveness for additional suggestions.
  • Given that student evaluations are administered at the end of the semester, instructors can only use them to improve teaching in future courses. It is also good practice for instructors to collect formative feedback earlier in the semester when there is still time to make adjustments; see the page on Informal Feedback and Formative Assessment for more information.