
KU developing new, more insightful method of evaluating teaching than traditional student forms

Monday, March 18, 2019

LAWRENCE — The form is nearly as ubiquitous as the syllabus or the final exam.

Near the end of the semester, students fill out professor evaluations. For decades, those surveys have been the standard for evaluating college teaching. As concerns about the validity of those surveys grow, faculty at the University of Kansas are leading an effort to develop more insightful, broader ways of evaluating teaching that can be adapted across disciplines and institutions.

“We wanted evaluations of teaching to take into account the intellectual work that goes on behind the scenes when a teacher is not in front of the class,” said Andrea Greenhoot, director of the Center for Teaching Excellence, which is leading the project at KU. “That is actually where the vast majority of faculty time and effort on their teaching is spent.”

The result was a rubric that is being tested at KU, the University of Massachusetts, Amherst, and the University of Colorado, Boulder. The three universities, along with Michigan State University, are in the second year of a five-year National Science Foundation grant aimed at creating a more meaningful approach to evaluating teaching. The rubric still takes student surveys of teaching into account but puts that feedback in context by bringing attention to instructors’ work in designing meaningful courses and assignments, documenting student learning and reflecting on course outcomes.

The rubric is part of a project known as the Benchmarks for Teaching Effectiveness, which builds on work the center has done over the past decade. In 2014, the center began developing the rubric to guide improved evaluation. Student surveys of teaching tend to reduce instruction to a number or rating, which then becomes the primary focus of the faculty evaluation process. While student surveys of teaching have value, they also come with inherent biases.

“The idea was ‘why don’t we come up with a better number, something more comprehensive and meaningful?’” Greenhoot said.

The other most common form of evaluation, peer observation, also has its limitations, namely that it tends to focus on only one or two class sessions. Neither method digs deeply into everything that goes into a course behind the scenes.

The Benchmarks for Teaching Effectiveness rubric identifies seven categories of instructional practice:

  • Goals, content and alignment
  • Teaching practices
  • Achievement of learning outcomes
  • Classroom climate and student perceptions
  • Reflection and iterative growth
  • Mentoring and advising
  • Involvement in teaching service, scholarship or community

Under the NSF grant and with support from CTE, nine KU departments are using the rubric to develop new evaluation and mentoring processes. The rubric also includes guidelines on how a teacher’s performance in each area can be rated as below, meeting or exceeding expectations, as well as guidelines on how those standards can be defined. The system prompts a deeper look at teaching evaluation with input from three key stakeholders: students, peers and the instructors themselves.

“‘There’s no other way to evaluate teaching.’ That’s a statement I’ve heard many times over the years,” said Doug Ward, associate director of the Center for Teaching Excellence. “But we want to show people that there are other ways. I think we’re on the cusp of some very significant changes in the way teaching is evaluated.”

One of the biggest strengths of the Benchmarks rubric is its adaptability. Teaching is not the same from one discipline to another, nor from one university to another. As part of the grant project, the center invited departments at KU to participate. Those departments develop consensus on priorities for effective teaching and use the rubric as a guide for measuring outcomes. The center provides support and advice throughout the process, and the departments provide feedback on their experiences. More departments will be invited to take part in future years, and the plan is to make the refined rubric available to universities across the country and internationally.

“The goal is to really improve the rubric and to develop processes the departments can use to implement the system efficiently,” Greenhoot said. “If we can develop some really good models, we’d like to see it spread. We don’t want to see an administration say ‘you have to do this’ but have institutions that want to use it because it can provide a much deeper look at effective teaching and learning.”

The effort has drawn the support of the Association of American Universities as well. In 2011, the organization launched its undergraduate STEM initiative, which included a focus on improving instruction at research universities. Emily Miller, AAU associate vice president for policy, said the effort fits a focus on excellence in both teaching and research, a large part of what it means to be an AAU institution.

“Student evaluations have valuable information to offer but not necessarily at the breadth of what we’d like to see in this space,” Miller said. “This rubric has been based on scholarship into how people learn, and it’s been exciting to see the movement. We would like to see more AAU institutions looking more robustly into how they’re documenting classroom teaching and other areas such as curriculum development and effective ways to evaluate teaching and learning.”

Research universities have long placed a great deal of importance on research productivity in the promotion and tenure process. Miller, Greenhoot and Ward all agreed teaching effectiveness should play a larger role in tenure consideration, and that a better way of evaluating teaching, with multiple points of input, could both provide an incentive to do so and offer a more comprehensive picture for all involved. The type of teaching evaluation made possible through the Benchmarks for Teaching Effectiveness program could benefit instructors at all points in their careers, though, Greenhoot and Ward said. Foremost, it could establish the expectation that quality teaching is highly valued.

Miller said that would pay off for students in several ways: giving them a broader understanding of course and subject material, helping retain students through high-quality, welcoming instruction, and carrying better understanding across courses.

While student surveys of teaching were never meant to be the dominant or only source of evaluation, they should not be entirely discounted either, Ward said. Similarly, peer evaluation can offer a beneficial look at teaching but tends to focus on an hour or so of in-class instruction rather than the work that takes place across semesters as a course is developed and refined. The Benchmarks for Teaching Effectiveness project is developing a way for teachers, students and peers to gain insight into what effective instruction is and can be in courses across disciplines and institutions.