A ‘Tradeoffs’ Method for Soliciting Student Feedback


by Sean Tanner, Public Policy

Teaching Effectiveness Award Essay, 2011

Soliciting useful student feedback can be difficult. At best, the data from a poor feedback instrument will be disregarded as noise; at worst, it will cause the GSI to make unproductive choices in her future teaching. A typical Likert-scale survey item might ask how strongly the student agrees or disagrees with the statement “My GSI used classroom time effectively.” If the median response is “somewhat disagree,” what should the GSI do about it? If the GSI decides to put more effort toward classroom preparation, she will need to reduce the effort spent on some other aspect of teaching. Which part of teaching should be cut back? Most feedback surveys do not inform the tradeoffs a GSI must make as she develops her teaching skills. As the GSI for a two-semester quantitative methods course for graduate students of public policy, I had a comparatively long time to solicit and respond to feedback from the same group of 75 students.

Last semester, I experimented with a method of collecting student feedback that would force the students to make tradeoffs in my time and effort. I gave them a list of the potentially alterable activities I perform as a teacher and the approximate amount of time I put into each activity per week. Next to each activity, I gave a description of what more and less of that activity would mean. For instance, I spent about an hour per week writing comments on their assignments. More time writing comments would mean more thorough written feedback and less time would mean writing checks or minuses with “see key” instead of personal feedback. As another example, I spent about three hours per week preparing in-class practice problems. More time preparing those questions would mean more relevant policy examples; less time preparing those questions would mean more textbook problems. All told, I had eighteen hours per week to distribute across nine teaching activities. Each student reapportioned my time to suit his needs.
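In concrete terms, the form amounts to a fixed-sum budget: nine activities whose weekly hours must always total eighteen, however a student shuffles them. The sketch below, in Python, shows one way such a form could be represented and checked; only the baselines for written comments, practice-problem preparation, and office hours come from this essay, while the remaining activity names and hours are hypothetical placeholders.

```python
# A sketch of the time-budget form as a fixed-sum allocation. Only the first
# three baselines are stated in the essay; the rest are hypothetical
# placeholders chosen so the nine activities sum to 18 hours.
BASELINE_HOURS = {
    "written comments on assignments": 1.0,       # stated in the essay
    "preparing in-class practice problems": 3.0,  # stated in the essay
    "office hours": 3.0,                          # stated in the essay
    "email availability": 2.0,                    # hypothetical
    "section preparation": 4.0,                   # hypothetical
    "grading turnaround": 2.0,                    # hypothetical
    "review sessions": 1.0,                       # hypothetical
    "answer keys": 1.0,                           # hypothetical
    "supplementary examples": 1.0,                # hypothetical
}
TOTAL_HOURS = 18.0
assert abs(sum(BASELINE_HOURS.values()) - TOTAL_HOURS) < 1e-9


def validate_reallocation(student_hours: dict) -> None:
    """Accept a student's reapportionment only if it covers all nine
    activities and keeps the weekly total fixed at 18 hours."""
    if set(student_hours) != set(BASELINE_HOURS):
        raise ValueError("Reallocation must cover exactly the nine activities.")
    if abs(sum(student_hours.values()) - TOTAL_HOURS) > 1e-9:
        raise ValueError("Hours must sum to the 18-hour weekly budget.")


# Example: one student trades two hours of email availability for office hours.
example = dict(BASELINE_HOURS)
example["email availability"] = 0.0
example["office hours"] = 5.0
validate_reallocation(example)
```

The fixed 18-hour total is what does the work: a student cannot ask for more of one activity without giving up some of another.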

With the data from this form, I was able to get a far richer picture of student preferences than a simple Likert-scale questionnaire would have allowed. Preferences for my office hours were clustered tightly around five hours per week (more than the three I was offering), while almost no one cared about my being available via email. The variances of these responses were also informative, as they allowed me to observe how heterogeneous student preferences were. A larger variance in an item made me cautious about changes to the corresponding activity. Perhaps most interesting were the inter-item correlations. For instance, the few students who actually wanted me to be available via email also tended to be above average in their desire for more office hours. These students simply wanted more of my time outside of class and so were well served by increasing office hours alone. In the end, I wound up making moderate adjustments to my time distribution.
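As an illustration of these summaries, here is a minimal sketch that computes per-activity means, variances, and inter-item correlations on simulated numbers, not the actual class responses; the latent “outside-of-class demand” term is purely an assumption, included so that the simulated office-hour and email requests co-vary the way the real ones did.

```python
import numpy as np

# Simulated (not actual) student responses: rows = students, columns = activities,
# entries = requested hours per week for that activity.
rng = np.random.default_rng(0)
n_students = 75

# A hypothetical latent "outside-of-class demand" factor makes office-hour and
# email requests co-vary, mimicking the correlation reported above.
outside_demand = rng.normal(0.0, 1.0, size=n_students)
office_hours = 5.0 + 0.6 * outside_demand + rng.normal(0.0, 0.3, size=n_students)
email_hours = np.clip(0.3 + 0.4 * outside_demand + rng.normal(0.0, 0.3, size=n_students), 0, None)
comment_hours = np.clip(rng.normal(1.0, 0.8, size=n_students), 0, None)

responses = np.column_stack([office_hours, email_hours, comment_hours])
labels = ["office hours", "email availability", "written comments"]

means = responses.mean(axis=0)               # where preferences cluster
variances = responses.var(axis=0)            # heterogeneity; high variance counsels caution
corr = np.corrcoef(responses, rowvar=False)  # inter-item correlations

for name, m, v in zip(labels, means, variances):
    print(f"{name:20s} mean={m:4.1f} hrs  variance={v:4.2f}")
print("office hours vs. email correlation:", round(corr[0, 1], 2))
```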

The immediate response from students was overwhelmingly positive. Many students thought it was innovative and appreciated the opportunity to tinker with my time. The changes I made to my teaching activities were followed by a precipitous drop in student requests for more or less of those activities. A follow-up survey at the end of this semester should provide another measure of how effectively reapportioning GSI time based on student feedback can address students’ needs.