Achieving Widespread Participation through Evidence-Based Classroom Discourse


Categories: GSI Online Library, Teaching Effectiveness Award Essays

by Elise Piazza, Undergraduate and Interdisciplinary Studies (Home Department: Vision Science)

Teaching Effectiveness Award Essay, 2014

On my first day as a graduate student instructor for Introduction to Cognitive Science, I noticed that participation was limited to a few students, while the rest sat silently, either intimidated or bored. Many teachers ignore this substantial problem of low participation, engendering a “rich-get-richer” effect in which the students whose grasp of the material is weakest do not receive feedback or engage in any interactions about their ideas and interpretations.

As an experimental psychologist, I decided to bring my scientific approach into teaching by turning our discussion section into an experiment. On day two, I used a questionnaire to gather students’ opinions on real-world manifestations of several course topics (e.g., “Which of the following are intelligent and/or conscious: Siri? A dog? An infant? Why or why not?”). In the subsequent class, I presented plots of the survey data, group statistics, and anonymous quotations from the students’ written explanations, which provided a springboard for my overview of major topics in cognitive science. This exercise served as a philosophical icebreaker: a way to illuminate the class’s myriad academic, political, and religious backgrounds and to gauge students’ baseline intuitions about the mind. It sparked lively discussion and yielded more honest and thoughtful responses than if I had asked individuals to volunteer their (likely controversial) opinions aloud to the rest of the class in the first week.

Another experiment I conducted in the classroom used chatbots: online programs that simulate human verbal communication. We had just covered the Turing Test, a classic measure of machine intelligence. I asked the students to administer this test to online chatbots, to try to stump them with creative questions and challenging phrasing, and then to report which kinds of concepts baffled the chatbots the most. (What college student wouldn’t want to trip up a conversational computer program with double entendres, idiomatic phrases, and pick-up lines?) I compiled the students’ findings, and we discussed why certain features of natural human language, such as context dependence and irony, are difficult to implement algorithmically. To interactively demonstrate various perceptual effects and moral decision-making tasks, I re-enacted the original experiments with my students as participants, giving them an intimate, direct view of the process of scientific inquiry. These engaging in-class experiments naturally motivated virtually every student to participate. Rather than feeling pressured or forced, students felt we were working as a scientific team to arrive at answers. Many of the course topics (consciousness, free will) are philosophically challenging, and intimidating to discuss, but I noticed that some students who rarely spoke during the first few weeks were much more likely to participate whenever concrete scientific evidence was introduced as a catalyst for dialogue.

In the final few weeks, I held a series of debates on hot topics (e.g., “Do we have free will?” “Does the mind extend beyond the brain?”) during section. I asked each team to present empirical evidence for their arguments, and I was impressed by the depth of their research. Although the debates were not graded, students enthusiastically participated in them, and several even came to office hours to excitedly share the latest research on face perception or neuroethics that they had unearthed. Many of the more introverted students flourished during the debates, which provided a structured forum for sharing ideas.

The teaching methods I applied promoted participation by involving all of the students in a variety of ways, and I assessed their effectiveness through teaching evaluations, conversations with the students, observation during section and office hours, and exam performance. The debates and in-class experiments enhanced student participation by making the course material more tangible, interdisciplinary (thus covering a wider range of students’ interests), interactive, and fun. The teaching evaluations confirmed this: students universally reported satisfaction with the empirical viewpoints I presented in section (average evaluation score: 6.6 on a 7-point scale, with 70% of written comments specifically praising the interactive, empirical approach). These methods also improved the quality of students’ ideas. As the semester progressed, they asked more conceptually sophisticated questions, and more and more students came to office hours to ask follow-up questions about research studies I had presented. This sparked some terrific conversations, several of which led me to help students secure research assistantships on campus. The organization and depth of students’ exams (in-class essays) also improved, with specific gains in their use of scientific evidence to defend their arguments. Whereas at the beginning of the course many students mainly parroted the lecture material in their essays, by the second exam most of them were employing empirical evidence and vivid real-life examples, demonstrating a deeper understanding of the material. Overall, these empirically driven methods fueled my students’ curiosity to better understand themselves (the heart of cognitive science, after all), and participation rose naturally as a result. At the end of the course, several students expressed heartfelt gratitude and enthusiastically told me that I had inspired them to major in cognitive science.