A Collaborative Response to ChatGPT

Categories: Teaching Effectiveness Award Essays

By Madeleine Levac, Philosophy

Teaching Effectiveness Award Essay, 2024

            The launch of ChatGPT in 2022 introduced new challenges for educators, several of which were front of mind as I prepared to GSI a philosophy course last fall. There was the obvious issue of academic dishonesty. We lack a surefire way of identifying work that relies problematically on this technology; nor is it entirely clear yet what constitutes problematic reliance. In addition, students who turn to AI deprive themselves of the very things I want them to get from a philosophy course, like experiences of critical inquiry, of coming to understand what seemed unintelligible. Finally, I worried about the impacts that a punitive, top-down approach to all this can have on the student-teacher relationship and the classroom dynamic.

            The concrete challenge I faced, then, was how to address some major pedagogical and equity issues without alienating or disrespecting my students. I settled on the following solution. Before the first essay was assigned, I set aside part of our discussion section for the purpose of negotiating, together, an AI policy for the course. Emphasizing that we were just figuring things out and that there were no wrong or incriminating answers, I asked students how they used these technologies and what uses they had heard about from others. Once we had a range of examples on the table, I invited thoughts about their acceptability. I prompted my students to give reasons in support of their views. I supplied, in turn, the reasons behind my own curmudgeonly attitude towards ChatGPT. It turned out that many students were in the habit of using AI software to catch mistakes and improve their writing at the sentence level. Most deemed this unobjectionable, so I gave them permission to continue in our class—something I would not have allowed of my own accord. It was clear that having AI write any part of the work they submitted was not allowed, but we talked about why this was, and what cases it extended to. Some students thought it was fine to have ChatGPT produce an outline for a paper, or condense and ‘translate’ the readings for them. I discouraged these uses but was open about the fact that I likely wouldn’t be able to detect them. There are precautionary and ethical reasons not to use AI on assignments; the more sophisticated these tools get, the more the precautionary reasons fade out. Rather than try to hide this, I wanted to encourage students to approach the question with its moral dimensions in mind.

            A natural way to test the success of this approach would be to compare the extent to which my students and a control group abused AI technologies on their assignments. But now one of the original challenges reappears: we’re not accurate detectors of that abuse, especially in its subtler forms. I nonetheless think that the experiment was successful by different, perhaps more important, measures. I learned things from my students about their relationships to technology that genuinely surprised me. We had a conversation, face to face in a room. Students bravely voiced their own opinions. We practiced philosophical skills: trying out ideas, subjecting them to scrutiny, finding reasons in support of those we do want to endorse, introducing new considerations. Skills like these mattered for the class, and they also matter for navigating a cultural environment that AI is so rapidly shaping. Instructors have a responsibility to uphold certain academic norms and create a fair and safe environment, but the essence of our job is to guide students in their learning, not persecute them. Giving students a voice in determining the rules they are subject to is a small way of encouraging those basic human activities that the latest applications of AI are encroaching upon.