From Skilled Test Takers to Budding Scientists: Overhauling Assessment in Cognitive Neuroscience

Categories: Teaching Effectiveness Award Essays

By Manon Ironside, Psychology

Teaching Effectiveness Award Essay, 2022

As Fall 2020 approached, it was clear that the assessment structure for Cognitive Neuroscience needed a major overhaul. The rigorous multiple-choice exams administered twice per semester were ill-suited to remote learning and assessment, and the need to revise course assessments presented an opportunity to reconsider the pedagogical soundness and accessibility of the course more broadly. For example, while the multiple-choice exams generally produced a normal distribution of scores, they did not accurately reflect the work demands of cognitive neuroscience careers. In past semesters, these exams were also heavily weighted, and this narrow style of assessment penalized students whose strengths were more apparent in other assessment formats, as well as students who struggled to process written information quickly.

In the summer of 2020, I completed the GRI Innovation Fellowship, through which I carefully developed a novel assessment plan for the Cognitive Neuroscience course; the professor accepted the plan, and GSIs implemented it in Fall 2020. I designed a 4-part rolling final project with legitimate assessment in mind. The professor of the course emphasized the key learning objective of developing critical thinking skills. I expanded this objective to incorporate the six levels of Bloom’s taxonomy, and in doing so it became clear that a legitimate assessment in this course would involve students constructing their own research proposals through an iterative process. Through the assignment I developed, students: 1) conducted a literature critique; 2) proposed a novel research hypothesis based on that critique and specified the methods required to test it; 3) reviewed and provided feedback on another student’s research proposal; and 4) incorporated peer feedback and presented (expected) results. At each stage, students also received feedback from GSIs, which they used to improve their proposals.

In addition to developing this assessment, I considered the importance of community-building, asynchronous options, and the exceptional challenges students faced amid the COVID-19 pandemic. During sections, I grouped students into smaller ‘pods’ that remained consistent throughout the semester, and incorporated group work into part of each section. This served the dual purpose of building community and breaking the section into smaller, more manageable pieces. I uploaded short tutorial videos about the final project to bCourses using Kaltura, so that students could access the material asynchronously. Finally, students were assessed more heavily on progress than on ability alone, allowing students who struggled in the course to be recognized for their growth and learning.

To test the effectiveness of this assessment approach, we asked students to complete a reflection in which they evaluated their own learning over the course of the 4-part project. Some students found the project quite difficult, and we used this information to incorporate increased scaffolding the following semester. Other students expressed gratitude for the level of feedback and the continuity of the 4-part project. In particular, students interested in pursuing research and graduate school mentioned that they felt much better prepared to apply for research assistant positions and had a clearer sense of what it means to be a cognitive neuroscientist. Because many applications for research assistant positions and undergraduate grants require a research proposal, students who completed the 4-part project left the course with a template for what a quality proposal looks like. Overall, this novel assessment style was considered successful and pedagogically sound, and it continues to be implemented in the course.