### by Andrew Shi, Mathematics

#### Teaching Effectiveness Award Essay, 2020

In Math 128A/B, students get their first exposure to mathematical problems that lack *exact* answers, where only *approximate* answers can be found. The course goes beyond just finding these answers; it emphasizes the design, comparison, and implementation of algorithms that find these approximations. This paradigm shift can be very jarring to students conditioned to believe that math is largely about “cookbook-style” problems where every answer can reliably be found in the back of the textbook. In these courses, we study many of the so-called “Top 10 Algorithms of the 20th Century” in science and engineering. After the key points of a particular algorithm are sketched out in lecture, we assign programming projects to implement them, with prompts as terse as “Implement the Fast Multipole Method.” Not surprisingly, given so little guidance, students often complain they don’t even know where to start! It became clear to me they needed to learn how to break down the main algorithm into a series of bite-sized chunks that could be independently coded up and tested for accuracy — a recipe of sorts.

Mathematicians have a habit of presenting algorithms only in their final, polished forms that seem to work like magic. What we don’t explain are the decades of incremental development behind the scenes, with all the trial and error involved. Before the Fast Multipole Method made it onto a Top 10 list, it evolved from a more humble “*Slow* Multipole Method,” and even before that, an exceedingly simple “Slow *Single*pole Method.” I first tasked my students with delving deeper into the origin story of each algorithm and tracing it to its earliest ancestor. With my guidance, students were able to find the starting-point algorithm that was easy to code up and understand how all the bells and whistles were layered on to arrive at the final algorithm.
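To make the “earliest ancestor” idea concrete: the natural starting point for the Fast Multipole Method is plain direct summation, where every particle’s potential is computed from every other particle in O(N²) work. The sketch below is my own illustration (not course code, and the 1/|x − y| kernel and 1D positions are simplifying assumptions); it is exactly the kind of easy-to-code baseline a student can write and test first, before any multipole machinery is layered on.

```python
def direct_sum_potentials(positions, charges):
    """Compute phi_i = sum over j != i of q_j / |x_i - x_j|
    by brute-force O(N^2) direct summation (1D positions)."""
    n = len(positions)
    potentials = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                potentials[i] += charges[j] / abs(positions[i] - positions[j])
    return potentials

# Three unit charges at 0, 1, and 3; small enough to check by hand.
print(direct_sum_potentials([0.0, 1.0, 3.0], [1.0, 1.0, 1.0]))
```

Because this baseline is trivially correct, it also doubles as a reference answer: every faster variant the students build later can be tested against its output on small problems.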

Now that we had a list of simpler subproblems to solve, we needed to formulate a detailed set of steps to code each one up. I challenged my students to explain their thought processes, confronting small details instead of glossing over them. This was especially important since the code they would have to write needed to be completely unambiguous. After the students wrote down a recipe and test cases for each problem, I asked them to exchange their work with a partner. Since each of them had already put so much thought into the problem individually, they were able to have productive discussions, learning from each other’s thought processes and filling in any gaps. With detailed recipes in hand, the students reported that the actual coding itself was a breeze after all the thinking and heavy lifting had already been done. As a Ph.D. student in computational math, I was able to share my own experiences breaking down daunting problems and offer guidance and encouragement.
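A hypothetical miniature of the “recipe plus test case” workflow, using the composite trapezoid rule (a standard 128A topic; this particular function and test are my own illustration, not a student’s submission). The recipe spells out each step as a comment, and the test case is one a student can verify by hand before moving on to the next chunk.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule for the integral of f on [a, b].

    Recipe: (1) split [a, b] into n equal subintervals of width h,
    (2) weight the two endpoint values by 1/2, (3) add the n - 1
    interior values at full weight, (4) scale the total by h.
    """
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

# Hand-checkable test case: the integral of x on [0, 1] is 1/2, and the
# trapezoid rule is exact for linear functions, so any n must recover it.
assert abs(trapezoid(lambda x: x, 0.0, 1.0, 4) - 0.5) < 1e-12
print("test passed")
```

The point of the exercise is that the comment block and the assertion exist before the loop is ever written; once both partners agree on them, the coding itself is mechanical.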

I started off the semester providing many of the details to the students, but as the year went on, I gradually took off the training wheels (without them noticing)! It became clear this approach was successful when my office hours turned from students sheepishly asking questions like “What do I do now?” and “Is this right?” to students gushing with their own ideas on how to tackle the problem. As a GSI, it was immensely satisfying to see their final reports go from reflecting scattered thought processes to much more structured ones. Even though most of my students don’t end up using any of the actual algorithms we studied, many of them report back that a systematic and incremental approach to problem solving has been critical in their own endeavors. Some students tell me this class is where they found their confidence and independence as mathematicians. Having seen so many of them grow in the past four semesters, I believe they could go on to develop the top 10 algorithms of the 21st century!