by Lyn Paleo, Public Health
Teaching Effectiveness Award Essay, 2003
I co-teach a graduate-level course on program evaluation at the School of Public Health. I believe that students in a practice-based field such as this should receive a combination of theory and skills development. Theory-based lectures are critical; however, they alone are insufficient to the task of teaching people how to design and conduct evaluations for health promotion programs.
These days many, perhaps most, health promotion programs target vulnerable populations such as immigrants, homeless families, gang youth, or substance abusers. People in these groups generally have either low English-language literacy skills or a strong aversion to taking tests. I tell my students that if you invite people to a nutrition education workshop, a gang violence prevention program, or a parenting class and impose upon them a written test at the end of the first session — worse yet, give them a pre-test on material they are supposed to learn in the subsequent weeks — they will be less motivated to return. Written pre- and post-tests are considered an indispensable tool for evaluation, but I assert that there are better alternatives that are no less valid and much less burdensome to program participants.
During the four class sessions devoted to designing surveys and tests, I incorporate skills-building activities along with lectures on measurement theory. First and foremost, though not an innovation of mine, is the tradition that all students partner with a community agency and develop a viable evaluation plan for a real program. This tradition is an indication that the school also values practice-based learning. My innovation lies in the interactive classroom exercises that help students figure out how to design, write, and critique surveys and tests. In several sessions, students in small groups write sample post-test questions on large sheets of paper to demonstrate their understanding of that session's theoretical concept. We then spend a few minutes critiquing the questions and identifying better approaches. In another session, each group gets a stack of two dozen actual surveys and post-tests and has half an hour to find examples of every type of question-writing problem we discussed in the preceding class sessions.
During these sessions, students themselves experience alternatives to the typical paper-and-pencil post-test, and we examine each method for its validity, reliability, and participant response, and discuss the implications for analysis. For example, each student is given a dozen sticker-dots to place on a large flip-chart grid; the number of dots in each square indicates the group's aggregate opinion of which quantitative methods have higher costs, bias, and response rates. We then discuss how this method might be used in a nutrition class in which participants place sticker-dots in front of plastic food models to show which they think represents the ideal serving size. On another day, students receive a sheet with 30 mailing labels on which are printed a mixed-up set of ten concepts, indicators, and measures; they peel off and rearrange the labels to match up the ten sets. This has always been a popular exercise, and it is easy, then, to discuss how labels can be used as a post-test assessment, perhaps one that relies on drawings more than words. Through the combination of these (and other) classroom activities and presentations of theory, students come to consider differences in test-taking modalities: they recognize that some people do fine on a standard written instrument, while for others the better option is an oral test or a skills demonstration.
The proof is in the pudding — or, in this case, in the final product. Students' evaluation plans show not only their facility with the theoretical material, but also their own innovations in gearing health promotion programs' post-tests to both the purpose of the program and the inclinations and language abilities of the target group.