Using large language models to summarize student feedback
ORAL
Abstract
Student feedback on instruction is important for instructors because it provides a means of understanding students' learning needs, preferences, and challenges. Yet for courses with large enrollments, such as introductory physics courses, reading through lengthy feedback and extracting meaningful insights from it can be time consuming, a potential barrier that discourages already time-strapped instructors from regularly collecting feedback in their courses. Large language models (LLMs), which have become more accessible and user-friendly since the launch of ChatGPT in late 2022, offer a potential solution. LLMs have been noted to be effective at summarizing large volumes of text and could therefore be useful for summarizing student feedback quickly. In this study, we used four popular LLMs to summarize end-of-semester teaching evaluations from three instructors and eight course offerings at a large university in the southeastern United States. These courses ranged in size from under 20 students to over 100 and included both lecture and active-learning formats. We compared the summaries of the evaluations generated by the LLMs to summaries generated by the human research team. In general, we find that the LLMs identify trends in the evaluations similar to those identified by the human summarizers, though with some differences. Our work thus suggests that LLMs are a useful tool for quickly extracting insights from student feedback.
Presenters
-
Nicholas Young
University of Georgia
Authors
-
Nicholas Young
University of Georgia
-
Christopher Overton
University of Georgia
-
Ania Majewska
University of Georgia
-
Hina Shaikh
University of Georgia, Eberhard Karls University of Tübingen
-
Nandana J Weliweriya
University of Georgia