Methodological Foundations for AI-Driven Survey Question Generation
ORAL
Abstract
This presentation will explore the integration of Generative AI, specifically Large Language Models (LLMs), into Qualtrics to develop adaptive, context-aware survey instruments for educational research. Traditional instruments trade off against each other: surveys scale well but cannot adapt to individual participants, while interviews are adaptive but difficult to scale. AI-driven surveys aim to bridge this gap, providing personalized, evolving interactions that enhance data collection while maintaining research rigor. We introduce the Synthetic Question-Response Analysis (SQRA) framework, a methodological approach for validating AI-generated survey questions in silico before deployment with human participants. Using sentiment, lexical, and structural analyses, we compare AI-to-AI and AI-to-human interactions, examining how AI-generated questions adapt to participant responses. Additionally, we apply Activity Theory to assess how LLM-driven surveys mediate participant engagement and influence response dynamics. We found that while the SQRA framework can improve AI-driven question generation, it cannot fully replicate human response variability, reinforcing the importance of human involvement in AI-mediated research.
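As a concrete illustration of the kind of in-silico comparison SQRA involves, the sketch below scores two response sets on mean sentiment and a simple lexical-diversity measure. This is a minimal sketch, not the authors' implementation: it assumes Python with NLTK's VADER sentiment analyzer, and the response sets, function names, and the type-token-ratio metric are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch (not the study's code): compare sentiment and
# lexical diversity between AI-to-AI and AI-to-human response sets.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()


def mean_compound_sentiment(responses: list[str]) -> float:
    """Average VADER compound score (-1 most negative, +1 most positive)."""
    return sum(sia.polarity_scores(r)["compound"] for r in responses) / len(responses)


def type_token_ratio(responses: list[str]) -> float:
    """Crude lexical-diversity proxy: unique tokens / total tokens."""
    tokens = [t.lower() for r in responses for t in r.split()]
    return len(set(tokens)) / len(tokens)


# Hypothetical response sets; a real SQRA run would load interaction transcripts.
ai_to_ai = [
    "The adaptive follow-up questions felt relevant to my earlier answers.",
    "The survey adjusted its wording based on what I had already said.",
]
ai_to_human = [
    "Honestly the follow-ups surprised me, they picked up on my frustration.",
    "Some questions felt repetitive, but a few really dug into my answer.",
]

for label, responses in [("AI-to-AI", ai_to_ai), ("AI-to-human", ai_to_human)]:
    print(
        f"{label}: sentiment={mean_compound_sentiment(responses):+.3f}, "
        f"TTR={type_token_ratio(responses):.3f}"
    )
```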
Presenters
- Ted K Mburu, Engineering Education Department, University of Colorado Boulder

Authors
- Ted K Mburu, Engineering Education Department, University of Colorado Boulder
- Campbell McColley, Meinig School of Biomedical Engineering, Cornell University
- Joan Rong, Meinig School of Biomedical Engineering, Cornell University
- Alexandra Werth, Meinig School of Biomedical Engineering, Cornell University