Modeling response times along with scores in IRT models – usefulness in item and assessment design.
POSTER
Abstract
Item Response Theory (IRT) models are widely used in psychometrics and education research to model students' latent ability(ies) and item and test characteristics. This is commonly done by fitting individual student scores on items to estimate student 'ability' and item 'difficulty' and 'discrimination' parameters. However, a class of IRT models analogously models student response times (RT) on items, typically using lognormal fits, to estimate student and item 'slowness' parameters. These 'slowness' parameters indicate how time-consuming an item is or how comparatively slow a student is. We apply these models to student data on conceptual tests of 1-D kinematics and Newton's laws to understand the relevance of the 'slowness' parameters in gauging physics conceptual learning. We find broad ranges for these parameters, suggesting varying strategies or behaviors across students and items. Interestingly, the item slowness parameter is positively correlated with the score-based item 'discrimination' parameter and with surface-level item features such as length, number of response choices, and the presence of figure(s). Based on our findings, we discuss how such RT models can reveal important features useful in designing and selecting items for a test and in determining appropriate time limits.
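For readers unfamiliar with response-time IRT, a common formulation of this class of models is van der Linden's lognormal response-time model. The sketch below shows that standard parameterization purely as an illustration; the abstract does not specify the exact model or notation used in this work, so the symbols here are assumptions.

% Lognormal response-time model (van der Linden, 2006), a common parameterization;
% the poster's exact model and notation may differ.
% T_{ij}  : response time of person j on item i
% \tau_j  : speed of person j (its negative plays the role of person 'slowness')
% \beta_i : time intensity of item i (the item 'slowness')
% \alpha_i: time-discrimination of item i
\[
  \ln T_{ij} \sim \mathcal{N}\!\left(\beta_i - \tau_j,\; \alpha_i^{-2}\right),
  \qquad
  f(t_{ij}) = \frac{\alpha_i}{t_{ij}\sqrt{2\pi}}
  \exp\!\left\{-\tfrac{1}{2}\,\alpha_i^{2}\bigl(\ln t_{ij} - (\beta_i - \tau_j)\bigr)^{2}\right\}.
\]

In this parameterization, a larger \(\beta_i\) marks a more time-consuming item, a larger \(\tau_j\) a faster (less 'slow') student, and \(\alpha_i\) governs how tightly log response times cluster around their expected value.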
Presenters
-
Harish Moni Prakash
The Ohio State University
Authors
-
Harish Moni Prakash
The Ohio State University
-
Andrew F Heckler
The Ohio State University