Consistency of item response theory results between data sets
ORAL
Abstract
Analyses of data from multiple-choice tests typically begin by scoring each response dichotomously as either correct or incorrect. Dichotomous scoring facilitates many forms of item-level and test-level analyses and leads naturally to reporting student scores based on the number of items answered correctly. It also destroys any information that could be gained by examining which particular incorrect responses students select. This loss of information is particularly relevant for data collected using research-based assessments, on which incorrect response options often correspond to specific commonly held ideas. We have previously used the nominal response model (NRM) from item response theory to rank incorrect responses to items on one such test (the Force and Motion Conceptual Evaluation, FMCE), and we have argued that specific NRM parameters indicate how close any particular incorrect response is to the correct response. We present evidence of the consistency of these rankings and parameter values across two large data sets (~6000 students each). We show that a one-dimensional model treating the FMCE as measuring a single construct is inadequate, and we discuss promising avenues for more complex analyses.
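For readers unfamiliar with the technique, a minimal sketch of the nominal response model follows, assuming the standard Bock (1972) parameterization; the item index i, option index k, and parameter symbols a_{ik} and c_{ik} are generic notation, not taken from this abstract:

P(X_i = k \mid \theta) = \frac{\exp(a_{ik}\theta + c_{ik})}{\sum_{j=1}^{m_i} \exp(a_{ij}\theta + c_{ij})}

Here \theta is the latent trait (e.g., student understanding of force and motion), m_i is the number of response options for item i, and each option k has a slope a_{ik} and intercept c_{ik}. Ordering an item's incorrect options by their slope parameters is one way such a model can rank responses from least to most similar to the correct one, which is the kind of NRM-based ranking described above.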
Presenters
- Trevor I Smith (Rowan University)
Authors
- Trevor I Smith (Rowan University)
- Nasrine Bendjilali (Rowan University)