Date of Award
Doctor of Philosophy (PhD)
Derek C. Briggs
Lorrie A. Shepard
Erin M. Furtak
Learning Progressions (LPs) are hypothesized pathways describing the development of students’ understanding. Although they show promise for informing decisions about student learning and for developing standards and curricula, attempts to validate LPs empirically have been virtually nonexistent.
The purpose of this dissertation is twofold: 1) to validate an LP by applying psychometric models and 2) to examine and compare these models and their results in terms of their applicability to that LP. I examine the information produced by Item Response Theory (IRT) models and Diagnostic Classification Models (DCMs) when applied to item responses from an assessment—composed of Ordered Multiple Choice (OMC) items—designed to measure an LP of Force and Motion. I apply the Partial Credit Model (PCM; Embretson & Reise, 2000), Attribute Hierarchy Model (AHM; Gierl, Leighton, & Hunka, 2006), and Generalized Diagnostic Model (GDM; von Davier, 2005) to the assessment data.
All three models in this study yield evidence that student item responses do not follow the progressions given in the LP. Hence, the hypothesized LP, as well as the OMC items used to measure student understanding of that LP, should be reexamined. In particular, the assessment tasks and associated OMC items exhibit ceiling and floor effects that impair the models’ abilities to associate student responses with LP levels.
Each model had unique limitations in terms of its applicability to the LP. The PCM’s assumptions and its resulting item statistics were inappropriate, and the model could not be used to classify students into LP levels. In contrast, both the AHM and the GDM did classify students into latent classes, but they were still limited. The AHM’s estimation procedure, which relies on an artificial neural network approach, introduced problems, as did the overall fit of the model. The GDM, although it produced both item-level statistics (unlike the AHM) and student classifications, is so complex that it is conceptually difficult to understand and utilize.
Overall, this study provides insights into how psychometric modeling can inform an LP and its associated assessment, as well as into the viability of three models from two different frameworks in the context of an LP.
Circi Kizil, Ruhan, "The Marginal Edge of Learning Progressions and Modeling: Investigating Diagnostic Inferences from Learning Progressions Assessment" (2015). School of Education Graduate Theses & Dissertations. 75.