Date of Award

Spring 1-1-2010

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Science

First Advisor

James H. Martin

Second Advisor

Tamara Sumner

Third Advisor

Martha Palmer

Abstract

Educational digital libraries and peer-produced open educational resources have become integral to efforts to incorporate personalized learning into the classroom. Assuring the quality of educational content from these sources has become a major concern of the curators of such materials, and of educators who want to use them. But the quality of educational materials is a multi-faceted problem, incompletely understood and often disputed. In current practice, assessing each resource requires focused manual effort by trained experts.

This work attempts to leverage the large existing body of work in computational semantics to supplement and support human judgment in educational resource assessment. Based on an in-depth study of human expert decision processes, the characterization of a resource's quality is broken down into dimensions of quality, and further into low-level, more easily identified indicators of quality; these indicators alone are strongly predictive of an expert's overall quality assessment of a resource.

A corpus of 1000 resources from the Digital Library for Earth System Education (DLESE) was manually annotated for the presence or absence of seven important quality indicators. Human experts were able to make these assessments quite consistently. Using a supervised machine learning and document classification approach, a baseline computational system was able to train models for each of the seven indicators that achieved some agreement with the human annotation. By adjusting the computational system to make better use of the data set, these models were improved to achieve good agreement on all seven indicators.
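The per-indicator supervised approach described above can be sketched as training one independent binary text classifier for each quality indicator. The sketch below uses a TF-IDF bag-of-words pipeline with logistic regression as a stand-in baseline; the indicator names, example texts, and labels are illustrative assumptions, not data or model choices taken from the dissertation.

```python
# Hedged sketch: one binary classifier per quality indicator.
# Indicator names and the tiny corpus below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

INDICATORS = ["has_sponsor", "identifies_age_range"]  # hypothetical names

# Each resource's text, with a 0/1 label per indicator.
docs = [
    "Sponsored by NSF. Suitable for grades 6-8 earth science classes.",
    "A collection of satellite images of cloud formations.",
    "Lesson plan for high school students, funded by NASA.",
    "Raw seismic data tables from recent earthquakes.",
]
labels = {
    "has_sponsor": [1, 0, 1, 0],
    "identifies_age_range": [1, 0, 1, 0],
}

models = {}
for name in INDICATORS:
    # Train an independent bag-of-words model for this indicator.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels[name])
    models[name] = clf

# Predict indicator presence/absence for an unseen resource description.
new_doc = "Grade 4 activity sponsored by NOAA."
predictions = {name: int(m.predict([new_doc])[0]) for name, m in models.items()}
```

In a real system each model's predictions would be compared against the human annotation (e.g. via inter-annotator agreement statistics) to measure how well the classifier approximates expert judgment on that indicator.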

To evaluate the generalizability of this approach, an additional corpus of 230 peer-produced open educational resources from the Instructional Architect (IA) project was manually annotated for quality indicators, using a slightly modified annotation protocol. In spite of the very different nature of the materials, the computational models trained on the DLESE corpus generalized to the new data to a small extent; models trained on the new data achieved mostly good agreement.
