Date of Award

Spring 1-1-2014

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Science

First Advisor

Alexander Repenning

Second Advisor

Clayton Lewis

Third Advisor

David Webb

Fourth Advisor

Michael Klymkowsky

Fifth Advisor

Tom Yeh

Abstract

Since the 1990s there have been multiple efforts to fix the broken pipeline in K-12 computer science education, and most of those efforts have focused on student motivation. The results of many studies in computer science education indicate that these efforts have successfully increased student motivation (Perrone et al., 1995; Walter et al., 2007; Kelleher et al., 2005; Kelleher et al., 2007; Maloney et al., 2008; Resnick et al., 2009), but most of them have failed to address the educational benefits of these efforts. I believe this biased tendency in CS education research stems from the lack of an adequate instrument for measuring students' achieved skills against learning objectives at the semantic level. In other words, the right assessment instrument should be able to assess not only students' learning skills but also their achieved learning objectives: what kinds of knowledge students have learned through their activities in class. Student learning skills may be measured with existing tools such as grading rubrics, but these are extremely time-consuming and offer limited functionality for providing necessary educational feedback, such as tracking student learning progression.

I developed a learning data analysis tool that measures student learning skills and represents students' learning achievements at the semantic level through phenomenological analysis in real time. The approach uses a technique inspired by LSA (Landauer, 2003): multiple high-dimensional cosine calculations that analyze the semantic meaning of pre-defined subjects/phenomena in a given context. Theoretically, this idea can be applied to several different domains, such as natural language processing and visual end-user programming. Therefore, it can be employed to build a learning assessment tool for computer science (CS) and/or computational thinking (CT) education, where visual programming is widely adopted.
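The core operation behind this LSA-inspired analysis is cosine similarity between high-dimensional vectors. The sketch below is purely illustrative (the feature vectors and function names are my own, not the dissertation's implementation); it shows how a student artifact and a pre-defined phenomenon, each encoded as a vector, can be compared semantically:

```python
# Illustrative sketch: cosine similarity between high-dimensional vectors,
# the basic operation in an LSA-style semantic comparison.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors for a student artifact and a phenomenon
student = [3, 0, 1, 2]
phenomenon = [1, 0, 1, 1]
print(round(cosine_similarity(student, phenomenon), 3))  # prints 0.926
```

A score near 1.0 indicates that the artifact closely matches the phenomenon's semantic profile, while a score near 0.0 indicates little overlap.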

As a semantic assessment tool for CS/CT learning, I propose Computational Thinking Pattern Analysis (CTPA), a method in which nine canonical computational thinking patterns (Koh et al., 2010) serve as pre-defined phenomena within a programmed artifact's context. CTPA measures students' learning of skills (how well they have learned a skill) and students' learning of objectives (how well they have achieved certain objectives) at the semantic level, through phenomenological analysis of student-programmed artifacts in real time. The outcomes of CTPA can provide valid and useful educational feedback to educators and learners in CS/CT education, such as measuring and tracking student learning outcomes.
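Under the CTPA idea, a student artifact can be scored against each canonical pattern and the closest match reported. The sketch below is a hypothetical illustration, not the dissertation's implementation: the pattern names follow Koh et al. (2010), but the feature vectors and the `ctpa_scores` helper are invented for demonstration, and only three of the nine patterns are shown:

```python
# Hypothetical CTPA-style sketch: score a student artifact against
# canonical computational thinking patterns via cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity; 0.0 if either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy feature vectors over made-up code features; real CTPA would derive
# these from the student's programmed artifact.
PATTERNS = {
    "generation": [1, 0, 0, 1],
    "absorption": [0, 1, 0, 1],
    "collision":  [0, 0, 1, 1],
    # ... remaining canonical patterns would follow the same scheme
}

def ctpa_scores(artifact):
    """Cosine score of the artifact against each canonical pattern."""
    return {name: cosine(artifact, vec) for name, vec in PATTERNS.items()}

scores = ctpa_scores([1, 0, 0, 2])
print(max(scores, key=scores.get))  # prints: generation
```

The per-pattern scores, tracked over time, are the kind of signal the abstract describes for measuring and visualizing student learning outcomes.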

Semantic assessment in CS/CT education would be able to provide better individual feedback and faster learning assessment to students and teachers by measuring students' skills and challenges and by analyzing learning objectives at the semantic level. Such feedback can be used to determine when and how teachers can expand students' learning capability, in accordance with the theories of the Zone of Proximal Development and Flow (Basawapatna et al., 2013). A validated CTPA will contribute to the study of learning theory, professional development, and educational data mining by providing empirical data with which to refine the current conceptual framework of educational systems.

This research suggests a method that can assess students' learning skills, provide effective learning guidelines, and compute students' learning outcomes. Such a method, which is not widely available, can be used to create real cyberlearning systems that help large numbers of teachers and students learn computational thinking.
