Date of Award

Spring 1-1-2011

Document Type

Thesis

Degree Name

Master of Arts (MA)

Department

Psychology & Neuroscience

First Advisor

Matthew C. Jones

Second Advisor

Randall C. O'Reilly

Third Advisor

Michael C. Mozer

Abstract

Humans demonstrate an incredible capacity to learn novel tasks in complex dynamic environments. Reinforcement learning (RL) has shown promise as a computational framework for modeling the learning of dynamic tasks in a biologically plausible way. However, the learning performance of RL depends critically on the representation of the task. In the machine learning literature, representations are carefully crafted to capture the structure of the task, whereas humans autonomously construct representations during learning. In this work I present a framework integrating RL with psychological mechanisms of representation learning. One model presented here, Q-ALCOVE, explores how RL can adapt selective attention among stimulus dimensions to construct representations in two different tasks. The model proposes that selective attention can be learned indirectly via internal feedback signals central to RL. I present the results of a behavioral experiment supporting this prediction as well as modeling work suggesting a broad psychological scope for RL.
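The mechanism sketched in the abstract — exemplar-based value estimates whose per-dimension attention weights are adjusted by the same internal RL feedback (TD error) that trains the value associations — can be illustrated schematically. The following is a minimal hypothetical sketch, not the thesis's actual Q-ALCOVE implementation; all names, constants, and the random training transition are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of a Q-ALCOVE-style learner: Q-values come from
# attention-gated similarity to stored exemplars (as in ALCOVE), and the
# TD error drives learning of BOTH association and attention weights.
n_dims, n_exemplars, n_actions = 3, 8, 2
exemplars = rng.random((n_exemplars, n_dims))  # stored exemplar coordinates
assoc = np.zeros((n_actions, n_exemplars))     # exemplar -> Q associations
attention = np.ones(n_dims) / n_dims           # selective attention weights
c, lr_w, lr_a, gamma = 2.0, 0.2, 0.05, 0.9     # illustrative parameters

def activations(x):
    # attention-weighted city-block distance, exponential similarity
    dist = np.abs(exemplars - x) @ attention
    return np.exp(-c * dist)

def q_values(x):
    return assoc @ activations(x)

def td_update(x, action, reward, x_next, terminal):
    a = activations(x)
    target = reward if terminal else reward + gamma * q_values(x_next).max()
    delta = target - q_values(x)[action]       # internal feedback signal
    # associations follow the gradient of Q w.r.t. the weights
    assoc[action] += lr_w * delta * a
    # attention follows the gradient of Q w.r.t. the attention weights:
    # dQ/dalpha_i = -c * sum_j assoc[action, j] * a_j * |x_i - e_{ji}|
    grad_att = -c * (assoc[action] * a) @ np.abs(exemplars - x)
    attention[:] = np.clip(attention + lr_a * delta * grad_att, 0.0, None)

# one illustrative update on a random transition
x, x2 = rng.random(n_dims), rng.random(n_dims)
td_update(x, action=0, reward=1.0, x_next=x2, terminal=False)
```

The key point the sketch tries to capture is that no separate supervisory signal trains attention: the TD error already computed for value learning is reused, so attention is learned indirectly, consistent with the prediction stated in the abstract.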