Date of Award

Spring 1-1-2011

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Michael Mozer

Second Advisor

Sriram Sankaranarayanan

Third Advisor

Clayton Lewis

Abstract

Individuals are often called upon to make sequences of judgments, as when completing questionnaires, reviewing movies, or taste-testing beverages. The value of these judgments is bounded by how well individuals can express their internal sensations, impressions, and evaluations on rating scales. Psychological studies have shown that individuals are incapable of making judgments on an absolute rating scale and instead rely on reference points and anchors drawn from recent experiences [32]. These sequential dependencies limit the usefulness of responses in judgment tasks. Fortunately, the cognitive process that transforms internal sensations into responses depends in a lawful manner on recent experiences [5].

We first examined whether this contamination from recent experience is due to the short lag between responses, which are often only a second or two apart. Indeed, researchers sometimes increase the time between judgments specifically to avoid sequential dependencies. We examined a data set collected with trials one minute apart: pain calibrations acquired in experiments for Professor Tor Wager's lab at Columbia University. Wager studies brain activity associated with pain and placebo effects. Participants were asked to judge the level of pain induced by pools of water of varying temperature. The calibration procedure attempts to determine the mapping between temperature, in degrees Celsius, and a pain rating on a 10-point scale. We first generated figures relating groups of temperature data to determine whether sequential dependencies played a role in this data set. We discovered that these effects existed even though the calibrations were designed to avoid them. We then built models to predict these effects, including linear regression models, neural nets, and lookup tables, and found that we could reduce the root mean squared error across the data set by 6%.
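The simplest of the models mentioned above, a linear regression with lagged predictors, can be sketched as follows. This is an illustrative toy version on synthetic data, not the thesis's actual calibration data or models; the coefficients, temperature range, and variable names are all assumptions. The idea is just that adding the previous trial's stimulus and response as predictors reduces error relative to a context-free regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
temp = rng.uniform(44.0, 49.0, n_trials)  # stimulus temperature (deg C), assumed range

# Synthetic ratings: driven mostly by the current temperature, but
# contaminated by the previous trial (assimilation toward the prior rating).
rating = np.empty(n_trials)
rating[0] = 2.0 * (temp[0] - 44.0)
for t in range(1, n_trials):
    rating[t] = (1.8 * (temp[t] - 44.0)
                 + 0.3 * rating[t - 1]
                 - 0.2 * (temp[t - 1] - 44.0)
                 + rng.normal(0.0, 0.5))

# Design matrices: context-free baseline vs. one with lagged predictors.
X_base = np.column_stack([np.ones(n_trials - 1), temp[1:]])
X_seq = np.column_stack([np.ones(n_trials - 1), temp[1:], temp[:-1], rating[:-1]])
y = rating[1:]

def rmse(X, y):
    """Fit ordinary least squares and return root-mean-squared error."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

rmse_base = rmse(X_base, y)
rmse_seq = rmse(X_seq, y)
```

On data with built-in sequential dependence like this, the lagged regression fits with lower error than the baseline; the size of the gap depends on how strongly the previous trial contaminates the current one.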

Given the systematic contamination of subsequent trials by a given trial, we next asked whether it is possible to reverse this effect and decontaminate judgments, obtaining a more reliable, context-independent measure of an individual's perception. We collected our own data for this task: we asked individuals to rate obscure movie advertisements (DVD boxes) by indicating on a 10-point scale how likely they would be to want to watch the movie. The same movies were rated multiple times in order to observe the effects of context. Half of the data from each individual (2 presentations of 50 movies) was used to construct decontamination models. A decontamination model consists of a context-independent 'impression' for each movie together with a contamination model that predicts how a movie will be rated in a given context. Models were scored on how well they predicted actual judgments. We found that we could reduce mean squared error across the data set by 5%.
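A minimal version of this impression-plus-contamination decomposition can be sketched as below. This is a hypothetical illustration on synthetic data, not the thesis's actual fitting procedure: it assumes the simplest possible contamination model, in which each rating is a mixture of the movie's latent impression and the previous response, and recovers both by alternating least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_movies, n_trials = 20, 400
true_impression = rng.uniform(1.0, 10.0, n_movies)
movie_seq = rng.integers(0, n_movies, size=n_trials)

# Assumed contamination model: rating_t = (1-beta)*impression + beta*rating_{t-1} + noise
beta_true = 0.25
ratings = np.empty(n_trials)
ratings[0] = true_impression[movie_seq[0]]
for t in range(1, n_trials):
    ratings[t] = ((1.0 - beta_true) * true_impression[movie_seq[t]]
                  + beta_true * ratings[t - 1]
                  + rng.normal(0.0, 0.3))

# Alternating fit: given beta, impressions are per-movie means of the
# decontaminated ratings; given impressions, beta is an OLS slope.
impression = np.zeros(n_movies)
beta = 0.0
for _ in range(100):
    target = (ratings[1:] - beta * ratings[:-1]) / (1.0 - beta)
    for m in range(n_movies):
        mask = movie_seq[1:] == m
        impression[m] = target[mask].mean()
    resid = ratings[1:] - impression[movie_seq[1:]]
    prev_dev = ratings[:-1] - impression[movie_seq[1:]]
    beta = float(np.clip(resid @ prev_dev / (prev_dev @ prev_dev), 0.0, 0.9))
```

Under this assumed generative model, the recovered impressions correlate strongly with the latent ones and the estimated contamination weight lands near its true value, which is the sense in which decontamination yields a context-independent measure.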
