Date of Award

Spring 1-1-2017

Document Type

Thesis

Degree Name

Master of Science (MS)

First Advisor

Jordan Boyd-Graber

Second Advisor

Stephen Becker

Third Advisor

William Kleiber

Abstract

Most results in supervised learning assume that the training data and test data are drawn independently from the same distribution. In practice, knowledge is stored in many different forms, and models that can combine information from multiple sources have an advantage over those restricted by the i.i.d. assumption. In this work, we present results on a factoid question answering dataset with 91,754 questions representing 15,400 distinct answers, and demonstrate a significant improvement in accuracy by augmenting the training set with raw text from Wikipedia. The most pronounced gains in accuracy are concentrated on answers that have low representation in the training set. This is accomplished using deep averaging networks (DANs), which we empirically demonstrate can learn in this heterogeneous setting with no modifications, making them an attractive option compared to the more complex methods often used for domain adaptation.
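The deep averaging network mentioned above builds a fixed-size input representation by averaging the embeddings of the input words and passing the result through feedforward layers to a softmax over candidate answers. A minimal sketch of that forward pass follows; all names, dimensions, and parameter values here are illustrative placeholders, not the thesis's actual model.

```python
import numpy as np

def dan_forward(token_ids, embeddings, W1, b1, W2, b2):
    """Illustrative deep averaging network (DAN) forward pass.

    Averages the embeddings of the input tokens, applies one hidden
    layer, and returns a softmax distribution over candidate answers.
    """
    avg = embeddings[token_ids].mean(axis=0)   # average word embedding, shape (d,)
    h = np.tanh(W1 @ avg + b1)                 # hidden feedforward layer
    logits = W2 @ h + b2                       # one score per candidate answer
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

# Toy dimensions: vocabulary of 10 words, 4-dim embeddings,
# 5 hidden units, 3 candidate answers (random illustrative weights).
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)

probs = dan_forward([2, 7, 7, 1], emb, W1, b1, W2, b2)
```

Because the network only consumes an averaged embedding, a question from the training set and a sentence of raw Wikipedia text feed through the identical computation, which is why no architectural modification is needed for the heterogeneous setting.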
