Date of Award

Spring 1-1-2017

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

First Advisor

Jordan Boyd-Graber

Second Advisor

Martha S. Palmer

Third Advisor

James H. Martin

Fourth Advisor

Mans Hulden

Fifth Advisor

Hal Daumé III

Abstract

Recent approaches to simultaneous machine translation, in which sentences are translated incrementally before they are complete, have incorporated machine learning to translate from verb-final languages (such as German and Japanese) into verb-medial languages (such as English). Because of the divergent syntax of these language pairs, particularly the head-finality of verb-final languages, simultaneous translation is a great challenge for both humans and machines: producing natural-sounding translations without waiting for the end of the source sentence requires predicting how that sentence will end. This problem is not trivial, yet it has received relatively little attention in the computational linguistics literature. We tackle it with incremental verb prediction. By predicting the verbs of SOV sentences before they are spoken, we can get ahead of the speaker, and by learning when to trust these predictions, we can limit the propagation of errors from incorrect predictions into the incremental translations. To decide when to trust incrementally revealed predictions and which actions to take based in part on them, we turn to reinforcement learning. By combining incremental linguistic prediction with reinforcement learning, a simultaneous translation system can minimize the errors introduced by relying on imperfect predictions, producing less error-prone translations that remain expeditious under realistic time constraints.
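The predict-then-decide loop described in the abstract can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the cue table plays the role of a learned verb predictor, and the fixed confidence threshold plays the role of the trust policy that the dissertation learns via reinforcement learning.

```python
# Hypothetical sketch of the "predict the verb, then decide whether to trust
# it" loop. A real system would learn both the predictor and the policy; here
# both are hard-coded stand-ins for illustration only.

def predict_verb(tokens_so_far):
    """Toy verb predictor: guesses the sentence-final verb from cues seen so
    far and returns (verb_guess, confidence). A real predictor is a model."""
    cues = {"ball": ("threw", 0.9), "book": ("read", 0.4)}  # invented cue table
    for tok in tokens_so_far:
        if tok in cues:
            return cues[tok]
    return (None, 0.0)

def simultaneous_translate(source_tokens, trust_threshold=0.8):
    """Process the source token by token, emitting (action, payload) pairs.
    COMMIT means translating now with the predicted verb; WAIT means holding
    for more input. The threshold stands in for a learned trust policy."""
    decisions = []
    for i in range(1, len(source_tokens) + 1):
        prefix = source_tokens[:i]
        verb, confidence = predict_verb(prefix)
        if verb is not None and confidence >= trust_threshold:
            decisions.append(("COMMIT", verb))
            break  # committed early: the translation gets ahead of the speaker
        decisions.append(("WAIT", None))
    else:
        # No prediction was trusted: fall back to the verb actually uttered
        # at the end of the sentence (here, the final token).
        decisions.append(("COMMIT", source_tokens[-1]))
    return decisions
```

A strong cue lets the system commit before the sentence ends, while a weak one makes it wait for the true verb; the reinforcement-learning contribution is precisely learning when each choice pays off.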
