Date of Award

Spring 1-1-2013

Document Type


Degree Name

Doctor of Philosophy (PhD)


Department

Computer Science

First Advisor

Michael Mozer

Second Advisor

Aaron Clauset

Third Advisor

Vanja Dukic

Fourth Advisor

Qin Lv

Fifth Advisor

Sriram Sankaranarayanan

Abstract

In many areas of machine learning and data science, the available data are represented as vectors of feature values. Some of these features are useful for prediction, but others are spurious or redundant. Feature selection is commonly used to determine which features to include; typically, features are selected in an all-or-none fashion for inclusion in a model. We describe an alternative approach that has received little attention in the literature: determining the relative importance of features via continuous weights, and performing multiple iterations of model training that reweight features such that the least useful features eventually obtain a weight of zero. We explore feature selection via iterative reweighting for two classes of popular machine learning models: L1 penalized linear models and Random Forests.

Recent studies have shown that incorporating importance weights into L1 models improves predictive performance in a single iteration of training. In Chapter 3, we advance the state of the art by developing an alternative method for estimating feature importance based on subsampling. Extending the approach to multiple iterations of training, in which the importance weights from iteration n bias training on iteration n + 1, seems promising, but past studies found no benefit from iterative reweighting. In Chapter 4, using our improved estimates of feature importance, we obtain a significant 7.48% reduction in error rate over standard L1 penalized algorithms, and nearly as large an improvement over alternative feature selection methods such as the Adaptive Lasso, Bootstrap Lasso, and MSA-LASSO.

In Chapter 5, we consider iterative reweighting in the context of Random Forests and contrast it with a more standard backward-elimination technique that trains models with the full complement of features and iteratively removes the least important feature. In parallel with this contrast, we also compare several measures of importance, including our own proposal based on evaluating models constructed with and without each candidate feature. We show that our importance measure yields both higher accuracy and greater sparsity than importance measures obtained without retraining models (including measures proposed by Breiman and Strobl), though at greater computational cost.
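The iterative reweighting scheme described above can be illustrated with a minimal sketch. This is not the dissertation's method; it is a simplified, hypothetical variant assuming scikit-learn's `Lasso`, synthetic data from `make_regression`, and the standard trick of folding a per-feature weight into the design matrix (scaling column j by w_j is equivalent to penalizing its coefficient by 1/w_j), so that features judged unimportant at iteration n are penalized more heavily at iteration n + 1:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Hypothetical sketch of iterative reweighting with an L1 model.
# Importance weights w from iteration n bias training at iteration n + 1
# by rescaling the feature columns; low-importance features are driven
# toward a weight (and coefficient) of zero across iterations.

def iterative_reweighted_lasso(X, y, n_iter=5, alpha=0.1, eps=1e-8):
    n_features = X.shape[1]
    w = np.ones(n_features)          # start with uniform importance
    beta = np.zeros(n_features)
    for _ in range(n_iter):
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X * w, y)          # weighted features bias this iteration's fit
        beta = model.coef_ * w       # recover coefficients on the original scale
        # Re-estimate importance from the fitted coefficients (normalized to [0, 1]).
        w = np.abs(beta) / (np.abs(beta).max() + eps)
    return beta, w

# Synthetic problem: 20 features, only 5 of which carry signal.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
beta, w = iterative_reweighted_lasso(X, y)
print("features retained:", int(np.sum(np.abs(beta) > 1e-6)))
```

In this toy version the importance estimate is simply the normalized magnitude of the fitted coefficients; the dissertation instead develops a subsampling-based estimator, which the sketch does not attempt to reproduce.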