
Linear Models with Regularization


Title: Linear Models with Regularization
Author(s): Huang, Zhiyong
Contributor: University of Helsinki, Faculty of Science, Department of Mathematics and Statistics
Discipline: Statistics
Language: English
Acceptance year: 2012
Abstract:
In this master's thesis we present two important classes of regularized linear models: regularized least squares (LS) regression and regularized least absolute deviation (LAD) regression. The use of regularized regression for variable selection was pioneered by Tibshirani (1996), and his proposed LASSO rapidly became a popular and competitive variable selection method. The properties of LASSO have been studied intensively, and different algorithms for solving LASSO have been developed. While LASSO was widely acclaimed, its limitations were also noticed, and a number of alternative methods have been proposed in subsequent research. Among these methods, adaptive LASSO (Zou, 2006) and SCAD (Fan and Li, 2001) attempt to improve the efficiency of LASSO; LAD LASSO (Wang et al., 2007) allows non-Gaussian errors; ridge, elastic net (Zou and Hastie, 2005) and Bridge (Frank and Friedman, 1993) adopt penalties other than the L1 norm; and fused LASSO (Tibshirani et al., 2005) and grouped LASSO (Yuan and Lin, 2006) take additional structural constraints of the data into account.

We discuss LASSO at length in the thesis. Its properties under orthogonal design, singular design and p > n design are examined, its asymptotic performance is investigated, and its limitations are carefully illustrated. Two other commonly used regularization methods in LS, ridge and elastic net, are discussed as well.

Regularized LAD is another focus of the thesis. As a robust statistic, LAD, which fits the conditional median rather than the conditional mean of the response, has a bounded influence function and a high conditional breakdown point. It is therefore natural to use regularized LAD for variable selection in the presence of long-tailed errors or outliers in the response. Compared with LASSO, LAD LASSO performs robust estimation and variable selection simultaneously. We conduct a simulation study and examine two real examples to assess the performance of these regularized linear models.
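To make the penalty differences concrete, the following is a minimal sketch, not taken from the thesis: it fits LASSO (L1 penalty), ridge (L2 penalty) and elastic net (L1 + L2 mix) on synthetic sparse data using scikit-learn. The data-generating model, penalty levels and library choice are illustrative assumptions.

```python
# Illustrative sketch (not from the thesis): three regularized LS estimators
# on a sparse true model. Penalty strengths are chosen arbitrarily.
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, 1.5, 2.0]                # only 3 of 10 predictors are active
y = X @ beta + rng.standard_normal(n)     # Gaussian noise

lasso = Lasso(alpha=0.1).fit(X, y)        # L1 penalty: sets some coefficients exactly to 0
ridge = Ridge(alpha=1.0).fit(X, y)        # L2 penalty: shrinks all coefficients, none to 0
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # compromise between the two

print("LASSO zero coefficients:", int(np.sum(lasso.coef_ == 0)))
print("ridge zero coefficients:", int(np.sum(ridge.coef_ == 0)))
```

The key qualitative difference the thesis exploits is visible here: the L1 penalty performs variable selection by producing exact zeros, while the L2 penalty only shrinks.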
Our results demonstrate that no single estimator dominates the others in all cases. The sparsity of the true model, the distribution of the noise, the noise-to-signal ratio, the sample size and the correlation among predictors all matter. When the noise has a normal distribution, LASSO, adaptive LASSO and elastic net often outperform the others in prediction accuracy; adaptive LASSO is the best in variable selection, and elastic net tends to yield less sparsity than LASSO. When the noise follows a Laplace distribution, LAD LASSO is competitive with LASSO but is less efficient than adaptive LASSO. For noise with an extremely long-tailed distribution, such as the Cauchy distribution, LAD LASSO dominates the others in both prediction accuracy and variable selection.


Files in this item

File: Linear_Models_with_Regularization_1002_Zhiyong.pdf (612.1 Kb, PDF)
