Results 1 - 10 of 738
ASYMPTOTIC ANALYSIS OF THE HUBERIZED LASSO ESTIMATOR
"... ABSTRACT The Huberized LASSO model, a robust version of the popular LASSO, yields robust model selection in sparse linear regression. Though its superior performance was empirically demonstrated for large variance noise, currently no theoretical asymptotic analysis has been derived for the Huberize ..."
Abstract
 Add to MetaCart
for the Huberized LASSO estimator. Here we prove that the Huberized LASSO estimator is consistent and asymptotically normal distributed under a proper shrinkage rate. Our derivation shows that, unlike the LASSO estimator, its asymptotic variance is stabilized in the presence of noise with large variance. We also
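For orientation, a common way to write the Huberized LASSO objective (our restatement, not text from the paper) combines the Huber loss on the residuals with an ℓ1 penalty:

\[
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \rho_{M}\bigl(y_i - x_i^{\top}\beta\bigr) + \lambda \sum_{j=1}^{p} |\beta_j|,
\qquad
\rho_{M}(r) =
\begin{cases}
r^{2}/2, & |r| \le M,\\
M|r| - M^{2}/2, & |r| > M,
\end{cases}
\]

where M controls the transition from quadratic to linear loss and λ is the shrinkage parameter whose rate the asymptotic analysis concerns. The linear tails of ρ_M are what temper the influence of large-variance noise on the estimator.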
Asymptotic properties of the residual bootstrap for lasso estimators
In Proceedings of the American Mathematical Society, 2010
"... In this article, we derive the asymptotic distribution of the bootstrapped Lasso estimator of the regression parameter in a multiple linear regression model. It is shown that under some mild regularity conditions on the design vectors and the regularization parameter, the bootstrap approximation con ..."
Cited by 4 (0 self)
Unified lasso estimation via least squares approximation
2007
"... We propose a method of least squares approximation (LSA) for unified yet simple LASSO estimation. Our general theoretical framework includes ordinary least squares, generalized linear models, quantile regression, and many others as special cases. Specifically, LSA can transfer many different types o ..."
Cited by 23 (6 self)
SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR
SUBMITTED TO THE ANNALS OF STATISTICS, 2007
"... We exhibit an approximate equivalence between the Lasso estimator and Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when th ..."
Cited by 472 (11 self)
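For readers meeting the second method here, the Dantzig selector (standard definition, not part of the snippet above; normalization conventions vary) is

\[
\hat{\beta}^{\mathrm{DS}} = \arg\min_{\beta \in \mathbb{R}^{p}} \|\beta\|_1
\quad \text{subject to} \quad
\bigl\| X^{\top}(y - X\beta) \bigr\|_{\infty} \le \lambda,
\]

i.e. it minimizes the ℓ1 norm subject to a bound on the correlation between the residual and every covariate, whereas the Lasso penalizes the ℓ1 norm directly; the approximate equivalence stated above is between these two programs.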
Bolasso: model consistent lasso estimation through the bootstrap
In Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML), 2008
"... We consider the leastsquare linear regression problem with regularization by the ℓ1norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic ..."
Cited by 85 (15 self)
... positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, ...
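The intersect-the-bootstrap-supports procedure described above is simple to sketch; the following is a minimal illustration under assumed placeholder choices (the regularization level alpha, the number of replications, and the threshold tol are not the paper's settings):

import numpy as np
from sklearn.linear_model import Lasso

def bolasso_support(X, y, n_bootstrap=128, alpha=0.1, tol=1e-10, seed=0):
    # Intersect Lasso supports across bootstrap replications (Bolasso-style sketch).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    support = np.ones(p, dtype=bool)        # start from the full variable set
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)    # resample rows with replacement
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        support &= np.abs(coef) > tol       # keep variables selected in every replication
    return np.flatnonzero(support)

Variables surviving every replication form the selected model; an unpenalized least-squares refit on that support is a natural follow-up step.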
Regression Shrinkage and Selection Via the Lasso
JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Cited by 4212 (49 self)
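In symbols, the constraint described in the abstract (our restatement, not text from the paper) reads

\[
\hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta_0,\,\beta} \sum_{i=1}^{n} \Bigl( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Bigr)^{2}
\quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t,
\]

where t ≥ 0 is the tuning constant; shrinking t drives some coefficients exactly to zero, which is the variable-selection behaviour the abstract alludes to.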
The adaptive LASSO and its oracle properties
 Journal of the American Statistical Association
"... The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain sc ..."
Cited by 683 (10 self)
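The adaptive LASSO of the title is not defined in the snippet above; for context, its standard form re-weights the ℓ1 penalty using a preliminary estimate:

\[
\hat{\beta}^{\mathrm{adapt}} = \arg\min_{\beta} \Bigl\| y - \sum_{j=1}^{p} x_j \beta_j \Bigr\|^{2} + \lambda \sum_{j=1}^{p} \hat{w}_j\, |\beta_j|,
\qquad
\hat{w}_j = 1 / |\hat{\beta}^{\mathrm{init}}_j|^{\gamma},
\]

with β̂^init a root-n-consistent pilot estimate (e.g. ordinary least squares) and γ > 0; the data-dependent weights are what recover consistent selection in the scenarios where the plain lasso fails.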
On the Asymptotic Properties of The Group Lasso Estimator in Least Squares Problems
"... We derive conditions guaranteeing estimation and model selection consistency, oracle properties and persistence for the grouplasso estimator and model selector proposed by Yuan and Lin (2006) for least squares problems when the covariates have a natural grouping structure. We study both the case of ..."
Cited by 58 (0 self)
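The Yuan and Lin (2006) group-lasso penalty referred to above has the standard form (restated here, not quoted from the paper)

\[
\hat{\beta} = \arg\min_{\beta} \Bigl\| y - \sum_{g=1}^{G} X_g \beta_g \Bigr\|_2^{2} + \lambda \sum_{g=1}^{G} \sqrt{p_g}\, \|\beta_g\|_2,
\]

where X_g collects the covariates of group g and p_g is the group size; because the unsquared ℓ2 norm is non-differentiable at the origin, whole groups of coefficients are set to zero together, which is the grouped selection behaviour the consistency results concern.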
High dimensional graphs and variable selection with the Lasso
ANNALS OF STATISTICS, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Cited by 736 (22 self)
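Neighborhood selection as described above amounts to lasso-regressing each variable on all the others and declaring an edge for every nonzero coefficient; a minimal sketch follows (the regularization level and the "OR" symmetrization rule are illustrative assumptions, not the paper's prescription):

import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1, tol=1e-10):
    # Lasso-regress each column of X on the remaining columns; nonzero
    # coefficients define the estimated neighborhood (edges) of that node.
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = np.abs(coef) > tol
    return adj | adj.T    # combine the two neighborhood estimates with an "OR" rule

Each row of the returned matrix is one node's estimated neighborhood, i.e. one pattern of structural zeros in the inverse covariance matrix.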