Results 1 - 10 of 184
Least absolute shrinkage is equivalent to quadratic penalization
Perspectives in Neural Computing, 1998
"... Adaptive ridge is a special form of ridge regression, balancing the quadratic penalization on each parameter of the model. This paper shows the equivalence between adaptive ridge and lasso (least absolute shrinkage and selection operator). This equivalence states that both procedures produce the same ..."
Cited by 49 (6 self)
Outcomes of the equivalence of adaptive ridge with least absolute shrinkage
Advances in Neural Information Processing Systems, 1998
"... Adaptive Ridge is a special form of Ridge regression, balancing the quadratic penalization on each parameter of the model. It was shown to be equivalent to Lasso (least absolute shrinkage and selection operator), in the sense that both procedures produce the same estimate. Lasso can thus be viewed a ..."
Cited by 16 (3 self)
Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm
"... The least absolute shrinkage and selection operator (LASSO) for linear regression exploits the geometric interplay of the ℓ2 data error objective and the ℓ1-norm constraint to arbitrarily select sparse models. Guiding this uninformed selection process with sparsity models has been precisely the cente ..."
Cited by 11 (6 self)
Regression Shrinkage and Selection Via the Lasso
Journal of the Royal Statistical Society, Series B, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly ..."
Cited by 4212 (49 self)
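The abstract above defines the lasso as least squares constrained by the ℓ1 norm of the coefficients; in its equivalent penalized form it can be solved by iterative soft-thresholding. The following is a minimal numpy sketch of that idea, not the paper's implementation; the function names (`soft_threshold`, `lasso_ista`) and the choice of the ISTA solver are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks toward zero,
    # setting entries with |v_i| <= t exactly to zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    # ISTA sketch for: min_w 0.5 * ||y - X w||^2 + lam * ||w||_1
    # (the penalized form equivalent to the ℓ1-constrained lasso).
    n, p = X.shape
    w = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)   # gradient of the squared-error term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Usage: on data generated from a sparse coefficient vector, the
# irrelevant coefficients come out exactly zero, as the abstract notes.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([3.0, 0.0, 0.0, 2.0, 0.0])
w_hat = lasso_ista(X, y, lam=1.0)
```

The exact zeros are produced by the soft-thresholding step, which is what gives the lasso its variable-selection behavior.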
The Least Absolute Shrinkage and Selection Operator (the Lasso) proposed
Abstract: The Lasso, the Forward Stagewise regression and the Lars are closely related procedures recently proposed for linear regression problems. Each of them can produce sparse models and can be used both for estimation and variable selection. In practical implementations these algorithms are typically tuned to achieve optimal prediction accuracy. We show that, when the prediction accuracy is used as the criterion to choose the tuning parameter, in general these procedures are not consistent in terms of variable selection. That is, the sets of variables selected are not consistently the true set of important variables. In particular, we show that for any sample size n, when there are superfluous variables in the linear regression model and the design matrix is orthogonal, the probability that these procedures correctly identify the true set of important variables is less than a constant (smaller than one) not depending on n. This result is also shown to hold for two-dimensional problems with general correlated design matrices. The results indicate that in problems where the main goal is variable selection, prediction-accuracy-based criteria alone are not sufficient for this purpose. Adjustments will be discussed to make the Lasso and related procedures useful/consistent for variable selection.
Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated
"... Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT ..."
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
IEEE Journal of Selected Topics in Signal Processing, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to underdetermined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound ..."
Cited by 539 (17 self)
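The gradient-projection idea in the abstract above rests on a standard reformulation: splitting the coefficients into positive and negative parts turns the ℓ1 penalty into a linear term over nonnegative variables, so the lasso becomes a bound-constrained quadratic program. Below is a minimal numpy sketch of that reformulation, not the paper's algorithm; the function name `lasso_gp` and the fixed step size are illustrative assumptions.

```python
import numpy as np

def lasso_gp(X, y, tau, n_iter=5000):
    # Split w = u - v with u, v >= 0, so ||w||_1 = sum(u) + sum(v).
    # min 0.5*||X(u - v) - y||^2 + tau*(sum(u) + sum(v))  s.t. u, v >= 0
    # is a bound-constrained QP; projection onto the feasible set is
    # just clipping at zero, so projected gradient steps suffice.
    n, p = X.shape
    u = np.zeros(p)
    v = np.zeros(p)
    step = 0.5 / (np.linalg.norm(X, 2) ** 2)  # conservative fixed step
    for _ in range(n_iter):
        g = X.T @ (X @ (u - v) - y)           # gradient w.r.t. w = u - v
        u = np.maximum(u - step * (g + tau), 0.0)   # grad_u = g + tau
        v = np.maximum(v - step * (-g + tau), 0.0)  # grad_v = -g + tau
    return u - v

# Usage: recovers a sparse coefficient vector from noiseless data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([3.0, 0.0, 0.0, 2.0, 0.0])
w_hat = lasso_gp(X, y, tau=1.0)
```

Clipping at zero plays the same role soft-thresholding does in proximal methods: coordinates whose residual correlation stays below tau are driven exactly to zero.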
Sparse Reconstruction by Separable Approximation
2007
"... Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing ..."
Cited by 373 (38 self)
On the LASSO and Its Dual
Journal of Computational and Graphical Statistics, 1999
"... Proposed by Tibshirani (1996), the LASSO (least absolute shrinkage and selection operator) estimates a vector of regression coefficients by minimising the residual sum of squares subject to a constraint on the ℓ1 norm of the coefficient vector. The LASSO estimator typically has one or more zero elements ..."
Cited by 209 (2 self)
Gradient LASSO for feature selection
Proceedings of the 21st International Conference on Machine Learning, 2004
"... LASSO (Least Absolute Shrinkage and Selection Operator) is a useful tool for achieving shrinkage and variable selection simultaneously. ..."
Cited by 22 (1 self)