Results 1–10 of 29,959
Least absolute shrinkage is equivalent to quadratic penalization
In Perspectives in Neural Computing, 1998. Cited by 49 (6 self).
"... Adaptive ridge is a special form of ridge regression, balancing the quadratic penalization on each parameter of the model. This paper shows the equivalence between adaptive ridge and lasso (least absolute shrinkage and selection operator). This equivalence states that both procedures produce the same ..."
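One way to see the claimed equivalence numerically is to iterate ridge fits whose per-coefficient weights are rebuilt from the previous estimate; at a fixed point the stationarity condition matches the lasso's, so on an orthonormal design the result should land on the lasso's closed-form soft-thresholding solution. A minimal sketch, not the paper's derivation — the function name, test data, and ε-smoothing are my own:

```python
import numpy as np

def reweighted_ridge_lasso(X, y, lam, n_iter=100, eps=1e-8):
    # Iteratively reweighted ridge: each pass solves a ridge problem with
    # per-coefficient penalty lam / (|b_j| + eps). At a fixed point the
    # stationarity condition X^T(Xb - y) + lam*sign(b) = 0 matches the
    # lasso's, which is one way to probe the adaptive-ridge/lasso link.
    G = X.T @ X
    Xty = X.T @ y
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least-squares start
    for _ in range(n_iter):
        D = lam / (np.abs(b) + eps)            # adaptive per-coefficient weights
        b = np.linalg.solve(G + np.diag(D), Xty)
    return b

# Orthonormal design: the lasso solution is soft-thresholding of Q^T y,
# so the reweighted-ridge estimate should agree with it.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((60, 6)))
beta = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = Q @ beta + 0.05 * rng.standard_normal(60)

b_arl = reweighted_ridge_lasso(Q, y, lam=0.5)
z = Q.T @ y
b_lasso = np.sign(z) * np.maximum(np.abs(z) - 0.5, 0.0)  # closed-form lasso here
```

The ε floor only prevents division by zero; coefficients below the threshold decay geometrically toward it, so the two estimates agree to high accuracy.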
Outcomes of the equivalence of adaptive ridge with least absolute shrinkage
In Advances in Neural Information Processing Systems, 1998. Cited by 16 (3 self).
"... Adaptive Ridge is a special form of Ridge regression, balancing the quadratic penalization on each parameter of the model. It was shown to be equivalent to Lasso (least absolute shrinkage and selection operator), in the sense that both procedures produce the same estimate. Lasso can thus be viewed as ..."
Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm
Cited by 12 (7 self).
"... The least absolute shrinkage and selection operator (LASSO) for linear regression exploits the geometric interplay of the ℓ2 data-error objective and the ℓ1-norm constraint to arbitrarily select sparse models. Guiding this uninformed selection process with sparsity models has been precisely the cente ..."
Regression Shrinkage and Selection Via the Lasso
Journal of the Royal Statistical Society, Series B, 1994. Cited by 4055 (51 self).
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly ..."
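The "exactly zero" behavior this abstract alludes to is easy to reproduce: in the equivalent penalized form min ½‖y − Xb‖² + λ‖b‖₁, each coordinate update of cyclic coordinate descent is a soft-thresholding step, which snaps small coefficients to exactly zero. A rough sketch — the solver and synthetic data below are my own, not from the paper:

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding: the scalar lasso solution, exactly zero for |z| <= t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Cyclic coordinate descent for min 0.5*||y - X b||^2 + lam*||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)    # per-column squared norms
    r = y.copy()                       # running residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                      # exclude coordinate j
            b[j] = soft(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * b[j]                      # restore the residual
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 10))
beta = np.zeros(10)
beta[0], beta[1] = 2.0, -1.5
y = X @ beta + 0.1 * rng.standard_normal(80)
b_hat = lasso_cd(X, y, lam=20.0)
```

With this λ the eight irrelevant coefficients come out exactly zero, while the two true ones survive (shrunk toward zero, as the constraint form suggests).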
The Least Absolute Shrinkage and Selection Operator (the Lasso) proposed
The Lasso, Forward Stagewise regression, and LARS are closely related procedures recently proposed for linear regression problems. Each of them can produce sparse models and can be used both for estimation and variable selection. In practical implementations these algorithms are typically tuned to achieve optimal prediction accuracy. We show that, when prediction accuracy is used as the criterion to choose the tuning parameter, these procedures are in general not consistent in terms of variable selection: the sets of variables selected are not consistently the true set of important variables. In particular, we show that for any sample size n, when there are superfluous variables in the linear regression model and the design matrix is orthogonal, the probability that these procedures correctly identify the true set of important variables is less than a constant (smaller than one) not depending on n. This result is also shown to hold for two-dimensional problems with general correlated design matrices. The results indicate that in problems where the main goal is variable selection, prediction-accuracy-based criteria alone are not sufficient for this purpose. Adjustments will be discussed to make the Lasso and related procedures useful/consistent for variable selection.
A fast iterative shrinkage-thresholding algorithm with application to . . .
2009. Cited by 1055 (8 self).
"... We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive for its simplicity; however, these methods are also known to converge quite slowly. In this paper we present a Fast Iterative ..."
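The ISTA iteration this abstract refers to is compact: a gradient step on the squared-error term followed by elementwise soft-thresholding with step size 1/L. A plain (non-accelerated) sketch on synthetic data, with names and parameters of my choosing:

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise shrinkage: sign(z) * max(|z| - t, 0).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # ISTA for min 0.5*||y - X b||^2 + lam*||b||_1: gradient step on the
    # smooth term, then soft-thresholding, with step size 1/L.
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b - X.T @ (X @ b - y) / L, lam / L)
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.01 * rng.standard_normal(100)
b_hat = ista(X, y, lam=5.0)
```

The FISTA variant the paper introduces adds a momentum step between iterations, improving the convergence rate from O(1/k) to O(1/k²) without changing the per-iteration cost.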
Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated . . .
"... Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT ..."
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
IEEE Journal of Selected Topics in Signal Processing, 2007. Cited by 524 (15 self).
"... Many problems in signal processing and statistical inference involve finding sparse solutions to underdetermined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound ..."
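The gradient-projection idea can be illustrated on the lasso itself: splitting b = u − v with u, v ≥ 0 turns the ℓ1 term into a linear one over the nonnegative orthant, leaving a bound-constrained quadratic program that plain projected gradient descent can solve. A sketch under my own naming and step-size choice, not the paper's GPSR code:

```python
import numpy as np

def gpsr(X, y, lam, n_iter=2000):
    # Split b = u - v with u, v >= 0: the l1 penalty becomes the linear
    # term lam * sum(u + v), and the problem is a bound-constrained QP
    # solved here by projected gradient descent.
    p = X.shape[1]
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1/L for the split problem
    u = np.zeros(p)
    v = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ (u - v) - y)                 # gradient of the quadratic
        u = np.maximum(u - step * (g + lam), 0.0)   # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0)  # project onto v >= 0
    return u - v

# Orthonormal design, where the lasso solution has a closed form to check against.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((60, 6)))
beta = np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0])
y = Q @ beta + 0.02 * rng.standard_normal(60)
b_gp = gpsr(Q, y, lam=0.3)

z = Q.T @ y
b_ref = np.sign(z) * np.maximum(np.abs(z) - 0.3, 0.0)  # closed-form lasso here
```

The paper's actual GP algorithms add line searches and a Barzilai-Borwein step choice on top of this basic scheme; the fixed 1/L step above is just the simplest provably convergent option.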
Greedy Function Approximation: A Gradient Boosting Machine
Annals of Statistics, 2000. Cited by 951 (12 self).
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent "boosting" paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least-absolute-deviation, and Huber-M loss functions for regression, and multi-class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual ..."
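The boosting recipe summarized here, specialized to least squares with depth-one trees, fits in a few lines: each round fits a stump to the current residuals (the negative gradient of the squared loss) and adds a damped copy of it to the ensemble. A toy sketch with invented data:

```python
import numpy as np

def fit_stump(x, target):
    # Exhaustively pick the split of 1-D x that best fits `target` in
    # squared error, returning (split, left_mean, right_mean).
    best_sse, best = np.inf, None
    for s in np.unique(x)[:-1]:
        left, right = target[x <= s], target[x > s]
        pred = np.where(x <= s, left.mean(), right.mean())
        sse = np.sum((target - pred) ** 2)
        if sse < best_sse:
            best_sse, best = sse, (s, left.mean(), right.mean())
    return best

def gradient_boost(x, y, n_rounds=300, lr=0.1):
    # Least-squares gradient boosting: each round fits a stump to the
    # residuals y - f (the negative gradient of 0.5*(y - f)^2) and takes
    # a small step of size lr toward that fit.
    f = np.full_like(y, y.mean())
    for _ in range(n_rounds):
        s, cl, cr = fit_stump(x, y - f)
        f = f + lr * np.where(x <= s, cl, cr)
    return f

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
f_hat = gradient_boost(x, y)
```

Swapping the residual computation for the negative gradient of another loss (absolute deviation, Huber, logistic) yields the other algorithms the abstract lists; that genericity is the point of the "any fitting criterion" claim.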