Results 1–10 of 9,290
Lasso Regression
, 2014
Abstract
The problem is to solve a sparsity-encouraging “regularized” regression problem: minimize ‖Ax − b‖₂² + λ‖x‖₁. My gut reaction: replace least squares (LS) with least absolute deviations (LAD). LAD is to LS as the median is to the mean; the median is a more robust statistic. The LAD version can be recast as a lin ...
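The "recast as a linear program" remark applies to the LAD variant, minimize ‖Ax − b‖₁ + λ‖x‖₁: both absolute-value terms become linear constraints via slack variables. A minimal sketch of that reformulation using SciPy (my own illustration, not code from the entry; the helper name `lad_lasso_lp` and the slacks `u`, `v` are assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def lad_lasso_lp(A, b, lam):
    """Solve min_x ||A x - b||_1 + lam * ||x||_1 as a linear program.

    Stack z = [x, u, v] with elementwise slacks u >= |A x - b| and
    v >= |x|; the objective becomes sum(u) + lam * sum(v).
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m), lam * np.ones(n)])
    I_m, I_n = np.eye(m), np.eye(n)
    A_ub = np.block([
        [ A,   -I_m,              np.zeros((m, n))],  #   A x - b  <= u
        [-A,   -I_m,              np.zeros((m, n))],  # -(A x - b) <= u
        [ I_n,  np.zeros((n, m)), -I_n],              #   x  <= v
        [-I_n,  np.zeros((n, m)), -I_n],              #  -x  <= v
    ])
    b_ub = np.concatenate([b, -b, np.zeros(2 * n)])
    bounds = [(None, None)] * n + [(0, None)] * (m + n)  # x free, slacks >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res
```

As with the median, the ℓ₁ data-fit term is insensitive to a few wild entries of b, which is the robustness argument the note is making.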
From Lasso regression to Feature vector machine
 In NIPS
, 2005
Abstract
Cited by 2 (0 self)
Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is a promising technique for feature selection.
From lasso regression to feature vector machine
 In Advances in Neural Information Processing Systems 18
, 2005
Abstract
Cited by 13 (4 self)
Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is a promising technique for feature selection. Its limitation, however, is that it only offers solutions to linear models. Kernel machines with feature scaling techniques have been studied for feature ...
A study of error variance estimation in lasso regression
, 2013
"... Variance estimation in the linear model when p > n is a difficult problem. Standard least squares estimation techniques do not apply. Several variance estimators have been proposed in the literature, all with accompanying asymptotic results proving consistency and asymptotic normality under a va ..."
Abstract
Cited by 7 (1 self)
... timator using Lasso coefficients with regularisation parameter selected adaptively (via cross-validation). In this paper, we review several variance estimators and perform a reasonably extensive simulation study in an attempt to compare their finite-sample performance. It would seem from the results ...
Regression Shrinkage and Selection Via the Lasso
 Journal of the Royal Statistical Society, Series B
, 1994
Abstract
Cited by 4055 (51 self)
We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also ...
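The exact zeros come from the soft-thresholding operator that solves each one-coefficient subproblem. A minimal coordinate-descent sketch of the equivalent penalised form, minimize ½‖y − Xb‖₂² + λ‖b‖₁ (my own illustration, not the paper's algorithm; `lasso_cd` and the synthetic data are assumptions):

```python
import numpy as np

def soft_threshold(z, t):
    # Shrink z toward 0 by t; values with |z| <= t land exactly on 0.0.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for 0.5*||y - X b||_2^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with feature j removed from the fit.
            r_j = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b

# Synthetic check: 3 true signals among 10 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = lasso_cd(X, y, lam=5.0)
```

Irrelevant features typically end up with coefficients that are exactly 0.0, which is the interpretability property the abstract describes; ridge's ℓ₂ penalty, by contrast, only shrinks them toward zero.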
A Bayesian Mixture of Lasso Regressions with t-Errors
Abstract
Motivated by a challenging problem in financial trading we are presented with a mixture of regressions with variable selection problem. In this regard, one is faced with data which possess outliers, skewness and, simultaneously, due to the nature of financial trading, one would like to be able to construct clusters with specific predictors that are fairly sparse. We develop a Bayesian mixture of lasso regressions with t-errors to reflect these specific demands. The resulting model is necessarily complex and to fit the model to real data, we develop a state-of-the-art Particle Markov chain ...
SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR
 SUBMITTED TO THE ANNALS OF STATISTICS
, 2007
Abstract
Cited by 465 (8 self)
We exhibit an approximate equivalence between the Lasso estimator and Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when th ...
High-dimensional Fused Lasso Regression using Majorization-Minimization and Parallel Processing
The group Lasso for logistic regression
 Journal of the Royal Statistical Society, Series B
, 2008
Abstract
Cited by 278 (11 self)
Summary. The group lasso is an extension of the lasso to do variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regressi ...
Least angle regression
 Ann. Statist
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to s ..."
Abstract
Cited by 1308 (43 self)
... implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS ...
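The "all possible Lasso estimates" claim refers to the piecewise-linear coefficient path that LARS traces in one pass. A quick sketch with scikit-learn's `lars_path` (the synthetic data and variable names below are my own, not from the paper):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))
coef_true = np.array([4.0, -3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ coef_true + 0.1 * rng.standard_normal(60)

# method="lasso" gives the LARS modification that traces the lasso path.
alphas, active, coefs = lars_path(X, y, method="lasso")
# coefs has one column per knot of the piecewise-linear path: the first
# column is all zeros (largest alpha) and the last is the unpenalised
# least-squares fit (alpha = 0).
```

`active` records the order in which variables enter the model, which is how the path doubles as a forward-selection trace.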