Results 1 - 9 of 9
On Model Selection Consistency of Lasso (2006)
Cited by 462 (23 self)
Abstract: Sparsity or parsimony of statistical models is crucial for their proper interpretation, as in the sciences and social sciences. Model selection is a commonly used method for finding such models, but it usually involves a computationally heavy combinatorial search. The Lasso (Tibshirani, 1996) is now being used as a computationally feasible alternative to model selection.
A note on the LASSO and related procedures in model selection
Statistica Sinica (2004)
Cited by 78 (13 self)
Abstract: The Lasso, Forward Stagewise regression, and the Lars are closely related procedures recently proposed for linear regression problems. Each of them can produce sparse models and can be used both for estimation and variable selection. In practical implementations these algorithms are typically tuned to achieve optimal prediction accuracy. We show that, when prediction accuracy is used as the criterion for choosing the tuning parameter, these procedures are in general not consistent in terms of variable selection; that is, the sets of variables they select do not consistently recover the true set of important variables. In particular, we show that for any sample size n, when there are superfluous variables in the linear regression model and the design matrix is orthogonal, the probability that the procedures correctly identify the true set of important variables is less than a constant (smaller than one) not depending on n. This result is also shown to hold for two-dimensional problems with general correlated design matrices. The results indicate that in problems where the main goal is variable selection, prediction-accuracy-based criteria alone are not sufficient for this purpose.
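The phenomenon described in this abstract is easy to observe empirically. The following is a minimal simulation sketch, not taken from the paper: the design, coefficient values, and replication count are illustrative choices, and the tuning parameter is chosen by cross-validation (a prediction-accuracy criterion) via scikit-learn's `LassoCV`. Even with an orthogonal design, the CV-tuned Lasso routinely keeps some superfluous variables, so the fraction of replications with exact support recovery stays below one.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 50, 8                      # sample size and number of candidate variables
true_support = {0, 1, 2}          # indices of the truly important variables
beta = np.zeros(p)
beta[[0, 1, 2]] = [3.0, 2.0, 1.5] # remaining 5 coefficients are superfluous (zero)

reps, exact = 50, 0
for _ in range(reps):
    # Orthogonal design: orthonormal columns from a QR decomposition,
    # scaled so that X.T @ X = n * I.
    X = np.linalg.qr(rng.standard_normal((n, p)))[0] * np.sqrt(n)
    y = X @ beta + rng.standard_normal(n)

    # Tuning parameter chosen by 5-fold CV prediction accuracy.
    fit = LassoCV(cv=5).fit(X, y)
    selected = set(np.flatnonzero(fit.coef_))
    exact += (selected == true_support)

frac = exact / reps
print(f"exact support recovery in {frac:.0%} of {reps} replications")
```

The point of the sketch is qualitative: prediction-optimal tuning shrinks too little to zero out all superfluous coefficients, which is exactly the inconsistency the abstract formalizes.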
An Overview of Recent Developments in Genomics and the Statistical Methods that Bear on Them
Cited by 2 (0 self)
Abstract: (* All authors contributed equally to this work.) The landscape of Genomics has changed drastically in the last two decades. Increasingly inexpensive sequencing has shifted the primary focus from the acquisition of biological sequences to the study of biological function. Assays have been developed to study many intricacies of biological systems, and publicly available databases have given rise to integrative analyses that combine information from many sources to draw complex conclusions. Such research was the focus of the recent workshop High Dimensional Statistics in Biology at the Isaac Newton Institute. Many computational methods from modern genomics and related disciplines were presented and discussed. Using, as much as possible, the material from these talks, we give an overview of modern Genomics: from the essential assays that make data generation possible, to the statistical methods that yield meaningful inference. In hopes of calling fresh perspectives to this field, we point to current analytical challenges where novel methods, or novel applications of extant methods, are presently needed.
The Least Absolute Shrinkage and Selection Operator (the Lasso) proposed
Abstract: The Lasso, Forward Stagewise regression, and the Lars are closely related procedures recently proposed for linear regression problems. Each of them can produce sparse models and can be used both for estimation and variable selection. In practical implementations these algorithms are typically tuned to achieve optimal prediction accuracy. We show that, when prediction accuracy is used as the criterion for choosing the tuning parameter, these procedures are in general not consistent in terms of variable selection; that is, the sets of variables selected are not consistently the true set of important variables. In particular, we show that for any sample size n, when there are superfluous variables in the linear regression model and the design matrix is orthogonal, the probability that these procedures correctly identify the true set of important variables is less than a constant (smaller than one) not depending on n. This result is also shown to hold for two-dimensional problems with general correlated design matrices. The results indicate that in problems where the main goal is variable selection, prediction-accuracy-based criteria alone are not sufficient for this purpose. Adjustments will be discussed to make the Lasso and related procedures useful/consistent for variable selection.
unknown title (2005)
Gradient directed regularization for sparse Gaussian concentration graphs, with applications to inference of genetic networks
REVIEW (2010)
An overview of recent developments in genomics and associated statistical methods
The Loss Rank Criterion for Variable Selection in Linear Regression Analysis (2010)
Concentration Graphs, with Applications to Inference of Genetic Networks
"... permission of the copyright holder. ..."