Results 1–10 of 64
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
 Add to MetaCart
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal f ∈ F decay like a power law (or the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude, |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian …
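The compressibility assumption in this abstract is easy to see numerically: if the sorted magnitudes decay like n^(−1/p), the ℓ2 energy outside the k largest entries is already tiny for modest k, which is why few measurements suffice. A toy stdlib sketch (the constants C, p, N below are hypothetical, not from the paper):

```python
import math

# Toy illustration: a coefficient sequence obeying the weak-lp decay
# |f|(n) <= C * n**(-1/p) has small tail energy, so the k largest entries
# already approximate f well in the l2 metric.
C, p, N = 1.0, 0.5, 10_000                 # hypothetical constants; p < 1 means fast decay
coeffs = [C * n ** (-1.0 / p) for n in range(1, N + 1)]  # sorted magnitudes

def tail_l2(k):
    """l2 norm of everything outside the k largest entries."""
    return math.sqrt(sum(c * c for c in coeffs[k:]))

total = math.sqrt(sum(c * c for c in coeffs))
for k in (10, 100, 1000):
    # relative error shrinks roughly like k**(1/2 - 1/p)
    print(k, tail_l2(k) / total)
```

With p = 0.5 the relative tail at k = 10 is already under a few percent, which is the quantitative sense in which such signals are "compressible".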
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Abstract

Cited by 1399 (16 self)
 Add to MetaCart
This paper considers the classical error-correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m-by-n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (with ‖x‖_ℓ1 := Σ_i |x_i|)

min_{g ∈ R^n} ‖y − Ag‖_ℓ1

provided that the support of the vector of errors is not too large: ‖e‖_ℓ0 := |{i : e_i ≠ 0}| ≤ ρ · m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle, which we shall describe in detail.
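In the scalar special case n = 1 the ℓ1 decoder above can be solved exactly without a linear-programming library: the objective Σ_i |y_i − a_i g| is piecewise linear and convex in g, so its minimum sits at one of the breakpoints g = y_i / a_i, and we can simply check them all. A toy stdlib sketch (all sizes and values hypothetical) showing exact recovery despite grossly corrupted entries:

```python
import random

random.seed(0)
# Scalar (n = 1) toy version of the l1 decoder: recover f from y = a*f + e,
# where a few entries of e are grossly corrupted. The minimizer of the convex
# piecewise-linear objective lies at a breakpoint g = y_i / a_i.
m, f_true = 15, 3.7                          # hypothetical sizes/values
a = [random.gauss(0.0, 1.0) for _ in range(m)]
e = [0.0] * m
for i in (0, 1, 2):                          # corrupt 3 of the 15 measurements
    e[i] = random.gauss(0.0, 50.0)
y = [a[i] * f_true + e[i] for i in range(m)]

def l1_objective(g):
    return sum(abs(y[i] - a[i] * g) for i in range(m))

candidates = [y[i] / a[i] for i in range(m) if a[i] != 0.0]
f_hat = min(candidates, key=l1_objective)
print(f_hat)   # close to 3.7 when the clean measurements carry most of the l1 weight
```

The clean measurements all place a breakpoint at f itself, so as long as they dominate the corrupted ones in total |a_i| weight, the ℓ1 minimizer lands exactly on f; this is the one-dimensional shadow of the paper's exact-recovery phenomenon.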
Sure independence screening for ultrahigh dimensional feature space
, 2006
"... Variable selection plays an important role in high dimensional statistical modeling which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, estimation accuracy and computational cost are two top concerns. In a recent paper, ..."
Abstract

Cited by 283 (26 self)
 Add to MetaCart
Variable selection plays an important role in high-dimensional statistical modeling, which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, estimation accuracy and computational cost are two top concerns. In a recent paper, Candes and Tao (2007) propose the Dantzig selector using L1 regularization and show that it achieves the ideal risk up to a logarithmic factor log p. Their innovative procedure and remarkable result are challenged when the dimensionality is ultra high, as the factor log p can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method based on correlation learning, called Sure Independence Screening (SIS), to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, SIS is shown to have the sure screening property even for exponentially growing dimensionality. As a methodological extension, an iterative SIS (ISIS) is also proposed to enhance its finite-sample performance. With dimension reduced accurately from high to below the sample size, variable selection can be improved in both speed and accuracy, and can then be accomplished …
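The screening step itself is just a ranking by absolute marginal correlation with the response, keeping the top d features. A minimal stdlib sketch of that step (the sizes n, p, d and the active-feature indices are hypothetical, and this omits the iterative ISIS refinement):

```python
import math, random

random.seed(1)
# Minimal sketch of Sure Independence Screening: rank the p features by
# absolute marginal correlation with the response and keep only the top d.
n, p, d = 200, 1000, 20                      # hypothetical sizes: p >> n
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
active = (3, 47, 512)                        # hypothetical truly active features
y = [sum(5.0 * X[i][j] for j in active) + random.gauss(0, 1) for i in range(n)]

def abs_corr(j):
    """Absolute sample correlation between feature j and the response."""
    xj = [row[j] for row in X]
    mx, my = sum(xj) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xj, y))
    sxx = sum((a - mx) ** 2 for a in xj)
    syy = sum((b - my) ** 2 for b in y)
    return abs(sxy) / math.sqrt(sxx * syy)

kept = sorted(range(p), key=abs_corr, reverse=True)[:d]
print(sorted(kept))   # the active features survive the screen
```

The point of the sure screening property is exactly this outcome: with high probability the retained set of size d (below the sample size) contains all the truly active features, after which a standard selector can finish the job.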
Efficient Use of Side Information in Multiple-Antenna Data Transmission over Fading Channels
, 1998
"... We derive performance limits for two closely related communication scenarios involving a wireless system with multipleelement transmitter antenna arrays: a pointtopoint system with partial side information at the transmitter, and a broadcast system with multiple receivers. In both cases, ideal be ..."
Abstract

Cited by 211 (4 self)
 Add to MetaCart
We derive performance limits for two closely related communication scenarios involving a wireless system with multiple-element transmitter antenna arrays: a point-to-point system with partial side information at the transmitter, and a broadcast system with multiple receivers. In both cases, ideal beamforming is impossible, leading to an inherently lower achievable performance as the quality of the side information degrades or as the number of receivers increases. Expected signal-to-noise ratio (SNR) and mutual information are both considered as performance measures. In the point-to-point case, we determine when the transmission strategy should use some form of beamforming and when it should not. We also show that, when properly chosen, even a small amount of side information can be quite valuable. For the broadcast scenario with an SNR criterion, we find the efficient frontier of operating points and show that even when the number of receivers is larger than the number of antenna array …
The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann. Statist. 36(4), 1567–1594.
, 2008
"... showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [(2006) J. Machine Learning Research 7 2541–2567] formalized the neighborho ..."
Abstract

Cited by 191 (27 self)
 Add to MetaCart
Meinshausen and Bühlmann showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent even when the number of variables is of greater order than the sample size. Zhao and Yu [J. Mach. Learn. Res. 7 (2006) 2541–2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the regression coefficients outside an ideal model are assumed to be small, but not necessarily zero. Under a sparse Riesz condition on the correlation of design variables, we prove that the LASSO selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias of the selected model. Moreover, as a consequence of this rate consistency of the LASSO in model selection, it is proved that the sum of error squares for the mean response and the ℓα-loss for the regression coefficients converge at the best possible rates under the given conditions. An interesting aspect of our results is that the logarithm of the number of variables can be of the same order as the sample size for certain random dependent designs.
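The sparsity/bias trade-off discussed here is easiest to see in the textbook orthonormal-design special case, where the LASSO decouples into coordinate-wise soft thresholding; this reduction is standard and is not the paper's sparse-Riesz setting, and the numbers below are hypothetical:

```python
def soft_threshold(z, lam):
    """LASSO solution for one coefficient under an orthonormal design:
    argmin_b 0.5*(z - b)**2 + lam*|b|."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

lam = 1.0                       # hypothetical threshold level
ols = [4.0, 0.3, -2.5, 0.05]    # hypothetical least-squares coefficients
lasso = [soft_threshold(z, lam) for z in ols]
print(lasso)   # [3.0, 0.0, -1.5, 0.0]
```

Small coefficients are zeroed out (sparsity) while the surviving ones are shrunk by λ, which is exactly the "threshold bias" that the paper's bounds control in the general correlated-design case.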
Matrix models for beta ensembles
 J. Math. Phys.
, 2002
"... This paper constructs tridiagonal random matrix models for general (β> 0) βHermite (Gaussian) and βLaguerre (Wishart) ensembles. These generalize the wellknown Gaussian and Wishart models for β = 1,2,4. Furthermore, in the cases of the βLaguerre ensembles, we eliminate the exponent quantizati ..."
Abstract

Cited by 173 (23 self)
 Add to MetaCart
(Show Context)
This paper constructs tridiagonal random matrix models for general (β > 0) β-Hermite (Gaussian) and β-Laguerre (Wishart) ensembles. These generalize the well-known Gaussian and Wishart models for β = 1, 2, 4. Furthermore, in the cases of the β-Laguerre ensembles, we eliminate the exponent quantization present in the previously known models. We further discuss applications for the new matrix models, and present some open problems.
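The tridiagonal construction is simple enough to sketch directly. As I understand the Dumitriu–Edelman β-Hermite model, the diagonal entries are standard Gaussians and the off-diagonal entries are χ variables with degrees of freedom (n−1)β, (n−2)β, …, β, scaled by 1/√2; a χ_k variable with possibly non-integer k can be drawn as the square root of a Gamma(k/2, scale 2) variate. A hedged stdlib sketch (scaling convention taken from the literature, not copied from this paper):

```python
import random

random.seed(2)

def chi(k):
    """Sample a chi random variable with k (possibly non-integer) degrees of
    freedom: chi-square(k) is Gamma(shape=k/2, scale=2)."""
    return random.gammavariate(k / 2.0, 2.0) ** 0.5

def beta_hermite(n, beta):
    """Sketch of the tridiagonal beta-Hermite model: diagonal ~ N(0, 1),
    off-diagonal chi_{(n-1)beta}, ..., chi_{beta}, scaled by 1/sqrt(2)."""
    root2 = 2.0 ** 0.5
    diag = [random.gauss(0.0, 1.0) for _ in range(n)]
    off = [chi((n - 1 - i) * beta) / root2 for i in range(n - 1)]
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        H[i][i] = diag[i]
    for i in range(n - 1):
        H[i][i + 1] = H[i + 1][i] = off[i]
    return H

H = beta_hermite(5, beta=2.5)   # beta need not be 1, 2, or 4
```

This is the point of the construction: β enters only through the χ degrees of freedom, so any β > 0 can be sampled, with no exponent quantization and only O(n) random variables per matrix.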
Eigenvalues of large sample covariance matrices of spiked population models
, 2006
"... We consider a spiked population model, proposed by Johnstone, whose population eigenvalues are all unit except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the nonunit population ones when both sample size and population size become large. This pape ..."
Abstract

Cited by 163 (8 self)
 Add to MetaCart
(Show Context)
We consider a spiked population model, proposed by Johnstone, whose population eigenvalues are all unit except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the non-unit population ones when both the sample size and the population size become large. This paper completely determines the almost sure limits for a general class of samples.
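The limits in question exhibit the well-known phase transition (often called BBP): a spike α separates from the bulk only when it exceeds 1 + √γ, where γ = p/n is the aspect ratio. The formula below is the standard single-real-spike limit as stated in the literature, written here as a small function rather than copied from this paper:

```python
import math

def top_eigenvalue_limit(alpha, gamma):
    """Almost-sure limit of the largest sample eigenvalue in a spiked
    covariance model with one spike alpha and aspect ratio gamma = p/n
    (standard BBP-type phase transition formula from the literature)."""
    threshold = 1.0 + math.sqrt(gamma)
    if alpha > threshold:
        # supercritical spike: the sample eigenvalue detaches from the bulk
        return alpha * (1.0 + gamma / (alpha - 1.0))
    # subcritical spike: stuck at the bulk edge (1 + sqrt(gamma))^2
    return threshold ** 2

print(top_eigenvalue_limit(3.0, 1.0))   # 4.5: detached from the bulk
print(top_eigenvalue_limit(1.5, 1.0))   # 4.0: bulk edge, spike invisible
```

Note the bias in the supercritical regime: the sample eigenvalue converges to α(1 + γ/(α−1)), which overshoots the population value α, and below the threshold the spike leaves no trace in the top sample eigenvalue at all.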
Universality at the edge of the spectrum in Wigner random matrices
, 2003
"... We prove universality at the edge for rescaled correlation functions of Wigner random matrices in the limit n → +∞. As a corollary, we show that, after proper rescaling, the 1st, 2nd, 3rd, etc. eigenvalues of Wigner random hermitian (or real symmetric) matrix weakly converge to the distributions est ..."
Abstract

Cited by 150 (8 self)
 Add to MetaCart
We prove universality at the edge for rescaled correlation functions of Wigner random matrices in the limit n → +∞. As a corollary, we show that, after proper rescaling, the 1st, 2nd, 3rd, etc. eigenvalues of a Wigner random Hermitian (or real symmetric) matrix converge weakly to the distributions established by Tracy and Widom in the G.U.E. (G.O.E.) cases.
THE SMALLEST SINGULAR VALUE OF A RANDOM RECTANGULAR MATRIX
"... Abstract. We prove an optimal estimate on the smallest singular value of a random subgaussian matrix, valid for all fixed dimensions. For an N × n matrix A with independent and identically distributed subgaussian entries, the smallest singular value of A is at least of the order √ N − √ n − 1 with ..."
Abstract

Cited by 89 (15 self)
 Add to MetaCart
(Show Context)
We prove an optimal estimate on the smallest singular value of a random subgaussian matrix, valid for all fixed dimensions. For an N × n matrix A with independent and identically distributed subgaussian entries, the smallest singular value of A is at least of the order √N − √(n − 1) with high probability. A sharp estimate on the probability is also obtained.
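The √N − √(n − 1) scaling is easy to probe numerically in the tall case. For n = 2 the Gram matrix AᵀA is 2 × 2, so its eigenvalues (and hence the singular values of A) have a closed form and no linear-algebra library is needed. A toy Monte Carlo sketch (sizes hypothetical, single seeded draw):

```python
import math, random

random.seed(3)
# Toy check of the sqrt(N) - sqrt(n-1) scaling for an N x 2 Gaussian matrix:
# sigma_min(A) = sqrt of the smaller eigenvalue of the 2 x 2 Gram matrix A^T A.
N, n = 400, 2
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(N)]

g11 = sum(row[0] * row[0] for row in A)
g22 = sum(row[1] * row[1] for row in A)
g12 = sum(row[0] * row[1] for row in A)

# smaller eigenvalue of [[g11, g12], [g12, g22]] via trace/determinant
tr, det = g11 + g22, g11 * g22 - g12 * g12
lam_min = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
sigma_min = math.sqrt(lam_min)

print(sigma_min, math.sqrt(N) - math.sqrt(n - 1))   # both around 19
```

For N = 400, n = 2 the predicted order is √400 − √1 = 19, and a single draw lands within O(1) of it, consistent with the theorem's high-probability lower bound.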
Smallest singular value of random matrices and geometry of random polytopes
 Adv. Math.
, 2005
"... geometry of random polytopes ..."
(Show Context)