Results 1–10 of 98
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004. Cited by 1513 (20 self).
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude, |f|_(1) ≥ |f|_(2) ≥ … ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries.
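A minimal sketch of the pipeline this abstract describes, assuming ℓ1 minimization (basis pursuit) as the decoder, cast as a linear program; the dimensions, sparsity level, and solver are illustrative choices, not the paper's setup:

```python
# Sketch: recover a sparse vector from K random Gaussian measurements
# via basis pursuit (l1 minimization). Dimensions/sparsity are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, m = 256, 80, 8                     # ambient dim, measurements, nonzeros

f = np.zeros(N)
f[rng.choice(N, m, replace=False)] = rng.normal(size=m)  # m-sparse signal

X = rng.normal(size=(K, N))              # rows are the Gaussian vectors X_k
y = X @ f                                # measurements <f, X_k>

# Basis pursuit: min ||g||_1 s.t. X g = y, as an LP in the split g = u - v.
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * N), method="highs")
g = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(g - f))  # near zero w.h.p.
```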
Signal recovery from partial information via Orthogonal Matching Pursuit
IEEE Trans. Inform. Theory, 2005. Cited by 191 (8 self).
This article demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results for OMP, which require O(m²) measurements. The new results for OMP are comparable with recent results for another algorithm called Basis Pursuit (BP). The OMP algorithm is much faster and much easier to implement, which makes it an attractive alternative to BP for signal recovery problems.
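A textbook sketch of the OMP iteration the abstract refers to (greedy column selection followed by a least-squares re-fit on the selected support), with an O(m ln d) number of Gaussian measurements; all dimensions are illustrative and this is not the authors' code:

```python
# Orthogonal Matching Pursuit: pick the column most correlated with the
# residual, re-fit by least squares, repeat m times.
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # orthogonal residual
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
d, m = 1024, 20
K = int(2 * m * np.log(d))                 # O(m ln d) measurements (assumed constant)
Phi = rng.normal(size=(K, d)) / np.sqrt(K) # random Gaussian measurement matrix
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = 1.0   # m-sparse target
print("error:", np.linalg.norm(omp(Phi, Phi @ x, m) - x))
```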
DETERMINISTIC EQUIVALENTS FOR CERTAIN FUNCTIONALS OF LARGE RANDOM MATRICES
2007. Cited by 74 (20 self).
Consider an N × n random matrix Y_n = (Y^n_ij) whose entries are given by Y^n_ij = σ_ij(n) X^n_ij / √n, the X^n_ij being independent and identically distributed, centered with unit variance, and satisfying a mild moment assumption. Consider now a deterministic N × n matrix A_n whose columns and rows are uniformly bounded in the Euclidean norm. Let Σ_n = Y_n + A_n. We prove in this article that there exists a deterministic N × N matrix-valued function T_n(z), analytic in ℂ ∖ ℝ+, such that, almost surely, (1/N) Tr((Σ_n Σ_n* − z I_N)^(−1)) − (1/N) Tr(T_n(z)) → 0 as n → ∞.
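The deterministic equivalent T_n(z) solves a fixed-point system not reproduced in the abstract, so a full implementation is out of scope; the sketch below only checks numerically that the normalized resolvent trace, the quantity T_n(z) approximates, concentrates across realizations. The variance profile, the choice A_n = 0, the sizes, and z are assumptions made for the example:

```python
# Concentration check for (1/N) Tr((Sigma_n Sigma_n^* - z I)^{-1}).
import numpy as np

rng = np.random.default_rng(2)
N, n = 300, 600
z = -1.0 + 0.5j                          # a point in C \ R+

# Illustrative bounded variance profile sigma_ij(n); A_n = 0 for simplicity.
i = np.arange(N)[:, None]
j = np.arange(n)[None, :]
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * i / N) * np.cos(2 * np.pi * j / n)

def resolvent_trace():
    Y = sigma * rng.normal(size=(N, n)) / np.sqrt(n)  # Y_ij = sigma_ij X_ij / sqrt(n)
    S = Y                                             # Sigma_n = Y_n + A_n, A_n = 0
    R = np.linalg.inv(S @ S.T - z * np.eye(N))        # resolvent at z
    return np.trace(R) / N

samples = [resolvent_trace() for _ in range(20)]
print("spread across realizations:", np.std(samples))  # small: concentration
```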
Operator norm consistent estimation of large-dimensional sparse covariance matrices
Annals of Statistics. Cited by 69 (1 self).
Estimating covariance matrices is a problem of fundamental importance in multivariate statistics. In practice it is increasingly frequent to work with data matrices X of dimension n × p, where p and n are both large. Results from random matrix theory show very clearly that in this setting, standard estimators like the sample covariance matrix perform in general very poorly. In this “large n, large p” setting, it is sometimes the case that practitioners are willing to assume that many elements of the population covariance matrix are equal to 0, and hence that the matrix is sparse. We develop an estimator to handle this situation. The estimator is shown to be consistent in operator norm when, for instance, we have p ≍ n as n → ∞. In other words, the largest singular value of the difference between the estimator and the population covariance matrix goes to zero. This implies consistency of all the eigenvalues and consistency of the eigenspaces associated to isolated eigenvalues. We also propose a notion of sparsity for matrices that is “compatible” with spectral analysis and is independent of the ordering of the variables.
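The estimator itself is not spelled out in the abstract; as a hedged stand-in, the sketch below hard-thresholds the sample covariance entrywise at the usual √(log p / n) scale, the generic recipe for operator-norm-consistent estimation of sparse covariance matrices. The threshold constant and the tridiagonal test model are illustrative assumptions:

```python
# Sketch: entrywise hard thresholding of the sample covariance at the
# sqrt(log p / n) scale; constant and test model are assumptions.
import numpy as np

def threshold_cov(X, const=1.0):
    n, p = X.shape
    S = np.cov(X, rowvar=False)           # p x p sample covariance
    t = const * np.sqrt(np.log(p) / n)    # threshold level
    S_hat = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(S_hat, np.diag(S))   # keep the diagonal untouched
    return S_hat

rng = np.random.default_rng(3)
n, p = 400, 200
Sigma = np.eye(p) + 0.4 * np.eye(p, k=1) + 0.4 * np.eye(p, k=-1)  # sparse truth
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
err = np.linalg.norm(threshold_cov(X) - Sigma, 2)  # operator (spectral) norm
print("operator-norm error:", err)
```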
Tracy–Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices
Ann. Probab., 2007. Cited by 68 (7 self).
We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n × p matrix, and let its rows be i.i.d. complex normal vectors with mean 0 and covariance Σ_p. We show that for a large class of covariance matrices Σ_p, the largest eigenvalue of X*X is asymptotically distributed (after recentering and rescaling) as the Tracy–Widom distribution that appears in the study of the Gaussian unitary ensemble. We give explicit formulas for the centering and scaling sequences that are easy to implement and involve only the spectral distribution of the population covariance, n, and p. The main theorem applies to a number of covariance models found in applications. For example, well-behaved Toeplitz matrices, as well as covariance matrices whose spectral distribution is a sum of atoms (under some conditions on the mass of the atoms), are among the models the theorem can handle. Generalizations of the theorem to certain spiked versions of our models and a.s. results about the largest eigenvalue are given. We also discuss a simple corollary that does not require normality of the entries of the data matrix, and some consequences for applications in multivariate statistics.
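The paper's centering and scaling sequences depend on the spectral distribution of Σ_p and are not reproduced in the abstract; the sketch below illustrates the statement in the white special case Σ_p = I, where the classical complex-Wishart choices μ = (√n + √p)² and σ = (√n + √p)(n^(−1/2) + p^(−1/2))^(1/3) apply:

```python
# Largest eigenvalue of a white complex Wishart matrix, recentered and
# rescaled with the classical Sigma_p = I formulas (a special case only).
import numpy as np

rng = np.random.default_rng(4)
n, p, reps = 400, 100, 200

mu = (np.sqrt(n) + np.sqrt(p)) ** 2
sigma = (np.sqrt(n) + np.sqrt(p)) * (1 / np.sqrt(n) + 1 / np.sqrt(p)) ** (1 / 3)

def top_eig():
    X = (rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))) / np.sqrt(2)
    return np.linalg.eigvalsh(X.conj().T @ X).max()

tw = np.array([(top_eig() - mu) / sigma for _ in range(reps)])
# The Tracy-Widom (GUE) law has mean about -1.77 and sd about 0.90.
print("sample mean/sd:", tw.mean(), tw.std())
```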
SPECTRUM ESTIMATION FOR LARGE DIMENSIONAL COVARIANCE MATRICES USING RANDOM MATRIX THEORY
Submitted to the Annals of Statistics. Cited by 66 (4 self).
Estimating the eigenvalues of a population covariance matrix from a sample covariance matrix is a problem of fundamental importance in multivariate statistics; the eigenvalues of covariance matrices play a key role in many widely used techniques, in particular in Principal Component Analysis (PCA). In many modern data analysis problems, statisticians are faced with large datasets where the sample size n is of the same order of magnitude as the number of variables p. Random matrix theory predicts that in this context, the eigenvalues of the sample covariance matrix are not good estimators of the eigenvalues of the population covariance. We propose to use a fundamental result in random matrix theory, the Marčenko–Pastur equation, to better estimate the eigenvalues of large-dimensional covariance matrices. The Marčenko–Pastur equation holds in wide generality and under weak assumptions. The estimator we obtain can be thought of as “shrinking” the eigenvalues of the sample covariance matrix in a nonlinear fashion to estimate the population eigenvalues. Inspired by ideas from random matrix theory, we also suggest a change of point of view when thinking about estimation of high-dimensional vectors: we do not try to estimate the vectors directly, but rather a probability measure that describes them. We think this is a theoretically more fruitful way to think about these problems. In extended simulations our estimator is fast and gives good or very good results. Our algorithmic approach is based on convex optimization. We also show that the proposed estimator is consistent.
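A rough sketch of the idea under simplifying assumptions: discretize the population spectrum as atoms on a grid, evaluate the empirical companion Stieltjes transform at a few points off the real axis, and fit nonnegative grid weights through the Marčenko–Pastur (Silverstein) equation. The grid, the evaluation points, and the nonnegative least-squares fit (in place of the paper's convex program) are all assumptions of this example:

```python
# Grid-based inversion of the Marcenko-Pastur equation
#   z = -1/v(z) + gamma * sum_k w_k t_k / (1 + t_k v(z)),
# linear in the weights w once v(z) is estimated from the data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n, p = 2000, 500
gamma = p / n

pop = np.repeat([1.0, 3.0], p // 2)              # true spectrum: half 1, half 3
X = rng.normal(size=(n, p)) * np.sqrt(pop)
lam = np.linalg.eigvalsh(X.T @ X / n)            # sample eigenvalues

grid = np.linspace(0.1, 5.0, 50)                 # candidate atoms t_k
zs = np.linspace(0.5, 6.0, 40) + 0.1j            # evaluation points, Im z > 0

rows, rhs = [], []
for z in zs:
    m = np.mean(1.0 / (lam - z))                 # Stieltjes transform of F
    v = -(1 - gamma) / z + gamma * m             # companion transform
    rows.append(gamma * grid / (1 + grid * v))   # MP equation, linear in w
    rhs.append(z + 1 / v)

A = np.vstack([np.vstack(rows).real, np.vstack(rows).imag])
b = np.concatenate([np.real(rhs), np.imag(rhs)])
A = np.vstack([A, 100.0 * np.ones(len(grid))])   # soft sum-to-one constraint
b = np.append(b, 100.0)
w, _ = nnls(A, b)

print("estimated mass near 1 and 3:",
      w[np.abs(grid - 1) < 0.3].sum(), w[np.abs(grid - 3) < 0.3].sum())
```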
The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices
2011.
On the Capacity Achieving Covariance Matrix for Rician MIMO Channels: An Asymptotic Approach
2008. Cited by 43 (19 self).
In this contribution, the capacity-achieving input covariance matrices for coherent block-fading correlated MIMO Rician channels are determined. In contrast with the Rayleigh and uncorrelated Rician cases, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. Classically, both the eigenvectors and eigenvalues are computed by numerical techniques. As the corresponding optimization algorithms are not very attractive, an approximation of the average mutual information is evaluated in this paper in the asymptotic regime where the numbers of transmit and receive antennas converge to +∞ at the same rate. New results on the accuracy of the corresponding large-system approximation are provided. An attractive optimization algorithm for this approximation is proposed, and we establish that it yields an effective way to compute the capacity-achieving covariance matrix for the average mutual information. Finally, numerical simulation results show that, even for a moderate number of transmit and receive antennas, the new approach provides the same results as direct maximization of the average mutual information, while being much more computationally attractive.
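The optimization algorithm itself is beyond a short sketch; the code below only evaluates the objective being optimized, the average mutual information E[log det(I + ρ H Q H*)], by Monte Carlo for a fixed input covariance Q. The Rician K-factor, line-of-sight matrix, SNR normalization, and antenna counts are invented for the example:

```python
# Monte Carlo estimate of the average mutual information of a Rician
# MIMO channel for a given input covariance Q (all parameters assumed).
import numpy as np

rng = np.random.default_rng(6)
t, r = 4, 4                              # transmit / receive antennas
rho, kappa = 10.0, 2.0                   # SNR and Rician K-factor
H_bar = np.ones((r, t)) / np.sqrt(t)     # toy line-of-sight component
Q = np.eye(t) / t                        # uniform power, trace(Q) = 1

def avg_mutual_info(reps=5000):
    acc = 0.0
    for _ in range(reps):
        W = (rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))) / np.sqrt(2)
        H = np.sqrt(kappa / (kappa + 1)) * H_bar + np.sqrt(1 / (kappa + 1)) * W
        _, logdet = np.linalg.slogdet(np.eye(r) + rho * H @ Q @ H.conj().T)
        acc += logdet / np.log(2)        # bits per channel use
    return acc / reps

print("average mutual information:", avg_mutual_info())
```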
Fluctuations of the extreme eigenvalues of finite rank deformations of random matrices
arXiv preprint arXiv:1009.0145, 2010. Cited by 43 (5 self).
Consider a deterministic self-adjoint matrix X_n whose spectral measure converges to a compactly supported probability measure, with the largest and smallest eigenvalues converging to the edges of the limiting measure. We perturb this matrix by adding a random finite rank matrix with delocalized eigenvectors and study the extreme eigenvalues of the deformed model. We give necessary conditions on the deterministic matrix X_n so that the eigenvalues converging out of the bulk exhibit Gaussian fluctuations, whereas the eigenvalues sticking to the edges are very close to the eigenvalues of the unperturbed model and fluctuate on the same scale. We generalize these results to the case when X_n is random, and obtain similar behavior when we deform some classical models, such as Wigner or Wishart matrices with rather general entries or the so-called matrix models.
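A small experiment in the spirit of the abstract, for an additive rank-one deformation of a Wigner matrix with a delocalized eigenvector: below the critical strength the top eigenvalue sticks to the bulk edge 2, while above it the eigenvalue separates to about θ + 1/θ. The Wigner model and the values of θ are illustrative choices:

```python
# Rank-one deformation of a Wigner matrix: sticking vs. separation of
# the top eigenvalue (the BBP-type transition at theta = 1).
import numpy as np

rng = np.random.default_rng(7)
n = 1000
G = rng.normal(size=(n, n))
X = (G + G.T) / np.sqrt(2 * n)           # Wigner, semicircle edge at 2
v = np.ones(n) / np.sqrt(n)              # delocalized unit vector

for theta in (0.5, 1.0, 2.0, 4.0):
    top = np.linalg.eigvalsh(X + theta * np.outer(v, v)).max()
    print(f"theta={theta}: top eigenvalue {top:.3f}, "
          f"prediction {max(2.0, theta + 1 / theta):.3f}")
```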