Results 1–10 of 85
Why Gabor frames? Two fundamental measures of coherence and their role in model selection
 J. Commun. Netw.
, 2010
How close is the sample covariance matrix to the actual covariance matrix
 Journal of Theoretical Probability
, 2010
Cited by 33 (3 self)
Abstract. Given a probability distribution in R^n with general (non-white) covariance, a classical estimator of the covariance matrix is the sample covariance matrix obtained from a sample of N independent points. What is the optimal sample size N = N(n) that guarantees estimation with fixed accuracy in the operator norm? Suppose the distribution is supported in a centered Euclidean ball of radius O(√n). We conjecture that the optimal sample size is N = O(n) for all distributions with finite fourth moment, and we prove this up to an iterated logarithmic factor. This problem is motivated by a theorem of M. Rudelson [23], which states that N = O(n log n) suffices for distributions with finite second moment, and a recent result of R. Adamczak et al. [1], which guarantees that N = O(n) suffices for subexponential distributions.
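The estimation question in this abstract is easy to illustrate numerically. The sketch below (dimensions, spectrum, and Gaussian sampling are illustrative choices of ours, not from the paper) compares the sample covariance matrix of N points to the true covariance in the operator norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 500  # dimension and sample size (illustrative choices)

# A non-white true covariance: diagonal with a decaying spectrum.
true_cov = np.diag(1.0 / np.arange(1, n + 1))

# Draw N independent points with this covariance (Gaussian for simplicity).
X = rng.multivariate_normal(np.zeros(n), true_cov, size=N)

# Sample covariance matrix (mean is known to be zero here).
sample_cov = X.T @ X / N

# Operator-norm (spectral-norm) estimation error.
err = np.linalg.norm(sample_cov - true_cov, ord=2)
print(f"operator-norm error with N={N}, n={n}: {err:.3f}")
```

Rerunning with N growing proportionally to n shows the error staying at a fixed accuracy, consistent with the N = O(n) regime the abstract conjectures.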
Lasso and probabilistic inequalities for multivariate point processes
 Submitted to Bernoulli
von Neumann entropy penalization and low rank matrix approximation.
, 2010
Cited by 19 (2 self)
Abstract. We study the problem of estimating a Hermitian nonnegatively definite matrix ρ of unit trace (for instance, a density matrix of a quantum system) from n i.i.d. measurements (X_1, Y_1), ..., (X_n, Y_n), where the X_j are i.i.d. random Hermitian matrices and the ξ_j are i.i.d. random variables with E(ξ_j X_j) = 0. An estimator over the set S of all nonnegatively definite Hermitian m × m matrices of trace 1 is considered. The goal is to derive oracle inequalities showing how the estimation error depends on the accuracy of approximation of the unknown state ρ by low-rank matrices.
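As a rough illustration of the constraint set S in this abstract, one can map an arbitrary Hermitian matrix into the nonnegatively definite trace-1 matrices. This is a simple sketch of ours (eigenvalue clipping plus renormalization), not the paper's penalized estimator:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 4

# A hypothetical true density matrix: Hermitian, PSD, trace 1.
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
rho = A @ A.conj().T
rho /= np.trace(rho).real

def project_density(S):
    """Map a matrix into the set S of PSD trace-1 Hermitian matrices by
    symmetrizing, clipping negative eigenvalues, and renormalizing the
    trace (a crude sketch, not the paper's von Neumann entropy penalty)."""
    w, V = np.linalg.eigh((S + S.conj().T) / 2)
    w = np.clip(w, 0, None)
    S_psd = (V * w) @ V.conj().T
    return S_psd / np.trace(S_psd).real

# A noisy observation of rho, pushed back into the feasible set.
noisy = rho + 0.05 * rng.standard_normal((m, m))
est = project_density(noisy)
print("trace of projected estimate:", np.trace(est).real)
```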
LSTD with random projections
 In Advances in Neural Information Processing Systems
, 2010
Cited by 17 (5 self)
We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is larger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a low-dimensional space is generated by a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections is propagated through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
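The projected LSTD setup described above can be sketched as follows. The feature matrices, discount factor, and dimensions are hypothetical stand-ins, and the linear system A w = b is the standard LSTD fixed-point equation, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, T = 200, 20, 1000  # high-dim features, projected dim, samples (illustrative)
gamma = 0.95             # discount factor (assumed)

# Hypothetical high-dimensional features of states and next states, plus rewards.
Phi = rng.standard_normal((T, D))
Phi_next = rng.standard_normal((T, D))
r = rng.standard_normal(T)

# Random projection to a d-dimensional space, as in the abstract's setting.
P = rng.standard_normal((D, d)) / np.sqrt(d)
Psi, Psi_next = Phi @ P, Phi_next @ P

# LSTD in the projected space: solve A w = b with
#   A = Psi^T (Psi - gamma * Psi_next),  b = Psi^T r.
A = Psi.T @ (Psi - gamma * Psi_next)
b = Psi.T @ r
w = np.linalg.solve(A, b)
print("projected LSTD weight vector shape:", w.shape)
```

The point of the projection is that the d × d system above is solvable and stable even when D far exceeds the number of samples T.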
Two are better than one: Fundamental parameters of frame coherence
, 2011
Cited by 16 (8 self)
This paper investigates two parameters that measure the coherence of a frame: worst-case and average coherence. We first use worst-case and average coherence to derive near-optimal probabilistic guarantees on both sparse signal detection and reconstruction in the presence of noise. Next, we provide a catalog of nearly tight frames with small worst-case and average coherence. We then derive a new lower bound on worst-case coherence, compare it to the Welch bound, and use it to interpret recently reported signal reconstruction results. Finally, we give an algorithm that transforms frames in a way that decreases average coherence without changing the spectral norm or worst-case coherence.
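Both parameters can be computed directly from a frame's Gram matrix. In this sketch the frame is random, and the average-coherence formula follows one common definition, which may differ in constants or details from the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 20  # a frame of n unit-norm vectors in R^m (illustrative sizes)

F = rng.standard_normal((m, n))
F /= np.linalg.norm(F, axis=0)  # normalize columns to unit norm

G = F.T @ F          # Gram matrix; off-diagonal entries are pairwise inner products
off = G - np.eye(n)  # zero out the unit diagonal

# Worst-case coherence: largest pairwise |inner product|.
mu = np.abs(off).max()

# Average coherence (one common definition): max over i of the averaged
# signed off-diagonal inner products, scaled by 1/(n-1).
nu = np.abs(off.sum(axis=1)).max() / (n - 1)

print(f"worst-case coherence mu = {mu:.3f}, average coherence nu = {nu:.3f}")
```

By construction nu never exceeds mu; the paper's point is that guarantees improve when both are controlled, not just the worst case.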
Invertibility of random matrices: unitary and orthogonal transformation
 Journal of the AMS
Cited by 12 (4 self)
Abstract. We show that a perturbation of any fixed square matrix D by a random unitary matrix is well invertible with high probability. A similar result holds for perturbations by random orthogonal matrices; the only notable exception is when D is close to orthogonal. As an application, these results completely eliminate a hard-to-check condition from the Single Ring Theorem by Guionnet, Krishnapur and Zeitouni.
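A quick numerical check of the phenomenon (our own toy example, not from the paper): perturb a singular fixed matrix D by a Haar-random unitary and look at the smallest singular value of the sum, which stays well away from zero:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50

# A singular fixed matrix: identity with one zero singular value.
D = np.eye(n)
D[0, 0] = 0.0

# Haar-random unitary via QR of a complex Ginibre matrix,
# with the phases of R's diagonal fixed to get the Haar measure.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
Q = Q * (np.diag(R) / np.abs(np.diag(R)))

# Smallest singular value of the perturbed matrix D + U.
s_min = np.linalg.svd(D + Q, compute_uv=False).min()
print(f"smallest singular value of D + U: {s_min:.3f}")
```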
Random matrices: Universality of local spectral statistics of nonHermitian matrices
, 2013
Cited by 11 (1 self)
It is a classical result of Ginibre that the normalized bulk k-point correlation functions of a complex n × n Gaussian matrix with independent entries of mean zero and unit variance are asymptotically given by the determinantal point process on C with kernel K∞(z, w) := (1/π) e^(−|z|²/2 − |w|²/2 + z w̄) in the limit n → ∞. In this paper we show that this asymptotic law is universal among all random n × n matrices M_n whose entries are jointly independent, exponentially decaying, have independent real and imaginary parts, and whose moments match those of the complex Gaussian ensemble to fourth order. Analogous results at the edge of the spectrum are also obtained. As an application, we extend a central limit theorem for the number of eigenvalues of complex Gaussian matrices in a small disk to these more general ensembles. These results are non-Hermitian analogues of some recent universality results for Hermitian Wigner matrices. However, a key new difficulty arises in the non-Hermitian case, due to the instability of the spectrum for such matrices.
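The global counterpart of this bulk statement, the circular law, is easy to observe numerically. This sketch (sizes and thresholds are our own choices) samples a complex Ginibre matrix and checks that its normalized eigenvalues fill the unit disk:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400

# Complex Ginibre matrix: i.i.d. complex Gaussian entries with
# independent real and imaginary parts, mean 0, unit variance overall.
M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# Normalized eigenvalues; by the circular law they fill the unit disk.
lam = np.linalg.eigvals(M) / np.sqrt(n)
frac_in_disk = np.mean(np.abs(lam) <= 1.05)
print(f"fraction of eigenvalues within radius 1.05: {frac_in_disk:.2f}")
```

The paper's universality result concerns the much finer local k-point correlations inside this disk, not just the global density sampled here.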
Low rank multivariate regression
 Electronic Journal of Statistics
, 2011
Cited by 10 (1 self)
Abstract. We consider the multivariate regression problem, when the target regression matrix A is close to a low rank matrix. Our primary interest is in the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion to select among a family of low rank estimators and to prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known, and show that the penalties appearing in our criteria are minimal (in some sense). These penalties involve the expected value of Ky Fan norms of certain random matrices. These quantities can be evaluated easily in practice, and upper bounds can be derived from recent results in random matrix theory.
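A standard member of the family of low rank estimators selected among here can be sketched by truncating the SVD of the least-squares fit. All dimensions and the noise level below are illustrative, and this is not the paper's selection criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q, r = 100, 10, 8, 2  # samples, predictors, responses, target rank

# Hypothetical data: Y = X A + noise, with A exactly rank r here.
A_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
X = rng.standard_normal((n, p))
Y = X @ A_true + 0.1 * rng.standard_normal((n, q))

# Ordinary least-squares fit, then keep the rank-r part of its fitted
# values via SVD truncation (classical reduced-rank regression).
A_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ A_ols, full_matrices=False)
Y_r = U[:, :r] * s[:r] @ Vt[:r]              # rank-r fitted values
A_r, *_ = np.linalg.lstsq(X, Y_r, rcond=None)  # rank-r coefficient estimate

print("shape of reduced-rank estimate:", A_r.shape)
```

The paper's criterion picks the rank r (and hence the estimator) from data when the noise variance is unknown; the sketch above fixes r by hand.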