Results 1–10 of 29
Phase transition in limiting distributions of coherence of high-dimensional random matrices
, 2012
Abstract

Cited by 19 (5 self)
The coherence of a random matrix, defined as the largest magnitude of the Pearson correlation coefficients between the columns of the matrix, is an important quantity for a wide range of applications including high-dimensional statistics and signal processing. Inspired by these applications, this paper studies the limiting laws of the coherence of n × p random matrices for a full range of the dimension p, with a special focus on the ultra high-dimensional setting. Assuming the columns of the random matrix are independent random vectors with a common spherical distribution, we give a complete characterization of the behavior of the limiting distributions of the coherence. More specifically, the limiting distributions of the coherence are derived separately for three regimes: (1/n) log p → 0, (1/n) log p → β ∈ (0, ∞), and (1/n) log p → ∞. The results show that the limiting behavior of the coherence differs significantly across these regimes and exhibits interesting phase-transition phenomena as the dimension p grows as a function of n. Applications to statistics and compressed sensing in the ultra high-dimensional setting are also discussed.
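The coherence statistic this abstract studies is straightforward to compute directly; a minimal NumPy sketch (the helper name `coherence` and the specific dimensions are ours, not the paper's):

```python
import numpy as np

def coherence(X):
    """Largest |Pearson correlation| between distinct columns of X (n x p)."""
    R = np.corrcoef(X, rowvar=False)   # p x p sample correlation matrix
    np.fill_diagonal(R, 0.0)           # exclude the trivial diagonal entries
    return np.abs(R).max()

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))     # n = 50 observations, p = 200 columns
L = coherence(X)                       # strictly between 0 and 1 for continuous data
```

Even with independent columns, the maximum over roughly p²/2 correlations is far from zero when p is large relative to n, which is the phenomenon whose limiting law the paper characterizes regime by regime.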
Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions
Abstract

Cited by 10 (3 self)
For random samples of size n obtained from p-variate normal distributions, we consider the classical likelihood ratio tests (LRT) for their means and covariance matrices in the high-dimensional setting. These test statistics have been extensively studied in multivariate analysis, and their limiting distributions under the null hypothesis were proved to be chi-square distributions as n goes to infinity with p fixed. In this paper, we consider the high-dimensional case where both p and n go to infinity with p/n → y ∈ (0, 1]. We prove that the likelihood ratio test statistics under this assumption converge in distribution to normal distributions with explicit means and variances. We carry out a simulation study to show that likelihood ratio tests using our central limit theorems outperform those using the traditional chi-square approximations for analyzing high-dimensional data.
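One standard member of the family of tests discussed above is the LRT for H0: Σ = I with Gaussian data and known zero mean, whose statistic is −2 log Λ = n (tr S − log det S − p) with S the MLE covariance; a sketch (the helper name is ours, and this is a single illustrative test, not the paper's full collection):

```python
import numpy as np

def lrt_identity_cov(X):
    """-2 log Lambda for H0: Sigma = I (Gaussian rows, mean known to be 0).
    Equals n * (tr(S) - log det(S) - p), S = X'X / n the MLE covariance."""
    n, p = X.shape
    S = X.T @ X / n
    sign, logdet = np.linalg.slogdet(S)        # stable log-determinant
    return n * (np.trace(S) - logdet - p)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))              # H0 true; classical df = p(p+1)/2
stat = lrt_identity_cov(X)
```

Since x − log x − 1 ≥ 0 for every eigenvalue x of S, the statistic is always nonnegative; when p grows with n, the classical χ² approximation with df = p(p+1)/2 degrades, which motivates the normal limits of the paper.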
Optimal hypothesis testing for high-dimensional covariance matrices
 Bernoulli
, 2013
Estimating Sparse Precision Matrix: Optimal Rates of Convergence and Adaptive Estimation
Abstract

Cited by 7 (1 self)
The precision matrix is of significant importance in a wide range of applications in multivariate analysis. This paper considers adaptive minimax estimation of sparse precision matrices in the high-dimensional setting. Optimal rates of convergence are established for a range of matrix norm losses. A fully data-driven estimator based on adaptive constrained ℓ1 minimization is proposed, and its rate of convergence is obtained over a collection of parameter spaces. The estimator, called ACLIME, is easy to implement and performs well numerically. A major step in establishing the minimax rate of convergence is the derivation of a rate-sharp lower bound. A "two-directional" lower bound technique is applied to obtain the minimax lower bound. The upper and lower bounds together yield the optimal rates of convergence for sparse precision matrix estimation and show that the ACLIME estimator is adaptively minimax rate-optimal for a collection of parameter spaces and a range of matrix norm losses simultaneously.
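Part of why sparse precision matrices matter is that, for Gaussian data, a zero entry Ω_ij = 0 encodes conditional independence of variables i and j, while the covariance Σ = Ω⁻¹ is typically dense; a small sketch of that contrast (illustrative only, not the ACLIME procedure itself):

```python
import numpy as np

# A tridiagonal (sparse) precision matrix: each variable depends directly
# only on its neighbors, an AR(1)-type conditional dependence graph.
p = 5
Omega = 2.0 * np.eye(p)
for i in range(p - 1):
    Omega[i, i + 1] = Omega[i + 1, i] = -0.8

Sigma = np.linalg.inv(Omega)   # the covariance is dense despite Omega's sparsity
```

Estimating Ω by inverting a sample covariance fails when p is comparable to n (the sample covariance is singular or ill-conditioned), which is why constrained ℓ1 approaches such as ACLIME estimate the sparse Ω directly.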
Approximation of Rectangular Beta-Laguerre Ensembles and Large Deviations
Abstract

Cited by 3 (1 self)
Let λ1, …, λn be random eigenvalues coming from the beta-Laguerre ensemble with parameter p, which is a generalization of the real, complex, and quaternion Wishart matrices of parameter (n, p). In the case that the sample size n is much smaller than the dimension of the population distribution p, a common situation in modern data, we approximate the beta-Laguerre ensemble by a beta-Hermite ensemble, which is a generalization of the real, complex, and quaternion Wigner matrices. As corollaries, when n is much smaller than p, we show that the largest and smallest eigenvalues of the complex Wishart matrix are asymptotically independent; we obtain the limiting distribution of the condition numbers as a sum of two i.i.d. random variables with a Tracy–Widom distribution, which differs markedly from the exact-square case n = p studied by Edelman (1988); and we propose a test procedure for a spherical hypothesis test. By the same approximation tool, we obtain the asymptotic distribution of the smallest eigenvalue of the beta-Laguerre ensemble. In the second part of the paper, under the assumption that n is much smaller than p in a certain scale, we prove large deviation principles for three basic statistics: the largest eigenvalue, the smallest eigenvalue, and the empirical distribution of λ1, …, λn, where the last large deviation is derived using a non-standard method.
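The beta-Laguerre ensemble referred to above can be sampled without forming any Wishart matrix, via the Dumitriu–Edelman bidiagonal model; a sketch (the function name and the exact chi degrees of freedom are our reading of the β = 1 real-Wishart convention, so treat the parameterization as an assumption):

```python
import numpy as np

def beta_laguerre_eigs(n, p, beta=1.0, rng=None):
    """Sample eigenvalues of the (n, p) beta-Laguerre ensemble via the
    Dumitriu-Edelman bidiagonal model; beta = 1 recovers the real Wishart case."""
    rng = np.random.default_rng() if rng is None else rng
    a = beta * p / 2.0
    # Lower-bidiagonal B: diagonal chi_{2a - i*beta} (i = 0..n-1),
    # subdiagonal chi_{(n-1-i)*beta} (i = 0..n-2); chi = sqrt(chi-square).
    diag = np.sqrt(rng.chisquare(2.0 * a - beta * np.arange(n)))
    sub = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    B = np.diag(diag) + np.diag(sub, k=-1)
    return np.linalg.eigvalsh(B @ B.T)          # nonnegative, ascending order

eigs = beta_laguerre_eigs(n=10, p=200, rng=np.random.default_rng(0))
```

The n = 10, p = 200 choice mirrors the paper's regime of interest (n much smaller than p), where all n eigenvalues concentrate near p and the beta-Hermite approximation becomes effective.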
Distributions of Eigenvalues of Large Euclidean Matrices Generated from Three Manifolds
Abstract

Cited by 2 (0 self)
Let x1, …, xn be points randomly chosen from a set G ⊂ ℝ^p and let f(x) be a function. A special Euclidean random matrix is given by Mn = (f(‖xi − xj‖²))n×n. When p is fixed and n → ∞, we prove that µ̂(Mn), the empirical distribution of the eigenvalues of Mn, converges to δ0 for a large class of functions f(x). Assuming both p and n go to infinity with n/p → y ∈ (0, ∞), we obtain the explicit limit of µ̂(Mn) when G is the unit sphere S^{p−1} or the unit ball Bp(0, 1), and the explicit limit of µ̂((Mn − ap In)/bp) for G = [0, 1]^p, where ap and bp are constants. As corollaries, we obtain the limit of µ̂(An) with An = (d(xi, xj))n×n and d being the geodesic distance on S^{p−1}. We also obtain the limit of µ̂(An) for the Euclidean distance matrix An = (‖xi − xj‖)n×n when G is S^{p−1} or Bp(0, 1). The limits are the law of a + bV, where a and b are explicit constants and V follows the Marčenko–Pastur law. The same results are also obtained for other examples, including (exp(−λ²‖xi − xj‖^γ))n×n and (exp(−λ² d(xi, xj)^γ))n×n.
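A matrix of the form Mn = (f(‖xi − xj‖²)) is easy to simulate, e.g. for points on the unit sphere with f(x) = exp(−x); a sketch of building Mn and its empirical spectrum (helper names ours):

```python
import numpy as np

def euclidean_kernel_matrix(X, f):
    """M_n = ( f(||x_i - x_j||^2) )_{n x n} for the rows x_i of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return f(sq)

rng = np.random.default_rng(0)
n, p = 200, 100                                     # n/p -> y = 2 regime
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # uniform points on S^{p-1}
M = euclidean_kernel_matrix(X, lambda s: np.exp(-s))
spec = np.linalg.eigvalsh(M)                        # empirical spectrum, cf. mu-hat(M_n)
```

Since f(0) = 1, the diagonal of M is identically 1; the paper's result describes the limit of the eigenvalue histogram of such matrices as n and p grow proportionally.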