Results 1–10 of 49
Fluctuations of eigenvalues and second order Poincaré inequalities
, 2007
Cited by 49 (5 self)
Linear statistics of eigenvalues in many familiar classes of random matrices are known to obey gaussian central limit theorems. The proofs of such results are usually rather difficult, involving hard computations specific to the model in question. In this article we attempt to formulate a unified technique for deriving such results via relatively soft arguments. In the process, we introduce a notion of ‘second order Poincaré inequalities’: just as ordinary Poincaré inequalities give variance bounds, second order Poincaré inequalities give central limit theorems. The proof of the main result employs Stein’s method of normal approximation. A number of examples are worked out; some of them are new. One of the new results is a CLT for the spectrum of gaussian Toeplitz matrices.
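A hypothetical numerical illustration of the phenomenon the abstract describes (a toy sketch, not taken from the paper): the variance of a linear eigenvalue statistic of a Wigner matrix stays O(1) as the dimension grows, which is why a CLT without normalization by sqrt(n) can hold. The matrix model and the statistic f(x) = x^2 are choices made here for simplicity.

```python
import numpy as np

def linear_statistic_variance(n, trials=300, seed=0):
    """Monte Carlo variance of Tr(W^2) = sum_i lambda_i^2 for a symmetric
    Wigner matrix W with entries of variance 1/n."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        w = (a + a.T) / np.sqrt(2 * n)   # symmetric Wigner, entries ~ N(0, 1/n)
        stats.append(np.trace(w @ w))    # linear statistic with f(x) = x^2
    return float(np.var(stats))

# The variance does not grow with n (contrast with a sum of n iid variables,
# whose variance would scale linearly in n):
v_small = linear_statistic_variance(50)
v_large = linear_statistic_variance(200)
```

Both variances stay near the same O(1) constant, so the fluctuations of the statistic are already on the scale of a central limit theorem without any rescaling.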
Vector diffusion maps and the connection Laplacian
Comm. Pure Appl. Math.
Cited by 48 (13 self)
Abstract. We introduce vector diffusion maps (VDM), a new mathematical framework for organizing and analyzing massive high dimensional data sets, images and shapes. VDM is a mathematical and algorithmic generalization of diffusion maps and other nonlinear dimensionality reduction methods, such as LLE, ISOMAP and Laplacian eigenmaps. While existing methods are either directly or indirectly related to the heat kernel for functions over the data, VDM is based on the heat kernel for vector fields. VDM provides tools for organizing complex data sets, embedding them in a low dimensional space, and interpolating and regressing vector fields over the data. In particular, it equips the data with a metric, which we refer to as the vector diffusion distance. In the manifold learning setup, where the data set is distributed on (or near) a low dimensional manifold M^d embedded in R^p, we prove the relation between VDM and the connection Laplacian operator for vector fields over the manifold. Key words. Dimensionality reduction, vector fields, heat kernel, parallel transport, local principal component analysis, alignment.
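A minimal sketch of the scalar diffusion-map construction that VDM generalizes (this is the base method named in the abstract, not the VDM algorithm itself; the bandwidth and toy data are choices made here):

```python
import numpy as np

def diffusion_map(points, epsilon, n_components=2):
    """Embed `points` (n x p) via the leading non-trivial eigenvectors of the
    row-normalized Gaussian kernel, a discrete heat-kernel approximation."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    k = np.exp(-d2 / epsilon)             # Gaussian affinity matrix
    p = k / k.sum(axis=1, keepdims=True)  # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)        # sort eigenvalues descending
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1); weight the rest
    # by their eigenvalues to get diffusion coordinates
    return vals, vecs[:, 1:1 + n_components] * vals[1:1 + n_components]

# Toy example: points on a circle; the embedding reflects the circular geometry.
theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
vals, emb = diffusion_map(circle, epsilon=0.3)
```

VDM replaces the scalar kernel entries above by orthogonal matrices obtained from local PCA and alignment, so that the operator diffuses vector fields rather than functions.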
Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples
, 2007
Multivariate analysis and Jacobi ensembles: Largest eigenvalue, Tracy–Widom limits and rates of convergence
Ann. Statist.
, 2008
Cited by 30 (2 self)
Let A and B be independent, central Wishart matrices in p variables with common covariance and having m and n degrees of freedom, respectively. The distribution of the largest eigenvalue of (A+B)^{-1}B has numerous applications in multivariate statistics, but is difficult to calculate exactly. Suppose that m and n grow in proportion to p. We show that after centering and scaling, the distribution is approximated to second order, O(p^{-2/3}), by the Tracy–Widom law. The results are obtained for both complex and then real-valued data by using methods of random matrix theory to study the largest eigenvalue of the Jacobi unitary and orthogonal ensembles. Asymptotic approximations of Jacobi polynomials near the largest zero play a central role.
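A toy numerical sketch of the statistic studied in the abstract (an illustration with dimensions chosen here, not the paper's method): the greatest-root statistic lambda_max((A+B)^{-1}B) for independent Wishart matrices, which always lies in (0, 1) and whose centered and scaled fluctuations the paper shows to be Tracy–Widom.

```python
import numpy as np

def greatest_root(p, m, n, seed=0):
    """Largest eigenvalue of (A+B)^{-1} B with A ~ W_p(I, m), B ~ W_p(I, n)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((m, p))
    y = rng.standard_normal((n, p))
    a, b = x.T @ x, y.T @ y                  # independent central Wisharts
    # eigenvalues of (A+B)^{-1} B, i.e. solutions of B v = theta (A+B) v
    theta = np.linalg.eigvals(np.linalg.solve(a + b, b))
    return float(np.max(theta.real))

theta_max = greatest_root(p=20, m=80, n=60)
```

Since A and B are almost surely positive definite, every eigenvalue of (A+B)^{-1}B falls strictly between 0 and 1, which is the compact support that distinguishes the Jacobi ensembles from the Laguerre (Wishart) case.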
Universality results for the largest eigenvalues of some sample covariance matrix ensembles
Probab. Theory Related Fields
, 2009
Limits of spiked random matrices
, 2013
Cited by 17 (2 self)
Given a large, high-dimensional sample from a spiked population, the top sample covariance eigenvalue is known to exhibit a phase transition. We show that the largest eigenvalues have asymptotic distributions near the phase transition in the rank one spiked real Wishart setting and its general β analogue, proving a conjecture of Baik, Ben Arous and Péché (2005). We also treat shifted mean Gaussian orthogonal and β ensembles. Such results are entirely new in the real case; in the complex case we strengthen existing results by providing optimal scaling assumptions. One obtains the known limiting random Schrödinger operator on the half-line, but the boundary condition now depends on the perturbation. We derive several characterizations of the limit laws in which β appears as a parameter, including a simple linear boundary value problem. This PDE description recovers known explicit formulas at β = 2, 4, yielding in particular a new and simple proof of the Painlevé representations for these ...
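A toy simulation of the phase transition described above (an illustrative sketch with parameter values chosen here, not taken from the paper): a single population spike separates the top sample covariance eigenvalue from the Marchenko–Pastur bulk edge once the spike is strong enough.

```python
import numpy as np

def top_sample_eigenvalue(p, n, spike=None, seed=0):
    """Largest eigenvalue of the sample covariance X^T X / n, where the rows
    of X are N(0, Sigma) with Sigma = I plus an optional rank-one spike."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, p))
    if spike is not None:
        x[:, 0] *= np.sqrt(spike)        # one population eigenvalue = spike
    s = x.T @ x / n
    return float(np.linalg.eigvalsh(s)[-1])  # eigvalsh returns ascending order

p, n = 200, 400                           # aspect ratio gamma = p/n = 0.5
bulk_edge = (1 + np.sqrt(p / n)) ** 2     # Marchenko–Pastur edge, about 2.91
null_top = top_sample_eigenvalue(p, n)              # no spike: sticks to edge
spiked_top = top_sample_eigenvalue(p, n, spike=5)   # supercritical spike escapes
```

Below the critical spike strength the top eigenvalue stays at the bulk edge with Tracy–Widom fluctuations; above it, the eigenvalue detaches, and the transition regime in between is where the interpolating limit laws of this paper apply.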