Results 1–6 of 6
Optimal Rates of Convergence of Transelliptical Component Analysis
, 2013
Abstract

Cited by 3 (1 self)
Han and Liu (2012) proposed a method named transelliptical component analysis (TCA) for conducting scale-invariant principal component analysis on high dimensional data with transelliptical distributions. The transelliptical family assumes that the data follow an elliptical distribution after unspecified marginal monotone transformations. In a double asymptotic framework where the dimension d is allowed to increase with the sample size n, Han and Liu (2012) showed that one version of TCA attains a "nearly parametric" rate of convergence in parameter estimation when the parameter of interest is assumed to be sparse. This paper improves upon their results in two aspects: (i) under the non-sparse setting (i.e., the parameter of interest is not assumed to be sparse), we show that a version of TCA attains the optimal rate of convergence up to a logarithmic factor; (ii) under the sparse setting, we also lay out avenues to analyze the performance of the TCA estimator proposed in Han and Liu (2012). In particular, we provide a "sign subgaussian condition" which is sufficient for TCA to attain an improved rate of convergence, and we verify a subfamily of the transelliptical distributions satisfying this condition.
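The rank-based construction this abstract describes can be made concrete. The standard latent-correlation estimator in this line of work is the sine-transformed Kendall's tau, sin(π·τ̂/2), which is invariant to the unspecified monotone marginal transformations; the function name below is ours and this is an illustrative sketch, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import kendalltau

def transelliptical_corr(X):
    """Rank-based estimate of the latent correlation matrix of an (n, d) sample.

    Each entry is sin(pi/2 * tau_jk); because Kendall's tau depends only on
    ranks, the estimate is unchanged by monotone marginal transformations.
    """
    n, d = X.shape
    R = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(np.pi / 2 * tau)
    return R
```

For Gaussian data the transform sin(π·τ/2) recovers the Pearson correlation exactly in population, which is why rank-based procedures of this kind can match parametric rates.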
Flexible sampling of discrete data correlations without the marginal distributions
 In Adv. Neural Inform. Processing Systems
, 2013
Abstract

Cited by 1 (0 self)
Learning the joint dependence of discrete variables is a fundamental problem in machine learning, with many applications including prediction, clustering and dimensionality reduction. More recently, the framework of copula modeling has gained popularity due to its modular parameterization of joint distributions. Among other properties, copulas provide a recipe for combining flexible models for univariate marginal distributions with parametric families suitable for potentially high dimensional dependence structures. More radically, the extended rank likelihood approach of Hoff (2007) bypasses learning marginal models completely when such information is ancillary to the learning task at hand, as in, e.g., standard dimensionality reduction problems or copula parameter estimation. The main idea is to represent data by their observable rank statistics, ignoring any other information from the marginals. Inference is typically done in a Bayesian framework with Gaussian copulas, and it is complicated by the fact that this implies sampling within a space where the number of constraints increases quadratically with the number of data points. The result is slow mixing when using off-the-shelf Gibbs sampling. We present an efficient algorithm based on recent advances in constrained Hamiltonian Markov chain Monte Carlo that is simple to implement and does not require paying for a quadratic cost in sample size.
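For context on the sampling problem described above, a minimal sketch of the rank-constrained Gibbs step (the slow-mixing baseline that the paper's constrained-HMC approach replaces) might look as follows. This is our illustrative reading of Hoff's extended rank likelihood, simplified to a one-dimensional latent vector with standard normal marginals; the function name is ours.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep(z, y, rng):
    """One Gibbs sweep over a 1-D latent vector z given discrete observations y.

    The rank constraint is: y[i] < y[j] implies z[i] < z[j].  Each z[i] is
    therefore resampled from a standard normal truncated to the interval
    (max of latents below it, min of latents above it) -- the quadratically
    many pairwise constraints the abstract refers to.
    """
    z = z.copy()
    for i in range(len(z)):
        lo = max(z[y < y[i]], default=-np.inf)
        hi = min(z[y > y[i]], default=np.inf)
        z[i] = truncnorm.rvs(lo, hi, random_state=rng)
    return z
```

Because each coordinate can only move inside the narrow interval left by its neighbours' current values, successive sweeps are highly autocorrelated, which is the slow mixing the constrained HMC method is designed to avoid.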
Optimal Sparse Principal Component Analysis in High Dimensional Elliptical Model
, 2013
Abstract

Cited by 1 (1 self)
We propose a semiparametric sparse principal component analysis method named elliptical component analysis (ECA) for analyzing high dimensional non-Gaussian data. In particular, we assume the data follow an elliptical distribution. The elliptical family contains many well-known multivariate distributions, such as the multivariate Gaussian, multivariate t, Cauchy, Kotz, and logistic distributions. It allows extra flexibility in modeling heavy-tailed distributions and capturing tail dependence between variables. Such modeling flexibility makes it extremely useful for financial, genomic, and bioimaging data, where the data typically present heavy tails and high tail dependence. Under a double asymptotic framework where both the sample size n and the dimension d increase, we show that a multivariate rank-based ECA procedure attains the optimal rate of convergence in parameter estimation. This is the first optimality result established for sparse principal component analysis on high dimensional elliptical data.
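The multivariate rank-based scatter statistic commonly used in this setting, the multivariate Kendall's tau matrix, can be sketched as below. Under an elliptical model its eigenvectors coincide with those of the covariance matrix while being insensitive to heavy tails, which is what makes a rank-based PCA procedure of this kind possible. The function name is ours and the sketch is illustrative, not the paper's exact algorithm.

```python
import numpy as np

def multivariate_kendalls_tau(X):
    """U-statistic estimate of the multivariate Kendall's tau matrix.

    Averages the outer products of normalized pairwise differences
    (x_i - x_j)(x_i - x_j)^T / ||x_i - x_j||^2 over all pairs; each term
    has trace one, so the result has trace one as well.
    """
    n, d = X.shape
    K = np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            diff = X[i] - X[j]
            nrm2 = diff @ diff
            if nrm2 > 0:
                K += np.outer(diff, diff) / nrm2
    return K * 2.0 / (n * (n - 1))
```

A sparse PCA procedure would then extract (sparse) leading eigenvectors of this matrix rather than of the sample covariance.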
High dimensional semiparametric scale-invariant principal component analysis
, 2012
Abstract

Cited by 1 (1 self)
We propose a new high dimensional semiparametric principal component analysis (PCA) method, named Copula Component Analysis (COCA). The semiparametric model assumes that, after unspecified marginally monotone transformations, the distributions are multivariate Gaussian. COCA improves upon PCA and sparse PCA in three aspects: (i) it is robust to modeling assumptions; (ii) it is robust to outliers and data contamination; (iii) it is scale-invariant and yields more interpretable results. We prove that the COCA estimators obtain fast estimation rates and are feature selection consistent when the dimension is nearly exponentially large relative to the sample size. Careful experiments confirm that COCA outperforms sparse PCA on both synthetic and real-world datasets.
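The marginal step of a model of this kind can be sketched with a Gaussian rank (normal-score) transform: each column is pushed through its empirical CDF and the standard normal quantile function, making the result invariant to the unspecified monotone marginal transformations. This is an illustrative sketch, not COCA's full estimator, and the names are ours.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(X):
    """Column-wise Gaussian rank transform of an (n, d) sample.

    Ranks are scaled into (0, 1) with the n + 1 denominator so that the
    Gaussian quantile function stays finite at the extremes.
    """
    n = X.shape[0]
    U = rankdata(X, axis=0) / (n + 1)  # empirical CDF values in (0, 1)
    return norm.ppf(U)
```

After this transform, standard (sparse) PCA machinery can be applied to data whose marginals were arbitrary monotone deformations of Gaussians.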
Principal Component Analysis on non-Gaussian Dependent Data
Abstract

Cited by 1 (1 self)
In this paper, we analyze the performance of a semiparametric principal component analysis method named Copula Component Analysis (COCA) (Han & Liu, 2012) when the data are dependent. The semiparametric model assumes that, after unspecified marginally monotone transformations, the distributions are multivariate Gaussian. We study the scenario where the observations are drawn from non-i.i.d. processes (m-dependence or a more general φ-mixing case). We show that COCA can accommodate weak dependence. In particular, we provide convergence bounds for both support recovery and parameter estimation of COCA for dependent data. We provide explicit sufficient conditions on the degree of dependence under which the parametric rate can be maintained. To our knowledge, this is the first work analyzing the theoretical performance of PCA for dependent data in high dimensional settings. Our results strictly generalize the analysis in Han & Liu (2012), and the techniques we use are of independent interest for analyzing a variety of other multivariate statistical methods.
High Dimensional Semiparametric Latent Graphical Model for Mixed Data
Abstract
Graphical models are commonly used tools for modeling multivariate random variables. While there exist many convenient multivariate distributions, such as the Gaussian distribution for continuous data, mixed data with discrete variables, or a combination of both continuous and discrete variables, poses new challenges in statistical modeling. In this paper, we propose a semiparametric model named the latent Gaussian copula model for binary and mixed data. The observed binary data are assumed to be obtained by dichotomizing a latent variable satisfying the Gaussian copula distribution, or nonparanormal distribution. The latent Gaussian model, in which the latent variables are assumed to be multivariate Gaussian, is a special case of the proposed model. A novel rank-based approach is proposed for both latent graph estimation and latent principal component analysis. Theoretically, the proposed methods achieve the same rates of convergence for both precision matrix estimation and eigenvector estimation as if the latent variables were observed. Under similar conditions, the consistency of graph structure recovery and of feature selection for leading eigenvectors is established. The performance of the proposed methods is assessed numerically through simulation studies, and their usage is illustrated on a genetic dataset.
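The dichotomized-latent-variable idea admits a short numeric sketch: for binary margins obtained by thresholding latent Gaussians, Kendall's tau is a monotone function of the latent correlation, so the correlation can be recovered by numerically inverting that bridge relation. The bridge function below is our reconstruction of the standard dichotomized-Gaussian identity and is illustrative only; it is not necessarily the paper's exact estimator, and all names are ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal, norm

def latent_corr_binary(x, y):
    """Estimate the latent Gaussian correlation from two binary vectors.

    Model sketch: x = 1{Z1 > c1}, y = 1{Z2 > c2} with (Z1, Z2) standard
    bivariate normal with correlation r.  Then Kendall's tau (tau-a) equals
    2 * (Phi2(d1, d2; r) - Phi(d1) * Phi(d2)) with d = Phi^{-1}(P(X = 1)),
    which is increasing in r, so r is found by root-finding.
    """
    tau = 2.0 * (np.mean(x * y) - x.mean() * y.mean())  # sample tau-a
    d1, d2 = norm.ppf(x.mean()), norm.ppf(y.mean())

    def bridge(r):
        joint = multivariate_normal.cdf([d1, d2], cov=[[1.0, r], [r, 1.0]])
        return 2.0 * (joint - norm.cdf(d1) * norm.cdf(d2)) - tau

    return brentq(bridge, -0.99, 0.99)
```

The same inversion strategy, with a different bridge function per pair type (binary/binary, binary/continuous, continuous/continuous), is what allows a single rank-based correlation matrix to be assembled for mixed data.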