Results 1 – 4 of 4
Nonconvex Statistical Optimization for Sparse Tensor Graphical Model
Abstract

Cited by 1 (0 self)
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a nonconvex objective function. In spite of the nonconvexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies.
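The alternating scheme this abstract describes can be sketched, in its simplest unpenalized form, as the classical flip-flop update for a Kronecker-structured covariance: estimate one factor while the other is held fixed, then swap. The matrix sizes, AR(1)-style factors, and variable names below are illustrative assumptions, and the sparsity penalty of the paper's estimator is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 4, 3, 500  # hypothetical row dim, column dim, sample size

# Hypothetical ground-truth covariance factors (AR(1)-style correlations).
A = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # row covariance
B = 0.7 ** np.abs(np.subtract.outer(np.arange(q), np.arange(q)))  # column covariance

# Draw matrix-normal samples: X_i = La Z_i Lb^T has covariance B (x) A on vec(X_i).
La, Lb = np.linalg.cholesky(A), np.linalg.cholesky(B)
X = np.stack([La @ rng.standard_normal((p, q)) @ Lb.T for _ in range(n)])

# Flip-flop: alternately update each factor's MLE with the other held fixed.
A_hat = np.eye(p)
for _ in range(10):
    # B_hat = (1/(n p)) sum_i X_i^T A_hat^{-1} X_i
    B_hat = np.einsum('nji,jk,nkl->il', X, np.linalg.inv(A_hat), X) / (n * p)
    # A_hat = (1/(n q)) sum_i X_i B_hat^{-1} X_i^T
    A_hat = np.einsum('nij,jk,nlk->il', X, np.linalg.inv(B_hat), X) / (n * q)
```

Because the Kronecker factorization is only identified up to a scalar, the estimated factors should be compared to the truth after a trace normalization.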
Abstract
9. Notation and an outline. Suppose that we have n i.i.d. random matrices Xn = (X(1), X(2), ..., X(n)), where X(i) ∼ N_{f,m}(0, A0 ⊗ B0) for all i, and A0 = (ajk) and B0 = (bjk) are positive definite. Let Y(t) = X(t)
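The model in this snippet, X(i) ∼ N(0, A0 ⊗ B0) with positive-definite A0 and B0, rests on the Kronecker/vec identity vec(B0 X A0ᵀ) = (A0 ⊗ B0) vec(X) (column-major vectorization). A quick numeric check with small hypothetical factors, not taken from the paper:

```python
import numpy as np

# Hypothetical small positive-definite factors.
A0 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
B0 = np.array([[1.0, 0.3, 0.0],
               [0.3, 1.0, 0.3],
               [0.0, 0.3, 2.0]])

Sigma = np.kron(A0, B0)  # covariance of vec(X) in the matrix-normal model

# vec identity: vec(B0 X A0^T) = (A0 (x) B0) vec(X), column-major (Fortran) order.
X = np.arange(6.0).reshape(3, 2)
lhs = (B0 @ X @ A0.T).flatten(order='F')
rhs = Sigma @ X.flatten(order='F')
```

Positive definiteness of the Kronecker product follows because its eigenvalues are products of the factors' eigenvalues.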
Covariance Estimation in High Dimensions via Kronecker Product Expansions
, 2013
Abstract
This paper presents a new method for estimating high-dimensional covariance matrices. The method, permuted rank-penalized least-squares (PRLS), is based on a Kronecker product series expansion of the true covariance matrix. Assuming an i.i.d. Gaussian random sample, we establish high-dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity. For covariance matrices of low separation rank, our results establish that PRLS has significantly faster convergence than the standard sample covariance matrix (SCM) estimator. The convergence rate captures a fundamental tradeoff between estimation error and approximation error, thus providing a scalable covariance estimation framework in terms of separation rank, similar to low-rank approximation of covariance matrices [1]. The MSE convergence rates generalize the high-dimensional rates recently obtained for the ML flip-flop algorithm [2], [3] for Kronecker product covariance estimation. We show that a class of block Toeplitz covariance matrices can be approximated with low separation rank, and we give bounds on the minimal separation rank r that ensures a given level of bias. Simulations are presented to validate the theoretical bounds. As a real-world application, we illustrate the utility of the proposed Kronecker covariance estimator for spatio-temporal linear least-squares prediction of multivariate wind speed measurements.
Index Terms: Structured covariance estimation, penalized least squares, Kronecker product decompositions, high-dimensional convergence rates, mean-square error, multivariate prediction.
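A core building block behind a permuted least-squares approach like the one this abstract describes is the Van Loan–Pitsianis rearrangement: a permutation of the entries of Σ turns the nearest-Kronecker-product problem into a low-rank approximation, so a truncated SVD in the permuted domain yields Kronecker factors. The sketch below recovers a single Kronecker term from a noisy matrix; it illustrates the rearrangement idea only, not the paper's rank-penalized estimator, and all sizes and names are assumptions.

```python
import numpy as np

def rearrange(S, p, q):
    # Map the (p*q) x (p*q) matrix S to a p^2 x q^2 matrix R(S) such that
    # the best Frobenius-norm approximation of S by kron(A, B) corresponds
    # to the best rank-1 approximation of R(S) (Van Loan-Pitsianis).
    blocks = S.reshape(p, q, p, q)
    return blocks.transpose(0, 2, 1, 3).reshape(p * p, q * q)

rng = np.random.default_rng(1)
p, q = 3, 2  # hypothetical factor sizes

A = rng.standard_normal((p, p)); A = A @ A.T  # symmetric PSD factor
B = rng.standard_normal((q, q)); B = B @ B.T
S = np.kron(A, B) + 0.01 * rng.standard_normal((p * q, p * q))  # noisy observation

# Rank-1 SVD in the permuted domain gives the nearest Kronecker product.
U, s, Vt = np.linalg.svd(rearrange(S, p, q))
A_hat = (s[0] * U[:, 0]).reshape(p, p)
B_hat = Vt[0].reshape(q, q)
```

Keeping more singular triplets instead of just the first gives the Kronecker product series expansion the abstract mentions, with the separation rank playing the role of the truncation rank.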
Estimation with Norm Regularization
Abstract
Analysis of non-asymptotic estimation error and structured statistical recovery based on norm-regularized regression, such as the Lasso, needs to consider four aspects: the norm, the loss function, the design matrix, and the noise model. This paper presents generalizations of such estimation error analysis on all four aspects. We characterize the restricted error set, establish relations between error sets for the constrained and regularized problems, and present an estimation error bound applicable to any norm. Precise characterizations of the bound are presented for a variety of noise models; design matrices, including sub-Gaussian, anisotropic, and dependent samples; and loss functions, including least squares and generalized linear models. Gaussian width, a geometric measure of the size of sets, and associated tools play a key role in our generalized analysis.
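For the ℓ1 norm and squared loss, the norm-regularized regression this abstract refers to is the Lasso, which can be solved by proximal gradient descent (ISTA) with a soft-thresholding step. A minimal sketch, with hypothetical problem sizes and regularization level:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    # Proximal-gradient (ISTA) solver for the l1-regularized least squares
    # objective: (1/(2n)) * ||y - X w||^2 + lam * ||w||_1.
    n, d = X.shape
    w = np.zeros(d)
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

# Hypothetical sparse recovery problem: 2 active coefficients out of 10.
rng = np.random.default_rng(2)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[0, 3]] = [2.0, -1.5]
y = X @ w_true + 0.1 * rng.standard_normal(n)

w_hat = lasso_ista(X, y, lam=0.05)
```

The soft-thresholding operator is the proximal map of the ℓ1 norm; swapping it for the proximal map of another norm yields the more general regularized estimators the paper analyzes.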