Results 1–3 of 3
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. arXiv preprint, 2013
Abstract

Cited by 11 (5 self)
We provide theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution obtained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods that only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the usage of nonconvex penalty.
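The path-following idea the abstract describes can be sketched in a few lines: solve a sequence of penalized least-squares problems with geometrically decreasing regularization parameters, warm-starting each stage from the previous solution. This sketch substitutes a convex ℓ1 penalty (solved by proximal gradient descent) for the paper's nonconvex penalties; the function names and stage schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, lam_max, lam_min, n_stages=10, n_iters=200):
    """Approximate path following (illustrative): a sequence of l1-penalized
    least-squares problems with geometrically decreasing lambda, each stage
    warm-started from the previous stage's solution."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1/L for the smooth part
    lams = np.geomspace(lam_max, lam_min, n_stages)
    beta = np.zeros(p)
    path = []
    for lam in lams:
        for _ in range(n_iters):                # proximal gradient inner loop
            grad = X.T @ (X @ beta - y)
            beta = soft_threshold(beta - step * grad, step * lam)
        path.append(beta.copy())
    return lams, path
```

Warm-starting is what makes computing the whole path roughly as cheap as solving a single problem: each stage begins close to its solution, so few inner iterations are needed.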
Analysis of elliptical copula correlation factor model with Kendall's tau. Personal communication, 2013
Expandable Factor Analysis, 2014
Abstract
Bayesian sparse factor models have proven useful for characterizing dependencies in high-dimensional data. However, lack of computational scalability to high dimensions (P) with unknown numbers of factors (K) remains a vexing issue. We propose a framework for expandable factor analysis (xFA), where expandable refers to the ability of scaling to high-dimensional settings by adaptively adding additional factors as needed. Key to this behavior is the use of a novel multiscale generalized double Pareto (mGDP) prior for the loadings matrix. The mGDP prior is carefully structured to induce sparsity in the loadings, allow an unknown number of factors, and produce an objective function for maximum a posteriori estimation that factorizes to yield P separate weighted ℓ1-regularized regressions. Model averaging is used to remove sensitivity due to the form of mGDP prior and number of factors. We provide theoretical support, develop efficient computational algorithms, and evaluate xFA's performance using simulated data and genomic applications. Computational efficiency is further improved via one-step estimation. Key words: Bayesian model averaging; covariance matrix; EM-type algorithm; factor analysis; generalized double Pareto; high-dimensional; Laplace approximation; nonconcave variable selection; sparsity.
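The factorization property mentioned in the abstract is what makes the estimation scale in P: with the factor scores fixed, the MAP objective decouples so that each of the P observed variables is fit by its own weighted ℓ1-regularized regression on the factors. The sketch below illustrates that decoupling under simplifying assumptions (known factor scores, a plain weighted ℓ1 penalty in place of the mGDP-induced one); all names are hypothetical and this is not the paper's estimator.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding; t may be a vector of per-coefficient weights."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_loadings(Y, F, weights, n_iters=300):
    """Illustrative loading estimation that factorizes over the P variables:
    each data column Y[:, j] is regressed on the factor scores F with its
    own weighted l1 penalty (weights[j], shape (K,))."""
    n, p = Y.shape
    k = F.shape[1]
    step = 1.0 / np.linalg.norm(F, 2) ** 2
    loadings = np.zeros((p, k))
    for j in range(p):                      # P independent regressions
        b = np.zeros(k)
        for _ in range(n_iters):            # proximal gradient on one column
            grad = F.T @ (F @ b - Y[:, j])
            b = soft_threshold(b - step * grad, step * weights[j])
        loadings[j] = b
    return loadings
```

Because the P regressions share no state, they can be solved independently (and in parallel), which is the computational payoff of the factorized objective.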