Results 1 - 3 of 3
Composite Self-Concordant Minimization
"... We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function endowed with a computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on ..."
Abstract - Cited by 7 (5 self)
We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function endowed with a computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on the smooth part. An important highlight of our work is a new set of analytic step-size selection and correction procedures based on the structure of the problem. We describe concrete algorithmic instances of our framework for several interesting large-scale applications and demonstrate them numerically on both synthetic and real data.
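For concreteness, the following Python sketch illustrates the kind of step such a variable metric (proximal Newton) framework takes: a scaled proximal subproblem yields the search direction, and the step size comes from an analytic formula based on the local Hessian norm rather than a Lipschitz gradient constant. This is a minimal sketch under stated assumptions, not the paper's actual interface; grad_f, hess_f, and prox_g_scaled are hypothetical user-supplied callables.

    import numpy as np

    def damped_prox_newton_step(x, grad_f, hess_f, prox_g_scaled):
        # One damped step for min f(x) + g(x), with f self-concordant.
        # prox_g_scaled(v, H) is a hypothetical helper solving the scaled
        # proximal subproblem: argmin_u g(u) + 0.5 * (u - v)' H (u - v).
        H = hess_f(x)
        v = x - np.linalg.solve(H, grad_f(x))  # Newton point of the smooth part
        d = prox_g_scaled(v, H) - x            # proximal Newton direction
        lam = np.sqrt(d @ H @ d)               # local norm of d in the Hessian metric
        alpha = 1.0 / (1.0 + lam)              # analytic damping, no Lipschitz constant
        return x + alpha * d

With g = 0 (so the prox is the identity), this reduces to the classical damped Newton step for self-concordant functions, where the step size 1/(1 + lam) guarantees decrease without any line search.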
Scalable sparse covariance estimation via self-concordance
"... We consider the class of convex minimization prob-lems, composed of a self-concordant function, such as the log det metric, a convex data fidelity term h(·) and, a regularizing – possibly non-smooth – function g(·). This type of problems have recently attracted a great deal of interest, mainly due t ..."
Abstract - Cited by 1 (1 self)
We consider the class of convex minimization problems composed of a self-concordant function, such as the log det metric, a convex data fidelity term h(·), and a regularizing, possibly non-smooth, function g(·). This type of problem has recently attracted a great deal of interest, mainly due to its omnipresence in prominent applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes with the added twist of possibly inexact evaluations. We prove attractive convergence rate guarantees and enhance state-of-the-art optimization schemes to accommodate such developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.
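As one concrete instance of this problem class, the Python sketch below writes out a common log det formulation (graphical-lasso-style sparse inverse-covariance estimation): the smooth part -log det(Theta) + trace(S Theta) is self-concordant, and the l1 regularizer g has a closed-form proximal operator (soft-thresholding). This is an illustrative assumption about the problem setup, not necessarily the exact formulation used in the paper; the names S (sample covariance) and rho (regularization weight) are hypothetical.

    import numpy as np

    def soft_threshold(X, t):
        # Proximal operator of t * ||X||_1 (elementwise soft-thresholding).
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def objective(Theta, S, rho):
        # f(Theta) = -log det(Theta) + trace(S @ Theta): self-concordant smooth part.
        # g(Theta) = rho * ||Theta||_1: non-smooth regularizer with a cheap prox.
        sign, logdet = np.linalg.slogdet(Theta)
        if sign <= 0:
            return np.inf  # Theta must be positive definite (domain of -log det)
        return -logdet + np.trace(S @ Theta) + rho * np.abs(Theta).sum()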