Results 1–10 of 62
A feasible method for optimization with orthogonality constraints
In Rice Univ. Technical Report ’10
"... Abstract. Minimization with orthogonality constraints (e.g., X⊤X = I) and/or spherical constraints (e.g., ‖x‖2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, pharmonic flows, 1bit compressive sensing, matrix rank minimization, et ..."
Cited by 40 (5 self)
Abstract. Minimization with orthogonality constraints (e.g., X⊤X = I) and/or spherical constraints (e.g., ‖x‖2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only nonconvex but also numerically expensive to preserve during iterations. To deal with these difficulties, we propose a Crank–Nicolson-like update scheme that preserves the constraints and, based on it, develop curvilinear search algorithms with lower per-iteration cost than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the max-cut problem, they exactly solve a decomposition formulation of the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation, and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 minutes on a typical laptop.
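The constraint-preserving step described in this abstract can be sketched in a few lines of NumPy. The update below is a minimal illustration of a Cayley-transform (Crank–Nicolson-like) scheme of the kind the abstract describes; the function name `cayley_update` and the dense O(n³) solve are illustrative choices, and the paper obtains lower per-iteration cost for p ≪ n:

```python
import numpy as np

def cayley_update(X, G, tau):
    """One feasible curvilinear-search step: the Cayley transform of the
    skew-symmetric matrix A keeps X^T X = I exactly along the curve."""
    n = X.shape[0]
    A = G @ X.T - X @ G.T          # skew-symmetric by construction
    I = np.eye(n)
    # Crank-Nicolson-like implicit update Y = (I + tau/2 A)^{-1} (I - tau/2 A) X
    return np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ X)

rng = np.random.default_rng(0)
n, p = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # feasible start: X^T X = I
G = rng.standard_normal((n, p))                    # stand-in for a gradient
Y = cayley_update(X, G, tau=0.1)
print(np.linalg.norm(Y.T @ Y - np.eye(p)))         # ≈ 0: constraint preserved
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal, the new point Y stays on the constraint set for every step size τ, which is what removes the need for projections or geodesics.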
Low-rank matrix completion by Riemannian optimization
ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de
"... The matrix completion problem consists of finding or approximating a lowrank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixedrank matrices. The algorit ..."
Cited by 40 (4 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry needed to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
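The metric-projection retraction mentioned in the abstract amounts to a truncated SVD of the stepped point: R_X(ξ) is the best rank-k approximation of X + ξ. A minimal sketch (function name and dimensions are illustrative):

```python
import numpy as np

def retract_rank_k(Z, k):
    """Metric projection onto the manifold of rank-k matrices:
    the best rank-k approximation of Z via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(1)
k = 2
X = rng.standard_normal((6, k)) @ rng.standard_normal((k, 5))  # rank-2 point
xi = 0.01 * rng.standard_normal((6, 5))                        # small tangent-like step
Y = retract_rank_k(X + xi, k)
print(np.linalg.matrix_rank(Y))       # 2: the result is back on the manifold
print(np.linalg.norm(Y - (X + xi)))   # small: Y stays close to X + xi
```

For a small step the projection moves the point only slightly, which is the first-order agreement with the exponential map that a retraction requires.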
LOW-RANK OPTIMIZATION ON THE CONE OF POSITIVE SEMIDEFINITE MATRICES
"... Abstract. We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YYT, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X ..."
Cited by 31 (6 self)
Abstract. We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY⊤, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YY⊤ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis. Key words. low-rank constraints, cone of symmetric positive definite matrices, Riemannian quotient manifold, sparse principal component analysis, maximum-cut algorithms, large-scale algorithms
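As a concrete instance of the X = YY⊤ idea on the max-cut application, the sketch below runs plain projected gradient ascent on the SDP relaxation for a 4-cycle, keeping diag(YY⊤) = 1 by normalizing the rows of Y. It is a simple first-order stand-in, not the paper's second-order quotient-manifold method:

```python
import numpy as np

# Max-cut SDP relaxation of the 4-cycle via X = Y Y^T:
#   maximize <L, Y Y^T> / 4   subject to   diag(Y Y^T) = 1
# solved by projected gradient ascent with unit-norm rows of Y.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
L = np.zeros((n, n))
for i, j in edges:                      # graph Laplacian L = D - A
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

rng = np.random.default_rng(0)
Y = rng.standard_normal((n, 3))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # feasible: diag(Y Y^T) = 1
for _ in range(2000):
    G = 0.5 * L @ Y                                # Euclidean gradient of <L, YY^T>/4
    G -= np.sum(G * Y, axis=1, keepdims=True) * Y  # project onto each row's sphere
    Y += 0.05 * G
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # retraction: renormalize rows
obj = 0.25 * np.sum(L * (Y @ Y.T))
print(obj)   # should approach 4, the SDP optimum for this bipartite graph
```

Since the 4-cycle is bipartite, the relaxation is tight here (optimum 4, the number of edges), which makes the result easy to check by hand.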
Manopt: a Matlab toolbox for optimization on manifolds
, 1308
"... Optimization on manifolds is a rapidly developing branch of nonlinear optimization. Its focus is on problems where the smooth geometry of the search space can be leveraged to design efficient numerical algorithms. In particular, optimization on manifolds is wellsuited to deal with rank and orthogon ..."
Cited by 28 (7 self)
Optimization on manifolds is a rapidly developing branch of nonlinear optimization. Its focus is on problems where the smooth geometry of the search space can be leveraged to design efficient numerical algorithms. In particular, optimization on manifolds is well-suited to deal with rank and orthogonality constraints. Such structured constraints appear pervasively in machine learning applications, including low-rank matrix completion, sensor network localization, camera network registration, independent component analysis, metric learning, dimensionality reduction, and so on. The Manopt toolbox, available at www.manopt.org, is a user-friendly, documented piece of software dedicated to simplifying experimentation with state-of-the-art Riemannian optimization algorithms. By dealing internally with most of the differential geometry, the package aims particularly at lowering the entry barrier.
A Riemannian optimization approach for computing low-rank solutions of Lyapunov equations
, 2009
"... We propose a new framework based on optimization on manifolds to approximate the solution of a Lyapunov matrix equation by a lowrank matrix. The method minimizes the error on the Riemannian manifold of symmetric positive semidefinite matrices of fixed rank. We detail how objects from differential ..."
Cited by 26 (4 self)
We propose a new framework, based on optimization on manifolds, to approximate the solution of a Lyapunov matrix equation by a low-rank matrix. The method minimizes the error on the Riemannian manifold of symmetric positive semidefinite matrices of fixed rank. We detail how objects from differential geometry, like the Riemannian gradient and Hessian, can be efficiently computed for this manifold. As the minimization algorithm we use the Riemannian trust-region method of [Found. Comput. Math., 7 (2007), pp. 303–330], based on a second-order model of the objective function on the manifold. Together with an efficient preconditioner, this method can find low-rank solutions with very little memory. We illustrate our results with numerical examples.
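The sketch below illustrates why fixed-rank approximation is effective for such equations: for a stable A and a rank-one right-hand side, the eigenvalues of the exact solution decay rapidly, so a small-rank truncation already has a small residual. The dense Kronecker solve is for illustration on a tiny problem only; it is not the paper's method:

```python
import numpy as np

# Small Lyapunov equation A X + X A^T + C = 0 with rank-one C = b b^T.
rng = np.random.default_rng(2)
n = 20
A = -np.eye(n) - 0.1 * np.diag(np.arange(n))   # stable (negative definite) A
b = rng.standard_normal((n, 1))
C = b @ b.T

# Vectorized form: (I (x) A + A (x) I) vec(X) = -vec(C); X is symmetric PSD.
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
X = np.linalg.solve(K, -C.ravel()).reshape(n, n)

w, V = np.linalg.eigh(X)                       # eigenvalues of X decay fast
idx = np.argsort(w)[::-1]
w, V = w[idx], V[:, idx]
k = 4
Xk = (V[:, :k] * w[:k]) @ V[:, :k].T           # rank-k truncation of X
res_full = np.linalg.norm(A @ X + X @ A.T + C)
res_k = np.linalg.norm(A @ Xk + Xk @ A.T + C)
print(res_full)   # ≈ 0: X solves the equation
print(res_k)      # small: rank-4 already captures X well
```

This fast spectral decay of the solution is exactly what the fixed-rank Riemannian formulation exploits, replacing the O(n²) unknowns of X with the O(nk) unknowns of a rank-k factor.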
LOW-RANK OPTIMIZATION WITH TRACE NORM PENALTY
"... Abstract. The paper addresses the problem of lowrank trace norm minimization. We propose an algorithm that alternates between fixedrank optimization and rankone updates. The fixedrank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the sear ..."
Cited by 19 (5 self)
Abstract. The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of the duality gap numerically tractable. The search space is nonlinear but is equipped with a Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters, we propose a predictor-corrector approach that outperforms the naive warm-restart approach on the fixed-rank quotient manifold. The performance of the proposed algorithm is illustrated on problems of low-rank matrix completion and multivariate linear regression.
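The factorization idea behind a differentiable trace norm can be illustrated by the standard variational identity ‖X‖_* = min over X = GH⊤ of (‖G‖_F² + ‖H‖_F²)/2, attained at the scaled SVD factors. A quick NumPy check of that identity (the specific factorization used in the paper may differ):

```python
import numpy as np

# ||X||_* = min over X = G H^T of (||G||_F^2 + ||H||_F^2) / 2,
# attained at G = U sqrt(S), H = V sqrt(S) from the SVD X = U S V^T.
rng = np.random.default_rng(3)
X = rng.standard_normal((7, 4)) @ rng.standard_normal((4, 5))  # rank-4 matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
trace_norm = s.sum()                           # sum of singular values
G = U * np.sqrt(s)                             # columns scaled by sqrt(s_i)
H = Vt.T * np.sqrt(s)
print(np.linalg.norm(G @ H.T - X))             # ≈ 0: valid factorization
print(abs(0.5 * (np.linalg.norm(G)**2 + np.linalg.norm(H)**2) - trace_norm))  # ≈ 0
```

Because the right-hand side is a smooth function of (G, H), the nonsmooth trace norm becomes differentiable on the factored search space, at the price of the quotient-manifold geometry the abstract describes.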
Quasi-Newton methods on Grassmannians and multilinear approximations of tensors
, 2009
"... Abstract. In this paper we proposed quasiNewton and limited memory quasiNewton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically we defined bfgs and lbfgs updates in local and global coordinates on Grassmann manifolds or a product of ..."
Cited by 18 (4 self)
Abstract. In this paper we propose quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically, we define BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims. Key words. Grassmann manifold, Grassmannian, product of Grassmannians, Grassmann quasi-Newton, Grassmann BFGS, Grassmann L-BFGS, multilinear rank, symmetric multilinear rank, tensor, symmetric tensor, approximations
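For the best multilinear rank approximation problem mentioned above, the truncated HOSVD gives a simple baseline over the same Grassmannians (each mode's factor is determined only up to its column span). The sketch below recovers a tensor of exact multilinear rank (2, 2, 2); function names are illustrative, and this is not the paper's quasi-Newton method:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_approx(T, ranks):
    """Truncated HOSVD: project each mode onto its leading singular subspace,
    then expand the core back to full size."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
          for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(Us):     # compress each mode: core <- U^T x_mode core
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(Us):     # expand back: approx <- U x_mode approx
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

rng = np.random.default_rng(4)
core = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)   # multilinear rank (2, 2, 2)
print(np.linalg.norm(hosvd_approx(T, (2, 2, 2)) - T))   # ≈ 0: exact recovery
```

For tensors that are only approximately low rank, HOSVD is merely quasi-optimal; that gap is what iterative Grassmannian optimization, such as the quasi-Newton methods above, closes.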
Anasazi software for the numerical solution of large-scale eigenvalue problems
 ACM TOMS
"... Anasazi is a package within the Trilinos software project that provides a framework for the iterative, numerical solution of largescale eigenvalue problems. Anasazi is written in ANSI C++ and exploits modern software paradigms to enable the research and development of eigensolver algorithms. Furt ..."
Cited by 13 (0 self)
Anasazi is a package within the Trilinos software project that provides a framework for the iterative, numerical solution of largescale eigenvalue problems. Anasazi is written in ANSI C++ and exploits modern software paradigms to enable the research and development of eigensolver algorithms. Furthermore, Anasazi provides implementations for some of the most recent eigensolver methods. The purpose of our paper is to describe the design and development of the Anasazi framework. A performance comparison of Anasazi and the popular FORTRAN 77 code ARPACK is given.
Low-rank optimization for semidefinite convex problems
, 807
"... Compiled on July 28, 2008, 12:05 We propose an algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semidefinite matrix variable X. This algorithm rests on the factorization X = Y Y T, where the number of columns of Y fixes the rank of X. It is thus very effective ..."
Cited by 12 (8 self)
We propose an algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semidefinite matrix variable X. This algorithm rests on the factorization X = YY⊤, where the number of columns of Y fixes the rank of X. It is thus very effective for solving programs that have a low-rank solution. The factorization X = YY⊤ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. The efficiency of the proposed algorithm is illustrated on two applications: the maximal cut of a graph and the sparse principal component analysis problem.
A truncated-CG style method for symmetric generalized eigenvalue problems
 J. Comput. Appl. Math
, 2004
"... A numerical algorithm is proposed for computing an extreme eigenpair of a symmetric/positivedefinite matrix pencil (A, B). The leftmost or the rightmost eigenvalue can be targeted. Knowledge of (A, B) is only required through a routine that performs matrixvector products. The method has excellent ..."
Cited by 11 (4 self)
A numerical algorithm is proposed for computing an extreme eigenpair of a symmetric/positive-definite matrix pencil (A, B). The leftmost or the rightmost eigenvalue can be targeted. Knowledge of (A, B) is only required through a routine that performs matrix-vector products. The method has excellent global convergence properties and its local rate of convergence is superlinear. It is based on a constrained truncated-CG trust-region strategy to optimize the Rayleigh quotient, within the framework of a recently proposed trust-region scheme on Riemannian manifolds.
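A plain gradient iteration on the Rayleigh quotient ρ(x) = x⊤Ax / x⊤Bx shows the matrix-vector-product-only access pattern the abstract describes. The function name and fixed step size below are illustrative choices; the paper's truncated-CG trust-region method converges far faster (superlinearly):

```python
import numpy as np

def leftmost_eigpair(matvec_A, matvec_B, x0, steps=500, lr=0.1):
    """Minimize rho(x) = x^T A x / x^T B x using only matrix-vector
    products, converging toward the leftmost generalized eigenpair."""
    x = x0 / np.linalg.norm(x0)
    rho = None
    for _ in range(steps):
        Ax, Bx = matvec_A(x), matvec_B(x)
        rho = (x @ Ax) / (x @ Bx)
        grad = 2.0 * (Ax - rho * Bx) / (x @ Bx)   # gradient of rho at x
        x = x - lr * grad                          # fixed-step descent
        x /= np.linalg.norm(x)                     # keep iterates well-scaled
    return rho, x

A = np.diag([1.0, 3.0, 5.0, 7.0])
B = np.eye(4)
rho, x = leftmost_eigpair(lambda v: A @ v, lambda v: B @ v, x0=np.ones(4))
print(rho)   # approaches 1.0, the leftmost eigenvalue of the pencil
```

Note that the matrices never appear explicitly inside the solver, only through the two closures, which is what allows such methods to scale to problems where A and B are available solely as operators.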