Results 1–10 of 19
Manopt: a Matlab toolbox for optimization on manifolds
Cited by 28 (7 self)
Optimization on manifolds is a rapidly developing branch of nonlinear optimization. Its focus is on problems where the smooth geometry of the search space can be leveraged to design efficient numerical algorithms. In particular, optimization on manifolds is well suited to dealing with rank and orthogonality constraints. Such structured constraints appear pervasively in machine learning applications, including low-rank matrix completion, sensor network localization, camera network registration, independent component analysis, metric learning, and dimensionality reduction. The Manopt toolbox, available at www.manopt.org, is a user-friendly, documented piece of software dedicated to simplifying experimentation with state-of-the-art Riemannian optimization algorithms. By dealing internally with most of the differential geometry, the package aims in particular at lowering the entrance barrier.
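The kind of Riemannian method this toolbox automates can be sketched by hand in the simplest case. The sketch below (plain numpy, not the Manopt API; all names are illustrative) runs gradient ascent for the Rayleigh quotient f(x) = xᵀAx on the unit sphere, projecting the Euclidean gradient onto the tangent space and renormalizing as the retraction:

```python
import numpy as np

# Minimal sketch (numpy only, not the Manopt API) of Riemannian gradient
# ascent for the Rayleigh quotient f(x) = x' A x on the unit sphere.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix

x = rng.standard_normal(n)
x = x / np.linalg.norm(x)              # start on the sphere
step = 0.1
for _ in range(2000):
    egrad = 2 * A @ x                  # Euclidean gradient of x' A x
    rgrad = egrad - (x @ egrad) * x    # project onto tangent space at x
    x = x + step * rgrad               # step along the tangent direction
    x = x / np.linalg.norm(x)          # retraction: back onto the sphere

rayleigh = x @ A @ x                   # approaches the largest eigenvalue of A
```

A toolbox like Manopt supplies the projection and retraction for each supported manifold, so the user only provides the cost and its Euclidean gradient.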
The geometry of algorithms using hierarchical tensors
, 2012
Cited by 19 (5 self)
In this paper, the differential geometry of the novel hierarchical Tucker format for tensors is derived. The set HT,k of tensors with fixed tree T and hierarchical rank k is shown to be a smooth quotient manifold, namely the set of orbits of a Lie group action corresponding to the non-unique basis representation of these hierarchical tensors. Explicit characterizations of the quotient manifold, its tangent space, and the tangent space of HT,k are derived, suitable for high-dimensional problems. The usefulness of a complete geometric description is demonstrated by two typical applications. First, new convergence results for the nonlinear Gauss–Seidel method on HT,k are given. Notably, and in contrast to earlier works on this subject, the task of minimizing the Rayleigh quotient is also addressed. Second, evolution equations for dynamic tensor approximation are formulated in terms of an explicit projection operator onto the tangent space of HT,k. In addition, a numerical comparison is made between this dynamical approach and the standard one based on truncated singular value decompositions.
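The non-uniqueness driving the quotient-manifold construction can be seen already in the simplest (matrix) case, which this hedged sketch illustrates: a factorization X = U S is unchanged under U → UA, S → A⁻¹S for any invertible A, i.e. a Lie group acts on the factors without changing the represented object. (Illustrative only; the paper treats tree-structured tensors.)

```python
import numpy as np

# Basis non-uniqueness of a low-rank factorization: the group of invertible
# k x k matrices acts on the factors (U, S) without changing the product.

rng = np.random.default_rng(5)
U = rng.standard_normal((6, 2))
S = rng.standard_normal((2, 4))
A = rng.standard_normal((2, 2)) + 3 * np.eye(2)   # generic invertible transform

X1 = U @ S
X2 = (U @ A) @ (np.linalg.inv(A) @ S)             # transformed basis, same object
same = np.allclose(X1, X2)
```

The quotient manifold identifies all factor pairs on the same orbit, so algorithms work with the represented tensor rather than any particular basis.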
LOW-RANK OPTIMIZATION WITH TRACE NORM PENALTY
Cited by 19 (5 self)
The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of the duality gap numerically tractable. The search space is nonlinear but is equipped with a Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters, we propose a predictor-corrector approach that outperforms the naive warm-restart approach on the fixed-rank quotient manifold. The performance of the proposed algorithm is illustrated on problems of low-rank matrix completion and multivariate linear regression.
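Factorization approaches of this kind rest on the variational identity ‖X‖∗ = min over X = GHᵀ of (‖G‖²_F + ‖H‖²_F)/2, attained at the SVD-balanced factors, which is what makes the trace norm smooth in the factored search space. A quick numeric check (an illustration of the identity, not the paper's algorithm):

```python
import numpy as np

# Check the variational characterization of the trace (nuclear) norm:
# ||X||_* = (||G||_F^2 + ||H||_F^2) / 2 at the balanced factors
# G = U sqrt(S), H = V sqrt(S) from the SVD X = U S V'.

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
trace_norm = s.sum()                       # nuclear (trace) norm of X

G = U * np.sqrt(s)                         # balanced factor G = U sqrt(S)
H = Vt.T * np.sqrt(s)                      # balanced factor H = V sqrt(S)
bound = (np.linalg.norm(G) ** 2 + np.linalg.norm(H) ** 2) / 2

exact = np.allclose(G @ H.T, X)            # factorization is exact
attained = np.isclose(bound, trace_norm)   # bound matches the trace norm
```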
Online Learning in the Embedded Manifold of Low-rank Matrices
Cited by 14 (0 self)
When learning models that are represented in matrix form, enforcing a low-rank constraint can dramatically improve the memory and run-time complexity while providing a natural regularization of the model. However, naive approaches to minimizing functions over the set of low-rank matrices are either prohibitively time consuming (repeated singular value decomposition of the matrix) or numerically unstable (optimizing a factored representation of the low-rank matrix). We build on recent advances in optimization over manifolds and describe an iterative online learning procedure consisting of a gradient step followed by a second-order retraction back to the manifold. While the ideal retraction is costly to compute, and so is the projection operator that approximates it, we describe another retraction that can be computed efficiently. It has run-time and memory complexity of O((n+m)k) for a rank-k matrix of dimension m×n when using an online procedure with rank-one gradients. We use this algorithm, LORETA, to learn a matrix-form similarity measure over pairs of documents represented as high-dimensional vectors. LORETA improves the mean average precision over a passive-aggressive approach in a factorized model, and also improves over a full model trained on preselected features using the same memory requirements. We further adapt LORETA to learn positive semidefinite low-rank matrices, providing an online algorithm for low-rank metric learning. LORETA also shows consistent improvement over standard weakly supervised methods in a large multi-label image classification task (1,600 classes and 1 million images from ImageNet).
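The costly "ideal" retraction the abstract contrasts against can be sketched directly: after a gradient step leaves the fixed-rank manifold, project back with a truncated SVD (Eckart-Young). LORETA's contribution is a cheaper retraction; this hedged sketch shows only the expensive baseline it improves on, with an illustrative full-rank gradient in place of LORETA's rank-one ones.

```python
import numpy as np

def svd_retract(Y, k):
    """Best rank-k approximation of Y (Eckart-Young), via truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(2)
m, n, k = 8, 6, 2
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k point
grad = rng.standard_normal((m, n))      # illustrative gradient (full rank here)

X_next = svd_retract(X - 0.01 * grad, k)  # step, then project back to rank k
```

The SVD costs O(mn·min(m,n)) per step, which is exactly what a structured O((n+m)k) retraction avoids when the gradients are rank-one.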
Linear regression under fixed-rank constraints: a Riemannian approach
In Proceedings of the 28th International Conference on Machine Learning (ICML)
, 2011
Cited by 12 (4 self)
In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties, and confer a geometric basis to recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state of the art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms.
Tensor Sparse Coding for Positive Definite Matrices
Cited by 4 (0 self)
In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (e.g., image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices. Index Terms: sparse coding, positive definite matrices, region covariance descriptors, computer vision, optimization.
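The structural loss the abstract refers to is easy to exhibit: ordinary sparse coding on vectorized matrices permits signed coefficients, and a signed combination of SPD atoms need not stay SPD. A small hedged illustration (toy atoms chosen for clarity, not the paper's coding model):

```python
import numpy as np

# Signed sparse codes over vectorized SPD atoms can break positivity.

D1 = np.array([[2.0, 1.0],
               [1.0, 2.0]])          # SPD atom, eigenvalues {1, 3}
D2 = np.array([[3.0, 0.0],
               [0.0, 1.0]])          # SPD atom, eigenvalues {1, 3}

code = np.array([1.0, -1.0])         # signed sparse code over {D1, D2}
recon = code[0] * D1 + code[1] * D2  # = [[-1, 1], [1, 1]]

min_eig = np.linalg.eigvalsh(recon).min()  # negative: recon is indefinite
```

A coding model that works intrinsically on the SPD manifold avoids this failure mode by construction.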
A Riemannian geometry with complete geodesics for the set of positive semidefinite matrices of fixed rank
, 2010
Solving PhaseLift by low-rank Riemannian optimization methods for complex semidefinite constraints
BILGO: Bilateral Greedy Optimization for Large Scale Semidefinite Programming
Many machine learning tasks (e.g., metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks at a large scale, scalability and computational efficiency are considered desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyse a new bilateral greedy optimization strategy (denoted BILGO) for solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at that iteration. By optimizing the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied; thus it tends to converge to a global optimum. In fact, we prove that BILGO converges to the globally optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low-rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
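The bilateral rank-1 update pattern described above can be sketched on a toy objective. This hedged sketch (a grid search over the two combination coefficients on f(X) = ½‖X − A‖²_F, not the paper's line search or implementation) updates X ← αX + β vvᵀ with v the leading eigenvector of the negative gradient; since α, β ≥ 0, every iterate stays positive semidefinite, and the grid includes the "do nothing" pair, so the objective never increases:

```python
import numpy as np

# Bilateral rank-1 update sketch: X <- a*X + b*v v', v the leading
# eigenvector of -grad f(X), with (a, b) picked from a small grid.

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = B @ B.T                          # PSD target
f = lambda X: 0.5 * np.linalg.norm(X - A) ** 2

X = np.zeros((4, 4))
history = [f(X)]
for _ in range(50):
    grad = X - A
    _, V = np.linalg.eigh(-grad)     # eigh sorts ascending
    v = V[:, -1]                     # leading eigenvector of -grad
    _, a, b = min((f(a * X + b * np.outer(v, v)), a, b)
                  for a in (0.5, 0.9, 1.0)
                  for b in (0.0, 0.1, 0.5, 1.0))
    X = a * X + b * np.outer(v, v)   # nonnegative combination keeps X PSD
    history.append(f(X))
```

Each iteration touches only one eigenvector, which is the source of the per-step efficiency the abstract claims; the paper's analysis replaces the toy grid with a principled coefficient optimization.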