SOLVING A LOW-RANK FACTORIZATION MODEL FOR MATRIX COMPLETION BY A NONLINEAR SUCCESSIVE OVERRELAXATION ALGORITHM
Abstract

Cited by 91 (10 self)
Abstract. The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions – a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive overrelaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Convergence of this nonlinear SOR algorithm is analyzed. Numerical results show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms.
Key words. Matrix completion, alternating minimization, nonlinear GS method, nonlinear SOR method
AMS subject classifications. 65K05, 90C06, 93C41, 68Q32
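The factorization model described above can be sketched in a few lines of NumPy. The version below is the plain alternating (nonlinear Gauss-Seidel) variant of the model, without the SOR acceleration the paper adds; the function name and the use of pseudoinverses for the per-factor least squares solves are illustrative choices, not the paper's implementation.

```python
import numpy as np

def complete_by_factorization(M, mask, k, iters=300):
    """Alternating least squares for the factorization model
    min ||X Y - Z||_F^2  s.t.  Z matches M on the observed entries.
    (Plain Gauss-Seidel sweep; the paper accelerates these same
    updates with a nonlinear SOR weight.)"""
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    Z = np.where(mask, M, 0.0)        # unobserved entries start at zero
    for _ in range(iters):
        X = Z @ np.linalg.pinv(Y)     # least squares in X with Y fixed
        Y = np.linalg.pinv(X) @ Z     # least squares in Y with X fixed
        Z = X @ Y
        Z[mask] = M[mask]             # re-impose the observed entries
    return X @ Y
```

Each sweep costs only two small least squares solves, which is the source of the speed advantage over SVD-based nuclear-norm solvers.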
Low-rank matrix completion using alternating minimization. arXiv:1212.0467 e-print
, 2012
Low-rank matrix completion by Riemannian optimization
 ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de Lausanne
Abstract

Cited by 40 (4 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton's method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
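A drastically simplified relative of this approach is gradient descent with a truncated-SVD retraction, which the abstract mentions as metric projection. The sketch below is closer to singular value projection than to the paper's Riemannian conjugate gradient (no tangent-space projection, no vector transport); the function name and step-size handling are assumptions for illustration.

```python
import numpy as np

def retracted_gradient_descent(M, mask, k, step=1.0, iters=300):
    # Take a Euclidean gradient step on 0.5 * ||P_Omega(X - M)||_F^2,
    # then retract the iterate back onto the rank-k matrices by
    # truncated SVD (metric projection). The paper instead moves along
    # the manifold with conjugate directions and vector transport.
    X = np.zeros_like(M)
    for _ in range(iters):
        G = np.where(mask, X - M, 0.0)                  # Euclidean gradient
        U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]                 # rank-k retraction
    return X
```

In practice the step is usually scaled by the inverse sampling ratio so that the masked gradient is an unbiased estimate of the full residual.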
Hankel matrix rank minimization with applications to system identification and realization
, 2011
Abstract

Cited by 37 (4 self)
In this paper, we introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods, the proximal point algorithm, and gradient projection methods. We perform computational experiments to compare these methods on the system identification and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) usually outperforms the other first-order methods in terms of CPU time on both real and simulated data, while for the system realization problem, the alternating direction method, as applied to a certain primal reformulation, usually outperforms the other first-order methods in terms of CPU time.
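A generic member of the first-order family discussed here is the proximal gradient method, whose prox step for the nuclear norm is singular value soft-thresholding. The sketch below drops the linear-structure (Hankel/Toeplitz) constraint and the Nesterov acceleration that the paper studies, so it is an unstructured baseline, not the paper's method.

```python
import numpy as np

def svt(Z, tau):
    # Proximal operator of tau * (nuclear norm):
    # soft-threshold the singular values of Z.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(M, mask, tau, step=1.0, iters=100):
    # min_X 0.5 * ||P_Omega(X - M)||_F^2 + tau * ||X||_*
    # via gradient steps on the smooth part followed by the
    # nuclear-norm prox.
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = np.where(mask, X - M, 0.0)
        X = svt(X - step * grad, step * tau)
    return X
```

The per-iteration cost is dominated by one SVD, which is exactly the expense that factorization-based methods avoid.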
Rank aggregation via nuclear norm minimization
 In KDD
Abstract

Cited by 35 (1 self)
The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings.
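The core identity behind this approach: if item i has score s[i], the pairwise-margin matrix Y with Y[i, j] = s[i] - s[j] is skew-symmetric with rank at most two, so after completing the partially observed Y a simple row mean recovers the scores up to a common shift. The sketch below shows only that final extraction step (the matrix completion itself is assumed already done); the function name is illustrative.

```python
import numpy as np

def scores_from_margins(Y):
    # For a completed skew-symmetric margin matrix Y[i, j] = s[i] - s[j],
    # the row mean gives mean_j (s[i] - s[j]) = s[i] - mean(s),
    # i.e. the scores up to a common shift.
    return Y.mean(axis=1)
```

Sorting items by these recovered scores yields the aggregated ranking.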
A gradient descent algorithm on the Grassmann manifold for matrix completion
, 2009
Sparse Bayesian methods for low-rank matrix estimation. arXiv:1102.5288v1 [stat.ML]
, 2011
Abstract

Cited by 28 (11 self)
Abstract—Recovery of low-rank matrices has recently seen significant ...
Convergence of fixed point continuation algorithms for matrix rank minimization
, 2010
A simplified approach to recovery conditions for low-rank matrices
 In Proc. IEEE Int. Symp. on Inf. Theory (ISIT), 2011
Scaled Gradients on Grassmann Manifolds for Matrix Completion
Abstract

Cited by 19 (0 self)
This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convergence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods.
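The scaling idea can be imitated in a plain factorized setting: precondition each factor's gradient by the inverse Gram matrix of the other factor, which compensates for ill-conditioning of the factors. The sketch below is a hypothetical stand-in for the paper's scaled metric on the Grassmann manifold, not its algorithm; the step size, regularization, and update order are assumptions.

```python
import numpy as np

def scaled_gradient_descent(M, mask, k, step=0.5, iters=400):
    # Gradient descent on f(X, Y) = 0.5 * ||P_Omega(X Y - M)||_F^2,
    # with each factor's gradient multiplied by the inverse Gram matrix
    # of the other factor (a crude surrogate for the scaled metric).
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    reg = 1e-8 * np.eye(k)                     # keeps the Gram inverses stable
    for _ in range(iters):
        R = np.where(mask, X @ Y - M, 0.0)     # residual on observed entries
        X = X - step * R @ Y.T @ np.linalg.inv(Y @ Y.T + reg)
        Y = Y - step * np.linalg.inv(X.T @ X + reg) @ (X.T @ R)
        # note: Y's update uses the already-updated X (Gauss-Seidel order)
    return X @ Y
```

With step = 1 and all entries observed, the X update collapses to the exact least squares solve, which is why the scaled direction tolerates much larger steps than the raw gradient.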