Results 1-10 of 16
Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems
Cited by 22 (1 self)
Abstract—Matrix factorization, when the matrix has missing values, has become one of the leading techniques for recommender systems. To handle web-scale datasets with millions of users and billions of ratings, scalability becomes an important issue. Alternating Least Squares (ALS) and Stochastic Gradient Descent (SGD) are two popular approaches to compute matrix factorization. There has been a recent flurry of activity to parallelize these algorithms. However, due to the cubic time complexity in the target rank, ALS is not scalable to large-scale datasets. On the other hand, SGD conducts efficient updates but usually suffers from slow convergence that is sensitive to the parameters. Coordinate descent, a classical optimization approach, has been used for many other large-scale problems, but its application to matrix factorization for recommender systems has not been explored thoroughly. In this paper, we show that coordinate descent based methods have a more efficient update rule compared to ALS, and are faster and have more stable convergence than SGD. We study different update sequences and propose the CCD++ algorithm, which updates rank-one factors one by one. In addition, CCD++ can be easily parallelized on both multi-core and distributed systems. We empirically show that CCD++ is much faster than ALS and SGD in both settings. As an example, on a synthetic dataset with 2 billion ratings, CCD++ is 4 times faster than both SGD and ALS using a distributed system with 20 machines. Keywords—Recommender systems, Matrix factorization, Low-rank approximation, Parallelization.
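The rank-one update scheme described in this abstract can be sketched in a few lines of NumPy. This is an illustrative reading of the CCD++ idea under our own assumptions (a dense boolean mask for observed entries, the function name `ccd_pp`, and a fixed regularizer `lam`), not the authors' implementation:

```python
import numpy as np

def ccd_pp(R, mask, rank=2, lam=0.1, outer=20, inner=3):
    """Sketch of CCD++-style rank-one coordinate descent (illustrative).

    R    : (m, n) rating matrix; entries outside `mask` are ignored.
    mask : boolean (m, n) array marking observed ratings.
    """
    m, n = R.shape
    U = np.random.default_rng(0).normal(scale=0.1, size=(m, rank))
    V = np.random.default_rng(1).normal(scale=0.1, size=(n, rank))
    E = np.where(mask, R - U @ V.T, 0.0)              # residual on observed entries
    for _ in range(outer):
        for t in range(rank):
            # add the current rank-one factor back into the residual
            Rt = E + np.where(mask, np.outer(U[:, t], V[:, t]), 0.0)
            for _ in range(inner):                    # alternate u_t / v_t updates
                U[:, t] = (Rt * mask) @ V[:, t] / (lam + mask @ (V[:, t] ** 2))
                V[:, t] = (Rt * mask).T @ U[:, t] / (lam + mask.T @ (U[:, t] ** 2))
            # subtract the refreshed factor to restore the residual
            E = Rt - np.where(mask, np.outer(U[:, t], V[:, t]), 0.0)
    return U, V
```

Each scalar update touches only a row's or column's observed entries, which is what makes the per-iteration cost linear in the number of ratings rather than cubic in the rank as in ALS.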
Using underapproximations for sparse nonnegative matrix factorization
Pattern Recognition, 2010
Cited by 14 (4 self)
Nonnegative Matrix Factorization (NMF) has gathered a lot of attention in the last decade and has been successfully applied in numerous applications. It consists of factorizing a nonnegative matrix as the product of two low-rank nonnegative matrices: M ≈ V W. In this paper, we attempt to solve NMF problems in a recursive way. In order to do that, we introduce a new variant called Nonnegative Matrix Underapproximation (NMU) by adding the upper bound constraint V W ≤ M. Besides enabling a recursive procedure for NMF, these inequalities make NMU particularly well-suited to achieve a sparse representation, improving the part-based decomposition. Although NMU is NP-hard (which we prove using its equivalence with the maximum edge biclique problem in bipartite graphs), we present two approaches to solve it: a method based on convex reformulations and a method based on Lagrangian relaxation. Finally, we provide some encouraging numerical results for image processing applications.
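To make the recursive procedure concrete, here is a small NumPy sketch. It uses a heuristic rank-one underapproximation (alternating least-squares updates followed by a global rescaling to enforce u vᵀ ≤ M); this simple scaling stands in for the paper's convex-reformulation and Lagrangian-relaxation methods, and all names are ours:

```python
import numpy as np

def rank_one_underapprox(M, iters=200):
    """Heuristic rank-one underapproximation u v^T <= M (illustrative).

    Alternating least-squares updates, then u is scaled down so the
    elementwise upper bound holds.
    """
    u = np.ones(M.shape[0])
    v = M.mean(axis=0)
    for _ in range(iters):
        u = np.maximum(M @ v, 0.0) / (v @ v + 1e-12)
        v = np.maximum(M.T @ u, 0.0) / (u @ u + 1e-12)
    approx = np.outer(u, v)
    # largest factor by which the approximation fits under M elementwise
    ratio = np.min(np.where(approx > 1e-12,
                            M / np.maximum(approx, 1e-12), np.inf))
    return u * min(ratio, 1.0), v

def nmu(M, rank):
    """Recursive NMU: peel off rank-one underapproximations one by one."""
    R = M.astype(float).copy()
    factors = []
    for _ in range(rank):
        u, v = rank_one_underapprox(R)
        factors.append((u, v))
        R = R - np.outer(u, v)   # residual stays (numerically) nonnegative
    return factors
```

The point of the upper bound is visible in the recursion: because each peeled-off factor never exceeds the residual, the residual remains nonnegative and can itself be underapproximated.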
A globally convergent algorithm for nonconvex optimization based on block coordinate update, arXiv preprint arXiv:1410.1386
, 2014
Cited by 4 (2 self)
Abstract. Nonconvex optimization problems arise in many areas of computational science and engineering and are (approximately) solved by a variety of algorithms. Existing algorithms usually only have local convergence or subsequence convergence of their iterates. We propose an algorithm for a generic nonconvex optimization formulation, establish the convergence of its whole iterate sequence to a critical point along with a rate of convergence, and numerically demonstrate its efficiency. Specifically, we consider the problem of minimizing a nonconvex objective function. Its variables can be treated as one block or be partitioned into multiple disjoint blocks. It is assumed that each nondifferentiable component of the objective function or each constraint applies to one block of variables. The differentiable components of the objective function, however, can apply to one or multiple blocks of variables together. Our algorithm updates one block of variables at a time by minimizing a certain prox-linear surrogate. The order of update can be either deterministic or randomly shuffled in each round. In fact, our convergence analysis only needs that each block be updated at least once every fixed number of iterations. We obtain the convergence of the whole iterate sequence to a critical point under fairly loose conditions including, in particular, the Kurdyka-Łojasiewicz (KL) condition, which is satisfied by a broad class of nonconvex/nonsmooth applications. Of course, these results apply to convex optimization as well. We apply our convergence result to the coordinate descent method for nonconvex regularized linear regression and also a modified rank-one residue iteration method for nonnegative matrix factorization. We show that both methods have global convergence. Numerically, we test our algorithm on nonnegative matrix and tensor factorization problems, with random shuffling enabled to help avoid local solutions.
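The coordinate descent application mentioned at the end of this abstract is easiest to see in the convex l1 special case. Below is a minimal sketch (function names are ours) of cyclic coordinate descent for the lasso, where each coordinate's prox-linear subproblem has a closed-form soft-thresholding solution; the paper's framework covers nonconvex regularizers and larger block updates as well:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the prox operator of t * |x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, sweeps=100):
    """Cyclic coordinate descent for (1/2)||Ax - b||^2 + lam * ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                      # running residual
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature
    for _ in range(sweeps):
        for j in range(n):
            r += A[:, j] * x[j]        # remove coordinate j's contribution
            x[j] = soft(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]        # add the updated contribution back
    return x
```

With `A` the identity, a single sweep reduces to elementwise soft-thresholding of `b`, which makes the per-coordinate subproblem easy to sanity-check.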
Bounded Matrix Low Rank Approximation
Cited by 3 (1 self)
Abstract—Matrix lower rank approximations such as nonnegative matrix factorization (NMF) have been successfully used to solve many data mining tasks. In this paper, we propose a new matrix lower rank approximation called Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of a lower rank matrix that best approximates a given matrix with missing elements. This new approximation models many real-world problems, such as recommender systems, and performs better than other methods, such as the singular value decomposition (SVD) or NMF. We present an efficient algorithm to solve BMA based on a coordinate descent method. BMA differs from NMF in that it imposes bounds on the approximation itself rather than on each of the low rank factors. We show that our algorithm is scalable for large matrices with missing elements on multi-core systems with low memory. We present substantial experimental results illustrating that the proposed method outperforms state-of-the-art algorithms for recommender systems such as ...
A Vision-based Navigation System of Mobile Tracking Robot
Abstract—Based on the study of developments in many fields of computer vision, a novel computer vision navigation system for a mobile tracking robot is presented. According to the primary functions of this kind of robot, three independent technologies, pattern recognition, binocular vision, and motion estimation, make up the basic technologies of our robot. The nonnegative matrix factorization (NMF) algorithm is applied to detect the target, and the application of NMF in our robot is demonstrated. Interesting observations on distance measurement and motion capture are discussed in detail. The sources of distance-measurement error are analyzed; according to the models and formulas of the distance-measurement error, the error type can be identified, which helps reduce the distance error. Based on the diamond search (DS) technique used in MPEG-4, an improved DS algorithm is developed to meet the special requirements of a mobile tracking robot. Index Terms—mobile robot, navigation, computer vision, nonnegative matrix factorization, binocular vision, motion estimation.
1 Introduction – Problem Statements and Models
Matrix factorization is an important and unifying topic in signal processing and linear algebra, which has found numerous applications in many other areas. This chapter introduces basic linear and multilinear models for matrix and tensor factorizations and decompositions, and formulates the analysis framework for ...
An Alternating Direction Algorithm for Nonnegative Matrix Factorization
, 2010
We extend the classic alternating direction method for convex optimization to solving the nonconvex, nonnegative matrix factorization problem and conduct several carefully designed numerical experiments to compare the proposed algorithms with the two most widely used algorithms for solving this problem. In addition, the proposed algorithm is also briefly compared with two other more recent algorithms. Numerical evidence shows that the alternating direction algorithm tends to deliver higher-quality solutions with faster computing times on the tested problems. A convergence result is given showing that the algorithm converges to a Karush-Kuhn-Tucker point whenever it converges.
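To make the splitting idea concrete, here is a small NumPy sketch of one alternating-direction scheme for NMF. The particular variable splitting (nonnegativity moved onto auxiliary copies `U`, `V` with scaled dual variables) and all names are our assumptions for illustration; this is not the paper's exact algorithm:

```python
import numpy as np

def admm_nmf(M, rank, rho=1.0, iters=100):
    """ADMM-style sketch for min (1/2)||M - X Y||_F^2 s.t. X >= 0, Y >= 0.

    Nonnegativity is enforced on auxiliary copies U, V; X and Y are
    updated by regularized least squares, then U, V by projection.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.random((m, rank))
    Y = rng.random((rank, n))
    U, V = X.copy(), Y.copy()
    Lx, Ly = np.zeros_like(X), np.zeros_like(Y)
    I = np.eye(rank)
    for _ in range(iters):
        # X-step: minimize (1/2)||M - X Y||^2 + (rho/2)||X - U + Lx||^2
        X = (M @ Y.T + rho * (U - Lx)) @ np.linalg.inv(Y @ Y.T + rho * I)
        # Y-step: the symmetric regularized least-squares problem
        Y = np.linalg.inv(X.T @ X + rho * I) @ (X.T @ M + rho * (V - Ly))
        # projection steps keep the nonnegative copies feasible
        U = np.maximum(X + Lx, 0.0)
        V = np.maximum(Y + Ly, 0.0)
        # scaled dual updates
        Lx += X - U
        Ly += Y - V
    return U, V
```

Each subproblem is either an unconstrained least-squares solve or an elementwise projection, which is the appeal of the alternating-direction viewpoint over handling the coupled nonnegativity constraint directly.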
Chapter 1 Bounded Matrix Low Rank Approximation
It is common in recommender systems for the input rating matrix R to be bounded within [rmin, rmax], such as [1, 5]. In this chapter, we propose a new improved scalable low rank approximation algorithm for such bounded matrices, called Bounded Matrix Low Rank Approximation (BMA), that bounds every element of the approximation PQ. We also present an alternate formulation, called BALS, to bound existing recommender system algorithms, and discuss its convergence. Our experiments on real-world datasets illustrate that the proposed method BMA outperforms state-of-the-art algorithms for recommender systems such as Stochastic Gradient Descent, Alternating Least Squares with regularization, SVD++ and BiasSVD on real ...