Results 1–6 of 6
Noisy Matrix Completion Using Alternating Minimization
Abstract

Cited by 3 (2 self)
Abstract. The task of matrix completion involves estimating the entries of a matrix M ∈ ℝ^{m×n} when a subset Ω ⊂ {(i, j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n} of the entries is observed. A particular set of low-rank models for this task approximates the matrix as a product of two low-rank matrices, M̂ = UVᵀ, where U ∈ ℝ^{m×k}, V ∈ ℝ^{n×k}, and k ≪ min{m, n}. A popular algorithm in practice for recovering M from the partially observed matrix under the low-rank assumption is alternating least squares (ALS) minimization, which optimizes over U and V in an alternating manner, minimizing the squared error over the observed entries while keeping the other factor fixed. Despite being widely used in practice, only recently were theoretical guarantees established bounding the error between the matrix estimated by ALS and the original matrix M. In this work we extend the results for the noiseless setting and provide the first recovery guarantees under noise for alternating minimization. We specifically show that for well-conditioned matrices corrupted by random noise of bounded Frobenius norm, if the number of observed entries is O(k⁷ n log n), then the ALS algorithm recovers the original matrix within an error bound that depends on the norm of the noise matrix. The sample complexity is the same as derived in [7] for noise-free matrix completion using ALS.
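The alternating scheme described in the abstract can be sketched directly: with one factor fixed, each row of the other factor solves a small least-squares problem over its observed entries. A minimal NumPy sketch (function name, iteration count, and the small ridge term are illustrative, not from the paper):

```python
import numpy as np

def als_complete(M, mask, k=2, n_iters=50, reg=1e-6):
    """Estimate missing entries of M (observed where mask is True)
    via alternating least squares on a rank-k factorization U @ V.T."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(n_iters):
        # Fix V: each row of U solves a small ridge-regularized LS problem
        # over that row's observed entries.
        for i in range(m):
            obs = mask[i]
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg * np.eye(k), Vo.T @ M[i, obs])
        # Fix U: symmetric update for each row of V (i.e., each column of M).
        for j in range(n):
            obs = mask[:, j]
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg * np.eye(k), Uo.T @ M[obs, j])
    return U @ V.T
```

On a well-conditioned low-rank matrix with enough observed entries per row and column, this converges quickly; the paper's analysis bounds the resulting error in the noisy setting.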
Probabilistic Low-Rank Matrix Completion from Quantized Measurements
, 2016
Abstract
Abstract. We consider the recovery of a low-rank real-valued matrix M given a subset of noisy discrete (or quantized) measurements. Such problems arise in several applications, such as collaborative filtering, learning and content analytics, and sensor-network localization. We consider constrained maximum-likelihood estimation of M under a constraint on the entrywise infinity-norm of M and an exact rank constraint. We provide upper bounds on the Frobenius norm of the matrix estimation error under this model. Previous theoretical investigations have focused on binary (1-bit) quantizers and have been based on convex relaxation of the rank. Compared to the existing binary results, our performance upper bound has a faster convergence rate in the matrix dimensions when the fraction of revealed observations is fixed. We also propose a globally convergent optimization algorithm based on low-rank factorization of M and validate the method on synthetic and real data, with improved performance over previous methods.
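For the 1-bit special case mentioned above, constrained maximum likelihood can be sketched with a logistic observation model: gradient ascent on the log-likelihood over a rank-k factorization, with the entrywise infinity-norm constraint enforced by rescaling. This is a hedged illustration (link function, step size, and clipping scheme are assumptions, not the paper's algorithm):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_bit_complete(Y, mask, k=2, alpha=3.0, lr=0.5, n_iters=300):
    """Y in {0,1}; fit a rank-k matrix M = U @ V.T with |M_ij| <= alpha
    by maximizing the logistic log-likelihood on observed entries."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    for _ in range(n_iters):
        M = U @ V.T
        # Log-likelihood gradient, restricted to observed entries.
        G = mask * (Y - sigmoid(M))
        U, V = U + lr * G @ V, V + lr * G.T @ U
        # Enforce the entrywise infinity-norm constraint by rescaling.
        M = U @ V.T
        s = np.abs(M).max()
        if s > alpha:
            U *= np.sqrt(alpha / s)
            V *= np.sqrt(alpha / s)
    return U @ V.T
```

The infinity-norm bound plays the same role as in the paper's analysis: it keeps the likelihood well conditioned by preventing entries from saturating the quantizer.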
Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion
Abstract
Low-rank tensor estimation has been frequently applied to many real-world problems. Despite successful applications, existing Schatten-1 norm minimization (SNM) methods may become very slow or even inapplicable for large-scale problems. To address this difficulty, we propose an efficient and scalable core-tensor Schatten-1 norm minimization method for simultaneous tensor decomposition and completion, with much lower computational complexity. We first establish the equivalence between the Schatten-1 norm of a low-rank tensor and that of its core tensor. The Schatten-1 norm of the core tensor is then used in place of that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than state-of-the-art methods, and is orders of magnitude faster.
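The classical higher-order orthogonal iteration (HOOI) that the title generalizes computes a Tucker decomposition by alternately updating each factor from an SVD of the projected tensor; the core tensor it produces is the object whose Schatten-1 norm the paper penalizes. A minimal 3-way sketch of plain HOOI (not the paper's generalized method):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mul(T, A, mode):
    """Multiply tensor T along `mode` by matrix A (contracting A's axis 0)."""
    return np.moveaxis(np.tensordot(A, np.moveaxis(T, mode, 0), axes=(0, 0)), 0, mode)

def hooi(T, ranks, n_iters=20):
    """Tucker decomposition of a 3-way tensor via higher-order orthogonal iteration."""
    # Initialize factors from leading left singular vectors (HOSVD).
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iters):
        for n in range(3):
            # Project T onto all other factors, then refresh factor n by SVD.
            G = T
            for m in range(3):
                if m != n:
                    G = mode_mul(G, U[m], m)
            U[n] = np.linalg.svd(unfold(G, n), full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: T projected onto all three factors.
    core = T
    for m in range(3):
        core = mode_mul(core, U[m], m)
    return core, U
```

Because the core is only r₁ × r₂ × r₃, norms and thresholding steps applied to it are far cheaper than on the full tensor, which is the source of the paper's speedup.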
Nuclear Norm Regularized Least Squares Optimization on Grassmannian Manifolds
Abstract
This paper addresses a class of nuclear norm regularized least squares (NNLS) problems. By exploiting the underlying low-rank matrix manifold structure, the problem with nuclear norm regularization is cast as a Riemannian optimization problem over matrix manifolds. Compared with existing NNLS algorithms that involve singular value decomposition (SVD) of large-scale matrices, our method achieves a significant reduction in computational complexity. Moreover, the uniqueness of the matrix factorization is guaranteed by our Grassmannian manifold method. In our solution, we first introduce a bilateral factorization into the original NNLS problem and convert it into a Grassmannian optimization problem using a linearization technique. Then a conjugate gradient procedure on the Grassmannian manifold is developed for our method, with a guarantee of local convergence. Finally, our method can be extended to address the graph-regularized problem. Experimental results verify both the efficiency and effectiveness of our method.
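For context, the SVD-based step that the existing NNLS algorithms mentioned above repeat at every iteration is the proximal operator of the nuclear norm, i.e., singular value soft-thresholding; the Grassmannian method avoids exactly this full SVD. A minimal sketch of that baseline operator:

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: the closed-form solution of
    argmin_X 0.5 * ||X - Y||_F^2 + lam * ||X||_* (nuclear norm prox)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Shrink each singular value toward zero by lam; small ones vanish,
    # which is what produces low-rank solutions.
    return (U * np.maximum(s - lam, 0.0)) @ Vt
```

Each call costs a full SVD of Y, which dominates the runtime at large scale and motivates factorization-based alternatives like the one in this paper.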