Results 1–10 of 74
Exact Matrix Completion via Convex Optimization
2008
"... We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfe ..."
Abstract

Cited by 860 (27 self)
 Add to MetaCart
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
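As a concrete illustration of the convex program described above (a minimal sketch, not code from the paper; the problem sizes, the cvxpy modeling package, and all variable names are assumptions):

import numpy as np
import cvxpy as cp

# Hypothetical instance: a random n x n matrix of rank r, with m entries
# observed uniformly at random.
n, r, m = 50, 2, 1500
rng = np.random.default_rng(0)
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
obs = rng.choice(n * n, size=m, replace=False)
rows, cols = np.unravel_index(obs, (n, n))

# Nuclear norm heuristic: among all matrices consistent with the observed
# entries, pick the one of minimum nuclear norm.
X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [X[rows, cols] == M[rows, cols]])
problem.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))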
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
2007
"... The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative ..."
Abstract

Cited by 568 (23 self)
 Add to MetaCart
(Show Context)
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
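Written out (standard notation for the setup the abstract describes), the two problems contrasted here are

\min_X \ \operatorname{rank}(X) \quad \text{s.t.} \quad \mathcal{A}(X) = b \qquad (\text{NP-hard})

and its convex surrogate

\min_X \ \|X\|_* \quad \text{s.t.} \quad \mathcal{A}(X) = b, \qquad \|X\|_* = \sum_i \sigma_i(X),

where \mathcal{A} is the linear map defining the constraints; when \mathcal{A} satisfies the restricted isometry property, the relaxation recovers the minimum-rank solution.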
Maximum-Margin Matrix Factorization
Advances in Neural Information Processing Systems 17, 2005
"... We present a novel approach to collaborative prediction, using lownorm instead of lowrank factorizations. The approach is inspired by, and has strong connections to, largemargin linear discrimination. We show how to learn lownorm factorizations by solving a semidefinite program, and discuss ..."
Abstract

Cited by 260 (20 self)
 Add to MetaCart
(Show Context)
We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semidefinite program, and discuss generalization error bounds for them.
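For context, the semidefinite formulation rests on a standard characterization of the trace (nuclear) norm, stated here for reference rather than quoted from the paper:

\|X\|_* \le t \iff \exists\, A, B \ \text{symmetric such that} \ \begin{pmatrix} A & X \\ X^\top & B \end{pmatrix} \succeq 0, \quad \tfrac{1}{2}\bigl(\operatorname{tr} A + \operatorname{tr} B\bigr) \le t,

which lets a bound on the factorization norm be expressed as a linear matrix inequality inside a semidefinite program.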
A simpler approach to matrix completion
Journal of Machine Learning Research
"... This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minim ..."
Abstract

Cited by 162 (7 self)
 Add to MetaCart
(Show Context)
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.
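Schematically (constants suppressed, μ denoting the incoherence parameter; a paraphrase of the bound, not a quotation), the statement is that for an n1 × n2 matrix of rank r,

m \ \gtrsim\ \mu\, r\,(n_1 + n_2)\,\log^2 \max(n_1, n_2)

uniformly sampled entries suffice: the r(n_1 + n_2 - r) parameters of the rank-r singular value decomposition, times a quadratic logarithmic factor.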
Interior-point method for nuclear norm approximation with application to system identification
"... ..."
Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise
2010
"... We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong ..."
Abstract

Cited by 86 (13 self)
 Add to MetaCart
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling, for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying in ℓq-balls of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
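As a sketch of the estimator's shape (the notation here is an assumption, not quoted from the paper): given N noisy observations y_i of entries indexed by (a(i), b(i)), one solves a nuclear-norm-penalized least squares problem under an explicit spikiness constraint,

\widehat{X} \in \arg\min_{\alpha_{\mathrm{sp}}(X) \le \alpha^*} \ \frac{1}{N} \sum_{i=1}^{N} \bigl( y_i - X_{a(i)b(i)} \bigr)^2 + \lambda_N \|X\|_*, \qquad \alpha_{\mathrm{sp}}(X) = \frac{\sqrt{n_1 n_2}\,\|X\|_\infty}{\|X\|_F},

where the spikiness ratio α_sp is largest for matrices whose mass concentrates on a few entries.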
Generalization Error Bounds for Collaborative Prediction with Low-Rank Matrices
Advances in Neural Information Processing Systems 17, 2005
"... We prove generalization error bounds for predicting entries in a partially observed matrix by approximating the observed entries with a lowrank matrix. To do so, we bound the number of sign configurations of lowrank matrices using a result about realizable oriented matroids. ..."
Abstract

Cited by 41 (2 self)
 Add to MetaCart
(Show Context)
We prove generalization error bounds for predicting entries in a partially observed matrix by approximating the observed entries with a low-rank matrix. To do so, we bound the number of sign configurations of low-rank matrices using a result about realizable oriented matroids.
Estimation of simultaneously sparse and low rank matrices
Proc. ICML, 2012
"... The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and lowrank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are blockdiagonal in the appropria ..."
Abstract

Cited by 27 (5 self)
 Add to MetaCart
(Show Context)
The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves the ℓ1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound the generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.
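The proximal building blocks for such an ℓ1-plus-trace-norm penalty are standard; a minimal sketch (illustrating the general technique, not the paper's specific algorithm):

import numpy as np

def prox_l1(Z, t):
    # Proximal map of t * ||Z||_1: entrywise soft-thresholding.
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def prox_trace(Z, t):
    # Proximal map of t * ||Z||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ (np.maximum(s - t, 0.0)[:, None] * Vt)

A proximal descent scheme alternates gradient steps on the smooth data-fit term with these maps, combined via an operator-splitting step, since the sum of the two penalties has no simple closed-form proximal operator.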
Sparse Bayesian methods for low-rank matrix estimation, arXiv:1102.5288v1 [stat.ML]
2011
"... Abstract—Recovery of lowrank matrices has recently seen significant ..."
Abstract

Cited by 26 (11 self)
 Add to MetaCart
(Show Context)
Recovery of low-rank matrices has recently seen significant ...