## Matrix completion from noisy entries

Venue: Journal of Machine Learning Research

Citations: 124 (8 self)

### Citations

867 | Exact matrix completion via convex optimization.
- Candès, Recht
- 2009
Citation Context: ...discovered that, if the matrix M has rank r, and unless it is too ‘structured’, a small random subset of its entries allows one to reconstruct it exactly. This result was first proved by Candès and Recht [3] by analyzing a convex relaxation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization probl...

636 | The geometry of algorithms with orthogonality constraints
- Edelman, Arias, et al.
- 1999
Citation Context: ...assman manifold). The gradient descent algorithm is applied to the function F̃ : M(m,n) ≡ G(m,r) × G(n,r) → R. For further details on optimization by gradient descent on matrix manifolds we refer to [12, 13]. 1.3 Main results Our first result shows that, in great generality, the rank-r projection of ÑE provides a reasonable approximation of M. Throughout this paper, without loss of generality, we assume ...

554 | A singular value thresholding algorithm for matrix completion
- Cai, Candès, et al.
Citation Context: ...ation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization). In an alternative line of work, the authors of [1] attacked the same problem using a combination of spectral techniques and manifold optimization: we will refer t...
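The iterative schemes referenced here include singular value thresholding (SVT). As a rough illustration only, here is a minimal numpy sketch of an SVT-style iteration: soft-threshold the singular values of a dual variable, then take a step on the observed entries. The function name and the `tau`/`delta` choices are illustrative heuristics of ours, not the cited paper's tuned parameters.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, delta=1.2, iters=200):
    """SVT-style sketch for matrix completion.

    M_obs : observed entries (zeros elsewhere); mask : boolean array of
    revealed positions. tau and delta are heuristic assumptions here.
    """
    m, n = M_obs.shape
    if tau is None:
        tau = 5 * np.sqrt(m * n)          # common heuristic choice
    Y = np.zeros((m, n), dtype=float)     # dual variable
    X = np.zeros((m, n), dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)      # soft-threshold singular values
        X = (U * s) @ Vt                  # shrunk low-rank estimate
        Y += delta * mask * (M_obs - X)   # step on observed residual
    return X
```

At a fixed point this drives the residual on the revealed entries toward zero while keeping the estimate low-rank through the shrinkage step.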

357 | The power of convex relaxation: Near-optimal matrix completion
- Candès, Tao
- 2009
Citation Context: ...onstruct it exactly. This result was first proved by Candès and Recht [3] by analyzing a convex relaxation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization). In an alternative line of work, the authors of [1] atta...

294 | Optimization Algorithms on Matrix Manifolds
- Absil, Mahony, et al.
- 2008
Citation Context: ...assman manifold). The gradient descent algorithm is applied to the function F̃ : M(m,n) ≡ G(m,r) × G(n,r) → R. For further details on optimization by gradient descent on matrix manifolds we refer to [12, 13]. 1.3 Main results Our first result shows that, in great generality, the rank-r projection of ÑE provides a reasonable approximation of M. Throughout this paper, without loss of generality, we assume ...

286 | Matrix Rank Minimization with Applications
- Fazel
- 2001
Citation Context: ...it is too ‘structured’, a small random subset of its entries allows one to reconstruct it exactly. This result was first proved by Candès and Recht [3] by analyzing a convex relaxation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a...

285 | Probabilistic Matrix Factorization - Salakhutdinov, Mnih - 2008

260 | Maximum-margin matrix factorization - Srebro, Rennie, et al. - 2005

237 | Fast Monte-Carlo algorithms for finding low-rank approximations
- Frieze, Kannan, et al.
Citation Context: ...combination of spectral techniques and manifold optimization, that we call here OPTSPACE. We prove performance guarantees that are order-optimal in a number of circumstances. 1 Introduction Spectral techniques are an authentic workhorse in machine learning, statistics, numerical analysis, and signal processing. Given a matrix M, its largest singular values – and the associated singular vectors – ‘explain’ the most significant correlations in the underlying data source. A low-rank approximation of M can further be used for low-complexity implementations of a number of linear algebra algorithms [2]. In many practical circumstances we have access only to a sparse subset of the entries of an m × n matrix M. It has recently been discovered that, if the matrix M has rank r, and unless it is too ‘structured’, a small random subset of its entries allows one to reconstruct it exactly. This result was first proved by Candès and Recht [3] by analyzing a convex relaxation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a general...
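The spectral step described in this excerpt – keeping only the largest singular values and their vectors – is the classical best rank-r approximation (Eckart–Young). A minimal sketch, assuming numpy; the function name is ours:

```python
import numpy as np

def rank_r_projection(A, r):
    """Best rank-r approximation of A in Frobenius norm, obtained by
    truncating the SVD to the r largest singular values (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

Applied to an exactly rank-r matrix this recovers it; applied to a noisy matrix it ‘explains’ the dominant correlations, as the excerpt notes.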

196 | Fixed point and Bregman iterative methods for matrix rank minimization
- Ma, Goldfarb, et al.
- 2011
Citation Context: ...ation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization). In an alternative line of work, the authors of [1] attacked the same problem using a combination of spectral techniques and manifold optimization: we will refer t...

196 | Weighted low-rank approximations - Srebro, Jaakkola - 2003

194 | Matrix completion from a few entries
- Keshavan, Montanari, et al.
Citation Context: ...nt space of M(m,n) at x) and (1/m)‖W‖² + (1/n)‖Q‖² = d(x,u)², because t ↦ x(t) is parametrized proportionally to the arclength. Explicit expressions for w can be obtained in terms of w ≡ ẋ(0) = (W,Q) (Keshavan et al., 2010). If we let W = LΘRᵀ be the singular value decomposition of W, we obtain W = −URΘ sinΘ Rᵀ + LΘ cosΘ Rᵀ. (20) It was proved in Keshavan et al. (2010) that 〈grad G(x), w〉 ≥ 0. It is therefore sufficient...

183 | An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems
- Toh, Yun
- 2009
Citation Context: ...ation introduced by Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization). In an alternative line of work, the authors of [1] attacked the same problem using a combination of spectral techniques and manifold optimization: we will refer t...

165 | Fast computation of low rank matrix approximations
- Achlioptas, McSherry
- 2001
Citation Context: ...of Theorem 1.3 will be given in the journal version of this paper. 1.5 Comparison with related work Let us begin by mentioning that a statement analogous to our preliminary Theorem 1.1 was proved in [14]. Our result however applies to any number of revealed entries, while the one of [14] requires |E| ≥ (8 log n)⁴ n (which for n ≤ 5 · 10⁸ is larger than n²). As for Theorem 1.2, we will mainly compa...

147 | Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization
- Wright, Ganesh, et al.
Citation Context: ...Fazel [4]. A tighter analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization). In an alternative line of work, the authors of [1] attacked the same problem using a combination of spectral techniques and manifold optimization: we will refer to their algori...

80 | On the second eigenvalue in random regular graphs - Friedman, Kahn, et al. - 1989

76 | Admira: Atomic decomposition for minimum rank approximation
- Lee, Bresler
- 2010
Citation Context: ...mparable with the information theoretic lower bound: roughly nr max{r, log n} random entries are needed to reconstruct M exactly (here we assume m of order n). A related approach was also developed in [10], although without performance guarantees for matrix completion. The above results crucially rely on the assumption that M is exactly a rank r matrix. For many applications of interest, this assumptio...

71 | Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. [Online]. Available: http://www.ist.caltech.edu/~brecht/publications.html - Recht, Fazel, et al.

62 | Spectral techniques applied to sparse random graphs - Feige, Ofek - 2005

41 | Collaborative filtering in a non-uniform world: Learning with the weighted trace norm - Srebro, Salakhutdinov - 2010

28 | A gradient descent algorithm on the Grassman manifold for matrix completion. arXiv preprint arXiv:0910.5260
- Keshavan, Oh
- 2009
Citation Context: ...f ÑE, P_r(ÑE) = X0 S0 Y0ᵀ; 3: Minimize F̃(X,Y) through gradient descent, with initial condition (X0, Y0). We may note here that the rank of the matrix M, if not known, can be reliably estimated from ÑE (Keshavan and Oh, 2009). The various steps of the above algorithm are defined as follows. Trimming. We say that a row is ‘over-represented’ if it contains more than 2|E|/m revealed entries (i.e., more than twice the averag...
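The trimming and spectral-initialization steps described in this excerpt can be sketched as follows. This is an illustration under stated assumptions, not the authors' reference implementation: the function names and the rescaling factor are ours, and the gradient-descent refinement of F̃ on the manifold is omitted.

```python
import numpy as np

def trim(M_obs, mask):
    """Zero out over-represented rows/columns: those with more than
    twice the average number of revealed entries, per the excerpt."""
    M = M_obs.astype(float).copy()
    msk = mask.copy()
    E = mask.sum()                        # number of revealed entries |E|
    m, n = mask.shape
    over_rows = mask.sum(axis=1) > 2 * E / m
    over_cols = mask.sum(axis=0) > 2 * E / n
    M[over_rows, :] = 0.0
    M[:, over_cols] = 0.0
    msk[over_rows, :] = False
    msk[:, over_cols] = False
    return M, msk

def spectral_init(M_obs, mask, r):
    """Rank-r SVD projection of the trimmed, rescaled matrix, used as
    the initial condition for gradient descent (refinement not shown).
    The mn/|E| rescaling compensates for unobserved entries and is an
    assumption of this sketch."""
    Mt, msk = trim(M_obs, mask)
    m, n = Mt.shape
    scale = m * n / max(msk.sum(), 1)
    U, s, Vt = np.linalg.svd(scale * Mt, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

With all entries revealed the rescaling is 1, no row or column is trimmed, and the initialization reduces to the plain rank-r projection.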

27 | The expected norm of random matrices - Seginer - 2000

23 | A new look at independence, The Annals of Probability - Talagrand - 1996

18 | Matrix completion from a few entries. arXiv, 901,
- Keshavan, Oh, et al.
- 2009
Citation Context: ...ries. The problem arises in a variety of applications, from collaborative filtering (the ‘Netflix problem’) to structure-from-motion and positioning. We study a low complexity algorithm introduced in [1], based on a combination of spectral techniques and manifold optimization, that we call here OPTSPACE. We prove performance guarantees that are order-optimal in a number of circumstances. 1 Introducti...

9 | Matrix completion with noise. arXiv:0903.3131
- Candès, Plan
- 2009
Citation Context: ...d it is therefore important to investigate their robustness. Can the above approaches be generalized when the underlying data is ‘well approximated’ by a rank r matrix? This question was addressed in [11] within the convex relaxation approach of [3]. The present paper proves a similar robustness result for OPTSPACE. Remarkably the guarantees we obtain are order-optimal in a variety of circumstances, a...

2 | Restricted Boltzmann machines for collaborative filtering - Salakhutdinov, Hinton - 2007

1 | Completion from Noisy Entries - Frieze, Kannan, et al.