Results 11–20 of 181
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
, 2012
Cited by 78 (10 self)
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying within ℓq-balls of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
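The paper's M-estimator with its spikiness controls is beyond a short sketch, but the impute-and-shrink mechanics underlying nuclear-norm-style matrix completion can be illustrated as follows. This is a generic soft-impute sketch, not the authors' estimator; the function names and parameter values are illustrative.

```python
import numpy as np

def svt_step(M, tau):
    """Singular-value soft-thresholding: shrink singular values by tau.
    The shrinkage is what implicitly controls the rank of the estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, tau=0.05, iters=300):
    """Impute-and-shrink loop (soft-impute sketch). Y holds the noisy
    observed entries (zeros elsewhere); mask marks observed positions."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        Z = np.where(mask, Y, X)  # keep observed entries, impute the rest
        X = svt_step(Z, tau)      # shrink toward low rank
    return X
```

The soft-thresholding level `tau` trades off data fit against rank, loosely playing the role of the rank control in the paper's estimator.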
Online Identification and Tracking of Subspaces from Highly Incomplete Information
, 2010
Cited by 75 (9 self)
This work presents GROUSE (Grassmannian Rank-One Update Subspace Estimation), an efficient online algorithm for tracking subspaces from highly incomplete observations. GROUSE requires only basic linear algebraic manipulations at each iteration, and each subspace update can be performed in time linear in the dimension of the subspace. The algorithm is derived by analyzing incremental gradient descent on the Grassmannian manifold of subspaces. With a slight modification, GROUSE can also be used as an online incremental algorithm for the matrix completion problem of imputing missing entries of a low-rank matrix. GROUSE performs exceptionally well in practice, both in tracking subspaces and as an online algorithm for matrix completion.
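A single GROUSE-style update can be sketched roughly as below. This is a simplified variant: a plain rank-one gradient step followed by re-orthonormalization, rather than the paper's exact geodesic step on the Grassmannian; all names and the step size are illustrative.

```python
import numpy as np

def grouse_step(U, v, omega, eta=0.1):
    """One simplified GROUSE-style update (a sketch, not the paper's exact
    geodesic step). U is an n x d orthonormal basis, v a streamed vector
    observed only on the index set omega."""
    Uo = U[omega]                                   # rows at observed indices
    w, *_ = np.linalg.lstsq(Uo, v[omega], rcond=None)  # best fit in subspace
    r = np.zeros(U.shape[0])
    r[omega] = v[omega] - Uo @ w                    # residual on observed entries
    U = U + eta * np.outer(r, w)                    # rank-one step toward the data
    Q, _ = np.linalg.qr(U)                          # retract to an orthonormal basis
    return Q
```

Each step costs only a small least-squares solve on the observed entries plus a rank-one update, which is what makes the per-iteration cost so low.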
Parallel stochastic gradient algorithms for large-scale matrix completion
 MATHEMATICAL PROGRAMMING COMPUTATION
, 2013
Cited by 69 (7 self)
This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or γ2-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speedup nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
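The biased ordering that enables lock-free parallelism can be made concrete with a toy schedule: if the rows and columns are each split into p blocks, then the p entry blocks processed in one round share no row block and no column block, so SGD updates on them cannot conflict. This is only a sketch of the scheduling idea, not the paper's implementation.

```python
def jellyfish_rounds(p):
    """Jellyfish-style schedule sketch: in round k, process the p blocks
    (i, (i + k) % p). Within a round, every row block and every column
    block appears exactly once, so the p workers never touch the same
    factor rows or columns and need no locks."""
    return [[(i, (i + k) % p) for i in range(p)] for k in range(p)]
```

Across the p rounds, every (row block, column block) pair is visited exactly once, so one sweep over the schedule is one epoch over the data.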
Tight Oracle Bounds for Low-rank Matrix Recovery from a Minimal Number of Random Measurements
, 2009
Cited by 64 (3 self)
This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an ‘oracle’ error that would be available if the column space of the matrix were known, and 3) a more adaptive ‘oracle’ error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP) introduced in [6] for vectors, and in [22] for matrices.
Low-rank matrix completion using alternating minimization. arXiv:1212.0467 e-print
, 2012
Collaborative Spectrum Sensing from Sparse Observations Using Matrix Completion for Cognitive Radio Networks
Cited by 37 (5 self)
In cognitive radio, spectrum sensing is a key component to detect spectrum holes (i.e., channels not used by any primary users). Collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability of checking complete spectrum usage states. Unfortunately, due to power limitation and channel fading, available channel sensing information is far from being sufficient to identify the unoccupied channels directly. Aiming at breaking this bottleneck, we apply recent matrix completion techniques to greatly reduce the sensing information needed. We formulate the collaborative sensing problem as a matrix completion subproblem and a joint-sparsity reconstruction subproblem. Results of numerical simulations that validate the effectiveness and robustness of the proposed approach are presented. In particular, in noiseless cases, when the number of primary users is small, exact detection was obtained with no more than 8% of the complete sensing information, while as the number of primary users increases, to achieve a detection rate of 95.55%, the required information percentage was merely 16.8%.
Low-rank matrix completion by Riemannian optimization
 ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de
Cited by 36 (3 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
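The metric-projection retraction used above can be sketched in a few lines: mapping an arbitrary matrix back onto the manifold of rank-r matrices is exactly the truncated SVD, which by the Eckart-Young theorem is the best rank-r approximation in Frobenius norm. The function name is illustrative.

```python
import numpy as np

def retract_rank_r(X, r):
    """Metric-projection retraction sketch: project X onto the manifold of
    rank-r matrices by keeping its r largest singular values and directions
    (the best rank-r approximation in Frobenius norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
```

In a manifold optimizer, a step is taken in the tangent space and then retracted back with a call like this, so the iterates always remain exactly rank r.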
Hankel matrix rank minimization with applications to system identification and realization
, 2011
Cited by 36 (4 self)
In this paper, we introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods, the proximal point algorithm and gradient projection methods. We perform computational experiments to compare these methods on the system identification and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov’s extrapolation techniques) usually outperforms other first-order methods in terms of CPU time on both real and simulated data, while for the system realization problem, the alternating direction method, as applied to a certain primal reformulation, usually outperforms other first-order methods in terms of CPU time.
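The Hankel structure the framework exploits is easy to make concrete: for noiseless impulse-response data of a low-order LTI system, the Hankel matrix built from the sequence has rank equal to the system order, which is the low-rank structure the nuclear-norm surrogate targets. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def hankel(h, rows):
    """Build the Hankel matrix H with H[i, j] = h[i + j] from a sequence h.
    Every antidiagonal is constant; for the impulse response of an order-n
    LTI system, this matrix has rank n."""
    cols = len(h) - rows + 1
    return np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
```

For example, the geometric sequence h[k] = 0.5**k (a first-order system) yields a Hankel matrix of rank 1, since each column is 0.5 times the previous one.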
Three-Dimensional Structure Determination from Common Lines in Cryo-EM by Eigenvectors and Semidefinite Programming
, 2011
Cited by 32 (16 self)
The cryo-electron microscopy reconstruction problem is to find the three-dimensional (3D) structure of a macromolecule given noisy samples of its two-dimensional projection images at unknown random directions. Present algorithms for finding an initial 3D structure model are based on the “angular reconstitution” method, in which a coordinate system is established from three projections and the orientation of the particle giving rise to each image is deduced from common lines among the images. However, reliable detection of common lines is difficult due to the low signal-to-noise ratio of the images. In this paper we describe two algorithms for finding the unknown imaging directions of all projections by minimizing global self-consistency errors. In the first algorithm, the minimizer is obtained by computing the three largest eigenvectors of a specially designed symmetric matrix derived from the common lines, while the second algorithm is based on semidefinite programming (SDP). Compared with existing algorithms, the advantages of our algorithms are fivefold: first, they accurately estimate all orientations at very low common-line detection rates; second, they are extremely fast, as they involve only the computation of a few top eigenvectors or a sparse SDP; third, they are non-sequential and use the information in all common lines at once; fourth, they are amenable to a rigorous mathematical analysis using spectral analysis and random matrix theory; and finally, the algorithms are optimal in the sense that they reach the information-theoretic Shannon bound up to a constant for an idealized probabilistic model.
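The heavy computation in the first algorithm is extracting a few top eigenvectors of a symmetric matrix. As a rough sketch of why this step is cheap, one standard way to do it is orthogonal (subspace) iteration; the routine below is illustrative, not the authors' code, and the specially designed common-lines matrix is not constructed here.

```python
import numpy as np

def top_eigvecs(S, k, iters=200, seed=0):
    """Orthogonal (subspace) iteration sketch: compute an orthonormal basis
    for the invariant subspace of the k largest-magnitude eigenvalues of a
    symmetric matrix S, using only matrix-vector products and thin QR."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((S.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(S @ Q)  # multiply and re-orthonormalize
    return Q
```

Convergence is geometric in the gap between the k-th and (k+1)-th eigenvalues, which is why only a few cheap iterations are needed when the spectrum is well separated.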
The Network Completion Problem: Inferring Missing Nodes and Edges in Networks
Cited by 30 (4 self)
While social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Many times the collected network data is incomplete, with nodes and edges missing. Commonly, only a part of the network can be observed, and we would like to infer the unobserved part. We address this issue by studying the Network Completion Problem: given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework, where we use the observed part of the network to fit a model of network structure, then estimate the missing part of the network using the model, re-estimate the parameters, and so on. We combine the EM algorithm with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as inference about missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state-of-the-art stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes.