Results 1–10 of 24
Robust Principal Component Analysis?
2009
Cited by 553 (26 self)

Abstract:
This paper is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
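The Principal Component Pursuit program described above can be sketched with a standard ADMM-style scheme built from the two proximal operators it names: singular value thresholding for the nuclear norm and soft thresholding for the ℓ1 norm. The weight lam = 1/sqrt(max(m, n)) follows the paper; the penalty parameter mu and the iteration budget below are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, mu=None, n_iter=1000, tol=1e-7):
    """Principal Component Pursuit: min ||L||_* + lam*||S||_1  s.t.  L + S = M,
    solved by alternating the two proximal steps with a dual update."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # weight suggested in the paper
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()  # common heuristic penalty
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                 # dual variable for L + S = M
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                    # primal residual
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this separates the two components to within a small relative error, matching the exact-recovery claim in the abstract.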
Guaranteed rank minimization via singular value projection
In NIPS, 2010
Cited by 96 (7 self)

Abstract:
Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm, SVP (Singular Value Projection), for rank minimization under affine constraints (ARMP), and show that SVP recovers the minimum-rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees a geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than existing methods. We also introduce a Newton step into our SVP framework to speed up convergence, with substantial empirical gains. Next, we address a practically important application of ARMP: the problem of low-rank matrix completion, for which the defining affine constraints do not directly obey RIP, hence the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property, and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude, and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem.
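The SVP iteration is easy to sketch for the matrix-completion instance of ARMP mentioned above: take a gradient step on the sampled squared error, then project back onto rank-k matrices with a truncated SVD. This is a minimal illustration under simple assumptions (unit step size, fixed iteration count), not the authors' tuned implementation.

```python
import numpy as np

def project_rank_k(X, k):
    """Project onto matrices of rank at most k via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

def svp_complete(M, mask, k, eta=1.0, n_iter=500):
    """SVP for matrix completion: X <- P_k(X - eta * grad), where the
    objective is 0.5 * ||mask * (X - M)||_F^2 (mask = observed entries)."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        grad = mask * (X - M)
        X = project_rank_k(X - eta * grad, k)
    return X
```

With a well-sampled incoherent low-rank matrix, this recovers the unobserved entries to small relative error, consistent with the empirical behavior the abstract reports.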
Low-rank matrix completion by Riemannian optimization
 ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de
Cited by 39 (3 self)

Abstract:
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton's method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
Collaborative Spectrum Sensing from Sparse Observations Using Matrix Completion for Cognitive Radio Networks
Cited by 37 (5 self)

Abstract:
In cognitive radio, spectrum sensing is a key component for detecting spectrum holes (i.e., channels not used by any primary users). Collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability to check complete spectrum usage states. Unfortunately, due to power limitations and channel fading, the available channel sensing information is far from sufficient to identify the unoccupied channels directly. Aiming at breaking this bottleneck, we apply recent matrix completion techniques to greatly reduce the sensing information needed. We formulate the collaborative sensing problem as a matrix completion subproblem and a joint-sparsity reconstruction subproblem. Results of numerical simulations that validate the effectiveness and robustness of the proposed approach are presented. In particular, in noiseless cases, when the number of primary users is small, exact detection was obtained with no more than 8% of the complete sensing information, while as the number of primary users increases, to achieve a detection rate of 95.55% the required information percentage was merely 16.8%.
Integrating Low-Rank and Group-Sparse Structures for Robust Multi-Task Learning
In KDD, 2011
Cited by 27 (2 self)

Abstract:
Multi-task learning (MTL) aims at improving generalization performance by utilizing the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not be the case in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure, and simultaneously identifies the outlier tasks using a group-sparse structure. The proposed RMTL algorithm is formulated as a non-smooth convex (unconstrained) optimization problem. We propose to adopt the accelerated proximal method (APM) for solving such an optimization problem. The key component in APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this key property, we establish a theoretical bound for characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm.
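The analytic proximal operators behind a low-rank plus group-sparse decomposition of this kind are standard: the prox of the nuclear norm soft-thresholds the singular values, and the prox of the column-wise group (ℓ2,1) penalty shrinks each column's norm, zeroing out columns that would correspond to outlier tasks. A sketch under the assumption that tasks are stored as columns (the exact operator used in the paper may differ in details):

```python
import numpy as np

def prox_nuclear(X, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_group_l2(X, tau):
    """Prox of tau * sum_j ||X[:, j]||_2: shrink each column's norm.
    A column driven to zero flags that task as an outlier."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Both maps are cheap closed-form computations, which is what makes each APM iteration inexpensive.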
Compressed Sensing with Nonlinear Observations
2010
Cited by 24 (3 self)

Abstract:
Compressed sensing is a recently developed signal acquisition technique. In contrast to traditional sampling methods, significantly fewer samples are required whenever the signals admit a sparse representation. Crucially, sampling methods can be constructed that allow the reconstruction of sparse signals from a small number of measurements using efficient algorithms. We have recently generalised these ideas in two important ways. We have developed methods and theoretical results that allow much more general constraints to be imposed on the signal, and we have also extended the approach to more general Hilbert spaces. In this paper we introduce a further generalisation of compressed sensing and allow for nonlinear sampling methods. This is achieved by using a recently introduced generalisation of the Restricted Isometry Property (or the bi-Lipschitz condition) traditionally imposed on the compressed sensing system. We show that, if this more general condition holds for the nonlinear sampling system, then we can reconstruct signals from nonlinear compressive measurements.
Low-Rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization
Cited by 18 (4 self)

Abstract:
We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to iteratively recover any matrix with an error on the order of the best rank-k approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows us to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. AMS subject classification: 65J22, 65K10, 52A41, 49M30. Key words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
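The reweighting at the heart of such an algorithm is simple to state. For the nuclear norm, each iteration solves a weighted least-squares problem with weight matrix W = (X X^T + eps^2 I)^(-1/2), chosen so that the weighted objective tr(W X X^T) approximates the nuclear norm ||X||_* as the smoothing parameter eps goes to zero. A minimal sketch of the weight computation (square X assumed for simplicity; the eps schedule here is generic, not this paper's exact rule):

```python
import numpy as np

def irls_weight(X, eps):
    """W = (X X^T + eps^2 I)^{-1/2}. Then tr(W X X^T) equals
    sum_i s_i^2 / sqrt(s_i^2 + eps^2), a smooth surrogate that
    tends to the nuclear norm sum_i s_i as eps -> 0."""
    U, s, _ = np.linalg.svd(X)  # square X: X X^T = U diag(s^2) U^T
    return U @ np.diag(1.0 / np.sqrt(s**2 + eps**2)) @ U.T
```

A full solver would alternate this weight update with a constrained weighted least-squares step; it is in that step that the Woodbury identity mentioned above pays off for matrix completion.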
Iterative reweighted least squares for matrix rank minimization
 In Proceedings of the Allerton Conference, 2010
Cited by 17 (2 self)

Abstract:
The classical compressed sensing problem is to find the sparsest solution to an underdetermined system of linear equations. A good convex approximation to this problem is to minimize the ℓ1 norm subject to affine constraints. The Iterative Reweighted Least Squares algorithm IRLS-p (0 < p ≤ 1) has been proposed as a method to solve the ℓp (p ≤ 1) minimization problem with affine constraints. Recently Chartrand et al. observed that IRLS-p with p < 1 has better empirical performance than ℓ1 minimization, and Daubechies et al. gave 'local' linear and superlinear convergence results for IRLS-p with p = 1 and p < 1, respectively. In this paper we extend IRLS-p as a family of algorithms for the matrix rank minimization problem, and we also present a related family of algorithms, sIRLS-p. We present guarantees on recovery of low-rank matrices for IRLS-1 under the Null Space Property (NSP). We also establish that the difference between successive iterates of IRLS-p and sIRLS-p converges to zero, and that the IRLS-0 algorithm converges to a stationary point of a non-convex rank-surrogate minimization problem. On the numerical side, we give efficient implementations for IRLS-0 and demonstrate that both sIRLS-0 and IRLS-0 perform better than algorithms such as Singular Value Thresholding (SVT) on a range of 'hard' problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large). We also observe that sIRLS-0 performs better than the Iterative Hard Thresholding algorithm (IHT) when there is no a priori information on the low-rank solution.
Iterative reweighted algorithms for matrix rank minimization
 Journal of Machine Learning Research
Cited by 15 (0 self)

Abstract:
The problem of minimizing the rank of a matrix subject to affine constraints has many applications in machine learning, and is known to be NP-hard. One of the tractable relaxations proposed for this problem is nuclear norm (or trace norm) minimization of the matrix, which is guaranteed to find the minimum-rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, i.e., recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set.