Results 1–10 of 46
Stable principal component pursuit
 In Proc. of International Symposium on Information Theory, 2010
Abstract

Cited by 94 (3 self)
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank structure recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
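The paper's setting involves random linear measurements; as a simpler, fully observed illustration of the convex heuristic (nuclear norm plus ℓ1 norm), here is a minimal principal-component-pursuit sketch solved with a standard augmented-Lagrangian scheme. Parameter choices follow common defaults and are not taken from this paper.

```python
import numpy as np

def soft_threshold(X, tau):
    # entrywise shrinkage: proximal operator of tau * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: proximal operator of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=1000, tol=1e-7):
    """Decompose M ~ L + S with L low-rank and S sparse by minimizing
    ||L||_* + lam * ||S||_1 subject to L + S = M (inexact ALM)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

The default lam = 1/sqrt(max(m, n)) is the standard universal choice for principal component pursuit.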
Recursive robust PCA or recursive sparse recovery in large but structured noise
 In IEEE Intl. Symp. on Information Theory (ISIT), 2013
Abstract

Cited by 22 (17 self)
This Dissertation is brought to you for free and open access by the Graduate College at Digital Repository @ Iowa State University. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Digital Repository @ Iowa State University.
Recovery of low-rank plus compressed sparse matrices with application to unveiling traffic anomalies
 IEEE Trans. Info. Theory, 2013
Abstract

Cited by 21 (5 self)
Given the noiseless superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, the goal of this paper is to establish deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems. Leveraging the ability of the ℓ1 and nuclear norms to recover sparse and low-rank matrices, a convex program is formulated to estimate the unknowns. Analysis and simulations confirm that the said convex program can recover the unknowns for sufficiently low-rank and sparse enough components, along with a compression matrix possessing an isometry property when restricted to operate on sparse vectors. When the low-rank, sparse, and compression matrices are drawn from certain random ensembles, it is established that exact recovery is possible with high probability. First-order algorithms are developed to solve the nonsmooth convex optimization problem with provable iteration complexity guarantees. Insightful tests with synthetic and real network data corroborate the effectiveness of the novel approach in unveiling traffic anomalies across flows and time, and its ability to outperform existing alternatives.
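In symbols (my notation, not necessarily the paper's): with Y the observed superposition, X the low-rank component, A the sparse matrix, and R the known fat compression matrix, the convex estimator described above reads

```latex
(\hat{X},\hat{A}) \;=\; \arg\min_{X,\,A}\; \|X\|_{*} \;+\; \lambda\,\|A\|_{1}
\quad \text{subject to} \quad Y \;=\; X + R\,A ,
```

where \lambda > 0 trades off the rank of X against the sparsity of A.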
Low-Rank Tensors for Scoring Dependency Structures
Abstract

Cited by 19 (5 self)
Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, high-dimensional feature representations. A small subset of such features is often selected manually. This is problematic when features lack clear linguistic meaning, as in embeddings, or when the information is blended across features. In this paper, we use tensors to map high-dimensional feature vectors into low-dimensional representations. We explicitly maintain the parameters as a low-rank tensor to obtain low-dimensional representations of words in their syntactic roles, and to leverage modularity in the tensor for easy training with online algorithms. Our parser consistently outperforms the Turbo and MST parsers across 14 different languages. We also obtain the best published UAS results on 5 languages.
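The computational point of keeping the parameters in factored low-rank form can be sketched as follows. The factor names and dimensions are illustrative, and the paper additionally combines this tensor score with a traditional feature score, which is omitted here:

```python
import numpy as np

def lowrank_arc_score(U, V, W, h, m, a):
    """Score a head-modifier arc with a rank-r tensor kept in factored form.

    U, V, W : r x d factor matrices; the low-rank tensor is
              sum_i U[i] (outer) V[i] (outer) W[i], never formed explicitly.
    h, m, a : feature vectors for the head word, modifier word, and arc.
    """
    # project each feature vector into the shared r-dimensional space,
    # then combine componentwise -- O(r*d) work instead of O(d^3)
    return float(np.sum((U @ h) * (V @ m) * (W @ a)))
```

The equivalent dense-tensor score would require materializing a d×d×d array; the factored form needs only three matrix-vector products.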
An online algorithm for separating sparse and low-dimensional signal sequences from their sum
 IEEE Trans. Signal Process
Abstract

Cited by 10 (8 self)
This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors and a time sequence of dense vectors from their sum, when the dense vectors lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering, where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects, on the fly. Prac-ReProCS is a practical modification of its theoretical counterpart, which was analyzed in our recent work. An extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. Index Terms—Online robust PCA, recursive sparse recovery, large but structured noise, compressed sensing.
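A single, highly simplified projection-then-sparse-recovery step in the spirit of ReProCS might look like this. The thresholding-based support estimate is a toy stand-in for the ℓ1-minimization step used in practice, and all names below are mine:

```python
import numpy as np

def reprocs_step(m_t, P, tau):
    """One highly simplified ReProCS-style step (illustrative, not the paper's).

    m_t : observed vector, the sum of a sparse s_t and a background l_t
          lying (approximately) in the column span of P.
    P   : orthonormal basis estimate of the slowly changing background subspace.
    tau : threshold for the support estimate on the projected residual.
    """
    n = len(m_t)
    Phi = np.eye(n) - P @ P.T      # projector onto the orthogonal complement
    y = Phi @ m_t                  # nulls the background, keeps Phi @ s_t
    support = np.abs(y) > tau      # toy support estimate (the real method uses
                                   # l1 minimization followed by thresholding)
    s_hat = np.zeros_like(m_t)
    # least squares on the support, against the projected measurement operator
    s_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    l_hat = m_t - s_hat            # background estimate is the remainder
    return s_hat, l_hat
```

Projecting onto the complement of the background subspace turns the separation problem into a standard sparse-recovery problem, which is the core trick the algorithm exploits online.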
Performance guarantees for undersampled recursive sparse recovery in large but structured noise (long version). [Online]. Available: http://www.public.iastate.edu/%7Eblois/ReProModCSLong.pdf
Abstract

Cited by 8 (7 self)
We study the problem of recursively reconstructing a time sequence of sparse vectors St from measurements of the form Mt = ASt + BLt, where A and B are known measurement matrices and Lt lies in a slowly changing low-dimensional subspace. We assume that the signal of interest (St) is sparse and has support which is correlated over time. We introduce a solution which we call Recursive Projected Modified Compressed Sensing (ReProMoCS), which exploits the correlated support change of St. We show that, under weaker assumptions than previous work, with high probability, ReProMoCS will exactly recover the support set of St, and the reconstruction error of St is upper bounded by a small time-invariant value. A motivating application where the above problem occurs is in functional MRI imaging of the brain to detect regions that are “activated” in response to stimuli. In this case both measurement matrices are the same (i.e., A = B). The active region image constitutes the sparse vector St and this region changes slowly over time. The background brain image changes are global but the amount of change is very little, and hence it can be well modeled as lying in a slowly changing low-dimensional subspace; this constitutes Lt.
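Roughly, in my notation: with P̂ a basis estimate for the slowly changing subspace of Lt, Q an orthonormal basis for range(B P̂), and T the support estimate carried over from the previous time, each ReProMoCS-style step solves a modified-CS program on projected measurements,

```latex
\hat{s}_t \;=\; \arg\min_{s}\; \big\| s_{T^{c}} \big\|_{1}
\quad \text{subject to} \quad
\big(I - QQ^{\top}\big) M_t \;=\; \big(I - QQ^{\top}\big) A\, s ,
```

so that the nuisance term B Lt is (approximately) annihilated by the projection and only the entries outside the predicted support T are penalized.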
Signal Recovery on Incoherent Manifolds
Abstract

Cited by 7 (1 self)
Suppose that we observe noisy linear measurements of an unknown signal that can be modeled as the sum of two component signals, each of which arises from a nonlinear submanifold of a high-dimensional ambient space. We introduce SPIN, a first-order projected gradient method to recover the signal components. Despite the non-convex nature of the recovery problem and the possibility of underdetermined measurements, SPIN provably recovers the signal components, provided that the signal manifolds are incoherent and that the measurement operator satisfies a certain restricted isometry property. SPIN significantly extends the scope of current recovery models and algorithms for low-dimensional linear inverse problems and matches (or exceeds) the current state of the art in terms of performance.
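A toy instance of the projected-gradient idea: the two "manifolds" here are my own stand-ins (sparse in the standard basis vs. sparse in an orthonormal basis B), both unions of subspaces, so the projection step reduces to hard thresholding:

```python
import numpy as np

def hard_threshold(x, s):
    # keep the s largest-magnitude entries, zero out the rest
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def spin_like(z, Phi, B, s1, s2, step=1.0, n_iter=100):
    """Toy projected-gradient separation in the spirit of SPIN (not the
    paper's algorithm): alternate a gradient step on the data-fit term with
    projections of each component onto its own low-dimensional set."""
    n = Phi.shape[1]
    x1, x2 = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        g = Phi.T @ (z - Phi @ (x1 + x2))   # gradient of 0.5*||z - Phi(x1+x2)||^2
        x1 = hard_threshold(x1 + step * g, s1)
        x2 = B @ hard_threshold(B.T @ (x2 + step * g), s2)
    return x1, x2
```

The incoherence requirement in the abstract shows up concretely here: separation only succeeds when the standard basis and B are mutually incoherent, so no component of one model is well approximated by the other.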
Robust Locally Linear Analysis with Applications to Image Denoising and Blind Inpainting
, 2011
Abstract

Cited by 5 (1 self)
We study the related problems of denoising images corrupted by impulsive noise and blind inpainting (i.e., inpainting when the deteriorated region is unknown). Our basic approach is to model the set of patches of pixels in an image as a union of low-dimensional subspaces, corrupted by sparse but perhaps large-magnitude noise. For this purpose, we develop a robust and iterative RANSAC-like method for single-subspace modeling and extend it to an iterative algorithm for modeling multiple subspaces. We prove convergence for both algorithms and carefully compare our methods with other recent ideas for such robust modeling. We demonstrate state-of-the-art performance of our method for both imaging problems.
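The basic shape of RANSAC-style single-subspace fitting can be sketched as follows; this is a generic toy construction, not the paper's specific iterative method:

```python
import numpy as np

def ransac_subspace(X, d, n_trials, tol, rng):
    """Robustly fit a d-dimensional linear subspace to the rows of X.

    RANSAC-style loop: repeatedly span a candidate subspace from d random
    points, keep the candidate with the most inliers, then refit by SVD on
    those inliers.
    """
    best_inliers = None
    for _ in range(n_trials):
        idx = rng.choice(len(X), size=d, replace=False)
        Q, _ = np.linalg.qr(X[idx].T)                 # basis from the sample
        resid = np.linalg.norm(X - X @ Q @ Q.T, axis=1)
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    U, s, Vt = np.linalg.svd(X[best_inliers], full_matrices=False)
    return Vt[:d].T, best_inliers                     # refit basis, inlier mask
```

Because a candidate is spanned by only d points, trials that happen to sample only inliers recover the true subspace even when a large fraction of the points are gross outliers, which is exactly the robustness property needed for impulsive noise.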
MATRIX ALPS: Accelerated Low Rank and Sparse Matrix Reconstruction
Abstract

Cited by 5 (2 self)
We propose MATRIX ALPS for recovering a sparse plus low-rank decomposition of a matrix given its corrupted and incomplete linear measurements. Our approach is a first-order projected gradient method over non-convex sets, and it exploits a well-known memory-based acceleration technique. We theoretically characterize the convergence properties of MATRIX ALPS using the stable embedding properties of the linear measurement operator. We then numerically illustrate that our algorithm outperforms the existing convex as well as non-convex state-of-the-art algorithms in computational efficiency without sacrificing stability.
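The actual MATRIX ALPS operates on incomplete linear measurements and adds memory-based acceleration. As a stripped-down illustration of first-order projection onto the two non-convex sets involved (rank-r matrices and k-sparse matrices), here is a GoDec-style alternation for the fully observed case, which is my simplification rather than the paper's algorithm:

```python
import numpy as np

def best_rank_r(X, r):
    # projection onto the set of rank-r matrices via truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def keep_k_largest(X, k):
    # projection onto matrices with at most k nonzero entries
    out = np.zeros_like(X)
    flat = np.argsort(np.abs(X), axis=None)[-k:]
    out.flat[flat] = X.flat[flat]
    return out

def sparse_plus_lowrank(M, r, k, n_iter=100):
    """Alternating hard-thresholding sketch for M ~ L + S, with L rank-r
    and S k-sparse (fully observed; no acceleration)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = best_rank_r(M - S, r)
        S = keep_k_largest(M - L, k)
    return L, S
```

Unlike convex relaxations, both projections here are exact but non-convex, which is why the paper's analysis needs stable embedding properties of the measurement operator rather than convex duality.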
Riemannian Pursuit for Big Matrix Recovery
Abstract

Cited by 5 (1 self)
Low-rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions, and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.
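The outer loop described above (a sequence of fixed-rank subproblems of increasing rank, each warm-started from the last) can be sketched for matrix completion as follows. RP solves each subproblem with a Riemannian conjugate gradient method; plain projected gradient descent is my simplification:

```python
import numpy as np

def fixed_rank_step(X, M_obs, mask, r, step, n_iter):
    # projected gradient descent on the observed entries, rank-r constraint
    for _ in range(n_iter):
        G = mask * (M_obs - X)            # gradient of 0.5*||mask*(M - X)||_F^2
        U, s, Vt = np.linalg.svd(X + step * G, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]   # project back to rank r
    return X

def rank_pursuit(M_obs, mask, max_rank, step, inner=150):
    """Rank-incremental sketch loosely following RP's outer loop: solve a
    sequence of fixed-rank subproblems of increasing rank, warm-starting
    each from the previous solution."""
    X = np.zeros_like(M_obs)
    for r in range(1, max_rank + 1):
        X = fixed_rank_step(X, M_obs, mask, r, step, inner)
    return X
```

Growing the rank gradually keeps every subproblem on a fixed-rank set (a smooth manifold), which is what makes Riemannian solvers applicable and helps with ill-conditioned targets whose small singular values would otherwise be swamped.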