Stable principal component pursuit. In Proc. International Symposium on Information Theory, 2010.
Cited by 94 (3 self)

Abstract
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank structure recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
Clustering partially observed graphs via convex optimization. Journal of Machine Learning Research, 2014.
Cited by 47 (13 self)

Abstract
This paper considers the problem of clustering a partially observed unweighted graph, i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense (observed) connectivity within clusters, and sparse across clusters. We take a novel yet natural approach to this problem, by focusing on finding the clustering that minimizes the number of "disagreements", i.e., the sum of the number of (observed) missing edges within clusters and (observed) present edges across clusters. Our algorithm uses convex optimization; its basis is a reduction of disagreement minimization to the problem of recovering an (unknown) low-rank matrix and an (unknown) sparse matrix from their partially observed sum. We evaluate the performance of our algorithm on the classical Planted Partition/Stochastic Block Model. Our main theorem provides sufficient conditions for the success of our algorithm as a function of the minimum cluster size, edge density and observation probability; in particular, the results characterize the tradeoff between the observation probability and the edge density gap. When there are a constant number of clusters of equal size, our results are optimal up to logarithmic factors.
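The objective being minimized can be computed directly, which makes the setting concrete. The sketch below (an illustration of the model, not the paper's convex program; parameter names are our own) samples a partially observed planted-partition graph and counts observed disagreements for a candidate clustering:

```python
import numpy as np

def planted_partition(n, k, p, q, obs, rng):
    """Sample a partially observed planted-partition graph: edge probability
    p within the k equal-size clusters, q across, each node pair observed
    independently with probability obs. Returns adjacency A, observation
    mask O (both symmetric, zero diagonal), and the planted labels."""
    labels = np.repeat(np.arange(k), n // k)
    same = labels[:, None] == labels[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    O = np.triu(rng.random((n, n)) < obs, 1)
    O = (O | O.T).astype(float)
    return A, O, labels

def disagreements(A, O, labels):
    """Observed disagreements of a candidate clustering: observed missing
    edges within clusters plus observed present edges across clusters."""
    same = labels[:, None] == labels[None, :]
    missing_within = (A == 0) & (O == 1) & same
    present_across = (A == 1) & (O == 1) & ~same
    return int(missing_within.sum() + present_across.sum()) // 2
```

With a real density gap and enough observed pairs, the planted clustering has far fewer disagreements than a random relabeling; that gap is the signal the low-rank-plus-sparse decomposition exploits.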
Clustering Sparse Graphs, 2012.
Cited by 25 (6 self)

Abstract
We develop a new algorithm to cluster sparse unweighted graphs, i.e., partition the nodes into disjoint clusters so that there is higher edge density within clusters and low density across clusters. By sparsity we mean the setting where both the in-cluster and across-cluster edge densities are very small, possibly vanishing in the size of the graph. Sparsity makes the problem noisier, and hence more difficult to solve. Any clustering involves a tradeoff between minimizing two kinds of errors: missing edges within clusters and present edges across clusters. Our insight is that in the sparse case, these must be penalized differently. We analyze our algorithm’s performance on the natural, classical and widely studied “planted partition” model (also called the stochastic block model); we show that our algorithm can cluster sparser graphs, and with smaller clusters, than all previous methods. This is seen empirically as well.
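The insight that the two error types need different penalties in the sparse regime can be seen with a weighted disagreement count. The weights below are hypothetical, chosen only to illustrate the effect, and this is not the paper's algorithm:

```python
import numpy as np

def weighted_disagreements(A, labels, c_miss, c_extra):
    """Weighted disagreements of a clustering: each missing in-cluster edge
    costs c_miss, each present across-cluster edge costs c_extra."""
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    missing_within = ((A == 0) & same).sum() // 2
    present_across = ((A == 1) & ~same).sum() // 2
    return c_miss * missing_within + c_extra * present_across
```

In a sparse graph most in-cluster pairs are non-edges, so with equal weights the trivial all-singletons clustering can score better than the planted one; down-weighting missing edges restores the planted clustering as the minimizer.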
Stable Restoration and Separation of Approximately Sparse Signals
Cited by 16 (9 self)

Abstract
This paper develops new theory and algorithms to recover signals that are approximately sparse in some general (i.e., basis, frame, overcomplete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference having a sparse representation in a second general dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image inpainting, super-resolution, and signal separation. We develop efficient recovery algorithms and deterministic conditions that guarantee stable restoration and separation. Two application examples demonstrate the efficacy of our approach.
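One standard convex formulation of this restoration/separation problem is min over (s, e) of ½‖y − As − Be‖² + λ(‖s‖₁ + ‖e‖₁), solvable by stacking the two dictionaries and running proximal gradient descent (ISTA). This is a sketch under our own parameter choices, not necessarily the authors' algorithm:

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding (l1 proximal operator)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def restore_separate(y, A, B, lam=0.05, n_iter=300):
    """ISTA for min_{s,e} 0.5*||y - A s - B e||^2 + lam*(||s||_1 + ||e||_1).
    Returns the estimated signal coefficients s and interference e."""
    D = np.hstack([A, B])                     # stacked dictionary
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / Lipschitz constant of grad
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft(z + step * (D.T @ (y - D @ z)), step * lam)
    return z[:A.shape[1]], z[A.shape[1]:]
```

With step size 1/L, ISTA decreases this objective monotonically; the deterministic guarantees in work of this kind hinge on how incoherent the two dictionaries A and B are.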
Incoherence-optimal matrix completion, 2013.
Cited by 16 (3 self)

Abstract
This paper considers the matrix completion problem. We show that it is not necessary to assume joint incoherence, which is a standard but unintuitive and restrictive condition imposed by previous studies. This leads to a sample complexity bound that is order-wise optimal with respect to the incoherence parameter (as well as to the rank r and the matrix dimension n, except for a log n factor). As a consequence, we improve the sample complexity of recovering a semidefinite matrix from O(nr² log² n) to O(nr log² n), and the highest allowable rank from Θ(√n / log n) to Θ(n / log² n). The key step in the proof is to obtain new bounds on the ℓ∞,2-norm, defined as the maximum of the row and column norms of a matrix. To demonstrate the applicability of our techniques, we discuss extensions to SVD projection, semi-supervised clustering and structured matrix completion. Finally, we turn to the low-rank-plus-sparse matrix decomposition problem, and show that the joint incoherence condition is unavoidable here under computational complexity assumptions on the classical planted clique problem. This means that it is intractable in general to separate a rank-ω(√n) positive semidefinite matrix and a sparse matrix.
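The algorithmic object behind such sample complexity statements is nuclear norm minimization from observed entries. A minimal Soft-Impute-style sketch (a standard method used here only for illustration, not this paper's proof technique; the threshold tau is a hypothetical choice):

```python
import numpy as np

def soft_impute(M_obs, mask, tau=0.2, n_iter=300):
    """Matrix completion sketch: alternately fill the missing entries with
    the current estimate and soft-threshold the singular values, which
    biases the iterates toward low rank."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Y = mask * M_obs + (1.0 - mask) * X      # impute missing entries
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
    return X
```

For an incoherent rank-r n×n matrix, results of the kind discussed above say that roughly nr·polylog(n) uniformly sampled entries suffice for convex methods of this type to succeed.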
Corrupted Sensing: Novel Guarantees for Separating Structured Signals. IEEE Transactions on Information Theory, 2014.
Cited by 14 (0 self)

Abstract
We study the problem of corrupted sensing, a generalization of compressed sensing in which one aims to recover a signal from a collection of corrupted or unreliable measurements. While an arbitrary signal cannot be recovered in the face of arbitrary corruption, tractable recovery is possible when both signal and corruption are suitably structured. We quantify the relationship between signal recovery and two geometric measures of structure, the Gaussian complexity of a tangent cone and the Gaussian distance to a subdifferential. We take a convex programming approach to disentangling signal and corruption, analyzing both penalized programs that trade off between signal and corruption complexity, and constrained programs that bound the complexity of signal or corruption when prior information is available. In each case, we provide conditions for exact signal recovery from structured corruption and stable signal recovery from structured corruption with added unstructured noise. Our simulations demonstrate close agreement between our theoretical recovery bounds and the sharp phase transitions observed in practice. In addition, we provide new interpretable bounds for the Gaussian complexity of sparse vectors, block-sparse vectors, and low-rank matrices, which lead to sharper guarantees of recovery when combined with our results and those in the literature.
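The Gaussian complexity of sparse vectors mentioned at the end is easy to estimate numerically: over unit-norm s-sparse u, sup⟨g, u⟩ equals the ℓ₂ norm of the s largest-magnitude entries of g. A Monte Carlo sketch (the comparison bound in the test is the familiar O(√(s log(n/s))) scaling, stated with a loose constant, not this paper's sharp bound):

```python
import numpy as np

def sparse_gaussian_complexity(n, s, trials, rng):
    """Monte Carlo estimate of E[sup <g, u>] over s-sparse unit vectors u,
    with g ~ N(0, I_n). The sup is attained by aligning u with the s
    entries of g that are largest in magnitude."""
    total = 0.0
    for _ in range(trials):
        g = rng.standard_normal(n)
        total += np.linalg.norm(np.sort(np.abs(g))[-s:])
    return total / trials
```

The number of generic measurements needed to recover such a structured signal scales with the square of this quantity, which is why sparse signals are recoverable from far fewer than n measurements.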
Recovery guarantees for restoration and separation of approximately sparse signals. In Proc. 49th Ann. Allerton Conf. on Comm., Control, and Computing, 2011.
Cited by 2 (2 self)

Abstract
In this paper, we present performance guarantees for the recovery and separation of signals that are approximately sparse in some general (i.e., basis, frame, overcomplete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference that is sparse in a second general dictionary. Applications covered by this framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image inpainting, super-resolution, and signal separation. We develop computationally efficient algorithms for signal restoration and signal separation and present deterministic conditions that guarantee their stability. A simple inpainting example demonstrates the efficacy of our approach.
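Deterministic stability conditions in this line of work are typically stated in terms of the mutual coherence of the two dictionaries, i.e. the largest inner product between their columns. A small sketch, using the identity and an orthonormal DCT basis as an illustrative pair (our choice of example, not necessarily the paper's):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, returned as an n x n matrix whose rows
    are the basis vectors."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)
    return C

def mutual_coherence(A, B):
    """Largest |<a_i, b_j>| over column pairs of two unit-column dictionaries."""
    return float(np.max(np.abs(A.T @ B)))
```

For the identity/DCT pair the coherence is about √(2/n), so spikes (impulse noise) and smooth cosine components are nearly maximally easy to tell apart; guarantees of this kind degrade as the two dictionaries become more coherent.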