Results 1–10 of 241
Sparse Representation For Computer Vision and Pattern Recognition
, 2009
Cited by 146 (9 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
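The review's central point, that dictionaries built from the training samples themselves can carry semantic meaning, rests on solving a sparse coding problem against such a dictionary. As a hedged illustration (not code from the paper; the function name and parameters are our own), here is a minimal Orthogonal Matching Pursuit coder over an arbitrary dictionary `D`:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick up to k atoms of D to represent y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the chosen support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
```

Classification schemes built on sample dictionaries then inspect which training samples receive nonzero coefficients.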
Robust Subspace Segmentation by Low-Rank Representation
Cited by 145 (25 self)
We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as linear combinations of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of the data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.
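A useful concrete fact behind LRR: when the data are clean and the dictionary is the data matrix itself, the minimizer of the nuclear norm ||Z||_* subject to X = XZ has the closed form Z* = Vr Vr^T, built from the skinny SVD X = Ur Sr Vr^T. A minimal NumPy sketch of this closed form (illustrative; not the authors' implementation):

```python
import numpy as np

def lrr_clean(X, rank=None, tol=1e-10):
    """Closed-form LRR for clean data with dictionary A = X:
    the minimizer of ||Z||_* s.t. X = XZ is Z* = Vr Vr^T,
    where X = Ur Sr Vr^T is the skinny (rank-r) SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank if rank is not None else int((s > tol * s[0]).sum())
    Vr = Vt[:r].T
    Z = Vr @ Vr.T          # lowest-rank representation
    W = np.abs(Z)          # symmetric affinity usable for spectral clustering
    return Z, W
```

For data drawn from independent subspaces, Z* is block-diagonal, which is why |Z*| serves directly as a segmentation affinity.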
Object segmentation by long term analysis of point trajectories
 In Proc. European Conference on Computer Vision
, 2010
Cited by 145 (9 self)
Abstract. Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long-term point trajectories based on dense optical flow. Defining pairwise distances between these trajectories allows us to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting.
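The trajectory-clustering recipe above can be sketched end to end in a few lines. This toy version uses a simplified motion distance (maximum velocity difference over time) and a two-way spectral cut; the paper's actual distance and clustering are more elaborate, so treat every name and parameter here as an assumption:

```python
import numpy as np

def motion_distance(T):
    """Pairwise trajectory distance: maximum over time of the velocity
    difference between two trajectories.  T has shape (n_traj, frames, 2)."""
    V = np.diff(T, axis=1)                                # per-frame velocities
    D = np.linalg.norm(V[:, None] - V[None, :], axis=-1)  # per-frame differences
    return D.max(axis=-1)                                 # worst-case difference

def two_way_spectral(D, sigma=1.0):
    """Split trajectories into two groups by the sign of the Fiedler vector
    of the graph Laplacian built from a Gaussian affinity."""
    W = np.exp(-(D / sigma) ** 2)
    L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)  # second-smallest eigenvector splits
```

Trajectories with identical motion get distance zero and end up in the same cluster regardless of their spatial offset, which matches the intuition that common motion, not position, defines an object.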
Robust Recovery of Subspace Structures by Low-Rank Representation
Cited by 128 (24 self)
In this work we address the subspace recovery problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and to correct the possible errors as well. To this end, we propose a novel method termed Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that LRR well solves the subspace recovery problem: when the data is clean, we prove that LRR exactly captures the true subspace structures; for data contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outliers as well; for data corrupted by arbitrary errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these results further imply that LRR can perform robust subspace segmentation and error correction in an efficient way.
A geometric analysis of subspace clustering with outliers
 Annals of Statistics
, 2012
Cited by 66 (3 self)
This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [11], which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the effectiveness of these methods.
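The algorithm analyzed here, SSC, writes each data point as a sparse combination of the other points and clusters the resulting affinity graph. Below is a hedged sketch of the self-expression step using a plain ISTA (proximal gradient) lasso solver; `lam` and `n_iter` are illustrative choices, not values from the paper:

```python
import numpy as np

def ssc_coefficients(X, lam=0.1, n_iter=500):
    """Sparse self-expression: for each column x_j of X, approximately solve
    min_c 0.5 ||x_j - X c||^2 + lam ||c||_1  subject to c_j = 0,
    via ISTA (gradient step + soft-thresholding)."""
    n = X.shape[1]
    C = np.zeros((n, n))
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    for j in range(n):
        c = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ c - X[:, j])
            c = c - step * grad
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft-threshold
            c[j] = 0.0                        # forbid the trivial solution c = e_j
        C[:, j] = c
    W = np.abs(C) + np.abs(C).T               # symmetric affinity for clustering
    return C, W
```

The geometric analysis in the paper is precisely about when the nonzero entries of `C` connect only points from the same subspace, so that spectral clustering of `W` recovers the segmentation.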
A Closed Form Solution to Robust Subspace Estimation and Clustering
Cited by 43 (4 self)
We consider the problem of fitting one or more subspaces to a collection of data points drawn from the subspaces and corrupted by noise/outliers. We pose this problem as a rank minimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean, self-expressive, low-rank dictionary plus a matrix of noise/outliers. Our key contribution is to show that, for noisy data, this non-convex problem can be solved very efficiently and in closed form from the SVD of the noisy data matrix. Remarkably, this is true whether there is one subspace or several. An important difference with respect to existing methods is that our framework results in a polynomial thresholding of the singular values with minimal shrinkage. Indeed, a particular case of our framework in the case of a single subspace leads to classical PCA, which requires no shrinkage. In the case of multiple subspaces, our framework provides an affinity matrix that can be used to cluster the data according to the subspaces. In the case of data corrupted by outliers, a closed-form solution appears elusive. We thus use an augmented Lagrangian optimization framework, which requires a combination of our proposed polynomial thresholding operator with the more traditional shrinkage-thresholding operator.
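For contrast with the polynomial thresholding described above (which the paper derives and we do not reproduce here), the two classical singular-value thresholding operators can be sketched as follows. Soft thresholding shrinks every retained singular value by tau, while hard thresholding keeps them untouched, which is closer in spirit to the "minimal shrinkage" behaviour the abstract emphasizes:

```python
import numpy as np

def sv_shrink(X, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm.  Retained singular values are reduced by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sv_hard(X, tau):
    """Singular-value hard thresholding: large singular values are kept
    exactly, small ones are zeroed (no shrinkage of the kept values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.where(s > tau, s, 0.0)) @ Vt
```

Both produce low-rank estimates; the difference is only in whether the surviving spectrum is biased downward.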
Factoring nonnegative matrices with linear programs
, 2012
Cited by 40 (0 self)
This paper describes a new approach for computing nonnegative matrix factorizations (NMFs) with linear programming. The key idea is a data-driven model for the factorization, in which the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that satisfies X ≈ CX and some linear constraints. The matrix C selects features, which are then used to compute a low-rank NMF of X. A theoretical analysis demonstrates that this approach has the same type of guarantees as the recent NMF algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed method (1) has better noise tolerance, (2) extends to more general noise models, and (3) leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation of the new algorithm can factor a multi-gigabyte matrix in a matter of minutes.
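The self-expression model X ≈ CX selects a few rows of X that linearly explain the rest. The paper does this with a linear program; as a hedged stand-in, the sketch below selects anchor rows greedily in the style of the successive projection algorithm and then fits the remaining rows on them (all names are ours, and this is not the paper's method):

```python
import numpy as np

def select_anchors(X, r):
    """Greedy anchor selection (successive-projection style): repeatedly pick
    the row with the largest residual norm, then project all rows onto the
    orthogonal complement of the chosen direction."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        anchors.append(i)
        v = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ v, v)      # remove the chosen direction
    return anchors

def reconstruct(X, anchors):
    """Least-squares fit of every row of X on the anchor rows, i.e. the
    approximation C X with C supported on the anchor columns."""
    A = X[anchors]
    C, *_ = np.linalg.lstsq(A.T, X.T, rcond=None)
    return C.T @ A
```

On exactly separable data (every row a combination of a few "pure" rows), the greedy pass recovers those pure rows and the reconstruction is exact.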
Hybrid linear modeling via local best-fit flats
 in IEEE Conference on Computer Vision and Pattern Recognition
Cited by 37 (4 self)
In this paper we present a simple and fast geometric method for modeling data by a union of affine sets. The method begins by forming a collection of local best-fit affine subspaces. The correct sizes of the local neighborhoods are determined automatically by the Jones' β2 numbers; we prove under certain geometric conditions that good local neighborhoods exist and are found by our method. The collection is further processed by a greedy selection procedure or a spectral method to generate the final model. We discuss applications to tracking-based motion segmentation and clustering of faces under different illumination conditions. We give extensive experimental evidence demonstrating the state-of-the-art accuracy and speed of the suggested algorithms on these problems, on synthetic hybrid linear data, and on the MNIST handwritten digits data; and we demonstrate how to use our algorithms for fast determination of the number of affine subspaces.
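The "local best-fit flat" step can be sketched with plain local PCA: take the k nearest neighbours of a point, fit a d-dimensional affine subspace, and measure distances to it. The automatic neighborhood-size selection via Jones' β2 numbers is the paper's contribution and is not reproduced here; `k` and `d` are fixed by hand in this sketch:

```python
import numpy as np

def local_flat(X, i, k, d):
    """Fit a d-dimensional affine flat to the k nearest neighbours of X[i]
    via local PCA.  Returns the flat's center and an orthonormal basis."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[:k]
    P = X[nbrs]
    mu = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
    return mu, Vt[:d]                  # principal directions span the flat

def dist_to_flat(x, mu, basis):
    """Euclidean distance from x to the affine flat mu + span(basis)."""
    r = x - mu
    return np.linalg.norm(r - basis.T @ (basis @ r))
```

The collection of such flats (one per point, or per sampled point) is what the greedy or spectral selection step then prunes into the final hybrid linear model.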
Incremental Gradient on the Grassmannian for Online Foreground and Background Separation in Subsampled Video
 In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
, 2012
Cited by 36 (1 self)
It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.
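GRASTA builds on GROUSE-style incremental gradient steps on the Grassmannian, replacing the least-squares residual with a robust l1 loss. The sketch below implements only the simpler least-squares (GROUSE-style) update from partially observed vectors, so it illustrates the geometry but not GRASTA's robustness; the step size and all names are our own assumptions:

```python
import numpy as np

def grouse_step(U, idx, v_obs, step=0.1):
    """One incremental-gradient step on the Grassmannian (GROUSE-style).
    U: (n, d) orthonormal subspace basis; idx: observed row indices of the
    incoming vector; v_obs: the observed values.  GRASTA replaces the
    least-squares residual below with a robust l1 loss."""
    Uo = U[idx]                                        # basis rows at observed entries
    w, *_ = np.linalg.lstsq(Uo, v_obs, rcond=None)     # best weights from observations
    r = np.zeros(U.shape[0])
    r[idx] = v_obs - Uo @ w                            # residual on observed entries
    p = U @ w                                          # prediction inside the subspace
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:
        return U                                       # vector already explained
    t = step * sigma
    # geodesic step: rotate the basis toward the residual direction
    direction = (np.cos(t) - 1.0) * p / np.linalg.norm(p) \
                + np.sin(t) * r / np.linalg.norm(r)
    return U + direction[:, None] @ (w / np.linalg.norm(w))[None, :]
```

Fed a stream of (sub)sampled vectors from a fixed subspace, repeated steps rotate `U` onto that subspace while keeping it orthonormal, which is the tracking behaviour the abstract describes.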
A TUTORIAL ON SUBSPACE CLUSTERING
Cited by 30 (0 self)
The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances in how to acquire, compress, store, transmit and process massive amounts of complex high-dimensional data. Many of these advances have relied on the observation that, even though these data sets are high-dimensional, their intrinsic dimension is often much smaller than the dimension of the ambient space. In computer vision, for example, the number of pixels in an image can be rather large, yet most computer vision models use only a few parameters to describe the appearance, geometry and dynamics of a scene. This has motivated the development of a number of techniques for finding a low-dimensional representation