Results 11-20 of 136
C.: Spectral clustering of linear subspaces for motion segmentation
 In: IEEE International Conference on Computer Vision (ICCV)
"... This paper studies automatic segmentation of multiple motions from tracked feature points through spectral embedding and clustering of linear subspaces. We show that the dimension of the ambient space is crucial for separability, and that low dimensions chosen in prior work are not optimal. We sugge ..."
Abstract

Cited by 40 (0 self)
This paper studies automatic segmentation of multiple motions from tracked feature points through spectral embedding and clustering of linear subspaces. We show that the dimension of the ambient space is crucial for separability, and that the low dimensions chosen in prior work are not optimal. We suggest lower and upper bounds together with a data-driven procedure for choosing the optimal ambient dimension. Application of our approach to the Hopkins 155 video benchmark database uniformly outperforms a range of state-of-the-art methods both in terms of segmentation accuracy and computational speed.
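The spectral pipeline that this family of methods builds on can be sketched briefly. The following is only an illustrative toy, not the paper's method: it uses a hand-picked angle-based affinity, the unnormalized graph Laplacian, and a sign split of the Fiedler vector for two clusters; the paper's actual contribution (choosing the ambient/embedding dimension) is not modeled here, and all names are made up.

```python
import numpy as np

def spectral_two_subspaces(X):
    """Split points drawn from two 1-D linear subspaces into two groups
    via the sign of the Fiedler vector of a graph Laplacian.
    X: (n, D) array of points with nonzero rows."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalize to unit sphere
    A = np.abs(U @ U.T) ** 8           # angle-based affinity (sign-invariant)
    L = np.diag(A.sum(axis=1)) - A     # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1] > 0              # sign of Fiedler vector = cluster label

# points near two lines through the origin in R^2, with varying magnitudes
angles = np.array([0.0, 0.05, -0.04, 0.08, 1.0, 1.05, 0.95, 1.02])
mags = np.array([2.0, 1.0, 3.0, 1.5, 2.0, 1.0, 3.0, 1.5])
X = np.stack([np.cos(angles), np.sin(angles)], axis=1) * mags[:, None]
labels = spectral_two_subspaces(X)
```

For more than two clusters one would instead run k-means on the first K eigenvectors, as in the standard Ng et al. formulation.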
Hybrid linear modeling via local best-fit flats
 In: IEEE Conference on Computer Vision and Pattern Recognition
"... In this paper we present a simple and fast geometric method for modeling data by a union of affine sets. The method begins by forming a collection of local best fit affine subspaces. The correct sizes of the local neighborhoods are determined automatically by the Jones ’ β2 numbers; we prove under c ..."
Abstract

Cited by 37 (4 self)
In this paper we present a simple and fast geometric method for modeling data by a union of affine sets. The method begins by forming a collection of local best-fit affine subspaces. The correct sizes of the local neighborhoods are determined automatically by the Jones β2 numbers; we prove under certain geometric conditions that good local neighborhoods exist and are found by our method. The collection is further processed by a greedy selection procedure or a spectral method to generate the final model. We discuss applications to tracking-based motion segmentation and clustering of faces under different illumination conditions. We give extensive experimental evidence demonstrating the state-of-the-art accuracy and speed of the suggested algorithms on these problems, as well as on synthetic hybrid linear data and the MNIST handwritten digits data, and we demonstrate how to use our algorithms for fast determination of the number of affine subspaces.
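The local fitting step can be illustrated with a plain PCA fit of a fixed-size neighborhood. This is a simplified stand-in, not the paper's procedure: the β2-driven automatic choice of neighborhood size is replaced by a hard-coded k, and the function name is hypothetical.

```python
import numpy as np

def local_best_fit_flat(X, i, k, d):
    """Fit a d-dimensional affine flat to the k nearest neighbors of X[i]
    by PCA; return (center, basis, rms_residual)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[:k]]        # k nearest neighbors (incl. X[i])
    c = nbrs.mean(axis=0)                  # flat passes through the centroid
    _, s, Vt = np.linalg.svd(nbrs - c, full_matrices=False)
    basis = Vt[:d]                         # top-d principal directions
    rms = np.sqrt((s[d:] ** 2).sum() / len(nbrs))  # residual off the flat
    return c, basis, rms

# noisy samples along the x-axis in R^2; the local best-fit line should
# recover the x direction with a small residual
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 30)
X = np.stack([t, 0.01 * rng.standard_normal(30)], axis=1)
c, basis, rms = local_best_fit_flat(X, i=15, k=10, d=1)
```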
Foundations of a Multiway Spectral Clustering Framework for Hybrid Linear Modeling
, 2009
"... Abstract The problem of Hybrid Linear Modeling (HLM) is to model and segment data using a mixture of affine subspaces. Different strategies have been proposed to solve this problem, however, rigorous analysis justifying their performance is missing. This paper suggests the Theoretical Spectral Curva ..."
Abstract

Cited by 37 (10 self)
The problem of Hybrid Linear Modeling (HLM) is to model and segment data using a mixture of affine subspaces. Different strategies have been proposed to solve this problem; however, rigorous analysis justifying their performance is missing. This paper suggests the Theoretical Spectral Curvature Clustering (TSCC) algorithm for solving the HLM problem and provides careful analysis to justify it. The TSCC algorithm is practically a combination of Govindu's multi-way spectral clustering framework (CVPR 2005) and Ng et al.'s spectral clustering algorithm (NIPS 2001). The main result of this paper states that if the given data is sampled from a mixture of distributions concentrated around affine subspaces, then with high sampling probability the TSCC algorithm segments the different underlying clusters well. The goodness of clustering depends on the within-cluster errors, the between-cluster interactions, and a tuning parameter applied by TSCC. The proof also provides new insights for the analysis of Ng et al. (NIPS 2001).

Keywords: Hybrid linear modeling · d-flats clustering · Multi-way clustering · Spectral clustering · Polar curvature · Perturbation analysis · Concentration inequalities

Communicated by Albert Cohen. This work was supported by NSF grant #0612608.
Median K-flats for hybrid linear modeling with many outliers
 In: 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)
"... We describe the Median Kflats (MKF) algorithm, a simple online method for hybrid linear modeling, i.e., for approximating data by a mixture of flats. This algorithm simultaneously partitions the data into clusters while finding their corresponding best approximating ℓ1 dflats, so that the cumulati ..."
Abstract

Cited by 31 (10 self)
We describe the Median K-flats (MKF) algorithm, a simple online method for hybrid linear modeling, i.e., for approximating data by a mixture of flats. This algorithm simultaneously partitions the data into clusters while finding their corresponding best approximating ℓ1 d-flats, so that the cumulative ℓ1 error is minimized. The current implementation restricts d-flats to be d-dimensional linear subspaces. It requires a negligible amount of storage, and its complexity, when modeling data consisting of N points in R^D with K d-dimensional linear subspaces, is of order O(n_s · K · d · D + n_s · d² · D), where n_s is the number of iterations required for convergence (empirically on the order of 10^4). Since it is an online algorithm, data can be supplied to it incrementally and it can incrementally produce the corresponding output. The performance of the algorithm is carefully evaluated using synthetic and real data.
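The alternating structure behind K-flats methods can be sketched as follows. This is a batch ℓ2 variant for illustration only: MKF itself is online and fits ℓ1 medians, whereas this toy refits each line by SVD; the function name and data are made up.

```python
import numpy as np

def k_lines(X, V0, iters=10):
    """Alternating K-flats restricted to 1-D linear subspaces (lines through
    the origin), with a least-squares refit instead of MKF's l1 medians.
    X: (n, D) data; V0: (K, D) initial unit direction vectors."""
    V = V0.copy()
    for _ in range(iters):
        # residual of point x to the line spanned by unit v: ||x - (v.x) v||
        proj = X @ V.T                                   # (n, K) coefficients
        res = np.stack([np.linalg.norm(X - np.outer(proj[:, k], V[k]), axis=1)
                        for k in range(len(V))], axis=1)
        labels = res.argmin(axis=1)                      # assign to nearest line
        for k in range(len(V)):                          # refit each line by SVD
            pts = X[labels == k]
            if len(pts):
                V[k] = np.linalg.svd(pts, full_matrices=False)[2][0]
    return labels, V

# points near two orthogonal lines through the origin
X = np.array([[2.0, 0.0], [-1.0, 0.02], [3.0, -0.01], [1.5, 0.0],
              [0.0, 2.0], [0.03, -1.0], [0.0, 3.0], [-0.01, 1.5]])
V0 = np.array([[1.0, 0.3], [0.3, 1.0]])
V0 /= np.linalg.norm(V0, axis=1, keepdims=True)
labels, V = k_lines(X, V0)
```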
A TUTORIAL ON SUBSPACE CLUSTERING
"... The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances on how to acquire, compress, st ..."
Abstract

Cited by 30 (0 self)
The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances on how to acquire, compress, store, transmit and process massive amounts of complex high-dimensional data. Many of these advances have relied on the observation that, even though these data sets are high-dimensional, their intrinsic dimension is often much smaller than the dimension of the ambient space. In computer vision, for example, the number of pixels in an image can be rather large, yet most computer vision models use only a few parameters to describe the appearance, geometry and dynamics of a scene. This has motivated the development of a number of techniques for finding a low-dimensional representation
Energy-based Geometric Multi-Model Fitting
, 2010
"... Geometric model fitting is a typical chicken&egg problem: data points should be clustered based on geometric proximity to models whose unknown parameters must be estimated at the same time. Most existing methods, including generalizations of RANSAC, greedily search for models with most inliers ..."
Abstract

Cited by 26 (4 self)
Geometric model fitting is a typical chicken-and-egg problem: data points should be clustered based on geometric proximity to models whose unknown parameters must be estimated at the same time. Most existing methods, including generalizations of RANSAC, greedily search for models with the most inliers (within a threshold) while ignoring the overall classification of points. We formulate geometric multi-model fitting as an optimal labeling problem with a global energy function balancing geometric errors and regularity of inlier clusters. Regularization based on spatial coherence (on some near-neighbor graph) and/or label costs is NP-hard. Standard combinatorial algorithms with guaranteed approximation bounds (e.g. α-expansion) can minimize such regularization energies over a finite set of labels, but they are not directly applicable to a continuum of labels, e.g. R^2 in line fitting. Our proposed approach (PEARL) combines model sampling from data points, as in RANSAC, with iterative re-estimation of inliers and model parameters based on a global regularization functional. This technique efficiently explores the continuum of labels in the context of energy minimization. In practice, PEARL converges to a good-quality local minimum of the energy, automatically selecting a small number of models that best explain the whole data set. Our tests demonstrate that our energy-based approach significantly improves the current state of the art in geometric model fitting, currently dominated by various greedy generalizations of RANSAC.
Higher order motion models and spectral clustering
 In CVPR
, 2012
"... Motion segmentation based on point trajectories can integrate information of a whole video shot to detect and separate moving objects. Commonly, similarities are defined between pairs of trajectories. However, pairwise similarities restrict the motion model to translations. Nontranslational motion, ..."
Abstract

Cited by 25 (2 self)
Motion segmentation based on point trajectories can integrate information from a whole video shot to detect and separate moving objects. Commonly, similarities are defined between pairs of trajectories. However, pairwise similarities restrict the motion model to translations. Non-translational motion, such as rotation or scaling, is penalized in such an approach. We propose to define similarities on higher-order tuples rather than pairs, which leads to hypergraphs. To apply spectral clustering, the hypergraph is transferred to an ordinary graph, an operation that can be interpreted as a projection. We propose a specific nonlinear projection via a regularized maximum operator, and show that it yields significant improvements both compared to pairwise similarities and to alternative hypergraph projections.
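The hypergraph-to-graph projection step can be sketched with a plain (unregularized) maximum operator: the weight of a pairwise edge is the best affinity over all higher-order tuples containing both endpoints. The paper's regularized maximum is more refined; the function name and numbers here are invented for illustration.

```python
import numpy as np

def project_hypergraph(H, n):
    """Project third-order affinities onto a pairwise graph: the weight of
    edge (i, j) is the maximum affinity over all triples containing i and j.
    H: dict mapping index triples (a, b, c) to affinities in [0, 1]."""
    W = np.zeros((n, n))
    for (a, b, c), w in H.items():
        for i, j in ((a, b), (a, c), (b, c)):
            W[i, j] = W[j, i] = max(W[i, j], w)
    return W

# three triple-wise affinities over 5 trajectories
H = {(0, 1, 2): 0.9, (0, 1, 3): 0.4, (2, 3, 4): 0.7}
W = project_hypergraph(H, 5)
```

The resulting symmetric matrix W can then be fed to any standard pairwise spectral clustering routine.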
Non-Rigid Structure from Locally-Rigid Motion
"... We introduce locallyrigid motion, a general framework for solving the Mpoint, Nview structurefrommotion problem for unknown bodies deforming under orthography. The key idea is to first solve many local 3point, Nview rigid problems independently, providing a “soup ” of specific, plausibly rigi ..."
Abstract

Cited by 23 (0 self)
We introduce locally-rigid motion, a general framework for solving the M-point, N-view structure-from-motion problem for unknown bodies deforming under orthography. The key idea is to first solve many local 3-point, N-view rigid problems independently, providing a “soup” of specific, plausibly rigid, 3D triangles. The main advantage here is that the extraction of 3D triangles requires only very weak assumptions: (1) deformations can be locally approximated by near-rigid motion of three points (i.e., stretching is not dominant) and (2) local motions involve some generic rotation in depth. Triangles from this soup are then grouped into bodies, and their depth flips and instantaneous relative depths are determined. Results on several sequences, both our own and from related work, suggest these conditions apply in diverse settings, including very challenging ones (e.g., multiple deforming bodies). Our starting point is a novel linear solution to 3-point structure from motion, a problem for which no general algorithms currently exist.
Robust Subspace Clustering
, 2013
"... Subspace clustering refers to the task of finding a multisubspace representation that best fits a collection of points taken from a highdimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demo ..."
Abstract

Cited by 22 (1 self)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
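The self-expression idea behind SSC can be illustrated with a deliberately crude 1-sparse stand-in: instead of solving SSC's ℓ1 program, each point is "represented" by the single other point most correlated with it, the resulting graph is symmetrized (as SSC does with |C| + |C|^T), and clusters are read off as connected components. This toy only works on well-separated data and is not the algorithm analyzed in the paper.

```python
import numpy as np

def one_sparse_ssc(X):
    """Toy stand-in for sparse subspace clustering: a 1-sparse
    'self-expression' (best single representative per point), then
    connected components of the symmetrized representation graph."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = np.abs(U @ U.T)
    np.fill_diagonal(C, -1.0)           # a point may not represent itself
    nn = C.argmax(axis=1)               # best single representative
    n = len(X)
    adj = [set() for _ in range(n)]
    for i, j in enumerate(nn):          # symmetrize the graph
        adj[i].add(int(j))
        adj[int(j)].add(i)
    labels, cur = [-1] * n, 0
    for s in range(n):                  # connected components by DFS
        if labels[s] == -1:
            stack = [s]
            while stack:
                u = stack.pop()
                if labels[u] == -1:
                    labels[u] = cur
                    stack.extend(adj[u])
            cur += 1
    return labels

# unit-norm points near two lines through the origin
angles = np.array([0.0, 0.03, 0.06, 0.09, 1.0, 1.03, 1.06, 1.09])
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
labels = one_sparse_ssc(X)
```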