Results 1 - 9 of 9
Learning Structured Low-Rank Representation via Matrix Factorization
"... Abstract A vast body of recent works in the literature have shown that exploring structures beyond data lowrankness can boost the performance of subspace clustering methods such as Low-Rank Representation (LRR). It has also been well recognized that the matrix factorization framework might offer mo ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Abstract: A vast body of recent work in the literature has shown that exploring structures beyond data low-rankness can boost the performance of subspace clustering methods such as Low-Rank Representation (LRR). It has also been well recognized that the matrix factorization framework may offer more flexibility in pursuing the underlying structures of the data. In this paper, we propose to learn structured LRR by factorizing the nuclear-norm-regularized matrix, which leads to our proposed non-convex formulation, NLRR. Interestingly, this formulation of NLRR provides a general framework that unifies a variety of popular algorithms, including LRR, dictionary learning, robust principal component analysis, and sparse subspace clustering. Several variants of NLRR are also proposed, for example, to promote sparsity while preserving low-rankness. We design a practical algorithm for NLRR and its variants, and establish theoretical guarantees for the stability of the solution and the convergence of the algorithm. Perhaps surprisingly, the computational and memory cost of NLRR can be reduced by roughly one order of magnitude compared to the cost of LRR. Experiments on extensive simulations and real datasets confirm the robustness and efficiency of NLRR and its variants.
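The factorization behind NLRR-style formulations rests on a well-known variational identity for the nuclear norm, which can be checked numerically. Below is a minimal Python sketch (our own illustration, not code from the paper; the random test matrix and variable names are ours):

```python
import numpy as np

# Illustrative check of the nuclear-norm factorization identity assumed by
# factored formulations such as NLRR (this sketch is ours, not the paper's):
#   ||Z||_* = min_{U,V : UV = Z} (||U||_F^2 + ||V||_F^2) / 2,
# attained at U = A sqrt(S), V = sqrt(S) B^T for the SVD Z = A S B^T.
rng = np.random.default_rng(0)
Z = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 60))  # rank-8 test matrix

A, s, Bt = np.linalg.svd(Z, full_matrices=False)
U = A * np.sqrt(s)               # scale each column of A by sqrt(sigma_i)
V = np.sqrt(s)[:, None] * Bt     # scale each row of B^T by sqrt(sigma_i)

nuclear = s.sum()
factored = 0.5 * (np.linalg.norm(U, "fro")**2 + np.linalg.norm(V, "fro")**2)
print(np.allclose(U @ V, Z), np.isclose(nuclear, factored))  # True True
```

Replacing the nuclear norm with this factored form is what makes the NLRR objective non-convex but far cheaper, since only the thin factors need to be stored and updated.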
A Supervised Low-rank Method for Learning Invariant Subspaces
"... Sparse representation and low-rank matrix decomposi-tion approaches have been successfully applied to several computer vision problems. They build a generative repre-sentation of the data, which often requires complex training as well as testing to be robust against data variations in-duced by nuisa ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Abstract: Sparse representation and low-rank matrix decomposition approaches have been successfully applied to several computer vision problems. They build a generative representation of the data, which often requires complex training as well as testing to be robust against data variations induced by nuisance factors. We introduce the invariant components, a discriminative representation invariant to nuisance factors, because it spans subspaces orthogonal to the space where nuisance factors are defined. This allows developing a framework based on geometry that ensures a uniform inter-class separation, and a very efficient and robust classification based on simple nearest neighbor. In addition, we show how the approach is equivalent to a local metric learning, where the local metrics (one for each class) are learned jointly, rather than independently, thus avoiding the risk of overfitting without the need for additional regularization. We evaluated the approach for face recognition with highly corrupted training and testing data, obtaining very promising results.
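The geometric core of the method — removing variation along a nuisance subspace and then classifying by nearest neighbor — can be sketched in a few lines. This is a simplified illustration under our own assumptions (a known nuisance basis N; function names are ours), not the authors' training procedure:

```python
import numpy as np

def project_out(X, N):
    """Project the columns of X onto the orthogonal complement of span(N).

    X: (D, n) data, one sample per column; N: (D, k) nuisance directions.
    (Hypothetical helper for illustration; assumes N is known in advance.)
    """
    Q, _ = np.linalg.qr(N)          # orthonormal basis of the nuisance span
    return X - Q @ (Q.T @ X)        # apply (I - Q Q^T)

def nn_classify(train, labels, test):
    """1-nearest-neighbor labels for each column of test."""
    d = np.linalg.norm(train[:, :, None] - test[:, None, :], axis=0)
    return labels[np.argmin(d, axis=0)]
```

Classification would then run on project_out(X, N) for both training and test data, so nuisance variation no longer affects the nearest-neighbor distances.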
Robust motion segmentation with unknown correspondences
In ECCV, 2014
"... Abstract. Motion segmentation can be addressed as a subspace clustering problem, assuming that the trajectories of interest points are known. However, establishing point correspondences is in itself a challenging task. Most existing approaches tackle the correspondence estimation and motion segment ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Abstract: Motion segmentation can be addressed as a subspace clustering problem, assuming that the trajectories of interest points are known. However, establishing point correspondences is in itself a challenging task. Most existing approaches tackle the correspondence estimation and motion segmentation problems separately. In this paper, we introduce an approach to performing motion segmentation without any prior knowledge of point correspondences. We formulate this problem in terms of Partial Permutation Matrices (PPMs) and aim to match feature descriptors while simultaneously encouraging point trajectories to satisfy subspace constraints. This lets us handle outliers in both point locations and feature appearance. The resulting optimization problem can be solved via the Alternating Direction Method of Multipliers (ADMM), where each subproblem has an efficient solution. Our experimental evaluation on synthetic and real sequences clearly evidences the benefits of our formulation over the traditional sequential approach that first estimates correspondences and then performs motion segmentation.
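One building block of the formulation is descriptor matching, which the Partial Permutation Matrix relaxes and couples with the subspace constraints inside ADMM. The hard-assignment version of that subproblem can be sketched as follows (an illustration under our own assumptions, not the paper's solver; SciPy's Hungarian solver stands in for the PPM update):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_descriptors(D1, D2):
    """Optimal one-to-one matching between descriptor sets.

    D1: (n, d) and D2: (m, d), one descriptor per row. Returns paired
    row/column indices minimizing total Euclidean cost (a hard assignment;
    the paper's PPM formulation additionally allows unmatched outliers).
    """
    cost = np.linalg.norm(D1[:, None, :] - D2[None, :, :], axis=2)
    return linear_sum_assignment(cost)     # (rows, cols) index arrays
```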
LogDet Rank Minimization with Application to Subspace Clustering
2015
"... Low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated i ..."
Abstract
- Add to MetaCart
Abstract: Low-rank matrices are desired in many machine learning and computer vision problems. Most recent studies use the nuclear norm as a convex surrogate of the rank operator. However, the nuclear norm simply adds all singular values together, so the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation of rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of the principal directions of the resulting low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
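The motivation — that the nuclear norm adds singular values linearly while a LogDet surrogate damps large ones — is easy to see numerically. A small sketch of our own, using sum_i log(1 + sigma_i^2) as one common LogDet form (the paper's exact surrogate may be parameterized differently):

```python
import numpy as np

# Compare rank surrogates on a rank-5 matrix (illustrative only).
rng = np.random.default_rng(1)
Z = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5

s = np.linalg.svd(Z, compute_uv=False)
print("rank            :", np.linalg.matrix_rank(Z))
print("nuclear norm    :", s.sum())               # grows with magnitude
print("LogDet surrogate:", np.log1p(s**2).sum())  # damps large sigmas
```

Because the log damps large singular values, the surrogate is far less sensitive to the magnitude of the dominant directions than the nuclear norm, at the price of nonconvexity.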
Abstract Algebraic-Geometric Subspace Clustering
"... Subspace clustering is the problem of clustering data drawn from a union of linear subspaces. Prior algebraic-geometric approaches to this problem required the subspaces to be of equal dimension, or the number of subspaces to be known. While an algorithm addressing the general case of an unknown nu ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: Subspace clustering is the problem of clustering data drawn from a union of linear subspaces. Prior algebraic-geometric approaches to this problem required the subspaces to be of equal dimension, or the number of subspaces to be known. While an algorithm addressing the general case of an unknown number of subspaces of possibly different dimensions had been proposed, a proof of its correctness had not been given. In this paper, we consider an abstract version of the subspace clustering problem, where one is given the algebraic variety of the union of subspaces rather than the data points. Our main contribution is to propose a provably correct algorithm for decomposing the algebraic variety into the constituent subspaces in the general case of an unknown number of subspaces of possibly different dimensions. Our algorithm uses the gradient of a vanishing polynomial at a point in the variety to find a hyperplane containing the subspace passing through that point. By intersecting the variety with this hyperplane and recursively applying the procedure, our algorithm eventually identifies the subspace containing that point. By repeating this procedure for other points, our algorithm eventually identifies all the subspaces and their dimensions.
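The key step — taking the gradient of a vanishing polynomial at a point to obtain the normal of a hyperplane containing that point's subspace — can be seen in a toy case: two lines through the origin in R^2. A sketch under our own toy setup (the normals a1, a2 and the point x are ours, not from the paper):

```python
import numpy as np

# Toy illustration of the gradient step. For two lines through the origin
# in R^2 with normals a1 and a2, p(x) = (a1.x)(a2.x) vanishes on their
# union, and grad p(x) = (a2.x) a1 + (a1.x) a2 by the product rule; at a
# point on line 1 the second term vanishes, so the gradient is parallel
# to a1, the normal of a hyperplane (here a line) containing that subspace.
a1 = np.array([1.0, -1.0])   # line 1: x = y     (a1 . x = 0 on line 1)
a2 = np.array([1.0,  2.0])   # line 2: x = -2y   (a2 . x = 0 on line 2)
x = np.array([3.0, 3.0])     # a point on line 1

grad = (a2 @ x) * a1 + (a1 @ x) * a2        # product rule
print(grad)                                  # [ 9. -9.], parallel to a1
print(grad[0] * a1[1] - grad[1] * a1[0])     # 2-D cross product: 0.0
```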
CLUSTERING WITH SHARED NEAREST NEIGHBOR-UNSCENTED TRANSFORM BASED ESTIMATION
"... ABSTRACT Subspace clustering developed from the group of cluster objects in all subspaces of a dataset. When clustering high dimensional objects, the accuracy and efficiency of traditional clustering algorithms are very poor, because data objects may belong to diverse clusters in different subspace ..."
Abstract
- Add to MetaCart
Abstract: Subspace clustering groups objects across the subspaces of a dataset. When clustering high-dimensional objects, traditional clustering algorithms suffer in both accuracy and efficiency, because data objects may belong to different clusters in different subspaces formed by different combinations of dimensions. To address this, we implement a new technique, the Opportunistic Subspace and Estimated Clustering (OSEC) model, on high-dimensional data to improve retrieval accuracy. A further obstacle to clustering quality is hubness, a property of vector-space data characterized by the tendency of certain points, the hubs, to lie at a small distance from many other points in high-dimensional spaces; it is related to the phenomenon of distance concentration. Hubness adversely affects many machine learning tasks, including classification, nearest-neighbor search, outlier detection, and clustering, and it is a relatively unexplored problem that prevents the number of clusters from being determined automatically. Subspace clustering provides efficient cluster validation, but the hubness problem has not been treated effectively. To overcome hubness in subspace clustering of high-dimensional data, we employ nearest-neighbor machine learning methods: the Shared Nearest Neighbor Clustering based on Unscented Transform (SNNC-UT) estimation method is developed to overcome the hubness problem while determining the clusters in the data. The core objective of SNNC is to find clusters such that points within a cluster are more similar to each other than to points in other clusters. SNNC-UT estimates the relative (probability) density in a neighborhood, yielding a more robust definition of density. SNNC-UT handles overlapping clusters via the unscented transform, computing the statistical distance of a random variable that undergoes a nonlinear transformation. The performance of SNNC-UT and of k-nearest-neighbor hubness clustering is evaluated experimentally in terms of clustering quality, distance measurement ratio, clustering time, and energy consumption.
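The shared-nearest-neighbor similarity at the core of SNNC-style methods replaces raw distances, which concentrate in high dimensions, with the overlap of k-NN lists. A minimal sketch of that component (ours; the unscented-transform density estimation of SNNC-UT is not reproduced here):

```python
import numpy as np

def snn_similarity(X, k=5):
    """Shared-nearest-neighbor counts for the rows of X (n, d).

    Returns an (n, n) matrix whose (i, j) entry counts how many of the
    k nearest neighbors points i and j share; the diagonal equals k.
    (Illustrative sketch; SNNC-UT additionally estimates densities via
    the unscented transform, which is omitted here.)
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    knn = np.argsort(d, axis=1)[:, :k]          # k nearest per point
    member = np.zeros(d.shape, dtype=int)
    np.put_along_axis(member, knn, 1, axis=1)   # k-NN indicator matrix
    return member @ member.T                    # shared-neighbor counts
```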
NULL SPACE CLUSTERING WITH APPLICATIONS TO MOTION SEGMENTATION AND FACE CLUSTERING
"... The problems of motion segmentation and face clustering can be addressed in a framework of subspace clustering methods. In this paper, we tackle the more general problem of cluster-ing data points lying in a union of low-dimensional linear(or affine) subspaces, which can be naturally applied in moti ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: The problems of motion segmentation and face clustering can be addressed in the framework of subspace clustering methods. In this paper, we tackle the more general problem of clustering data points lying in a union of low-dimensional linear (or affine) subspaces, which applies naturally to motion segmentation and face clustering. For data points drawn from linear (or affine) subspaces, we propose a novel algorithm called Null Space Clustering (NSC), which utilizes the null space of the data matrix to construct the affinity matrix. To better deal with noise and outliers, the problem is converted to an equivalent one with Frobenius norm minimization, which can be solved efficiently. We demonstrate that the proposed NSC leads to improved performance in terms of clustering accuracy and efficiency when compared to state-of-the-art algorithms on two well-known datasets, i.e., Hopkins 155 and Extended Yale B.
Index Terms: null space, subspace clustering, affinity matrix, normalized cuts, motion segmentation, face clustering
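One plausible reading of the construction (hedged: the paper's exact affinity may differ) is that a basis of the right null space of the data matrix encodes which points combine to zero, so its projector yields an affinity for spectral clustering:

```python
import numpy as np

def null_space_affinity(X, tol=1e-10):
    """Affinity from the right null space of X (D, N), columns = points.

    A plausible sketch of the idea, not necessarily the paper's exact
    construction: build a basis B of {c : X c = 0}, form the projector
    B B^T onto the null space, and use its magnitudes as affinities.
    """
    _, s, Vt = np.linalg.svd(X, full_matrices=True)
    rank = int((s > tol * s.max()).sum())
    B = Vt[rank:].T      # (N, N - rank) null-space basis, X @ B ~ 0
    P = B @ B.T          # orthogonal projector onto the null space
    return np.abs(P)     # symmetric nonnegative affinity matrix
```

The resulting affinity would then be fed to a spectral method such as normalized cuts, consistent with the index terms above.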