Results 1–10 of 15
Incremental spectral clustering and its application to topological mapping
In Proc. IEEE Int. Conf. on Robotics and Automation, 2007
"... Abstract — This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental – the spectral clustering algorithm is applied to the affinity matrix after each row/column is added – which makes i ..."
Abstract

Cited by 18 (3 self)
Abstract — This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental – the spectral clustering algorithm is applied to the affinity matrix after each row/column is added – which makes it possible to inspect the clusters as new data points are added. The method is well suited to the problem of appearance-based, online topological mapping for mobile robots. In this problem domain, we show that we can reduce environment-dependent parameters of the clustering algorithm to just a single, intuitive parameter. Experimental results in large outdoor and indoor environments show that we can close loops correctly by computing only a fraction of the entries in the affinity matrix. The accompanying video clip shows how an example map is produced by the algorithm.
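A minimal sketch of the incremental pattern this abstract describes: grow the affinity matrix one row/column at a time, then re-run spectral clustering on the enlarged matrix. The Gaussian kernel and the 2-way Fiedler-vector partition below are illustrative choices, not the paper's implementation:

```python
import numpy as np

def add_point(A, new_affinities):
    """Incremental step: grow the affinity matrix by one row/column."""
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = A
    B[n, :n] = new_affinities
    B[:n, n] = new_affinities
    B[n, n] = 1.0   # self-affinity of the new point
    return B

def spectral_bipartition(A):
    """2-way spectral clustering: sign of the Fiedler vector of the
    normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D_is @ A @ D_is
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)   # second-smallest eigenvector

# two 1-D groups; only the new point's affinity row must be computed
pts = np.array([0.0, 0.3, 0.6, 3.0, 3.3])
A = np.exp(-(pts[:4, None] - pts[None, :4]) ** 2)
A = add_point(A, np.exp(-(pts[4] - pts[:4]) ** 2))   # incremental update
labels = spectral_bipartition(A)
```

The point of the incremental formulation is visible in the last lines: only one new affinity row is computed before clusters can be re-inspected.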
Semi-Supervised Discriminant Analysis Using Robust Path-Based Similarity
In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008
"... Linear Discriminant Analysis (LDA), which works by maximizing the withinclass similarity and minimizing the betweenclass similarity simultaneously, is a popular dimensionality reduction technique in pattern recognition and machine learning. In realworld applications when labeled data are limited, ..."
Abstract

Cited by 13 (2 self)
Linear Discriminant Analysis (LDA), which works by maximizing the within-class similarity and minimizing the between-class similarity simultaneously, is a popular dimensionality reduction technique in pattern recognition and machine learning. In real-world applications where labeled data are limited, LDA does not work well. Under many situations, however, it is easy to obtain unlabeled data in large quantities. In this paper, we propose a novel dimensionality reduction method, called Semi-Supervised Discriminant Analysis (SSDA), which can utilize both labeled and unlabeled data to perform dimensionality reduction in the semi-supervised setting. Our method uses a robust path-based similarity measure to capture the manifold structure of the data and then uses the obtained similarity to maximize the separability between different classes. A kernel extension of the proposed method for nonlinear dimensionality reduction in the semi-supervised setting is also presented. Experiments on face recognition demonstrate the effectiveness of the proposed method.
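Path-based similarity measures of this family replace the direct distance between two points with the best path between them. A toy sketch of the minimax flavor of such a measure (the Floyd-Warshall formulation is a generic illustration, not the paper's exact robust definition):

```python
def minimax_path_distance(D):
    """Path-based distance: over all paths i -> j, take the one whose
    largest edge is smallest (a Floyd-Warshall variant)."""
    n = len(D)
    P = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                P[i][j] = min(P[i][j], max(P[i][k], P[k][j]))
    return P

# three collinear points 0 -- 1 -- 2: hopping through the middle point
# replaces the direct distance 2.0 with the largest hop, 1.0
pts = [0.0, 1.0, 2.0]
D = [[abs(a - b) for b in pts] for a in pts]
P = minimax_path_distance(D)
```

Points connected through a chain of close neighbours thus become similar even when their direct distance is large, which is how such measures capture manifold structure.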
The Bottleneck Geodesic: Computing Pixel Affinity
In CVPR, 2006
"... A meaningful affinity measure between pixels is essential for many computer vision and image processing applications. We propose an algorithm that works in the features’ histogram to compute image specific affinity measures. We use the observation that clusters in the feature space are typically smo ..."
Abstract

Cited by 5 (0 self)
A meaningful affinity measure between pixels is essential for many computer vision and image processing applications. We propose an algorithm that works in the features’ histogram to compute image-specific affinity measures. We use the observation that clusters in the feature space are typically smooth, and search for a path in the feature space between feature points that is both short and dense. Failing to find such a path indicates that the points are separated by a bottleneck in the histogram and therefore belong to different clusters. We call this new affinity measure the “Bottleneck Geodesic”. Empirically we demonstrate the superior results achieved by using our affinities as opposed to those using the widely used Euclidean metric, traditional geodesics and the simple bottleneck.
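A 1-D caricature of the bottleneck idea: two histogram bins belong to the same cluster when every bin between them is well populated, and a sparse "valley" on the path signals a bottleneck. The scalar setting and names are illustrative only; the paper works in a multi-dimensional feature histogram:

```python
def bottleneck_affinity(hist, i, j):
    """Toy 1-D bottleneck measure: the affinity between two histogram
    bins is the minimum bin count along the path between them."""
    lo, hi = min(i, j), max(i, j)
    return min(hist[lo:hi + 1])

hist = [9, 8, 7, 1, 6, 8]       # a sparse "valley" separates two modes
same = bottleneck_affinity(hist, 0, 2)   # stays inside the dense mode
diff = bottleneck_affinity(hist, 1, 5)   # path crosses the bottleneck
```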
Graph Laplacian kernels for object classification from a single example
In CVPR, 2006
"... Classification with only one labeled example per class is a challenging problem in machine learning and pattern recognition. While there have been some attempts to address this problem in the context of specific applications, very little work has been done so far on the problem under more general ob ..."
Abstract

Cited by 4 (1 self)
Classification with only one labeled example per class is a challenging problem in machine learning and pattern recognition. While there have been some attempts to address this problem in the context of specific applications, very little work has been done so far on the problem under more general object classification settings. In this paper, we propose a graph-based approach to the problem. Based on a robust path-based similarity measure proposed recently, we construct a weighted graph using the robust path-based similarities as edge weights. A kernel matrix, called the graph Laplacian kernel, is then defined based on the graph Laplacian. With the kernel matrix, in principle any kernel-based classifier can be used for classification. In particular, we demonstrate the use of a kernel nearest neighbor classifier on some synthetic data and real-world image sets, showing that our method can successfully solve some difficult classification tasks with only very few labeled examples.
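A hedged sketch of the overall pipeline the abstract outlines: build a kernel from the graph Laplacian and classify with kernel nearest neighbour. The regularized inverse K = (L + εI)⁻¹ is one common Laplacian-based kernel, used here for illustration; the paper's exact construction may differ:

```python
import numpy as np

def laplacian_kernel(A, reg=1e-3):
    """A kernel from the graph Laplacian: K = (L + reg*I)^{-1}
    (a common regularized form, not necessarily the paper's)."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(L + reg * np.eye(len(A)))

def kernel_nn(K, labeled, labels, query):
    """Kernel nearest neighbour: feature-space distance is
    K_qq + K_ii - 2 K_qi; assign the label of the closest labeled point."""
    d = [K[query, query] + K[i, i] - 2 * K[query, i] for i in labeled]
    return labels[int(np.argmin(d))]

# chain graph 0-1-2-3-4-5; one labeled example per "class" at each end
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
K = laplacian_kernel(A)
```

With `K = (L + εI)⁻¹`, the induced distance approximates effective resistance on the graph, so unlabeled nodes inherit the label of the end of the chain they sit closer to.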
Context-aware hypergraph construction for robust spectral clustering
IEEE Transactions on Knowledge and Data Engineering, 2014
"... Abstract—Spectral clustering is a powerful tool for unsupervised data analysis. In this paper, we propose a contextaware hypergraph similarity measure (CAHSM), which leads to robust spectral clustering in the case of noisy data. We construct three types of hypergraph—the pairwise hypergraph, the k ..."
Abstract

Cited by 3 (1 self)
Abstract—Spectral clustering is a powerful tool for unsupervised data analysis. In this paper, we propose a context-aware hypergraph similarity measure (CAHSM), which leads to robust spectral clustering in the case of noisy data. We construct three types of hypergraph: the pairwise hypergraph, the k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering hypergraph. The pairwise hypergraph captures the pairwise similarity of data points; the kNN hypergraph captures the neighborhood of each point; and the clustering hypergraph encodes high-order contexts within the dataset. By combining the affinity information from these three hypergraphs, the CAHSM algorithm is able to explore the intrinsic topological information of the dataset. Therefore, data clustering using CAHSM tends to be more robust. Considering the intra-cluster compactness and the inter-cluster separability of vertices, we further design a discriminative hypergraph partitioning criterion (DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is developed. Theoretical analysis and experimental evaluation demonstrate the effectiveness and robustness of the proposed algorithm. Index Terms—Hypergraph construction, spectral clustering, graph partitioning, similarity measure.
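The combination step can be caricatured as a weighted sum of complementary affinity matrices: raw pairwise affinities, their k-NN restriction, and a co-membership matrix from an over-clustering. This is a loose, hypothetical flat-graph reading of the three-hypergraph idea, not the paper's hypergraph construction:

```python
import numpy as np

def knn_affinity(A, k):
    """Keep an edge only if one endpoint is among the other's k
    largest-affinity neighbours (symmetrized)."""
    M = np.zeros_like(A)
    for i in range(len(A)):
        nbrs = np.argsort(A[i])[::-1][1:k + 1]   # skip self
        M[i, nbrs] = A[i, nbrs]
    return np.maximum(M, M.T)

def combined_affinity(A, C, k, w=(1 / 3, 1 / 3, 1 / 3)):
    """Blend pairwise affinity, its k-NN restriction, and a
    co-clustering indicator C (weights are illustrative)."""
    return w[0] * A + w[1] * knn_affinity(A, k) + w[2] * C

pts = np.array([0.0, 0.2, 5.0, 5.2])
A = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
C = np.array([[1, 1, 0, 0], [1, 1, 0, 0],
              [0, 0, 1, 1], [0, 0, 1, 1]], float)   # over-clustering co-membership
F = combined_affinity(A, C, k=1)
```

Each source of evidence reinforces within-cluster edges, so the blended matrix remains symmetric and separates the two groups more sharply than any single term.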
DENSITY GEODESICS FOR SIMILARITY CLUSTERING
"... We address the problem of similarity metric selection in pairwise affinity clustering. Traditional techniques employ standard algebraic contextindependent sampledistance measures, such as the Euclidean distance. More recent contextdependent metric modifications employ the bottleneck principle to ..."
Abstract

Cited by 2 (2 self)
We address the problem of similarity metric selection in pairwise affinity clustering. Traditional techniques employ standard algebraic context-independent sample-distance measures, such as the Euclidean distance. More recent context-dependent metric modifications employ the bottleneck principle to develop path-bottleneck or path-average distances and define similarities based on geodesics determined according to these metrics. This paper develops a principled context-adaptive similarity metric for pairs of feature vectors utilizing the probability density of all data. Specifically, based on the postulate that Euclidean distance is the canonical metric for data drawn from a unit-hypercube uniform density, a density-geodesic distance measure stemming from the Riemannian geometry of curved surfaces is derived. Comparisons with alternative metrics demonstrate superior properties such as robustness.
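One way to make the density-geodesic idea concrete is to discretize: place a graph on the samples, scale each edge's Euclidean length by the inverse local density, and take shortest paths. The `edge_cost` form below is a hypothetical discretization for illustration, not the paper's Riemannian derivation:

```python
import heapq

def edge_cost(length, dens_u, dens_v):
    """Hypothetical discretization: Euclidean length divided by the
    mean endpoint density, so dense regions 'contract' distances."""
    return length / ((dens_u + dens_v) / 2.0)

def density_geodesic(adj, src, dst):
    """Shortest path over density-weighted edge costs (Dijkstra)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# a direct edge crossing a sparse region (density 0.1) versus a longer
# detour through dense samples (density 1.0)
adj = {
    0: [(1, edge_cost(1.0, 1.0, 1.0)), (3, edge_cost(2.0, 0.1, 0.1))],
    1: [(2, edge_cost(1.0, 1.0, 1.0))],
    2: [(3, edge_cost(1.0, 1.0, 1.0))],
}
geo = density_geodesic(adj, 0, 3)
```

Under this weighting the geodesic prefers the path through the dense region even though it is Euclidean-longer, which is the qualitative behaviour the abstract motivates.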
A Spectral Clustering Approach Based on Newton’s Equations of Motion
"... In this article, we introduce Newtonian spectral clustering, a method that employs Newtonian preprocessing to promote cluster perspicuity and trajectory analysis to gain valuable affinity information. A simple twobody potential is used to model the interaction under the influence of which the point ..."
Abstract
 Add to MetaCart
(Show Context)
In this article, we introduce Newtonian spectral clustering, a method that employs Newtonian preprocessing to promote cluster perspicuity and trajectory analysis to gain valuable affinity information. A simple two-body potential is used to model the interaction under the influence of which the points move according to Newton’s second law. This procedure produces a transformed data set with reduced cluster overlap, which favors the spectral clustering approach. This is so because the affinity matrix can be enriched with information derived from the underlying interaction model. Special care is also given to estimating the Gaussian kernel parameter, since its role is important for the clustering procedure. The method is further extended to treat problems of high dimensionality. We have tested the proposed methodology on several benchmark data sets and compared its performance to that of rival techniques. The superiority of the new approach is readily deduced by inspecting the reported results. © 2013 Wiley Periodicals, Inc.
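A toy illustration of the preprocessing idea: let every point feel a short-range attraction toward its neighbours and move one explicit step under Newton's second law, which tightens clusters while leaving well-separated groups apart. The force law and step size are stand-ins, not the paper's two-body potential:

```python
import math

def newtonian_step(pts, eta=0.1):
    """One integration step in 1-D: each point moves along the net
    short-range attractive force from all other points (unit mass,
    zero initial velocity, so displacement ~ force)."""
    new = []
    for i, x in enumerate(pts):
        f = sum((y - x) * math.exp(-(y - x) ** 2)
                for j, y in enumerate(pts) if j != i)
        new.append(x + eta * f)
    return new

pts = [0.0, 0.4, 3.0, 3.4]
moved = newtonian_step(pts)
```

Because the Gaussian factor suppresses long-range forces, nearby points pull on each other much more strongly than distant ones, so within-cluster spread shrinks while the gap between clusters is essentially preserved.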
ANALYSIS OF TEXTURE EXTRACTION BASED ON HARALICK FEATURES FOR SEGMENTATION USING SPECTRAL CLUSTERING
"... ABSTARCT: The processing of whole image gives the inefficient and impractical results. Segmentation is the process which results in set of images that cover the entire image. The task of Clustering is an important aspect which is widely used in image segmentation and other areas. In this paper, we s ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract: Processing a whole image directly yields inefficient and impractical results. Segmentation produces a set of regions that together cover the entire image, and clustering is widely used in image segmentation and other areas. In this paper, we study a spectral clustering algorithm that clusters data using the eigenvectors of a similarity matrix. This work proposes a two-stage method: first, texture features are extracted from the original image to produce an initial segmentation; second, spectral clustering is used to cluster the resulting primitive regions.
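Haralick features are statistics of the gray-level co-occurrence matrix (GLCM). A minimal sketch of one such feature, energy (angular second moment), which is high for uniform textures; the tiny 4-level images are illustrative only:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset
    (the basis of Haralick texture features)."""
    g = [[0] * levels for _ in range(levels)]
    for y in range(len(img)):
        for x in range(len(img[0])):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                g[img[y][x]][img[ny][nx]] += 1
    return g

def haralick_energy(g):
    """Haralick 'energy' (angular second moment) of a normalized GLCM."""
    total = sum(sum(row) for row in g)
    return sum((v / total) ** 2 for row in g for v in row)

flat  = [[0, 0], [0, 0]]   # uniform texture -> maximal energy
noisy = [[0, 1], [2, 3]]   # varied texture  -> lower energy
e_flat, e_noisy = haralick_energy(glcm(flat)), haralick_energy(glcm(noisy))
```

In a segmentation pipeline, per-window feature vectors of such statistics (energy, contrast, correlation, and so on over several offsets) would feed the similarity matrix that spectral clustering operates on.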
A Scale-based Connected Coherence Tree Algorithm for Image Segmentation
"... Abstract — This paper presents a connected coherence tree algorithm (CCTA) for image segmentation with no prior knowledge. It aims to find regions of semantic coherence based on the proposed εneighbor coherence segmentation criterion. More specifically, with an adaptive spatial scale and an appropr ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract — This paper presents a connected coherence tree algorithm (CCTA) for image segmentation with no prior knowledge. It aims to find regions of semantic coherence based on the proposed ε-neighbor coherence segmentation criterion. More specifically, with an adaptive spatial scale and an appropriate intensity-difference scale, CCTA often obtains several sets of coherent neighboring pixels which maximize the probability of being a single image content (including various complex backgrounds). In practice, each set of coherent neighboring pixels corresponds to a coherence class (CC). The fact that each CC contains just a single equivalence class (EC) ensures the separability of an arbitrary image theoretically. In addition, the resultant CCs are represented by tree-based data structures, named connected coherence trees (CCTs). In this sense, CCTA is a graph-based image analysis algorithm, which offers three advantages: (1) its fundamental idea, the ε-neighbor coherence segmentation criterion, is easy to interpret and comprehend; (2) it is efficient due to a linear computational complexity in the number of image pixels; (3) both subjective comparisons and objective evaluation have shown that it is effective for the tasks of semantic object segmentation and figure-ground separation in a wide variety of images. Those images either contain tiny, long and thin objects or are severely degraded by noise, uneven lighting, occlusion, poor illumination and shadow. Index Terms — ε-neighbor coherence segmentation criterion, connected coherence tree, semantic segmentation, object segmentation, figure-ground separation.
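A simplified reading of the ε-neighbor coherence criterion: grow a region by repeatedly absorbing 4-connected neighbours whose intensity differs by at most ε from the pixel that reached them, yielding the coherent pixel sets via flood fill. Details such as the adaptive spatial scale and the tree representation are omitted:

```python
def coherence_classes(img, eps):
    """Label 4-connected pixels whose neighbour intensity difference is
    at most eps (a simplified eps-neighbor coherence criterion)."""
    h, w = len(img), len(img[0])
    label = [[-1] * w for _ in range(h)]
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if label[sy][sx] != -1:
                continue
            stack = [(sy, sx)]
            label[sy][sx] = cur
            while stack:                      # iterative flood fill
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and label[ny][nx] == -1
                            and abs(img[ny][nx] - img[y][x]) <= eps):
                        label[ny][nx] = cur
                        stack.append((ny, nx))
            cur += 1
    return label

img = [[10, 11, 90],
       [10, 12, 91]]
lab = coherence_classes(img, eps=3)
```

Each resulting label corresponds to one coherence class; the linear-time behaviour the abstract claims is visible here, since every pixel is pushed and popped at most once.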