Results 1–10 of 94
A survey of kernel and spectral methods for clustering
 Pattern Recognit.
, 2008
Abstract

Cited by 88 (5 self)
Clustering algorithms are a useful tool for exploring data structures and have been employed in many disciplines. The focus of this paper is the partitioning clustering problem, with special interest in two recent approaches: kernel and spectral methods. The aim of this paper is to present a survey of kernel and spectral clustering methods, two approaches able to produce nonlinear separating hypersurfaces between clusters. The presented kernel clustering methods are the kernel versions of many classical clustering algorithms, e.g., K-means, SOM and Neural Gas. Spectral clustering arises from concepts in spectral graph theory, and the clustering problem is configured as a graph cut problem in which an appropriate objective function has to be optimized. An explicit proof that these two paradigms have the same objective is reported, since these two seemingly different approaches have been shown to share the same mathematical foundation. In addition, fuzzy kernel clustering methods are presented as extensions of the kernel K-means clustering algorithm.
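As a concrete illustration of the kernel clustering paradigm this survey covers, here is a minimal kernel K-means sketch (not taken from the paper): every feature-space distance is computed from the Gram matrix alone, so a nonlinear kernel yields nonlinear separating surfaces in input space. The function name and the seed-point initialization are my own choices.

```python
import numpy as np

def kernel_kmeans(K, k, iters=100, seed=0):
    """Cluster using only the Gram matrix K (n x n), never explicit features."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    # initialize by assigning each point to the nearest of k random seed points
    seeds = rng.choice(n, size=k, replace=False)
    d0 = np.diag(K)[:, None] - 2.0 * K[:, seeds] + np.diag(K)[seeds][None, :]
    labels = d0.argmin(axis=1)
    for _ in range(iters):
        dist = np.empty((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x) - mean_c||^2 expanded purely in kernel evaluations
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].sum(axis=1) / idx.size
                          + K[np.ix_(idx, idx)].sum() / idx.size ** 2)
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

With a linear kernel K = X Xᵀ this reduces to ordinary K-means; swapping in an RBF Gram matrix changes the geometry without touching the algorithm.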
Learning low-rank kernel matrices
 In ICML
, 2006
Abstract

Cited by 49 (8 self)
Kernel learning plays an important role in many machine learning tasks. However, algorithms for learning a kernel matrix often scale poorly, with running times that are cubic in the number of data points. In this paper, we propose efficient algorithms for learning low-rank kernel matrices; our algorithms scale linearly in the number of data points and quadratically in the rank of the kernel. We introduce and employ Bregman matrix divergences for rank-deficient matrices; these divergences are natural for our problem since they preserve the rank as well as the positive semidefiniteness of the kernel matrix. Special cases of our framework yield faster algorithms for various existing kernel learning problems. Experimental results demonstrate the effectiveness of our algorithms in learning both low-rank and full-rank kernels.
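To make the rank-preservation point concrete, here is a small illustrative sketch (my own simplification, not the paper's algorithm): a Bregman-style projection that enforces a single kernel-distance constraint through a rank-one correction of the form K + β (Kz)(Kz)ᵀ, which leaves the column space of K, and hence its rank and positive semidefiniteness, unchanged.

```python
import numpy as np

def project_distance_constraint(K, i, j, target):
    """Illustrative single-constraint projection (not the paper's algorithm):
    set the kernel distance between points i and j to `target` via a
    rank-one update K + beta * (K z)(K z)^T, which keeps the column space,
    and hence the rank, of K unchanged."""
    z = np.zeros(K.shape[0])
    z[i], z[j] = 1.0, -1.0
    d = z @ K @ z                      # current distance K_ii + K_jj - 2 K_ij
    # solve z^T K' z = target for K' = K + beta (K z)(K z)^T:
    # d + beta * d^2 = target  =>  beta = (target - d) / d^2
    beta = (target - d) / (d * d)
    Kz = K @ z
    return K + beta * np.outer(Kz, Kz)
```

Storing a rank-r kernel as K = G Gᵀ with G of size n × r, and applying such updates to G, is what gives algorithms of this family their linear-in-n scaling.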
Active co-analysis of a set of shapes
 ACM Trans. on Graphics (SIGGRAPH Asia)
, 2012
Abstract

Cited by 33 (9 self)
Figure 1: Overview of our active co-analysis: (a) We start with an initial unsupervised co-segmentation of the input set. (b) During active learning, the system automatically suggests constraints that would refine the results, and the user interactively adds constraints as appropriate. In this example, the user adds a cannot-link constraint (in red) and a must-link constraint (in blue) between segments. (c) The constraints are propagated to the set and the co-segmentation is refined. The process from (b) to (c) is repeated until the desired result is obtained. Unsupervised co-analysis of a set of shapes is a difficult problem, since the geometry of the shapes alone cannot always fully describe the semantics of the shape parts. In this paper, we propose a semi-supervised learning method in which the user actively assists in the co-analysis by iteratively providing inputs that progressively constrain the system. We introduce a novel constrained clustering method based on a spring system, which embeds elements to better respect their inter-distances in feature space together with the user-given set of constraints. We also present an active learning method that suggests to the user where his input is likely to be most effective in refining the results. We show that each single pair of constraints affects many relations across the set. Thus, the method requires only a sparse set of constraints to quickly converge toward a consistent and error-free semantic labeling of the set.
Semi-Supervised Clustering via Matrix Factorization
, 2008
Abstract

Cited by 30 (4 self)
Recent years have witnessed a surge of interest in semi-supervised clustering methods, which aim to cluster a data set under the guidance of some supervisory information. Usually this supervisory information takes the form of pairwise constraints that indicate the similarity or dissimilarity between two points. In this paper, we propose a novel matrix-factorization-based approach for semi-supervised clustering. In addition, we extend our algorithm to co-cluster data sets of different types with constraints. Finally, experiments on UCI data sets and real-world Bulletin Board Systems (BBS) data sets show the superiority of our proposed method.
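The paper's constrained factorization is not reproduced here; as background for how a matrix factorization yields a clustering at all, this is a plain NMF sketch using the classic Lee-Seung multiplicative updates, where each point is assigned to the factor with the largest coefficient. Function name and parameters are my own.

```python
import numpy as np

def nmf_cluster(V, k, iters=500, seed=0, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates for ||V - W H||_F;
    the cluster label of row i is the index of the largest entry in row i
    of W. Background only, not the paper's constrained method."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative H update
        W *= (V @ H.T) / (W @ (H @ H.T) + eps) # multiplicative W update
    return W.argmax(axis=1), W, H
```

A semi-supervised variant in the spirit of the abstract would add a penalty to this objective that rewards must-link pairs for sharing the same dominant factor and penalizes cannot-link pairs for doing so.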
Pairwise Constraint Propagation by Semidefinite Programming for Semi-Supervised Classification
Abstract

Cited by 22 (4 self)
We consider the general problem of learning from both pairwise constraints and unlabeled data. The pairwise constraints specify whether two objects belong to the same class or not, and are known as must-link constraints and cannot-link constraints. We propose to learn a mapping that is smooth over the data graph and maps the data onto a unit hypersphere, where two must-link objects are mapped to the same point while two cannot-link objects are mapped to be orthogonal. We show that such a mapping can be achieved by formulating a semidefinite programming problem, which is convex and can be solved globally. Our approach can effectively propagate pairwise constraints to the whole data set. It can be directly applied to multiclass classification and can handle data labels, pairwise constraints, or a mixture of them in a unified framework. Promising experimental results are presented for classification tasks on a variety of synthetic and real data sets.
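A hedged reconstruction of the kind of semidefinite program the abstract describes (the paper's exact formulation may differ): learn the Gram matrix K of the mapped points directly, encoding the unit hypersphere, must-link, and cannot-link conditions as linear constraints on K, with graph smoothness as the objective.

```latex
\begin{aligned}
\min_{K \succeq 0}\;\; & \operatorname{tr}(LK) \\
\text{s.t.}\;\; & K_{ii} = 1 \quad \forall i \\
& K_{ij} = 1 \quad \forall (i,j) \in \mathcal{M} \quad \text{(must-link)} \\
& K_{ij} = 0 \quad \forall (i,j) \in \mathcal{C} \quad \text{(cannot-link)}
\end{aligned}
```

Here $L$ is the graph Laplacian, so $\operatorname{tr}(LK)$ is the usual smoothness penalty; $K_{ii}=1$ places each mapped point on the unit hypersphere, $K_{ij}=1$ together with the unit diagonal forces must-link points to coincide, and $K_{ij}=0$ forces cannot-link points to be orthogonal. All constraints are linear in $K$ and the feasible set $K \succeq 0$ is a convex cone, consistent with the abstract's claim of a globally solvable convex formulation.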
Symmetric Nonnegative Matrix Factorization for Graph Clustering
Abstract

Cited by 21 (4 self)
Nonnegative matrix factorization (NMF) provides a lower-rank approximation of a nonnegative matrix and has been successfully used as a clustering method. In this paper, we offer some conceptual understanding of the capabilities and shortcomings of NMF as a clustering method. Then, we propose Symmetric NMF (SymNMF) as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix. Unlike NMF, however, SymNMF is based on a similarity measure between data points, and factorizes a symmetric matrix containing pairwise similarity values (not necessarily nonnegative). We compare SymNMF with the widely used spectral clustering methods and give an intuitive explanation of why SymNMF captures the cluster structure embedded in the graph representation more naturally. In addition, we develop a Newton-like algorithm that exploits second-order information efficiently, so as to show the feasibility of SymNMF as a practical framework for graph clustering. Our experiments on artificial graph data, text data, and image data demonstrate the substantially enhanced clustering quality of SymNMF over spectral clustering and NMF. Therefore, SymNMF is able to achieve better clustering results on both linear and nonlinear manifolds, and serves as a potential basis for many extensions and applications.
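The paper's Newton-like algorithm is not reproduced here; as a minimal stand-in, this sketch minimizes the SymNMF objective ||A - H Hᵀ||_F² with H ≥ 0 by projected gradient descent, which is enough to see how the nonnegative factor doubles as a cluster assignment matrix. The step size and iteration count are my own arbitrary choices.

```python
import numpy as np

def symnmf_pgd(A, k, iters=500, step=1e-2, seed=0):
    """Projected-gradient sketch of SymNMF: minimize ||A - H H^T||_F^2
    subject to H >= 0, for a symmetric similarity matrix A. Row i of H is
    read as the soft cluster assignment of node i (hard label: argmax of
    the row). Not the paper's Newton-like algorithm."""
    rng = np.random.default_rng(seed)
    H = 0.1 * rng.random((A.shape[0], k))
    for _ in range(iters):
        grad = 4.0 * (H @ (H.T @ H) - A @ H)   # gradient of the objective
        H = np.maximum(H - step * grad, 0.0)   # gradient step + projection
    return H
```

Unlike spectral clustering, no rounding step (e.g. K-means on eigenvectors) is needed: nonnegativity makes H itself interpretable as the assignment.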
Discovering Latent Domains for Multi-source Domain Adaptation
Abstract

Cited by 17 (2 self)
Recent domain adaptation methods successfully learn cross-domain transforms to map points between source and target domains. Yet these methods are either restricted to a single training domain or assume that the separation into source domains is known a priori. However, most available training data contains multiple unknown domains. In this paper, we present both a novel domain transform mixture model, which outperforms a single-transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains. Our discovery method is based on a novel hierarchical clustering technique that uses available object category information to constrain the set of feasible domain separations. To illustrate the effectiveness of our approach, we present experiments on two commonly available image datasets with and without known domain labels; in both cases our method outperforms baseline techniques that use no domain adaptation, or domain adaptation methods that presume a single underlying domain shift.
BoostCluster: Boosting Clustering by Pairwise Constraints
Abstract

Cited by 17 (5 self)
Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering, termed BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information, since clustering algorithms are by definition unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results produced by the given algorithm at previous iterations and, on the other hand, consistent with the given side information. Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional Single-Link, spectral clustering), and its performance is comparable to state-of-the-art algorithms for data clustering with side information.
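BoostCluster itself wraps an arbitrary base clusterer and is not reproduced here; as a self-contained illustration of how pairwise constraints are typically exploited, here is a COP-KMeans-style sketch (after Wagstaff et al.) in which each point is assigned to the nearest centroid that violates none of its constraints.

```python
import numpy as np

def cop_kmeans(X, k, must, cannot, iters=50, seed=0):
    """COP-KMeans-style constrained clustering sketch (after Wagstaff et
    al.), not the BoostCluster framework: `must` and `cannot` map a point
    index to the indices it must / must not share a cluster with."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    labels = np.full(n, -1)
    for _ in range(iters):
        new = np.full(n, -1)
        for i in range(n):
            # try centroids from nearest to farthest, keep first feasible one
            for c in np.argsort(((X[i] - centers) ** 2).sum(axis=1)):
                ok = all(new[j] in (-1, c) for j in must.get(i, ())) and \
                     all(new[j] != c for j in cannot.get(i, ()))
                if ok:
                    new[i] = c
                    break
            if new[i] == -1:
                return None            # constraints unsatisfiable
        for c in range(k):
            members = X[new == c]
            if len(members):
                centers[c] = members.mean(axis=0)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

The hard-assignment rule here is the simplest possible use of constraints; BoostCluster instead changes the data representation so that any unmodified base algorithm respects the side information.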
Supervised pattern classification based on optimum-path forest
 INTERN. JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY (IJIST)
, 2009
Abstract

Cited by 17 (11 self)
We present a supervised classification method which represents each class by one or more optimum-path trees rooted at some key samples, called prototypes. The training samples are nodes of a complete graph whose arcs are weighted by the distances between the feature vectors of their nodes. Prototypes are identified in all classes, and the minimization of a connectivity function by dynamic programming assigns to each training sample a minimum-cost path from its most strongly connected prototype. This competition among prototypes partitions the graph into an optimum-path forest rooted at them. The samples in an optimum-path tree are assumed to have the same class as its root. A test sample is classified similarly, by identifying which tree would contain it if the sample were part of the training set. By choice of the graph model and connectivity function, one can devise other optimum-path forest classifiers. We present one of them, which is fast, simple, multiclass, parameter-independent, makes no assumption about the shapes of the classes, and can handle some degree of overlap between classes. We also propose a general algorithm to learn from errors on an evaluation set without increasing the training set, and show the advantages of our method with respect to SVM, ANN-MLP, and k-NN classifiers in several experiments with datasets of various types.
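A minimal sketch of the optimum-path forest idea under the f_max (minimax) connectivity function, with the prototypes assumed given as input; the paper instead derives prototypes from a minimum spanning tree and adds a learning procedure on an evaluation set. Function names and the O(n²) Dijkstra-style competition are my own simplifications.

```python
import heapq
import numpy as np

def opf_train(X, y, prototypes):
    """Each training sample receives the cheapest path from any prototype,
    where the cost of a path is its maximum arc length (f_max), and
    inherits the winning prototype's label."""
    n = len(X)
    cost = np.full(n, np.inf)
    label = np.full(n, -1)
    heap = []
    for p in prototypes:
        cost[p] = 0.0
        label[p] = y[p]
        heapq.heappush(heap, (0.0, p))
    done = np.zeros(n, dtype=bool)
    while heap:                        # multi-source Dijkstra with max-cost
        c, s = heapq.heappop(heap)
        if done[s]:
            continue
        done[s] = True
        for t in range(n):             # complete graph: O(n^2) overall
            if done[t]:
                continue
            new_cost = max(c, np.linalg.norm(X[s] - X[t]))
            if new_cost < cost[t]:
                cost[t] = new_cost
                label[t] = label[s]
                heapq.heappush(heap, (new_cost, t))
    return cost, label

def opf_classify(x, X, cost, label):
    """A test sample joins the tree offering it the cheapest extended path."""
    c = np.maximum(cost, np.linalg.norm(X - x, axis=1))
    return label[int(np.argmin(c))]
```

Because path cost is the maximum arc rather than the sum, dense clusters of any shape are reachable cheaply from their own prototype, which is what frees the method from assumptions about class shapes.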
Object identification with constraints
 In ICDM
, 2006
Abstract

Cited by 16 (7 self)
Object identification aims at identifying different representations of the same object based on noisy attributes, such as descriptions of the same product in different online shops or references to the same paper in different publications. Numerous solutions have been proposed for this task, almost all of them based on similarity functions over pairs of objects. Although the similarity functions are nowadays learned from a set of labeled training data, the structural information given by the labeled data is not used. By formulating a generic model for object identification, we show how almost any proposed identification model can easily be extended to satisfy structural constraints. We therefore propose a model that uses structural information, given as pairwise constraints, to guide collective decisions about object identification, in addition to a learned similarity measure. We show with empirical experiments on public and real-life data that combining structural information and attribute-based similarity substantially increases the overall performance on object identification tasks.