Results 11–20 of 84
Incremental spectral clustering and its application to topological mapping
In Proc. IEEE Int. Conf. on Robotics and Automation, 2007
Cited by 18 (3 self)
Abstract — This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental – the spectral clustering algorithm is applied to the affinity matrix after each row/column is added – which makes it possible to inspect the clusters as new data points are added. The method is well suited to the problem of appearance-based, online topological mapping for mobile robots. In this problem domain, we show that we can reduce environment-dependent parameters of the clustering algorithm to just a single, intuitive parameter. Experimental results in large outdoor and indoor environments show that we can close loops correctly by computing only a fraction of the entries in the affinity matrix. The accompanying video clip shows how an example map is produced by the algorithm.
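As a concrete illustration of the incremental loop described above, the sketch below re-runs a two-way spectral split each time a point is added. It is a hedged toy version: it recomputes the full Gaussian affinity matrix at every step (whereas the paper's contribution is skipping most entries), and the Gaussian kernel, its `sigma` parameter, and the sign-based split are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def spectral_labels(A):
    """Two-way spectral split: threshold the second-smallest eigenvector
    (Fiedler vector) of the normalized Laplacian of affinity matrix A."""
    d = A.sum(axis=1)
    s = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - s[:, None] * A * s[None, :]
    _, vecs = np.linalg.eigh(L)           # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)

def incremental_spectral(points, sigma=1.0):
    """Re-cluster after every added point, as in the paper's incremental
    loop. This sketch recomputes all Gaussian affinities at each step;
    the paper's point is that most of these entries can be skipped."""
    history = []
    for n in range(2, len(points) + 1):
        P = points[:n]
        D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        A = np.exp(-D2 / (2.0 * sigma ** 2))
        np.fill_diagonal(A, 0.0)
        history.append(spectral_labels(A))
    return history
```

Each entry of `history` is the clustering after one more point has arrived, so the clusters can be inspected online, as the abstract describes.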
Clustering with Normalized Cuts is Clustering with a Hyperplane
In Statistical Learning in Computer Vision, 2004
Cited by 18 (3 self)
Abstract. We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts in this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.
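For reference, the Normalized Cuts computation the abstract reinterprets can be sketched as follows: solve the symmetric form of the generalized eigenproblem (D - W) v = lambda D v and threshold the second eigenvector at zero, i.e., cut the embedded points with a hyperplane through the origin. This is the standard Shi–Malik two-way cut, not the uniform-weight variant the paper derives.

```python
import numpy as np

def ncut_split(W):
    """Shi & Malik's two-way Normalized Cut: form the symmetric matrix
    D^{-1/2} (D - W) D^{-1/2}, take its second-smallest eigenvector,
    map back with D^{-1/2} to get the generalized eigenvector, and
    threshold at zero (a hyperplane cut in the spectral embedding)."""
    d = W.sum(axis=1)
    s = 1.0 / np.sqrt(d)
    Lsym = s[:, None] * (np.diag(d) - W) * s[None, :]
    _, vecs = np.linalg.eigh(Lsym)        # eigenvalues ascending
    v = s * vecs[:, 1]                    # generalized eigenvector of (D - W, D)
    return (v > 0).astype(int)
```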
Identifying meaningful locations
In Mobile and Ubiquitous Systems - Workshops, 3rd Annual International Conference on, IEEE, 2006
Cited by 15 (4 self)
Abstract — Existing context-aware mobile applications often rely on location information. However, raw location data such as GPS coordinates or GSM cell identifiers are usually meaningless to the user and, as a consequence, researchers have proposed different methods for inferring so-called places from raw data. The places are locations that carry some meaning to the user and to which the user can potentially attach some (meaningful) semantics. Examples of places include home, work and airport. A shortcoming of existing work is that the labeling has been done in an ad hoc fashion and no motivation has been given for why places would be interesting to the user. As our first contribution we use social identity theory to motivate why some locations really are significant to the user. We also discuss what potential uses for location information social identity theory implies. Another flaw in the existing work is that most of the proposed methods are not suited to realistic mobile settings as they rely on the availability of GPS information. As our second contribution we consider a more realistic setting where the information consists of GSM cell transitions that are enriched with GPS information whenever a GPS device is available. We present four different algorithms for this problem and compare them using real data gathered throughout Europe. In addition, we analyze the suitability of our algorithms for mobile devices.
Amplifying the block matrix structure for spectral clustering
 In Proceedings of the 14th Annual Machine Learning Conference of Belgium and the Netherlands
Cited by 15 (0 self)
Abstract. Spectral clustering methods perform well in cases where classical methods (K-means, single linkage, etc.) fail. However, for very non-compact clusters, they also tend to have problems. In this paper, we propose three improvements which we show perform better in such cases. We suggest that spectral decomposition is merely a method for determining the block structure of the affinity matrix. Consequently, it is advantageous for clustering techniques if the affinity matrix has a clear block structure. We propose two independent steps to achieve this goal. In the first, which we term context-dependent affinity, we compute point affinities by taking their neighborhoods into account. In the second, the conductivity method, we aim at amplifying the block structure of the affinity matrix. Combining these two enables us to achieve a clear block-diagonal structure, despite starting with very weak affinities. For the last step, clustering the spectral images, K-means is commonly used. Instead, as a third improvement, we suggest using our K-lines algorithm. When compared to other clustering algorithms, our methods display promising performance on both artificial and real-world data sets.
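The idea of computing affinities from neighborhoods can be illustrated with a simple stand-in: replace each pairwise affinity by the correlation of the two points' affinity rows. This is not the paper's context-dependent affinity or conductivity formula, just a minimal demonstration that using neighborhood context sharpens the block structure of a weak affinity matrix.

```python
import numpy as np

def context_affinity(A):
    """Recompute each affinity from the correlation of the two points'
    full affinity rows (their neighborhood 'context'). Points whose
    neighborhoods agree get affinity near 1; points in different blocks
    get affinity clipped to 0. Illustrative only, not the paper's formula."""
    X = A - A.mean(axis=1, keepdims=True)          # center each row
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return np.clip(X @ X.T, 0.0, 1.0)              # row-wise correlation
```

On a matrix with within-block affinity 0.8 and between-block affinity 0.2, the recomputed affinities are near 1 within blocks and 0 across blocks, i.e. a much clearer block-diagonal structure.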
Reconstructing many partitions using spectral techniques
In Proceedings of the 15th International Symposium on Fundamentals of Computation Theory, 2005
Cited by 13 (1 self)
Abstract. A partitioning of a set of n items is a grouping of these items into k disjoint, equally sized classes. Any partition can be modeled as a graph. The items become the vertices of the graph and two vertices are connected by an edge if and only if the associated items belong to the same class. In a planted partition model a graph that models a partition is given, which is obscured by random noise, i.e., edges within a class can get removed and edges between classes can get inserted. The task is to reconstruct the planted partition from this graph. In the model that we study the number k of classes controls the difficulty of the task. We design a spectral partitioning algorithm that asymptotically almost surely reconstructs up to k = c·√n partitions, where c is a small constant, in time C^k · poly(n), where C is another constant.
Clustering via LP-based Stabilities
Cited by 13 (9 self)
A novel center-based clustering algorithm is proposed in this paper. We first formulate clustering as an NP-hard linear integer program and we then use linear programming and duality theory to derive the solution of this optimization problem. This leads to an efficient and very general algorithm, which works in the dual domain, and can cluster data based on an arbitrary set of distances. Despite its generality, it is independent of initialization (unlike EM-like methods such as K-means), has guaranteed convergence, can automatically determine the number of clusters, and can also provide online optimality bounds about the quality of the estimated clustering solutions. To deal with the most critical issue in a center-based clustering algorithm (the selection of cluster centers), we also introduce the notion of the stability of a cluster center, which is a well-defined LP-based quantity that plays a key role in our algorithm's success. Furthermore, we introduce what we call margins (another key ingredient in our algorithm), which can be roughly thought of as dual counterparts to stabilities and allow us to obtain computationally efficient approximations to the latter. Promising experimental results demonstrate the potential of our method.
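The underlying NP-hard objective can be made concrete on a toy instance: choose a set of cluster centers minimizing total point-to-center distance plus a per-center cost, here solved by brute-force enumeration rather than the paper's LP dual machinery. The flat `penalty` per center is an illustrative assumption standing in for the formulation's center costs.

```python
import itertools
import numpy as np

def best_centers(D, penalty):
    """Exhaustively minimize the center-based objective
        sum_i min_{c in C} D[i, c] + penalty * |C|
    over all non-empty center sets C drawn from the data points.
    Brute force is exponential, which is why the paper works with an
    LP relaxation and its dual instead."""
    n = len(D)
    best_cost, best_C = float("inf"), None
    for r in range(1, n + 1):
        for C in itertools.combinations(range(n), r):
            cost = D[:, list(C)].min(axis=1).sum() + penalty * r
            if cost < best_cost:
                best_cost, best_C = cost, C
    return best_C, best_cost
```

Note that the objective determines the number of clusters automatically: the penalty trades off adding a center against the distances it saves.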
L. Xu. Regularized spectral learning
In Proceedings of the Artificial Intelligence and Statistics Workshop (AISTATS 05), 2005
Cited by 12 (3 self)
Spectral clustering is a technique for finding groups in data consisting of similarities Sij between pairs of points. We approach the problem of learning the similarity as a function of other observed features, in order to optimize spectral clustering results on future data. This paper formulates a new objective for learning in spectral clustering that balances a clustering accuracy term, the gap, and a stability term, the eigengap, with the latter in the role of a regularizer. We derive an algorithm to optimize this objective, and semi-automatic methods to choose the optimal regularization. Preliminary experiments confirm the validity of the approach.
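The eigengap used as the stability term above can be computed directly from the normalized Laplacian of the similarity matrix. The sketch below is a generic computation of that quantity, not the paper's learning algorithm: a large gap at k indicates a stable k-cluster structure.

```python
import numpy as np

def eigengap(S, k):
    """Eigengap lambda_{k+1} - lambda_k of the normalized Laplacian of a
    similarity matrix S (eigenvalues sorted ascending, so with 0-based
    indexing the gap for k clusters is vals[k] - vals[k - 1])."""
    d = S.sum(axis=1)
    s = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(S)) - s[:, None] * S * s[None, :]
    vals = np.linalg.eigvalsh(L)
    return vals[k] - vals[k - 1]
```

For a similarity matrix with two clean blocks, the gap at k = 2 is large while the gap at k = 3 is near zero, which is what makes the eigengap usable as a regularizer.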
Spectral Mesh Processing
Cited by 11 (1 self)
Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early work in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis of mesh geometry based on a combinatorial Laplacian aids our understanding of the low-pass filtering approach to mesh smoothing. Over the past fifteen years, the list of applications in the area of geometry processing which utilize the eigenstructures of a variety of mesh operators in different manners has been growing steadily. Many works presented so far draw parallels from developments in fields such as graph theory, computer vision, machine learning, graph drawing, numerical linear algebra, and high-performance computing. This paper aims to provide a comprehensive survey of the spectral approach, focusing on its power and versatility in solving geometry processing problems and attempting to bridge the gap between relevant research in computer graphics and other fields. Necessary theoretical background is provided. Existing works covered are classified according to different criteria: the operators or eigenstructures employed, application domains, or the dimensionality of the spectral embeddings used. Despite much empirical success, there still remain many open questions pertaining to the spectral approach. These are discussed as we conclude the survey and provide our perspective on possible future research.
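The low-pass filtering view mentioned above is easy to make concrete: smoothing with the combinatorial (umbrella) Laplacian moves each vertex toward the average of its neighbors, damping high-frequency components of the coordinate signal. The sketch below treats the mesh simply as an adjacency matrix; `lam` is the usual step-size parameter in Laplacian smoothing.

```python
import numpy as np

def laplacian_smooth(V, adj, lam=0.5, steps=1):
    """Low-pass filter vertex positions V (one row per vertex) with the
    combinatorial Laplacian: each step moves every vertex a fraction lam
    toward the average of its graph neighbors."""
    A = adj.astype(float)
    deg = A.sum(axis=1, keepdims=True)
    V = V.astype(float).copy()
    for _ in range(steps):
        V += lam * (A @ V / deg - V)   # V <- V - lam * L_umbrella V
    return V
```

The frequency interpretation is visible on a ring graph: a perfectly alternating signal is the highest-frequency Laplacian eigenvector, and with lam = 0.5 a single smoothing step annihilates it exactly.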
A Recommender System Based on Local Random Walks and Spectral Methods
2007
Cited by 10 (0 self)
In this paper, we design recommender systems for weblogs based on the link structure among them. We propose algorithms based on refined random walks and spectral methods. First, we observe the use of the personalized PageRank vector to capture the relevance among nodes in a social network. We apply the local partitioning algorithms based on refined random walks to approximate the personalized PageRank vector, and extend these ideas from undirected graphs to directed graphs. Moreover, inspired by ideas from spectral clustering, we design a similarity metric among nodes of a social network using the eigenvalues and eigenvectors of a normalized adjacency matrix of the social network graph. In order to evaluate these algorithms, we crawled a set of weblogs and constructed a weblog graph. We expect these algorithms based on the link structure to perform very well for weblogs, since the average degree of nodes in the weblog graph is large. Finally, we compare the performance of our algorithms on this data set. In particular, the acceptable performance of our algorithms on this data set justifies the use of a link-based recommender system for social networks with large average degree.
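The personalized PageRank vector the abstract builds on can be computed by plain power iteration; the sketch below is the generic dense-matrix version with uniform out-links (the paper instead approximates it with local partitioning and random-walk techniques, which this does not implement).

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Personalized PageRank by power iteration: with probability alpha
    teleport back to the seed node, otherwise follow a uniformly chosen
    out-link. The stationary vector ranks nodes by relevance to the seed.
    (Dangling nodes are not redistributed in this simplified sketch.)"""
    n = len(adj)
    out = adj.sum(axis=1)
    P = adj / np.maximum(out[:, None], 1e-12)   # row-stochastic transition
    e = np.zeros(n)
    e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1.0 - alpha) * (p @ P)
    return p
```

On an undirected path graph seeded at one end, the resulting mass concentrates near the seed, which is exactly the relevance signal a link-based recommender would rank by.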