Results 1–10 of 3,678
APPLICATION OF HIERARCHICAL AND K-MEANS TECHNIQUES IN CONTENT-BASED IMAGE RETRIEVAL
"... In recent years there has been enormous growth in the collection of varied image databases on the web, and it is difficult for users to search and retrieve the required images from these large collections. Content-based image retrieval emerged as an alternative to automated text-based image retrieval systems ... The unique aspect of the system is the use of hierarchical and k-means clustering techniques. The proposed procedure consists of two stages: first, hierarchical clustering is used to group similar images; then the image groups are passed to k-means, so that we obtain better ..."
A REVIEW PAPER ON AN IMPROVED K-MEANS TECHNIQUE FOR OUTLIER DETECTION IN HIGH-DIMENSIONAL DATASETS
"... In many data mining application domains, outlier detection is an important task. It can be regarded as a binary, asymmetric (unbalanced) classification of patterns in which one class has much higher cardinality than the other. Finding outliers is very challenging in high-dimensional datasets, where the data contain a large amount of noise that hurts effectiveness; outliers are most useful when diagnosed from data characteristics that deviate significantly from the average. This paper presents an improved k-means technique for outlier detection in high-dimensional datasets. Various subspace-based methods have ..."
A comparison of document clustering techniques
In KDD Workshop on Text Mining, 2000
"... This paper presents the results of an experimental study of some common document clustering techniques: agglomerative hierarchical clustering and k-means. (We used both a “standard” k-means algorithm and a “bisecting” k-means algorithm.) Our results indicate that the bisecting k-means technique is ..."
Cited by 613 (27 self)
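The bisecting idea named in this abstract can be sketched in a few lines. This is a simplified illustration of my own, not the paper's code: splitting the largest cluster is only one of the selection criteria the paper evaluates, and the deterministic farthest-point seeding is my shortcut to keep the sketch reproducible.

```python
import numpy as np

def two_means(X, iters=20):
    """Split X in two with Lloyd's iteration, seeding with the first
    point and the point farthest from it (a deterministic choice)."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(-1))]
    centers = np.array([c0, c1])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in (0, 1)])
    return labels

def bisecting_kmeans(X, k):
    """Start from one all-inclusive cluster and repeatedly bisect the
    largest remaining cluster until k clusters exist."""
    clusters = [np.arange(len(X))]
    while len(clusters) < k:
        i = max(range(len(clusters)), key=lambda j: len(clusters[j]))
        members = clusters.pop(i)
        labels = two_means(X[members])
        clusters += [members[labels == 0], members[labels == 1]]
    return clusters
```

Each cluster is kept as an index array into `X`, so the returned partition can be mapped straight back to the original documents.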
K-means++: The advantages of careful seeding
In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, 2007
"... The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. By augmenting k-means with a very simple, ran ..."
Cited by 478 (8 self)
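The seeding scheme this abstract alludes to, commonly known as D² weighting, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the function name and the use of plain NumPy are my own choices.

```python
import numpy as np

def kmeans_pp_seeds(X, k, rng=np.random.default_rng(0)):
    """Pick k initial centers: the first uniformly at random, each
    subsequent one with probability proportional to its squared
    distance from the nearest center chosen so far (D^2 weighting)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Squared distance from every point to its nearest chosen center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```

The D² weighting makes points far from all existing centers much more likely to be picked, which is what spreads the seeds across well-separated groups.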
An Efficient k-Means Clustering Algorithm: Analysis and Implementation
2000
"... k-means clustering is a very popular clustering technique, used in numerous applications. Given a set of n data points in R^d and an integer k, the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its ..."
Cited by 417 (4 self)
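For reference, the objective stated here is usually attacked with Lloyd's iteration. The following is a baseline sketch of that iteration, not the kd-tree-based filtering algorithm this paper actually analyzes; the empty-cluster handling is my own simplification.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, rng=np.random.default_rng(0)):
    """Alternate assignment and center-update steps to locally
    minimize the mean squared point-to-center distance."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its points (empty clusters keep
        # their old center, a simple guard).
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```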
X-means: Extending K-means with Efficient Estimation of the Number of Clusters
In Proceedings of the 17th International Conf. on Machine Learning, 2000
"... Despite its popularity for general clustering, k-means suffers three major shortcomings: it scales poorly computationally, the number of clusters K has to be supplied by the user, and the search is prone to local minima. We propose solutions for the first two problems, and a partial remedy for the t ... and their parameters. Experiments show this technique reveals the true number of classes in the underlying distribution, and that it is much faster than repeatedly using accelerated k-means for different values of K. ..."
Cited by 418 (5 self)
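The model-selection idea behind X-means (try splitting each cluster in two and keep the split when a BIC-style score improves) can be caricatured as below. The BIC here is a rough hard-assignment, spherical-Gaussian approximation of my own and the deterministic 2-means seeding is a simplification, so treat this as a sketch of the control flow, not the paper's algorithm.

```python
import numpy as np

def _two_means(X, iters=20):
    """Deterministic 2-means: seed with X[0] and the point farthest from it."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(-1))]
    centers = np.array([c0, c1])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        if labels.min() == labels.max():  # degenerate split
            break
        centers = np.array([X[labels == j].mean(0) for j in (0, 1)])
    return centers, labels

def _bic(X, centers, labels):
    """Rough BIC for a hard-assignment spherical-Gaussian mixture."""
    n, d = X.shape
    k = len(centers)
    rss = ((X - centers[labels]) ** 2).sum()
    sigma2 = max(rss / max(n - k, 1), 1e-12)
    loglik = -0.5 * rss / sigma2
    for j in range(k):
        nj = np.sum(labels == j)
        loglik += nj * np.log(nj / n) - 0.5 * nj * d * np.log(2 * np.pi * sigma2)
    n_params = k * d + k  # centers plus mixing weights, roughly
    return loglik - 0.5 * n_params * np.log(n)

def xmeans_lite(X, k_max):
    """Split clusters while the (rough) BIC favors two centers over one."""
    clusters = [np.arange(len(X))]
    changed = True
    while changed and len(clusters) < k_max:
        changed = False
        for i, members in enumerate(clusters):
            sub = X[members]
            one = _bic(sub, sub.mean(0, keepdims=True), np.zeros(len(sub), int))
            centers2, labels2 = _two_means(sub)
            if labels2.min() == labels2.max():
                continue
            if _bic(sub, centers2, labels2) > one:
                clusters[i] = members[labels2 == 0]
                clusters.append(members[labels2 == 1])
                changed = True
                break
    return clusters
```

The point of the sketch is the stopping rule: K is discovered, not supplied, because splits stop once the score no longer improves.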
Refining Initial Points for K-Means Clustering
1998
"... Practical approaches to clustering use an iterative procedure (e.g. k-means, EM) which converges to one of numerous local minima. It is known that these iterative techniques are especially sensitive to initial starting conditions. We present a procedure for computing a refined starting condition fro ..."
Cited by 317 (5 self)
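The refinement procedure can be sketched roughly as follows. This is my own simplified rendition of the subsample-and-pool idea; the parameter names and the plain-Lloyd helper are assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

def _kmeans(X, k, rng, iters=30):
    """Plain Lloyd iteration; empty clusters keep their old center."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def refined_seeds(X, k, n_subsamples=10, frac=0.2, rng=np.random.default_rng(0)):
    """Cluster several small random subsamples, pool the resulting
    centers, then cluster the pool; the pool's centers become the
    refined starting points for clustering the full data."""
    m = max(k, int(frac * len(X)))
    pool = np.vstack([_kmeans(X[rng.choice(len(X), m, replace=False)], k, rng)
                      for _ in range(n_subsamples)])
    return _kmeans(pool, k, rng)
```

Because each subsample solution lands near some mode of the data, clustering the pooled solutions smooths out the noise of any single subsample and yields starting points close to the true cluster centers.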
Estimating the number of clusters in a dataset via the Gap statistic
2000
"... We propose a method (the "Gap statistic") for estimating the number of clusters (groups) in a set of data. The technique uses the output of any clustering algorithm (e.g. k-means or hierarchical), comparing the change in within-cluster dispersion to that expected under an appropriate reference ..."
Cited by 502 (1 self)
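The core computation can be sketched as below: compare the log within-cluster dispersion of the data against its expectation under reference data with no cluster structure. This minimal rendition assumes a uniform bounding-box reference distribution (one of the reference choices discussed); the helper names and the short Lloyd run are mine.

```python
import numpy as np

def _wk(X, k, rng):
    """Within-cluster sum of squared distances after a short Lloyd run."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(30):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return ((X - centers[labels]) ** 2).sum()

def gap_statistic(X, k, n_ref=10, rng=np.random.default_rng(0)):
    """Gap(k) = mean_ref log(W_k^ref) - log(W_k): observed dispersion
    versus dispersion of uniform draws from the data's bounding box."""
    log_wk = np.log(_wk(X, k, rng))
    lo, hi = X.min(0), X.max(0)
    ref = [np.log(_wk(rng.uniform(lo, hi, X.shape), k, rng))
           for _ in range(n_ref)]
    return np.mean(ref) - log_wk
```

A k that matches real structure shrinks the data's dispersion far more than it shrinks the reference's, so the gap peaks near the true number of clusters.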
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
2006
"... In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and inc ... signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the k-means clustering process. K-SVD is an iterative method ..."
Cited by 935 (41 self)
K-means Clustering via Principal Component Analysis
2004
"... Principal component analysis (PCA) is a widely used statistical technique for unsupervised dimension reduction. K-means clustering is a commonly used data clustering method for performing unsupervised learning tasks. Here we ..."
Cited by 201 (5 self)