
## Survey of clustering algorithms (2005)



Venue: IEEE Transactions on Neural Networks

Citations: 488 (4 self)

### Citations

8422 | Gapped BLAST and PSI-BLAST: a new generation of protein database search programs.
- Altschul, Madden, et al.
- 1997
Citation Context: ...amming algorithms are computationally infeasible. In practice, sequence comparison or proximity measure is achieved via some heuristics. Well-known examples include BLAST and FASTA with many variants [10], [11], [224]. The key idea of these methods is to identify regions that may have potentially high matches, with a list of prespecified high-scoring words, at an early stage. Therefore, further search...

6510 |
Neural Networks for Pattern Recognition.
- Bishop
- 1995
Citation Context: ...sification systems are either supervised or unsupervised, depending on whether they assign new inputs to one of a finite number of discrete supervised classes or unsupervised categories, respectively [38], [60], [75]. In supervised classification, the mapping from a set of input data vectors ( , where is the input space dimensionality), to a finite set of discrete class labels ( , where is the total n...

4490 |
A new look at the statistical model identification.
- Akaike
- 1974
Citation Context: ...bilities calculated. A large number of criteria, which combine concepts from information theory, have been proposed in the literature. Typical examples include: Akaike's information criterion (AIC) [4], [282], where is the total number of patterns, is the number of parameters for each cluster, is the total number of parameters estimated, and is the maximum log-likelihood. is selected with the mi...
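
The AIC-based model selection named in this context can be made concrete with a small sketch. This is a hedged illustration only: the one-dimensional Gaussian models, the hard two-way split, and the helper names (`aic`, `gaussian_log_likelihood`) are assumptions for demonstration, not the survey's exact formulation.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike's information criterion: AIC = -2 ln L + 2 K; smaller is better."""
    return -2.0 * log_likelihood + 2.0 * n_params

def gaussian_log_likelihood(data, mean, var):
    """Log-likelihood of 1-D data under a single Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)
               for x in data)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]

# Model A: one Gaussian (2 free parameters: mean, variance).
m = sum(data) / len(data)
v = sum((x - m) ** 2 for x in data) / len(data)
aic_one = aic(gaussian_log_likelihood(data, m, v), n_params=2)

# Model B: two Gaussians with a hard split of the data (4 free parameters).
ll_two = 0.0
for part in ([0.9, 1.0, 1.1], [4.9, 5.0, 5.1]):
    pm = sum(part) / len(part)
    pv = sum((x - pm) ** 2 for x in part) / len(part)
    ll_two += gaussian_log_likelihood(part, pm, pv)
aic_two = aic(ll_two, n_params=4)

print(aic_two < aic_one)  # well-separated data favors the two-component model
```

The 2K penalty term is what keeps the criterion from always preferring the model with more clusters.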

3313 | A Tutorial on Support Vector Machines for Pattern Recognition. - Burges - 1998

2834 |
Cluster analysis and display of genome-wide expression patterns,
- Eisen, Spellman, et al.
- 1998
Citation Context: ...ions and interactions between genes under different conditions, and attracts more attention currently. Generally, cluster analysis of gene expression data is composed of two aspects: clustering genes [80], [206], [260], [268], [283], [288] or clustering tissues or experiments [5], [109], [238]. Results of gene clustering may suggest that genes in the same group have similar functions, or they share th...

2027 |
Pattern recognition with fuzzy objective function algorithms.
- Bezdek
- 1981
Citation Context: ...between a given object and the disclosed clusters. FCM is one of the most popular fuzzy clustering algorithms [141]. FCM can be regarded as a generalization of ISODATA [76] and was realized by Bezdek [35]. FCM attempts to find a partition (c fuzzy clusters) for a set of data points while minimizing the cost function, where is the fuzzy partition matrix and is the membership coefficient of the th object ...
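
The alternating cost-function minimization described here can be sketched compactly. This is an illustrative one-dimensional fuzzy c-means with the standard membership and center updates; the `fcm` helper, its spread initialization, and the toy data are assumptions, not code from [35].

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means sketch, alternating two update steps:
      memberships: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
      centers:     v_i  = sum_j u_ij^m x_j / sum_j u_ij^m
    """
    lo, hi = min(points), max(points)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # spread init
    u = []
    for _ in range(iters):
        u = []
        for x in points:  # membership update
            d = [abs(x - v) or 1e-12 for v in centers]  # guard zero distance
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for i in range(c)])
        centers = [  # center update: fuzzily weighted means
            sum(u[j][i] ** m * points[j] for j in range(len(points)))
            / sum(u[j][i] ** m for j in range(len(points)))
            for i in range(c)
        ]
    return centers, u

centers, u = fcm([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])
print(sorted(round(v, 1) for v in centers))
```

Unlike hard k-means, every point contributes to every center, weighted by its membership raised to the fuzzifier m.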

1741 | A density-based algorithm for discovering clusters in large spatial databases with noise.
- Ester, Kriegel, et al.
- 1996
Citation Context: ...have been made in order to overcome its disadvantages [142], [218]. 3) Many novel algorithms have been developed to cluster large-scale data sets, especially in the context of data mining [44], [45], [85], [135], [213], [248]. Many of them can scale the computational complexity linearly to the input size and demonstrate the possibility of handling very large data sets. a) Random sampling approach, e.g...

1741 | Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring,
- Golub
- 1999
Citation Context: ...attention currently. Generally, cluster analysis of gene expression data is composed of two aspects: clustering genes [80], [206], [260], [268], [283], [288] or clustering tissues or experiments [5], [109], [238]. Results of gene clustering may suggest that genes in the same group have similar functions, or they share the same transcriptional regulation mechanism. Cluster analysis, for grouping functio...

1534 |
The Use of Multiple Measurements in Taxonomic Problems.
- FISHER
- 1936
Citation Context: ...ctions III-A and III-B and the traveling salesman problem in Section III-C. A more extensive discussion of bioinformatics is in Sections III-D and III-E. A. Benchmark Data Sets—IRIS The iris data set [92] is one of the most popular data sets to examine the performance of novel methods in pattern recognition and machine learning. It can be downloaded from the UCI Machine Learning Repository at http://w...

1193 | Biological sequence analysis: Probabilistic models of proteins and nucleic acids.
- Durbin, Eddy, et al.
- 1998
Citation Context: ...equential data can be generated from: DNA sequencing, speech processing, text mining, medical diagnosis, stock market, customer transactions, web data mining, and robot sensor analysis, to name a few [78], [265]. In recent decades, sequential data grew explosively. For example, in genetics, the recent statistics released on October 15, 2004 (Release 144.0) shows that there are 43 194 602 655 bases fro...

1064 |
Cluster analysis for applications.
- Anderberg
- 1973
Citation Context: ... and management. One of the vital means in dealing with these data is to classify or group them into a set of categories or clusters. Actually, as one of the most primitive activities of human beings [14], classification plays an important and indispensable role in the long history of human development. In order to learn a new object or understand a new phenomenon, people always try to seek the featur...

1042 |
Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays.
- Alon, Barkai, et al.
- 1999
Citation Context: ...t origin of lineage, can be well separated. Alon et al. performed a two-way clustering for both tissues and genes and revealed the potential relations, represented as visualizing patterns, among them [6]. Alizadeh et al. demonstrated the effectiveness of molecular classification of cancers by their gene expression profiles and successfully distinguished two molecularly distinct subtypes of diffuse la...

759 | Knowledge acquisition via incremental conceptual clustering. - Fisher - 1987

719 | Automatic subspace clustering of high dimensional data for data mining applications.
- Agrawal, Gehrke, et al.
- 1998
Citation Context: ...d onto a SOFM with 1 002 240 nodes. Subspace-based clustering addresses the challenge by exploring the relations of data objects under different combinations of features. Clustering in quest (CLIQUE) [3] employs a bottom-up scheme to seek dense rectangular cells in all subspaces with high density of points. Clusters are generated as the connected components in a graph whose vertices stand for the den...

684 |
Adaptive control processes: A guided tour.
- Bellman
- 1961
Citation Context: ...term, “curse of dimensionality,” which was first used by Bellman to indicate the exponential growth of complexity in the case of multivariate function estimation under a high dimensionality situation [28], is generally used to describe the problems accompanying high dimensional spaces [34], [132]. It is theoretically proved that the distance between the nearest points is no different from that of othe...

656 | Laplacian eigenmaps and spectral techniques for embedding and clustering.
- Belkin, Niyogi
- 2001
Citation Context: ...em that finding -dimensional vectors so that the criterion function is minimized. Another interesting nonlinear dimensionality reduction approach, known as the Laplacian eigenmap algorithm, is presented in [27]. As discussed in Section II-H, SOFM also provide good visualization for high-dimensional input patterns [168]. SOFM map input patterns into a one or usually two dimensional lattice structure, consist...

655 | Tabu Search – Part I, in:
- Glover
- 1989
Citation Context: ... partitions, but they are easily stuck in local minima and therefore cannot guarantee optimality. More complex search methods (e.g., evolutionary algorithms (EAs) [93], SA [165], and Tabu search (TS) [108] are known as stochastic optimization methods, while deterministic annealing (DA) [139], [234] is the most typical deterministic search technique) can explore the solution space more flexibly and effi...

633 |
A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine
- Carpenter, Grossberg
- 1987
Citation Context: ...o provide more effective and faster clustering. [263] and [276] illustrate two such hybrid systems. ART was developed by Carpenter and Grossberg, as a solution to the plasticity and stability dilemma [51], [53], [113]. ART can learn arbitrary input patterns in a stable, fast, and self-organizing way, thus, overcoming the effect of learning instability that plagues many other competitive networks. ART ...

587 |
A Fuzzy Relative of The Isodata Process and its Use in Detecting Compact Well Separated Clusters.
- Dunn
- 1974
Citation Context: ...er more sophisticated relations between a given object and the disclosed clusters. FCM is one of the most popular fuzzy clustering algorithms [141]. FCM can be regarded as a generalization of ISODATA [76] and was realized by Bezdek [35]. FCM attempts to find a partition (c fuzzy clusters) for a set of data points while minimizing the cost function, where is the fuzzy partition matrix and is the membersh...

562 |
Bayesian Classification (AUTOCLASS): Theory and Results.
- Cheeseman, Stutz
- 1996
Citation Context: ... model-fitting estimator to construct each component from the contaminated model. AutoClass considers more families of probability distributions (e.g., Poisson and Bernoulli) for different data types [59]. A Bayesian approach is used in AutoClass to find out the optimal partition of the given data based on the prior probabilities. Its parallel realization is described in [228]. Other important algorit...

481 | A Bayesian Framework for the Analysis of Microarray Expression Data: Regularized t-Test and Statistical Inferences of Gene Changes.
- Baldi, Long
- 2001
Citation Context: ...age. Gene expression data analysis consists of a three-level framework based on the complexity, ranging from the investigation of single gene activities to the inference of the entire genetic network [20]. The intermediate level explores the relations and interactions between genes under different conditions, and attracts more attention currently. Generally, cluster analysis of gene expression data is...

449 |
Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps
- Carpenter, Grossberg, et al.
Citation Context: ...sifications [56]. The match tracking strategy ensures the consistency of category prediction between two ART modules by dynamically adjusting the vigilance parameter of ART . Also see fuzzy ARTMAP in [55]. A similar idea, omitting the inter-ART module, is known as LAPART [134]. The basic ART1 architecture consists of two-layer nodes, the feature representation field and the category representation fie...

443 | Clustering gene expression patterns.
- BEN-DOR, YAKHINI
- 1999
Citation Context: ... resulting clusters. Additional heuristics are provided to accelerate the algorithm performance. Similarly, CAST considers a probabilistic model in designing a graph theory-based clustering algorithm [29]. Clusters are modeled as corrupted clique graphs, which, in ideal conditions, are regarded as a set of disjoint cliques. The effect of noise is incorporated by adding or removing edges from the ideal...

431 |
Bioinformatics: The Machine Learning Approach.
- Baldi, Brunak
- 1998
Citation Context: ...nucleic acids and amino acids in the current DNA or protein databases, e.g., bacteria genomes are from 0.5 to 10 Mbp, fungi genomes range from 10 to 50 Mbp, while the human genome is around 3 310 Mbp [18] (Mbp means million base pairs). Thus, conventional dynamic programming algorithms are computationally infeasible. In practice, sequence comparison or proximity measure is achieved via some heuristics...

415 | Unsupervised learning of finite mixture models,
- Figueiredo, Jain
- 2002
Citation Context: ...m form The best estimate can be achieved by solving the log-likelihood equations. Unfortunately, since the solutions of the likelihood equations cannot be obtained analytically in most circumstances [90], [197], iteratively suboptimal approaches are required to approximate the ML estimates. Among these methods, the expectation-maximization (EM) algorithm is the most popular [196]. EM regards the data...

399 | When is Nearest Neighbor Meaningful?
- Beyer, Goldstein, et al.
- 1999
Citation Context: ...ential growth of complexity in the case of multivariate function estimation under a high dimensionality situation [28], is generally used to describe the problems accompanying high dimensional spaces [34], [132]. It is theoretically proved that the distance between the nearest points is no different from that of other points when the dimensionality of the space is high enough [34]. Therefore, clusteri...
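
The concentration effect attributed to [34] in this context is easy to observe empirically. A sketch under assumed settings: points drawn uniformly from the unit hypercube, distances measured from the origin, and the helper name `relative_contrast` invented for illustration.

```python
import random

def relative_contrast(dim, n_points=200, seed=1):
    """(d_max - d_min) / d_min over Euclidean distances from the origin
    to points drawn uniformly from [0, 1]^dim."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(sum(x * x for x in p) ** 0.5)
    return (max(dists) - min(dists)) / min(dists)

# The gap between nearest and farthest points collapses as dim grows,
# which is why distance-based clustering degrades in high dimensions.
print(relative_contrast(2), relative_contrast(1000))
```

In low dimensions the contrast is large; in a thousand dimensions all distances concentrate around the same value, so "nearest" loses its meaning.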

370 |
Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system.
- Carpenter, Grossberg, et al.
- 1991
Citation Context: ... under the ART1 framework. In [284] and [285], the ease with which ART may be used for hierarchical clustering is also discussed. Fuzzy ART (FA) benefits from the incorporation of fuzzy set theory and ART [57]. FA maintains similar operations to ART1 and uses the fuzzy set operators to replace the binary operators, so that it can work for all real data sets. FA exhibits many desirable characteristics such ...

333 |
Learning From Data: Concepts, Theory, and Methods,
- Cherkassky, Mulier
- 1998
Citation Context: ...tion systems are either supervised or unsupervised, depending on whether they assign new inputs to one of a finite number of discrete supervised classes or unsupervised categories, respectively [38], [60], [75]. In supervised classification, the mapping from a set of input data vectors ( , where is the input space dimensionality), to a finite set of discrete class labels ( , where is the total number ...

318 |
Unsupervised optimal fuzzy clustering.
- Gath, Geva
- 2002
Citation Context: ...1]. Gath and Geva described an initialization strategy of unsupervised tracking of cluster prototypes in their 2-layer clustering scheme, in which FCM and fuzzy ML estimation are effectively combined [102]. Kersten suggested that city block distance (or L1 norm) could improve the robustness of FCM to outliers [163]. Furthermore, Hathaway, Bezdek, and Hu extended FCM to a more universal case by using Minko...

315 | Exploratory projection pursuit
- Friedman
- 1987
Citation Context: ...lso indicated its connection to the auto-associative multilayer perceptron. Projection pursuit is another statistical technique for seeking low-dimensional projection structures for multivariate data [97], [144]. Generally, projection pursuit regards the normal distribution as the least interesting projection and optimizes certain indices that measure the degree of nonnormality [97]. PCA can be ...

311 | Refining initial points for k-means clustering
- Bradley, Fayyad
- 1998
Citation Context: ...her considering the convergence speed, they recommended Kaufman's method. Bradley and Fayyad presented a refinement algorithm that first applies k-means to random subsets from the original data [43]. The set formed from the union of the solutions (centroids of the clusters) of the subsets is then clustered repeatedly, setting each subset solution as the initial guess. The starting points for the whol...

300 | Scaling clustering algorithms to large databases.
- Bradley, Fayyad, et al.
- 1998
Citation Context: ...and efforts have been made in order to overcome its disadvantages [142], [218]. 3) Many novel algorithms have been developed to cluster large-scale data sets, especially in the context of data mining [44], [45], [85], [135], [213], [248]. Many of them can scale the computational complexity linearly to the input size and demonstrate the possibility of handling very large data sets. a) Random sampling a...

284 |
Cluster analysis of multivariate data: efficiency versus interpretability of classifications.
- Forgy
- 1965
Citation Context: ...r maximizing the trace of . We can obtain a rich class of criterion functions based on the characteristics of and [75]. The k-means algorithm is the best-known squared error-based clustering algorithm [94], [191]. 1) Initialize a k-partition randomly or based on some prior knowledge, and calculate the cluster prototype matrix. 2) Assign each object in the data set to the nearest cluster. 3...
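
The three-step procedure quoted in this context can be written out directly. A minimal one-dimensional sketch: the `kmeans` helper, the toy data, and the convergence test are illustrative assumptions, not the survey's pseudocode.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """1) initialize prototypes, 2) assign each point to the nearest
    prototype, 3) recompute prototypes as cluster means; repeat."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)              # step 1: initial prototypes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:                         # step 2: nearest-prototype assignment
            i = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[i].append(x)
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]  # step 3: update prototypes
        if new == centers:                       # stop when prototypes settle
            break
        centers = new
    return sorted(centers)

print(kmeans([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], k=2))
```

The squared-error objective decreases at every assignment and update step, which is why the loop always terminates, although only at a local minimum determined by the initialization.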

244 |
Neural networks and principal component analysis: Learning from examples without local minima.
- Baldi, Hornik
- 1988
Citation Context: ...of approximating the input vectors. In this sense, PCA can be realized through a three-layer neural network, called an auto-associative multilayer perceptron, with linear activation functions [19], [215]. In order to extract more complicated nonlinear data structure, nonlinear PCA was developed and one of the typical examples is kernel PCA. As with the methods discussed in Section II-I, kernel PCA firs...

238 |
An Introduction to Simulated Evolutionary Optimization.
- Fogel
- 1994
Citation Context: ... algorithms, are utilized to find the partitions, but they are easily stuck in local minima and therefore cannot guarantee optimality. More complex search methods (e.g., evolutionary algorithms (EAs) [93], SA [165], and Tabu search (TS) [108] are known as stochastic optimization methods, while deterministic annealing (DA) [139], [234] is the most typical deterministic search technique) can explore the...

223 |
ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network
- Carpenter, Grossberg, et al.
- 1991
Citation Context: ...orating two ART modules, which receive input patterns ART and corresponding labels ART , respectively, with an inter-ART module, the resulting ARTMAP system can be used for supervised classifications [56]. The match tracking strategy ensures the consistency of category prediction between two ART modules by dynamically adjusting the vigilance parameter of ART . Also see fuzzy ARTMAP in [55]. A similar ...

214 | Support vector clustering.
- Ben-Hur, Horn, et al.
- 2001
Citation Context: ...s a measure of the denseness for the th cluster. Ben-Hur et al. presented a new clustering algorithm, SVC, in order to find a set of contours used as the cluster boundaries in the original data space [31], [32]. These contours can be formed by mapping back the smallest enclosing sphere in the transformed feature space. RBF is chosen in this algorithm, and, by adjusting the width parameter of RBF, SVC ...

201 |
Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling.
- Alizadeh, et al.
- 2000
Citation Context: ...more attention currently. Generally, cluster analysis of gene expression data is composed of two aspects: clustering genes [80], [206], [260], [268], [283], [288] or clustering tissues or experiments [5], [109], [238]. Results of gene clustering may suggest that genes in the same group have similar functions, or they share the same transcriptional regulation mechanism. Cluster analysis, for grouping ...

195 |
A classification EM algorithm for clustering and two stochastic versions.
- Celeux, Govaert
- 1992
Citation Context: ...e relation between the EM algorithm and the k-means algorithm. Celeux and Govaert proved that the classification EM (CEM) algorithm under a spherical Gaussian mixture is equivalent to the k-means algorithm [58].

166 | Some new indexes of cluster validity.
- Bezdek, Pal
- 1998
Citation Context: ...d refer interested readers to [74], [110], and [150]. However, we will cover more details on how to determine the number of clusters in Section II-M. Some more recent discussion can be found in [22], [37], [121], [180], and [181]. Approaches for fuzzy clustering validity are reported in [71], [104], [123], and [220]. 4) Results interpretation. The ultimate goal of clustering is to provide users with m...

138 | DNA arrays for analysis of gene expression. - Eisen, Brown - 1999

136 |
A clustering technique for summarizing multivariate data
- Ball, Hall
- 1967
Citation Context: ...ategies. But the problem of computational complexity exists, due to the requirement of executing k-means for each value of k. An interesting technique, called ISODATA, developed by Ball and Hall [21], deals with the estimation of k. ISODATA can dynamically adjust the number of clusters by merging and splitting clusters according to some predefined thresholds (in this sense, the problem of identify...

117 | Markovian models for sequential data
- Bengio
- 1999
Citation Context: ...rion functions based on posterior probability and information theory for structural selection of HMMs and cluster validity [182]. More recent advances on HMMs and other related topics are reviewed in [30]. Other model-based sequence clustering includes mixtures of first-order Markov chains [255] and linear models like the autoregressive moving average (ARMA) model [286]. Usually, they are combined with EM...

113 | A Robust Competitive Clustering Algorithm with Applications in Computer Vision.
- Frigui, Krishnapuram
- 1999
Citation Context: ...mpetitive clustering algorithm (RCA) describes a competitive agglomeration process that progresses in stages; clusters that lose in the competition are discarded and absorbed into other clusters [98]. This process is generalized in [42], which attains the number of clusters by balancing the effect between the complexity and the fidelity. Another learning scheme, SPLL, iteratively divides cluster p...

112 |
Fuzzy C-means method for clustering microarray data
- Dembélé, Kastner
- 2003
Citation Context: ...promising performances in tackling different types of gene expression data. Since many genes usually display more than one function, fuzzy clustering may be more effective in exposing these relations [73]. Gene expression data is also important to elucidate the genetic regulation mechanism in a cell. By examining the corresponding DNA sequences in the control regions of a cluster of co-expressed genes...

111 | Robust Clustering Methods: A Unified View
- Davé, Krishnapuram
- 1997
Citation Context: ...s on how to determine the number of clusters in Section II-M. Some more recent discussion can be found in [22], [37], [121], [180], and [181]. Approaches for fuzzy clustering validity are reported in [71], [104], [123], and [220]. 4) Results interpretation. The ultimate goal of clustering is to provide users with meaningful insights from the original data, so that they can effectively solve the proble...

94 | MCLUST: Software for model-based cluster analysis.
- Fraley, Raftery
- 1999
Citation Context: ...Fraley and Raftery described a comprehensive mixture-model based clustering scheme [96], which was implemented as a software package, known as MCLUST [95]. In this case, the component density is multivariate Gaussian, with a mean vector and a covariance matrix as the parameters to be estimated. The covariance matrix for each component can further be pa...

85 |
Basic Local Alignment Search Tool.
- Altschul, et al.
- 1990
Citation Context: ... algorithms are computationally infeasible. In practice, sequence comparison or proximity measure is achieved via some heuristics. Well-known examples include BLAST and FASTA with many variants [10], [11], [224]. The key idea of these methods is to identify regions that may have potentially high matches, with a list of prespecified high-scoring words, at an early stage. Therefore, further search only ...

84 | An analysis of recent work on clustering algorithms - Fasulo - 1999

81 | A general probabilistic framework for clustering individuals and objects.
- Cadez, Gaffney, et al.
- 2000
Citation Context: ...irst-order Markov chains [255] and linear models like the autoregressive moving average (ARMA) model [286]. Usually, they are combined with EM for parameter estimation [286]. Smyth [255] and Cadez et al. [50] further generalize a universal probabilistic framework to model mixed data measurement, which includes both conventional static multivariate vectors and dynamic sequence data. The paradigm models clu...

76 | A survey of fuzzy clustering algorithms for pattern recognition, part II
- Baraldi, Blonda
- 1999
Citation Context: ...panded the topic to the whole field of data mining [33]. Murtagh reported the advances in hierarchical clustering algorithms [210] and Baraldi surveyed several models for fuzzy and neural network clustering [24]. Some more survey papers can also be found in [25], [40], [74], [89], and [151]. In addition to the review papers, comparative research on clustering algorithms is also significant. Rauber, Paralic, ...

76 | Mixture modeling of gene expression data from microarray experiments.
- Ghosh, Chinnaiyan
- 2002
Citation Context: ...s an important criterion for therapy selection and drug discovery [238]. Other applications of clustering algorithms for tissue classification include: mixtures of multivariate Gaussian distributions [105], ellipsoidal ART [287], and graph theory-based methods [29], [247]. In most of these applications, important genes that are tightly related to the tumor types are identified according to their expres...

70 | d2_cluster: a validated method for clustering EST and full-length cDNA sequences
- Burke, Davison, et al.
- 1999
Citation Context: ...rge-scale DNA or protein databases [237], [257]; 3) redundancy decrease of large-scale DNA or protein databases [185]; 4) domain identification [83], [115]; 5) expressed sequence tag (EST) clustering [49], [200]. As described in Section II-J, classical dynamic programming algorithms for global and local sequence alignment are too intensive in computational complexity. This becomes worse because of the...

66 |
A tabu search approach to the clustering problem.
- Al-Sultan
- 1995
Citation Context: ...es part or all of previously selected moves according to the specified size. These moves are forbidden in the current search and are called tabu. In the TS clustering algorithm developed by Al-Sultan [9], a set of candidate solutions is generated from the current solution with some strategy. Each candidate solution represents the allocations of data objects in clusters. The candidate with the optima...

56 |
A near optimal initial seed value selection in K-means algorithm using genetic algorithm.
- Babu, Murty
- 1993
Citation Context: ... GAs-based clustering, in order to reduce the computational complexity. GAs are very useful for improving the performance of k-means algorithms. Babu and Murty used GAs to find good initial partitions [15]. Krishna and Murty combined GA with k-means and developed the GKA algorithm that can find the global optimum [173]. As indicated in Section II-C, the algorithm ELBG uses the roulette mechanism to address ...

55 | Clustering Large Datasets in Arbitrary Metric Spaces
- Ganti, Ramakrishnan, et al.
- 1998
Citation Context: ..., as discussed in Section II-B. This new data structure efficiently captures the clustering information and largely reduces the computational burden. BIRCH was generalized into a broader framework in [101] with two algorithm realizations, named BUBBLE and BUBBLE-FM. d) Density-based approach, e.g., density based spatial clustering of applications with noise (DBSCAN) [85] and density-based clustering...

53 |
GeneRAGE: a robust algorithm for sequence clustering and domain detection
- Enright, Ouzounis
- 2000
Citation Context: ...nes or proteins [119]; 2) structure identification of large-scale DNA or protein databases [237], [257]; 3) redundancy decrease of large-scale DNA or protein databases [185]; 4) domain identification [83], [115]; 5) expressed sequence tag (EST) clustering [49], [200]. As described in Section II-J, classical dynamic programming algorithms for global and local sequence alignment are too intensive in com...

53 |
Mercer kernel based clustering in feature space
- Girolami
Citation Context: ... control the process of producing mean vectors. The authors also illustrated the application of these approaches in case based reasoning systems. An alternative kernel-based clustering approach is in [107]. The problem was formulated to determine an optimal partition to minimize the trace of the within-group scatter matrix in the feature space, where , and is the total number of patterns in the th cluster. ...

52 | Using the fractal dimension to cluster datasets
- Barbará, Chen
- 2000
Citation Context: ...unction, which reflects the comprehensive influence of data objects to their neighborhoods in the corresponding data space. e) Grid-based approach, e.g., WaveCluster [248] and fractal clustering (FC) [26]. WaveCluster assigns data objects to a set of units divided in the original feature space, and employs wavelet transforms on these units, to map objects into the frequency domain. The key idea is that clusters...

49 | General Fuzzy Min-Max Neural Network for Clustering and Classification,
- Gabrys, Bargiela
- 2000
Citation Context: ...ctions are controlled by the orienting subsystem through a vigilance parameter. Simpson employed hyperbox fuzzy sets to characterize clusters [100], [249]. Each hyperbox is delineated by a min and max point, and data points build their relations with the hyperbox through the membership function. The learning process experiences a series of expan...

45 |
Cluster Analysis: A Survey
- Duran, Odell
- 1974
Citation Context: ...del clustering, in which a model is fit to data in advance. Clustering has a long history, with lineage dating back to Aristotle [124]. General references on clustering techniques include [14], [75], [77], [88], [111], [127], [150], [161], [259]. Important survey papers on clustering techniques also exist in the literature. Starting from a statistical pattern recognition viewpoint, Jain, Murty, and Flyn...

39 |
Optimal adaptive k-means algorithm with dynamic adjustment of learning rate
- Chinrungrueng, Sequin
- 1995
Citation Context: ...tor (KMO), replaces the computationally expensive crossover operators and alleviates the complexities coming with them. An adaptive learning rate strategy for the online-mode k-means is illustrated in [63]. The learning rate is exclusively dependent on the within-group variations and can be adjusted without involving any user activities. The proposed enhanced LBG (ELBG) algorithm adopts a roulette mech...
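
For contrast with the batch algorithm, an online-mode k-means skeleton looks like this. A hedged sketch only: it uses the common count-based 1/n learning rate rather than the variance-adaptive rate of [63], and the initialization scheme is an assumption.

```python
def online_kmeans(stream, k):
    """Sequential k-means: each arriving point nudges only the winning
    prototype. The learning rate for prototype i is 1 / n_i (its win
    count); [63] instead adapts the rate to within-group variation."""
    # Initialization is an assumption: spread prototypes slightly around
    # the first point so they can specialize.
    centers = [stream[0] + 1e-3 * i for i in range(k)]
    counts = [1] * k
    for x in stream[1:]:
        i = min(range(k), key=lambda j: (x - centers[j]) ** 2)
        counts[i] += 1
        centers[i] += (x - centers[i]) / counts[i]  # winner-only update
    return sorted(centers)

stream = [1.0, 5.0, 1.1, 4.9, 0.9, 5.1] * 20
print(online_kmeans(stream, k=2))  # prototypes settle near the two cluster means
```

With the 1/n schedule each prototype tracks the running mean of the points it has won, so the method needs only one pass and constant memory, which is what makes it attractive for streaming data.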

39 | Parallel algorithms for hierarchical clustering and applications to split decomposition and parity graph recognition,
- Dahlhaus
- 2000
Citation Context ...larity-based agglomerative clustering (SBAC), employs a mixed data measure scheme that pays extra attention to less common matches of feature values [183]. Parallel techniques for HC are discussed in [69] and [217], respectively. C. Squared Error—Based Clustering (Vector Quantization) In contrast to hierarchical clustering, which yields a successive level of clusters by iterative fusions or divisions,... |

39 | Some competitive learning methods,
- Fritzke
- 1997
Citation Context ...work of unsupervised predictive learning problems, such as vector quantization [60] (see Section II-C), probability density function estimation [38] (see Section II-D), [60], and entropy maximization [99]. It is noteworthy that clustering differs from multidimensional scaling (perceptual maps), whose goal is to depict all the evaluated objects in a way that minimizes the topographical distortion while... |

35 | Using DNA microarrays to study host–microbe interactions.
- Cummings, Relman
- 2000
Citation Context ...(IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 16, NO. 3, MAY 2005, p. 670) Fig. 6. Basic procedure of cDNA microarray technology [68]. Fluorescently labeled cDNAs, obtained from target and reference samples through reverse transcription, are hybridized with the microarray, which is composed of a large number of cDNA clones. Image ... |

34 | Adaptive fuzzy c-shells clustering and detection of ellipses
- Dave
- 1992
Citation Context ...ared to detect different types of cluster shapes, especially contours (lines, circles, ellipses, rings, rectangles, hyperbolas) in a two-dimensional data space. They use the “shells” (curved surfaces [70]) as the cluster prototypes instead of points or surfaces in traditional fuzzy clustering algorithms. In the case of FCS [36], [70], the proposed cluster prototype is represented as a -dimensional hyp... |
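The shell-prototype distance described above, i.e. the deviation of a point from a hyperspherical shell with center c and radius r, can be written directly:

```python
import math

def shell_distance(x, center, radius):
    """Distance used by fuzzy c-shells-style prototypes: how far a
    point lies from a hyperspherical shell with the given center
    and radius, i.e. | ||x - c|| - r |. Points exactly on the
    shell score 0, whether approached from inside or outside."""
    norm = math.sqrt(sum((a - c) ** 2 for a, c in zip(x, center)))
    return abs(norm - radius)
```

Replacing the usual point-to-centroid distance with this shell distance is what lets FCS-type algorithms recover ring- and circle-shaped clusters that squared-error methods miss.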

31 | Initial sequencing and analysis of the human genome
- Consortium
Citation Context ...ances in genome sequencing projects and DNA microarray technologies have been achieved. The first draft of the human genome sequence project was completed in 2001, several years earlier than expected [65], [275]. The genomic sequence data for other organisms (e.g., Drosophila melanogaster and Escherichia coli) are also abundant. DNA microarray technologies provide an effective and efficient way to mea... |

28 | Constructive feed forward ART clustering networks—Part I and II”,
- Baraldi, Alpaydin
- 2002
Citation Context ...d data set into a finite and discrete set of “natural,” hidden data structures, rather than provide an accurate characterization of unobserved samples generated from the same probability distribution [23], [60]. This can make the task of clustering fall outside of the framework of unsupervised predictive learning problems, such as vector quantization [60] (see Section II-C), probability density functi... |

28 | Fast accurate fuzzy clustering through data reduction,”
- Eschrich, Ke, et al.
- 2003
Citation Context ...nsive investigation on the distance measure functions, the effect of weighting exponent on fuzziness control, the optimization approaches for fuzzy partition, and improvements of the drawbacks of FCM [84], [141]. Like its hard counterpart, FCM also suffers from the presence of noise and outliers and the difficulty to identify the initial partitions. Yager and Filev proposed a MM in order to estimate t... |

28 | Amoeba: Hierarchical clustering based on spatial proximity using Delaunay diagram
- Estivill-Castro, Lee
- 2000
Citation Context ...nnect more than two vertices) from the DTG and used a two-phase algorithm that is similar to Chameleon to find clusters [61]. Another DTG-based application, known as the AMOEBA algorithm, is presented in [86]. Graph theory can also be used for nonhierarchical clusters. Zahn’s clustering algorithm seeks connected components as clusters by detecting and discarding inconsistent edges in the minimum spanning ... |
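Zahn's MST idea mentioned above can be sketched end to end. The "inconsistent edge" test used here (length exceeding mean + factor × stdev of all MST edges) is one common choice, assumed for illustration; Zahn's original criterion compares an edge against its local neighborhood:

```python
import math
import statistics
from collections import defaultdict

def zahn_clusters(points, factor=2.0):
    """Zahn-style MST clustering sketch: build a minimum spanning
    tree (Prim), delete 'inconsistent' (unusually long) edges, and
    return the surviving connected components as clusters."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree, edges = {0}, []          # Prim's algorithm
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j, dist(i, j)))
    lengths = [w for _, _, w in edges]
    cut = statistics.mean(lengths) + factor * statistics.pstdev(lengths)
    adj = defaultdict(set)
    for i, j, w in edges:
        if w <= cut:                  # keep only consistent edges
            adj[i].add(j)
            adj[j].add(i)
    seen, clusters = set(), []
    for s in range(n):                # connected components = clusters
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        clusters.append(comp)
    return clusters
```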

26 | Multi-Way Partitioning Via Spacefilling Curves and Dynamic Programming.
- Alpert, Kahng
- 1994
Citation Context ...ltidimensional [8]. Alpert and Kahng considered a solution to the problem as the “inverse” of the divide-and-conquer TSP method and used a linear tour of the modules to form the subcircuit partitions [7]. They adopted the spacefilling curve heuristic for the TSP to construct the tour so that connected modules are still close in the generated tour. A dynamic programming method was used to generate the... |

25 | Clustering of proximal sequence space for the identification of protein families
- Abascal, Valencia
- 2002
Citation Context ...directed paths from to and vice versa. A minimum normalized cut algorithm for detecting protein families and a minimum spanning tree (MST) application for seeking domain information were presented in [1] and [115], respectively. In contrast with the aforementioned proximity-based methods, Guralnik and Karypis transformed protein or DNA sequences into a new feature space, based on the detected subpatt... |

24 | Probabilistic models in cluster analysis
- Bock
- 1996
Citation Context ...Murtagh reported the advances in hierarchical clustering algorithms [210] and Baraldi surveyed several models for fuzzy and neural network clustering [24]. Some more survey papers can also be found in [25], [40], [74], [89], and [151]. In addition to the review papers, comparative research on clustering algorithms is also significant. Rauber, Paralic, and Pampalk presented empirical results for five typical ... |

24 | A practical application of simulated annealing to clustering
- Brown, Huntley
- 1992
Citation Context ...ich means that SA attempts to explore the solution space more completely at high temperatures while favoring solutions that lead to lower energy at low temperatures. SA-based clustering was reported in [47] and [245]. The former illustrated an application of SA clustering to evaluate different clustering criteria and the latter investigated the effects of input parameters on the clustering performance. ... |
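The temperature-dependent behavior described above comes from the Metropolis acceptance rule, which the context paraphrases. A minimal sketch:

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis acceptance rule used by simulated-annealing
    clustering: always accept a move that lowers the cost
    (delta <= 0); accept a worsening move with probability
    exp(-delta / T), so worse solutions survive mostly at high T."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

At high T the exponential is close to 1 and almost any move is taken (broad exploration); as T cools the rule degenerates into greedy descent.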

23 | Nonparametric genetic clustering: Comparison of validity indices
- Bandyopadhyay, Maulik
- 2001
Citation Context ...pth and refer interested readers to [74], [110], and [150]. However, we will cover more details on how to determine the number of clusters in Section II-M. Some more recent discussion can be found in [22], [37], [121], [180], and [181]. Approaches for fuzzy clustering validity are reported in [71], [104], [123], and [220]. 4) Results interpretation. The ultimate goal of clustering is to provide users ... |

23 | A genetic algorithm approach to cluster analysis
- Cowgill, Harvey, et al.
- 1999
Citation Context ...pplications have appeared based on a similar framework. They are different in the meaning of an individual in the population, encoding methods, fitness function definition, and evolutionary operators [67], [195], [273]. The algorithm CLUSTERING in [273] includes a heuristic scheme for estimating the appropriate number of clusters in the data. It also uses a nearest-neighbor algorithm to divide data in... |

23 | A Fast and Robust General Purpose Clustering Algorithm
- Estivill-Castro, Yang
Citation Context ... clusters typical of k-means. PAM utilizes real data points (medoids) as the cluster prototypes and avoids the effect of outliers. Based on the same consideration, a k-medoids algorithm is presented in [87] b... |
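The medoid-as-prototype idea above can be sketched with a tiny PAM-style swap loop. This is a simplified illustration, not the algorithm of [87]; the first-k initialization and swap schedule are assumptions:

```python
import math
from itertools import product

def k_medoids(points, k, iters=10):
    """Tiny PAM-style k-medoids sketch: medoids are actual data
    points, so outliers cannot drag a prototype off the data.
    Repeatedly try swapping a medoid with a non-medoid and keep
    any swap that lowers the total point-to-nearest-medoid cost."""
    def cost(meds):
        return sum(min(math.dist(p, points[m]) for m in meds)
                   for p in points)
    medoids = list(range(k))          # naive init: first k points
    for _ in range(iters):
        improved = False
        for m, c in product(range(k), range(len(points))):
            if c in medoids:
                continue
            trial = medoids[:m] + [c] + medoids[m + 1:]
            if cost(trial) < cost(medoids):
                medoids, improved = trial, True
        if not improved:
            break
    return [points[m] for m in medoids]
```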

21 | Redefining clustering for high-dimensional applications
- Aggarwal, Yu
- 2002
Citation Context ...eved by constructing the best cutting hyperplanes through a set of projections. The time complexity for OptiGrid is in the interval of and . ORCLUS (arbitrarily ORiented projected CLUster generation) [2] defines a generalized projected cluster as a densely distributed subset of data objects in a subspace, along with a subset of vectors that represent the subspace. The dimensionality of the subspace i... |

19 | A new kernel-based fuzzy clustering approach: support vector clustering with cell growing,
- Chiang, Hao
- 2003
Citation Context ...rchical clusters. When some points are allowed to lie outside the hypersphere, SVC can deal with outliers effectively. An extension, called multiple spheres support vector clustering, was proposed in [62], which combines the concept of fuzzy membership. Kernel-based clustering algorithms have many advantages. 1) It is more possible to obtain a linearly separable hyperplane in the high-dimensional, or ... |

19 | Fuzzy Clustering for Symbolic Data - El-Sonbaty, Ismail - 1998

19 | Discovering patterns in spatial data using evolutionary programming
- Ghozeil, Fogel
- 1996
Citation Context ...c distance for a given dissimilarity with Euclidean norm [190]. They suggested an order-based GA to solve the problem. Clustering algorithms based on ESs and EP are described and analyzed in [16] and [106], respectively. TS is a combinatory search technique that uses the tabu list to guide the search process consisting of a sequence of moves. The tabu list stores part or all of previously selected move... |

16 | Clustering protein sequences-structure prediction by transitive homology
- Bolten, Schliep, et al.
- 2001
Citation Context ...constructing a directed graph, in which each protein sequence corresponds to a vertex and edges are weighted based on the alignment score between two sequences and the self-alignment score of each sequence [41]. Clusters were formed through the search of strongly connected components (SCCs), each of which is a maximal subset of vertices such that for each pair of vertices and in the subset, there exist two direct... |
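The SCC search described above is a standard graph routine; Kosaraju's two-pass algorithm is one common way to implement it (the choice of Kosaraju over Tarjan here is an illustrative assumption):

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's two-pass SCC algorithm: DFS once to get a
    finish-time order, then DFS over the reversed graph in that
    order; each second-pass tree is one SCC (one cluster)."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)
        nodes.update((u, v))
    order, seen = [], set()
    def dfs1(u):                      # first pass: finish order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)
    seen, sccs = set(), []
    for u in reversed(order):         # second pass on reversed graph
        if u in seen:
            continue
        stack, comp = [u], []
        seen.add(u)
        while stack:
            w = stack.pop()
            comp.append(w)
            for v in rev[w]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sccs.append(sorted(comp))
    return sccs
```

In the sequence-clustering setting of [41], vertices would be sequences and a directed edge would record a sufficiently good alignment score, so each SCC is a set of mutually reachable (transitively homologous) sequences.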

15 | Hypersphere ART and ARTMAP for unsupervised and supervised incremental learning
- Anagnostopoulos, Georgiopoulos
Citation Context ...hyperellipsoid geometrically. GA does not inherit the offline fast learning property of FA, as indicated by Anagnostopoulos et al. [13], who proposed different ART architectures: hypersphere ART (HA) [12] for hyperspherical clusters and ellipsoid ART (EA) [13] for hyperellipsoidal clusters, to explore a more efficient representation of clusters, while keeping important properties of FA. Baraldi and Al... |

15 | Hierarchical Unsupervised Fuzzy Clustering
- Geva
- 1999
Citation Context ...ow to determine the number of clusters in Section II-M. Some more recent discussion can be found in [22], [37], [121], [180], and [181]. Approaches for fuzzy clustering validity are reported in [71], [104], [123], and [220]. 4) Results interpretation. The ultimate goal of clustering is to provide users with meaningful insights from the original data, so that they can effectively solve the problems enco... |

14 | A clustering performance measure based on fuzzy set decomposition
- Backer, Jain
- 1981
Citation Context ...onpredictive clustering is a subjective process in nature, which precludes an absolute judgment as to the relative efficacy of all clustering techniques [23], [152]. As pointed out by Backer and Jain [17], “in cluster analysis a group of objects is split up into a number of more or less homogeneous subgroups on the basis of an often subjectively chosen measure of similarity (i.e., chosen subjectively ... |

12 | A New Neural Network for Cluster-Detection-and-Labeling
- Eltoft, deFigueiredo
- 1998
Citation Context ..., many other neural network architectures are developed for clustering. Most of these architectures utilize prototype vectors to represent clusters, e.g., the cluster detection and labeling network (CDL) [82], HEC [194], and SPLL [296]. HEC uses a two-layer network architecture to estimate the regularized Mahalanobis distance, which is equated to the Euclidean distance in a transformed whitened space. CDL... |

6 | Survey of clustering data mining techniques. [Online]. Available: http://www.accrue.com/products/rp_cluster_review.pdf; http://citeseer.nj.nec.com/berkhin02survey.html
- Berkhin
- 2001
Citation Context ...nd He investigated applications of clustering algorithms for spatial database systems [171] and information retrieval [133], respectively. Berkhin further expanded the topic to the whole field of data mining [33]. Murtagh reported the advances in hierarchical clustering algorithms [210] and Baraldi surveyed several models for fuzzy and neural network clustering [24]. Some more survey papers can also be found in [25]... |

5 | Numerical convergence and interpretation of the fuzzy c-shells clustering algorithms
- Bezdek, Hathaway
- 1992
Citation Context ...s) in a two-dimensional data space. They use the “shells” (curved surfaces [70]) as the cluster prototypes instead of points or surfaces in traditional fuzzy clustering algorithms. In the case of FCS [36], [70], the proposed cluster prototype is represented as a -dimensional hyperspherical shell ( for circles), where is the center, and is the radius. A distance function is defined as to measure the di... |

5 | Generalized competitive clustering for image segmentation
- Boujemaa
- 2000
Citation Context ...describes a competitive agglomeration process that progresses in stages, and clusters that lose in the competition are discarded and absorbed into other clusters [98]. This process is generalized in [42], which attains the number of clusters by balancing the effect between the complexity and the fidelity. Another learning scheme, SPLL, iteratively divides cluster prototypes from a single prototype unt... |

5 | A hypergraph based clustering algorithm for spatial data sets - Cherng, Lo |

5 | A clustering algorithm using the Tabu search approach with simulated annealing
- Chu, Roddick
- 2000
Citation Context ... also proposed. A tabu list is used in a GA clustering algorithm to preserve the variety of the population and avoid repeating computation [243]. An application of SA for improving TS was reported in [64]. The algorithm further reduces the possible moves to local optima. The main drawback that plagues the search techniques-based clustering algorithms is the parameter selection. More often than not, se... |
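The tabu-list mechanism recurring in the contexts above can be sketched generically. This is a skeleton of plain tabu search, not the hybrid of [64]; the `neighbors`, `cost`, `steps`, and `tenure` arguments are illustrative assumptions:

```python
from collections import deque

def tabu_search(start, neighbors, cost, steps=100, tenure=5):
    """Skeleton of tabu search as used for clustering moves: at
    each step take the best non-tabu neighbor (even if worse than
    the current solution), remember it on a fixed-length tabu list
    to block short cycles, and track the best solution seen."""
    current = best = start
    tabu = deque([start], maxlen=tenure)
    for _ in range(steps):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # may be uphill
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

Accepting uphill moves while forbidding recent states is what lets the search escape the local optima that plague plain hill climbing; the tenure (tabu-list length) is exactly the kind of sensitive parameter the context warns about.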

4 | c-means clustering with the l1 and l∞ norms - Bobrowski, Bezdek - 1991

3 | Soft-to-hard model transition in clustering: A review
- Baraldi, Schenato
- 1999
Citation Context ...[33]. Murtagh reported the advances in hierarchical clustering algorithms [210] and Baraldi surveyed several models for fuzzy and neural network clustering [24]. Some more survey papers can also be found in [25], [40], [74], [89], and [151]. In addition to the review papers, comparative research on clustering algorithms is also significant. Rauber, Paralic, and Pampalk presented empirical results for five ty... |

3 | A comparison of kernel methods for instantiating case based reasoning systems, Advanced Engineering Informatics
- Fyfe, Corchado
- 2002
Citation Context ...e the mean vector whose corresponding is 1 where . 4) Adapt the coefficients for each as for for 5) Repeat steps 2)–4) until convergence is achieved. Two variants of kernel k-means were introduced in [66], motivated by SOFM and ART networks. These variants consider effects of neighborhood relations, while adjusting the cluster assignment variables, and use a vigilance parameter to control the process ... |
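The kernel k-means scheme referenced above can be sketched as follows. Cluster means exist only implicitly in feature space, so the squared distance from phi(x) to the mean of cluster C is K(x,x) - 2·avg_y K(x,y) + avg_{y,z} K(y,z) over C. This is plain kernel k-means, not the SOFM/ART variants of [66]; the RBF kernel and its `gamma` are assumed for illustration:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_kmeans(points, assign, kernel=rbf, iters=20):
    """Kernel k-means: reassign each point to the cluster whose
    implicit feature-space mean is nearest, using only kernel
    evaluations. `assign` is an initial label per point."""
    assign = list(assign)
    k = max(assign) + 1
    for _ in range(iters):
        members = [[i for i, a in enumerate(assign) if a == c]
                   for c in range(k)]
        def d2(i, c):
            m = members[c]
            if not m:
                return float("inf")
            s1 = sum(kernel(points[i], points[j]) for j in m) / len(m)
            s2 = sum(kernel(points[j], points[l])
                     for j in m for l in m) / len(m) ** 2
            return kernel(points[i], points[i]) - 2 * s1 + s2
        new = [min(range(k), key=lambda c: d2(i, c))
               for i in range(len(points))]
        if new == assign:
            break
        assign = new
    return assign
```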

3 | A Tabu search approach to the fuzzy clustering problem
- Delgado, Skármeta, et al.
- 1997
Citation Context ...sses with the packing and releasing procedures [266]. They also used a secondary tabu list to keep the search from trapping into the potential cycles. A fuzzy version of TS clustering can be found in [72]. SA is also a sequential and global search technique and is motivated by the annealing process in metallurgy [165]. SA allows the search process to accept a worse solution with a certain probability.... |

3 | Model-based clustering, discriminant analysis, and density estimation
- 2002
Citation Context ...r EM algorithm are the sensitivity to the selection of initial parameters, the effect of a singular covariance matrix, the possibility of convergence to a local optimum, and the slow convergence rate [96], [196]. Variants of EM for addressing these problems are discussed in [90] and [196]. A valuable theoretical note is the relation between the EM algorithm and the k-means algorithm. Celeux and Govaert... |

1 | Recent directions in netlist partitioning: A survey
- 1995
Citation Context ...e object of the partitions is to minimize the number of connections among the components. One strategy for solving the problem is based on geometric representations, either linear or multidimensional [8]. Alpert and Kahng considered a solution to the problem as the “inverse” of the divide-and-conquer TSP method and used a linear tour of the modules to form the subcircuit partitions [7]. They adopted ... |

1 | ART and ARTMAP for incremental unsupervised and supervised learning
- 2001
Citation Context ...h cluster is modeled with a Gaussian distribution and represented as a hyperellipsoid geometrically. GA does not inherit the offline fast learning property of FA, as indicated by Anagnostopoulos et al. [13], who proposed different ART architectures: hypersphere ART (HA) [12] for hyperspherical clusters and ellipsoid ART (EA) [13] for hyperellipsoidal clusters, to explore a more efficient representation ... |

1 | A support vector clustering method
Citation Context ...asure of the denseness for the th cluster. Ben-Hur et al. presented a new clustering algorithm, SVC, in order to find a set of contours used as the cluster boundaries in the original data space [31], [32]. These contours can be formed by mapping back the smallest enclosing sphere in the transformed feature space. RBF is chosen in this algorithm, and, by adjusting the width parameter of RBF, SVC can fo... |

1 | Handbook of Pattern Recognition and Computer Vision
- Pau, et al.
- 1993
Citation Context ...t clustering structures, in order to provide a reference, to decide which one may best reveal the characteristics of the objects. We will not survey the topic in depth and refer interested readers to [74], [110], and [150]. However, we will cover more details on how to determine the number of clusters in Section II-M. Some more recent discussion can be found in [22], [37], [121], [180], and [181]. App... |