### Table 1: Illustrative cluster ensemble problem with r = 4, k(1,...,3) = 3, and k(4) = 2: Original label vectors (left) and equivalent hypergraph representation with 11 hyperedges (right). Each cluster is transformed into a hyperedge.

2002

"... In PAGE 13: ...Figure 4 illustrates meta-clustering for the example given in Table1 where r = 4, k = 3, k(1,.... ..."

Cited by 144
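The caption above describes the standard conversion of a cluster ensemble into a hypergraph: every cluster in every labeling becomes one binary indicator column (a hyperedge). A minimal sketch of that transformation follows; the label vectors are invented for illustration (the cited table's actual vectors are not reproduced in this excerpt), but they match the caption's shape: r = 4 labelings of 7 objects with 3 + 3 + 3 + 2 = 11 hyperedges in total.

```python
import numpy as np

# Hypothetical label vectors for r = 4 clusterings of 7 objects.
# Label systems need not correspond across clusterings.
labelings = [
    [1, 1, 1, 2, 2, 3, 3],  # clustering 1: k = 3
    [2, 2, 2, 3, 3, 1, 1],  # clustering 2: k = 3
    [1, 1, 2, 2, 3, 3, 3],  # clustering 3: k = 3
    [1, 2, 2, 1, 1, 2, 2],  # clustering 4: k = 2
]

def to_hyperedges(labelings):
    """Turn each cluster of each labeling into a binary indicator column."""
    cols = []
    for labels in labelings:
        for c in sorted(set(labels)):
            cols.append([1 if l == c else 0 for l in labels])
    return np.array(cols).T  # shape: (objects, hyperedges)

H = to_hyperedges(labelings)
print(H.shape)  # (7, 11): 3 + 3 + 3 + 2 = 11 hyperedges
```

Each row of H sums to r = 4, since every object belongs to exactly one cluster per clustering.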

### Table 1: Clustering ensemble and consensus solution

"... In PAGE 7: ... Correspondence problem is emphasized by different label systems used by the partitions. Table1 shows the expected values of latent variables after 6 iterations of the EM algorithm and the resulting consensus clustering. In fact, stable combination appears even after the third iteration, and it corresponds to the true underlying structure of the data.... In PAGE 10: ... Figure 3 shows the error as a function of k for different consensus functions for the galaxy data. It is also interesting to note that, as expected, the average error of consensus clustering was lower than average error of the k-means clusterings in the ensemble ( Table1 ) when k is chosen to be equal to the true number of clusters. Moreover, the clustering error obtained by EM and MCLA algorithms with k=4 for Biochemistry data was the same as found by the advanced supervised classifiers applied to this dataset [28].... ..."

2005

Cited by 8

### Table 1: Types of cluster ensembles (column headers: Ensemble, Structure, Clustering, Number of Samples, Noise)

"... In PAGE 8: ...We compare the (assumed to be) true labels with the labels obtained through the 14 clustering methods ( Table1 ). The ar index was calculated for the 14 methods and for the 6 data sets.... ..."

### Table 2.2: An ensemble dataset for a cluster ensemble with four component clustering solutions and six data points.

2007

### Table 1: Differences of ensemble methods.

"... In PAGE 3: ... We note that the stacking method is designed for the regression problem. Table1 summarizes the differences of the existing and the new ensemble meth- ods. The new method is discussed in Section 3.... ..."

### Table 2. Accuracy (NMI) scores for base and ensemble clustering methods.

"... In PAGE 9: ... We suggest that the applicable of a diago- nal dominance reduction technique, which limits the influence of self-similarity, contributes to this improvement. In addition, the results in Table2 show that in several cases correspondence clustering after prototype reduction (COR-RED) performed better than clustering on the full kernel matrix. We suggest that the use of neighbourhood centroids as prototypes allows the production of a robust partition that may not be easily obtained by clustering on the full dataset using... In PAGE 10: ...able 2. Accuracy (NMI) scores for base and ensemble clustering methods. ing phase allows this partition to be refined to produce a more accurate final solution. Both KM and AA exhibited considerable instability due to the sensitivity of these algorithms to the choice of initial clusters, which is reflected in the high deviation scores in Table2 . In contrast, the ensemble methods tend to be far more robust, frequently producing identical or highly similar partitions.... ..."

### Table 3. Mean running times (in seconds) for ensemble clustering procedures.

"... In PAGE 10: ...3 Comparison of Algorithm Efficiency Another important aspect of our evaluation was to assess the computational gains resulting from prototype reduction. Table3 provides a list of the mean running times for the ensemble clustering experiments, which were performed on a Pentium IV 3.... ..."