### Table 2: Comparing clusters obtained by RDBC and DBSCAN on Iris Data

"... In PAGE 4: ... 2.1: Graphical Representations of Results. Table 2 shows the clustering result and comparison between the two clustering algorithms. RDBC has the same time complexity as that of DBSCAN.... ..."

### Table 2. Comparing clusters obtained by RDBC and DBSCAN on Monash University data

"... In PAGE 10: ... Shown are example URLs that are extracted from the result. Table 2 and Fig. 1 show the clustering result and efficiency comparison between the DBSCAN and RDBC clustering algorithms.... ..."

### Table 2. Comparing clusters obtained by RDBC and DBSCAN on Monash University Data

2001

"... In PAGE 6: ... If DBSCAN is used, all these pages belong to the same cluster. Table 2 and Figure 3 show the clustering result and efficiency comparison between the two clustering algorithms. We see that RDBC, while having about the same time complexity as DBSCAN, obtains more clusters for this data set, which is more reasonable, and generates clusters with a more even distribution than DBSCAN.... ..."

Cited by 10
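The snippets above compare RDBC against plain DBSCAN but never show the base algorithm itself. As a reference point, here is a minimal pure-Python DBSCAN sketch on 1-D points; the parameter names `eps`/`min_pts` follow the standard DBSCAN formulation (not necessarily the cited papers' notation), and the data is made up for illustration.

```python
# Minimal DBSCAN sketch on 1-D points (illustrative only; real
# implementations use spatial indexes to speed up neighbor queries).

def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i]."""
    return [j for j, p in enumerate(points) if abs(p - points[i]) <= eps]

def dbscan(points, eps, min_pts):
    UNVISITED, NOISE = None, -1
    labels = [UNVISITED] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = NOISE            # may later become a border point
            continue
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:                     # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == NOISE:
                labels[j] = cluster      # border point: claimed, not expanded
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:
                seeds.extend(j_neighbors)
        cluster += 1
    return labels

# Two dense groups separated by a gap, plus one outlier (label -1):
pts = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2, 9.9]
print(dbscan(pts, eps=0.5, min_pts=3))  # → [0, 0, 0, 0, 1, 1, 1, -1]
```

The cluster-size skew the snippet mentions follows directly from this expansion step: any chain of eps-close points is absorbed into one cluster, which is what RDBC's recursive re-application of DBSCAN with varying parameters is meant to break up.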


### Table 2: Execution cost of the algorithms (sec): k-medoids, DBSCAN, -Link, Single-Link

"... In PAGE 10: ... We measured the execution cost of k-medoids, DBSCAN, -Link, and the hierarchical Single-Link algorithm, using the datasets of the previous experiment. Table 2 shows the costs of the four methods. For k-medoids, we include the cost of finding only one local optimum; even in this case the algorithm is much slower than the other methods (usually producing much worse results, too).... ..."
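A cost table like the one described is produced by timing each algorithm on the same dataset. A minimal harness sketch follows; the algorithms here (`sorted`, `max`) are placeholders standing in for real clustering routines, not anything from the cited paper.

```python
# Timing harness sketch for tabulating per-algorithm execution cost
# in seconds. Taking the best of several repeats reduces noise from
# the OS scheduler; real benchmarks would also vary the dataset size.
import time

def time_algorithm(fn, data, repeats=3):
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return best

data = list(range(10_000))
costs = {name: time_algorithm(fn, data)
         for name, fn in [("sorted", sorted), ("max", max)]}
for name, sec in costs.items():
    print(f"{name:10s} {sec:.6f} sec")
```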

### Table 2. DBSCAN: results

### Table 1: Optimal partitioning found by S_Dbw for each algorithm: K-means, DBSCAN, CURE (r=10, a=0.3)

"... In PAGE 6: ... We use three well-known algorithms, one from each of the popular algorithm categories: K-means (partitional), DBSCAN (density-based) and CURE (hierarchical). Table 1a and Table 1b present S_Dbw values for the clustering schemes for DataSet1 and Real_Data1 (see Figure 4a and Figure 4e respectively) found by K-means, DBSCAN and CURE respectively. More specifically, we consider the clustering schemes revealed by the algorithms mentioned above while their input parameter values are depicted in Table 1. In the case of DataSet1, all three algorithms propose four clusters as the optimal clustering scheme (see Table 1a). Similarly, considering any of the three algorithms, the proposed validity index selects three clusters as the best partitioning for Real_Data1 (see Table 1b). In some cases, however, an algorithm may partition a data set into the correct number of clusters but in a wrong way.... In PAGE 6: ... On the other hand, DBSCAN finds the correct three partitions of the data set (Figure 9(c)). Table 1c presents the behavior of S_Dbw in each of the above cases. If we consider the K-means clustering results, the index proposes four clusters as the best partitioning for our data set.... ..."
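The workflow described in the snippet boils down to: run each algorithm over a range of parameter settings, score every resulting scheme with the validity index, and keep the scheme with the minimum value. A generic sketch follows; `within_variance` is a deliberately simple stand-in index (lower is better), not the real S_Dbw, which combines intra-cluster variance with inter-cluster density.

```python
# Model selection by validity index: score each candidate clustering
# scheme and pick the one minimizing the index.

def within_variance(clusters):
    """Stand-in validity index: mean squared distance to cluster mean."""
    total, n = 0.0, 0
    for c in clusters:
        mu = sum(c) / len(c)
        total += sum((x - mu) ** 2 for x in c)
        n += len(c)
    return total / n

def select_partitioning(schemes, index_fn=within_variance):
    """schemes: {label: clustering}; returns (label, clustering) with min index."""
    return min(schemes.items(), key=lambda kv: index_fn(kv[1]))

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
schemes = {
    2: [[0.0, 0.1, 0.2], [5.0, 5.1, 5.2]],   # correct split
    1: [data],                                # everything in one cluster
    3: [[0.0, 0.1], [0.2, 5.0], [5.1, 5.2]],  # right count, wrong way
}
best_k, _ = select_partitioning(schemes)
print(best_k)  # → 2
```

Note the third scheme mirrors the snippet's point: the right number of clusters partitioned "in a wrong way" still scores worse than the correct two-cluster split.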

### Table 3. COBWEB results (using concepts) for an increasing number of documents

"... 6.3.1. Evaluation of the results: We clustered documents using DBSCAN and semantics first (Table 1) and then DBSCAN and ..."

2004

"... In PAGE 32: ... The entropy is again lower than the maximum entropy (a decrease of 52%). As can be deduced from the results in Table 3, after a certain number of documents (approximately 100) have been clustered, the entropy remains constant. This means that the clustering algorithm needs many documents as input in order to perform well.... ..."

Cited by 1
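The entropy measure referenced in the snippet is typically computed against known topic labels: low entropy means clusters are topically pure. A sketch of the standard size-weighted form follows; the exact weighting in the cited paper may differ, and the topic labels are invented examples.

```python
# Size-weighted entropy of a clustering against known class labels.
# Each inner list holds the class labels of one cluster's members;
# 0.0 means every cluster is pure, higher values mean more mixing.
from math import log2

def cluster_entropy(clusters):
    n = sum(len(c) for c in clusters)
    total = 0.0
    for c in clusters:
        h = 0.0
        for label in set(c):
            p = c.count(label) / len(c)
            h -= p * log2(p)             # Shannon entropy of this cluster
        total += (len(c) / n) * h        # weight by cluster size
    return total

# Pure clusters score 0.0; mixing topics raises the score:
print(cluster_entropy([["sports"] * 5, ["politics"] * 5]))      # → 0.0
print(cluster_entropy([["sports", "politics"] * 3, ["sports"]]))  # ≈ 0.857
```

This is consistent with the snippet's observation: once enough documents are clustered, the per-cluster label distributions stabilize, so the entropy stops changing.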