### Table 1: Number of true positives and false positives obtained by applying the t-test and the local clustering algorithm to the artificial data set

2004

"... In PAGE 13: ...the threshold of the membership value is 0.8. The activation map obtained with the local clustering algorithm shows that the activated voxels tend to cluster with one another, while with the t-test the activated voxels do not form clusters and show more scattered false positives. The numbers of true positives and false positives obtained in slice 3 are summarized in Table 1. Our approach detects more true positives with a lower number of false positives.... ..."
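The t-test baseline discussed in this excerpt can be sketched as a plain voxelwise two-sample t statistic, thresholded independently at each voxel. This is a generic reconstruction, not the paper's exact pipeline; the function name `tstat_map` and the array shapes are assumptions. Because each voxel is tested in isolation, spatial structure is ignored, which is why isolated (scattered) false positives appear:

```python
import numpy as np

def tstat_map(active, rest):
    """Voxelwise two-sample (Welch) t statistic between 'active' and 'rest'
    scans, each of shape (n_scans, n_voxels). Thresholding the result voxel
    by voxel ignores spatial structure, unlike the clustering approach."""
    m1, m2 = active.mean(axis=0), rest.mean(axis=0)
    v1, v2 = active.var(axis=0, ddof=1), rest.var(axis=0, ddof=1)
    n1, n2 = len(active), len(rest)
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)
```

A voxel is then declared "activated" when its statistic exceeds a chosen threshold, with no regard to whether its neighbors are also activated.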

### Table II: Recognition comparison for the three different algorithms used. For local ICA, the results of the best cluster size are shown.

### Table 5 shows the confusion matrices obtained on the Classic300 dataset. Even though the dataset is separable, the low number of documents per cluster makes the problem somewhat difficult for fskmeans and spkmeans, while hard-moVMF performs much better owing to its model flexibility. The soft-moVMF algorithm performs appreciably better than the other three algorithms: the low number of documents does not seem to pose a problem for it, and it produces an almost perfect clustering of this dataset. Thus, despite the low number of points per cluster, the superior modeling power of our moVMF-based algorithms keeps them from getting trapped in the inferior local minima that affect the other algorithms, resulting in better clusterings.

"... In PAGE 18: ... Table 5: Comparative confusion matrices for 3 clusters of Classic300. The confusion matrices obtained on the Classic400 dataset are shown in Table 6.... ..."
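For context, spkmeans, the hard-assignment baseline the moVMF variants are compared against, can be sketched as cosine-similarity k-means on the unit sphere. This is a minimal generic sketch under stated assumptions: the farthest-first initialization is an illustrative choice, not the paper's setup:

```python
import numpy as np

def spkmeans(X, k, iters=20, seed=0):
    """Spherical k-means: hard clustering of documents represented as
    unit-length vectors, using cosine similarity (the dot product on
    the unit sphere) instead of Euclidean distance."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto unit sphere
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        sims = (X @ np.array(centers).T).max(axis=1)
        centers.append(X[sims.argmin()])              # farthest-first init
    centers = np.array(centers)
    for _ in range(iters):
        labels = (X @ centers.T).argmax(axis=1)       # nearest center by cosine
        for j in range(k):
            m = X[labels == j].sum(axis=0)
            n = np.linalg.norm(m)
            if n > 0:
                centers[j] = m / n                    # renormalized mean direction
    return labels
```

The soft variants replace the hard `argmax` assignment with posterior membership probabilities under a mixture of von Mises-Fisher distributions, which is the extra model flexibility the excerpt credits.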

### Table 1: Conceptual Clustering Example

2003

"... In PAGE 9: ... Given this representation of local ontologies, we can proceed in creating clusters or groups of ontologies by using a conceptual clustering algorithm. Consider the example shown in Table 1. The shaded cell regions indicate ... ..."

Cited by 1

### Table 1: Comparison of different algorithms on performance, convergence, and stability. Five approaches are compared on time performance, convergence, and stability. The K-means algorithm has better time performance than any of the genetic algorithms, but it suffers from convergence to local optima and dependence on initialization. Among the four genetic clustering approaches, the Hybrid approach always has the best time performance, while FGKA performs well when the mutation probability is large and IGKA performs well when it is small. IGKA and FGKA outperform GKA. All four genetic algorithms show similar convergence behavior, and all four are independent of initialization.

2004
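The initialization dependence attributed to K-means above can be illustrated with a plain Lloyd's iteration plus the standard multiple-restart remedy. This is a generic sketch, not one of the compared approaches; the `kmeans_restarts` helper and its restart count are illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's k-means: fast, but the final partition depends on the random
    initial centers, which is the weakness the genetic variants target."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)     # recompute center
    inertia = ((X - centers[labels]) ** 2).sum()  # within-cluster sum of squares
    return labels, inertia

def kmeans_restarts(X, k, n_init=10):
    """Cheap remedy for initialization dependence: keep the best of several
    independently seeded runs, judged by inertia."""
    return min((kmeans(X, k, seed=s) for s in range(n_init)),
               key=lambda r: r[1])
```

Restarts only reduce the risk of a poor local optimum; the genetic approaches (GKA, FGKA, IGKA) instead search the space of partitions globally, trading running time for initialization independence.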

### Table 1. Cluster statistics

2002

"... In PAGE 5: ... This model can be used as a basis for protein modeling and prediction of local and global protein structures. Results Building block database creation and clustering Table 1 gives some building block cluster statistics for each protein class and cutting level (Haspel et al. 2003).... In PAGE 5: ... These cases usually represent building blocks derived from the same protein family. Table 1 gives sequence information for each cluster, the number of sequences, and their average sequence length. Figure 2 shows two examples of clusters, using MUSTA, our multiple structural comparison algorithm (Leibowitz et al.... ..."

### Table 1. Dependence of the average number of loops in the self-avoiding polymer on the number of monomers, and on the average number of branches. Data is from the cubic lattice, and the branching probability is fixed to be 0.1.

1979

### Table 3: Comparison of the total number of region migrations and the number of region clusters at the end of simulation in WAN setting.

"... In PAGE 10: ...Table 3: Comparison of the total number of region migrations and the number of region clusters at the end of simulation in WAN setting. In order to differentiate the inter-server communication costs further, Table 3 shows a comparison under the WAN environment of all our dynamic partitioning algorithms in terms of total number of region migrations and the number of strongly connected region clusters at the end of simulation. We see that the Locality Aware algorithm (Loc) maintains the best region locality by keeping the number of region clusters close to the initial value of 100 region clusters (corresponding to the initial block partitioning on 100 servers).... ..."

### Table 2: Experimental results for 2.5-opt-EEs-100, 2.5-opt-ACs, 2-p-opt and 1-shift on clustered instances of size 300. Each algorithm is allowed to run until it reaches a local optimum. The table gives mean and standard deviation (s.d.) of final solution cost and computation time in seconds. The results are obtained on 100 instances for each probability level.

2007

Cited by 5