### Table 1: The MCL algorithm.

1999

"... In PAGE 12: ... Afterwards, the importance factors are normalized so that they sum up to 1 (hence define a discrete probability distribution). Table1 summarizes the MCL algorithm. It is known [57] that under mild assumptions (which hold in our work), the sample set converges to the true posterior Bel#28x t #29 as m goes to infinity, with a convergence speed in O#28 1 p m #29.... ..."

Cited by 384
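The excerpt above describes the core particle-filter update of Monte Carlo Localization: propagate each sample through a motion model, weight it by the measurement likelihood, normalize the importance factors so they sum to 1, and resample. A minimal sketch under stated assumptions — the `motion_model` and `sensor_model` callables and their signatures are placeholders, not taken from the cited paper:

```python
import random

def mcl_update(samples, motion, measurement, motion_model, sensor_model, m=1000):
    """One MCL step: predict, weight, normalize, resample.

    Assumed signatures (hypothetical, for illustration only):
      motion_model(x, u) -> new pose drawn from p(x' | x, u)
      sensor_model(z, x) -> likelihood p(z | x)
    """
    # Predict: propagate each sample through the motion model.
    predicted = [motion_model(x, motion) for x in samples]
    # Weight: the importance factor of a sample is its measurement likelihood.
    weights = [sensor_model(measurement, x) for x in predicted]
    # Normalize so the importance factors sum to 1 (a discrete distribution).
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample m samples with probability proportional to their weights.
    return random.choices(predicted, weights=weights, k=m)
```

As the excerpt notes, the resulting sample set converges to the true posterior Bel(x_t) as m grows, at rate O(1/√m).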

### Table 3 The Mixture MCL algorithm, here using the third variant of dual MCL (see Table 2).

1999

"... In PAGE 28: ...ompare this graph to Figures 10 and 12. These results were obtained through simulation. sample with probability 1 ; using standard MCL, and with probability using a dual. Table3 states the Mixture-MCL algorithm, using the third variant for calculating importance factors in the dual. As is easy to be seen, the Mixture-MCL algorithm combines the MCL algorithms in Table 1 with the dual algorithm in Table 2, using the (probabilistic) mixing ratio .... ..."

Cited by 384
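The mixing step described in the excerpt can be sketched directly: each new sample comes from standard MCL with probability 1 − φ and from dual MCL with probability φ. The `standard_sample` and `dual_sample` callables below are placeholders for the two samplers that the paper defines in its Tables 1 and 2:

```python
import random

def mixture_mcl(standard_sample, dual_sample, phi, m):
    """Build a sample set of size m for Mixture-MCL.

    Each sample is drawn via standard MCL with probability 1 - phi and via
    dual MCL with probability phi (phi is the mixing ratio). The two sampler
    callables are assumed placeholders, each returning one weighted sample.
    """
    return [dual_sample() if random.random() < phi else standard_sample()
            for _ in range(m)]
```

Setting φ = 0 recovers plain MCL and φ = 1 recovers pure dual sampling; the paper's point is that an intermediate mixing ratio combines the robustness of both.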

### Table 3 The Mixture MCL algorithm, here using the third variant of dual MCL (see Table 2).

2001

"... In PAGE 28: ...ompare this graph to Figures 10 and 12. These results were obtained through simulation. sample with probability 1 ? using standard MCL, and with probability using a dual. Table3 states the Mixture-MCL algorithm, using the third variant for calculating importance factors in the dual. As is easy to be seen, the Mixture-MCL algorithm combines the MCL algorithms in Table 1 with the dual algorithm in Table 2, using the (probabilistic) mixing ratio .... ..."

### Table 2: Computational Results for MCL Problem code

2000

"... In PAGE 25: ... 6.3 Algorithm Comparison Our computational results are displayed in Table2 . All the computations were carried out on a 350 megahertz PC with 128 megabytes of RAM, running under Windows NT 4.... In PAGE 27: ... (The total time taken by our code|and by bc prod|at the root node never exceeds a few seconds for any instance, so we do not report it.) Recall from Table2 that B amp;C takes a long time to process each node compared to the other two. The data in Table 3 indicate that the extra time required at each node in B amp;C is certainly not required for separation|which appears to be e cient|and not primarily for retrieving old cuts.... In PAGE 28: ... (Indeed, since C amp;B takes longer to process nodes than bc prod, it might be possible to improve C amp;B through better cut management as well.) Nevertheless, the data of this table, along with the results of Table2 , suggest that more classes of inequalities may be needed to solve MCL as e ciently as possible. For example, for some instances, many more cover and reverse cover inequalities are found in the tree than at the root node.... ..."

Cited by 5

### Table 2. Sets of URLs obtained by flow simulation on random MCL clusters.

2005

"... In PAGE 8: ... 4 clusters were then chosen at random among clusters whose size is greater than 10 (too small clusters often do not have much interest). The pre ow-push algorithm was applied to each of them, resulting in 4 sets of URLs described in Table2 by their size, a title (manually selected by looking at the set of URLs) and the precision and recall in regard to the title, manually computed (as precisely as it could be). Table 2.... ..."

Cited by 2

### Table 1: Clustering of PDB sequences using SPC, gSPC and TRIBE-MCL algorithms

2005

"... In PAGE 4: ... Obviously the method performance was calculated only for the annotated cases. The SPC covered 6% fewer sequences for K = 20 but resulted in higher a posteriori specificity and sensitivities ( Table1 ). The TRIBE-MCL, however, resulted in higher a priori sensitivity.... In PAGE 8: ... Such analysis was possible since for the data analyzed in this study the largest number of protein sequences were clustered in leafs and only a few addi- tional sequences could be still clustered considering the whole tree structure. For example, using K = 6 ( Table1 amp;2, SPC results) only 56 and 627 additional sequences could be clustered for the PDB and SwissProt data sets, respec- tively. These numbers corresponded to about 1% sequences in each database and only marginally influ- enced the method performance.... ..."

### Table 4: Clustering of sequences of bacterial genomes using SPC, gSPC and TRIBE-MCL algorithms

2005

"... In PAGE 6: ...03 observed for the third protein. The performance of the three algorithms is shown in Table4 . The total number of non-redundant subcatego- ries for this analysis was 44,531.... ..."

### Table 2: Sets of URLs obtained by applying the preflow-push algorithm to five random MCL clusters.

2003