### Table 4. Here, num is the number of frequencies computed per database scan, and Speedup is the ratio between the number of database scans needed when num = 1 and the number of scans needed for a given num. As expected, the speedup is typically smaller than num because some frequency computations are wasted. Nevertheless, we do observe an increasing speedup as num increases. As num grows larger, the speedup increases more slowly, since a larger portion of the frequency computations are wasted.
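The speedup metric described above can be sketched as follows; the scan counts here are made-up illustrative numbers, not values from the paper:

```python
# Minimal sketch of the Table 4 speedup metric (hypothetical scan counts):
# speedup(num) = scans(num = 1) / scans(num).
scans_needed = {1: 24, 2: 13, 4: 8, 8: 5}  # num -> database scans (made up)

def speedup(num: int) -> float:
    """Ratio of scans needed at num = 1 to scans needed at the given num."""
    return scans_needed[1] / scans_needed[num]

for num in sorted(scans_needed):
    # Sub-linear: speedup(num) < num whenever some computations are wasted.
    print(num, round(speedup(num), 2))
```

With these sample counts the speedup grows with num but stays below num, matching the qualitative behaviour the caption reports.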

"... In PAGE 20: ... Table4 . The speedup on the number of database scans due to PHDB.... ..."

### Table 2. Running time (in seconds) / I/O bandwidth (in MB/s) of the MIS benchmark running on multiple disks.

2005

"... In PAGE 10: ....35 times faster than LEDA-SM. The main reason for this big speedup is likely to be the more e cient priority queue algorithm from [11]. Table2 shows the parallel disk performance of the Stxxl implementations. The Stxxl-STL implementation achieves speedup of about 1.... ..."

Cited by 13

### Table 1 These experimental results show that the total number of applied rules in a concurrent execution is approximately equal to the number applied in a sequential execution. It is clear that big terms achieve greater speed-ups (up to 3.1 for terms with 1535 nodes distributed over four processors) than smaller ones. However, increasing the number of processors does not lead to a better speed-up unless the term size increases as well. The experimental results show the importance of interprocess communication in our CdCcRw implementation. Moreover, in the case of the SUN4 experiments, the network used is based on a sequential bus (Ethernet); a higher-bandwidth network is certainly needed to obtain better experimental results. Furthermore, we are now refining the current implementation.

"... In PAGE 23: ...We present benchmarks based on using the system RewSort sorting a list of natural numbers de ned in section 3 and the rewrite system R1 which presents a quite unfavourable case where the structure of terms used in the tests have a large amount of overlapping redexes. Table1 summarises, for each concurrent normalisationprocess, the rewriting system we use, the size of terms to be normalised, the platform and the number of processors used, the number of rewriting steps to obtain the normal form, the real time of execution given in second, and the di erent speed-ups.... ..."

### Table 2: Speedups obtained for the simulation of 2 applications with different numbers of processors in the simulator.

1996

"... In PAGE 25: ... Today apos;s implementation with processes does not follow completely the assump- tions of our theoretical study (the granularity is too big). Hence, the speedups growth in Table2 slows rapidly. But we expect them to follow more closely our theoretical upper bound with the threads implementation that is in progress and in practice, with today apos;s parallel computers apos; parameters, the speedup should increase steadily with up to a dozen workstations for a wide range of parallel linear algebra applica- tions.... ..."

Cited by 1

### Table 3: Test results of near triangle inequality The results show that both the pruning power and the speedup ratio of the near triangle inequality are quite low compared to the results of mean-value Q-grams. This is because the factor |S| that we introduced in the near triangle inequality is too big, which reduces its pruning power. Finding a smaller suitable value is left as future work. We also find that the near triangle inequality works better on the data set whose lengths follow a uniform distribution (trajectory
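The two metrics quoted in this caption can be sketched as below; the definitions are the commonly used ones and are assumptions here, not formulas quoted from the paper, and the numbers are made up:

```python
# Hedged sketch of the two metrics reported in Table 3 (assumed definitions):
#   pruning power = fraction of candidate pairs eliminated by the filter alone
#   speedup ratio = search time without the filter / search time with it
def pruning_power(total_candidates: int, pruned: int) -> float:
    """Fraction of candidates the filter discards without a full distance computation."""
    return pruned / total_candidates

def speedup_ratio(time_without_filter: float, time_with_filter: float) -> float:
    """How much faster the search runs when the filter is applied."""
    return time_without_filter / time_with_filter

# Made-up numbers illustrating a weak filter like the one described above:
# few candidates pruned, so little time saved.
print(pruning_power(10_000, 1_200))
print(speedup_ratio(50.0, 42.0))
```

A filter with low pruning power leaves most candidates for the expensive exact computation, which is why the speedup ratio stays close to 1.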

2005

Cited by 21

### Table 2: BIG BUCKETS MODELS

1998

"... In PAGE 25: ... For bc ? prod, we call one round of the specialised inequalities, and then one round as for bc ? opt. Computation on Single Level BB Instances Results for the BB instances are presented in Table2 . For each of the Tables 2 - 5, the column headings are as follows: instance identi es the instance, code identi es the system used in order to solve the problem, LP is the initial LP value before adding cuts at the root node of the Branch- and-Bound algorithm; XLP is the LP value after adding cuts at the root node of the Branch-and-Bound algorithm, IP is the value of the best feasible solution found within a time limit of 2 hours for all except the tr and ches problems (if there is no proof of optimality for this solution, this is indicated with a *), Secs if the CPU time in seconds (if there is no proof of optimality for this solution, this is indicated with a *), #cuts add is the total number of cuts added at the top node in the matrix, #cuts del is the total number of cuts deleted at the top node from the matrix, Gap is the nal duality gap based on the value of the best feasible solution IP and the best dual bound (DB) available.... In PAGE 25: ... For a minimization problem, gap=IP?DB IP 100%. We now brie y discuss the results in Table2 . The rst four instances are easily solved.... ..."

Cited by 4