### Table 1. Comparison of throughput for various TCP Vegas configurations The table lists the aggregated throughput for various configurations of TCP Vegas. Three Vegas variants are used in the experiments. The first is Linux TCP Vegas running on the physical machine (Native Vegas). The second is Linux TCP Vegas running in the Xen guest domain (Guest Vegas). The third is FoxyVegas, which tricks Linux BIC, the Linux default congestion control algorithm.

### TABLE I ENABLING FUNCTIONS FOR THE LA ALGORITHMS

2000

Cited by 10

### Table 3. NMI results on the reviews, sports, la1, la12, and la2 datasets. Column headers: reviews, sports, la1, la12, la2.

2003

"... In PAGE 6: ... Boldface entries highlight the best algorithms in each column. To save space, we show the NMI results for one specific K only for each dataset (results for other datasets are shown in Table 3 and Table 4). Table 5 shows the results for a series of paired t-tests.... ..."
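The metric reported in this table is normalized mutual information (NMI) between the produced cluster labels and the ground-truth classes. A minimal pure-Python sketch is below; it uses the sqrt(H(u)·H(v)) normalization, which is one common variant, and the snippet does not say which normalization the cited paper uses.

```python
from collections import Counter
from math import log, sqrt

def entropy(labels):
    """Shannon entropy (nats) of a label assignment."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def nmi(u, v):
    """Normalized mutual information between two label lists,
    normalized by sqrt(H(u) * H(v)) (other variants exist)."""
    n = len(u)
    joint = Counter(zip(u, v))        # joint counts over label pairs
    pu, pv = Counter(u), Counter(v)   # marginal counts
    mi = sum((c / n) * log(c * n / (pu[a] * pv[b]))
             for (a, b), c in joint.items())
    hu, hv = entropy(u), entropy(v)
    return mi / sqrt(hu * hv) if hu > 0 and hv > 0 else 0.0
```

Two clusterings that agree up to a relabeling (e.g. `[0, 0, 1, 1]` vs. `[1, 1, 0, 0]`) score 1.0, which is why NMI is a standard choice for comparing clusterings against class labels.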

### Table 1: Latency comparison between MiLa+GiLa and Algorithm 1/Algorithm 2.

"... In PAGE 3: ...nd 1GB RAM. Several test cases were randomly generated. We compare our algorithms with the approach MiLa+GiLa, which first runs the MiLa algorithm to get a latency and then uses that latency as the constraint of each sink to run the GiLa algorithm. Table 1 reports the resultant latency found by each algorithm. The columns #bufs, #FFs, lat, and cpu (s) give the number of buffers inserted, the number of FFs inserted, the resultant latency, and the CPU time, respectively.... In PAGE 3: ... If an algorithm could not find a solution to satisfy the pipeline constraint for a test case, its corresponding table entries are filled with '-'. The results given in Table 1 show that either Algorithm 1 or Algorithm 2 was able to find a solution with the same latency (for about half of the test cases) or an even smaller latency (for the remaining test cases) than MiLa+GiLa, while the buffer/FF usage and CPU time are comparable or acceptable. In particular, for some test cases, both Algorithm 1 and Algorithm 2 could find feasible solutions, but MiLa+GiLa failed to do so.... ..."
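The two-phase baseline described in the snippet composes the two algorithms: MiLa's resulting latency is imposed as the constraint at every sink before GiLa runs. A hypothetical sketch of that data flow, with `run_mila`, `run_gila`, and the tree representation as placeholder stand-ins (the papers' actual interfaces are not shown in the snippet):

```python
# Sketch of the MiLa+GiLa composition: phase 1 computes a latency,
# phase 2 reuses it as a uniform per-sink constraint.

def mila_plus_gila(tree, run_mila, run_gila):
    """Compose MiLa and GiLa: MiLa's latency becomes GiLa's per-sink bound."""
    latency = run_mila(tree)                                  # phase 1: get a latency
    constraints = {sink: latency for sink in tree["sinks"]}   # same bound for each sink
    return run_gila(tree, constraints)                        # phase 2: constrained run

# Toy stand-ins, purely to show the data flow.
toy_tree = {"sinks": ["s1", "s2"]}
result = mila_plus_gila(
    toy_tree,
    run_mila=lambda t: 5,   # pretend MiLa finds latency 5
    run_gila=lambda t, c: {"latency": max(c.values()), "constraints": c},
)
```

Because the first phase fixes the latency before the second phase sees the problem, this baseline can fail on instances where a feasible solution exists only at a different latency, which matches the snippet's observation that MiLa+GiLa sometimes found no solution while Algorithms 1 and 2 did.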

### TABLE IV COMPARISON BETWEEN ALGORITHMS (ZDT1 PROBLEM). Column headers: ZDT1, HLGA, VEGA, NSGA, SPEA.

### TABLE VI COMPARISON BETWEEN ALGORITHMS (ZDT3 PROBLEM). Column headers: ZDT3, HLGA, VEGA, NSGA, SPEA.

### TABLE VIII COMPARISON BETWEEN ALGORITHMS (ZDT6 PROBLEM). Column headers: ZDT6, HLGA, VEGA, NSGA, SPEA.

### Table 8. Speedup with respect to a single-processor implementation and efficiency (speedup divided by number of processors). The algorithm is the parallel implementation of GRASP. Instances are abz6, mt10, orb5, and la21, with target values 960, 960, 920, and 1120, respectively.

2003

"... In PAGE 30: ... The plots were generated with 60 independent runs for each number of processors considered (1, 2, 4, 8, and 16 processors). Table 8 summarizes the speedups shown in the plots. The table also shows efficiency (speedup divided by number of processors) values.... ..."
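The two quantities the table reports follow directly from run times: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal sketch of that bookkeeping; the timing values below are illustrative placeholders, not the paper's data:

```python
# Speedup and efficiency as described in the caption:
#   speedup    S(p) = T(1) / T(p)
#   efficiency E(p) = S(p) / p

def speedup_and_efficiency(t_serial, t_parallel, procs):
    """Return (speedup, efficiency) for a run on `procs` processors."""
    s = t_serial / t_parallel
    return s, s / procs

# Hypothetical mean times-to-target (seconds) for 1, 2, 4, 8, and 16 processors.
times = {1: 160.0, 2: 84.0, 4: 44.0, 8: 24.0, 16: 14.0}

for p, t in times.items():
    s, e = speedup_and_efficiency(times[1], t, p)
    print(f"{p:2d} procs: speedup {s:.2f}, efficiency {e:.2f}")
```

Efficiency near 1.0 means near-linear scaling; it typically drops as processors are added, which is why the table reports it alongside raw speedup.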

Cited by 18
