### Table 3: Pairs of almost optimal service providers for each stage

2002

"... In PAGE 23: ...5 Step 5 By comparing the optimal standard deviations a62 a132 a9 obtained in Step 4 with the given data in Table 1 we can compute, for each stage, the service providers whose variance is closest to the optimal. These are listed in Table3 . It is easy to see that we can construct 64 combinations out of these 12 service providers listed in Table 3, where each combination representing a particular mix of service providers.... In PAGE 23: ... These are listed in Table 3. It is easy to see that we can construct 64 combinations out of these 12 service providers listed in Table3 , where each combination representing a particular mix of service providers. We have computed the end-to-end supply chain cost a153 , process capability indices a57 a38 and a57 a38a29a59 , DP, and DS for each of these 64 combination and the results are tabulated in Table 4.... ..."

Cited by 3
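The shortlist-and-enumerate step quoted above can be sketched in a few lines (illustrative stage names and numbers, not the paper's data; with two near-optimal providers per stage, the paper's six stages give 2^6 = 64 mixes, while the three toy stages below give 2^3 = 8):

```python
from itertools import product

def closest_providers(optimal_sigma, candidates, k=2):
    """For each stage, keep the k providers whose standard deviation
    is closest to that stage's optimal value (hypothetical data layout)."""
    shortlist = {}
    for stage, sigma_star in optimal_sigma.items():
        ranked = sorted(candidates[stage], key=lambda p: abs(p[1] - sigma_star))
        shortlist[stage] = ranked[:k]
    return shortlist

# Toy data: 3 stages, 3 candidate (name, sigma) pairs each.
optimal_sigma = {"procure": 1.0, "make": 2.0, "ship": 0.5}
candidates = {
    "procure": [("P1", 0.8), ("P2", 1.1), ("P3", 2.5)],
    "make":    [("M1", 1.7), ("M2", 2.2), ("M3", 3.0)],
    "ship":    [("S1", 0.4), ("S2", 0.9), ("S3", 0.6)],
}

shortlist = closest_providers(optimal_sigma, candidates)
# Every choice of one shortlisted provider per stage is one mix.
mixes = list(product(*shortlist.values()))
print(len(mixes))  # 2**3 = 8 for this toy example
```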


### Table 2: Comparison of GRS and Atallah and Prabhakar's scheme [5] on maximum deviation from the optimum. M: number of disks. Grid size = 2M × 2M. We also prove analytically that whenever M (the number of disks) is a Fibonacci number, our scheme has almost optimal performance (both average case and worst case). Our extensive simulation shows that the performance of our scheme varies smoothly with M, which is strong evidence that our scheme behaves well for all values of M. To the best of our knowledge these are the first non-trivial performance guarantees for any efficient declustering scheme.

2000

Cited by 21
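The "maximum deviation from the optimum" metric compared in this table can be checked by brute force on small grids (a sketch; the diagonal assignment below is a simple stand-in scheme, not GRS itself):

```python
from math import ceil

def max_deviation(assign, rows, cols, num_disks):
    """Worst-case additive deviation from the optimal response time over
    all rectangular range queries on a rows x cols grid declustered onto
    num_disks disks. assign(i, j) gives the disk of block (i, j).
    (Illustrative brute force; the paper proves its bounds analytically.)"""
    worst = 0
    for r1 in range(rows):
        for r2 in range(r1, rows):
            for c1 in range(cols):
                for c2 in range(c1, cols):
                    counts = [0] * num_disks
                    for i in range(r1, r2 + 1):
                        for j in range(c1, c2 + 1):
                            counts[assign(i, j)] += 1
                    size = (r2 - r1 + 1) * (c2 - c1 + 1)
                    # Optimal cost is ceil(size / num_disks) parallel reads.
                    worst = max(worst, max(counts) - ceil(size / num_disks))
    return worst

# Diagonal round-robin declustering as a stand-in scheme on a 6x6 grid.
M = 3
print(max_deviation(lambda i, j: (i + j) % M, 6, 6, M))
```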

### Table 3 shows the number of blocks accessed on each disk when processing the 10 queries on the uniformly distributed file. Let P1, P2, and P3 be the numbers of blocks accessed on disk 1, disk 2, and disk 3. Clearly, for 1 ≤ i ≤ 3, Pi is almost the same as (P1 + P2 + P3)/M. Thus, the CMD method is almost optimal for queries on uniformly distributed files. (Table columns: range query number; blocks accessed on disk 1, on disk 2, and on disk 3.)

1992

"... In PAGE 23: ... Table3 . The performance of the algorithm of range query processing on uniformly distributed le.... ..."

Cited by 38
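The near-balance condition Pi ≈ (P1 + P2 + P3)/M described above amounts to a simple check (illustrative per-disk counts, not the paper's measurements):

```python
def is_almost_optimal(per_disk_blocks, tolerance=1):
    """Check the balance criterion from the text: each disk's count P_i
    should be within `tolerance` blocks of the ideal (sum of P_i) / M."""
    ideal = sum(per_disk_blocks) / len(per_disk_blocks)
    return all(abs(p - ideal) <= tolerance for p in per_disk_blocks)

# Blocks accessed on 3 disks for one range query (made-up numbers).
print(is_almost_optimal([34, 33, 35]))   # balanced: True
print(is_almost_optimal([60, 20, 22]))   # skewed: False
```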

### Table 2 and Fig. 5 show the corresponding speedup results. First the overall speedup for the accumulated time over all 4 levels is given. It performs well up to P = 16 but is no longer optimal at P = 60. This loss of efficiency is due to the setup phase for interface unknowns, which shows rather poor scalability in the current implementation: its time drops only from 32 s at P = 16 to 15 s at P = 60. The speedup computed for ℓ = 4 shows slightly better behavior, since coarse-grid effects are neglected. However, the speedup computed for the solver alone at ℓ = 4 shows much better results; here the speedup is almost optimal for P = 60. For a more detailed analysis we consider the speedup with respect to one iteration at ℓ = 4. Now we even observe super-linear speedups; however, this is due to cache effects. Looking at Fig. 5, which shows the results

"... In PAGE 16: ...Table2 we see that for P = 16 the local problems are small enough to t into local caches. Summarizing, we observe an optimal scalability of our iterative solver.... ..."

### Table 1: Computing an (almost) optimal step size reduces the running time and number of minimum-cost flow (MCF) computations by two orders of magnitude. Data obtained on a Sun Enterprise 3000.

in An implementation of a combinatorial approximation algorithm for minimum-cost multicommodity flow

1998

"... In PAGE 9: ... Using the Newton-Raphson method reduces the running time by two orders of magnitude compared with using a fixed step size. (See Table1 .) As the accuracy increases, the reduction in running time for the Newton-Raphson method increases.... ..."

Cited by 27
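The Newton-Raphson step-size search credited with the two-orders-of-magnitude saving can be sketched generically (toy one-dimensional objective; the paper applies the idea to the potential function of the multicommodity-flow approximation algorithm):

```python
import math

def newton_step_size(dphi, d2phi, x0=0.5, tol=1e-10, max_iter=50):
    """Newton-Raphson on the derivative of a 1-D objective phi(step):
    iterate x <- x - phi'(x)/phi''(x) until phi'(x) is (almost) zero,
    giving an (almost) optimal step size instead of a fixed one."""
    x = x0
    for _ in range(max_iter):
        g = dphi(x)
        if abs(g) < tol:
            break
        x -= g / d2phi(x)
    return x

# Toy smooth convex objective phi(s) = (s - 0.3)**2 + exp(s).
dphi = lambda s: 2 * (s - 0.3) + math.exp(s)
d2phi = lambda s: 2 + math.exp(s)
s_star = newton_step_size(dphi, d2phi)
print(abs(dphi(s_star)) < 1e-8)  # True: stationary point found
```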

### Table 1 shows the average number of voxels transferred to the left and right processors in a ring at the end of each phase of rotation. The average number of voxels processed per degree of rotation is also shown. It can be seen that cyclicity 3 is sufficient to achieve almost optimal load balancing in both computation and communication.

1996

"... In PAGE 10: ... The graphs show communication patterns for data distributions from cyclicities 1 to 3. For average numbers refer to Table1 . The increase in the number of vertical lines in the graphs with increase in number of cycles is because of the fact that with increasing number of cycles, the maximum rotation angle decreases, leading to an increase in the number of phases.... In PAGE 11: ... The third method of block-cyclic distribution of data provided a simple but efficient solution to load balancing. Results (Figure 4 and Table1 ) show that with increasing cyclicity, almost optimal load balance is achieved both in communication and computation. Strictly speaking, the computation time at each processor will depend on the scene, as the actual num- ber of voxels which are occupied and processed may vary across processors and greatly influence their rendering time.... ..."

Cited by 6
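The block-cyclic distribution behind these results can be sketched as follows (illustrative sizes; here "cyclicity" c means each processor owns c separate blocks of slices, so the block size is num_slices / (num_procs * c)):

```python
def block_cyclic_owner(slice_index, block_size, num_procs):
    """Owner of a data slice under block-cyclic distribution: slices are
    grouped into contiguous blocks of `block_size`, and blocks are dealt
    out to processors round-robin."""
    return (slice_index // block_size) % num_procs

# 24 slices, 4 processors, cyclicity 3 -> block size 24 / (4 * 3) = 2.
owners = [block_cyclic_owner(s, 2, 4) for s in range(24)]
print(owners)
```

Higher cyclicity interleaves each processor's data more finely across the volume, which is what evens out the per-processor work as the scene rotates.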