### Table 6.5: Average number of nodes required by a real access list compared to random access lists at various list lengths.

### Table 1: Random Access File structure

"... In PAGE 6: ... that creates a RAF (through the java.io.RandomAccessFile class) using a standard MIB text file for input. The RAF (Table 1) consists of a list of records, one for each of the objects contained in the MIB file, and by its nature allows immediate access to specific records. During agent initialisation, the RAF is used to build a MIB tree representation that is stored in system memory and enhances agent efficiency when handling manager requests. ..."
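The record-oriented random access the excerpt describes can be sketched with `seek()` over fixed-size records; the record layout and field names below are hypothetical, not taken from the paper:

```python
import os
import struct
import tempfile

# Hypothetical fixed-size record: 16-byte object name + 4-byte int value.
RECORD = struct.Struct("16s i")

def write_records(path, records):
    """Write (name, value) pairs sequentially as fixed-size records."""
    with open(path, "wb") as f:
        for name, value in records:
            f.write(RECORD.pack(name.encode(), value))

def read_record(path, index):
    """Jump straight to record `index` -- the immediate access a RAF gives."""
    with open(path, "rb") as f:
        f.seek(index * RECORD.size)  # no scan over earlier records
        name, value = RECORD.unpack(f.read(RECORD.size))
        return name.rstrip(b"\0").decode(), value
```

Because every record has the same size, the byte offset of record *i* is just `i * RECORD.size`, which is what makes per-record access immediate.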

### Table 2. Performance of the hypercube RandomAccess algorithm on a Cray XT3 machine in giga-updates-per-second (GUPS) for varying processor counts. Parallel efficiencies are computed from the single-processor hypercube rate.

"... In PAGE 4: ... Thus with this algorithm the only limitation to a high GUPS rate is the number of processors a machine has, assuming a very large machine still has sufficient bisection bandwidth to perform exchanges of 4K-byte messages between all pairs of processors at each dimension of the hypercube loop. The third line of Table 2 lists predicted GUPS rates using a model equation that reflects these two scaling factors, namely TP = T1 + TC log2(P). TP is the CPU time to run on P processors and is the sum of two terms. ..."
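The model equation from the excerpt, T_P = T_1 + T_C log2(P), can be turned into a small predictor; the values of T_1, T_C, and the per-processor update count below are placeholder assumptions, not measurements from the paper:

```python
import math

def predicted_gups(p, t1, tc, updates_per_proc):
    """Predicted GUPS on p processors under T_P = T_1 + T_C * log2(P).

    t1: single-processor CPU time; tc: per-dimension communication cost
    of the hypercube exchange. Both are hypothetical inputs.
    """
    t_p = t1 + tc * math.log2(p)
    total_updates = p * updates_per_proc  # work grows with p (weak scaling)
    return total_updates / t_p / 1e9      # giga-updates per second

def parallel_efficiency(p, t1, tc, updates_per_proc):
    """Efficiency relative to the single-processor rate, as in the table."""
    single_rate = predicted_gups(1, t1, tc, updates_per_proc)
    return predicted_gups(p, t1, tc, updates_per_proc) / (p * single_rate)
```

Under this model efficiency at P processors simplifies to T_1 / (T_1 + T_C log2 P), so it decays only logarithmically with processor count.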

### Table 2: Average random index access performance with an uncompressed index. Interpolation search fares slightly better than binary search, but on average still requires almost 1 µs (2,800 clock cycles) per random access on the Opteron system.

"... In PAGE 5: ... Each experiment was repeated six times. The best-performing run from each experiment is reported in Table 2. It can be seen that a random list access, realized through binary search, takes surprisingly long. ... In PAGE 6: ... .4% for 64-byte tree nodes). However, because the difference is so small, our results still reflect the true performance of CSS-trees on this kind of search operation. Compared to naïve binary search, our implementation of the CSS-tree can process a request for a random posting between 73% (Pentium III) and 99% (Opteron) faster (comparing Table 3 with Table 2). The performance gain stems from a greatly reduced number of cache misses. ..."
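The interpolation search the caption compares against binary search can be sketched as follows; instead of probing the middle element, it probes where the key would sit if the values were uniformly distributed:

```python
def interpolation_search(sorted_list, key):
    """Return the index of key in sorted_list, or -1 if absent.

    Minimal sketch: the probe position is a linear interpolation between
    the current endpoints, which on uniformly distributed keys needs
    fewer probes than binary search's fixed midpoint.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi and sorted_list[lo] <= key <= sorted_list[hi]:
        if sorted_list[hi] == sorted_list[lo]:
            pos = lo  # all remaining values equal; avoid division by zero
        else:
            # Estimate position assuming values grow linearly with index.
            pos = lo + (key - sorted_list[lo]) * (hi - lo) // (
                sorted_list[hi] - sorted_list[lo]
            )
        if sorted_list[pos] == key:
            return pos
        if sorted_list[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On skewed distributions the interpolation estimate can be poor, which is consistent with the caption's observation that it is only slightly better than binary search on this workload.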

### Table 2. Running time in seconds. All timings were done on a Micron PC with a 266 MHz Pentium II processor and 128 MB random access memory, running the Solaris 8 operating system.

2002

"... In PAGE 11: ... Recall that the computation of persistence can be accelerated for k = 0, 2 using a union-find data structure. As substantiated in Table 2, this improvement subsumes adding 0- and 2-cycles in the marking process, shrinking the time for these two steps to essentially nothing. The results suggest a possible linear dependence of the running time on the size of the data, which is substantially faster than the cubic dependence proved in Section 4. ... In PAGE 11: ... Our cubic upper bound in Section 4 followed from the observation that the k-cycle created by i goes through fewer than p_i collisions and the length of its list built up during these collisions is less than (k + 2)p_i. We can explain the apparently linear running time documented in Table 2 by showing that the average number of collisions and the average list length are both constant. Tables 3 and 4 provide evidence that this might indeed be the case. ..."
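The union-find structure the excerpt credits for the k = 0, 2 speedup can be sketched as below; with union by size and path compression, each merge or lookup runs in near-constant amortized time, which is what lets the 0- and 2-cycle bookkeeping shrink "to essentially nothing":

```python
class UnionFind:
    """Union-find with path halving and union by size (generic sketch,
    not the authors' implementation)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        """Return the representative of x's component."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the components of a and b.

        Returns False when a and b were already connected -- in the
        persistence setting, that is the event where a new cycle appears
        instead of two components merging.
        """
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```

Processing simplices in filtration order and calling `union` per edge gives the accelerated dimension-0 computation in time near-linear in the input, consistent with the roughly linear timings the excerpt reports.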

Cited by 105

### Table 3 Data set #2: random access.

2002

Cited by 4