### Table 13: The best total running time (in hours) for BFS traversal on different graphs with the best external memory BFS implementations

"... In PAGE 3: ... Also, on low diameter graphs, the time taken by our improved MR BFS is around one-third of that in [3]. Towards the end, we summarize our results (Table 13) by giving the state-of-the-art implementations of external memory BFS on different graph classes. 2 Improvements in the previous implementations of MR BFS and MM BFS R: The computation of each level of MR BFS involves sorting and scanning of the neighbours of the nodes in the previous level.... In PAGE 8: ...2 Summary. Table 13 gives the current state-of-the-art implementations of external memory BFS on different graph classes. Our improved MR BFS implementation outperforms the other external memory BFS implementations on low diameter graphs, or when the nodes of a graph are arranged on the disk in the order required for BFS traversal.... ..."
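The sort-and-scan structure of a level step, as described in the excerpt, can be sketched in ordinary in-memory Python. This is a toy stand-in for the external-memory primitives; the function names and graph representation are illustrative assumptions, not the authors' implementation:

```python
def next_bfs_level(adj, current_level, visited):
    """One MR BFS-style round: gather the neighbours of the current
    level, sort them (the external sort step in the real algorithm),
    then make a single scan to drop duplicates and visited nodes."""
    candidates = []
    for u in current_level:
        candidates.extend(adj[u])          # "scan" the adjacency lists
    candidates.sort()                      # external-memory sort in MR BFS
    level, prev = [], None
    for v in candidates:                   # one scan removes duplicates
        if v != prev and v not in visited:
            level.append(v)
            prev = v
    return level

def bfs_levels(adj, source):
    """Run the level steps to completion and collect the BFS levels."""
    visited = {source}
    level = [source]
    levels = [level]
    while level:
        level = next_bfs_level(adj, level, visited)
        visited.update(level)
        if level:
            levels.append(level)
    return levels
```

In the real external-memory setting, `visited` is not a random-access set: already-settled nodes are removed by merging sorted lists, which is what keeps the per-level cost at sorting and scanning complexity.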

### Table 11: The best total running time (in hours) for BFS traversal on different graphs with the best external memory BFS implementations

"... In PAGE 4: ... Also, on low diameter graphs, the time taken by our improved MR BFS is around one-third of that in [3]. Towards the end, we summarize our results (Table 11) by giving the state-of-the-art implementations of external memory BFS on different graph classes. Our implementations can be downloaded from http://www.... ..."

### Table 9: Time (in hours) required for the two preprocessing variants.

"... In PAGE 11: ... Note that the Euler tour computation followed by list ranking only requires sort(m) I/Os. This asymptotic difference shows in the I/O volume of the two preprocessing variants (Table 8), thereby explaining the better performance of the deterministic preprocessing over the randomized one (Table 9). On low diameter random graphs, the diameter of the clusters is also small and consequently, the randomized variant scans the graph fewer times, leading to less I/O volume.... ..."
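The Euler-tour step the excerpt refers to can be illustrated with a small in-memory sketch. In the external-memory algorithm the same tour is obtained by sorting the edge list and list-ranking the successor pointers in sort(m) I/Os; this toy version simply walks the tree, and the names and representation are assumptions for illustration:

```python
def euler_tour(children, root):
    """Return the Euler tour of a rooted tree as a list of directed
    edges: each tree edge appears once downward and once upward, so
    the tour has exactly 2*(n-1) entries for an n-node tree."""
    tour = []
    stack = [(root, iter(children.get(root, ())))]
    while stack:
        u, it = stack[-1]
        child = next(it, None)
        if child is None:
            stack.pop()
            if stack:
                tour.append((u, stack[-1][0]))   # upward edge to parent
        else:
            tour.append((u, child))              # downward edge
            stack.append((child, iter(children.get(child, ()))))
    return tour
```

Ranking the positions of the tour entries (trivial here, pointer-jumping or an I/O-efficient list-ranking algorithm in the external setting) then yields the cluster decomposition of the deterministic preprocessing.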

### Table 3: Time taken (in hours) by the BFS phase of MM BFS D with long and random clustering

"... In PAGE 4: ... On the other hand, low diameter clusters are evicted from the pool sooner and are scanned less often, reducing the I/O volume of the BFS phase. Consequently, as Table 3 shows, the BFS phase of MM BFS D takes only 28 hours with clusters produced by the random spanning tree, while it takes 51 hours with long and narrow clusters. 4 A Heuristic for maintaining the pool: As noted in Section 1.... ..."

### Table 2: Number of steps required by the broadcasting sequence c (multiple-port version)

A comparison of Tables 1 and 2 shows that for odd n, the one-port broadcasting algorithm based on sequence c performs as well as its multiple-port counterpart. However, for even n the number of steps required by one-port broadcasting is about 50% greater than the diameter of the n-SCC graph. Therefore, it is more efficient to use multiple-port broadcasting in this case. A synchronous algorithm to perform multiple-port broadcasting in an SCC graph using sequence c follows. The variables and functions used by the multiple-port algorithm have the same functionality previously defined for the one-port version. However, an additional procedure is used for multiple-port broadcasting, namely SEND MULTIPLE(port1, port2). This procedure is called by an informed node to send the broadcast message simultaneously on two of the 3 possible ports: the lateral link, the right local link, or the left local link.

"... In PAGE 14: ... The total number of steps required to run sequence c using a multiple-port communication model is therefore |c(mult)| = |c(lat, mult)| + |c(loc, mult)| = ⌊(n + 1)/2⌋ ⌊3(n − 1)/2⌋. □ Table 2 lists the number of steps required by a multiple-port broadcasting algorithm using the cyclic sequence c, according to Theorem 4. Note that the total number of steps required by the algorithm based on sequence c (≈ 0.75n²) is very close to the diameter of the n-SCC graph (in fact, the relative distance to the diameter is at most 17.6% for 4 ≤ n ≤ 8). We have proved that c is optimal from the viewpoint of lateral link steps, and by inspecting Table 2 we note that optimality from the viewpoint of local link steps is... ..."
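Reading the garbled closed form in the excerpt as ⌊(n + 1)/2⌋ · ⌊3(n − 1)/2⌋ ≈ 0.75n², a few lines of Python tabulate the step count over the range 4 ≤ n ≤ 8 that the excerpt discusses. This is a reconstruction for illustration, not code from the paper:

```python
def multiport_steps(n):
    """Step count of the multiple-port broadcast based on sequence c,
    as read from the excerpt's closed form:
    floor((n+1)/2) * floor(3*(n-1)/2), roughly 0.75 * n**2."""
    return ((n + 1) // 2) * (3 * (n - 1) // 2)

for n in range(4, 9):
    print(n, multiport_steps(n), 0.75 * n * n)
```

For odd n the floors are exact and the count matches 0.75n² almost perfectly (e.g. 18 vs. 18.75 for n = 5); for even n the floor in the second factor pulls the count below the quadratic estimate.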

### Table 2: Clustering coefficients of the market graph

2004

### Table 1: Logical topology costs before and after annealing. The initial temperature is set at 0.08, and as the cost function we use total edge-cost and diameter in edge-cost.

"... In PAGE 6: ... We fixed the initial 50-node topology the annealing process uses, so that we can compare the resulting topologies' costs. Figure 9 and Table 1 show the results of running the topology computation algorithm using an equal linear combination of total edge-cost and diameter in edge-cost in the objective function to generate 50-node, 2-connected graphs. From Table 1, we observe that because the initial topology already has low total edge-cost, the annealing process cannot significantly improve total edge-cost, but produces topologies with diameter up to 70% lower.... In PAGE 6: ... As expected, we get lower total edge-cost topologies with higher diameter as the probability of delete increases and the probability of add decreases.... In PAGE 7: ... The horizontal axis refers to iterations through the annealing process (log scale). While the results in Table 1 show that high delete probabilities attenuate the effects of different decrease rates D, the graphs in Figure 9 show that D = 0.4 makes the annealing algorithm converge sooner. We repeat the same experiment using only total edge-cost in the objective function and report the results in Figure 10 and Table 2.... ..."
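The annealing loop the excerpt describes can be sketched in a few dozen lines of Python. The ring start topology, the single edge-toggle move, the cooling schedule, and all helper names below are illustrative assumptions; the paper's actual moves, graph sizes, and add/delete probabilities are not reproduced here:

```python
import math
import random

def dist_matrix(n, edges, cost):
    """All-pairs shortest-path distances in edge-cost (Floyd-Warshall)."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = cost[u][v]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def objective(n, edges, cost):
    """Equal linear combination of total edge-cost and diameter in
    edge-cost; infinite when the topology is disconnected."""
    d = dist_matrix(n, edges, cost)
    diameter = max(max(row) for row in d)
    total = sum(cost[u][v] for u, v in edges)
    return total + diameter

def anneal(n, cost, temp=0.08, decrease=0.4, rounds=400, seed=1):
    """Anneal over topologies with edge add/delete moves, starting
    from a ring; returns (best cost seen, final cost)."""
    rng = random.Random(seed)
    edges = {tuple(sorted((i, (i + 1) % n))) for i in range(n)}
    cur = objective(n, edges, cost)
    best = cur
    for _ in range(rounds):
        u, v = rng.sample(range(n), 2)
        cand = set(edges)
        cand.symmetric_difference_update({(min(u, v), max(u, v))})
        new = objective(n, cand, cost)
        if new == float("inf"):          # a delete disconnected the graph
            continue
        # Metropolis rule: accept improvements, sometimes accept worse
        if new < cur or rng.random() < math.exp((cur - new) / max(temp, 1e-9)):
            edges, cur = cand, new
            best = min(best, cur)
        temp *= 1.0 - decrease / rounds  # slow cooling controlled by D
    return best, cur
```

The connectivity requirement is enforced implicitly: a deleted edge that disconnects the graph drives the diameter, and hence the objective, to infinity, so the move is rejected.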

### Table 1: Event probabilities for causal structures Event Graph 0 Graph 1 Graph 2

2004

Cited by 4