### Table 2 presents the compression results in terms of total bits required divided by edges in the graph. For the random graphs, we have taken the average of ten trials, where a different random graph is produced for each trial. We note that there is little deviation between the runs. For the uncompressed size in bits per edge, we use the underestimate log2 (#nodes). As seen in graph G1, when the amount of copying is low, and thus the average degree is very small, the reference algorithm alone does slightly worse than the Huffman algorithm, although using a Huffman code in conjunction with the reference algorithm leads to better performance.
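The abstract above combines a Huffman code with the paper's reference algorithm and uses log2(#nodes) bits per edge as the uncompressed baseline. As a rough illustration of the costing involved (a generic sketch, not the paper's actual encoder), the following computes Huffman code lengths for a symbol-frequency distribution and the log2(#nodes) baseline:

```python
import heapq
import math
from itertools import count

def huffman_code_lengths(freqs):
    """Optimal prefix-code length in bits for each symbol, given its frequency."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    tie = count()  # unique tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2]

def bits_per_edge_baseline(n_nodes):
    """Uncompressed underestimate used in the abstract: log2(#nodes) bits per edge."""
    return math.log2(n_nodes)
```

For example, with frequencies `{'a': 5, 'b': 2, 'c': 1}` the most common symbol gets a 1-bit code and the rare ones 2-bit codes, which is why Huffman-coding skewed degree or gap distributions beats the flat log2(#nodes) cost.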

2001

"... In PAGE 9: ...20 8.35 Table 2: Results from the test graphs; bits per edge. 6 Future Work We have initiated study into how to compress Web graphs using the copy graph model, a random graph family with properties similar to Web graphs.... ..."

Cited by 49

### Table 3. Precision in Top 1, 5, 10, 15 and 20 for potentially interesting web pages

2005

"... In PAGE 17: ... The results from the precision/recall graph for potentially interesting web pages in Fig. 5 and the top link analysis in Table 3 are similar. WS was closer to the upper-right corner than Google, US, and Random overall.... ..."

Cited by 3

### Table 7. Article similarities to Link Analysis: Hubs and Authorities on the World Wide Web using a rank R = 30 PARAFAC decomposition.

2006

"... In PAGE 13: ...uthorities on the World Wide Web. The results depend on the choice of R, i.e., the number of factors used in the PARAFAC decomposition. Table 6 shows the result for R = 10 and Table 7 for R = 30. The R = 10 case is not very precise, citing a variety of papers as related, ranging from the topic of sparse approximate inverses (arguably distantly related) to interior point methods (not related) and graph partitioning (related).... ..."
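The entry above computes article similarities from a rank-R PARAFAC (CP) decomposition. As a generic illustration of that decomposition (not the authors' implementation), a minimal alternating-least-squares sketch for a dense 3-way tensor, using standard mode unfoldings and Khatri-Rao products:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J, R) x (K, R) -> (J*K, R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def unfold(X, mode):
    """Mode-n unfolding: move the given axis first, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, R, n_iter=300, seed=0):
    """Rank-R CP (PARAFAC) decomposition of a 3-way tensor via ALS.

    Returns factor matrices A, B, C with X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r].
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem via the
        # Hadamard product of Gram matrices (the CP normal equations).
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In the cited setting the tensor would hold, say, term-by-document-by-context counts, and similarities between articles come from comparing their rows in the resulting document-mode factor; the quoted snippet's point is that the quality of those similarities depends strongly on the chosen rank R.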