### Table 7: Numerical results on SDP relaxations of the graph partition problems.

"... In PAGE 20: ... Although (13) involves a dense data matrix E, we can obtain an equivalent SDP with a sparse aggregate sparsity pattern by applying an appropriate congruence transformation to it [8, section 6]. Table 7 compares the three methods for the transformed problems. As k1 becomes large, the aggregate sparsity patterns remain sparse, although the corresponding extended sparsity patterns become dense.... ..."

### Table 6: Numerical results on SDPARA-C applied to large-size SDPs (64 CPUs)

2003

"... In PAGE 19: ... Table 6 shows the computation times and memory requirements per processor when SDPARA-C is used to solve SDP relaxations of maximum cut problems for large lattice graphs, SDP relaxations of maximum clique problems for large lattice graphs, and norm minimization problems for large matrices. As one would expect, the computation times and memory requirements increase as n gets larger.... ..."
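The maximum cut SDP relaxation referenced in this excerpt has a well-known standard form (sketched here from the general literature, not copied from the cited paper): for a graph with Laplacian matrix $L$,

$$\max_{X}\ \tfrac{1}{4}\,\langle L,\, X\rangle \quad \text{subject to} \quad X_{ii} = 1\ \ (i = 1,\dots,n),\quad X \succeq 0.$$

Solvers such as SDPARA-C can exploit the sparsity of $L$ (e.g. for lattice graphs) when handling such problems.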

Cited by 5

### Table 4: Results (large graphs).

2004

"... In PAGE 11: ...erical results for M2sP. The algorithm is based on the spectral approach. Since this test suite is too small to provide enough information regarding M2sP, we have launched a new, much larger test suite which consists of 66 graphs from different areas [12, 21]; see Table 1. These graphs are divided into two groups according to their size: the results for the smaller ones are presented in Tables 2 and 3, while those for the larger ones are in Table 4. For all the graphs in Tables 2 and 3 we compare our results with those of the spectral approach.... In PAGE 13: ... To enrich our test suite, we present in Table 4 our "Quick" V-cycle results for an additional 29 relatively large graphs. No spectral-approach results are provided since we were not able to run (on the computers available to us) the MATLAB routine and calculate the needed eigenvector.... ..."
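The spectral approach mentioned in this excerpt partitions a graph using an eigenvector of its Laplacian (the "needed eigenvector" the authors could not compute for the largest graphs). A minimal sketch of spectral bisection, using a hypothetical `spectral_bisect` helper and NumPy's dense eigensolver in place of the authors' MATLAB routine:

```python
import numpy as np

def spectral_bisect(adj):
    """Split a graph into two parts using the Fiedler vector.

    adj: symmetric 0/1 adjacency matrix (numpy array).
    Returns a boolean mask: True for one part, False for the other.
    """
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    # eigh returns eigenvalues in ascending order; the eigenvector of the
    # second-smallest eigenvalue is the Fiedler vector.
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    # Thresholding at the median keeps the two parts balanced.
    return fiedler >= np.median(fiedler)

# Toy example: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
# The natural balanced cut removes the bridge and separates the triangles.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1
mask = spectral_bisect(adj)
print(mask.sum())  # → 3
```

For large graphs a sparse eigensolver (e.g. `scipy.sparse.linalg.eigsh`) would replace the dense `eigh` call, which is exactly where the scalability issue described in the excerpt arises.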

### Table 6: Numerical results on SDP relaxations of the graph partition problems (standard / conversion / completion).

2003

"... In PAGE 19: ... Although (13) involves a dense data matrix E, we can obtain an equivalent SDP with a sparse aggregate sparsity pattern by applying an appropriate congruence transformation to it [8, section 6]. Table 6 compares the three methods for the transformed problems. As k1 becomes large, the aggregate sparsity patterns remain sparse, although the corresponding extended sparsity patterns become dense.... ..."

Cited by 16

### Table 3 demonstrates that sparse problems in this dataset have large clustering coefficients like regular graphs, but small characteristic path lengths like random graphs. They therefore have a small-world topology. By comparison, dense problems from this dataset have nodes of large degree which are less clustered. Such graphs therefore have less of a small-world topology. We conjecture that graphs will often start with a sparse small-world topology but will become more like dense random graphs as added edges "saturate" the structure.
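The two quantities this excerpt uses to diagnose a small-world topology can be computed directly from an adjacency structure. A self-contained sketch (the function names are illustrative, not from the cited work):

```python
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Average local clustering: the fraction of each node's neighbour
    pairs that are themselves connected, averaged over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue  # clustering is defined as 0 for degree < 2
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def characteristic_path_length(adj):
    """Mean shortest-path distance over all reachable vertex pairs,
    via a breadth-first search from each node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for v, d in dist.items() if v != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy example: a 4-clique is maximally clustered with path length 1.
clique = {v: [u for u in range(4) if u != v] for v in range(4)}
print(clustering_coefficient(clique), characteristic_path_length(clique))  # → 1.0 1.0
```

A small-world graph, in the sense of the excerpt, scores high on the first metric (like a regular lattice) while staying low on the second (like a random graph).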

### Table 15: Numerical results on graph equipartition problems.

1997

"... In PAGE 25: ... Parts (III) and (IV) require O(n^3 + mf) and O(n^3) arithmetic operations, respectively. Therefore they become the most expensive parts as n gets larger, and solving a semidefinite program via SDPA seems impractical when n exceeds several thousand; for example, the graph equipartition problem of size n = 1250 given in Table 15 required more than 31 hours of computation time. Sparsity is one particular form of problem structure we exploit here because it is an important feature arising from SDP relaxation in combinatorial optimization and it has a... ..."

Cited by 25

### Table 1. Datasets

2000

"... In PAGE 6: ... We first consider two artificial problems where thorough experiments are pursued and then look at four real-life datasets. A description of the databases is given in Table 1. All input variables are numerical and the datasets were selected to provide large enough samples.... In PAGE 9: ... 4.2 Real-life datasets We have also pursued experiments on four real-life datasets (the last four datasets of Table 1). Each database was split into three disjoint parts: a learning set (LS), a pruning set (PS) and a test set (TS).... ..."

Cited by 1

### Table 1. OBIG performance for large graphs

1998

"... In PAGE 7: ...combinations [15]. OBIG results are based on ten runs for each graph. OBIG solution quality dominated traditional and PDI algorithms on each of the 12 smallest irand and sg graphs, discovering the optimum cover for 11 of the 12 graphs. Table 1 shows similar solution-quality results for the large irand and sg graphs. The best (minimum) solution found is reported for the GH algorithm, while the range of solutions discovered is reported for PDI and OBIG.... In PAGE 7: ...repeated here). OBIG, however, yields superior covers for each graph except sg1770. More interestingly, OBIG also requires orders of magnitude less time to compute the better-quality solutions. Table 1 includes runtimes on the 8 large graphs for which PDI results were reported [15]. OBIG data is averaged over ten runs.... ..."

Cited by 6
