### Table 2: Duality for closed conic convex programs

"... In PAGE 23: ... d* = inf { s₅ : [ 0 1 0 ; 1 s₂ s₅/√2 ; 0 s₅/√2 0 ] ⪰ 0 } = 1. Finally, the possibility of the entries in Table 2 where weak infeasibility is not involved can be demonstrated by a 2-dimensional linear programming problem: Example 5. Let n = 2, c ∈ ℝ², K = K* = ℝ²₊, and A = {(x₁, x₂) | x₁ = 0}, A⊥ = {(s₁, s₂) | s₂ = 0}. We see that (P) is strongly feasible if c₁ > 0, weakly feasible if c₁ = 0, and strongly infeasible if c₁ < 0. Similarly, (D) is strongly feasible if c₂ > 0, weakly feasible if c₂ = 0, and strongly infeasible if c₂ < 0. ... In PAGE 27: ... • The regularized program CP(b, c, A, K′) is dual strongly infeasible if and only if F_D = ∅. Combining Theorem 8 with Table 2, we see that the regularized conic convex program is in perfect duality: Corollary 7. Assume the same setting as in Theorem 8. Then there holds: • If d* = ∞, then the regularized primal CP(b, c, A, K′) is either infeasible or unbounded. ... ..."
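The classification in Example 5 is simple enough to state as code. A minimal sketch, assuming the standard reading of the example (the helper name and return strings are ours): the primal feasible region is the line {(c₁, t) : t ∈ ℝ} intersected with the nonnegative orthant, so feasibility is governed entirely by the sign of c₁.

```python
# Sketch of Example 5: with K = R^2_+ and A = {x : x1 = 0}, the primal
# feasible set is the line {(c1, t) : t in R} within the nonnegative orthant.
# Classification by the sign of c1 (helper name is ours, not the paper's):

def classify_primal(c1: float) -> str:
    """Feasibility status of (P) in Example 5 as a function of c1."""
    if c1 > 0:
        return "strongly feasible"   # an interior point such as (c1, 1) > 0 exists
    if c1 == 0:
        return "weakly feasible"     # feasible points exist, but none interior
    return "strongly infeasible"     # the line stays a positive distance from R^2_+

assert classify_primal(1.0) == "strongly feasible"
assert classify_primal(0.0) == "weakly feasible"
assert classify_primal(-1.0) == "strongly infeasible"
```

The dual classification in the excerpt is the mirror image in c₂.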

### Table 3 compares the performance of the algorithm using the convex envelope ("Linear") and α-based underestimation. The row labeled "Convex I" reports results obtained using a uniform diagonal shift matrix (a single α value per term), and "Convex II" was obtained using the scaled Gerschgorin theorem method, which generates one α per variable in a nonconvex term. The use of α leads to looser lower bounding functions than the convex envelope. Moreover, it requires the solution of a convex NLP for the generation of a lower bound, whereas a linear program is constructed when the convex envelope is used. As a result, both computation time and number of iterations increase significantly. The exploitation of the special structure of bilinear terms is thus expected to provide the best results in most cases. For very large problems, however, the introduction of additional variables and constraints may become prohibitive and a convex α underestimator becomes more appropriate (Harding and Floudas, 1997).
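The two bounding schemes compared in Table 3 can be contrasted on a toy bilinear term. The sketch below (box, α value, and helper names are our choices for illustration) evaluates the McCormick convex envelope of f(x, y) = xy against a uniform-shift α underestimator on a grid; the α bound attains a strictly smaller minimum, i.e. it is looser, as the text describes.

```python
import numpy as np

# Compare the McCormick convex envelope of the bilinear term f(x, y) = x*y
# with an alpha-based (uniform diagonal shift) convex underestimator on the
# box [0, 1] x [0, 2].  The Hessian of x*y has eigenvalues +/-1, so the
# uniform shift alpha = 1/2 convexifies it.  (Box and alpha are ours; the
# paper's "Convex I/II" schemes differ in how alpha is chosen.)

xL, xU, yL, yU = 0.0, 1.0, 0.0, 2.0
alpha = 0.5

def mccormick(x, y):
    # Convex envelope of x*y over the box: max of the two McCormick planes.
    return np.maximum(xL * y + yL * x - xL * yL,
                      xU * y + yU * x - xU * yU)

def alpha_under(x, y):
    # alpha-underestimator: f(x, y) + alpha * sum_i (x_i^L - x_i)(x_i^U - x_i)
    return x * y + alpha * ((xL - x) * (xU - x) + (yL - y) * (yU - y))

xs, ys = np.meshgrid(np.linspace(xL, xU, 201), np.linspace(yL, yU, 201))
env_min = mccormick(xs, ys).min()
alpha_min = alpha_under(xs, ys).min()

# Both underestimate x*y, but the alpha bound is looser (smaller minimum),
# consistent with the comparison reported in Table 3.
assert alpha_min <= env_min
print(env_min, alpha_min)
```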

"... In PAGE 15: ... Table 3: Computational results using different underestimating schemes. ... ..."

### Table 5: Convex quadratics: log barrier method

"... In PAGE 30: ... The convergence criterion for the inner iteration was ‖∇ₓΦ(x_{k,j}; μ_k)‖ ≤ 10⁻⁷ ‖∇ₓΦ(x_{k,0}; μ_k)‖. We used μ₀ := 10⁵ and a reduction factor of 2·10⁻¹. Table 5 gives the results using the log barrier algorithm. Table 6 gives the results using the reciprocal barrier algorithm. ... ..."
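A minimal log-barrier sketch using the parameters reported in the excerpt (μ₀ = 10⁵, reduction factor 2·10⁻¹, relative inner tolerance 10⁻⁷). The toy QP and the damped-Newton inner solver are our own choices, not the paper's test set.

```python
import numpy as np

# Log-barrier sketch for min (1/2) x'Qx + c'x  s.t. x >= 0, with the reported
# settings: mu_0 = 1e5, reduction factor 2e-1, and inner stopping test
# ||grad Phi(x_kj; mu_k)|| <= 1e-7 ||grad Phi(x_k0; mu_k)||.

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -2.0])          # constrained optimum of this QP is x* = (0, 1)

def barrier_obj(x, mu):
    return 0.5 * x @ Q @ x + c @ x - mu * np.log(x).sum()

def solve_log_barrier(x, mu=1e5, factor=2e-1, mu_min=1e-8):
    while mu > mu_min:
        g0 = Q @ x + c - mu / x                      # grad Phi(x_k0; mu_k)
        g = g0
        while np.linalg.norm(g) > 1e-7 * np.linalg.norm(g0):
            H = Q + mu * np.diag(1.0 / x**2)         # barrier Hessian
            dx = np.linalg.solve(H, -g)              # Newton direction
            t = 1.0
            while np.any(x + t * dx <= 0):           # stay strictly feasible
                t *= 0.5
            for _ in range(50):                      # simple backtracking
                if barrier_obj(x + t * dx, mu) <= barrier_obj(x, mu):
                    break
                t *= 0.5
            x = x + t * dx
            g = Q @ x + c - mu / x
        mu *= factor                                  # outer update
    return x

x = solve_log_barrier(np.array([10.0, 10.0]))
print(x)  # approaches (0, 1) as mu decreases
```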

### Table 6: Convex quadratics: reciprocal barrier method

"... In PAGE 30: ... Table 5 gives the results using the log barrier algorithm. Table 6 gives the results using the reciprocal barrier algorithm. ... ..."
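A companion sketch for the reciprocal barrier, which replaces −μ Σ log xᵢ with +μ Σ 1/xᵢ. Same toy QP and damped-Newton inner loop as in the log-barrier sketch; the instance and solver details are our illustration, not the paper's experiments.

```python
import numpy as np

# Reciprocal-barrier sketch for min (1/2) x'Qx + c'x  s.t. x >= 0:
# Phi(x; mu) = f(x) + mu * sum_i 1/x_i, with the same outer/inner scheme
# as the log-barrier run (mu_0 = 1e5, reduction factor 2e-1).

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -2.0])          # constrained optimum x* = (0, 1)

def phi(x, mu):
    return 0.5 * x @ Q @ x + c @ x + mu * np.sum(1.0 / x)

def solve_reciprocal_barrier(x, mu=1e5, factor=2e-1, mu_min=1e-9):
    while mu > mu_min:
        g0 = Q @ x + c - mu / x**2                   # reciprocal-barrier gradient
        g = g0
        while np.linalg.norm(g) > 1e-7 * np.linalg.norm(g0):
            H = Q + 2.0 * mu * np.diag(1.0 / x**3)   # reciprocal-barrier Hessian
            dx = np.linalg.solve(H, -g)
            t = 1.0
            while np.any(x + t * dx <= 0):           # stay strictly feasible
                t *= 0.5
            for _ in range(50):                      # simple backtracking
                if phi(x + t * dx, mu) <= phi(x, mu):
                    break
                t *= 0.5
            x = x + t * dx
            g = Q @ x + c - mu / x**2
        mu *= factor
    return x

x = solve_reciprocal_barrier(np.array([10.0, 10.0]))
print(x)  # approaches (0, 1) as mu decreases
```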

### Table 1 is a comparison of bounds obtained from MSDR3 and other relaxation methods applied to instances from QAPLIB [6]. The first column OPT denotes the exact optimal value of the problem instance, while the following columns contain the lower bounds from the relaxation methods: GLB, the Gilmore-Lawler bound [10]; KCCEB, the dual linear programming bound [15]; PB, the projected eigenvalue bound [12]; QPB, the convex quadratic programming bound [1]; SDR1, SDR2, SDR3, the vector-lifting semidefinite relaxation bounds [27] computed by the bundle method [24]; the last column is our MSDR3. All output values are rounded up to the nearest integer. For the QAP, minimizing trace(AXBXᵀ) and minimizing trace(BXAXᵀ) are equivalent. But for the relaxation MSDR3, exchanging the roles of A and B results in two different formulations and bounds. In our tests we use both versions and take the larger output as the bound of MSDR3. We then keep the better formulation throughout the branch and bound process, so that we do not double the computational work.
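The claimed equivalence of the two trace objectives can be checked by brute force on a small instance (random data is ours): trace(AXBXᵀ) = trace(BXᵀAX), and Xᵀ ranges over permutation matrices exactly when X does, so the two minima coincide.

```python
import itertools
import numpy as np

# Brute-force check: over permutation matrices X,
#   min trace(A X B X^T)  ==  min trace(B X A X^T),
# since trace(A X B X^T) = trace(B X^T A X) and X^T is again a permutation.
# (Random symmetric instance is ours, for illustration only.)

rng = np.random.default_rng(0)
n = 4
A = rng.integers(0, 10, (n, n)); A = A + A.T   # symmetric QAP data
B = rng.integers(0, 10, (n, n)); B = B + B.T

def qap_min(A, B):
    best = np.inf
    for perm in itertools.permutations(range(n)):
        X = np.eye(n)[list(perm)]                 # permutation matrix
        best = min(best, np.trace(A @ X @ B @ X.T))
    return best

assert qap_min(A, B) == qap_min(B, A)
```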

2006

"... In PAGE 16: ... We then keep the better formulation throughout the branch and bound process, so that we do not double the computational work. From Table 1, we see that the relative performances of the LP-based bounds GLB and KCCEB are unpredictable. On some instances, both are weaker than even the least expensive PB bounds. ... In PAGE 17: ...a. 4887 4965 4621

| Instance | OPT | GLB | KCCEB | PB | QPB | SDR1 | SDR2 | SDR3 | MSDR3 |
|---|---|---|---|---|---|---|---|---|---|
| Nug30 | 6124 | 4539 | 4785 | 5266 | 5362 | 5413 | 5651 | 5803 | 5446 |
| rou12 | 235528 | 202272 | 223543 | 200024 | 205461 | 208685 | 219018 | 223680 | 207445 |
| rou15 | 354210 | 298548 | 323589 | 296705 | 303487 | 306833 | 320567 | 333287 | 303456 |
| rou20 | 725522 | 599948 | 641425 | 597045 | 607362 | 615549 | 641577 | 663833 | 609102 |
| scr12 | 31410 | 27858 | 29538 | 4727 | 8223 | 11117 | 23844 | 29321 | 18803 |
| scr15 | 51140 | 44737 | 48547 | 10355 | 12401 | 17046 | 41881 | 48836 | 39399 |
| scr20 | 110030 | 86766 | 94489 | 16113 | 23480 | 28535 | 82106 | 94998 | 50548 |
| tai12a | 224416 | 195918 | 220804 | 193124 | 199378 | 203595 | 215241 | 222784 | 202134 |
| tai15a | 388214 | 327501 | 351938 | 325019 | 330205 | 333437 | 349179 | 364761 | 331956 |
| tai17a | 491812 | 412722 | 441501 | 408910 | 415576 | 419619 | 440333 | 451317 | 418356 |
| tai20a | 703482 | 580674 | 616644 | 575831 | 584938 | 591994 | 617630 | 637300 | 587266 |
| tai25a | 1167256 | 962417 | 1005978 | 956657 | 981870 | 974004 | 908248 | 1041337 | 970788 |
| tai30a | 1818146 | 1504688 | 1565313 | 1500407 | 1517829 | 1529135 | 1573580 | 1652186 | 1521368 |
| tho30 | 149936 | 90578 | 99855 | 119254 | 124286 | 125972 | 134368 | 136059 | 122778 |

Table 1: Comparison of bounds for QAPLIB instances ... ..."

Cited by 2
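The eigenvalue idea behind the PB column can be illustrated numerically. For symmetric A and B and any orthogonal X (permutation matrices included), trace(AXBXᵀ) is bounded below by the minimal scalar product of the two spectra; this is the plain eigenvalue bound, which the projected bound PB then tightens. The random instance below is ours.

```python
import itertools
import numpy as np

# For symmetric A, B and any orthogonal X (permutations included),
#   trace(A X B X^T) >= <lambda(A), lambda(B)>_-,
# the minimal scalar product of the spectra (one sorted ascending, the
# other descending).  Since permutations are orthogonal, this lower-bounds
# the QAP optimum.  (Random instance is ours, for illustration.)

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); A = A + A.T
B = rng.standard_normal((n, n)); B = B + B.T

lamA = np.sort(np.linalg.eigvalsh(A))           # ascending
lamB = np.sort(np.linalg.eigvalsh(B))[::-1]     # descending
eig_bound = lamA @ lamB                         # minimal scalar product

opt = min(np.trace(A @ np.eye(n)[list(p)] @ B @ np.eye(n)[list(p)].T)
          for p in itertools.permutations(range(n)))

assert eig_bound <= opt + 1e-9
print(eig_bound, opt)
```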

### Table 1: List of nonconvex and convex potential functions that have been used.

1998

"... In PAGE 6: ... Depending on the choice of the potential function, (2) includes many common MRF models that have been proposed in the literature. Table 1 lists a variety of such potential functions. Notice that only the GGMRF model depends on p through the potential function. ... In PAGE 10: ...4 ML Estimate of σ and p for Non-scalable Priors. In this section, we derive methods to compute the joint ML estimates of σ and p when the potential function is not scalable. This includes all the potential functions of Table 1 except the Gaussian, Laplacian, and GGMRF. Notice that u(x; p) is not a function of p for any of the non-scalable potential functions. ... ..."

Cited by 34
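The scalable/non-scalable distinction in the excerpt can be made concrete. A potential ρ is scalable when ρ(Δ/σ) factors as c(σ)·ρ(Δ); the GGMRF potential |Δ|^p has this property, while a Huber-type potential does not. The numeric check below is our illustration, with the usual forms of the two potentials.

```python
import numpy as np

# Why the GGMRF potential is "scalable" while e.g. a Huber-type potential
# is not: rho(d / sigma) = sigma**(-p) * rho(d) holds for rho(d) = |d|**p,
# so the scale parameter factors out of the prior energy; no such
# factorization exists for Huber.  (Potential definitions follow the usual
# forms; the numeric check is our illustration.)

def ggmrf(d, p=1.2):
    return np.abs(d) ** p

def huber(d, T=1.0):
    return np.where(np.abs(d) <= T, d**2, 2 * T * np.abs(d) - T**2)

d = np.linspace(-3, 3, 7)
sigma, p = 2.0, 1.2

# GGMRF: the scalability identity holds to machine precision.
assert np.allclose(ggmrf(d / sigma, p), sigma**(-p) * ggmrf(d, p))

# Huber: no constant c(sigma) satisfies huber(d/sigma) = c * huber(d) for
# all d; the pointwise ratios vary with d.
ratios = huber(d / sigma) / np.where(huber(d) == 0, np.nan, huber(d))
print(np.nanstd(ratios))  # strictly positive spread -> not scalable
```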

### Table 1. A comparison of update methods for different Voronoi types. The number of points is n, h is the number of points on the convex hull prior to update and k is the number of points on the convex hull following update.

"... In PAGE 27: ... The algorithmic complexity of the update methods is highly dependent on the underlying Voronoi diagram, with the OVD being the simplest and most efficient and the MWVD being the most expensive. Table 1 ... ..."

### Table 2 : Results on sizing various circuits using iCONTRAST

1993

"... In PAGE 28: ... Table 2 shows the area of the circuit after it has been sized by iCONTRAST to meet a delay specification, T_spec, and the execution time on a Sun SPARCstation I. Since our method solves the underlying convex programming problem exactly, the areas shown here correspond to the globally optimum solution to the transistor sizing problem, with an accuracy that is dictated by the tightness of the user-specified termination criterion. ... ..."

Cited by 75
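The "underlying convex programming problem" of transistor sizing can be sketched on a toy instance: minimize total device width subject to an Elmore-style delay constraint. Under the substitution wᵢ = exp(zᵢ) this is a convex (geometric) program, which is why a method of this kind can certify the global optimum. The two-variable instance, constants, and solver choice below are ours, not iCONTRAST's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex sizing problem: minimize area w1 + w2 subject to an
# Elmore-style delay constraint a/w1 + b/w2 <= T_spec.  With w_i = exp(z_i)
# the objective and constraint are both convex in z (a geometric program),
# so a local solver finds the global optimum.  (Instance is ours.)

a, b, T_spec = 4.0, 1.0, 1.0

def area(z):
    return np.exp(z).sum()

def delay_slack(z):
    w = np.exp(z)
    return T_spec - (a / w[0] + b / w[1])

res = minimize(area, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": delay_slack}])
w = np.exp(res.x)

# Closed form for this instance: w_i proportional to the square root of its
# delay coefficient, with optimal area (sqrt(a) + sqrt(b))**2 / T_spec = 9.
print(w, area(res.x))
```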

### Table 1: Running times for the continuous ASG algorithm and the discretized version

The number of parameter functions that are discontinuous at the same time was varied; however, this was found to make relatively little difference to the running time.

5 Concluding Remarks

We have presented an active-set method which, under mild assumptions on the problem's parameters, is capable of finding the exact solution to the continuous-time quadratic cost network flow problem efficiently. Although only a relatively simple example is included here for illustration purposes, the algorithm has been tested extensively on many other large and highly non-trivial problems, and has consistently shown the same efficiency. Other extensions of the algorithm are possible, such as relaxing the strong convexity of the problem to weak convexity, or relaxing the network structure of the problem to obtain a more general continuous-time monotropic programming problem (Rockafellar 1984). We are also conducting research into using this kind of model for water distribution networks and traffic flow problems.

"... In PAGE 17: ...2 Comparison with Discretization. To show that the continuous ASG algorithm is more effective in practice than simply discretizing the problem, we solved a range of randomly generated problems using the two different approaches. The running times in CPU seconds on a DEC Alpha workstation are summarized in Table 1. Problems are classified according to the number of nodes, arcs, and atomic intervals. ... ..."
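The discretization baseline compared against ASG can be illustrated on a tiny continuous-time quadratic-cost flow with a kinked (piecewise-linear) demand, echoing the discontinuous parameter functions mentioned above. The instance, the midpoint-rule grid, and the closed-form per-time split are our choices; the paper's ASG method solves the continuous problem exactly instead of gridding it.

```python
import numpy as np

# Toy continuous-time quadratic-cost flow: two parallel arcs from source to
# sink with cost (q1*x1(t)**2 + q2*x2(t)**2)/2 and conservation
# x1(t) + x2(t) = d(t) on [0, 1].  Per time point the optimal split is
# closed-form (KKT: q1*x1 = q2*x2), so we can watch the discretized
# objective approach the continuous one as the grid is refined.

q1, q2 = 1.0, 3.0
d = lambda t: np.maximum(0.2, 1.5 - 2.0 * t)   # demand with a kink at t = 0.65

def split_cost(demand):
    # Minimize (q1*x1**2 + q2*x2**2)/2 s.t. x1 + x2 = demand.
    x1 = demand * q2 / (q1 + q2)
    x2 = demand * q1 / (q1 + q2)
    return 0.5 * (q1 * x1**2 + q2 * x2**2)

def discretized_cost(n_intervals):
    t = (np.arange(n_intervals) + 0.5) / n_intervals   # midpoint rule
    return split_cost(d(t)).sum() / n_intervals

coarse, fine = discretized_cost(10), discretized_cost(1000)
exact = discretized_cost(100000)                 # stand-in for the integral
print(abs(coarse - exact), abs(fine - exact))    # error shrinks with the grid
```

The point of the comparison in Table 1 is that reaching high accuracy this way requires ever finer grids, whereas the active-set method recovers the exact continuous solution directly.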