### Table 2: Duality for closed conic convex programs

"... In PAGE 23: ...

$$ d^{*} \;=\; \inf\left\{\, s_5 \;\middle|\; \begin{bmatrix} 0 & 1 & 0 \\ 1 & s_2 & s_5/\sqrt{2} \\ 0 & s_5/\sqrt{2} & 0 \end{bmatrix} \succeq 0 \,\right\} \;=\; \infty. $$

Finally, the possibility of the entries in Table 2 where weak infeasibility is not involved can be demonstrated by a 2-dimensional linear programming problem:

Example 5. Let n = 2, c ∈ ℜ², K = K* = ℜ²₊, and A = {(x₁, x₂) | x₁ = 0}, A⊥ = {(s₁, s₂) | s₂ = 0}. We see that (P) is strongly feasible if c₁ > 0, weakly feasible if c₁ = 0, and strongly infeasible if c₁ < 0. Similarly, (D) is strongly feasible if c₂ > 0, weakly feasible if c₂ = 0, and strongly infeasible if c₂ < 0. ...

In PAGE 27: ...

- The regularized program CP(b, c, A, K′) is dual strongly infeasible if and only if F_D = ∅.

Combining Theorem 8 with Table 2, we see that the regularized conic convex program is in perfect duality:

Corollary 7. Assume the same setting as in Theorem 8. Then there holds:

- If d* = ∞, then the regularized primal CP(b, c, A, K′) is either infeasible or unbounded. ... ..."
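The displayed semidefinite problem (reading the garbled matrix as [[0, 1, 0], [1, s₂, s₅/√2], [0, s₅/√2, 0]], which is an assumed reconstruction on my part) has an empty feasible set: the principal minor [[0, 1], [1, s₂]] has determinant −1 for every s₂, so the matrix is never positive semidefinite. A quick numerical sanity check:

```python
import numpy as np

# Assumed reading of the matrix in the displayed problem:
#     [ 0        1        0      ]
#     [ 1        s2       s5/√2  ]
#     [ 0        s5/√2    0      ]
# The 2x2 principal minor [[0, 1], [1, s2]] has determinant -1 for every s2,
# so the matrix is never positive semidefinite: the feasible set is empty.
def min_eigenvalue(s2, s5):
    a = s5 / np.sqrt(2.0)
    M = np.array([[0.0, 1.0, 0.0],
                  [1.0,  s2,   a],
                  [0.0,   a, 0.0]])
    return np.linalg.eigvalsh(M)[0]   # smallest eigenvalue

# The smallest eigenvalue stays strictly negative over a wide grid.
worst = max(min_eigenvalue(s2, s5)
            for s2 in np.linspace(-10.0, 10.0, 41)
            for s5 in np.linspace(-10.0, 10.0, 41))
```

This is consistent with reading the right-hand side of the displayed infimum as ∞, i.e. an infimum over an empty set.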

### Table 1: Numerical results.

References

[1] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov and D. Sorensen, LAPACK Users' Guide, SIAM, Philadelphia, 1992.

[2] D. S. Atkinson and P. M. Vaidya, "A cutting plane algorithm that uses analytic centers", in "Nondifferentiable and Large Scale Optimization", J. L. Goffin and J. P. Vial, eds, Mathematical Programming, Series B, 69 (1995) 1-43.

[3] O. Bahn, J.-L. Goffin, J.-P. Vial and O. du Merle, "Experimental behaviour of an interior point cutting plane algorithm for convex programming: An application to geometric programming", Discrete Applied Mathematics 49 (1994) 3-23.

### Table 3: A set of 12 points whose convex hull is not castable. According to an exhaustive check of all possible planes through three vertices, the convex hull of the set of points given in Table 3 is not castable. The points were generated at random, near the surface of a sphere. By Observation 2, it suffices to consider the O(n³) casting planes through triples of vertices; each of these planes was considered in turn and the corresponding linear programs constructed as in Section 2.2. All computations were done in exact rational arithmetic, using Maple. A simple Maple program to check convex polyhedra for castability by brute force is available from the second author.
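The enumeration step described above can be sketched as follows (a minimal Python illustration of generating the O(n³) candidate planes through vertex triples; the castability test itself, one linear program per plane in the paper, is not reproduced here):

```python
from itertools import combinations

# Enumerate the candidate casting planes spanned by triples of vertices.
# Each plane is returned as (point on plane, normal vector); triples whose
# cross product vanishes are collinear and span no plane, so they are skipped.
def candidate_planes(points):
    planes = []
    for p, q, r in combinations(points, 3):
        u = tuple(q[i] - p[i] for i in range(3))
        v = tuple(r[i] - p[i] for i in range(3))
        normal = (u[1] * v[2] - u[2] * v[1],   # cross product u x v
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0])
        if any(normal):                        # skip collinear triples
            planes.append((p, normal))
    return planes
```

For 12 points in general position this yields C(12, 3) = 220 planes, matching the brute-force search the authors describe.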

### Table 1: Operation Counts for Overlap Tests. In general, we can expect that R̃₃₂ will not be zero, and short-circuit evaluation of the logical and will cause the more expensive inequality test to be skipped. Comparisons: We have implemented the algorithm and compared its performance with other box overlap algorithms. The latter include an efficient implementation of closest-features computation between convex polytopes [14] and a fast implementation of linear programming based on Seidel's algorithm [33]. Note that the last two implementations have been optimized for general convex polytopes, but not for boxes. All these algorithms are much faster than performing 144 edge-face intersections. We report the average time for checking overlap between two OBBs in Table 2. All the timings are in microseconds, computed on an HP 735/125.

1996

"... In PAGE 10: ... Among all the tests, the absolute value of each element of R̃ is used four times, so those expressions can be computed once before beginning the axis tests. The operation tally for all 15 axis tests is shown in Table 1. If any one of the expressions is satisfied, the boxes are known to be disjoint, and the remainder of the 15 axis tests are unnecessary.... In PAGE 10: ... In cases where a box extent is known to be zero, the expressions for the tests can be further simplified. The operation counts for overlap tests are given in Table 1, including when one or both boxes degenerate into a rectangle. Further reductions are possible when a box degenerates to a line segment.... ..."
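The 15 axis tests follow the standard separating-axis formulation for oriented bounding boxes; below is a sketch in that style (the function name, frame conventions, and the eps guard are my own, modeled on the usual OBB test rather than code from the paper). Each test returns early as soon as a separating axis is found, which is the short-circuit behavior discussed above.

```python
import numpy as np

def obb_overlap(t, R, a, b, eps=1e-9):
    """Separating-axis overlap test for two OBBs.

    t: center of B minus center of A, expressed in A's frame.
    R: 3x3 rotation expressing B's axes in A's frame.
    a, b: half-extents of A and B.  Returns True iff the boxes overlap.
    """
    AbsR = np.abs(R) + eps          # eps keeps near-parallel cross axes valid
    for i in range(3):              # axes A0, A1, A2
        if abs(t[i]) > a[i] + b @ AbsR[i]:
            return False            # separating axis found: early exit
    for j in range(3):              # axes B0, B1, B2
        if abs(t @ R[:, j]) > a @ AbsR[:, j] + b[j]:
            return False
    for i in range(3):              # the nine cross-product axes Ai x Bj
        for j in range(3):
            i1, i2 = (i + 1) % 3, (i + 2) % 3
            j1, j2 = (j + 1) % 3, (j + 2) % 3
            ra = a[i1] * AbsR[i2, j] + a[i2] * AbsR[i1, j]
            rb = b[j1] * AbsR[i, j2] + b[j2] * AbsR[i, j1]
            if abs(t[i2] * R[i1, j] - t[i1] * R[i2, j]) > ra + rb:
                return False
    return True                     # no separating axis: boxes overlap
```

Precomputing the absolute values of R (here, AbsR) once before the tests is exactly the sharing of subexpressions mentioned in the snippet above.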

Cited by 538

### Table 3 compares the performance of the algorithm using the convex envelope ("Linear") and α-based underestimation. The row labeled "Convex I" reports results obtained using a uniform diagonal shift matrix (a single α value per term), and "Convex II" was obtained using the scaled Gerschgorin theorem method, which generates one α per variable in a nonconvex term. The use of α leads to looser lower bounding functions than the convex envelope. Moreover, it requires the solution of a convex NLP for the generation of a lower bound, whereas a linear program is constructed when the convex envelope is used. As a result, both computation time and number of iterations increase significantly. The exploitation of the special structure of bilinear terms is thus expected to provide the best results in most cases. For very large problems, however, the introduction of additional variables and constraints may become prohibitive and a convex underestimator becomes more appropriate (Harding and Floudas, 1997).
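The uniform-α construction ("Convex I") can be sketched for a single bilinear term as follows. This is an illustrative stand-in, not the paper's code: α is taken from the plain Gerschgorin bound λ_min(H) ≥ min_i (H_ii − Σ_{j≠i} |H_ij|), a simplification of the scaled per-variable version mentioned above.

```python
import numpy as np

def gerschgorin_alpha(H):
    """Uniform alpha for an alphaBB-style underestimator.

    Uses the Gerschgorin bound lambda_min(H) >= min_i (H_ii - sum_{j!=i} |H_ij|).
    With alpha = max(0, -0.5 * bound), the function
    f(x) - alpha * sum_i (x_i - l_i)(u_i - x_i) has Hessian H + 2*alpha*I,
    which is positive semidefinite, so the underestimator is convex.
    """
    n = H.shape[0]
    bound = min(H[i, i] - sum(abs(H[i, j]) for j in range(n) if j != i)
                for i in range(n))
    return max(0.0, -0.5 * bound)

# Illustrative nonconvex term: f(x) = x0 * x1 on the box [0, 1]^2.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # constant Hessian of x0 * x1
alpha = gerschgorin_alpha(H)        # 0.5 for this H

def f(x):
    return x[0] * x[1]

def underestimator(x, lo=np.zeros(2), up=np.ones(2)):
    # Convex underestimator; it coincides with f at the box corners,
    # where every (x_i - l_i)(u_i - x_i) factor vanishes.
    return f(x) - alpha * np.dot(x - lo, up - x)
```

The gap f − underestimator equals α Σ (xᵢ − lᵢ)(uᵢ − xᵢ) ≥ 0 on the box, which is the "looser lower bounding function" trade-off the paragraph describes.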

"... In PAGE 15: ... Table 3: Computational results using different underestimating schemes.... ..."

### Table 1 is a comparison of bounds obtained from MSDR3 and other relaxation methods applied to instances from QAPLIB [6]. The first column, OPT, denotes the exact optimal value of the problem instance, while the following columns contain the lower bounds from the relaxation methods: GLB, the Gilmore-Lawler bound [10]; KCCEB, the dual linear programming bound [15]; PB, the projected eigenvalue bound [12]; QPB, the convex quadratic programming bound [1]; SDR1, SDR2, SDR3, the vector-lifting semidefinite relaxation bounds [27] computed by the bundle method [24]; the last column is our MSDR3. All output values are rounded up to the nearest integer. To solve the QAP, the minimization of trace(AXBXᵀ) and the minimization of trace(BXAXᵀ) are equivalent. But for the relaxation MSDR3, exchanging the roles of A and B results in two different formulations and bounds. In our tests we use both versions and take the larger output as the bound of MSDR3. We then keep the better formulation throughout the branch and bound process, so that we do not double the computational work.
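The stated equivalence of the two trace objectives follows from cyclic invariance of the trace: trace(A X B Xᵀ) = trace(B Xᵀ A X), so substituting X ↦ Xᵀ (a bijection on permutation matrices) maps one objective onto the other and the two minimizations share the same optimal value. A small numerical illustration (matrix sizes and data are arbitrary):

```python
import numpy as np

# trace(A X B X^T) = trace(B X^T A X) by cyclic invariance of the trace,
# so replacing X by X^T maps one QAP objective onto the other.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = np.eye(n)[rng.permutation(n)]      # a random permutation matrix

lhs = np.trace(A @ X @ B @ X.T)
rhs = np.trace(B @ X.T @ A @ X)
```

Since X ↦ Xᵀ is a bijection on permutation matrices, minimizing either expression over all X gives the same value, even though the MSDR3 relaxations of the two formulations differ.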

2006

"... In PAGE 16: ... We then keep the better formulation throughout the branch and bound process, so that we do not double the computational work. From Table 1, we see that the relative performances between the LP-based bounds GLB and KCCEB are unpredictable. At some instances, both are weaker than even the least expensive PB bounds.... In PAGE 17: ...a. 4887 4965 4621

| Instance | OPT | GLB | KCCEB | PB | QPB | SDR1 | SDR2 | SDR3 | MSDR3 |
|---|---|---|---|---|---|---|---|---|---|
| Nug30 | 6124 | 4539 | 4785 | 5266 | 5362 | 5413 | 5651 | 5803 | 5446 |
| rou12 | 235528 | 202272 | 223543 | 200024 | 205461 | 208685 | 219018 | 223680 | 207445 |
| rou15 | 354210 | 298548 | 323589 | 296705 | 303487 | 306833 | 320567 | 333287 | 303456 |
| rou20 | 725522 | 599948 | 641425 | 597045 | 607362 | 615549 | 641577 | 663833 | 609102 |
| scr12 | 31410 | 27858 | 29538 | 4727 | 8223 | 11117 | 23844 | 29321 | 18803 |
| scr15 | 51140 | 44737 | 48547 | 10355 | 12401 | 17046 | 41881 | 48836 | 39399 |
| scr20 | 110030 | 86766 | 94489 | 16113 | 23480 | 28535 | 82106 | 94998 | 50548 |
| tai12a | 224416 | 195918 | 220804 | 193124 | 199378 | 203595 | 215241 | 222784 | 202134 |
| tai15a | 388214 | 327501 | 351938 | 325019 | 330205 | 333437 | 349179 | 364761 | 331956 |
| tai17a | 491812 | 412722 | 441501 | 408910 | 415576 | 419619 | 440333 | 451317 | 418356 |
| tai20a | 703482 | 580674 | 616644 | 575831 | 584938 | 591994 | 617630 | 637300 | 587266 |
| tai25a | 1167256 | 962417 | 1005978 | 956657 | 981870 | 974004 | 908248 | 1041337 | 970788 |
| tai30a | 1818146 | 1504688 | 1565313 | 1500407 | 1517829 | 1529135 | 1573580 | 1652186 | 1521368 |
| tho30 | 149936 | 90578 | 99855 | 119254 | 124286 | 125972 | 134368 | 136059 | 122778 |

Table 1: Comparison of bounds for QAPLIB instances... ..."

Cited by 2

### Table 1: Running times for the exact kNN algorithm in seconds: 60 ≤ m ≤ 90, n ≤ 100. (* denotes infeasible) The asymptotic running time bound for the kNN algorithm is worse than the naive running time (§ 1.2). However, the naive algorithm frequently attains its worst case, particularly when the answer is "no". The kNN algorithm repeatedly computes "lower bounds" by solving linear programs for convex approximations to the Vᵢ and the Uᵢⱼ polygons. This technique enables it to quickly discard regions that have no solution and focus only on regions with a possible solution. In § 5.4 we compare the performance of this algorithm to our approximate kNN algorithm.
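The discard step is an instance of the generic lower-bound pruning pattern. A minimal sketch, with hypothetical `lower_bound` and `exact_cost` callables standing in for the paper's LP relaxations and exact per-region tests:

```python
# Minimal sketch of the pruning pattern described above.  `lower_bound` and
# `exact_cost` are hypothetical stand-ins: in the paper the lower bound comes
# from a linear program over a convex approximation of a region, and regions
# whose bound cannot beat the incumbent are discarded without exact work.
def prune_and_solve(regions, lower_bound, exact_cost):
    best = float("inf")
    examined = 0                      # how many regions needed exact work
    for region in regions:
        if lower_bound(region) >= best:
            continue                  # relaxation rules this region out
        examined += 1
        best = min(best, exact_cost(region))
    return best, examined
```

When the bounds are cheap and often decisive, most regions never reach the exact (expensive) computation, which is why the algorithm beats its pessimistic asymptotic bound in practice.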

### Table 1: Performance of our implementation on some polyhedral models. n denotes the number of facets of the model; h and k denote the number of convex hull facets and the number of EE-pairs, respectively; time denotes the time in seconds.

1999

"... In PAGE 55: ... First, we tested our implementation on real-world polyhedral models obtained from Stratasys, Inc. Table 1 gives the test results for ten models, which were chosen to encompass different geometries. For example, tod21.... In PAGE 55: ...s fishb.stl, which has 213,384 facets. Our program computed the width of the latter model within ten minutes. As can be seen in Table 1, the actual running time of the program heavily depends on the number, h, of facets of the convex hull. This is not surprising, because our compare functions are fairly complex.... ..."

### Table 1: Correlation between the Number of People Visible from Each Convex Space with Convex Configuration Variables

"... In PAGE 4: ... The next step of the analysis, however, provided a more promising result. As shown in Table 1, the number of people visible from each convex space was consistently correlated not only with the visual range of the space but also with its integration into the setting as a whole. That more people are visible from spaces which have a stronger visual range is hardly surprising.... In PAGE 5: ... However, these correlations were neither very strong nor consistent. Table 1. Correlation between the Number of People Visible from Each Convex Space with Convex Configuration Variables.... ..."