### Table 1: One iteration of the re-scaled perceptron algorithm is one pass of Steps 2-6.

2006

"... In PAGE 5: ... Herein we extend these ideas to the conic setting. Table1 contains a description of our algorithm, which is a structural extension of the algorithm in [4]. Note that the perceptron improvement phase requires a deep-separation oracle for F instead of the interior separation oracle for F as required by the perceptron algorithm.... ..."

Cited by 1

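The perceptron improvement phase the snippet contrasts with can be sketched in its classical form. Everything below is illustrative: `oracle` is a hypothetical interior separation oracle that returns a violated constraint vector (or `None` when the current point is feasible), and the normalized update is the standard perceptron step, not the paper's conic deep-separation variant.

```python
import numpy as np

def perceptron(oracle, n, max_iter=1000):
    """Classical perceptron: find x with <a, x> > 0 for every constraint a.

    `oracle(x)` is a hypothetical separation oracle: it returns a violated
    constraint vector a (one with <a, x> <= 0), or None if x is feasible.
    """
    x = np.zeros(n)
    for _ in range(max_iter):
        a = oracle(x)
        if a is None:
            return x                       # feasible point found
        x = x + a / np.linalg.norm(a)      # normalized perceptron update
    return None                            # no feasible point within budget
```

The rescaling idea in the cited algorithm wraps repeated runs of this phase in a change of coordinates that widens the feasible cone, which is what drives the improved complexity bound.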

### Table 1: An outline of the Improved Iterative Scaling algorithm for estimating the parameters for maximum entropy. Using the inequality −log(x) ≥ 1 − x and Jensen's inequality, we can bound this expression from below with a new expression we call B:

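The two bounds named in the caption can be checked numerically. This is only a sanity check of the inequalities the derivation of B rests on (the first-order bound on the logarithm, and Jensen's inequality for the convex function exp), not the IIS algorithm itself.

```python
import math
import random

random.seed(0)

# First bound: -log(x) >= 1 - x for all x > 0.
for _ in range(1000):
    x = random.uniform(1e-6, 10.0)
    assert -math.log(x) >= 1.0 - x - 1e-12

# Jensen's inequality for exp: exp(sum p_i q_i) <= sum p_i exp(q_i)
# whenever p is a probability distribution.
p = [0.2, 0.5, 0.3]
q = [random.uniform(-2.0, 2.0) for _ in p]
lhs = math.exp(sum(pi * qi for pi, qi in zip(p, q)))
rhs = sum(pi * math.exp(qi) for pi, qi in zip(p, q))
assert lhs <= rhs + 1e-12
```

IIS applies exactly these two steps to the change in log-likelihood, which is how the auxiliary lower bound B becomes tractable to maximize one parameter at a time.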
### Table 1: Matrix S3RMQ4M1, standard AINV algorithm, MMD nodal ordering. Columns: preconditioner, scaling, iterations, notes.

2001

"... In PAGE 14: ... If a static AINV preconditioner with the same sparsity pattern as that of A, denoted by AINV(0), is applied to an a priori shifted matrix A+ diag(A), breakdowns can be avoided and the PCG iteration converges. However, the number of iterations can be rather high; see Table1 , where the matrix S3RMQ4M1 is used as a test case. This matrix can be obtained from the Matrix Market [39].... In PAGE 16: ...block Jacobi scaling. This is shown in Table1 , where denotes the relative density of the preconditioner (i.e.... ..."

Cited by 5

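The iteration counts reported in the table come from preconditioned CG runs. Below is a minimal PCG sketch that counts iterations to convergence; the helper name `cg_iterations` and the diagonal (Jacobi-style) preconditioner in the usage are illustrative assumptions, not the paper's AINV preconditioner.

```python
import numpy as np

def cg_iterations(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients on SPD A; returns the iteration
    count at convergence (hypothetical helper, not the paper's code)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return k
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # update search direction
        rz = rz_new
    return max_iter                        # did not converge
```

The shifted-matrix trick in the snippet amounts to building the preconditioner from a diagonally perturbed copy of A so the incomplete factorization cannot break down, then applying it to the original system; the price, as the table shows, is a higher iteration count.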
### Table 3: Performance comparison of our algorithm with and without scaling under two additive constraints for the irregular topology in Figure 11. The worst-case complexity will be log c1 log W iterations of Dijkstra's algorithm. The improvement in the SR with scaling is achieved at an average extra cost of one or two Dijkstra's iterations per connection request.

2000

"... In PAGE 19: ... The SR of our algorithm with scaling is almost equal to that of the optimal algorithm. For different ranges of c1 and c2, Table3 shows the obtained SR and ANDI. Since a binary search is used to find an appropriate scaling factor x in the range [1, c2],... ..."

Cited by 26

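The binary search over the scaling factor x in [1, c2], with one Dijkstra run per probe, can be sketched as follows. This is a hypothetical reading of the scheme in the caption: the combined edge weight w1 + x·w2, the probe budget, and the steering rule are assumptions, not the paper's exact algorithm.

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Standard Dijkstra over `graph[u] = [(v, w1, w2), ...]`, minimizing
    the scalar `weight(w1, w2)`. Returns the (w1, w2) totals of the path
    found, or None if dst is unreachable."""
    dist = {src: 0.0}
    totals = {src: (0.0, 0.0)}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return totals[u]
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w1, w2 in graph.get(u, []):
            nd = d + weight(w1, w2)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                a, b = totals[u]
                totals[v] = (a + w1, b + w2)
                heapq.heappush(pq, (nd, v))
    return None

def route_with_scaling(graph, src, dst, c1, c2, probes=20):
    """Binary-search x in [1, c2]; each probe is one Dijkstra run on the
    combined weight w1 + x * w2 (illustrative sketch)."""
    lo, hi = 1.0, float(c2)
    for _ in range(probes):
        x = (lo + hi) / 2.0
        res = dijkstra(graph, src, dst, lambda w1, w2: w1 + x * w2)
        if res is None:
            return None
        t1, t2 = res
        if t1 <= c1 and t2 <= c2:
            return res                     # both additive constraints met
        if t2 > c2:
            lo = x                         # weight the second metric more
        else:
            hi = x
    return None
```

This matches the cost accounting in the caption: the base cost is one shortest-path computation, and the binary search adds only a few extra Dijkstra runs per connection request.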
### Table 2: Relative CPU times for three different scaling algorithms. … scaling algorithm is almost two orders of magnitude slower than either estimate-based algorithm.

1996

"... In PAGE 5: ... It modifies the original scale function to take additional arguments f and e, and it uses a table to look up the value of 1/log2(B) for 2 ≤ B ≤ 36. Table2 gives the relative CPU times for Steele and White's iterative scaling algorithm [5] and the floating-point logarithm scaling algorithm with respect to our simple estimate and scaling algorithm. The timings were performed using Chez Scheme on a DEC AXP 8420 running Digital UNIX V3.... ..."

Cited by 10

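The estimate-and-fix-up scale step the snippet describes can be sketched roughly as below: estimate the output exponent from the binary exponent via a table of 1/log2(B), then correct with a cheap check. The function name and the integer-significand convention are assumptions for illustration, not the paper's Scheme code.

```python
import math

# Table of 1/log2(B) for bases 2..36, mirroring the lookup described above.
INV_LOG2 = {B: 1.0 / math.log2(B) for B in range(2, 37)}

def scale(f, e, B=10):
    """Return the smallest k with f * 2**e < B**k, for integer f > 0.

    The estimate uses only the binary magnitude: floor(log2 v) is
    e + f.bit_length() - 1, and log_B(v) = log2(v) / log2(B). The
    estimate can come out slightly low, so a cheap fix-up loop corrects
    it (normally at most one step).
    """
    log2_v = e + f.bit_length() - 1        # floor(log2 of f * 2**e)
    k = math.floor(log2_v * INV_LOG2[B]) + 1
    v = f * 2.0 ** e
    while B ** k <= v:                     # fix-up: estimate was low
        k += 1
    return k
```

Because the estimate plus one comparison almost always suffices, this avoids both the repeated multiplications of the iterative approach and a full floating-point logarithm per conversion, which is consistent with the relative timings in the table.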