### Table 3 (a) Number of iterations to reduce the gap by a factor of $10^{12}$, averaged over 100 randomly generated problems. Mehrotra predictor-corrector rule; infeasible starting point; S: short-step failure (not included in average).

1995

"... In PAGE 20: ... More aggressive choices of the steplength parameter gave a significantly reduced number of iterations (without loss of feasibility) for the XZ+ZX method, but led to many failures for the XZ and NT algorithms. In Table 3, we show results for the XZ+ZX method when the problem size n is varied, using the PC rule and two choices of the steplength parameter. We see an iteration count which is essentially constant as n increases, with occasional failures (with steps too short) for the value 0.999.... ..."

Cited by 424

### Table 1 (a) Number of iterations to reduce the gap by a factor of $10^{12}$, averaged over 100 randomly generated problems. Basic iteration with parameter value 0.25; infeasible starting point; n = 20, m = 20; S: short-step failure (not included in average); E: exceeded-limit failure (not included in average).

1995

"... In PAGE 18: ... All methods were initialized with the infeasible starting point $(X^0, y^0, Z^0) = (I, 0, I)$. Table 1 shows results for the XZ+ZX, XZ, and NT basic iteration, using the value 0.25 in (2.17), with various choices for the steplength parameter... In PAGE 20: ... All experiments were conducted in Matlab, using IEEE double-precision arithmetic. Let us first consider the results shown in Table 1 for the Basic Iteration without the PC rule. For a steplength parameter of 0.9, all three methods show essentially the same number of iterations.... ..."
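The steplength parameter discussed in this excerpt controls how far each iterate is allowed to move toward the boundary of the positive-semidefinite cone. A minimal sketch of the standard step-to-boundary rule for a semidefinite iterate (my illustration of the general technique, not code from the cited paper) looks like this:

```python
import numpy as np

def max_step(X, dX):
    """Largest alpha such that X + alpha*dX stays positive semidefinite,
    for symmetric positive definite X and symmetric dX."""
    # Reduce to the eigenvalues of L^{-1} dX L^{-T}, where X = L L^T.
    L = np.linalg.cholesky(X)
    Linv = np.linalg.inv(L)
    M = Linv @ dX @ Linv.T
    lam_min = np.linalg.eigvalsh(M).min()
    # If dX pushes away from the boundary in every direction, no limit.
    return np.inf if lam_min >= 0 else -1.0 / lam_min

def step_length(X, dX, tau):
    """Step-to-boundary rule: take the fraction tau of the maximal step,
    capped at a full Newton step of 1."""
    return min(1.0, tau * max_step(X, dX))

# A conservative tau (e.g. 0.9) stays well inside the cone; an aggressive
# tau (e.g. 0.999) moves almost to the boundary, which is the trade-off
# the excerpt describes.
print(step_length(np.eye(3), -np.eye(3), 0.9))    # -> 0.9
print(step_length(np.eye(3), -np.eye(3), 0.999))  # -> 0.999
```

With X = I and dX = -I, the boundary is reached exactly at alpha = 1, so the rule returns tau itself; this is the mechanism by which tau = 0.999 produces near-boundary, occasionally failing steps.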

Cited by 424

### Table 1. Number of iterations (1-p). Columns: Problem, loqo, Path Following, Barrier, Affine-Scaling, Primal Path Following, Dual Path Following

1994

Cited by 1

### Table 6: Average iteration counts for the Nesterov-Todd (NT) and the new methods on logarithmic Chebychev approximation problems with random data. Results for structured data are similar, as seen from Table 7. 9 Conclusion: Primal-dual affine-scaling methods were analysed in a potential-reduction framework. This yielded new proofs of the polynomial worst-case iteration bounds of the short-step algorithms, as well as insight into ...

in Primal-Dual Potential Reduction Methods for Semidefinite Programming Using Affine-Scaling Directions

"... In PAGE 23: ... Given data $A = [a_1, \ldots, a_p]^T \in \mathbb{R}^{p \times k}$ and $b \in \mathbb{R}^p$, the problem becomes
$$\min_x \max_{i=1,\ldots,p} \; \log a_i^T x - \log b_i$$
which is equivalent to
$$\min t \quad \text{s.t.} \quad 1/t \le a_i^T x / b_i \le t, \quad i = 1, \ldots, p,$$
which in turn is equivalent to
$$\min t \quad \text{subject to} \quad \begin{bmatrix} t - a_i^T x / b_i & 0 & 0 \\ 0 & a_i^T x / b_i & 1 \\ 0 & 1 & t \end{bmatrix} \succeq 0, \quad i = 1, \ldots, p,$$
which is an SDP problem of dimension $n = 3p$, $m = k + 1$. The results are shown for problems with random data in Table 6. Here the NT method performs significantly better, requiring four to five fewer iterations on average in most cases.... ..."

### Table 3. Comparison of iteration counts between exact and inexact path following.

"... In PAGE 31: ... Compared to the exact path-following strategy of Algorithm EP, the inexact path-following concept of Algorithm IP is in many cases more efficient. In Table 3... ..."

### Table 4.8. Avoid Collision (AC) and Path Follow (PF) Behavior Activation Level Calculation

### Table 1. Sample paths followed by the agent's sequence of three actions. The bonus value is set equal to "9."

in Totally Model-Free Reinforcement Learning by Actor-Critic Elman Networks in Non-Markovian Domains

1998

"... In PAGE 3: ... Hence, this action sequence "d-u-d" gives the total (reward) value "21" (see Path 1 in Table 1). Similarly, another action sequence "u-u-d" gives the total value "16" (Path 2 in Table 1), which was the maximum total value before the bonus-rule was introduced. Yet, due to the bonus-rule, another sequence "u-d-u" gives the maximum possible total value "23" (see Path 3 in Table 1). Of course, the bonus-rule and the incurred-reward data used Table 1.... ..."

Cited by 1

### Table 4.1 Some of the transformations proposed by Meta for the MSS example. The double-arrow path denotes the derivation path followed in Fig. 4.1.

2002

Cited by 8

### Table 1. Experiment II: Experimental results for eight subjects: rows 1-4 for path following, rows 5-7 for off-path targeting, and rows 8-10 for avoidance.

2005

Cited by 4