### Table 5: A summary of the algorithms described in this paper. In Section 3 we described the inverse compositional algorithm with a weighted L2 norm. In Section 4 we described the inverse compositional iteratively reweighted least squares algorithm for a robust error function, and two efficient approximations. (Columns: Error Function | Algorithm | Efficient? | Correct? | Performance)

2004

"... In PAGE 37: ... pixels. This is a result of the slight smoothing in the computation of the Hessian. 5 Conclusion 5.1 Summary In Table 5 we present a summary of the algorithms described in this paper. In Section 3 we introduced the inverse compositional algorithm with a weighted L2 norm.... ..."

Cited by 144

### Table 6: The computational cost of the inverse compositional iteratively reweighted least squares algorithm. The cost of each iteration is O(n^2 N + n^3), which is asymptotically as slow as the Lucas-Kanade algorithm. Since the algorithm is so slow, in [1] we considered two efficient approximations to it: (1) the H-Algorithm [9, 11] and (2) an algorithm that takes advantage of the spatial coherence of outliers. Both of these approximations move (most of) the cost of computing the Hessian into the pre-computation.

2004

"... In PAGE 34: ... The naive implementation of this algorithm is almost as slow as the original Lucas-Kanade algorithm. See Table 6 for the details. Table 6: The computational cost of the inverse compositional iteratively reweighted least squares algorithm.... In PAGE 34: ... The algorithm is summarized in Figure 11. The computational cost of the iteratively reweighted least squares algorithm is summarized in Table 6. The algorithm is as slow as the Lucas-Kanade algorithm.... ..."

Cited by 144
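The iteratively reweighted least squares scheme named in the caption above can be sketched as a generic IRLS loop: repeatedly solve a weighted least squares problem whose weights come from a robust error function. This is an illustrative sketch only (function names and the choice of the Huber function are assumptions, not the cited paper's implementation); it shows why each iteration must rebuild the weighted Hessian, which is what makes the naive algorithm slow.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber robust weights: w(r) = 1 for |r| <= k, k/|r| otherwise."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def irls(A, b, n_iter=20):
    """Iteratively reweighted least squares for min sum rho(A x - b).

    Each iteration solves the weighted normal equations
    (A^T W A) x = A^T W b; the Hessian A^T W A depends on the
    current weights and so must be recomputed every iteration.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        r = A @ x - b
        w = huber_weights(r)
        AW = A * w[:, None]          # row-weighted design matrix
        x = np.linalg.solve(A.T @ AW, AW.T @ b)
    return x
```

Because outliers receive small weights, the robust fit is far less affected by them than an ordinary least squares fit on the same data.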

### Table 10: A summary of the 4 algorithms we described in Section 4 for image alignment with linear appearance variation using a robust error function. There are 2 main algorithms: (1) the robust simultaneous inverse compositional algorithm, and (2) the robust normalization inverse compositional algorithm. Both of these algorithms are very slow, and so for each we derived an efficient approximation.

2004

"... In PAGE 48: ... We also described efficient approximations to both of these algorithms. These four algorithms are summarized in Table 10. Of the 4 algorithms, the robust simultaneous algorithm performs the best, but unfortunately is very slow.... ..."

Cited by 144

### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, Diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all equally efficient except Newton. When combined with a forwards algorithm, only steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically.

2004

"... In PAGE 42: ... We have exhibited five alternatives: (1) Newton, (2) steepest descent, (3) diagonal approximation to the Gauss-Newton Hessian, (4) diagonal approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence.... ..."

Cited by 144

### Table (5.3): In this table we show the normalized delay parameters that are obtained by minimizing the first-order approximation of the MSE when using the DPT algorithm for various choices of the order of the polynomial, M. The statistical efficiency is also shown for each of the proposed choices.

1995

Cited by 6

### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, Diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all efficient except Newton. When combined with the forwards compositional algorithm, only the steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically.

2004

"... In PAGE 47: ... approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence.... ..."

Cited by 144
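The contrast the caption above draws between Gauss-Newton and Levenberg-Marquardt comes down to a damping term on the Gauss-Newton Hessian. The following is a minimal sketch of that idea on a toy two-parameter least squares problem (the Rosenbrock function in residual form); the problem, function names, and damping schedule are illustrative assumptions, not the cited paper's alignment setup.

```python
import numpy as np

def residuals(p):
    x, y = p
    return np.array([10.0 * (y - x * x), 1.0 - x])

def jacobian(p):
    x, _ = p
    return np.array([[-20.0 * x, 10.0],
                     [-1.0, 0.0]])

def levenberg_marquardt(p, lam=1e-3, n_iter=200):
    """Levenberg-Marquardt on min ||residuals(p)||^2.

    lam -> 0 recovers the Gauss-Newton step (solve J^T J step = -J^T r);
    large lam damps the step toward scaled steepest descent.
    """
    for _ in range(n_iter):
        r, J = residuals(p), jacobian(p)
        H, g = J.T @ J, J.T @ r
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 10.0   # accept: trust Gauss-Newton more
        else:
            lam *= 10.0                     # reject: damp harder
    return p
```

The accept/reject rule is what gives Levenberg-Marquardt its robustness: it interpolates between the fast Gauss-Newton step and a cautious descent step depending on whether the last step actually reduced the error.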

### Table 9: A summary of the 6 algorithms we described in Section 3 for image alignment with linear appearance variation using the Euclidean L2 norm. There are 3 main algorithms: (1) the simultaneous inverse compositional algorithm, (2) the project out inverse compositional algorithm, and (3) the normalization inverse compositional algorithm. Each of the 3 main algorithms has one variant.

2004

"... In PAGE 47: ... We also described an efficient approximation to the simultaneous algorithm, and step-size corrections to the project out and normalization algorithms. These algorithms are summarized in Table 9. Of the 6 algorithms, the simultaneous algorithm performs the best, but unfortunately is very slow.... ..."

Cited by 144
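The "project out" algorithm named in the caption above handles linear appearance variation by projecting the error image onto the subspace orthogonal to the appearance basis, so appearance changes no longer influence the motion parameters. A minimal sketch of that projection, assuming an orthonormal basis (names are illustrative):

```python
import numpy as np

def project_out(error, basis):
    """Remove the component of `error` lying in span(basis).

    `basis` is an N x k matrix with orthonormal columns spanning the
    linear appearance variation; the returned vector is orthogonal to
    every column of `basis`.
    """
    return error - basis @ (basis.T @ error)
```

Because the projection is linear and the basis is fixed, it can be folded into quantities that are precomputed once, which is the source of the efficiency the caption alludes to.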

### Table III compares the relative computational efficiency of our three schemes to the traditional MAX-LOG-MAP algorithm, for different numbers of states. An addition, a MAX operation, and a traceback operation are assumed to be equally costly. Although this is a rough approximation, it gives us a yardstick for a first-cut comparison of the different schemes. The traceback-initialized scheme of Section 3.4 is the most efficient, and ...
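The MAX-LOG-MAP algorithm referenced in the caption above replaces the exact "max-star" (Jacobian logarithm) operation of log-MAP decoding with a plain maximum, trading a small accuracy loss for the removal of the correction term. A minimal sketch of the two operations (standalone, not any of the paper's three schemes):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(exp(a) + exp(b)),
    computed stably as max(a, b) plus a correction term."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """MAX-LOG-MAP approximation: drop the correction term."""
    return max(a, b)
```

The correction term log1p(exp(-|a - b|)) is at most log 2 and shrinks quickly as the operands separate, which is why the max-only approximation costs so little in practice.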

### Table 1: Algorithm Efficiency

1991

Cited by 2