### Table 1: The correct velocity component values and their average absolute and percentage errors and standard deviations for the three 3D optical flow components computed by the 3D Lucas and Kanade algorithm for the sinusoid datasets. D = 1.0 was used.

2003

"... In PAGE 14: ... = 1.0 and 200 iterations were used. especially in Table 1 and to a lesser degree in Table 2 result from poor derivative calculations in slices 2-4 and 26-29, an artifact of the construction of the sinusoid data sequence. The velocity was 1.... ..."

### Table 1: The computation cost of one iteration of the Lucas-Kanade algorithm. If n is the number of warp parameters and N is the number of pixels in the template T, the cost of each iteration is O(n^2 N + n^3). The most expensive step by far is Step 6, the computation of the Hessian, which alone takes time O(n^2 N).

2004

"... In PAGE 7: ... The total computational cost of each iteration is therefore O(n^2 N + n^3), the most expensive step being Step 6. See Table 1 for a summary of these computational costs. 3 The Quantity Approximated and the Warp Update Rule In each iteration the Lucas-Kanade algorithm approximately minimizes: ∑_x [I(W(x; p + Δp)) − T(x)]^2 (12) with respect to Δp and then updates the estimates of the parameters in Step 9 as: p ← p + Δp (13) Perhaps somewhat surprisingly, iterating these two steps is not the only way to minimize the expression in Equation (12).... In PAGE 39: ...Table 10: The computation cost of the Levenberg-Marquardt inverse compositional algorithm. A number of steps have been re-ordered, a couple of steps have been extended, and two new steps have been introduced compared to the original algorithm in Figure 4.... In PAGE 43: ...Table 11: Timing results for our Matlab implementation of the six algorithms in milliseconds. These results are for the 6-parameter affine warp using a 100 × 100 pixel grey-scale template on a 933MHz Pentium-IV.... In PAGE 44: ...Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all equally efficient except Newton.... ..."

Cited by 144
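The per-iteration costs quoted in the caption above can be read off a direct Gauss-Newton step. The following is a minimal NumPy sketch, not the authors' code; the function name and array layout are hypothetical, with N template pixels and n warp parameters:

```python
import numpy as np

def lk_iteration_cost_sketch(grad_I, jacobian, error):
    """One Gauss-Newton step of a Lucas-Kanade-style update (hypothetical helper).

    grad_I:   (N, 2)    image gradient at each of the N template pixels
    jacobian: (N, 2, n) warp Jacobian dW/dp at each pixel
    error:    (N,)      per-pixel residual T(x) - I(W(x; p))
    """
    # Steepest-descent images, one n-vector per pixel: cost O(nN).
    sd = np.einsum('ni,nij->nj', grad_I, jacobian)
    # Hessian approximation: cost O(n^2 N) -- the dominant step (Step 6).
    H = sd.T @ sd
    # Right-hand side O(nN), then the n x n solve O(n^3).
    b = sd.T @ error
    return np.linalg.solve(H, b)
```

Because both `sd` and `H` depend on the current warp parameters, every line above must be re-run each iteration, giving the O(n^2 N + n^3) total stated in the caption.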

### Table 3: The computation cost of the simultaneous inverse compositional algorithm. Overall the algorithm is even slower than the Lucas-Kanade algorithm because the computational cost of most of the steps depends on the total number of parameters n + m rather than just the number of warp parameters n.

2004

"... In PAGE 11: ...e re-computed in every iteration. The result is the algorithm summarized in Figure 3. Overall the algorithm is even slower than the original Lucas-Kanade algorithm because the computational cost of most of the steps depends on the total number of parameters n + m rather than just the number of warp parameters n. See Table 3 for a summary of the computation cost.... In PAGE 12: ... Because the steepest descent images depend on the appearance parameters (see Equation (24)), Steps (5) and (6) must be performed in every iteration. See Table 3 for a summary of the computational cost. Table 3: The computation cost of the simultaneous inverse compositional algorithm.... In PAGE 38: ...exactly the same as the computational cost of the Euclidean version. See Table 3 for comparison. Pre-computation: Step 3 O(mN), Step 4 O(nN); total O((n+m)N). Per iteration: Step 1 O(nN), Step 2 O(mN), Step 5 O((n+m)N), Step 6 O((n+m)^2 N + (n+m)^3), Step 7 O((n+m)N), Step 8 O((n+m)^2), Step 9 O(n^2 + m); total O((n+m)^2 N + (n+m)^3). 4.... ..."

Cited by 144


### Table 7: The computation cost of the Newton inverse compositional algorithm. The one time pre-computation cost in Steps 3-5 is O(n^2 N). After that, the cost of each iteration is O(n^2 N + n^3), substantially more than the cost of the Gauss-Newton inverse compositional algorithm described in Figure 4, and asymptotically the same as the original Lucas-Kanade algorithm described in Figure 1 in Section 2.

2004

"... In PAGE 31: ... The Newton inverse compositional algorithm is asymptotically as slow as the original Lucas-Kanade algorithm. See Table 7 for a summary. 4.... ..."

Cited by 144

### Table 6: The computational cost of the inverse compositional iteratively reweighted least squares algorithm. The cost of each iteration is O(n^2 N + n^3), which is asymptotically as slow as the Lucas-Kanade algorithm. Since the algorithm is so slow, in [1] we considered two efficient approximations to it: (1) the H-Algorithm [9, 11] and (2) an algorithm that takes advantage of the spatial coherence of outliers. Both of these approximations move (most of) the cost of computing the Hessian into the pre-computation.

2004

"... In PAGE 34: ... The naive implementation of this algorithm is almost as slow as the original Lucas-Kanade algorithm. See Table 6 for the details. Table 6: The computational cost of the inverse compositional iteratively reweighted least squares algorithm.... In PAGE 34: ... The algorithm is summarized in Figure 11. The computational cost of the iteratively reweighted least squares algorithm is summarized in Table 6. The algorithm is as slow as the Lucas-Kanade algorithm.... ..."

Cited by 144
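The reason the reweighted variant loses the inverse compositional speed-up is that the robust weights depend on the current residuals, so the weighted Hessian must be rebuilt every iteration. A minimal NumPy sketch of one such step, under assumed names and an assumed (Geman-McClure-style) weight function, neither taken from the paper:

```python
import numpy as np

def irls_step(sd, error, scale=1.0):
    """One iteratively reweighted least squares step (hypothetical sketch).

    sd:    (N, n) steepest-descent images (pre-computable)
    error: (N,)   per-pixel residuals at the current parameters
    """
    # Robust weights from the current residuals -- an assumed choice.
    w = 1.0 / (1.0 + (error / scale) ** 2)
    # The weights change every iteration, so the weighted Hessian
    # must be recomputed each time: O(n^2 N), as slow as Lucas-Kanade.
    H = (sd * w[:, None]).T @ sd
    b = sd.T @ (w * error)            # O(nN)
    return np.linalg.solve(H, b)      # O(n^3)
```

The approximations mentioned in the caption work by fixing or spatially structuring `w` so that `H` (or most of it) moves back into the pre-computation.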


### Table 3: The computation cost of the inverse compositional algorithm. The one time pre-computation cost of computing the steepest descent images and the Hessian in Steps 3-6 is O(n^2 N). After that, the cost of each iteration is O(nN + n^2), a substantial saving over the Lucas-Kanade and compositional algorithms.

2004

"... In PAGE 14: ....2.4 Computational Cost of the Inverse Compositional Algorithm The inverse compositional algorithm is far more computationally efficient than either the Lucas-Kanade algorithm or the compositional algorithm. See Table 3 for a summary. The most time-consuming step, the computation of the Hessian in Step 6, can be performed once as a pre-computation.... ..."

Cited by 144
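The split between the one-time O(n^2 N) pre-computation and the cheap O(nN + n^2) iteration can be sketched as two functions. This is a minimal NumPy illustration, not the authors' implementation; the function names and array layout are hypothetical:

```python
import numpy as np

def ic_precompute(grad_T, jacobian):
    """One-time pre-computation (Steps 3-6): O(n^2 N).

    grad_T:   (N, 2)    gradient of the template T (fixed, so computable once)
    jacobian: (N, 2, n) warp Jacobian dW/dp evaluated at p = 0
    """
    sd = np.einsum('ni,nij->nj', grad_T, jacobian)  # steepest descent, O(nN)
    H_inv = np.linalg.inv(sd.T @ sd)                # Hessian + inverse, O(n^2 N + n^3)
    return sd, H_inv

def ic_iteration(sd, H_inv, error):
    """Each iteration: O(nN) for the dot products plus O(n^2) for the
    matrix-vector product -- no Hessian rebuild."""
    return H_inv @ (sd.T @ error)
```

Because the steepest-descent images are built from the template gradient rather than the warped input image, everything expensive is independent of the current parameter estimate, which is exactly what makes the caption's per-iteration saving possible.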

### Table 2: The computation cost of the compositional algorithm. The one time pre-computation cost of evaluating the Jacobian in Step 4 is O(nN). After that, the cost of each iteration is O(n^2 N + n^3).

2004

"... In PAGE 10: ....1.4 Computational Cost of the Compositional Algorithm The computational cost of the compositional algorithm is almost exactly the same as that of the Lucas-Kanade algorithm. See Table 2 for a summary. The only steps that change are Steps 3, 4, and 9.... ..."

Cited by 144
