### Table 1 - Matching results on an unseen 10-minute data set in a single camera view.

"... In PAGE 4: ... It can be seen that target (b) was classified correctly to the ground truth as it obtained the best score during the recognition process. Summary of the recognition on the whole test set is provided in Table1 . Since the model is based on the trend in a pair of main paths, if the target uses uncommon paths which has no trend in the data set, the target can not be recognised.... ..."

### Table 1. Camera fields of view.

in Photogrammetric Set-Up for the Analysis of Particle Motion in Aerosol Under Microgravity Conditions

### Table 3. Off-line performance on the View Angle experiment. Since the operator library uses a single template taken at nadir view angle, the interpretation quality gracefully degrades as the scene is viewed from larger off-nadir angles. At extreme view angles the best interpretation is increased by employing a significantly different sequence of operators from the previous 5 view angles.

"... In PAGE 6: ...123456 View Angle Absolute Rewards depth 4 depth 5 depth 6 (0,10,20,40,80 or 160 units) left from the center of the scene and then rotating (pan) it back to view the center of the scene. Illustrated in Table3 , the system demonstrates a graceful degradation in performance as camera angle be- comes more and more pronounced. The on-line perfor- mance unfortunately is not as successful as in the sun angle experiment.... ..."

### Table 1. Experimental results of single view-specific neural networks

2000

"... In PAGE 5: ...From Table1 , we can see that if there is an accurate pose estimation process and the test image is fed to the right neural network, the recognition rate is about 97% on average, as shown by the diagonal line in the table. However, if the pose estimation is noisy, then the recognition ratio will drop very fast.... In PAGE 5: ...75% From Table 2 we can see that we can feed face images with all the poses and get almost the same recognition ratio around 98%. By comparing the experimental results in Table 2 with those in Table1 , we can see that even without knowing the pose information, the system achieves an average recognition ratio as high as 98.75%.... ..."

Cited by 26
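The excerpt above contrasts two routing strategies: trusting a (possibly noisy) pose estimate to pick one view-specific network, versus feeding the image to every network and keeping the overall best score. A minimal sketch of that comparison, with made-up score values (the networks and scores are illustrative assumptions, not the paper's data):

```python
# Hypothetical per-network class scores for one probe image:
# scores[p][c] = output of the pose-p network for identity class c.
scores = [
    [0.10, 0.80, 0.05],   # network trained for pose 0
    [0.20, 0.60, 0.30],   # network trained for pose 1
    [0.15, 0.95, 0.40],   # network trained for pose 2 (the true pose)
]

def classify_with_pose(pose_estimate):
    """Route the image to the single network chosen by a pose estimate."""
    row = scores[pose_estimate]
    return max(range(len(row)), key=row.__getitem__)

def classify_pose_free():
    """Feed the image to every network and keep the overall best score."""
    best_score, best_class = max(
        (s, c) for row in scores for c, s in enumerate(row)
    )
    return best_class

print(classify_with_pose(2))   # correct pose estimate -> identity 1
print(classify_pose_free())    # no pose estimate needed -> identity 1
```

With a wrong pose estimate, `classify_with_pose` can pick a weaker network's winner, whereas the pose-free maximum is unaffected, which mirrors the excerpt's observation that the combined system needs no pose information.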

### Table 1: Constraints on ω for various combinations of cameras and motion given the vanishing points vi of three orthogonal directions in the scene. The first three cases describe a single view of the scene where additional constraints are obtained from the knowledge that the camera is natural. Where two views are available, the vanishing line of the second view image plane seen in the first view is l'1. In the case of a fixed camera, la is the fixed line of H1 and li a side of the triangle of orthogonal vanishing points

1999

Cited by 33
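For context, the standard single-view constraints of this kind (in the usual notation for the image of the absolute conic ω; the table itself is not reproduced in the excerpt) are that vanishing points of mutually orthogonal directions are conjugate with respect to ω, and that a natural camera (zero skew, square pixels) restricts ω's entries:

```latex
\mathbf{v}_i^{\top}\,\boldsymbol{\omega}\,\mathbf{v}_j = 0
  \quad (i \neq j,\ \text{orthogonal directions}),
\qquad
\omega_{12} = 0,\quad \omega_{11} = \omega_{22}
  \quad (\text{natural camera}).
```

Three orthogonal vanishing points thus give three conjugacy constraints, and the natural-camera assumption adds two more, which is the sense in which the single-view cases in the table accumulate constraints on ω.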

### Table 1: Constraints on ω for various combinations of cameras and motion given the vanishing points vi of three orthogonal directions in the scene. The first three cases describe a single view of the scene where additional constraints are obtained from the knowledge that the camera is natural. Where two views are available, the vanishing line of the second view image plane seen in the first view is l'1. In the case of a fixed camera, la is the fixed line of H1 and li a side of the triangle of orthogonal vanishing points

### Table 3: Cube counts for octrees computed from synthetic models (single view)

1990

"... In PAGE 17: ... For most objects, this should be smaller than the surface area of the viewing cone formed by the camera and the object silhouette. To see if this occurs in practice, we can count the number of cubes in the octrees formed from a single silhouette ( Table3 ), which tells us the number of cubes in a viewing cone. Compared to the octree cube count from our hierarchical octree algorithm (Table 1), we see that the single-view count generally exceeds the 32-view count.... ..."

Cited by 8
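The comparison in the excerpt rests on a simple monotonicity argument: the shape from silhouettes is the intersection of viewing cones, so adding views can only remove cubes, and a single-view count bounds any multi-view count from above. A minimal voxel sketch of that argument (not the paper's octree algorithm; the spherical test object and axis-aligned orthographic views are assumptions made here for illustration):

```python
# Voxel-carving toy: count occupied voxels for one silhouette vs. two.
N = 16

def inside(x, y, z):
    """Hypothetical test object: a sphere centred in the grid."""
    c = (N - 1) / 2.0
    return (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= (N / 3.0) ** 2

grid = range(N)
voxels = [(x, y, z) for x in grid for y in grid for z in grid]

# Orthographic silhouettes: a pixel is set if any object voxel projects to it.
sil_z = {(x, y) for (x, y, z) in voxels if inside(x, y, z)}  # view along +z
sil_x = {(y, z) for (x, y, z) in voxels if inside(x, y, z)}  # view along +x

# One silhouette keeps the whole viewing cone; a second one carves it down.
one_view = sum((x, y) in sil_z for (x, y, z) in voxels)
two_views = sum((x, y) in sil_z and (y, z) in sil_x for (x, y, z) in voxels)

print(one_view, two_views)  # the two-view count never exceeds the one-view count
```

Here `one_view` counts the full viewing cylinder of the silhouette, while `two_views` counts the intersection of two perpendicular cylinders, which is strictly smaller for this object.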

### Table 2: The total number of degrees of freedom for different camera model systems.

"... In PAGE 2: ... However when the point is on the line the constraints are dependent. In Table2 the number of constraints different corre- sponding features give on three and four images is given. In Figure 1: Quivers with one, two and three directions.... In PAGE 3: ....1. Problem formulation and solution A quiver with one direction seen in three affine views gives four constraints on the camera geometry. And since three affine cameras have twelve degrees of freedom according to Table2 this is a minimal case. A point in three views gives essentially three constraints on the camera geometry but gives four constraints on the trifocal tensor, cf.... ..."