## Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction (2012)

### Citations

8898 | Distinctive image features from scale-invariant keypoints
- Lowe
Citation Context: ...eature should be invariant to changes in scale, rotation or translation within the image-space. Because of these requirements, the Scale Invariant Feature Transform (SIFT) algorithm developed by Lowe [12] was chosen as the feature detection algorithm used in this research. The SIFT algorithm is composed of four main stages: scale-space extrema detection, keypoint localization, orientation assignment a...

3907 | Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
- Fischler, Bolles
- 1981
Citation Context: ...thm used for both automatic calibration and image-aided navigation is the Nearest Neighbor Distance Ratio (NNDR). NNDR was chosen over other common techniques such as Random Sample Consensus (RANSAC) [4] due to the deep coupling of inertial and imaging sensors provided by the image-aided navigation algorithm discussed in Section 2.7. Feature matching is accomplished by comparing feature descriptors, ...
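The NNDR test mentioned in this context can be sketched as follows. The toy descriptors and the helper name `nndr_match` are illustrative; the 0.45 ratio threshold echoes the value quoted elsewhere on this page.

```python
import numpy as np

def nndr_match(desc_a, desc_b, ratio=0.45):
    """Match descriptors from image A to image B using the Nearest
    Neighbor Distance Ratio (NNDR) test: accept a match only when the
    nearest neighbor is much closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in B.
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        if nearest < ratio * second:  # ambiguous matches are rejected
            matches.append((i, int(np.argmin(dists))))
    return matches

# Toy 4-D "descriptors": row 0 of A has one clear partner in B,
# while row 1 has two nearly equidistant candidates and is rejected.
a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.5, 0.5, 0.5]])
b = np.array([[1.0, 0.05, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.45, 0.55, 0.5, 0.5],
              [0.5, 0.5, 0.45, 0.55]])
print(nndr_match(a, b))  # → [(0, 0)]
```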

3836 | A New Approach to Linear Filtering and Prediction Problems
- Kalman
- 1960
Citation Context: ...ction 2.7. Additional information on epipolar geometry and binocular stereopsis can be found in Szeliski’s textbook [19]. 2.6 Kalman Filtering. The Kalman filter, developed by Rudolf Kalman in 1960 [10], provides a method for the optimal combination of measurements made by multiple sensors (e.g., inertial and imaging). The Kalman filter uses Bayesian statistics to combine dynamics and measurements m...
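The "optimal combination of measurements" can be illustrated with the scalar case, where the Kalman update reduces to inverse-variance weighting; the numbers below are made up.

```python
def fuse(x1, var1, x2, var2):
    """Minimum-variance fusion of two independent scalar measurements
    of the same quantity -- equivalent to a single Kalman update."""
    k = var1 / (var1 + var2)   # Kalman gain
    x = x1 + k * (x2 - x1)     # fused estimate, between the two inputs
    var = (1.0 - k) * var1     # fused variance, <= min(var1, var2)
    return x, var

# A drifting inertial estimate fused with a noisier-but-bounded imaging fix.
x, var = fuse(10.0, 4.0, 12.0, 1.0)
```

The fused variance is always below either input variance, which is why coupling inertial and imaging sensors tightens the navigation solution.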

2041 | Good features to track
- Shi, Tomasi
- 1994
Citation Context: ...nges in scale (zoom), rotations about the c-frame’s z-axis (2-D rotations), and illumination, feature tracking effectiveness diminishes with rotations about the c-frame’s x and y axes (3-D rotations) [18]. Figure 3.5 illustrates a 3-D rotation between two images of the same scene. Using the conventional SIFT descriptors found for both images and a NNDR of 0.45, there was only one positive match found...

507 | Flexible Camera Calibration by Viewing a Plane from Unknown Orientations
- Zhang
- 1999
Citation Context: ...n. The current tool of choice for camera calibration is the Camera Calibration Toolbox from Caltech [1]. The algorithms used inside the tool are heavily based on Dr. Zhang’s camera calibration work [24] and further explained in the next chapter. Currently, a standard two-camera calibration procedure is as follows: 1. First, the cameras are rigidly mounted to a camera bar. 2. Next, a planar calibrati...

344 | Stochastic models, estimation and control (Volume 1)
- Maybeck
- 1979
Citation Context: ...e dynamics and measurements models, which provides a solution estimate with the lowest possible uncertainty. This section outlines the basic principles behind the Kalman filter as outlined by Maybeck [13], [14]. 2.6.1 Linear Kalman Filter. The physical system dynamics are modeled using the form ẋ(t) = Fx(t) + Bu(t) + Gw(t) (2.46) where x is a vector containing the system states of interest, u is a vecto...
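A minimal discrete-time sketch of the linear model ẋ(t) = Fx(t) + Bu(t) + Gw(t): a 1-D constant-velocity system with assumed noise covariances. The matrices and measurement values are illustrative, not taken from the thesis.

```python
import numpy as np

# For a constant-velocity model, the state transition matrix
# Phi = expm(F*dt) has the closed form below; the control input u is
# taken as zero, so the B term drops out.
dt = 0.1
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])   # propagates [position, velocity]
H = np.array([[1.0, 0.0]])     # we measure position only
Q = 0.01 * np.eye(2)           # process-noise covariance (assumed)
R = np.array([[0.25]])         # measurement-noise covariance (assumed)

def predict(x, P):
    return Phi @ x, Phi @ P @ Phi.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
for z in [0.1, 0.22, 0.29]:          # illustrative position measurements
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
```

Each update shrinks the state covariance P, which is the "lowest possible uncertainty" property the excerpt refers to.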

324 | A Four-Step Camera Calibration Procedure with Implicit Image Correction
- Heikkila, Silven
- 1997
Citation Context: ...estimation technique described by Zhang but chooses to use the orthogonality property of vanishing points [16] to estimate the intrinsic parameters. Finally, Bouguet implements additional algorithms [7] to estimate the tangential distortion coefficients. Although the Camera Calibration Toolbox developed by Bouguet has become one of the most widely used tools in camera calibration among computer visi...

255 | Close-range camera calibration
- Brown
- 1971
Citation Context: ...ic configuration parameters in a computer vision system. 2.5.1 Distortion Models. The first step in determining the nonlinear lens distortion of an imaging sensor is to model the distortion. Brown [2] groups the distortion parameters into radial, tangential, and skew components. 2.5.1.1 Radial Distortion. Radial distortion, the most noticeable of the three, causes straight lines to appear curved i...
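Brown's radial component can be sketched as a polynomial displacement along the radial direction; the coefficients `k1`, `k2` and the sample points below are illustrative.

```python
import numpy as np

def apply_radial_distortion(x, y, k1, k2):
    """Apply the radial part of Brown's distortion model to normalized
    image coordinates: each point is scaled along the radial direction
    by a polynomial in r^2, so straight lines away from the image
    center appear curved."""
    r2 = x**2 + y**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# A straight vertical line of points bows outward under distortion:
# the endpoints (larger r) are displaced more than the midpoint.
y = np.linspace(-1.0, 1.0, 5)
x = np.full_like(y, 0.5)
xd, yd = apply_radial_distortion(x, y, k1=0.1, k2=0.01)
```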

252 | Computer Vision: Algorithms and Applications
- Szeliski
- 2011
Citation Context: ...e keypoint descriptor is an 8 × 4 × 4 = 128 point normalized vector containing the values of the gradient orientation histogram bins in the 4 × 4 subregions. 2.4.3 Feature Matching Techniques. Szeliski [19] introduces a few feature matching methods for algorithms that produce feature or keypoint descriptors such ... Figure 2.13: Keypoint descriptor illustration. A feature’s unique descriptor is composed ...
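The 8 × 4 × 4 = 128-point descriptor construction can be sketched as follows; the random histograms stand in for real gradient-orientation histograms, and the 0.2 clamp is the value used in Lowe's SIFT paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in gradient-orientation histograms: a 4x4 grid of subregions
# around the keypoint, each with 8 orientation bins.
hists = rng.random((4, 4, 8))

# Flatten to the 8 x 4 x 4 = 128-element descriptor and normalize to
# unit length, giving invariance to affine illumination change.
desc = hists.ravel()
desc = desc / np.linalg.norm(desc)

# SIFT additionally clamps entries at 0.2 and renormalizes to reduce
# sensitivity to a few large gradient magnitudes.
desc = np.minimum(desc, 0.2)
desc = desc / np.linalg.norm(desc)
print(desc.size)  # → 128
```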

132 | ASIFT: A new framework for fully affine invariant image comparison
- Morel, Yu
Citation Context: ...can be adequately modeled using affine transformations on the initial image. Using this concept as a foundation, Morel and Yu developed the Affine Scale Invariant Feature Transform (ASIFT) algorithm [15], which they claim provides full affine invariance. The ASIFT algorithm recursively uses the basic SIFT algorithm described in the previous section as a core function. Much like the scale-space develop...

83 | Computing Integrals Involving the Matrix Exponential
- Loan
- 1978

63 | Scale-space theory: a basic tool for analyzing structures at different scales
- Lindeberg
- 1994
Citation Context: ...y D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) (2.17) = L(x, y, kσ) − L(x, y, σ) (2.18), which yields a computationally efficient approximation to the scale-normalized Laplacian of Gaussian function [11], thereby providing scale invariance. Figure 2.11 illustrates how the DOG functions are constructed for an input image. The initial image is incrementally convolved with 2-D Gaussian distributions to ...
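Equations (2.17)–(2.18) can be sketched in one dimension (the 2-D image case applies the same kernels along each axis); the kernel radius and k = √2 are illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled, normalized 1-D Gaussian kernel."""
    radius = radius or int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def dog(signal, sigma, k=2**0.5):
    """Difference of Gaussians D = L(k*sigma) - L(sigma), the cheap
    approximation to the scale-normalized Laplacian of Gaussian."""
    L1 = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    L2 = np.convolve(signal, gaussian_kernel(k * sigma), mode="same")
    return L2 - L1

# A step edge produces a DoG extremum near the edge location,
# which is what the scale-space extrema detection stage looks for.
signal = np.r_[np.zeros(50), np.ones(50)]
d = dog(signal, sigma=2.0)
```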

20 | Fusion of image and inertial sensors for navigation
- Veth
- 2006
Citation Context: ...erged as a valuable and feasible precision navigation alternative which, when coupled with inertial navigation sensors, can reduce navigation estimation errors by approximately two orders of magnitude [22]. Although the basic mathematics and algorithms have been thoroughly documented, image-aided navigation is still in its early stages. This research aims to improve two key steps within the image-aided...

12 | Camera Calibration Toolbox for Matlab, URL: http://www.vision.caltech.edu/bouguetj/calib_doc, accessed 2010
- Bouguet
Citation Context: ...cts within image-aided navigation: camera calibration and landmark tracking. 1.1.1 Camera Calibration. The current tool of choice for camera calibration is the Camera Calibration Toolbox from Caltech [1]. The algorithms used inside the tool are heavily based on Dr. Zhang’s camera calibration work [24] and further explained in the next chapter. Currently, a standard two-camera calibration procedure ...

7 | Stochastic Constraints for Efficient Image Correspondence Search
- Veth, Raquet, et al.
- 2006
Citation Context: ...c projections [22]. navigational state vector. The vector containing feature locations, along with its associated covariance matrix, is augmented into the EKF using the system’s stochastic properties [23]. As the EKF propagates the state estimate, the locations of the features in a future image are predicted along with an uncertainty ellipse, which is derived from the propagated covariance matrix. Fina...
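The uncertainty ellipse derived from the propagated covariance can be sketched via eigen-decomposition of a 2 × 2 position covariance; the matrix `P` below is made up.

```python
import numpy as np

def uncertainty_ellipse(P, n_sigma=2.0):
    """Axes of the n-sigma uncertainty ellipse for a 2x2 position
    covariance: the eigenvectors give the ellipse orientation and the
    square roots of the eigenvalues give the semi-axis lengths."""
    vals, vecs = np.linalg.eigh(P)              # ascending eigenvalues
    semi_axes = n_sigma * np.sqrt(vals)
    angle = np.arctan2(vecs[1, 1], vecs[0, 1])  # major-axis orientation
    return semi_axes, angle

# Hypothetical propagated covariance for one feature's pixel location;
# the correspondence search is then confined to this ellipse.
P = np.array([[4.0, 1.0],
              [1.0, 2.0]])
semi_axes, angle = uncertainty_ellipse(P)
```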

2 | WGS-84 Development. Department of Defense World Geodetic System 1984 - Its Definition and Relationships with Local Geodetic Systems
- Committee
Citation Context: ...n reference frames are fundamentally important when expressing position, velocity, and orientation of a body. For this research, the following reference frames are defined based on those presented in [3], [20] and [22]: The true inertial frame (I-frame) - a theoretical reference frame in which Newton’s laws of motion apply. The frame is defined by a non-accelerating, non-rotating orthonormal basis in ...

2 | Coupling Vanishing Point Tracking with Inertial Navigation to Estimate Attitude in a Structured Environment
- Prahl
- 2011
Citation Context: ...) defines the relative rotation between the principal axes of two reference frames. DCMs apply rotations to each axis in a reference frame, using the standard Euler angles, and in the following order [16]: 1. First, a rotation angle ψ is applied about the z-axis of the originating frame. Figure 2.5: Binocular disparity frame illustration. The binocular disparity frame originates at the midpoint bet...
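The rotation order described here can be sketched as a product of single-axis DCMs. Since the excerpt truncates after step 1, the remaining rotations (θ about the intermediate y-axis, then φ about x) are an assumption based on the standard aerospace z-y-x sequence.

```python
import numpy as np

def dcm(psi, theta, phi):
    """DCM from Euler angles: psi about z first (per the text), then
    theta about y and phi about x (assumed z-y-x order)."""
    c, s = np.cos, np.sin
    Rz = np.array([[ c(psi), s(psi), 0],
                   [-s(psi), c(psi), 0],
                   [      0,      0, 1]])
    Ry = np.array([[c(theta), 0, -s(theta)],
                   [       0, 1,         0],
                   [s(theta), 0,  c(theta)]])
    Rx = np.array([[1,       0,      0],
                   [0,  c(phi), s(phi)],
                   [0, -s(phi), c(phi)]])
    # Applied in order: z rotation first, so it sits rightmost.
    return Rx @ Ry @ Rz

C = dcm(0.1, 0.2, 0.3)
```

Any proper DCM is orthonormal with determinant +1, a useful sanity check when composing rotations.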

1 | Inertial and Imaging Sensor Fusion for Image-Aided Navigation with
- Jurado, Fisher, et al.
- 2012
Citation Context: ...sary image distortions due to its deep integration into the EKF. The enhanced IAEKF algorithm with affine distortion prediction was presented at the Position Location and Navigation System Conference [9]. An experiment was developed to evaluate the performance of the automatic calibration and affine distortion prediction algorithms. For the camera calibration algorithm, a binocular set of cameras wer...

1 | Opening Keynote Address
- Norton
- 2010

1 | Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction (Master’s Thesis, Sept 2010 – 22 Mar 2012)
- Jurado, Capt
Citation Context: ...nges in scale (zoom), rotations about the c-frame’s z-axis (2-D rotations), and illumination, feature tracking effectiveness diminishes with rotations about the c-frame’s x and y axes (3-D rotations) [18]. Figure 3.5 illustrates a 3-D rotation between two images of the same scene. Using the conventional SIFT descriptors found for both images and a NNDR of 0.45, there was only one positive match found...