Coupling Vanishing Point Tracking with Inertial Navigation to Estimate Attitude in a Structured Environment (2011)
Cited by: 2 (0 self-citations)
Citations
4663 | A computational approach to edge detection
- Canny
- 1986
Citation Context: ...lly expensive method of simply adding their absolute values as shown in Equation (2.27) is commonly used when constrained by data processing capacity. $\|\mathbf{G}(i,j)\| \approx |\mathbf{G}_V(i,j)| + |\mathbf{G}_H(i,j)|$ (2.27) In [6], Canny proposed that strong and weak edges be determined by establishing both upper and lower gradient thresholds. Strong edges occur where the magnitude of the image gradient is above the upper thre...
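As an illustration of the two-threshold idea described in this context, here is a minimal Python sketch of the strong/weak classification with a connected-component form of hysteresis; the function name and the threshold fractions are illustrative assumptions, not taken from the thesis or from [6].

```python
import numpy as np
from scipy import ndimage

def double_threshold(grad_mag, low_frac=0.1, high_frac=0.3):
    """Classify edge pixels as strong or weak using two gradient thresholds.

    grad_mag: 2-D array of gradient magnitudes (e.g., the |G_V| + |G_H|
    approximation of Eq. (2.27)). Threshold fractions are illustrative.
    """
    high_t = high_frac * grad_mag.max()
    low_t = low_frac * grad_mag.max()
    strong = grad_mag >= high_t
    weak = (grad_mag >= low_t) & ~strong
    # Hysteresis: keep weak pixels only in components that contain a strong pixel.
    labels, _ = ndimage.label(strong | weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep) & (strong | weak)
```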
3907 | Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
- Fischler, Bolles
- 1981
Citation Context: ...rwal’s methods also are somewhat computationally burdensome with the requirement to either trace circles on the sphere for every line or compute $\binom{N}{2}$ intersections. 2.5.3.2 Random Sample Consensus. In [9], Fischler and Bolles introduce the Random Sample Consensus (RANSAC) algorithm as part of their method for determining the position of a camera based on an image of landmarks with known locations. The...
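To make the sample-and-test structure of RANSAC concrete, here is a minimal Python sketch fitting a 2-D line; the line model, tolerance, and iteration count are assumptions chosen to keep the example short, whereas Fischler and Bolles's application was camera pose from landmark images.

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=2.0, seed=None):
    """Minimal RANSAC sketch: fit a 2-D line to noisy (x, y) points."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]], dtype=float)      # normal to the candidate line
        if not n.any():
            continue
        n /= np.linalg.norm(n)
        inliers = np.abs((points - p) @ n) < tol      # point-to-line distance test
        if inliers.sum() > best.sum():
            best = inliers
    # Re-estimate the model from the whole consensus set, as the paper suggests.
    return np.polyfit(points[best, 0], points[best, 1], 1)
```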
3836 | A New Approach to Linear Filtering and Prediction Problems
- Kalman
- 1960
Citation Context: ...s from both types of sensor is required. This combination is performed inside of a Kalman filter. In 1960, Rudolf Kalman published his method for linear estimation in the Journal of Basic Engineering [15]. This method, which has come to be known as the Kalman filter, uses Bayesian statistics to optimally combine a dynamics model and sensor measurements to produce a minimal-uncertainty estimate of quan...
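The predict/update structure the context refers to can be sketched in a few lines of Python; the symbols (Phi, Qd, H, R) follow standard Kalman-filter texts rather than the thesis's own notation.

```python
import numpy as np

def kf_predict(x, P, Phi, Qd):
    """Propagate the state estimate and covariance through the dynamics model."""
    return Phi @ x, Phi @ P @ Phi.T + Qd

def kf_update(x, P, z, H, R):
    """Optimally fold in a measurement z = H x + v, with v ~ N(0, R)."""
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```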
898 | Use of the Hough transformation to detect lines and curves in pictures
- Duda, Hart
- 1972
Citation Context: ...s unbounded, as manifest by the infinite slope of vertical lines. Duda and Hart proposed the polar representation of a line given in Equation (2.29), which provides a fully bounded parameter space [8]. $\rho = x\cos\theta + y\sin\theta$ (2.29) The parameters $\rho$ and $\theta$ represent the length and angular distance from the x-axis of the shortest line segment joining the origin of a digital image to the line being obs...
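A minimal Python sketch of accumulator voting with the polar parameterization of Equation (2.29) is given below; the accumulator resolution and the peak-finding step are assumptions left to the caller, not values from the thesis.

```python
import numpy as np

def hough_accumulator(edgels, img_shape, n_theta=180, n_rho=400):
    """Vote in (rho, theta) space using rho = x*cos(theta) + y*sin(theta).

    edgels: (N, 2) array of (x, y) edge-pixel coordinates.
    """
    diag = float(np.hypot(*img_shape))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_bins = np.linspace(-diag, diag, n_rho + 1)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for k, theta in enumerate(thetas):
        rho = edgels[:, 0] * np.cos(theta) + edgels[:, 1] * np.sin(theta)
        acc[:, k], _ = np.histogram(rho, bins=rho_bins)   # one vote per edgel
    return acc, thetas, rho_bins
```

Peaks in the returned accumulator correspond to the dominant image lines.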
628 | Multiple View Geometry
- Hartley, Zisserman
Citation Context: ...these groupings will appear to converge at a single point, called a vanishing point, in perspective images of a Manhattan world scene. The positions of these vanishing points in an image are shown in [11] to be invariant to translational motion of the camera, and only change when the camera is rotated. 1.3 Problem Formulation The problem that we are trying to solve is this: attitude estimates provid...
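The translation invariance mentioned here follows from the standard pinhole-camera argument; the short derivation below is a textbook sketch, not quoted verbatim from [11].

```latex
% World points along direction d through a point A are X(\lambda) = A + \lambda d.
% With projection x \sim K R (X - C) for camera centre C,
\[
  \mathbf{x}(\lambda) \sim K R\,(\mathbf{A}-\mathbf{C}) + \lambda\,K R\,\mathbf{d}
  \;\xrightarrow{\;\lambda\to\infty\;}\;
  \mathbf{v} \sim K R\,\mathbf{d},
\]
% so the vanishing point v depends on the rotation R (and intrinsics K)
% but not on the camera centre C, i.e. it is unchanged by translation.
```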
344 | Stochastic Models, Estimation and Control, Volume 1
- Maybeck
- 1979
Citation Context: ...and sensor measurements to produce a minimal-uncertainty estimate of quantities of interest. An outline of Kalman’s method follows, based primarily on Dr. Peter Maybeck’s presentation of the topic in [21] and [22]. Though an equivalent continuous-time algorithm also exists, the Kalman filter is presented here as a discrete-time method, since it will ultimately be implemented on a digital computer. 2.6...
332 | Machine perception of three-dimensional solids
- Roberts
- 1965
Citation Context: ...the image, often the gradient is approximated by evaluating the convolution of the image with a small kernel or “mask”. Common convolution kernels used for this task include those proposed by Roberts [25], Prewitt [24], and Sobel [28] shown in Figure 2.10. Convolution of the digital image with the first mask of each pair produces the gradient in the vertical direction, $\mathbf{G}_V$, and convolution with the se...
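A short Python sketch of the kernel-convolution step is given below using the Sobel pair as an example; which member of the pair yields the "vertical" gradient is a convention assumed here, and the Roberts or Prewitt masks would be applied the same way.

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_V = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)
SOBEL_H = SOBEL_V.T

def image_gradients(img):
    """Approximate G_V and G_H by convolving the image with small masks."""
    g_v = convolve2d(img, SOBEL_V, mode="same", boundary="symm")
    g_h = convolve2d(img, SOBEL_H, mode="same", boundary="symm")
    return g_v, g_h
```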
128 | Object enhancement and extraction
- Prewitt
- 1970
Citation Context: ...en the gradient is approximated by evaluating the convolution of the image with a small kernel or “mask”. Common convolution kernels used for this task include those proposed by Roberts [25], Prewitt [24], and Sobel [28] shown in Figure 2.10. Convolution of the digital image with the first mask of each pair produces the gradient in the vertical direction, $\mathbf{G}_V$, and convolution with the second mask prod...
119 | A new approach for vanishing point detection in architectural environments
- Rother
- 2000
Citation Context: ...hing point in each of the three principal directions individually. However, it may prove effective to search for the complete triad of vanishing points all at once instead of each one in sequence. In [26], Rother presents a computationally intensive approach to finding all three vanishing directions simultaneously in which every possible intersection of two lines from the image is examined. Combining...
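The pairwise line intersections the context mentions are easy to express in homogeneous coordinates; the sketch below is a generic construction, not Rother's implementation.

```python
import numpy as np

def homogeneous_line(p, q):
    """Homogeneous coefficients of the image line through points p and q."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines; a candidate vanishing point."""
    v = np.cross(l1, l2)
    return v / v[2] if abs(v[2]) > 1e-9 else v   # near-parallel lines stay at infinity
```

Examining every pairing of the N detected lines yields the $\binom{N}{2}$ candidate intersections referred to elsewhere on this page.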
105 | Interpreting perspective images
- Barnard
- 1984
Citation Context: ...detection is 35% faster than the Hough transform when processing a 1024×768 resolution image and 40% faster when processing a 512×384 resolution image. 2.5.2.4 Representing Image Lines in 3-Space. In [1], Barnard describes how every line in an image can be imagined to represent a plane in 3-space which is defined by any two points on the line and the optical center of the camera. This plane, called a...
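The plane Barnard describes can be constructed from two line-of-sight vectors; the Python sketch below assumes image coordinates centred at the principal point and a pinhole camera, conventions of this example rather than of the thesis.

```python
import numpy as np

def interpretation_plane_normal(p1, p2, f):
    """Unit normal of the plane through an image line and the optical centre.

    p1, p2: two points on the image line (principal-point-centred pixels);
    f: focal length in pixels.
    """
    s1 = np.array([p1[0], p1[1], f], dtype=float)   # line of sight to p1
    s2 = np.array([p2[0], p2[1], f], dtype=float)   # line of sight to p2
    n = np.cross(s1, s2)
    return n / np.linalg.norm(n)
```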
101 | A probabilistic Hough transform
- Kiryati, Eldar, et al.
- 1991
Citation Context: ...rresponding to the parameters of the line passing through the edgels in (a). Several variations of the Hough transform have been developed, including the probabilistic Hough transform presented in [18]. Kiryati et al.’s method involves selecting a random subset of the collection of edgels in the image and performing the Hough transform on only those points. Since strong instances of a particular pa...
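The only change relative to the standard transform is the random subsampling step; a minimal sketch follows, with the sampling fraction being an illustrative assumption rather than a value from [18]. The sampled edgels would then be passed to an ordinary accumulator such as the Hough sketch shown earlier.

```python
import numpy as np

def sample_edgels(edgels, fraction=0.1, seed=None):
    """Random subset of edge pixels for a probabilistic Hough transform."""
    rng = np.random.default_rng(seed)
    k = max(1, int(fraction * len(edgels)))
    idx = rng.choice(len(edgels), size=k, replace=False)
    return edgels[idx]
```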
97 | Manhattan world: Compass direction from a single image by Bayesian inference
- Coughlan, Yuille
- 1999
Citation Context: ...and vertical. • All rooms and corridors meet at right angles. Structural features of such an environment will align to an orthogonal three-dimensional (3-D) grid described as the “Manhattan world” in [7]. 1.2.2 Vanishing Points. Most of the lines defining the edges and intersections of planar surfaces in the Manhattan world are aligned to one of three mutually orthogonal directions, forming large gro...
83 | Computing Integrals Involving the Matrix Exponential
- Van Loan
- 1978
Citation Context: ...)$\Phi^T + \mathbf{Q}_d$ (2.54) where $\mathbf{Q}_d$ is the discrete-time process noise strength matrix. The calculation of $\mathbf{Q}_d$ is not as simple as determining $\Phi$, but can be accomplished using the process proposed by Van Loan in [30]. At discrete instants in which measurements are available, these measurements are used to update the state estimate and covariance by optimally combining the dynamics model estimate and uncertain mea...
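A Python sketch of the matrix-exponential recipe attributed to Van Loan is given below: a single block-matrix exponential yields both the state transition matrix and the discrete-time process noise covariance. The function name and interface are this example's assumptions.

```python
import numpy as np
from scipy.linalg import expm

def van_loan_discretize(F, G, W, dt):
    """Compute Phi and Qd for continuous dynamics xdot = F x + G w, w ~ N(0, W)."""
    n = F.shape[0]
    A = np.zeros((2 * n, 2 * n))
    A[:n, :n] = -F
    A[:n, n:] = G @ W @ G.T
    A[n:, n:] = F.T
    B = expm(A * dt)
    Phi = B[n:, n:].T          # state transition matrix over dt
    Qd = Phi @ B[:n, n:]       # discrete-time process noise covariance
    return Phi, Qd
```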
67 | Camera models and machine perception
- Sobel
- 1970
Citation Context: ...is approximated by evaluating the convolution of the image with a small kernel or “mask”. Common convolution kernels used for this task include those proposed by Roberts [25], Prewitt [24], and Sobel [28] shown in Figure 2.10. Convolution of the digital image with the first mask of each pair produces the gradient in the vertical direction, $\mathbf{G}_V$, and convolution with the second mask produces the gradien...
50 | Method and Means for Recognizing Complex Patterns
- Hough
- 1962
Citation Context: ...t lines in digital images has been investigated by many researchers and a plenitude of methods have been developed. However, most methods are at least loosely based on either the Hough transform from [12] or Burns’ line extractor from [5]. [Figure 2.11 caption: Canny edge image. (a) An image of a hallway. (b) This binary image results from performing Canny edge detecti...]
48 | Determining vanishing points from perspective images
- Magee, Aggarwal
- 1984
Citation Context: ...e likely vanishing points. Limitations of this method include the uneven spacing of accumulator elements on the sphere’s surface and the ambiguous azimuth angle of vectors pointing to either pole. In [20], Magee and Aggarwal use a similar approach, but rather than tracing circles in a discretization of the Gaussian sphere, they calculate an $(\alpha, \beta)$ pair for the intersection of each possible pairing of i...
18 | Visual control of a miniature quad-rotor helicopter
- Kemp
- 2005
Citation Context: ...capabilities eliminate the need to maintain forward velocity to produce lift. One form of miniature rotary vehicle that has become more common for scientific research in recent years is the quadrotor [16], [13]. These vehicles use four counter-rotating fixed-pitch propellers to generate lift. Desired motion is obtained by simply varying the propellers’ rotational speeds. Accurate attitude knowledge is...
14 | Steering a Robot with Vanishing Points
- Schuster, Ansari, et al.
- 1993
Citation Context: ...sus set is used to estimate the model. 2.5.3.3 Determining Attitude From Vanishing Points. Various researchers have used vanishing points to determine camera orientation with respect to the scene. In [27], Schuster et al. describe the use of vanishing points to determine the heading of a ground-based robotic vehicle. Their camera is pitched upward so as to view the grid of rectangular ceiling tiles in...
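As a simplified illustration of how a vanishing point constrains heading, the relation below assumes a forward-looking pinhole camera viewing corridor-aligned lines; the cited work instead pitches its camera up at the ceiling grid, so this is not their formulation.

```latex
% For the vanishing point of corridor-aligned lines at pixel column u_v,
\[
  \psi \;\approx\; \arctan\!\left(\frac{u_v - c_x}{f}\right),
\]
% where c_x is the principal point's column, f the focal length in pixels,
% and \psi the camera heading relative to the corridor direction.
```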
12 | Camera Calibration Toolbox for Matlab. URL: http://www.vision.caltech.edu/bouguetj/calib_doc, accessed 2010
- Bouguet
Citation Context: ...rdinates to a camera-frame line of sight vector cannot be determined in closed form. “Because of the high degree distortion model, there exists no general algebraic expression for this inverse map” [3]. Instead, the distortion removal is performed using iterative numerical methods. 2.4.2.5 Calibration. With a distortion model defined, the camera can be calibrated to determine the distortion paramet...
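A generic sketch of the kind of iterative inversion alluded to is shown below, using a simple two-coefficient radial model and fixed-point iteration; this is not the exact distortion model or code of the toolbox in [3].

```python
import numpy as np

def undistort_normalized(xd, yd, k1, k2, n_iters=20):
    """Invert a radial distortion model x_d = x_u (1 + k1 r^2 + k2 r^4).

    (xd, yd): distorted normalized image coordinates; k1, k2: radial coefficients.
    """
    xu, yu = xd, yd                        # initial guess: ignore distortion
    for _ in range(n_iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu
```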
9 | Vision-assisted control of a hovering air vehicle in an indoor setting
- Johnson
- 2008
Citation Context: ...lities eliminate the need to maintain forward velocity to produce lift. One form of miniature rotary vehicle that has become more common for scientific research in recent years is the quadrotor [16], [13]. These vehicles use four counter-rotating fixed-pitch propellers to generate lift. Desired motion is obtained by simply varying the propellers’ rotational speeds. Accurate attitude knowledge is requis...
7 | Vision-based attitude estimation for indoor navigation using vanishing points and lines
- Kessler, Ascher, et al.
- 2010
Citation Context: ...erization from Section 2.5.2.1 by using the coordinates of the centroid of each line support region, $(\bar{x}_i, \bar{y}_i)$, and the orientation, $\theta_i$, as the inputs to Equation (2.29). 2.5.2.3 Method Comparison. In [17], Kessler et al. compare the speed of various line detection methods. Each of four different methods including the Hough transform discussed in Section 2.5.2.1 and Košecká and Zhang’s connected comp...
6 | Real-time feature extraction: A fast line finder for vision-guided robot navigation
- Kahn, Kitchen, et al.
- 1990
Citation Context: ...aracteristics such as length, contrast, width, location, orientation and straightness. Others have modified the method to enable faster image processing when computational limitations are encountered [14]. In [14], Kahn, Kitchen and Riseman developed a line extraction algorithm they call the “fast line finder” (FLF) which fits lines to line support regions by finding the major axis of an ellipse fi...
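The "major axis of a fitted ellipse" idea reduces to an eigen-decomposition of the region's scatter matrix; the Python sketch below is a generic version of that step and omits the FLF's gradient weighting and bookkeeping.

```python
import numpy as np

def principal_axis(region_pixels):
    """Centroid and orientation of a line-support region via its major axis.

    region_pixels: (N, 2) array of (x, y) coordinates in the region.
    """
    centroid = region_pixels.mean(axis=0)
    d = region_pixels - centroid
    scatter = d.T @ d                            # 2x2 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)
    major = eigvecs[:, np.argmax(eigvals)]       # major-axis direction
    theta = np.arctan2(major[1], major[0])
    return centroid, theta
```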
2 | Video Compass. ECCV ’02
- Košecká, Zhang
Citation Context: ... arctangent function corrected by quadrant. $\theta_i = \tan^{-1}\!\left(\frac{\lambda_{S_i} - a_i}{b_i}\right)$ (2.33) Endpoints of the line are determined by finding the intersections of $l_i$ with the boundaries of the line support region. In [19], Košecká and Zhang modified the line-fitting process further by omitting the weighting factors used in computing the scatter matrix. Their method more closely resembles the standard form for calcul...
2 | Fusion of Imaging and Inertial Sensors for Navigation. Ph.D. dissertation
- Veth
Citation Context: ...avigation frames are shown. The origins of the inertial and Earth frames are at the Earth’s center of mass while the origin of the navigation frame is fixed on the Earth’s surface. (Figure taken from [31]) [Figure 2.2 caption: Body reference frame. For aircraft, the b-frame is oriented with the x-axis out the nose, y-axis out the right wing and z-axis out the belly. (Figure taken from [31])] Cam...
1 | Passive indoor image-aided inertial attitude estimation using a predictive Hough transformation - Borkowski, Veth
1 | Strapdown Inertial Technology. The Institution of Engineering and Technology
- Titterton, Weston
- 2005
Citation Context: ...ntended are commonly equipped with MEMS strapdown IMUs, due to the small size, weight and power requirements of such devices. Titterton and Weston thoroughly describe strapdown inertial navigation in [29]. 2.3.1 Inertial Attitude Dynamics. Since the focus of this thesis is attitude estimation, inertial attitude calculations are described here. The quantity of interest regarding attitude is the relativ...
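For reference, the strapdown attitude kinematics that underlie the "inertial attitude dynamics" mentioned here take the standard textbook form below; this relation is not quoted from [29] or from the thesis itself.

```latex
% The body-to-navigation direction cosine matrix evolves as
\[
  \dot{C}^{\,n}_{b} \;=\; C^{\,n}_{b}\,\Omega^{b}_{nb},
\]
% where \Omega^{b}_{nb} is the skew-symmetric matrix of the body's angular
% rate relative to the navigation frame, resolved in body axes.
```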