Results 1–10 of 23
Determining the Epipolar Geometry and its Uncertainty: A Review
International Journal of Computer Vision, 1998
Abstract
Cited by 401 (9 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
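The estimation problem this review surveys can be made concrete with the classic normalized eight-point algorithm, one of the linear techniques such surveys cover. The sketch below (Python/NumPy; the function names and the synthetic two-camera setup are illustrative, not taken from the paper) builds one linear equation per correspondence, enforces the rank-2 (singularity) constraint on F, and undoes the normalization:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F,
    where corresponding points satisfy x2^T F x1 = 0 (x1, x2: (N, 2))."""
    def normalize(p):
        # Hartley normalization: centre the points and scale so the
        # mean distance from the origin is sqrt(2).
        mean = p.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(p - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([p, np.ones(len(p))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of the linear system A f = 0 per correspondence.
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                  # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                            # undo normalization
    return F / np.linalg.norm(F)

# Usage on a noise-free synthetic scene (assumed cameras, for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.2], [0.0]])])

def project(P, X):
    x = np.column_stack([X, np.ones(len(X))]) @ P.T
    return x[:, :2] / x[:, 2:]

x1, x2 = project(P1, X), project(P2, X)
F = eight_point(x1, x2)
```

With exact correspondences the epipolar residuals x2^T F x1 vanish to machine precision; the review's comparisons concern how competing estimators behave once noise and outliers are added.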
MLESAC: A New Robust Estimator with Application to Estimating Image Geometry
Computer Vision and Image Understanding, 2000
Abstract
Cited by 362 (10 self)
A new method is presented for robustly estimating multiple view relations from point correspondences. The method comprises two parts. The first is a new robust estimator MLESAC which is a generalization of the RANSAC estimator. It adopts the same sampling strategy as RANSAC to generate putative solutions, but chooses the solution that maximizes the likelihood rather than just the number of inliers. The second part of the algorithm is a general purpose method for automatically parameterizing these relations, using the output of MLESAC. A difficulty with multiview image relations is that there are often nonlinear constraints between the parameters, making optimization a difficult task. The parameterization method overcomes the difficulty of nonlinear constraints and conducts a constrained optimization. The method is general and its use is illustrated for the estimation of fundamental matrices, image–image homographies, and quadratic transformations. Results are given for both synthetic and real images. It is demonstrated that the method gives results equal or superior to those of previous approaches.
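The scoring difference between RANSAC and MLESAC can be shown on a toy problem. The sketch below (Python/NumPy) fits a 2D line from minimal two-point samples, but ranks hypotheses by the negative log-likelihood of an inlier-Gaussian / outlier-uniform mixture rather than by inlier count. The published method additionally re-estimates the mixing proportion by EM and targets multi-view relations, so every name and constant here is illustrative:

```python
import numpy as np

def mlesac_line(points, n_iter=200, sigma=0.5, gamma=0.5, v=20.0, seed=0):
    """Toy MLESAC for 2D line fitting (y = a*x + b).

    Like RANSAC, minimal 2-point samples generate hypotheses, but each
    hypothesis is scored by the negative log-likelihood of a mixture of
    an inlier Gaussian (std `sigma`) and an outlier uniform density
    (1/v); the mixing proportion `gamma` is held fixed for brevity.
    """
    rng = np.random.default_rng(seed)
    best_nll, best_model = np.inf, None
    norm = 1.0 / (np.sqrt(2 * np.pi) * sigma)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (xa, ya), (xb, yb) = points[i], points[j]
        if abs(xb - xa) < 1e-9:
            continue                       # degenerate (vertical) sample
        a = (yb - ya) / (xb - xa)
        b = ya - a * xa
        r = points[:, 1] - (a * points[:, 0] + b)        # residuals
        lik = gamma * norm * np.exp(-r**2 / (2 * sigma**2)) + (1 - gamma) / v
        nll = -np.log(lik).sum()
        if nll < best_nll:
            best_nll, best_model = nll, (a, b)
    return best_model

# Usage: 80% of points on y = 2x + 1 plus 20% gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.2, 100)
y[:20] = rng.uniform(0, 20, 20)            # corrupt 20% of the data
pts = np.column_stack([x, y])
a, b = mlesac_line(pts)
```

Unlike a pure inlier count, the likelihood score still discriminates between hypotheses whose inlier sets are the same size, which is MLESAC's practical advantage.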
3D Model Acquisition from Extended Image Sequences
1995
Abstract
Cited by 236 (29 self)
This paper describes the extraction of 3D geometrical data from image sequences, for the purpose of creating 3D models of objects in the world. The approach is uncalibrated: camera internal parameters and camera motion are not known or required. Processing an image sequence is underpinned by token correspondences between images. We utilise matching techniques which are both robust (detecting and discarding mismatches) and fully automatic. The matched tokens are used to compute 3D structure, which is initialised as it appears and then recursively updated over time. We describe a novel robust estimator of the trifocal tensor, based on a minimum number of token correspondences across an image triplet; and a novel tracking algorithm in which corners and line segments are matched over image triplets in an integrated framework. Experimental results are provided for a variety of scenes, including outdoor scenes taken with a handheld camcorder. Quantitative statistics are included to assess...
Sequential updating of projective and affine structure from motion
International Journal of Computer Vision, 1997
Abstract
Cited by 161 (4 self)
A structure from motion algorithm is described which recovers structure and camera position, modulo a projective ambiguity. Camera calibration is not required, and camera parameters such as focal length can be altered freely during motion. The structure is updated sequentially over an image sequence, in contrast to schemes which employ a batch process. A specialisation of the algorithm to recover structure and camera position modulo an affine transformation is described, together with a method to periodically update the affine coordinate frame to prevent drift over time. We describe the constraint used to obtain this specialisation. Structure is recovered from image corners detected and matched automatically and reliably in real image sequences. Results are shown for reference objects and indoor environments, and accuracy of recovered structure is fully evaluated and compared for a number of reconstruction schemes. A specific application of the work is demonstrated: affine structure is used to compute free space maps enabling navigation through unstructured environments and avoidance of obstacles. The path planning involves only affine constructions.
Robust Parameterization and Computation of the Trifocal Tensor
Image and Vision Computing, 1997
Abstract
Cited by 124 (25 self)
The constraint that rigid motion places on the image positions of points and lines over three views is captured by the trifocal tensor. This paper demonstrates a novel robust estimator of the trifocal tensor, based on a minimum number of correspondences across an image triplet. In addition, it is shown how the robust estimate can be used to find a minimal parameterization that enforces the constraints between the elements of the tensor. The matching techniques used to estimate the tensor are both robust (detecting and discarding mismatches) and fully automatic. Results are given for real image sequences.

1 Introduction

The trifocal tensor plays a similar role for three views to that played by the fundamental matrix for two. It encapsulates all the (projective) geometric constraints between three views that are independent of scene structure. The tensor only depends on the motion between views and the internal parameters of the cameras, but it can be computed from image corre...
A Buyer’s Guide to Conic Fitting
British Machine Vision Conference, 1995
Abstract
Cited by 69 (6 self)
In this paper we evaluate several methods of fitting data to conic sections. Conic fitting is a commonly required task in machine vision, but many algorithms perform badly on incomplete or noisy data. We evaluate several algorithms under various noise and degeneracy conditions, identify the key parameters which affect sensitivity, and present the results of comparative experiments which emphasize the algorithms' behaviours under common examples of degenerate data. In addition, complexity analyses in terms of flop counts are provided in order to further inform the choice of algorithm for a specific application.
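The baseline most such comparisons start from is the plain algebraic least-squares fit, which minimizes the algebraic distance ||Dc|| subject to ||c|| = 1. A sketch in Python/NumPy follows (illustrative code, not the paper's):

```python
import numpy as np

def fit_conic(points):
    """Plain algebraic least-squares conic fit.

    Each row of the design matrix D is [x^2, xy, y^2, x, y, 1]; the
    coefficient vector minimizing ||D c|| with ||c|| = 1 is the right
    singular vector of D for the smallest singular value. Cheap, but
    known to be biased and fragile on short, noisy arcs.
    """
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]            # conic coefficients [A, B, C, D, E, F]

# Usage: recover the circle x^2 + y^2 = 4 from points sampled on it.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
c = fit_conic(pts)
c = c / c[0]                 # normalize so the x^2 coefficient is 1
```

On complete, noise-free data this recovers the conic exactly; the paper's point is that it degrades badly under the noise and degeneracy conditions it tests, where the more expensive fitters pay off.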
IMPSAC: A synthesis of importance sampling and random sample consensus to effect multiscale image matching for small and wide baselines
In ECCV 2000
Abstract
Cited by 58 (1 self)
The goal of this work is to obtain accurate matches and epipolar geometry between two images of the same scene, where the motion is unlikely to be smooth or known a priori. Once the matches and two view image relation have been recovered, they can be used for image compression, for building 3D models [3, 33, 35, 48], for object recognition [19], for extraction of images from databases [31]
Robust detection of degenerate configurations for the fundamental matrix
In Proceedings of the 5th International Conference on Computer Vision, 1995
Euclidean structure from uncalibrated images
Proceedings of the Fifth British Machine Vision Conference, 1994
Abstract
Cited by 31 (1 self)
A number of recent papers have demonstrated that camera "self-calibration" can be accomplished purely from image measurements, without requiring special calibration objects or known camera motion. We describe a method, based on self-calibration, for obtaining (scaled) Euclidean structure from multiple uncalibrated perspective images using only point matches between views. The method is in two stages. First, using an uncalibrated camera, structure is recovered up to an affine ambiguity from two views. Second, from one or more further views of this affine structure the camera intrinsic parameters are determined, and the structure ambiguity reduced to scaled Euclidean. The technique is independent of how the affine structure is obtained. We analyse its limitations and degeneracies. Results are given for images of real scenes. An application is described for active vision, where a Euclidean reconstruction is obtained during normal operation with an initially uncalibrated camera. Finally, it is demonstrated that Euclidean reconstruction can be obtained from a single perspective image of a repeated structure.
Robust Optical Flow Computation Based On Least-Median-of-Squares Regression
1999
Abstract
Cited by 24 (2 self)
An optical flow estimation technique is presented which is based on the least-median-of-squares (LMedS) robust regression algorithm, enabling more accurate flow estimates to be computed in the vicinity of motion discontinuities. The flow is computed in a blockwise fashion using an affine model. Through the use of overlapping blocks coupled with a block-shifting strategy, redundancy is introduced into the computation of the flow. This eliminates blocking effects common in most other techniques based on blockwise processing and also allows flow to be accurately computed in regions containing three distinct motions. A multiresolution...
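The regression core of the method can be illustrated on a toy problem. LMedS draws minimal random samples, fits a candidate model to each, and keeps the model with the smallest *median* squared residual, which tolerates up to roughly half the data being gross outliers. The paper applies this to a blockwise affine flow model; the sketch below (Python/NumPy, illustrative names) fits a 2D line instead:

```python
import numpy as np

def lmeds_line(points, n_iter=300, seed=0):
    """Least-median-of-squares line fit (y = a*x + b).

    Candidate lines come from random 2-point samples; the winner is the
    line whose median squared residual over all points is smallest,
    unlike ordinary least squares, which a single gross outlier can ruin.
    """
    rng = np.random.default_rng(seed)
    best_med, best_model = np.inf, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (xa, ya), (xb, yb) = points[i], points[j]
        if abs(xb - xa) < 1e-9:
            continue                       # degenerate (vertical) sample
        a = (yb - ya) / (xb - xa)
        b = ya - a * xa
        med = np.median((points[:, 1] - (a * points[:, 0] + b)) ** 2)
        if med < best_med:
            best_med, best_model = med, (a, b)
    return best_model

# Usage: 80% of points on y = 2x + 1 plus 20% gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.2, 100)
y[:20] = rng.uniform(0, 20, 20)            # corrupt 20% of the data
pts = np.column_stack([x, y])
a, b = lmeds_line(pts)
```

In the flow setting, the same scheme is run per block with an affine motion model in place of the line, which is what lets estimates stay accurate across motion discontinuities.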