Results 1–10 of 267
Autocalibration and the absolute quadric
In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1997
Abstract

Cited by 248 (7 self)
We describe a new method for camera autocalibration and scaled Euclidean structure and motion from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The constancy of these parameters across the motion is used to rectify an initial projective reconstruction. Euclidean scene structure is formulated in terms of the absolute quadric: the singular dual 3D quadric (a rank-3 matrix) giving the Euclidean dot product between plane normals. This is equivalent to the traditional absolute conic but simpler to use. It encodes both affine and Euclidean structure, and projects very simply to the dual absolute image conic, which encodes the camera calibration. Requiring the projection to be constant gives a bilinear constraint between the absolute quadric and the image conic, from which both can be recovered, nonlinearly or quasi-linearly, from the images. Calibration and Euclidean structure then follow easily. The nonlinear method is more stable, faster, more accurate, and more general than the quasi-linear one. It is based on a general constrained optimization technique, sequential quadratic programming, that may well be useful in other vision problems.
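As an illustrative sketch (my own toy example with made-up numbers, not the paper's optimization), the absolute dual quadric Ω* = diag(1, 1, 1, 0) projects through a Euclidean camera P = K[R|t] to the dual image conic ω* = P Ω* Pᵀ = K Kᵀ, from which the calibration K can be read off by a triangular (Cholesky-style) factorization:

```python
import numpy as np

def dual_image_conic(P, Omega):
    # Project the absolute dual quadric: omega* = P Omega* P^T
    return P @ Omega @ P.T

def calib_from_dual_conic(w):
    # w ~ K K^T with K upper triangular: recover K by a "flipped" Cholesky
    J = np.eye(3)[::-1]                  # exchange (flip) matrix
    L = np.linalg.cholesky(J @ w @ J)    # lower-triangular factor
    K = J @ L @ J                        # upper-triangular factor of w
    return K / K[2, 2]                   # fix the overall projective scale

rng = np.random.default_rng(0)
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])             # made-up intrinsics
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
t = rng.standard_normal((3, 1))
P = K @ np.hstack([R, t])                # Euclidean camera P = K [R | t]

Omega = np.diag([1., 1., 1., 0.])        # absolute dual quadric (rank 3)
w = dual_image_conic(P, Omega)           # equals K K^T, independent of R, t
K_rec = calib_from_dual_conic(w)         # recovers K
```

The point of the sketch is the projection equation itself; the paper's contribution is recovering Ω* from image data when K is unknown.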
Automatic camera recovery for closed or open image sequences.
In European Conference on Computer Vision, 1998
Autocalibration from planar scenes
In European Conference on Computer Vision, 1998
Abstract

Cited by 149 (2 self)
This paper describes a theory and a practical algorithm for the autocalibration of a moving projective camera from views of a planar scene. The unknown camera calibration and (up to scale) the unknown scene geometry and camera motion are recovered from the hypothesis that the camera’s internal parameters remain constant during the motion. This work extends the various existing methods for non-planar autocalibration to a practically common situation in which it is not possible to bootstrap the calibration from an intermediate projective reconstruction. It also extends Hartley’s method for the internal calibration of a rotating camera to allow camera translation and to provide 3D as well as calibration information. The basic constraint is that the projections of orthogonal direction vectors (points at infinity) in the plane must be orthogonal in the calibrated camera frame of each image. Abstractly, since the two circular points of the 3D plane (representing its Euclidean structure) lie on the 3D absolute conic, their projections into each image must lie on the absolute conic’s image (representing the camera calibration). The resulting numerical algorithm optimizes this constraint over all circular points and projective calibration parameters, using the inter-image homographies as a projective scene representation.
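The circular-point constraint can be verified numerically. In the sketch below (synthetic made-up values, my own illustration rather than the paper's algorithm), the world plane z = 0 is imaged by the homography H = K[r1 r2 t], and the images of the plane's circular points (1, ±i, 0)ᵀ land on the image of the absolute conic ω = (K Kᵀ)⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[700., 0., 300.],
              [0., 700., 250.],
              [0., 0., 1.]])             # made-up intrinsics
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
t = np.array([0.3, -0.2, 2.0])

# World-to-image homography of the plane z = 0:  H = K [r1 r2 t]
H = K @ np.column_stack([R[:, 0], R[:, 1], t])

# A circular point of the plane (encoding its Euclidean structure)
c = np.array([1.0, 1j, 0.0])

# Image of the absolute conic: omega = (K K^T)^{-1}
omega = np.linalg.inv(K @ K.T)

# Constraint: the imaged circular point lies on omega
x = H @ c
x = x / np.linalg.norm(x)                # normalize the homogeneous point
residual = x @ omega @ x                 # complex scalar, should vanish
```

The residual vanishes because H c = K(r1 + i·r2) and r1, r2 are orthonormal; the paper's algorithm optimizes exactly this residual when K and the plane geometry are unknown.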
Factorization methods for projective structure and motion
In IEEE Conf. Computer Vision & Pattern Recognition, 1996
Abstract

Cited by 115 (5 self)
This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization but runs much more quickly for large problems.
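The central observation can be sketched with synthetic data (this is not the paper's depth-recovery step, which uses fundamental matrices and epipoles): once consistent projective depths are known, the rescaled measurement matrix has rank at most 4 and factors by SVD into joint cameras and points:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 12                                        # views, points
X = np.vstack([rng.standard_normal((3, n)),
               np.ones((1, n))])                    # homogeneous 3D points
Ps = [rng.standard_normal((3, 4)) for _ in range(m)]  # projective cameras

# With consistent projective depths, the rescaled measurement matrix
# W = [lambda_ij * x_ij] equals (stacked cameras) @ (points): rank <= 4.
W = np.vstack([P @ X for P in Ps])                  # lambda_ij x_ij = P_i X_j

U, s, Vt = np.linalg.svd(W)
motion = U[:, :4] * s[:4]                           # 3m x 4 stacked cameras
structure = Vt[:4]                                  # 4 x n projective points
# motion @ structure reproduces W; cameras and points are defined only
# up to a common 4x4 projective transformation
```

With noisy depths the rank-4 truncation gives the least-squares fit, which is why the quality of the depth estimates matters.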
Factorization with Uncertainty
In European Conference on Computer Vision, 2000
Abstract

Cited by 84 (5 self)
Factorization using Singular Value Decomposition (SVD) is often used for recovering 3D shape and motion from feature correspondences across multiple views. SVD is powerful at finding the global solution to the associated least-square-error minimization problem. However, this is the correct error to minimize only when the x and y positional errors in the features are uncorrelated and identically distributed, which is rarely the case in real data. Uncertainty in feature position depends on the underlying spatial intensity structure in the image, which has strong directionality to it. Hence, the proper measure to minimize is the covariance-weighted squared error (the Mahalanobis distance). In this paper, we describe a new approach to covariance-weighted factorization, which can factor noisy feature correspondences with a high degree of directional uncertainty into structure and motion. Our approach is based on transforming the raw data into a covariance-weighted data space, where the co ...
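A minimal sketch of the whitening idea (assuming a per-feature 2×2 covariance; the numbers are invented): transforming each feature by Σ⁻¹ᐟ² makes the noise isotropic, so ordinary squared error in the transformed space equals the Mahalanobis distance in the original one:

```python
import numpy as np

def whiten(x, cov):
    # cov^{-1/2} via Cholesky: plain squared error on the whitened feature
    # equals the Mahalanobis distance on the original one
    L = np.linalg.cholesky(cov)
    Wi = np.linalg.inv(L)
    return Wi @ x, Wi

# A strongly directional (made-up) positional covariance, e.g. a feature
# that is well localized across an edge but poorly localized along it
cov = np.array([[9.0, 2.0],
                [2.0, 1.0]])
x = np.array([3.0, 4.0])

xw, Wi = whiten(x, cov)
cov_w = Wi @ cov @ Wi.T        # identity: the whitened noise is isotropic
```

This is only the per-feature transformation; the paper's contribution is carrying such covariance weights through the factorization itself.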
Affine Structure from Line Correspondences with Uncalibrated Affine Cameras
IEEE Trans. Pattern Analysis and Machine Intelligence, 1997
Abstract

Cited by 83 (9 self)
This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts 3D affine reconstruction of "line directions" into 2D projective reconstruction of "points". In addition, a line-based factorisation method is also proposed to handle redundant views. Experimental results on both simulated and real image sequences validate the robustness and the accuracy of the algorithm.
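The "one-dimensional projective camera" can be illustrated in a few lines (a toy example of mine, not the paper's seven-correspondence algorithm): under an affine camera with linear part A, a 3D line's direction d, viewed as a point of P², maps to the image line's direction A d, a point of P¹, through the 2×3 matrix A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # linear part of an affine camera
b = rng.standard_normal(2)        # affine offset
X0 = rng.standard_normal(3)       # a point on a 3D line
d = rng.standard_normal(3)        # the line's direction

# Images of two points on the line under x = A X + b
x1 = A @ X0 + b
x2 = A @ (X0 + d) + b

# The image line's direction depends only on d, through the 2x3 matrix A:
# 3D directions (points of P^2) map to image directions (points of P^1)
img_dir = x2 - x1
```

Because the offset b cancels, recovering the directions reduces to a purely projective problem, which is what makes the 2D projective point-reconstruction machinery applicable.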
Robust Rotation and Translation Estimation in Multiview Reconstruction
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007
Abstract

Cited by 76 (4 self)
It is known that the problem of multiview reconstruction can be solved in two steps: first estimate camera rotations, and then estimate translations using them. This paper presents new robust techniques for both of these steps. (i) Given pairwise relative rotations, global camera rotations are estimated linearly in the least-squares sense. (ii) Camera translations are estimated using a standard technique based on Second Order Cone Programming. Robustness is achieved by using only a subset of points according to a new criterion that diminishes the risk of choosing a mismatch. It is shown that only four points chosen in a special way are sufficient to represent a pairwise reconstruction almost as well as all of the points, which leads to a significant speedup. In image sets with repetitive or similar structures, non-existent epipolar geometries may be found, and because of them some rotations, and consequently translations, may be estimated incorrectly. It is shown that iterative removal of the pairwise reconstructions with the largest residual, followed by re-registration, removes most non-existent epipolar geometries. The performance of the proposed method is demonstrated on difficult wide-baseline image sets.
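Step (i) can be sketched as follows (a minimal version of my own with exact pairwise rotations and no outlier handling): stack the linear constraints R_j − R_ij R_i = 0, take the three smallest right singular vectors as the least-squares solution, and project each 3×3 block back onto the rotation group:

```python
import numpy as np

def project_to_so3(M):
    # Nearest rotation matrix to M in Frobenius norm, via SVD
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

rng = np.random.default_rng(1)
m = 4
R_true = []
for _ in range(m):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    R_true.append(Q * np.linalg.det(Q))            # force det = +1

pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
R_rel = {(i, j): R_true[j] @ R_true[i].T for (i, j) in pairs}  # exact input

# Linear system: for each pair, R_j - R_ij R_i = 0, acting on the stacked
# unknown G = [R_1; ...; R_m] (orthonormality is ignored at this stage)
A = np.zeros((3 * len(pairs), 3 * m))
for k, (i, j) in enumerate(pairs):
    A[3*k:3*k+3, 3*i:3*i+3] = -R_rel[(i, j)]
    A[3*k:3*k+3, 3*j:3*j+3] = np.eye(3)

# The three smallest right singular vectors span the solution space
_, _, Vt = np.linalg.svd(A)
G = Vt[-3:].T                                      # 3m x 3
R_hat = [project_to_so3(G[3*i:3*i+3]) for i in range(m)]
# R_hat agrees with R_true up to one common global rotation
```

With noisy or partly wrong relative rotations, the same least-squares system is solved and the projection step repairs the orthonormality that the linear stage ignores.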
Linear Multi View Reconstruction and Camera Recovery
International Journal of Computer Vision, 2001
Abstract

Cited by 66 (5 self)
This paper presents a linear algorithm for the simultaneous computation of 3D points and camera positions from multiple perspective views, based on having four points on a reference plane visible in all views. The reconstruction and camera recovery are achieved in a single step by finding the null space of a matrix using singular value decomposition. Unlike factorization algorithms, the presented algorithm does not require all points to be visible in all views. By simultaneously reconstructing points and views, it exploits the numerically stabilizing effect of having widely spread cameras with large mutual baselines. Experimental results are presented for both finite and infinite reference planes. An especially interesting application of this method is the reconstruction of architectural scenes with the reference plane taken as the plane at infinity, which is visible via three orthogonal vanishing points. This is demonstrated by reconstructing the outside and inside (courtyard) of a building on the basis of 35 views in a single SVD.
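The core numerical step, finding the null vector of a measurement matrix by SVD, can be sketched generically (a toy system of my own construction, not the paper's actual point-and-camera matrix):

```python
import numpy as np

def homogeneous_solve(A):
    # Least-squares solution of A x = 0 subject to ||x|| = 1:
    # the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

rng = np.random.default_rng(2)
x_true = rng.standard_normal(6)
x_true /= np.linalg.norm(x_true)

# Build a measurement matrix whose rows are all orthogonal to x_true,
# so x_true spans its null space
M = rng.standard_normal((20, 6))
A = M - np.outer(M @ x_true, x_true)

x = homogeneous_solve(A)           # recovers x_true up to sign
```

In the paper, the unknown vector stacks all 3D points and camera positions at once, which is how a single SVD yields the whole reconstruction.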