Results 1 - 10 of 14
A linear approach to motion estimation using generalized camera models
In: CVPR, 2008
Cited by 22 (7 self)
A well-known theoretical result for motion estimation using the generalized camera model is that 17 corresponding image rays can be used to solve linearly for the motion of a generalized camera. However, this paper shows that for many common configurations of the generalized camera model (e.g., multi-camera rigs, catadioptric cameras), such a simple 17-point algorithm does not exist, due to some previously overlooked ambiguities. We further discover that, despite these ambiguities, the motion estimation problem can still be solved effectively by the new algorithm proposed in this paper. Our algorithm is essentially linear, easy to implement, and computationally very efficient. Experiments on both real and simulated data show that the new algorithm also achieves reasonably high accuracy.
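The 17-point formulation referred to above can be illustrated with the generalized epipolar constraint (Pless): each correspondence between Plücker rays gives one equation that is linear in the 18 entries of E = [t]×R and R, so 17 generic rays determine the solution up to scale. A minimal synthetic sketch, not the paper's code; the ray origins, motion, and all names are invented for the demo:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

rng = np.random.default_rng(0)
R = rodrigues([0.2, -0.5, 1.0], 0.3)   # ground-truth rig rotation
t = np.array([0.5, -0.1, 0.8])         # ground-truth rig translation
E = skew(t) @ R

rows = []
for _ in range(17):
    X = rng.uniform(-5, 5, 3) + np.array([0.0, 0.0, 10.0])  # scene point
    c1, c2 = rng.normal(size=3), rng.normal(size=3)         # per-ray origins (non-central)
    q1 = X - c1; q1 = q1 / np.linalg.norm(q1)
    q1m = np.cross(c1, q1)                                  # Pluecker moment, frame 1
    X2 = R.T @ (X - t)                                      # same point in frame 2
    q2 = X2 - c2; q2 = q2 / np.linalg.norm(q2)
    q2m = np.cross(c2, q2)
    # generalized epipolar constraint: q1'E q2 + q1'R q2m + q1m'R q2 = 0
    rows.append(np.concatenate([np.kron(q1, q2),
                                np.kron(q1, q2m) + np.kron(q1m, q2)]))

A = np.array(rows)                                # 17 x 18 linear system
x_true = np.concatenate([E.ravel(), R.ravel()])
residual = np.linalg.norm(A @ x_true)             # true motion satisfies all equations

x = np.linalg.svd(A)[2][-1]                       # 1-D null space for generic rays
cosine = abs(x @ x_true) / (np.linalg.norm(x) * np.linalg.norm(x_true))
```

For fully generic (non-central) rays as above the null space is one-dimensional and contains the true solution; the paper's point is that for common rigs (multi-camera, catadioptric) extra null vectors appear and this simple solve breaks down.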
Absolute Scale in Structure from Motion from a Single Vehicle Mounted Camera
2009
Cited by 17 (2 self)
In structure from motion with a single camera, it is well known that the scene can be recovered only up to scale. To compute the absolute scale, one needs to know the baseline of the camera motion or the dimension of at least one element in the scene. In this paper, we show that there exists a class of structure-from-motion problems where the absolute scale can be computed completely automatically without this knowledge: when the camera is mounted on a wheeled vehicle (e.g., a car, bike, or mobile robot). The construction of these vehicles puts interesting constraints on the camera motion, known as “nonholonomic constraints”. The interesting case is when the camera has an offset from the vehicle’s center of motion. We show that by knowing just this offset, the absolute scale can be computed with good accuracy when the vehicle turns. We give a mathematical derivation and provide experimental results on both simulated and real data over a large image dataset collected along a 3 km path. To our knowledge, this is the first time the nonholonomic constraints of wheeled vehicles have been used to estimate absolute scale. We believe the proposed method can be useful in research areas involving visual odometry and mapping with vehicle-mounted cameras.
Motion estimation for nonoverlapping multicamera rigs: Linear algebraic and l∞ geometric solutions
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010
Cited by 14 (4 self)
We investigate the problem of estimating the ego-motion of a multicamera rig from two positions of the rig. We describe and compare two new algorithms for finding the 6 degrees of freedom (3 for rotation and 3 for translation) of the motion. One algorithm gives a linear solution and the other is a geometric algorithm that minimizes the maximum measurement error: the optimal L∞ solution. They are described in the context of the General Camera Model (GCM), and we pay particular attention to multicamera systems in which the cameras have nonoverlapping or minimally overlapping fields of view. Many nonlinear algorithms have been developed to solve the multicamera motion estimation problem, but no linear solution or guaranteed optimal geometric solution has previously been proposed. We make two contributions: 1) a fast linear algebraic method using the GCM and 2) a guaranteed globally optimal algorithm based on the L∞ geometric error using the branch-and-bound technique. In deriving the linear method using the GCM, we give a detailed analysis of the degeneracy of camera configurations. In finding the globally optimal solution, we apply a rotation space search technique recently proposed by Hartley and Kahl. Our experiments on both synthetic and real data show excellent results.
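The L∞ ("minimize the maximum residual") criterion contrasted above with the linear method can be illustrated on a toy problem: with rotation held fixed and a linearized residual model, the minimax translation estimate reduces to a small linear program. This is a simplified stand-in for the paper's angular-error formulation and branch-and-bound rotation search; the use of scipy and every name here is an assumption for the demo:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))                        # linearized residual model
t_true = np.array([0.3, -0.7, 1.1])
b = A @ t_true + rng.normal(scale=0.01, size=20)    # noisy observations

# Variables x = (t, gamma); minimize gamma subject to |A t - b| <= gamma.
c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.block([[A, -np.ones((20, 1))],
                 [-A, -np.ones((20, 1))]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
t_linf = res.x[:3]        # Chebyshev (L-infinity) translation estimate
```

Unlike least squares, the solution is determined by the worst-case residuals, which is why the guaranteed-optimal geometric formulation is attractive when outliers have already been removed.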
Plenoptic flow: Closed-form visual odometry for light field cameras
In: Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011
Cited by 9 (7 self)
Three closed-form solutions are proposed for six degree-of-freedom (6-DOF) visual odometry for light field cameras. The first approach breaks the problem into geometrically driven sub-problems with solutions adaptable to specific applications, while the second generalizes methods from optical flow to yield a more direct approach. The third combines these elements into a remarkably simple plenoptic-flow equation that is solved directly to estimate the camera's motion. The proposed methods avoid feature extraction, operating instead on all measured pixels, and are therefore robust to noise. The solutions are closed-form, computationally efficient, and operate in constant time regardless of scene complexity, making them suitable for real-time robotics applications. Results are shown for a simulated underwater survey scenario, and real-world results demonstrate good performance for a three-camera array, outperforming a state-of-the-art stereo feature-tracking approach.
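The optical-flow-style derivation that the second and third solutions build on can be seen in miniature for plain 2D flow: linearized brightness constancy gives one equation per pixel in the motion parameters, and stacking all pixels yields a closed-form least-squares solve with no feature extraction. A 2-parameter sketch only; the paper's plenoptic version solves for all 6 DOF using the light field's ray geometry, and the image function and names here are invented for the demo:

```python
import numpy as np

def I(x, y):
    """A smooth synthetic image defined as an analytic function."""
    return np.sin(0.3 * x) + np.cos(0.2 * y) + 0.3 * np.sin(0.13 * x + 0.07 * y)

v_true = np.array([0.05, -0.03])            # small inter-frame image motion
xs, ys = np.meshgrid(np.arange(0.0, 20.0), np.arange(0.0, 20.0))
x, y = xs.ravel(), ys.ravel()

I1 = I(x, y)
I2 = I(x - v_true[0], y - v_true[1])        # frame 2: frame 1 shifted by v_true

h = 1e-4                                    # central-difference gradients of frame 1
Ix = (I(x + h, y) - I(x - h, y)) / (2 * h)
Iy = (I(x, y + h) - I(x, y - h)) / (2 * h)

# Brightness constancy, linearized: grad(I1) . v = I1 - I2 at every pixel.
G = np.column_stack([Ix, Iy])
v = np.linalg.lstsq(G, I1 - I2, rcond=None)[0]   # closed-form motion estimate
```

Because every pixel contributes one equation, the estimate averages over the whole image rather than depending on a sparse feature set, which is the robustness property the abstract highlights.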
Measuring camera translation by the dominant apical angle
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008
Cited by 9 (7 self)
This paper provides a technique for measuring camera translation relative to the scene from two images. We demonstrate that the amount of translation can be reliably measured, for general as well as planar scenes, by the most frequent apical angle: the angle under which the camera centers are seen from the reconstructed scene points. Simulated experiments show that the dominant apical angle is a linear function of the length of the true camera translation. In a real experiment, we demonstrate that by skipping image pairs with too little motion, we can reliably initialize structure from motion, compute an accurate camera trajectory in order to rectify images, and use the ground-plane constraint for recognizing pedestrians in a hand-held video sequence.
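The apical angle is straightforward to compute once a reconstruction (up to the unknown global scale) is available: for each 3D point, take the angle subtended at the point by the two camera centres, and summarize the distribution (the paper uses the most frequent value; the median is used below). A synthetic check of the near-linear dependence on translation length; the scene layout and names are invented:

```python
import numpy as np

def apical_angles(C1, C2, X):
    """Angle at each point X[i] subtended by the camera centres C1 and C2."""
    u = C1 - X
    v = C2 - X
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.arccos(np.clip(np.sum(u * v, axis=1), -1.0, 1.0))

rng = np.random.default_rng(2)
X = rng.uniform([-5, -5, 10], [5, 5, 30], size=(500, 3))  # points in front of the cameras
C1 = np.zeros(3)
medians = {s: np.median(apical_angles(C1, np.array([s, 0.0, 0.0]), X))
           for s in (0.1, 0.2, 0.4)}
ratio = medians[0.2] / medians[0.1]   # close to 2: angle grows linearly with |t|
```

Doubling the baseline roughly doubles the dominant angle while the scene stays fixed, which is what makes the angle usable as a relative measure of translation for keyframe selection.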
A New Minimal Solution to the Relative Pose of a Calibrated Stereo Camera with Small Field of View Overlap
Cited by 5 (3 self)
In this paper we present a new minimal solver for the relative pose of a calibrated stereo camera. It is based on the observation that a feature visible in all cameras constrains the second stereo camera to lie on a sphere around the feature, whose position relative to the first stereo camera is known from triangulation. The constraint leaves three degrees of freedom: two for the location of the second camera on the sphere and a third for the rotation in the plane tangent to the sphere. We use three temporal 2D correspondences, two from the left (or right) camera and one from the other camera, to solve for these three remaining degrees of freedom. This approach is well suited to stereo pairs having a small overlap in their views. We present an efficient solution to this novel relative pose problem, derive how to use the new solver with two classes of measurements in RANSAC, evaluate its performance under noise and outliers, and demonstrate its use in a real-time structure-from-motion system.
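The sphere constraint follows directly from having metric triangulation in both stereo frames: pair 1 fixes the feature's position F, pair 2 fixes the feature's distance r from its own origin, so pair 2's centre must lie on the sphere of radius r about F, leaving the 3 DOF described above. A numerical check; the poses and names are invented:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

F = np.array([1.0, 2.0, 8.0])         # feature triangulated by stereo pair 1
R2 = rodrigues([0.1, 0.9, 0.2], 0.4)  # pose of pair 2 in the pair-1 frame (unknown
t2 = np.array([0.8, -0.3, 1.5])       # to the solver; used here as ground truth)

F2 = R2.T @ (F - t2)                  # the same feature triangulated by pair 2
r = np.linalg.norm(F2)                # metric distance known from pair-2 triangulation

# Sphere constraint: the pair-2 centre t2 lies at distance r from F.
gap = abs(np.linalg.norm(t2 - F) - r)
```

The remaining 3 DOF are then resolved by the three temporal correspondences, which is what makes the solver minimal.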
Motion estimation for self-driving cars with a generalized camera
In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. doi: 10.1109/CVPR.2013.354. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6619198
Cited by 4 (3 self)
In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the nonholonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix, from which the full relative motion including metric scale can be obtained. We provide analytical solutions for the general case with at least one inter-camera correspondence and for a special case with only intra-camera correspondences. We show that up to 6 solutions exist in both cases. We identify a degeneracy in the special case with only intra-camera correspondences when the car undergoes straight motion, where the scale becomes unobservable, and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. We verify the validity of our motion-model assumptions by comparing our results against GPS/INS ground truth on a large real-world dataset collected by a car equipped with 4 cameras with minimally overlapping fields of view.
A Visual Servoing Model for Generalised Cameras: Case study of non-overlapping cameras
In: IEEE Int. Conf. on Robotics and Automation (ICRA), 2011
Cited by 2 (0 self)
This paper proposes an adaptation of classical image-based visual servoing to a generalised imaging model in which cameras are modelled as sets of 3D viewing rays. This new model leads to a generalised visual servoing control formalism that can be applied to any type of imaging system, whether multi-camera, catadioptric, non-central, etc. In this paper the generalised 3D viewing rays are parameterised geometrically via Plücker line coordinates. The new visual servoing model is tested on an atypical stereo-camera system with non-overlapping cameras. In this case no 3D information is available from triangulation, and the system is comparable to a 2D visual servoing system with a non-central ray-based control law.
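The Plücker parameterisation mentioned above represents a 3D line through point p with unit direction d as the pair (d, m) with moment m = p × d; two lines meet exactly when their reciprocal product d1·m2 + d2·m1 vanishes, which is the basic geometric test behind such ray-based formulations. A minimal sketch; the points and names are invented:

```python
import numpy as np

def plucker(p, d):
    """Pluecker coordinates (direction, moment) of the line through p along d."""
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

X = np.array([1.0, -2.0, 5.0])            # a scene point
p2 = np.array([0.3, 0.1, 0.0])            # a second camera centre

d1, m1 = plucker(np.zeros(3), X)          # ray from the first centre through X
d2, m2 = plucker(p2, X - p2)              # ray from the second centre through X
recip = d1 @ m2 + d2 @ m1                 # zero: the two rays intersect (at X)

d3, m3 = plucker(p2, np.array([0.0, 0.0, 1.0]))   # a ray that misses X
recip_miss = d1 @ m3 + d3 @ m1                    # nonzero: no intersection
```

The moment m is independent of which point on the line is chosen, which is what makes the pair (d, m) a proper line coordinate for control-law derivations.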
Real-Time 6D Stereo Visual Odometry with Non-Overlapping Fields of View
Cited by 1 (1 self)
In this paper, we present a framework for 6D absolute-scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real time and employs information from two cameras with non-overlapping fields of view. Monocular visual odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via a linear solution by imposing the known static transformation between both sensors. The redundancy in the motion estimates is finally exploited by statistical fusion into an optimal 6D metric result. The proposed technique is robust to outliers and able to continuously deliver a reasonable measurement of the scale factor. The quality of the framework is demonstrated by a concise evaluation on indoor datasets, including a comparison to accurate ground-truth data provided by an external motion tracking system.
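A linear scale recovery of this kind can be sketched with the rigid-rig ("hand-eye") relation A X = X B: if X = (R_X, t_X) is the known static transform between the cameras and each monocular VO pipeline reports its rotation and a unit translation direction, the two unknown scales satisfy λ_A a − λ_B R_X b = (I − R_A) t_X, three linear equations in two unknowns. This is a generic derivation consistent with the abstract, not the paper's exact estimator; all names are invented:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

# Known static transform X between the two cameras (from calibration).
R_X, t_X = rodrigues([0.0, 1.0, 0.0], 1.2), np.array([0.0, 0.0, 0.6])
# Ground-truth metric motion A of camera 1.
R_A, t_A = rodrigues([0.3, 1.0, -0.2], 0.25), np.array([0.4, 0.05, 0.9])
# Induced motion B of camera 2, from B = X^-1 A X (translation part).
t_B = R_X.T @ (R_A @ t_X + t_A - t_X)

a = t_A / np.linalg.norm(t_A)   # monocular VO yields directions only
b = t_B / np.linalg.norm(t_B)

# lam_A * a - lam_B * (R_X b) = (I - R_A) t_X, linear in the two scales;
# degenerate when R_A = I (pure translation), where scale is unobservable.
M = np.column_stack([a, -R_X @ b])
lam = np.linalg.lstsq(M, (np.eye(3) - R_A) @ t_X, rcond=None)[0]
```

With noisy VO the same 3x2 system is solved in a least-squares sense, and the redundancy between the two cameras is what the statistical fusion step exploits.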