Results 11 - 20 of 33
A Theory of Minimal 3D Point to 3D Plane Registration and Its Generalization, 2012
"... Registration of 3D data is a key problem in many applications in computer vision, computer graphics and robotics. This paper provides a family of minimal solutions for the 3D-to-3D registration problem in which the 3D data are represented as points and planes. Such scenarios occur frequently when a ..."
Abstract - Cited by 4 (2 self)
Registration of 3D data is a key problem in many applications in computer vision, computer graphics and robotics. This paper provides a family of minimal solutions for the 3D-to-3D registration problem in which the 3D data are represented as points and planes. Such scenarios occur frequently when a 3D sensor provides 3D points and our goal is to register them to a 3D object represented by a set of planes. In order to compute the 6 degrees-of-freedom transformation between the sensor and the object, we need at least six points on three or more planes. We systematically investigate and develop pose estimation algorithms for several configurations, including all minimal configurations, that arise from the distribution of points on planes. We also identify the degenerate configurations in such registrations. The underlying algebraic equations used in many registration problems are the same and we show that many 2D-to-3D and 3D-to-3D pose estimation/registration algorithms involving points, lines, and planes can be mapped to the proposed framework. We validate our theory in simulations as well as in three real-world applications: registration of a robotic arm with an object using a contact sensor, registration of planar city models with 3D point clouds obtained using multi-view reconstruction, and registration between depth maps generated by a Kinect sensor.
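The paper's closed-form minimal solvers are not reproduced in this listing, but the point-to-plane residual they build on is standard. The following is a minimal sketch (NumPy, hypothetical variable names, small-angle linearization) of one least-squares alignment step; it illustrates the constraint each point-on-plane correspondence contributes, not the authors' algorithm.

```python
import numpy as np

def point_to_plane_step(points, normals, offsets):
    """One linearized least-squares step aligning 3D points to known planes.

    Planes are n_i . x + d_i = 0 with unit normals n_i; the residual for a
    point p_i is n_i . (R p_i + t) + d_i.  With a small-angle rotation
    R ~ I + [w]_x this is linear in (w, t).  A sketch of the classical
    point-to-plane least squares, not the paper's minimal solver (which
    needs only six points distributed over three or more planes).
    """
    A = np.hstack([np.cross(points, normals), normals])   # N x 6 Jacobian rows [p_i x n_i, n_i]
    b = -(np.sum(normals * points, axis=1) + offsets)     # N residuals
    w_t, *_ = np.linalg.lstsq(A, b, rcond=None)           # [wx wy wz tx ty tz]
    wx, wy, wz, tx, ty, tz = w_t
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])                        # linearized rotation
    return R, np.array([tx, ty, tz])

# Toy usage: two points on each of the planes x = 1, y = 2, z = 3,
# perturbed by a small offset that the step should approximately undo.
pts = np.array([[1.0, 0.2, 0.5], [1.0, -0.3, 0.1],
                [0.4, 2.0, 0.7], [-0.2, 2.0, 0.9],
                [0.6, 0.1, 3.0], [0.3, -0.5, 3.0]])
nrm = np.repeat(np.eye(3), 2, axis=0)
off = np.array([-1.0, -1.0, -2.0, -2.0, -3.0, -3.0])
R, t = point_to_plane_step(pts + 0.05, nrm, off)
```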
Generic and Real Time Structure from Motion using Local Bundle Adjustment, 2009
"... This paper describes a method for estimating the motion of a calibrated camera and the three dimensional geometry of the filmed environment. The only data used is video input. Interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in ..."
Abstract - Cited by 4 (2 self)
This paper describes a method for estimating the motion of a calibrated camera and the three dimensional geometry of the filmed environment. The only data used is video input. Interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real-time, and key frames are selected to enable 3D reconstruction of the features. We introduce a local bundle adjustment allowing 3D points and camera poses to be refined simultaneously through the sequence. This significantly reduces computational complexity when compared with global bundle adjustment. This method is applied initially to a perspective camera model, then extended to a generic camera model to describe most existing kinds of cameras. Experiments performed using real world data provide evaluations of the speed and robustness of the method. Results are compared to the ground truth measured with a differential GPS. The generalized method is also evaluated experimentally, using three types of calibrated cameras: stereo rig, perspective and catadioptric.
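To make the local bundle adjustment idea concrete, here is an illustrative sketch with SciPy: only the poses of a sliding window of recent key frames and the 3D points they observe are packed into the parameter vector and refined against the reprojection error. The windowing scheme and parameter layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs):
    """Reprojection residuals for the cameras and points inside the local window.

    params = [rvec_0, tvec_0, ..., rvec_{n_cams-1}, tvec_{n_cams-1},
              X_0, ..., X_{n_pts-1}]; frames outside the window are simply
    not part of the problem, which keeps the cost per key frame bounded.
    """
    cams = params[:6 * n_cams].reshape(n_cams, 6)
    pts3d = params[6 * n_cams:].reshape(n_pts, 3)
    rot = Rotation.from_rotvec(cams[cam_idx, :3])          # one rotation per observation
    p_cam = rot.apply(pts3d[pt_idx]) + cams[cam_idx, 3:]   # points in camera frames
    proj = (K @ p_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                      # pinhole projection
    return (proj - obs).ravel()

def local_bundle_adjust(x0, n_cams, n_pts, K, cam_idx, pt_idx, obs):
    # Non-linear least squares over the local window only (trust-region solver).
    res = least_squares(reprojection_residuals, x0, method="trf",
                        args=(n_cams, n_pts, K, cam_idx, pt_idx, obs))
    return res.x
```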
Motion estimation for self-driving cars with a generalized camera - In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. doi: 10.1109/CVPR.2013.354. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6619198
"... In this paper, we present a visual ego-motion estima-tion algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2- ..."
Abstract - Cited by 4 (3 self)
In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix where the full relative motion including metric scale can be obtained. We provide the analytical solutions for the general case with at least one inter-camera correspondence and a special case with only intra-camera correspondences. We show that up to a maximum of 6 solutions exist for both cases. We identify the existence of degeneracy when the car undergoes straight motion in the special case with only intra-camera correspondences, where the scale becomes unobservable, and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. We verify the validity of our assumptions on the motion model by comparing our results on a large real-world dataset collected by a car equipped with 4 cameras with minimal overlapping fields of view against the GPS/INS ground truth.
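The abstract only states that the 2-point solver drops into a RANSAC loop; the outer hypothesize-and-test loop itself is generic and can be sketched as follows. `solve_two_point` and `epipolar_error` are hypothetical placeholders for the paper's analytical solver and the generalized epipolar residual.

```python
import numpy as np

def ransac_relative_pose(correspondences, solve_two_point, epipolar_error,
                         iters=200, thresh=1e-3, rng=None):
    """Hypothesize-and-test loop around a 2-point minimal solver.

    `correspondences` is an (N, ...) NumPy array of ray correspondences.
    `solve_two_point(sample)` returns up to 6 candidate motions (placeholder).
    `epipolar_error(motion, correspondences)` returns one residual per
    correspondence (placeholder).
    """
    rng = np.random.default_rng() if rng is None else rng
    best_motion = None
    best_inliers = np.zeros(len(correspondences), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(correspondences), size=2, replace=False)
        for motion in solve_two_point(correspondences[sample]):
            inliers = epipolar_error(motion, correspondences) < thresh
            if inliers.sum() > best_inliers.sum():
                best_motion, best_inliers = motion, inliers
    return best_motion, best_inliers
```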
A minimal solution for camera calibration using independent pairwise correspondences - In ECCV, 2012
"... Abstract. We propose a minimal algorithm for fully calibrating a cam-era from 11 independent pairwise point correspondences with two other calibrated cameras. Unlike previous approaches, our method neither re-quires triple correspondences, nor prior knowledge about the viewed scene. This algorithm c ..."
Abstract - Cited by 2 (1 self)
We propose a minimal algorithm for fully calibrating a camera from 11 independent pairwise point correspondences with two other calibrated cameras. Unlike previous approaches, our method neither requires triple correspondences nor prior knowledge about the viewed scene. This algorithm can be used to insert or re-calibrate a new camera into an existing network without having to interrupt operation. Its main strength comes from the fact that it is often difficult to find triple correspondences in a camera network. This makes our algorithm, for the specified use cases, probably the best-suited calibration solution that does not require a calibration target, and hence can be performed without human interaction.
Minimal Solutions for Generic Imaging Models
"... A generic imaging model refers to a non-parametric camera model where every camera is treated as a set of unconstrained projection rays. Calibration would simply be a method to map the projection rays to image pixels; such a mapping can be computed using plane based calibration grids. However, exist ..."
Abstract - Cited by 2 (0 self)
A generic imaging model refers to a non-parametric camera model where every camera is treated as a set of unconstrained projection rays. Calibration would simply be a method to map the projection rays to image pixels; such a mapping can be computed using plane based calibration grids. However, existing algorithms for generic calibration use more point correspondences than the theoretical minimum. It has been well-established that non-minimal solutions for calibration and structure-from-motion algorithms are generally noise-prone compared to minimal solutions. In this work we derive minimal solutions for generic calibration algorithms. Our algorithms for generally central cameras use 4 point correspondences in three calibration grids to compute the motion between the grids. Using simulations we show that our minimal solutions are more robust to noise compared to non-minimal solutions. We also show very accurate distortion correction results on fisheye images.
Numerical Methods for Geometric Vision: From Minimal to Large Scale Problems
"... This thesis presents a number of results and algorithms for the numerical solution of problems in geometric computer vision. Estimation of scene structure and camera motion using only image data has been one of the central themes of research in photogrammetry, geodesy and computer vision. It has imp ..."
Abstract - Cited by 2 (0 self)
This thesis presents a number of results and algorithms for the numerical solution of problems in geometric computer vision. Estimation of scene structure and camera motion using only image data has been one of the central themes of research in photogrammetry, geodesy and computer vision. It has important applications for robotics, autonomous vehicles, cartography, architecture, the movie industry, photography, etc. Images inherently provide ambiguous and uncertain data about the world. Hence, geometric computer vision turns out to be as much about statistics as about geometry. Basically we consider two types of problems: minimal problems, where the number of constraints exactly matches the number of unknowns, and large scale problems, which need to be addressed using efficient optimization algorithms. Solvers for minimal problems are used heavily during preprocessing to eliminate outliers in uncertain data. Such problems are usually solved by finding the zeros of a system of polynomial equations.
Solving for Relative Pose with a Partially Known Rotation is a Quadratic Eigenvalue Problem
"... We propose a novel formulation of minimal case so-lutions for determining the relative pose of perspective and generalized cameras given a partially known rotation, namely, a known axis of rotation. An axis of rotation may be easily obtained by detecting vertical vanishing points with computer visio ..."
Abstract - Cited by 2 (2 self)
We propose a novel formulation of minimal case solutions for determining the relative pose of perspective and generalized cameras given a partially known rotation, namely, a known axis of rotation. An axis of rotation may be easily obtained by detecting vertical vanishing points with computer vision techniques, or with the aid of sensor measurements from a smartphone. Given a known axis of rotation, our algorithms solve for the angle of rotation around the known axis along with the unknown translation. We formulate these relative pose problems as Quadratic Eigenvalue Problems which are very simple to construct. We run several experiments on synthetic and real data to compare our methods to the current state-of-the-art algorithms. Our methods provide several advantages over alternative methods, including efficiency and accuracy, particularly in the presence of image and sensor noise, as is often the case for mobile devices.
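A quadratic eigenvalue problem (λ²A + λB + C)x = 0 of the kind the abstract refers to can be solved by linearizing it into a generalized eigenvalue problem of twice the size. The sketch below shows that standard reduction with SciPy; the matrices A, B, C built from the image correspondences are specific to the paper and are left abstract here.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(A, B, C):
    """Solve (lam^2 A + lam B + C) x = 0 via the companion linearization
        [[0, I], [-C, -B]] z = lam [[I, 0], [0, A]] z,   z = [x, lam * x].
    Returns the eigenvalues and the x-part of the eigenvectors."""
    n = A.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    L = np.block([[Z, I], [-C, -B]])
    M = np.block([[I, Z], [Z, A]])
    lam, vecs = eig(L, M)          # generalized eigenvalue problem
    return lam, vecs[:n, :]        # x lives in the first n rows of z

# Real eigenvalues correspond to candidate rotation angles / translations;
# complex ones are discarded, e.g. inside a RANSAC loop.
```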
Self-Calibration and Visual SLAM with a Multi-Camera System on a Micro Aerial Vehicle
"... Abstract—The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. An accurate calibration is a nec-essary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose ..."
Abstract - Cited by 2 (2 self)
The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. An accurate calibration is a necessary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system. On our MAV, we set up each camera pair in a stereo configuration. We propose a novel vSLAM-based self-calibration method for a multi-sensor system that includes multiple calibrated stereo cameras and an inertial measurement unit (IMU). Our self-calibration estimates the transform with metric scale between each camera and the IMU. Once the MAV is calibrated, it is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses inertial information to recover the relative motion of the MAV with metric scale. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real-time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results in both indoor and outdoor environments. The code for both the self-calibration and vSLAM is available as a set of ROS packages at
Monocular Visual Odometry and Dense 3D Reconstruction for On-Road Vehicles
"... Abstract. More and more on-road vehicles are equipped with cameras each day. This paper presents a novel method for estimating the relative motion of a vehicle from a sequence of images obtained using a single vehicle-mounted camera. Recently, several researchers in robotics and computer vision have ..."
Abstract - Cited by 1 (0 self)
More and more on-road vehicles are equipped with cameras each day. This paper presents a novel method for estimating the relative motion of a vehicle from a sequence of images obtained using a single vehicle-mounted camera. Recently, several researchers in robotics and computer vision have studied the performance of motion estimation algorithms under non-holonomic constraints and planarity. The successful algorithms typically use the smallest number of feature correspondences with respect to the motion model. It has been strongly established that such minimal algorithms are efficient and robust to outliers when used in a hypothesize-and-test framework such as random sample consensus (RANSAC). In this paper, we show that the planar 2-point motion estimation can be solved analytically using a single quadratic equation, without the need for iterative techniques such as the Newton-Raphson method used in existing work. Non-iterative methods are more efficient and do not suffer from local minima problems. Although 2-point motion estimation generates visually accurate on-road vehicle trajectories, the motion is not precise enough to perform dense 3D reconstruction due to the non-planarity of roads. Thus we use a 2-point relative motion algorithm for the initial images, followed by 3-point 2D-to-3D camera pose estimation for the subsequent images. Using this hybrid approach, we generate accurate motion estimates for a plane-sweeping algorithm that produces dense depth maps for obstacle detection applications.
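For the 3-point 2D-to-3D stage of the hybrid pipeline, an off-the-shelf P3P solver inside RANSAC gives the flavor of what the abstract describes. The snippet below uses OpenCV's generic PnP-RANSAC interface as a stand-in; it is not the authors' implementation, and the intrinsics and correspondences shown are made-up placeholders.

```python
import numpy as np
import cv2

# Placeholder inputs: 3D landmarks from the already-reconstructed map and
# their 2D observations in the next image (N x 3 and N x 2 float arrays).
object_points = np.random.rand(20, 3).astype(np.float32)
image_points = np.random.rand(20, 2).astype(np.float32)
K = np.array([[700, 0, 320],
              [0, 700, 240],
              [0, 0, 1]], dtype=np.float64)     # example intrinsics
dist = np.zeros(5)                              # assume undistorted points

# P3P minimal samples inside RANSAC, mirroring the hypothesize-and-test scheme.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist,
    flags=cv2.SOLVEPNP_P3P, reprojectionError=2.0, iterationsCount=200)
if ok:
    R, _ = cv2.Rodrigues(rvec)                  # camera pose: x_cam = R X + t
```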
Real-Time 6D Stereo Visual Odometry with Non-Overlapping Fields of View
"... In this paper, we present a framework for 6D absolute scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular Visual Odome-try supplying up-to-s ..."
Abstract - Cited by 1 (1 self)
In this paper, we present a framework for 6D absolute scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular Visual Odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via a linear solution by imposing the known static transformation between both sensors. The redundancy in the motion estimates is finally exploited by a statistical fusion to an optimal 6D metric result. The proposed technique is robust to outliers and able to continuously deliver a reasonable measurement of the scale factor. The quality of the framework is demonstrated by a concise evaluation on indoor datasets, including a comparison to accurate ground truth data provided by an external motion tracking system.
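Recovering the metric scale from two monocular tracks with a known rigid inter-camera transform reduces to a small linear system. The sketch below assumes a hand-eye style relation T_a T_x = T_x T_b between the per-frame motions of the two cameras; it illustrates the kind of linear constraint involved, not necessarily the paper's exact formulation.

```python
import numpy as np

def recover_scales(R_a, t_a_dir, t_b_dir, R_x, t_x):
    """Metric scales (s_a, s_b) for two up-to-scale monocular motions
    (R_a, s_a * t_a_dir) and (R_b, s_b * t_b_dir) of rigidly coupled cameras,
    given the known inter-camera transform (R_x, t_x).

    From T_a T_x = T_x T_b, the translation rows give the linear constraint
        s_a * t_a_dir - s_b * (R_x @ t_b_dir) = (I - R_a) @ t_x,
    i.e. three equations in two unknowns, solved by least squares.
    """
    A = np.column_stack([t_a_dir, -(R_x @ t_b_dir)])   # 3 x 2
    b = (np.eye(3) - R_a) @ t_x                        # 3-vector
    scales, *_ = np.linalg.lstsq(A, b, rcond=None)
    return scales                                      # [s_a, s_b]
```

Stacking this constraint over several frame pairs before solving, and then fusing the redundant estimates statistically, mirrors the robustness strategy the abstract describes.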