Results 11–20 of 33
A Theory of Minimal 3D Point to 3D Plane Registration and Its Generalization
2012
Cited by 4 (2 self)

Abstract:
Registration of 3D data is a key problem in many applications in computer vision, computer graphics and robotics. This paper provides a family of minimal solutions for the 3D-to-3D registration problem in which the 3D data are represented as points and planes. Such scenarios occur frequently when a 3D sensor provides 3D points and our goal is to register them to a 3D object represented by a set of planes. In order to compute the 6 degrees-of-freedom transformation between the sensor and the object, we need at least six points on three or more planes. We systematically investigate and develop pose estimation algorithms for several configurations, including all minimal configurations, that arise from the distribution of points on planes. We also identify the degenerate configurations in such registrations. The underlying algebraic equations used in many registration problems are the same and we show that many 2D-to-3D and 3D-to-3D pose estimation/registration algorithms involving points, lines, and planes can be mapped to the proposed framework. We validate our theory in simulations as well as in three real-world applications: registration of a robotic arm with an object using a contact sensor, registration of planar city models with 3D point clouds obtained using multi-view reconstruction, and registration between depth maps generated by a Kinect sensor.
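As a concrete illustration of the point-on-plane constraint this abstract is built on, here is a small numerical sketch. It is not the paper's solver: we assume a small-angle rotation so the constraint becomes linear, and the planes, points, and six-correspondence configuration are our own synthetic choices.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def rodrigues(w):
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def solve_point_plane(points, planes):
    """Linearized point-to-plane registration (small-angle assumption).

    Each correspondence gives n.(R p + t) + d = 0; with R ~ I + [w]x
    this becomes the linear row  (p x n).w + n.t = -(n.p + d)."""
    A = np.array([np.concatenate([np.cross(p, n), n])
                  for p, (n, d) in zip(points, planes)])
    b = np.array([-(n @ p + d) for p, (n, d) in zip(points, planes)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # rotation vector w, translation t

# synthetic check: six points, two on each of three planes
planes = [(np.array([1.0, 0, 0]), -1.0)] * 2 + \
         [(np.array([0, 1.0, 0]), 0.5)] * 2 + \
         [(np.array([0, 0, 1.0]), 2.0)] * 2
q = np.array([[1, 0.3, -0.2], [1, -1.1, 0.7],        # on x = 1
              [0.4, -0.5, 1.3], [-0.9, -0.5, 0.1],   # on y = -0.5
              [0.2, 0.8, -2.0], [1.5, -0.6, -2.0]])  # on z = -2
w_true = np.array([5e-4, -1e-3, 7e-4])               # small rotation
t_true = np.array([0.2, -0.1, 0.3])
R = rodrigues(w_true)
points = (q - t_true) @ R          # sensor frame: p = R^T (q - t)
w_est, t_est = solve_point_plane(points, planes)
```

With six such rows (here two points on each of three planes, one of the configurations discussed above) the 6x6 linear system determines the pose; the paper's algorithms handle arbitrary rotations exactly rather than by linearization.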
Generic and Real Time Structure from Motion using Local Bundle Adjustment
2009
Cited by 4 (2 self)

Abstract:
This paper describes a method for estimating the motion of a calibrated camera and the three-dimensional geometry of the filmed environment. The only data used is video input. Interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real-time, and key frames are selected to enable 3D reconstruction of the features. We introduce a local bundle adjustment allowing 3D points and camera poses to be refined simultaneously through the sequence. This significantly reduces computational complexity when compared with global bundle adjustment. This method is applied initially to a perspective camera model, then extended to a generic camera model to describe most existing kinds of cameras. Experiments performed using real-world data provide evaluations of the speed and robustness of the method. Results are compared to the ground truth measured with a differential GPS. The generalized method is also evaluated experimentally, using three types of calibrated cameras: stereo rig, perspective and catadioptric.
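The "local" idea, refining only a sliding window of recent poses together with the structure, can be sketched in a few lines. This is a toy version under our own assumptions (identity rotations, translation-only cameras, a hand-rolled Gauss-Newton loop with a finite-difference Jacobian), not the paper's implementation:

```python
import numpy as np

def project(c, X, f=500.0):
    """Pinhole projection with identity rotation (a simplification:
    the paper refines full 6-DoF poses; we refine only camera
    centers to keep the sketch short)."""
    d = X - c
    return f * d[:2] / d[2]

def local_ba(cams, pts, obs, n_local=3, iters=8):
    """Local bundle adjustment: jointly refine the last n_local
    camera centers and the 3D points, keeping older cameras fixed."""
    fixed = cams[:-n_local]
    x = np.concatenate([cams[-n_local:].ravel(), pts.ravel()])
    nc = 3 * n_local

    def residuals(x):
        allc = np.vstack([fixed, x[:nc].reshape(n_local, 3)])
        ps = x[nc:].reshape(pts.shape)
        return np.concatenate([project(allc[i], ps[j]) - uv
                               for i, j, uv in obs])

    for _ in range(iters):
        r = residuals(x)
        J = np.empty((r.size, x.size))
        for k in range(x.size):        # finite-difference Jacobian
            xp = x.copy(); xp[k] += 1e-6
            J[:, k] = (residuals(xp) - r) / 1e-6
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return (np.vstack([fixed, x[:nc].reshape(n_local, 3)]),
            x[nc:].reshape(pts.shape), residuals(x))

# synthetic sequence: 5 cameras along x, 8 points 4-8 m in front
rng = np.random.default_rng(1)
cams_true = np.column_stack([np.linspace(0, 0.4, 5)] + [np.zeros(5)] * 2)
pts_true = np.column_stack([rng.uniform(-1, 1, 8), rng.uniform(-1, 1, 8),
                            rng.uniform(4, 8, 8)])
obs = [(i, j, project(cams_true[i], pts_true[j]))
       for i in range(5) for j in range(8)]
cams0 = cams_true.copy()
pts0 = pts_true + 0.01 * rng.standard_normal((8, 3))
cams0[-3:] += 0.01 * rng.standard_normal((3, 3))
cams_ref, pts_ref, r = local_ba(cams0, pts0, obs)
```

Because only the window's parameters enter the normal equations, the cost of each refinement stays constant as the sequence grows, which is the complexity argument made in the abstract.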
Motion estimation for self-driving cars with a generalized camera
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. doi: 10.1109/CVPR.2013.354. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6619198
Cited by 4 (3 self)

Abstract:
In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix where the full relative motion including metric scale can be obtained. We provide the analytical solutions for the general case with at least one inter-camera correspondence and a special case with only intra-camera correspondences. We show that up to a maximum of 6 solutions exist for both cases. We identify the existence of degeneracy when the car undergoes straight motion in the special case with only intra-camera correspondences, where the scale becomes unobservable, and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. We verify the validity of our assumptions on the motion model by comparing our results on a large real-world dataset collected by a car equipped with 4 cameras with minimal overlapping fields of view against the GPS/INS ground truth.
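The non-holonomic constraint the paper exploits says a car's planar motion is locally a circular arc, so the translation direction bisects the yaw change. A minimal sketch of that parameterization (our own notation, not the paper's generalized-camera solver):

```python
import numpy as np

def ackermann_motion(theta, radius):
    """Planar circular-arc motion (our illustrative parameterization
    of the non-holonomic constraint): the vehicle yaws by theta while
    driving on a circle of the given radius."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = radius * np.array([np.sin(theta), 1.0 - np.cos(theta)])
    return R, t

theta = 0.3
R, t = ackermann_motion(theta, radius=5.0)
# The translation direction bisects the yaw change, so the relative
# motion has only two unknowns (theta and scale) -- which is why two
# point correspondences suffice for the solver described above.
bearing = np.arctan2(t[1], t[0])
arc_chord = np.linalg.norm(t)
```

This also makes the degeneracy in the abstract visible: for straight motion (theta = 0) the chord length carries all the scale information, and with only intra-camera correspondences that scale drops out.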
A minimal solution for camera calibration using independent pairwise correspondences
In ECCV, 2012
Cited by 2 (1 self)

Abstract:
We propose a minimal algorithm for fully calibrating a camera from 11 independent pairwise point correspondences with two other calibrated cameras. Unlike previous approaches, our method neither requires triple correspondences nor prior knowledge about the viewed scene. This algorithm can be used to insert or recalibrate a new camera into an existing network without having to interrupt operation. Its main strength comes from the fact that it is often difficult to find triple correspondences in a camera network. This makes our algorithm, for the specified use cases, probably the most suitable calibration solution that does not require a calibration target, and hence can be performed without human interaction.
Minimal Solutions for Generic Imaging Models
Cited by 2 (0 self)

Abstract:
A generic imaging model refers to a non-parametric camera model where every camera is treated as a set of unconstrained projection rays. Calibration would simply be a method to map the projection rays to image pixels; such a mapping can be computed using plane-based calibration grids. However, existing algorithms for generic calibration use more point correspondences than the theoretical minimum. It has been well-established that non-minimal solutions for calibration and structure-from-motion algorithms are generally noise-prone compared to minimal solutions. In this work we derive minimal solutions for generic calibration algorithms. Our algorithms for generally central cameras use 4 point correspondences in three calibration grids to compute the motion between the grids. Using simulations we show that our minimal solutions are more robust to noise compared to non-minimal solutions. We also show very accurate distortion correction results on fisheye images.
Numerical Methods for Geometric Vision: From Minimal to Large Scale Problems
Cited by 2 (0 self)

Abstract:
This thesis presents a number of results and algorithms for the numerical solution of problems in geometric computer vision. Estimation of scene structure and camera motion using only image data has been one of the central themes of research in photogrammetry, geodesy and computer vision. It has important applications for robotics, autonomous vehicles, cartography, architecture, the movie industry, photography, etc. Images inherently provide ambiguous and uncertain data about the world. Hence, geometric computer vision turns out to be as much about statistics as about geometry. Basically, we consider two types of problems: minimal problems, where the number of constraints exactly matches the number of unknowns, and large-scale problems, which need to be addressed using efficient optimization algorithms. Solvers for minimal problems are used heavily during preprocessing to eliminate outliers in uncertain data. Such problems are usually solved by finding the zeros of a system of polynomial equations.
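The last sentence is worth a small illustration: in the univariate case, the zeros of a polynomial are the eigenvalues of its companion matrix, and multivariate minimal solvers build an analogous "action matrix". A sketch with our own toy polynomials:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Zeros of c0*x^n + c1*x^(n-1) + ... + cn as eigenvalues of the
    companion matrix of the monic polynomial -- the standard
    eigenvalue route used (in multivariate, action-matrix form) by
    many polynomial minimal solvers."""
    c = np.asarray(coeffs, float)
    c = c / c[0]                      # make the polynomial monic
    n = len(c) - 1
    comp = np.zeros((n, n))
    comp[1:, :-1] = np.eye(n - 1)     # shift structure
    comp[:, -1] = -c[1:][::-1]        # last column holds -a_0..-a_{n-1}
    return np.linalg.eigvals(comp)
```

For example, `roots_via_companion([1, -3, 2])` recovers the zeros of x^2 - 3x + 2; the same eigenvalue machinery scales to the polynomial systems arising from minimal problems.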
Solving for Relative Pose with a Partially Known Rotation is a Quadratic Eigenvalue Problem
Cited by 2 (2 self)

Abstract:
We propose a novel formulation of minimal case solutions for determining the relative pose of perspective and generalized cameras given a partially known rotation, namely, a known axis of rotation. An axis of rotation may be easily obtained by detecting vertical vanishing points with computer vision techniques, or with the aid of sensor measurements from a smartphone. Given a known axis of rotation, our algorithms solve for the angle of rotation around the known axis along with the unknown translation. We formulate these relative pose problems as Quadratic Eigenvalue Problems, which are very simple to construct. We run several experiments on synthetic and real data to compare our methods to the current state-of-the-art algorithms. Our methods provide several advantages over alternative methods, including efficiency and accuracy, particularly in the presence of image and sensor noise, as is often the case for mobile devices.
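A quadratic eigenvalue problem Q(l)x = (l^2 M + l C + K)x = 0 is routinely solved by linearizing it into a generalized eigenvalue problem of twice the size. A sketch of that reduction (random M, C, K stand in for the matrices the paper derives from its pose constraints):

```python
import numpy as np

def solve_qep(M, C, K):
    """Solve (l^2 M + l C + K) x = 0 by companion linearization:
    with z = [l x; x] the QEP becomes A z = l B z, handled here as a
    standard eigenproblem of B^-1 A (valid because M, and hence B,
    is invertible in this sketch)."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[-C, -K], [I, Z]])
    B = np.block([[M, Z], [Z, I]])
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    return vals, vecs[n:, :]  # eigenvalues and the x part of each z

# random stand-ins for the matrices built from the pose constraints
rng = np.random.default_rng(0)
M, C, K = (rng.standard_normal((3, 3)) for _ in range(3))
vals, xs = solve_qep(M, C, K)
```

Each eigenvalue is a candidate rotation-angle parameter and its eigenvector carries the remaining unknowns, which is what makes the formulation "very simple to construct" and solve with stock linear algebra.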
Self-Calibration and Visual SLAM with a Multi-Camera System on a Micro Aerial Vehicle
Cited by 2 (2 self)

Abstract:
The use of a multi-camera system enables a robot to obtain a surround view, and thus maximize its perceptual awareness of its environment. An accurate calibration is a necessary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system. On our MAV, we set up each camera pair in a stereo configuration. We propose a novel vSLAM-based self-calibration method for a multi-sensor system that includes multiple calibrated stereo cameras and an inertial measurement unit (IMU). Our self-calibration estimates the transform with metric scale between each camera and the IMU. Once the MAV is calibrated, it is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses inertial information to recover the relative motion of the MAV with metric scale. Our constant-time vSLAM implementation with loop closures runs onboard the MAV in real-time. To the best of our knowledge, no published work has demonstrated real-time onboard vSLAM with loop closures. We show experimental results in both indoor and outdoor environments. The code for both the self-calibration and vSLAM is available as a set of ROS packages.
Monocular Visual Odometry and Dense 3D Reconstruction for On-Road Vehicles
Cited by 1 (0 self)

Abstract:
More and more on-road vehicles are equipped with cameras each day. This paper presents a novel method for estimating the relative motion of a vehicle from a sequence of images obtained using a single vehicle-mounted camera. Recently, several researchers in robotics and computer vision have studied the performance of motion estimation algorithms under non-holonomic constraints and planarity. The successful algorithms typically use the smallest number of feature correspondences with respect to the motion model. It has been strongly established that such minimal algorithms are efficient and robust to outliers when used in a hypothesize-and-test framework such as random sample consensus (RANSAC). In this paper, we show that planar 2-point motion estimation can be solved analytically using a single quadratic equation, without the need for iterative techniques such as the Newton-Raphson method used in existing work. Non-iterative methods are more efficient and do not suffer from local minima problems. Although 2-point motion estimation generates visually accurate on-road vehicle trajectories, the motion is not precise enough to perform dense 3D reconstruction due to the non-planarity of roads. Thus we use a 2-point relative motion algorithm for the initial images, followed by 3-point 2D-to-3D camera pose estimation for the subsequent images. Using this hybrid approach, we generate accurate motion estimates for a plane-sweeping algorithm that produces dense depth maps for obstacle detection applications.
Real-Time 6D Stereo Visual Odometry with Non-Overlapping Fields of View
Cited by 1 (1 self)

Abstract:
In this paper, we present a framework for 6D absolute-scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular visual odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via a linear solution by imposing the known static transformation between both sensors. The redundancy in the motion estimates is finally exploited by a statistical fusion to an optimal 6D metric result. The proposed technique is robust to outliers and able to continuously deliver a reasonable measurement of the scale factor. The quality of the framework is demonstrated by a concise evaluation on indoor datasets, including a comparison to accurate ground truth data provided by an external motion tracking system.
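The linear scale recovery described here can be sketched as follows. The rig, motion, and notation are our own synthetic choices; the paper additionally fuses the redundant estimates statistically:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix via Rodrigues' formula."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def recover_scales(u1, u2, R2, Rx, tx):
    """Linear metric-scale recovery for two rigidly coupled cameras.

    Each monocular VO yields a rotation and a unit translation
    direction (u1, u2) with unknown scales s1, s2.  The rigid mount
    (Rx, tx) between the cameras forces
        s2 * u2 = s1 * Rx @ u1 + (I - R2) @ tx,
    three linear equations in the two scales.  (If R2 = I, i.e. pure
    translation, the right-hand side vanishes and the scales become
    unobservable -- hence the need to deliver scale continuously.)"""
    A = np.column_stack([Rx @ u1, -u2])
    b = -(np.eye(3) - R2) @ tx
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s  # [s1, s2]

# synthetic check with a known rig and motion
Rx, tx = rot([0, 1, 0], np.pi / 2), np.array([0.5, 0.0, 0.1])
R1, t1 = rot([0, 0, 1], 0.1), np.array([0.2, 0.05, 1.0])
R2 = Rx @ R1 @ Rx.T                     # camera-2 motion induced by the rig
t2 = Rx @ t1 + (np.eye(3) - R2) @ tx
u1, u2 = t1 / np.linalg.norm(t1), t2 / np.linalg.norm(t2)
s = recover_scales(u1, u2, R2, Rx, tx)
```

The least-squares solve over three equations and two unknowns is what makes the recovery "linear", and its residual offers a natural quality measure for the subsequent statistical fusion.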