Results 1 - 10 of 511
An Efficient Solution to the Five-Point Relative Pose Problem
, 2004
"... An efficient algorithmic solution to the classical five-point relative pose problem is presented. The problem is to find the possible solutions for relative camera pose between two calibrated views given five corresponding points. The algorithm consists of computing the coefficients of a tenth degre ..."
Abstract - Cited by 484 (13 self)
An efficient algorithmic solution to the classical five-point relative pose problem is presented. The problem is to find the possible solutions for relative camera pose between two calibrated views given five corresponding points. The algorithm consists of computing the coefficients of a tenth-degree polynomial in closed form and subsequently finding its roots. It is the first algorithm well suited for numerical implementation that also corresponds to the inherent complexity of the problem. We investigate the numerical precision of the algorithm. We also study its performance under noise in minimal as well as over-determined cases. The performance is compared to that of the well-known 8- and 7-point methods and a 6-point scheme. The algorithm is used in a robust hypothesize-and-test framework to estimate structure and motion in real time with low delay. The real-time system uses solely visual input and has been demonstrated at major conferences.
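For reference, a minimal sketch of how a five-point solver is typically used inside a hypothesize-and-test loop, here via OpenCV's essential-matrix routines (which wrap a five-point solver and RANSAC) rather than the authors' own implementation; function and variable names are illustrative.

```python
# Sketch: relative pose between two calibrated views from point matches,
# using OpenCV's five-point essential-matrix estimator inside RANSAC.
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """pts1, pts2 : (N, 2) matched pixel coordinates, N >= 5.
    K          : (3, 3) intrinsic matrix shared by both calibrated views."""
    # Hypothesize-and-test: five-point hypotheses scored by epipolar error.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Resolve the four (R, t) decompositions of E by the cheirality check.
    _, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers.ravel().astype(bool)
```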
Dynamically Reparameterized Light Fields
, 1999
"... An exciting new area in computer graphics is the synthesis of novel images with photographic effect from an initial database of reference images. This is the primary theme of imagebased rendering algorithms. This research extends the light field and lumigraph image-based rendering methods and greatl ..."
Abstract - Cited by 187 (9 self)
An exciting new area in computer graphics is the synthesis of novel images with photographic effect from an initial database of reference images. This is the primary theme of image-based rendering algorithms. This research extends the light field and lumigraph image-based rendering methods and greatly broadens their utility, especially in scenes with much depth variation. First, we have added the ability to vary the apparent focus within a light field using intuitive camera-like controls such as a variable aperture and focus ring. As with lumigraphs, we allow for more general and flexible focal surfaces than a typical focal plane. However, this parameterization works independently of scene geometry; we do not need to recover actual or approximate geometry of the scene for focusing. In addition, we present a method for using multiple focal surfaces in a single image rendering process.
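A minimal sketch of the refocusing idea in its simplest form: shift-and-add synthetic-aperture refocusing onto a fronto-parallel focal plane, which is only a special case of the variable focal surfaces described above. The regular camera-grid layout and the single disparity parameter are assumptions of this sketch, not the paper's parameterization.

```python
# Sketch: shift-and-add refocusing of a light field captured on a U x V camera grid.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, disparity):
    """light_field : (U, V, H, W) or (U, V, H, W, 3) array of views.
    disparity   : pixel shift per unit camera offset; controls the focal depth."""
    U, V = light_field.shape[:2]
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy, dx = disparity * (u - cu), disparity * (v - cv)
            # Trailing zeros leave colour channels unshifted for RGB input.
            acc += nd_shift(light_field[u, v].astype(np.float64),
                            (dy, dx) + (0,) * (acc.ndim - 2), order=1)
    return acc / (U * V)     # averaging the shifted views simulates a large aperture
```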
Image-Based Reconstruction of Spatial Appearance and Geometric Detail
- ACM Transactions on Graphics
, 2003
"... ÓÙÖ Ñ��×ÙÖ� � �Ê�� × � ��� � ÕÙ�Ð�ØÝ ÑÓ��Ð Ó � � Ö��Ð Ó�� � Ø �Ò � � ��Ò�Ö�Ø� � Û�Ø � Ö�Ð�Ø�Ú�ÐÝ ..."
Abstract - Cited by 145 (24 self)
... our measured BRDFs, a high quality model of a real object can be generated with relatively ...
Extrinsic calibration of a camera and laser range finder
- In IEEE International Conference on Intelligent Robots and Systems (IROS)
, 2004
"... Abstract — We describe theoretical and experimental results for the extrinsic calibration of sensor platform consisting of a camera and a 2D laser range finder. The calibration is based on observing a planar checkerboard pattern and solving for constraints between the “views ” of a planar checkerboa ..."
Abstract - Cited by 98 (0 self)
Abstract — We describe theoretical and experimental results for the extrinsic calibration of a sensor platform consisting of a camera and a 2D laser range finder. The calibration is based on observing a planar checkerboard calibration pattern and solving for the constraints between its “views” from the camera and the laser range finder. We give a direct solution that minimizes an algebraic error from this constraint, and a subsequent nonlinear refinement minimizes a re-projection error. To our knowledge, this is the first published calibration tool for this problem. Additionally, we show how this constraint can reduce the variance in estimating intrinsic camera parameters.
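A minimal sketch of the refinement stage, under the assumptions that the checkerboard plane in the camera frame (unit normal n, distance d) is already known per pose from camera calibration, that the 2D laser returns on the board have been lifted to 3D points in the laser frame, and that a rough initial guess stands in for the paper's closed-form algebraic solution. The residual minimized here is the point-to-plane distance, not the paper's exact re-projection error.

```python
# Sketch: refine the camera-laser rigid transform (R, t) so that laser points
# on the board satisfy the plane constraint n . (R p + t) = d for every pose.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, planes, laser_pts):
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for (n, d), pts in zip(planes, laser_pts):
        res.append(pts @ R.T @ n + n @ t - d)   # signed point-to-plane distances
    return np.concatenate(res)

def calibrate_camera_laser(planes, laser_pts, x0=np.zeros(6)):
    sol = least_squares(residuals, x0, args=(planes, laser_pts))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```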
Image-Based Reconstruction of Spatially Varying Materials
- In Proceedings of the 12th Eurographics Workshop on Rendering
, 2001
"... . The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properti ..."
Abstract - Cited by 90 (13 self)
The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties and the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs, leading to a truly spatially varying BRDF representation. A high quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.
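A minimal sketch of the basis-projection step described above: once a small set of basis BRDFs has been fitted to the clustered materials, the reflectance samples of each surface point are expressed as a non-negative combination of those basis BRDFs. Shapes and names below are illustrative assumptions, not the authors' data layout.

```python
# Sketch: per-point blending weights over recovered basis BRDFs.
import numpy as np
from scipy.optimize import nnls

def spatially_varying_weights(basis_values, samples):
    """basis_values : (S, B) basis BRDFs evaluated at this point's S sampled
                      view/light direction pairs.
    samples      : (S,) measured reflectance at the same directions.
    Returns B non-negative weights forming the point's spatially varying BRDF."""
    w, _ = nnls(basis_values, samples)
    return w
```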
Relative pose calibration between visual and inertial sensors
- International Journal of Robotics Research, Special Issue: 2nd Workshop on Integration of Vision and Inertial Sensors, 26:561–575, 2007
, 2009
"... Abstract — This paper proposes an approach to calibrate off-the-shelf cameras and inertial sensors to have a useful integrated system to be used in static and dynamic situations. The rotation between the camera and the inertial sensor can be estimated, when calibrating the camera, by having both sen ..."
Abstract - Cited by 72 (15 self)
Abstract — This paper proposes an approach to calibrating off-the-shelf cameras and inertial sensors so that they form a useful integrated system for static and dynamic situations. The rotation between the camera and the inertial sensor can be estimated, when calibrating the camera, by having both sensors observe the vertical direction, using a vertical chessboard target and gravity. The translation between the two can be estimated using a simple passive turntable and static images, provided that the system can be adjusted to turn about the inertial sensor null point in several poses. Simulation and real data results are presented to show the validity and simple requirements of the proposed method. Index Terms — computer vision, inertial sensors, sensor fusion, calibration.
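A minimal sketch of the rotation part: both sensors observe the vertical direction (the chessboard's vertical in the camera, gravity in the accelerometer), and the rotation aligning the two sets of unit vectors can be recovered in closed form as an instance of Wahba's problem. Paired observations per pose are assumed; the turntable-based translation step is not shown.

```python
# Sketch: camera-to-IMU rotation from paired observations of the vertical direction.
import numpy as np

def camera_to_imu_rotation(v_cam, v_imu):
    """v_cam, v_imu : (N, 3) unit vertical directions in each sensor frame,
    N >= 2 with non-parallel directions. Returns R with v_imu ~ R @ v_cam."""
    H = v_cam.T @ v_imu                       # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # enforce det = +1
    return Vt.T @ D @ U.T
```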
Camera network calibration from dynamic silhouettes
- In CVPR
, 2004
"... In this paper we present an automatic method for calibrating a network of cameras from only silhouettes. This is particularly useful for shape-from-silhouette or visual-hull systems, as no additional data is needed for calibration. The key novel contribution of this work is an algorithm to robustly ..."
Abstract - Cited by 63 (6 self)
In this paper we present an automatic method for calibrating a network of cameras from only silhouettes. This is particularly useful for shape-from-silhouette or visual-hull systems, as no additional data is needed for calibration. The key novel contribution of this work is an algorithm to robustly compute the epipolar geometry from dynamic silhouettes. We use the fundamental matrices computed by this method to determine the projective reconstruction of the complete camera configuration. This is refined into a metric reconstruction using self-calibration. We validate our approach by calibrating a four-camera visual-hull system from archive data where the dynamic object is a moving person. Once the calibration parameters have been computed, we use a visual-hull algorithm to reconstruct the dynamic object from its silhouettes.
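A minimal sketch of the projective-reconstruction step that follows the silhouette-based epipolar geometry: from a fundamental matrix F between two cameras, a canonical projective camera pair is P = [I | 0], P' = [[e']_x F | e'] (the standard construction from Hartley and Zisserman). The silhouette tangency search and the self-calibration upgrade are not reproduced here.

```python
# Sketch: canonical projective camera pair from a fundamental matrix.
import numpy as np

def camera_pair_from_F(F):
    _, _, Vt = np.linalg.svd(F.T)          # left null vector of F: F^T e' = 0
    e2 = Vt[-1]
    e2_x = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])  # cross-product (skew) matrix [e']_x
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2_x @ F, e2.reshape(3, 1)])
    return P1, P2
```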
Single view point omnidirectional camera calibration from planar grids
- In ICRA
, 2007
"... Abstract — This paper presents a flexible approach for calibrating omnidirectional single viewpoint sensors from planar grids. Current approaches in the field are either based on theoretical properties and do not take into account important factors such as misalignment or camera-lens distortion or o ..."
Abstract - Cited by 59 (4 self)
Abstract — This paper presents a flexible approach for calibrating omnidirectional single viewpoint sensors from planar grids. Current approaches in the field either are based on theoretical properties and do not take into account important factors such as misalignment or camera-lens distortion, or are over-parametrised, which leads to minimisation problems that are difficult to solve. Recent techniques based on polynomial approximations lead to impractical calibration methods. Our model is based on an exact theoretical projection function to which we add well-identified parameters to model real-world errors. This leads to a full methodology, from the initialisation of the intrinsic parameters to the general calibration. We also discuss the validity of the approach for fish-eye and spherical models. An implementation of the method is available as open-source software on the author’s Web page. We validate the approach with the calibration of parabolic, hyperbolic, wide-angle and spherical sensors.
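A minimal sketch of a unified single-viewpoint projection model of the kind the paper builds on: a point is projected through the unit sphere with a mirror parameter xi, then passed through conventional radial/tangential distortion and a generalised pinhole. Parameter names are illustrative, and the sketch omits the paper's initialisation and calibration machinery.

```python
# Sketch: unified sphere-based projection with distortion.
import numpy as np

def project_omni(X, xi, k1, k2, p1, p2, gamma1, gamma2, u0, v0):
    """X : (N, 3) points in the sensor frame; returns (N, 2) pixel coordinates."""
    Xs = X / np.linalg.norm(X, axis=1, keepdims=True)   # project onto the unit sphere
    x = Xs[:, 0] / (Xs[:, 2] + xi)                      # sphere-to-plane projection
    y = Xs[:, 1] / (Xs[:, 2] + xi)
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2                 # radial distortion
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([gamma1 * xd + u0, gamma2 * yd + v0], axis=1)
```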
A Self Correcting Projector
- In IEEE Computer Vision and Pattern Recognition (CVPR)
, 2001
"... We describe a calibration and rendering technique for a projector that can render rectangular images under keystoned position. The projector utilizes a rigidly attached camera to form a stereo pair. We describe a very easy to use technique for calibration of the projector-camera pair using only blac ..."
Abstract - Cited by 59 (7 self)
We describe a calibration and rendering technique for a projector that can render rectangular images even when it is in a keystoned position. The projector utilizes a rigidly attached camera to form a stereo pair. We describe a very easy-to-use technique for calibrating the projector-camera pair using only black planar surfaces. We present an efficient rendering method to pre-warp images so that they appear correctly on the screen, and show experimental results.
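A minimal sketch of the pre-warp step: once calibration tells us where the corners of the desired rectangular image should land in the projector framebuffer, the source image is warped with the corresponding homography so that it appears rectangular on the screen. The corner coordinates are placeholders supplied by the (not shown) calibration.

```python
# Sketch: homography pre-warp so a keystoned projector shows a rectangular image.
import cv2
import numpy as np

def prewarp(image, target_corners):
    """target_corners : (4, 2) framebuffer positions for the image's top-left,
    top-right, bottom-right and bottom-left corners, from calibration."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(src, np.float32(target_corners))
    return cv2.warpPerspective(image, H, (w, h))
```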
A new optical tracking system for virtual and augmented reality applications
- In Proceedings of the IEEE Instrumentation and Measurement Technical Conference
, 2001
"... Abstract – A new stereo vision tracker setup for virtual and augmented reality applications is presented in this paper. Performance, robustness and accuracy of the system are achieved under real-time constraints. The method is based on blobs extraction, two-dimensional prediction, the epipolar const ..."
Abstract - Cited by 55 (6 self)
Abstract – A new stereo vision tracker setup for virtual and augmented reality applications is presented in this paper. Performance, robustness and accuracy of the system are achieved under real-time constraints. The method is based on blob extraction, two-dimensional prediction, the epipolar constraint and three-dimensional reconstruction. Experimental results using a stereo rig setup (equipped with IR capabilities) and retroreflective targets are presented to demonstrate the capabilities of our optical tracking system. The system tracks up to 25 independent targets at 30 Hz.
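A minimal sketch of the three-dimensional reconstruction step: matched blob centres from the two cameras of the rig are triangulated with the known projection matrices. Blob extraction, prediction and the epipolar matching test are assumed to have produced the correspondences.

```python
# Sketch: triangulate matched retroreflective targets from a calibrated stereo rig.
import cv2
import numpy as np

def triangulate_targets(P1, P2, blobs1, blobs2):
    """P1, P2        : (3, 4) projection matrices of the stereo rig.
    blobs1, blobs2 : (N, 2) matched blob centres in each image."""
    X_h = cv2.triangulatePoints(P1, P2, blobs1.T.astype(np.float64),
                                blobs2.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T     # (N, 3) Euclidean target positions
```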