Results 1 - 5 of 5
Urban localization with camera and inertial measurement unit
- In IEEE Intelligent Vehicles Symposium, Gold, 2013
Cited by 8 (3 self)
Abstract—Next generation driver assistance systems require precise self-localization. Common approaches using global navigation satellite systems (GNSS) suffer from multipath and shadowing effects, often rendering this solution insufficient. In urban environments this problem becomes even more pronounced. Herein we present a system for six degrees of freedom (DOF) ego localization using a mono camera and an inertial measurement unit (IMU). The camera image is processed to yield a rough position estimate using a previously computed landmark map. Thereafter, IMU measurements are fused with the position estimate for a refined localization update. Moreover, we present the mapping pipeline required for the creation of landmark maps. Finally, we present experiments on real-world data. The accuracy of the system is evaluated by computing two independent ego positions of the same trajectory from two distinct cameras and investigating these estimates for consistency. A mean localization accuracy of 10 cm is achieved on a 10 km sequence in an inner city scenario.
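The fusion step this abstract describes can be pictured as a predict/update loop: the IMU propagates the state between camera fixes, and each rough camera-based position estimate corrects the propagated state. The sketch below is a minimal, hypothetical illustration (a per-axis scalar Kalman filter; the function names and noise parameters are assumptions, not taken from the paper):

```python
# Minimal sketch of camera/IMU fusion as a per-axis Kalman filter.
# All names and noise models are illustrative assumptions, not the
# authors' implementation.

def imu_predict(pos, vel, accel, dt, var, accel_var):
    """Propagate position and velocity with an IMU acceleration
    sample; the position variance grows with the process noise."""
    new_pos = pos + vel * dt + 0.5 * accel * dt ** 2
    new_vel = vel + accel * dt
    new_var = var + accel_var * dt ** 2
    return new_pos, new_vel, new_var

def camera_update(pos, var, cam_pos, cam_var):
    """Correct the predicted position with a rough camera-based
    fix obtained from the landmark map."""
    k = var / (var + cam_var)            # Kalman gain
    fused_pos = pos + k * (cam_pos - pos)
    fused_var = (1.0 - k) * var
    return fused_pos, fused_var
```

A full 6-DOF system would additionally track orientation and use matrix-valued covariances, but the predict/correct structure stays the same.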
City GPS using Stereo Vision
Cited by 5 (3 self)
Abstract—Next generation driver assistance systems require a precise localization. However, global navigation satellite systems (GNSS) often exhibit a paucity of accuracy due to shadowing effects in street-canyon-like scenarios, rendering this solution insufficient for many tasks. Alternatively, 3D laser scanners can be utilized to localize the vehicle within a previously recorded 3D map. These scanners, however, are expensive and bulky, hampering widespread use. Herein we propose to use stereo cameras to localize the ego vehicle within a previously computed visual 3D map. The proposed localization solution is low cost, precise, and runs in real time. The map is computed once and kept fixed thereafter, using cameras as sole sensors without GPS readings. The presented mapping algorithm is largely inspired by current state-of-the-art simultaneous localization and mapping (SLAM) methods. Moreover, the map merely consists of a sparse set of landmark points, keeping the map storage manageably low.
Vision Only Localization
Cited by 4 (0 self)
Abstract—Autonomous and intelligent vehicles will undoubtedly depend on an accurate ego localization solution. Global navigation satellite systems (GNSS) suffer from multipath propagation, rendering this solution insufficient. Herein we present a real-time system for six degrees of freedom (DOF) ego localization that uses only a single monocular camera. The camera image is harnessed to yield an ego pose relative to a previously computed visual map. We describe a process to automatically extract the ingredients of this map from stereoscopic image sequences. These include a mapping trajectory relative to the first pose, global scene signatures, and local landmark descriptors. The localization algorithm then consists of a topological localization step that completely obviates the need for any global positioning sensors like GNSS. A metric refinement step that recovers an accurate metric pose is subsequently applied. Metric localization recovers the ego pose in a factor graph optimization process based on local landmarks. We demonstrate centimeter-level accuracy by a set of experiments in an urban environment. To this end, two localization estimates are computed for two independent cameras mounted on the same vehicle. These two independent trajectories are thereafter compared for consistency. Finally, we present qualitative experiments of an augmented reality (AR) system that depends on the aforementioned localization solution. Several screenshots of the AR system are shown, confirming centimeter-level accuracy and sub-degree angular precision. Index Terms—camera, localization, GPS, landmark, bundle adjustment, nonlinear least squares, SLAM
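The metric refinement step in this abstract recovers the ego pose from correspondences between mapped landmarks and their current observations. As a rough illustration of the underlying least-squares idea, the sketch below uses a closed-form 2D rigid alignment (Kabsch/Procrustes) rather than the paper's factor-graph optimization; all names are hypothetical:

```python
import numpy as np

def align_2d(map_pts, obs_pts):
    """Least-squares rigid alignment: find R, t minimizing
    || map_pts - (R @ obs_pts + t) ||^2 over matched 2D points.
    Illustrative stand-in for landmark-based metric localization;
    not the paper's factor-graph formulation."""
    mu_m = map_pts.mean(axis=0)          # centroid of map landmarks
    mu_o = obs_pts.mean(axis=0)          # centroid of observations
    # Cross-covariance of the centered point sets.
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_m - R @ mu_o
    return R, t
```

A factor-graph solver generalizes this to many poses and robust costs, but the residual being minimized (landmark observation vs. map position) is the same kind of term.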
Joint self-localization and tracking of generic objects in 3d range data
- In Proceedings of the IEEE International Conference on Robotics and Automation, 2013
Cited by 4 (0 self)
Abstract—Both the estimation of the trajectory of a sensor and the detection and tracking of moving objects are essential tasks for autonomous robots. This work proposes a new algorithm that treats both problems jointly. The sole input is a sequence of dense 3D measurements as returned by multi-layer laser scanners or time-of-flight cameras. A major characteristic of the proposed approach is its applicability to any type of environment, since specific object models are not used at any algorithm stage. More specifically, precise localization in non-flat environments is possible, as well as the detection and tracking of, e.g., trams or recumbent bicycles. Moreover, 3D shape estimation of moving objects is inherent to the proposed method. Thorough evaluation is conducted on a vehicular platform with a mounted Velodyne HDL-64E laser scanner.
Laser-Based 3D Mapping and Navigation in Planetary Worksite Environments, 2013
For robotic deployments in planetary worksite environments, map construction and navigation are essential for tasks such as base construction, scientific investigation, and in-situ resource utilization. However, operation in a planetary environment imposes sensing restrictions, as well as challenges due to the terrain. In this thesis, we develop enabling technologies for autonomous mapping and navigation by employing a panning laser rangefinder as our primary sensor on a rover platform. The mapping task is addressed as a three-dimensional Simultaneous Localization and Mapping (3D SLAM) problem. During operation, long-range 360° scans are obtained at infrequent stops. These scans are aligned using a combination of sparse features and odometry measurements in a batch alignment framework, resulting in accurate maps of planetary worksite terrain. For navigation, the panning laser rangefinder is configured to perform short, continuous sweeps while the rover is in motion. An appearance-based approach is taken, where laser intensity images are used to compute Visual Odometry (VO) estimates. We overcome the motion distortion issues by formulating the estimation problem in continuous ...
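The motion distortion mentioned at the end of this abstract arises because each point of a sweeping scan is measured at a different time while the rover moves. A toy 2D de-skewing sketch (constant velocity and yaw rate assumed over one sweep; this simplifies the thesis's continuous-time formulation, and all names are hypothetical):

```python
import numpy as np

def deskew(points, timestamps, vel, omega):
    """Transform each scan point, measured at its own timestamp,
    back into the sensor frame at t = 0, assuming constant planar
    velocity `vel` (2-vector) and yaw rate `omega` (rad/s).
    Illustrative only; the thesis uses continuous-time estimation."""
    out = []
    for p, t in zip(points, timestamps):
        theta = omega * t                      # heading change since t = 0
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s],
                      [s,  c]])                # sensor rotation at time t
        out.append(R @ p + vel * t)            # express point in t = 0 frame
    return np.array(out)
```

With the motion model made continuous in time (rather than piecewise constant per sweep), the same compensation can be folded directly into the estimation problem, which is the route the thesis takes.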