Results 1 - 10 of 37
Visual odometry for ground vehicle applications
- Journal of Field Robotics, 2006
"... We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real-time with low delay and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched bet ..."
Abstract
-
Cited by 155 (7 self)
- Add to MetaCart
We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real-time with low delay and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene nor the motion is necessary. The visual estimates can also be used in conjunction with information from other sources such as GPS, inertia sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results obtained with a stereo-head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real-time purely from images over previously unseen distances (600 meters) and periods of time. 1.
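The "geometric hypothesize-and-test architecture" the abstract refers to is a RANSAC-style loop: fit a motion hypothesis to a minimal sample of feature matches, score it by counting inliers, and keep the best hypothesis. Below is a minimal sketch for the stereo case, where matched features have already been triangulated into paired 3D points; the function names, the 3-point sample size, and the inlier threshold are illustrative assumptions, not the paper's implementation.

import numpy as np

def rigid_transform(P, Q):
    # least-squares rigid motion (Kabsch): find R, t with R @ p + t ~= q
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_motion(P, Q, iters=200, tol=0.05):
    # P, Q: N x 3 matched 3D points from consecutive stereo frames (assumed)
    rng = np.random.default_rng(0)
    best_R, best_t = np.eye(3), np.zeros(3)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)          # minimal sample
        R, t = rigid_transform(P[idx], Q[idx])                   # hypothesize
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < tol  # test
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    if best_inliers.sum() >= 3:   # refit on all inliers of the winning hypothesis
        best_R, best_t = rigid_transform(P[best_inliers], Q[best_inliers])
    return best_R, best_t, best_inliers

A production system would typically score hypotheses by image reprojection error and polish the winner with nonlinear refinement, but the hypothesize-and-test skeleton is the same.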
Autonomous rover navigation on unknown terrains: demonstrations in the space museum "Cité de l'Espace" at Toulouse
- In 7th International Symp. on Experimental Robotics, 1997
"... Autonomous long range navigation in partially known planetary-like terrain is on open challenge for robotics. Nav-igating several hundreds of meters without any human interven-tion requires the robot to be able to build various representations of its environment, to plan and execute trajectories acc ..."
Abstract
-
Cited by 69 (3 self)
- Add to MetaCart
(Show Context)
Autonomous long range navigation in partially known planetary-like terrain is on open challenge for robotics. Nav-igating several hundreds of meters without any human interven-tion requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently being integrated on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to inte-grate various instances of the perception and decision function-alities, and on the difficulties raised by this integration. 1
Vision-based SLAM: stereo and monocular approaches
- Int. J. Comput. Vision, 2006
"... Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires to concurrently solve the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides da ..."
Abstract
-
Cited by 54 (2 self)
- Add to MetaCart
Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires to concurrently solve the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp. 1
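The abstract only says the monocular approach introduces "a new way to initialize landmarks" without describing it. A common way to represent the unknown depth of a monocular feature is the inverse-depth parameterization, sketched below; this is a generic technique under assumed names (K: camera intrinsics, R_wc: camera-to-world rotation), not necessarily the authors' scheme.

import numpy as np

def pixel_to_bearing(u, v, K, R_wc):
    # back-project a pixel to a unit viewing ray expressed in the world frame
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R_wc @ ray_cam
    return ray_world / np.linalg.norm(ray_world)

def init_landmark(cam_pos, u, v, K, R_wc, rho0=0.1):
    # inverse-depth landmark: (anchor position, unit bearing, inverse depth);
    # rho0 = 0.1 encodes a weak 10 m depth prior, to be refined by the filter
    return np.asarray(cam_pos, float), pixel_to_bearing(u, v, K, R_wc), rho0

def landmark_xyz(anchor, bearing, rho):
    # Euclidean position, well defined once the inverse depth has converged
    return anchor + bearing / rho

The advantage over initializing a 3D point directly is that a bearing plus inverse depth stays well behaved even when the feature is effectively at infinity, where a Euclidean depth estimate would diverge.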
Stereo-based ego-motion estimation using pixel tracking and iterative closest point
- In IEEE International Conference on Computer Vision Systems, 2006
"... In this paper, we present a stereovision algorithm for real-time 6DoF ego-motion estimation, which integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. The proposed method addresses a basic problem of standard ICP, i.e. its inability to pe ..."
Abstract
-
Cited by 27 (0 self)
- Add to MetaCart
(Show Context)
In this paper, we present a stereovision algorithm for real-time 6DoF ego-motion estimation, which integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. The proposed method addresses a basic problem of standard ICP, i.e. its inability to perform the segmentation of data points and to deal with large displacements. Neither a-priori knowledge of the motion nor inputs from other sensors are required, while the only assumption is that the scene always contains visually distinctive features which can be tracked over subsequent stereo pairs. This generates what is usually called Visual Odometry. The paper details the various steps of the algorithm and presents the results of experimental tests performed with an allterrain mobile robot, proving the method to be as accurate as effective for autonomous navigation purposes. 1.
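For reference, the core of standard ICP alternates a data-association step (nearest-neighbour matching) with a motion-fit step (least-squares rigid alignment). The sketch below is the plain point-to-point variant under assumed names; the paper's additions, tracked image features for segmentation and robustness to large displacements, are deliberately not reproduced here.

import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    # least-squares rigid fit: R, t such that R @ p + t ~= q for paired rows
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    # align source cloud P (N x 3) to reference cloud Q (M x 3)
    tree = cKDTree(Q)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, nn = tree.query(P)                      # associate: closest points
        R, t = kabsch(P, Q[nn])                    # fit: rigid motion update
        P = P @ R.T + t                            # move the source cloud
        R_tot, t_tot = R @ R_tot, R @ t_tot + t    # accumulate the pose
    return R_tot, t_tot

Without a good initial guess, the nearest-neighbour association converges to the wrong minimum under large displacements; that is precisely the failure mode the paper's pixel-tracking front end is meant to address.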
Vision based modeling and localization for planetary exploration rovers
- 55th International Astronautical Congress, 2004
"... Exploration of large unknown planetary environments will rely on rovers that can autonomously cover distances of kilometres and maintain precise information about their location with respect to local features. During such traversals, the rovers will create photo-realistic three dimensional (3D) mode ..."
Abstract
-
Cited by 23 (1 self)
- Add to MetaCart
(Show Context)
Exploration of large unknown planetary environments will rely on rovers that can autonomously cover distances of kilometres and maintain precise information about their location with respect to local features. During such traversals, the rovers will create photo-realistic three dimensional (3D) models of visited sites for autonomous operations on-site and mission planning on Earth. Currently rover position is estimated using wheel odometry, which is sufficient for short traversals but as error accumulates quickly, it is unsuitable for long distances. At MD Robotics, we are working on imaging technologies for future planetary rover missions. Two complementary technologies are currently investigated: a stereo based vision system and a scanning time-of-flight LIDAR system. Both imaging systems have been installed on board of two experimental rovers and tested in laboratory and outdoor environments. With stereo cameras, the rover can create photo-realistic 3D model as well as provide visual odometry that is more accurate than the rover dead reckoning. With the LIDAR, the rover can match 3D scans to estimate the relative location to improve the wheel and visual odometry. 1
3D simultaneous localization and modeling from stereo vision
- Proceedings of the 2004 IEEE International Conference on Robotics & Automation
"... Abstract- This paper presents a new algorithm for determining the trajectory of a mobile robot and, simultaneously, creating a detailed volumetric 3D model of its workspace. The algorithm exclusively utilizes information provided by a single stereo vision system, avoiding thus the use both of more c ..."
Abstract
-
Cited by 20 (0 self)
- Add to MetaCart
(Show Context)
Abstract- This paper presents a new algorithm for determining the trajectory of a mobile robot and, simultaneously, creating a detailed volumetric 3D model of its workspace. The algorithm exclusively utilizes information provided by a single stereo vision system, avoiding thus the use both of more costly laser systems and error-prone odometry. Six-degrees-of-freedom egomotion is directly estimated from images acquired at relatively close positions along the robot’s path. Thus, the algorithm can deal with both planar and uneven terrain in a natural way, without requiring extra processing stages or additional orientation sensors. The 3D model is based on an octree that encapsulates clouds of 3D points obtained through stereo vision, which are integrated after each egomotion stage. Every point has three spatial coordinates referred to a single frame, as well as true-color components. The spatial location of those points is continuously improved as new images are acquired and integrated into the model. Index Terms- 3D SLAM. Mobile robots. Stereo vision. I.
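A minimal sketch of the octree idea from the abstract: space is subdivided recursively, and each leaf cell fuses the colored 3D points that fall into it with running averages, so repeated observations continuously refine a point's estimated location. The class layout and the 5 cm minimum cell size below are illustrative assumptions, not the paper's data structure.

import numpy as np

class OctreeNode:
    # cubic cell; leaf cells fuse colored points by running averages, so
    # repeated observations of a surface patch refine its estimated position
    def __init__(self, center, half):
        self.center = np.asarray(center, dtype=float)
        self.half = half                # half the cell's edge length
        self.children = {}              # populated octants once subdivided
        self.n = 0
        self.mean_xyz = np.zeros(3)
        self.mean_rgb = np.zeros(3)

    def insert(self, xyz, rgb, min_half=0.05):
        xyz = np.asarray(xyz, dtype=float)
        if self.half <= min_half:       # leaf: update running averages
            self.n += 1
            self.mean_xyz += (xyz - self.mean_xyz) / self.n
            self.mean_rgb += (np.asarray(rgb, float) - self.mean_rgb) / self.n
            return
        octant = tuple(xyz >= self.center)            # choose one of 8 sub-cells
        if octant not in self.children:
            offset = np.where(octant, 0.5, -0.5) * self.half
            self.children[octant] = OctreeNode(self.center + offset, self.half / 2)
        self.children[octant].insert(xyz, rgb, min_half)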
Visual motion estimation and terrain modeling for planetary rovers
- International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), 2005
"... The next round of planetary missions will require increased autonomy to enable exploration rovers to travel great distances with limited aid from a human operator. For autonomous operations at this scale, localization and terrain modeling become key aspects of onboard rover functionality. Previous M ..."
Abstract
-
Cited by 17 (4 self)
- Add to MetaCart
(Show Context)
The next round of planetary missions will require increased autonomy to enable exploration rovers to travel great distances with limited aid from a human operator. For autonomous operations at this scale, localization and terrain modeling become key aspects of onboard rover functionality. Previous Mars rover missions have relied on odometric sensors such as wheel encoders and inertial measurement units/gyros for on-board motion estimation. While these offer a simple solution, they are prone to wheel-slip in loose soil and drift of biases, respectively. Alternatively, the use of visual landmarks observed by stereo cameras to localize a rover offers a more robust solution but at the cost of increased complexity. Additionally rovers will need to create photo-realistic three-dimensional models of visited sites for autonomous operations on-site and mission planning on Earth. 1.
Digital Elevation Map Building from Low Altitude Stereo Imagery
- Proc. of the 9th Int. Symposium on Intelligent Robotic Systems, 2001
"... ..."
SLAM with panoramic vision
- Journal of Field Robotics, 2007
"... This article presents an approach to SLAM that takes advantage of panoramic images. Landmarks are interest points detected and matched in the images and mapped according to a bearings-only SLAM approach. As they are acquired and processed, the panoramic images are also indexed and stored into a data ..."
Abstract
-
Cited by 14 (0 self)
- Add to MetaCart
(Show Context)
This article presents an approach to SLAM that takes advantage of panoramic images. Landmarks are interest points detected and matched in the images and mapped according to a bearings-only SLAM approach. As they are acquired and processed, the panoramic images are also indexed and stored into a database. A database query procedure, independent of the robot and landmark position estimates, is able to detect loop closures by retrieving memorized images that are close to the current robot position. The bearings-only estimation process is described, and results over a trajectory of a few hundreds of meters are presented and discussed. 1
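In bearings-only SLAM, each panoramic observation gives a direction to a landmark but no range, so a landmark position only becomes observable from bearings gathered at several robot poses. A minimal batch triangulation sketch follows (all names hypothetical; the paper's estimator is recursive, not this batch solve):

import numpy as np

def triangulate_bearings(origins, bearings):
    # least-squares point closest to a set of rays x = origin + s * bearing
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in zip(origins, bearings):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto the ray's normal space
        A += M
        b += M @ np.asarray(o, float)
    return np.linalg.solve(A, b)         # minimizes summed squared ray distance

Two well-separated bearings already give a usable initial estimate; nearly parallel ones make A close to singular, which is the classic low-parallax difficulty of bearings-only initialization.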
Visual navigation in natural environments: from range and color data to a landmark-based model
- Autonomous Robots, 2002
"... Abstract. This paper concerns the exploration of a natural environment by a mobile robot equipped with both a video color camera and a stereo-vision system. We focus on the interest of such a multi-sensory system to deal with the navigation of a robot in an a priori unknown environment, including (1 ..."
Abstract
-
Cited by 12 (6 self)
- Add to MetaCart
Abstract. This paper concerns the exploration of a natural environment by a mobile robot equipped with both a video color camera and a stereo-vision system. We focus on the interest of such a multi-sensory system to deal with the navigation of a robot in an a priori unknown environment, including (1) the incremental construction of a landmark-based model, and the use of these landmarks for (2) the 3-D localization of the mobile robot and for (3) a sensor-based navigation mode. For robot localization, a slow process and a fast one are simultaneously executed during the robot motions. In the modeling process (currently 0.1 Hz), the global landmark-based model is incrementally built and the robot situation can be estimated from discriminant landmarks selected amongst the detected objects in the range data. In the tracking process (currently 4 Hz), selected landmarks are tracked in the visual data; the tracking results are used to simplify the matching between landmarks in the modeling process. Finally, a sensor-based visual navigation mode, based on the same landmark selection and tracking, is also presented; in order to navigate during a long robot motion, different landmarks (targets) can be selected as a sequence of sub-goals that the robot must successively reach. Keywords: vision, robotics, outdoor model building, target tracking, multi-sensory fusion, visual navigation
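The abstract's two concurrent processes, slow modeling at about 0.1 Hz and fast tracking at about 4 Hz over a shared landmark model, amount to a dual-rate architecture. A minimal sketch with placeholder perception functions (all names, rates, and data structures here are illustrative assumptions):

import threading, time

landmarks = []                   # shared landmark-based model
lock = threading.Lock()

def detect_landmarks():
    # placeholder for segmenting stereo range data into candidate landmarks
    return ["rock"]

def track(selected):
    # placeholder for tracking the selected landmarks in the video stream
    pass

def modeling_loop(steps=3):      # slow process, ~0.1 Hz
    for _ in range(steps):
        found = detect_landmarks()
        with lock:
            landmarks.extend(found)      # grow the global model
        time.sleep(10.0)

def tracking_loop(steps=120):    # fast process, ~4 Hz
    for _ in range(steps):
        with lock:
            selected = list(landmarks)   # snapshot of the current model
        track(selected)
        time.sleep(0.25)

threading.Thread(target=modeling_loop, daemon=True).start()
tracking_loop()

The lock-protected snapshot is what lets the fast tracker run without waiting on the slow modeler, mirroring the paper's use of tracking results to simplify matching in the modeling process.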