Results 1 - 10 of 34
Visual odometry and mapping for autonomous flight using an RGB-D camera
In Proc. of the Intl. Sym. on Robotics Research, 2011. Cited by 77 (4 self).
Abstract: RGB-D cameras provide both a color image and per-pixel depth estimates. The richness of their data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. We evaluate the effectiveness of our system for stabilizing and controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
Vision-Based Autonomous Mapping and Exploration Using a Quadrotor MAV
Cited by 26 (3 self).
Abstract: In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration under-performs. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by VFH+ and frontier-based exploration in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop-closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
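The frontier-based exploration mentioned in this abstract reduces, at its core, to finding free cells that border unknown space in the occupancy map. A minimal 2D sketch of that test, where the cell codes (0 = free, 1 = occupied, -1 = unknown) are illustrative assumptions rather than the authors' convention:

```python
def find_frontiers(grid):
    """Return frontier cells of a 2D occupancy grid: free cells (0)
    with at least one unknown (-1) 4-neighbor. These are the cells
    an exploration planner would steer the robot toward."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break                # one unknown neighbor suffices
    return frontiers
```

In sparse environments with few unknown-free boundaries this set can be empty, which is the situation where the abstract's Bug-style wall-following substitute becomes useful.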
Image retrieval for image-based localization revisited
In IEEE Conference on Computer Vision and Pattern Recognition, 2012. Cited by 12 (2 self).
Abstract: Image-based localization is the task of determining the exact location from which a query photo was taken. In this paper, we consider image-based localization relative to a 3D point cloud of a scene obtained from Structure-from-Motion, which allows an accurate estimate of the full camera pose from correspondences between 2D features and 3D points. To quickly establish the required 2D-to-3D matches, Irschara et al. use image retrieval methods. We therefore analyze the algorithmic factors that cause the gap in registration performance. We show that selective voting schemes enable retrieval methods to outperform state-of-the-art direct matching methods, and we explore how both selective voting and correspondence search can be accelerated using a Hamming embedding. Selective voting: The main cause of the performance gap is the incorrect votes cast by image retrieval-based approaches. Two selective voting schemes can be used to avoid incorrect votes. Correspondence voting finds the two nearest neighbors among all descriptors of 3D points sharing the same visual word, and votes for the image containing the nearest neighbor if the match passes the SIFT ratio test [4]. The camera pose is then estimated from correspondences found with pairwise image matching. Since correspondence voting requires that SIFT descriptors be kept in memory at all times, a selective voting scheme using a Hamming embedding [2] can be used to reduce memory requirements. The resulting Hamming voting casts a vote for an image containing a point only if the Hamming distance between the binary embeddings of the query feature and the point is below a certain threshold. Results: We compare selective voting-based localization to classical image retrieval-based methods and a state-of-the-art direct matching approach, and evaluate Hamming voting with different binary descriptor sizes and vocabulary sizes.
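The Hamming voting rule described in this abstract can be sketched in a few lines. The 64-bit integer embeddings, the threshold value, and the `db_points` layout below are illustrative assumptions, not the paper's implementation:

```python
def hamming_votes(query_embeddings, db_points, tau=24):
    """Selective voting via Hamming embedding: a query feature votes
    for a database image only when the Hamming distance between its
    binary embedding and a 3D point's embedding is below tau.
    query_embeddings: binary embeddings (ints) of query features that
    quantized to the same visual word as the points in db_points.
    db_points: list of (embedding:int, image_ids:set) pairs."""
    votes = {}
    for q in query_embeddings:
        for emb, image_ids in db_points:
            if bin(q ^ emb).count("1") < tau:   # popcount of XOR = Hamming distance
                for img in image_ids:
                    votes[img] = votes.get(img, 0) + 1
    return votes
```

Compared with correspondence voting, only the compact binary codes need to stay in memory, which is exactly the memory saving the abstract attributes to Hamming voting.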
Vision-based state estimation for autonomous rotorcraft MAVs in complex environments
In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2013. Cited by 7 (1 self).
Abstract: In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcraft frequently operate in hover or near-hover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with those of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.
Real-time velocity estimation based on optical flow and disparity matching
In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012. Cited by 7 (2 self).
Abstract: A high update rate of metric velocity values is crucial for robust operation of the navigation control loops of mobile robots such as micro aerial vehicles (MAVs). An efficient way of obtaining the metric velocity of robots without an external reference is image-based optical flow measurements, scaled with the distance between the camera and the observed scene. However, since optical flow and stereo vision are computationally intensive tasks, metric optical flow calculations on embedded systems are typically only possible at limited frame rates. In this work, we therefore present an FPGA-based platform capable of calculating real-time metric optical flow at 127 frames per second and 376x240 resolution. Radial undistortion, image rectification, disparity estimation and optical flow calculation are performed on a single FPGA without the need for external memory. The platform is well suited for mobile robots or MAVs due to its low weight and low power consumption.
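The scaling step this abstract describes, converting pixel-domain optical flow to a metric velocity using the scene distance from disparity, amounts to a one-line pinhole-camera relation. A sketch under the simplifying assumption of pure translation parallel to the image plane (the FPGA pipeline itself is far more involved):

```python
def metric_velocity(flow_px, dt_s, depth_m, focal_px):
    """Metric velocity from optical flow and depth:
    v [m/s] = (pixel displacement / frame interval) * depth / focal length.
    flow_px:  feature displacement between frames, in pixels
    dt_s:     time between frames, in seconds
    depth_m:  distance to the observed scene (e.g. from stereo disparity)
    focal_px: camera focal length, in pixels"""
    return (flow_px / dt_s) * depth_m / focal_px
```

For example, 10 px of flow over 0.1 s at 2 m depth with a 400 px focal length corresponds to 0.5 m/s; the depth term is why a disparity estimate is needed alongside the flow.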
Integrating sensor and motion models to localize an autonomous AR.Drone
International Journal of Micro Air Vehicles, 2011.
State Estimation for Highly Dynamic Flying Systems Using Key Frame Odometry with Varying Time Delays
Cited by 4 (1 self).
Abstract: System state estimation is an essential part of robot navigation and control. A combination of Inertial Navigation Systems (INS) and further exteroceptive sensors such as cameras or laser scanners is widely used. On small robotic systems with limitations in payload, power consumption and computational resources, the processing of exteroceptive sensor data often introduces time delays which have to be considered in the sensor data fusion process. These time delays are especially critical in the estimation of system velocity. In this paper, we present a state estimation framework fusing an INS with time-delayed, relative exteroceptive sensor measurements. We evaluate its performance for a highly dynamic flight-system trajectory including a flip. The evolution of velocity and position errors for measurement frequencies varying from 15 Hz to 1 Hz and time delays of up to 1 s is shown in Monte Carlo simulations. The filter algorithm with key-frame-based odometry permits optimal, locally drift-free navigation while still being computationally tractable on small onboard computers. Finally, we present the results of the algorithm applied to a real quadrotor flying from inside a house out through a window.
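The core difficulty this abstract addresses, fusing a measurement that refers to a past state, can be illustrated with a toy 1D estimator that buffers its inputs, rolls back to the measurement's timestamp, fuses, and replays. This is a simplified stand-in for the paper's filter: a fixed scalar gain replaces the Kalman gain, and no covariance is tracked.

```python
from collections import deque

class DelayedFusion1D:
    """Toy 1D position estimator: high-rate velocity inputs are
    integrated immediately; a delayed position measurement is applied
    at its own timestamp by undoing and replaying later inputs."""

    def __init__(self, buffer_len=200):
        self.x = 0.0
        self.hist = deque(maxlen=buffer_len)  # (timestamp, velocity, dt)

    def predict(self, t, u, dt):
        self.x += u * dt                  # dead-reckoning propagation
        self.hist.append((t, u, dt))

    def correct_delayed(self, t_meas, z, gain=0.5):
        replay = [h for h in self.hist if h[0] > t_meas]
        for (_, u, dt) in replay:
            self.x -= u * dt              # roll back past t_meas
        self.x += gain * (z - self.x)     # fuse at the measurement's stamp
        for (_, u, dt) in replay:
            self.x += u * dt              # re-apply buffered inputs
```

For a linear model, undo-fuse-replay gives the same state as re-running the whole filter from the measurement time, which is what makes delayed relative measurements tractable on a small onboard computer.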
Framework for Autonomous Onboard Navigation with the AR.Drone
Cited by 3 (1 self).
Abstract: We present a framework for autonomous flying using the low-cost AR.Drone quadrotor. The system performs all sensing and computation onboard, making it independent of any base station or remote control. High-level navigation and control tasks are carried out in a microcontroller that steers the vehicle to a desired location. We experimentally demonstrate the properties and capabilities of the system by autonomously following several trajectory patterns of different complexity levels, and evaluate the performance of our system.
The international micro air vehicle flight competition as autonomy benchmark
In Robotics Competition: Benchmarking, Technology, Transfer and Education Workshop, European Robotics Forum, 2013. Cited by 2 (1 self).
Abstract: The development of air vehicles through competitions has a long history [1]. The first Micro Air Vehicle competition was organized in 1997 at the University of Florida, the same year that the ...
RS-SLAM: RANSAC sampling for visual FastSLAM
In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 1655-1660. Cited by 1 (0 self).
Abstract: In this paper, we present our RS-SLAM algorithm for a monocular camera, where the proposal distribution is derived from the 5-point RANSAC algorithm and image-feature measurement uncertainties instead of the easily violated constant-velocity model. We propose to do another RANSAC sampling within all the inliers that have the best RANSAC score to check for inlier misclassifications in the original correspondences, and to use all the hypotheses generated from these consensus sets in the proposal distribution. This mitigates data-association errors (inlier misclassifications) caused by the observation that, in practice, the consensus set from RANSAC that yields the highest score might not contain all the true inliers due to noise on the feature measurements. Hypotheses which are less probable will eventually be eliminated in the particle-filter resampling process. We also show that our monocular approach can easily be extended to a stereo camera. Experimental results validate the potential of our approach.
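The consensus-set machinery the paper resamples within is standard RANSAC. A minimal 2D line-fitting example showing how a best consensus set is selected (the model, threshold, and iteration count are illustrative; the paper uses the 5-point algorithm on image correspondences instead):

```python
import random

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Minimal RANSAC for a 2D line y = a*x + b: repeatedly sample a
    minimal set (2 points), fit, score by consensus-set size, and keep
    the best. Under measurement noise the winning consensus set may
    still misclassify inliers, which motivates the paper's second
    sampling round within the best consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # vertical sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for (x, y) in points
                   if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

Each surviving hypothesis here corresponds to one proposal-distribution component in the paper's particle filter; less probable hypotheses are pruned by resampling.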