Results 1 - 10 of 30
Camera-Based Navigation of a Low-Cost Quadrocopter
- in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
"... Abstract — In this paper, we describe a system that enables a low-cost quadrocopter coupled with a ground-based laptop to navigate autonomously in previously unknown and GPSdenied environments. Our system consists of three components: a monocular SLAM system, an extended Kalman filter for data fusio ..."
Abstract
-
Cited by 28 (6 self)
- Add to MetaCart
(Show Context)
Abstract — In this paper, we describe a system that enables a low-cost quadrocopter coupled with a ground-based laptop to navigate autonomously in previously unknown and GPS-denied environments. Our system consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion and state estimation, and a PID controller to generate steering commands. In addition to a working system, the main contribution of this paper is a novel, closed-form solution to estimate the absolute scale of the generated visual map from inertial and altitude measurements. In an extensive set of experiments, we demonstrate that our system is able to navigate in previously unknown environments at absolute scale without requiring artificial markers or external sensors. Furthermore, we show (1) its robustness to temporary loss of visual tracking and significant delays in the communication process, (2) the elimination of odometry drift as a result of the visual SLAM system, and (3) accurate, scale-aware pose estimation and navigation.
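As a point of reference for the scale-estimation step described in this abstract, the sketch below estimates a map scale factor by plain least squares over paired visual and metric displacement vectors. It is an illustration only, not the paper's closed-form maximum-likelihood solution; the function name and input format are assumptions.

```python
import numpy as np

def estimate_scale(visual_disp, metric_disp):
    """Least-squares scale factor lambda minimizing ||metric - lambda * visual||^2
    over paired displacement vectors (one row per time interval).

    NOTE: a plain least-squares illustration, not the closed-form
    maximum-likelihood estimator derived in the paper."""
    v = np.asarray(visual_disp, dtype=float).ravel()
    m = np.asarray(metric_disp, dtype=float).ravel()
    return float(v @ m) / float(v @ v)

# Example: metric displacements are 2x the visual ones -> lambda ~ 2.0
vis = np.array([[0.10, 0.00, 0.05], [0.20, 0.10, 0.00]])
met = 2.0 * vis
print(estimate_scale(vis, met))   # ~2.0
```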
Vision-Based Autonomous Mapping and Exploration Using a Quadrotor MAV
"... Abstract — In this paper, we describe our autonomous visionbased quadrotor MAV system which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a frontlooking stereo camera as the main exteroceptive sensor, our quadrotor ..."
Abstract
-
Cited by 26 (3 self)
- Add to MetaCart
(Show Context)
Abstract — In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration underperforms. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by VFH+ and frontier-based exploration in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
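The frontier-based exploration mentioned above reduces, at its core, to finding free cells that border unknown space in the occupancy map. Below is a minimal sketch of that test on a 2D grid; the flattening to 2D, the cell labels, and the function name are simplifying assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) indices of free cells adjacent to at least one unknown
    cell -- the standard definition of a frontier in frontier-based exploration.
    `grid` is a 2D array with values FREE, OCCUPIED, or UNKNOWN."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neigh = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neigh == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

# Example: a small explored patch surrounded by unknown space.
grid = np.full((5, 5), UNKNOWN)
grid[1:4, 1:4] = FREE
grid[2, 2] = OCCUPIED
print(find_frontiers(grid))   # free ring cells bordering unknown space
```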
Learning Monocular Reactive UAV Control in Cluttered Natural Environments
, 2012
"... Abstract—Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straight-forward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs) which operate at low altitude in cluttered env ..."
Abstract
-
Cited by 19 (2 self)
- Add to MetaCart
(Show Context)
Abstract — Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straightforward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.
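The controller described above maps visual features to heading commands learned from pilot demonstrations. Below is a minimal behavioral-cloning sketch using ridge regression; the paper applies iterative imitation learning rather than a single supervised fit, and the feature representation and function names here are assumptions.

```python
import numpy as np

def train_heading_controller(features, pilot_headings, reg=1e-3):
    """Fit a linear map from per-frame visual feature vectors to the heading
    command the human pilot gave in that frame (ridge regression).
    A single behavioral-cloning pass; the paper's imitation learning method
    repeats such fits on data aggregated from the learned policy's own runs."""
    X = np.asarray(features, dtype=float)       # (n_frames, n_features)
    y = np.asarray(pilot_headings, dtype=float) # (n_frames,)
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def predict_heading(weights, feature_vector):
    """Heading command for a new frame's feature vector."""
    return float(np.asarray(feature_vector, dtype=float) @ weights)
```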
First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles
- in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)
, 2013
"... Abstract — Obstacle avoidance is desirable for lightweight micro aerial vehicles and is a challenging problem since the payload constraints only permit monocular cameras and obstacles cannot be directly observed. Depth can however be inferred based on various cues in the image. Prior work has examin ..."
Abstract
-
Cited by 15 (0 self)
- Add to MetaCart
(Show Context)
Abstract — Obstacle avoidance is desirable for lightweight micro aerial vehicles and is a challenging problem, since the payload constraints only permit monocular cameras and obstacles cannot be directly observed. Depth can, however, be inferred from various cues in the image. Prior work has examined optical flow and perspective cues; however, these methods cannot handle frontal obstacles well. In this paper, we examine the problem of detecting obstacles directly in front of the vehicle. We developed a method to detect relative size changes of image patches, which works even in the absence of optical flow. The method uses SURF feature matches in combination with template matching to compare relative obstacle sizes with different image spacing. We present results from our algorithm in autonomous flight tests on a small quadrotor. We are able to detect obstacles with a frame-to-frame enlargement of 120% with high confidence, and confirmed our algorithm in 20 successful flight experiments. In future work, we will improve the control algorithms to avoid more complicated obstacle configurations.
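At the core of the frontal-obstacle detector described above is a test for relative size change between frames. The sketch below applies such a ratio test to matched feature scales; it is not the paper's SURF-plus-template-matching pipeline, and the default threshold is a placeholder, since the abstract's 120% figure is not defined precisely enough here to adopt directly.

```python
import numpy as np

def expansion_ratio(scales_prev, scales_curr):
    """Median ratio of matched keypoint scales between consecutive frames.
    scales_prev[i] and scales_curr[i] are the detector scales of the i-th
    matched feature (e.g. SURF) in the previous and current image."""
    r = np.asarray(scales_curr, dtype=float) / np.asarray(scales_prev, dtype=float)
    return float(np.median(r))

def is_frontal_obstacle(scales_prev, scales_curr, threshold=1.2):
    """Flag a frontal obstacle when matched features enlarge noticeably from
    frame to frame. The threshold is a placeholder for illustration."""
    return expansion_ratio(scales_prev, scales_curr) >= threshold

# Example: features grew by ~25% -> flagged as an approaching obstacle.
print(is_frontal_obstacle([4.0, 5.0, 6.0], [5.0, 6.3, 7.4]))   # True
```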
Vision-based state estimation for autonomous rotorcraft MAVs in complex environments
- In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA)
, 2013
"... Abstract-In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of visionbased state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
Abstract — In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcraft frequently operate in hover or near-hover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with those of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.
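The abstract describes fusing 25 Hz vision pose estimates with IMU data to produce 100 Hz state estimates. The sketch below is a toy complementary-filter version of that idea, not the filter used in the paper; the class name, rates, gain, and the assumption of gravity-compensated world-frame acceleration are all illustrative.

```python
import numpy as np

class SimplePoseFuser:
    """Toy per-axis fusion: integrate IMU acceleration at 100 Hz and blend in
    an absolute vision position whenever one arrives (nominally 25 Hz).
    A complementary-filter sketch, not the estimator used in the paper."""

    def __init__(self, alpha=0.05):
        self.p = np.zeros(3)   # position estimate (m)
        self.v = np.zeros(3)   # velocity estimate (m/s)
        self.alpha = alpha     # weight given to a vision correction

    def imu_update(self, accel, dt=0.01):
        """Propagate with acceleration assumed already rotated to the world
        frame and gravity-compensated (a simplification of this sketch)."""
        self.v += np.asarray(accel, dtype=float) * dt
        self.p += self.v * dt

    def vision_update(self, vision_position):
        """Pull the integrated position toward the absolute vision fix."""
        err = np.asarray(vision_position, dtype=float) - self.p
        self.p += self.alpha * err
```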
Integrating sensor and motion models to localize an autonomous AR.Drone
- International Journal of Micro Air Vehicles
, 2011
"... AR.Drone ..."
Vistas and wall-floor intersection features - enabling autonomous flight in man-made environments
- in Workshop on Visual Control of Mobile Robots
, 2012
"... Abstract — We propose a solution toward the problem of autonomous flight and exploration in man-made indoor environments with a micro aerial vehicle (MAV), using a frontal camera, a downward-facing sonar, and an IMU. We present a general method to detect and steer an MAV toward distant features that ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
(Show Context)
Abstract — We propose a solution to the problem of autonomous flight and exploration in man-made indoor environments with a micro aerial vehicle (MAV), using a frontal camera, a downward-facing sonar, and an IMU. We present a general method to detect and steer an MAV toward distant features that we call vistas, while building a map of the environment to detect unexplored regions. Our method enables autonomous exploration capabilities while working reliably in textureless indoor environments that are challenging for traditional monocular SLAM approaches. We overcome the difficulties faced by traditional approaches with Wall-Floor Intersection Features, a novel type of low-dimensional landmark specifically designed for man-made environments to capture the geometric structure of the scene. We demonstrate our results on a small, commercially available quadrotor platform.
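As a rough illustration of the wall-floor intersection idea (not the paper's landmark parameterization), the sketch below picks, per image column, the lowest strong response in an edge-strength map; the function name, threshold, and input format are assumptions.

```python
import numpy as np

def wall_floor_boundary(edge_map, rel_threshold=0.5):
    """For each image column, return the row index of the lowest strong edge
    response -- a simple heuristic for where the floor meets the wall in a
    corridor image. `edge_map` holds edge strengths (e.g. a Sobel magnitude);
    columns with no response above the threshold get -1. This is an
    illustrative heuristic, not the paper's landmark model."""
    boundary = np.full(edge_map.shape[1], -1, dtype=int)
    peak = float(edge_map.max())
    if peak <= 0.0:
        return boundary
    threshold = rel_threshold * peak
    for c in range(edge_map.shape[1]):
        hits = np.nonzero(edge_map[:, c] >= threshold)[0]
        if hits.size:
            boundary[c] = int(hits.max())   # largest row index = lowest in image
    return boundary
```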
An elevation map from a micro aerial vehicle for urban search and rescue
- In: RoboCup 2012 CD (2012)
"... Abstract. The developments in unmanned aerial vehicles make it possible to use this platform on a much larger scale. The current challenge is to use a team of flying robots to explore a city block, place lookouts at strategic points and if possible to enter some of the buildings in the block, to sea ..."
Abstract
-
Cited by 5 (2 self)
- Add to MetaCart
(Show Context)
Abstract. Developments in unmanned aerial vehicles make it possible to use this platform on a much larger scale. The current challenge is to use a team of flying robots to explore a city block, place lookouts at strategic points, and, if possible, enter some of the buildings in the block to search for hazards and/or people. This challenge is still quite ambitious, but it allows researchers to explore some of its aspects in simulation. This paper describes how to build a visual map of the environment, including height information. This is an essential step towards more extensive applications, both in simulation and for a real platform.
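The elevation map described above can be illustrated by rasterizing 3D map points into a 2.5D grid that keeps the maximum height seen per cell. A minimal sketch under that assumption; grid size, cell size, and the function name are illustrative.

```python
import numpy as np

def elevation_map(points, cell_size=0.25, grid_shape=(100, 100), origin=(0.0, 0.0)):
    """Rasterize 3D points (x, y, z) into a 2.5D elevation grid storing the
    maximum height observed per cell -- one simple way to add height
    information to a visual map. Cells with no points stay NaN."""
    grid = np.full(grid_shape, np.nan)
    for x, y, z in np.asarray(points, dtype=float):
        r = int((y - origin[1]) / cell_size)
        c = int((x - origin[0]) / cell_size)
        if 0 <= r < grid_shape[0] and 0 <= c < grid_shape[1]:
            if np.isnan(grid[r, c]) or z > grid[r, c]:
                grid[r, c] = z
    return grid
```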
Accurate Figure Flying with a Quadrocopter Using Onboard Visual and Inertial Sensing
"... Abstract — We present an approach that enables a low-cost quadrocopter to accurately fly various figures using vision as main sensor modality. Our approach consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion and state estimation and a PID controller to ge ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
Abstract — We present an approach that enables a low-cost quadrocopter to accurately fly various figures using vision as the main sensor modality. Our approach consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion and state estimation, and a PID controller to generate steering commands. Our system is able to navigate in previously unknown indoor and outdoor environments at absolute scale without requiring artificial markers or external sensors. In addition to a full description of our system, we introduce our scripting language and present several examples of accurate figure flying in the corresponding video submission.
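The abstract mentions a scripting language for specifying figures but does not describe its syntax. The sketch below uses an entirely hypothetical command format ("takeoff", "goto x y z", "land") just to illustrate how a figure such as a square could be encoded as a script and executed against user-supplied control callbacks.

```python
def run_figure_script(script, fly_to, takeoff, land):
    """Interpret a tiny waypoint script and call the given control callbacks.
    The command format is hypothetical, for illustration only."""
    for line in script.strip().splitlines():
        parts = line.split()
        if not parts:
            continue
        cmd = parts[0].lower()
        if cmd == "takeoff":
            takeoff()
        elif cmd == "land":
            land()
        elif cmd == "goto":
            x, y, z = map(float, parts[1:4])
            fly_to(x, y, z)
        else:
            raise ValueError(f"unknown command: {cmd}")

# Example: a 1 m square at 1 m altitude, executed against print stubs.
square = """
takeoff
goto 0 0 1
goto 1 0 1
goto 1 1 1
goto 0 1 1
goto 0 0 1
land
"""
run_figure_script(
    square,
    fly_to=lambda x, y, z: print("goto", x, y, z),
    takeoff=lambda: print("takeoff"),
    land=lambda: print("land"),
)
```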
Low-Power Parallel Algorithms for Single Image based Obstacle Avoidance in Aerial Robots
"... Abstract — For an aerial robot, perceiving and avoiding obstacles are necessary skills to function autonomously in a cluttered unknown environment. In this work, we use a single image captured from the onboard camera as input, produce obstacle classifications, and use them to select an evasive maneu ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
Abstract — For an aerial robot, perceiving and avoiding obstacles are necessary skills to function autonomously in a cluttered unknown environment. In this work, we use a single image captured from the onboard camera as input, produce obstacle classifications, and use them to select an evasive maneuver. We present a Markov Random Field-based approach that models the obstacles as a function of visual features and non-local dependencies in neighboring regions of the image. We perform efficient inference using new low-power parallel neuromorphic hardware, where belief propagation updates are done using leaky integrate-and-fire neurons in parallel while consuming less than 1 W of power. In outdoor robotic experiments, our algorithm was able to consistently produce clean, accurate obstacle maps, which allowed our robot to avoid a wide variety of obstacles, including trees, poles, and fences.
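The inference step described above is belief propagation on a grid MRF, run on neuromorphic hardware. For reference, the sketch below is a conventional (numpy) synchronous min-sum loopy BP on a 4-connected grid with a Potts smoothness term; the cost structure and parameters are illustrative, and this is not the paper's spiking-neuron implementation.

```python
import numpy as np

def min_sum_bp(unary, smooth=1.0, iters=10):
    """Synchronous min-sum loopy belief propagation on a 4-connected grid MRF
    with a Potts pairwise term (cost `smooth` when neighboring labels differ).
    `unary[r, c, l]` is the data cost of label l at cell (r, c); returns the
    per-cell label minimizing the final belief."""
    H, W, L = unary.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # msgs[k][r, c, :]: message sent into (r, c) from its neighbor at (r-dr, c-dc).
    msgs = [np.zeros((H, W, L)) for _ in offsets]

    def potts(l_from, l_to):
        return 0.0 if l_from == l_to else smooth

    for _ in range(iters):
        new_msgs = [np.zeros((H, W, L)) for _ in offsets]
        for k, (dr, dc) in enumerate(offsets):
            for r in range(H):
                for c in range(W):
                    pr, pc = r - dr, c - dc          # sender of this message
                    if not (0 <= pr < H and 0 <= pc < W):
                        continue
                    # Unary at the sender plus all messages into it, excluding
                    # the message coming back from (r, c).
                    incoming = unary[pr, pc].copy()
                    for k2, (dr2, dc2) in enumerate(offsets):
                        nr, nc = pr - dr2, pc - dc2
                        if (nr, nc) == (r, c) or not (0 <= nr < H and 0 <= nc < W):
                            continue
                        incoming += msgs[k2][pr, pc]
                    for l_to in range(L):
                        new_msgs[k][r, c, l_to] = min(
                            incoming[l_from] + potts(l_from, l_to)
                            for l_from in range(L)
                        )
        msgs = new_msgs

    beliefs = unary + sum(msgs)
    return np.argmin(beliefs, axis=2)                # label per cell

# Tiny usage example with random data costs for labels {free, obstacle}.
rng = np.random.default_rng(0)
unary = rng.random((8, 8, 2))
labels = min_sum_bp(unary, smooth=0.5, iters=5)
```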