Results 1–10 of 18
Motion Planning under Uncertainty for Robotic Tasks with Long Time Horizons
Cited by 39 (2 self)
Abstract: Partially observable Markov decision processes (POMDPs) are a principled mathematical framework for planning under uncertainty, a crucial capability for reliable operation of autonomous robots. By using probabilistic sampling, point-based POMDP solvers have drastically improved the speed of POMDP planning, enabling POMDPs to handle moderately complex robotic tasks. However, robot motion planning tasks with long time horizons remain a severe obstacle for even the fastest point-based POMDP solvers today. This paper proposes Milestone Guided Sampling (MiGS), a new point-based POMDP solver, which exploits state-space information to reduce the effective planning horizon. MiGS samples a set of points, called milestones, from a robot's state space, uses them to construct a compact, sampled representation of the state space, and then uses this representation to guide sampling in the belief space. This strategy reduces the effective planning horizon while still capturing the essential features of the belief space with a small number of sampled points. Preliminary results are very promising. We tested MiGS in simulation on several difficult POMDPs modeling distinct robotic tasks with long time horizons that are beyond the reach of the fastest point-based POMDP solvers today; MiGS solved them in a few minutes.
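The milestone idea in the abstract can be sketched as a PRM-style roadmap over the state space: sample milestones, then connect nearby pairs so that roadmap edges act as multi-step moves that shorten the effective planning horizon. This is a toy illustration under our own assumptions (uniform sampling over a box, a fixed connection radius), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_milestones(lo, hi, n):
    """Uniformly sample n milestone points from the box [lo, hi]^d.
    (Uniform sampling is an assumption made for this sketch.)"""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return rng.uniform(lo, hi, size=(n, lo.size))

def build_roadmap(milestones, radius):
    """Connect every pair of milestones closer than `radius`.
    Each edge stands in for a multi-step motion between milestones."""
    edges = []
    for i in range(len(milestones)):
        for j in range(i + 1, len(milestones)):
            if np.linalg.norm(milestones[i] - milestones[j]) <= radius:
                edges.append((i, j))
    return edges
```

In the paper's strategy, sampling in belief space would then be guided along such roadmap edges rather than by single primitive actions.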
A survey of point-based POMDP solvers
Autonomous Agents and Multi-Agent Systems, 2012
Cited by 33 (5 self)
Abstract: The past decade has seen a significant breakthrough in research on solving partially observable Markov decision processes (POMDPs). Where past solvers could not scale beyond perhaps a dozen states, modern solvers can handle complex domains with many thousands of states. This breakthrough was mainly due to the idea of restricting value function computations to a finite subset of the belief space, permitting only local value updates for this subset. This approach, known as point-based value iteration, avoids the exponential growth of the value function and is thus applicable to domains with longer horizons, even with relatively large state spaces. Many extensions to this basic idea have been suggested, focusing on various aspects of the algorithm, mainly the selection of the belief-space subset and the order of value function updates. In this survey, we walk the reader through the fundamentals of point-based value iteration, explaining the main concepts and ideas. Then, we survey the major extensions to the basic algorithm, discussing their merits. Finally, we include an extensive empirical analysis using well-known benchmarks in order to shed light on the strengths and limitations of the various approaches.
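The local value update at the heart of point-based value iteration can be sketched in a few lines: at a sampled belief, back up the alpha-vector set through every action-observation pair and keep only the vector that is optimal at that belief. The toy two-state POMDP below uses made-up numbers, and the tensor layout and names are our assumptions, not any particular solver's API:

```python
import numpy as np

# Toy 2-state / 2-action / 2-observation POMDP (illustrative numbers only).
gamma = 0.95
T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # T[a, s, s'] transition model
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],   # Z[a, s', o] observation model
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])    # R[a, s] reward model

def backup(b, Gamma):
    """One point-based backup at belief b over the alpha-vector set Gamma.

    Returns the single alpha-vector optimal at b -- the local value
    update that PBVI-style solvers apply only at sampled beliefs.
    """
    best = None
    for a in range(T.shape[0]):
        alpha_a = R[a].copy()
        for o in range(Z.shape[2]):
            # Project each alpha-vector back through (a, o) and keep the
            # projection with the highest value at b.
            proj = [gamma * (T[a] @ (Z[a][:, o] * alpha)) for alpha in Gamma]
            alpha_a += proj[int(np.argmax([b @ p for p in proj]))]
        if best is None or b @ alpha_a > b @ best:
            best = alpha_a
    return best
```

Restricting this backup to a finite set of sampled beliefs is what keeps the alpha-vector set from growing exponentially.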
A Probabilistic Particle-Control Approximation of Chance-Constrained Stochastic Predictive Control
Cited by 22 (1 self)
Abstract: Robotic systems need to be able to plan control actions that are robust to the inherent uncertainty in the real world. This uncertainty arises due to uncertain state estimation, disturbances, and modeling errors, as well as stochastic mode transitions such as component failures. Chance-constrained control takes uncertainty into account to ensure that the probability of failure, due to collision with obstacles, for example, is below a given threshold. In this paper, we present a novel method for chance-constrained predictive stochastic control of dynamic systems. The method approximates the distribution of the system state using a finite number of particles. By expressing these particles in terms of the control variables, we are able to approximate the original stochastic control problem as a deterministic one; furthermore, the approximation becomes exact as the number of particles tends to infinity. The method applies to arbitrary noise distributions, and for systems with linear or jump Markov linear dynamics, we show that the approximate problem can be solved using efficient mixed-integer linear-programming techniques. We also introduce an importance-weighting extension that enables the method to deal with low-probability mode transitions such as failures. We demonstrate in simulation that the new method is able to control an aircraft in turbulence and can control a ground vehicle while being robust to brake failures. Index Terms: chance constraints, hybrid discrete-continuous systems, nonholonomic motion planning, planning under stochastic uncertainty.
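The particle approximation itself is straightforward to sketch: sample disturbance particles, roll each through the dynamics under a candidate control sequence, and approximate the chance constraint by the fraction of particles that violate it. The 1-D integrator with Gaussian noise below is a toy stand-in under our own assumptions; the paper's mixed-integer encoding of this check over the control variables is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_particles(x0, u_seq, n_particles=1000, sigma=0.05):
    """Roll n_particles copies of the state through the toy dynamics
    x' = x + u + w, with disturbance w ~ N(0, sigma^2) per particle."""
    x = np.full(n_particles, float(x0))
    for u in u_seq:
        x = x + u + rng.normal(0.0, sigma, n_particles)
    return x

def chance_constraint_ok(particles, obstacle=1.0, delta=0.1):
    """Approximate P(x >= obstacle) <= delta by the fraction of
    particles ending up past the obstacle."""
    return float(np.mean(particles >= obstacle)) <= delta
```

As the abstract notes, this approximation becomes exact as the number of particles tends to infinity.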
Multi-objective exploration and search for autonomous rescue robots
Journal of Field Robotics, 2007
Cited by 17 (3 self)
Abstract: "Exploration and search" is a typical task for autonomous robots performing in rescue missions, specifically addressing the problem of exploring the environment and at the same time searching for interesting features within it. In this paper, we model this problem as a multi-objective exploration and search problem and present a prototype system, featuring a strategic level, which can be used to adapt the task of exploration and search to specific rescue missions. Specifically, we make use of a high-level representation of the robot plans through a Petri net formalism that allows representing, in a coherent framework, decisions, loops, interrupts due to unexpected events or action failures, concurrent actions, and action synchronization. While autonomous exploration has been investigated in the past, we specifically focus on the problem of searching for interesting features in the environment during the map-building process. We discuss performance evaluation of exploration and search strategies for rescue robots using an effective performance metric, and present an evaluation of our system through a set of experiments.
From pixels to multi-robot decision-making: A study in uncertainty
Robotics and Autonomous Systems (Special issue on Planning Under Uncertainty in Robotics), 2006
Cited by 14 (9 self)
Abstract: Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs to behavior selection to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for (i) reducing uncertainty and (ii) making decisions in the face of uncertainty. We present a complete vision-based robotic system that includes several algorithms for learning models that are useful and necessary for planning, and then place particular emphasis on the planning and decision-making capabilities of the robot. Specifically, we present models for autonomous color calibration, autonomous sensor and actuator modeling, and an adaptation of particle filtering for improved localization on legged robots. These contributions enable effective planning under uncertainty for robots engaged in goal-oriented behavior within a dynamic, collaborative, and adversarial environment. Each of our algorithms is fully implemented and tested on a commercial off-the-shelf vision-based quadruped robot.
Generating exponentially smaller POMDP models using conditionally irrelevant variable abstraction
In Proc. Int. Conf. on Automated Planning and Scheduling (ICAPS), 2007
Cited by 13 (5 self)
Abstract: The state of a POMDP can often be factored into a tuple of n state variables. The corresponding flat model, with size exponential in n, may be intractably large. We present a novel method called conditionally irrelevant variable abstraction (CIVA) for losslessly compressing the factored model, which is then expanded into an exponentially smaller flat model in a representation compatible with many existing POMDP solvers. We applied CIVA to previously intractable problems from a robotic exploration domain. We were able to abstract, expand, and approximately solve POMDPs that had up to 10^24 states in the uncompressed flat representation.
POMDP-based long-term user intention prediction for wheelchair navigation
In: IEEE International Conference on Robotics and Automation (ICRA), 2008
Cited by 12 (1 self)
Planning to See: A Hierarchical Approach to Planning Visual Actions on a Robot using POMDPs
Artificial Intelligence, 2010
Cited by 7 (0 self)
Abstract: Flexible, general-purpose robots need to autonomously tailor their sensing and information processing to the task at hand. We pose this challenge as the task of planning under uncertainty. In our domain, the goal is to plan a sequence of visual operators to apply to regions of interest (ROIs) in images of a scene, so that a human and a robot can jointly manipulate and converse about objects on a tabletop. We pose visual processing management as an instance of probabilistic sequential decision making, and specifically as a partially observable Markov decision process (POMDP). The POMDP formulation uses models that quantitatively capture the unreliability of the operators and enable a robot to reason precisely about the trade-offs between plan reliability and plan execution time. Since planning in practical-sized POMDPs is intractable, we partially ameliorate this intractability for visual processing by defining a novel hierarchical POMDP based on the cognitive requirements of the corresponding planning task. We compare our hierarchical POMDP planning system (HiPPo) with a non-hierarchical POMDP formulation and with the Continual Planning (CP) framework, which handles uncertainty in a qualitative manner. We show empirically that HiPPo and CP outperform the naive application of all visual operators on all ROIs. The key result is that the POMDP methods produce more robust plans than CP or naive visual processing.
Optimal, Robust Predictive Control of Nonlinear Systems under Probabilistic Uncertainty using Particles
Cited by 6 (0 self)
Abstract: In this paper we present a novel method for robust, optimal control of nonlinear systems under probabilistic uncertainty. The method extends a previous approach for linear systems that approximates the distribution of the predicted system state using a finite number of particles. We couple this particle-based approach with a nonlinear solver that does not take uncertainty into account to give a new method for nonlinear, robust control. Any solution returned by the algorithm is guaranteed to be ε-close to a local optimum of the nonlinear stochastic control problem.
Wheelchair Driver Assistance and Intention Prediction Using POMDPs
Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, 2007