Results 1–10 of 12
Improved Nondeterministic Planning by Exploiting State Relevance
"... We address the problem of computing a policy for fully observable nondeterministic (FOND) planning problems. By focusing on the relevant aspects of the state of the world, we introduce a series of improvements to the previous state of the art and extend the applicability of our planner, PRP, to wor ..."
Abstract

Cited by 13 (6 self)
 Add to MetaCart
(Show Context)
We address the problem of computing a policy for fully observable nondeterministic (FOND) planning problems. By focusing on the relevant aspects of the state of the world, we introduce a series of improvements to the previous state of the art and extend the applicability of our planner, PRP, to work in an online setting. The use of state relevance allows our policy to be exponentially more succinct in representing a solution to a FOND problem for some domains. Through the introduction of new techniques for avoiding dead ends and determining sufficient validity conditions, PRP has the potential to compute a policy up to several orders of magnitude faster than previous approaches. We also find dramatic improvements over the state of the art in online replanning when we treat suitable probabilistic domains as FOND domains.
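The succinctness claim can be made concrete with a small sketch (hypothetical class and fluent names, not PRP's internals): a policy stored over partial states, i.e. only the relevant fluents, matched by subset inclusion, covers every completion of each entry, so one entry can stand in for exponentially many full states.

```python
# Sketch: a policy keyed on partial states (relevant fluents only).
# A single entry {"at-door": True} -> "open" covers every full state
# containing that fluent, regardless of the irrelevant fluents.
# Names here are illustrative, not PRP's actual data structures.

class PartialStatePolicy:
    def __init__(self):
        self.entries = []  # list of (partial_state dict, action)

    def add(self, partial_state, action):
        self.entries.append((partial_state, action))

    def lookup(self, full_state):
        # Return the action of the first entry whose partial state
        # is satisfied by the full state.
        for partial, action in self.entries:
            if all(full_state.get(k) == v for k, v in partial.items()):
                return action
        return None  # policy undefined: trigger replanning


policy = PartialStatePolicy()
policy.add({"at-door": True}, "open")

# Matches regardless of the many irrelevant fluents.
state = {"at-door": True, "raining": False, "lights-on": True}
print(policy.lookup(state))  # prints "open"
```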
Simple and Fast Strong Cyclic Planning for Fully-Observable Nondeterministic Planning Problems
 in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
, 2011
"... We address a difficult, yet underinvestigated class of planning problems: fullyobservable nondeterministic (FOND) planning problems with strong cyclic solutions. The difficulty of these strong cyclic FOND planning problems stems from the large size of the state space. Hence, to achieve efficient p ..."
Abstract

Cited by 10 (2 self)
 Add to MetaCart
We address a difficult, yet under-investigated class of planning problems: fully-observable nondeterministic (FOND) planning problems with strong cyclic solutions. The difficulty of these strong cyclic FOND planning problems stems from the large size of the state space. Hence, to achieve efficient planning, a planner has to cope with the explosion in the size of the state space by planning along the directions that allow the goal to be reached quickly. A major challenge is: how would one know which states and search directions are relevant before the search for a solution has even begun? We first describe an NDP-motivated strong cyclic algorithm that, without addressing the above challenge, can already outperform state-of-the-art FOND planners, and then extend this NDP-motivated planner with a novel heuristic that addresses the challenge.
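The strong cyclic condition the abstract refers to can be stated operationally: every state reachable under the policy must be covered by it, and from every such state the goal must remain reachable using only policy actions. A minimal explicit-state checker illustrates this (a sketch only; the planner itself works on factored representations):

```python
# Sketch: verifying the strong cyclic property of a policy over an
# explicit nondeterministic transition relation.

def is_strong_cyclic(policy, transitions, init, goals):
    # transitions: dict (state, action) -> set of successor states
    # 1) collect states reachable from init under the policy
    reachable, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        if s in goals:
            continue
        if s not in policy:
            return False  # policy undefined on a reachable state
        for t in transitions[(s, policy[s])]:
            if t not in reachable:
                reachable.add(t)
                frontier.append(t)
    # 2) every reachable state must still be able to reach a goal
    can_reach = set(goals) & reachable
    changed = True
    while changed:
        changed = False
        for s in reachable - can_reach:
            if s in policy and transitions[(s, policy[s])] & can_reach:
                can_reach.add(s)
                changed = True
    return reachable <= can_reach


# A one-state loop that may eventually reach the goal is strong cyclic:
trans = {("s0", "try"): {"s0", "g"}}
print(is_strong_cyclic({"s0": "try"}, trans, "s0", {"g"}))  # prints True
```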
Short-Sighted Stochastic Shortest Path Problems
"... Algorithms to solve probabilistic planning problems can be classified in probabilistic planners and replanners. Probabilistic planners invest significant computational effort to generate a closed policy, i.e., a mapping function from every state to an action, and these solutions never “fail ” if the ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
(Show Context)
Algorithms to solve probabilistic planning problems can be classified into probabilistic planners and replanners. Probabilistic planners invest significant computational effort to generate a closed policy, i.e., a mapping function from every state to an action, and these solutions never “fail” if the problem correctly models the environment. Alternatively, replanners compute a partial policy, i.e., a mapping function from a subset of the state space to actions, and when and if such a policy fails during execution in the environment, the replanner is reinvoked to plan again from the failed state. In this paper, we introduce a special case of Stochastic Shortest Path Problems (SSPs), the short-sighted SSPs, in which every state has positive probability of being reached using at most t actions. We introduce the novel algorithm Short-Sighted Probabilistic Planner (SSiPP), which solves SSPs through short-sighted SSPs and guarantees that at least t actions can be executed without replanning. Therefore, by varying t, SSiPP can behave either as a probabilistic planner by computing closed policies, or as a replanner by computing partial policies. Moreover, we prove that SSiPP is asymptotically optimal, making SSiPP the only planner that, at the same time, guarantees optimality and offers a bound on the minimum number of actions executed without replanning. We empirically compare SSiPP with the winners of the previous probabilistic planning competitions and, in 81.7% of the problems, SSiPP performs at least as well as the best competitor.
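The plan-execute-replan loop described above can be sketched as follows (function and class names are placeholders, not the authors' code; the short-sighted sub-problem builder and its optimal solver are stubbed out with a trivial deterministic chain world for illustration):

```python
# Sketch of the SSiPP control loop: repeatedly build a short-sighted
# SSP around the current state, solve it, and execute actions until
# execution leaves the sub-problem. Replanning happens at most every
# t steps because the sub-problem covers all states within t actions.

def ssipp(env, solve_optimally, build_short_sighted_ssp, t):
    state = env.initial_state()
    while not env.is_goal(state):
        # Short-sighted SSP: states reachable within t actions; fringe
        # states become artificial goals priced by a heuristic.
        sub_ssp = build_short_sighted_ssp(state, t)
        policy = solve_optimally(sub_ssp)  # e.g. value iteration
        while state in policy and not env.is_goal(state):
            state = env.step(state, policy[state])
    return state


# --- Toy demo stubs (deterministic chain 0 -> 1 -> ... -> n) ---
class ChainEnv:
    def __init__(self, n): self.n = n
    def initial_state(self): return 0
    def is_goal(self, s): return s == self.n
    def step(self, s, a): return s + 1  # single deterministic outcome

def build_sub(state, t):
    return set(range(state, state + t + 1))

def solve(sub_ssp):
    return {s: "advance" for s in sub_ssp}

print(ssipp(ChainEnv(5), solve, build_sub, 2))  # prints 5
```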
Trajectory-Based Short-Sighted Probabilistic Planning
"... Probabilistic planning captures the uncertainty of plan execution by probabilistically modeling the effects of actions in the environment, and therefore the probability of reaching different states from a given state and action. In order to compute a solution for a probabilistic planning problem, pl ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Probabilistic planning captures the uncertainty of plan execution by probabilistically modeling the effects of actions in the environment, and therefore the probability of reaching different states from a given state and action. In order to compute a solution for a probabilistic planning problem, planners need to manage the uncertainty associated with the different paths from the initial state to a goal state. Several approaches to managing uncertainty have been proposed, e.g., considering all paths at once, determinizing actions, and sampling. In this paper, we introduce trajectory-based short-sighted Stochastic Shortest Path Problems (SSPs), a novel approach to managing uncertainty for probabilistic planning problems in which states reachable with low probability are substituted by artificial goals that heuristically estimate their cost to reach a goal state. We also extend the theoretical results of the Short-Sighted Probabilistic Planner (SSiPP) [1] by proving that SSiPP always finishes and is asymptotically optimal under sufficient conditions on the structure of short-sighted SSPs. We empirically compare SSiPP using trajectory-based short-sighted SSPs with the winners of the previous probabilistic planning competitions and other state-of-the-art planners in the triangle tireworld problems. Trajectory-based SSiPP outperforms all the competitors and is the only planner able to scale up to problem number 60, a problem in which the optimal solution contains approximately 10^70 states.
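The trajectory-based pruning idea can be sketched as follows (illustrative names and a simplified probability criterion, not the paper's exact construction): keep only states whose most likely trajectory from the current state has probability at least rho; everything outside that set becomes an artificial goal priced by a heuristic.

```python
# Sketch: compute the state set of a trajectory-based short-sighted
# SSP. States whose best (max-probability) trajectory from s0 falls
# below rho are cut off and would be turned into artificial goals.

def trajectory_based_states(s0, actions, rho):
    # actions: dict state -> list of outcome lists, one per action,
    #          each outcome list being [(successor, probability), ...]
    best = {s0: 1.0}  # best trajectory probability found per state
    frontier = [s0]
    while frontier:
        s = frontier.pop()
        for outcomes in actions.get(s, []):
            for succ, p in outcomes:
                q = best[s] * p
                # keep succ only if some trajectory reaches it with
                # probability >= rho; re-push on improvement
                if q >= rho and q > best.get(succ, 0.0):
                    best[succ] = q
                    frontier.append(succ)
    return set(best)  # everything else becomes an artificial goal


# One action from s0: likely outcome "a" (0.9), unlikely "b" (0.1).
acts = {"s0": [[("a", 0.9), ("b", 0.1)]]}
print(trajectory_based_states("s0", acts, 0.5))  # {"s0", "a"}
```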
The deterministic part of the seventh International Planning Competition – Appendices
"... This document provides additional information about the techniques used and the results obtained in the seventh International Planning Competition, IPC. As in other editions of the IPC, it contains a glossary of the different entrants that took part in the Competition and the benchmarking suite se ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
This document provides additional information about the techniques used and the results obtained in the seventh International Planning Competition (IPC). As in other editions of the IPC, it contains a glossary of the different entrants that took part in the Competition and the benchmarking suite selected for their evaluation. The last three appendices cover different subjects: once the competition was over, new experiments were performed to provide additional data about a couple of interesting cases; Appendix D proposes a novel technique to rank the difficulty of planning tasks that are reused from previous competitions; finally, Appendix E provides additional explanations about the different statistical tests that were applied in the evaluation of the results produced at IPC-2011.
Exploiting Relevance to Improve Robustness and Flexibility in Plan Generation and Execution
, 2014
"... Automated plan generation and execution is an essential component of most autonomous agents. An agent’s model of the world is often incomplete or incorrect, and its environment is typically noisy. To account for potential discrepancies between the agent’s model of the world and the true state of th ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Automated plan generation and execution is an essential component of most autonomous agents. An agent’s model of the world is often incomplete or incorrect, and its environment is typically noisy. To account for potential discrepancies between the agent’s model of the world and the true state of the world, the planning techniques and representations used should enable flexible and robust agent behaviour. The agent should react swiftly when unexpected changes occur to assess the impact of the discrepancy and to accommodate as necessary. In particular, the agent should avoid unnecessary replanning and recognize changes that are irrelevant for its plan to achieve the goal. In this dissertation we address various aspects of the planning process including (1) how to synthesize a plan, (2) what a plan should constitute and how we should represent one, and (3) how to effectively execute a plan. We enable robust and flexible agent behaviour by exploiting the notion of relevance in each of the key planning areas. Intuitively, relevance characterizes what is important to consider as a sufficient condition for some property to hold. We apply relevance to the key areas of automated planning to achieve the following contributions: (1) increased flexibility of partial-order plans, (2)
Planning Under Uncertainty Using Reduced Models: Revisiting Determinization
"... We introduce a family of MDP reduced models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for and the maximum number of occurrences of the remaining exceptional outcomes that are planned for in advance. Reduced models can be solved much ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
We introduce a family of MDP reduced models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for and the maximum number of occurrences of the remaining exceptional outcomes that are planned for in advance. Reduced models can be solved much faster using heuristic search algorithms such as LAO*, benefiting from the dramatic reduction in the number of reachable states. A commonly used determinization approach is a special case of this family of reductions, with one primary outcome per action and zero exceptional outcomes per plan. We present a framework to compute the benefits of planning with reduced models, relying on online planning when the number of exceptional outcomes exceeds the bound. Using this framework, we compare the performance of various reduced models and consider the challenge of generating good ones automatically. We show that each one of the dimensions (allowing more than one primary outcome or planning for some limited number of exceptions) could improve performance relative to standard determinization. The results place recent work on determinization in a broader context and lay the foundation for efficient and systematic exploration of the space of MDP model reductions.
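The two-parameter reduction can be sketched with a small helper (hypothetical API, not the paper's code): states are augmented with an exception counter j; each action keeps its k most probable outcomes as primaries, and exceptional outcomes are only planned for while j is below the bound e. The determinization special case (k=1, e=0) falls out directly.

```python
# Sketch: reduce an action's outcome distribution under the
# (k primaries, at most e exceptions per plan) reduction. Successors
# are paired with the exception count j of the augmented state.

def reduce_outcomes(outcomes, k, e, j):
    # outcomes: [(successor, probability), ...], sorted by
    # decreasing probability so the first k are the primaries
    primary = outcomes[:k]
    exceptional = outcomes[k:] if j < e else []  # drop past the bound
    reduced = [((succ, j), p) for succ, p in primary]
    reduced += [((succ, j + 1), p) for succ, p in exceptional]
    # renormalize over the outcomes kept in the reduced model
    total = sum(p for _, p in reduced)
    return [(s, p / total) for s, p in reduced]


# Determinization: one primary outcome, no exceptions.
print(reduce_outcomes([("a", 0.7), ("b", 0.3)], 1, 0, 0))
# -> [(("a", 0), 1.0)]
```

With k=1 and e=1 the exceptional outcome "b" is kept once, with its exception counter incremented, after which further exceptions are left to online replanning.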
Mixed Probabilistic and Nondeterministic Factored Planning through Markov Decision Processes with Set-Valued Transitions
"... This paper focuses on factored planning problems with probabilistic and nondeterministic elements. We first show that problems expressed in the nondeterministic extensions of PPDDL used in the 5th planning competition, yield Markov Decision Processes with SetValued Transitions (MDPSTs). We present ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
This paper focuses on factored planning problems with probabilistic and nondeterministic elements. We first show that problems expressed in the nondeterministic extensions of PPDDL used in the 5th planning competition yield Markov Decision Processes with Set-Valued Transitions (MDPSTs). We present a generalization of the language that still yields MDPSTs, and examine the solution of these MDPSTs using real-time dynamic programming.
Depth-Based Short-Sighted Stochastic Shortest Path Problems
"... Stochastic Shortest Path Problems (SSPs) are a common representation for probabilistic planning problems. Two approaches can be used to solve SSPs: (i) consider all probabilistically reachable states and (ii) plan only for a subset of these reachable states. Closed policies, the solutions obtained ..."
Abstract
 Add to MetaCart
Stochastic Shortest Path Problems (SSPs) are a common representation for probabilistic planning problems. Two approaches can be used to solve SSPs: (i) consider all probabilistically reachable states and (ii) plan only for a subset of these reachable states. Closed policies, the solutions obtained by the former approach, require significant computational effort, but they do not require replanning, i.e., the planner is never reinvoked. The second approach, employed by replanners, computes open policies, i.e., policies for a subset of the probabilistically reachable states. Therefore, when a state is reached in which the open policy is not defined, the replanner is reinvoked to compute a new open policy. In this article, we introduce a special case of SSPs, the depth-based short-sighted SSPs, in which every state has a nonzero probability of being reached using at most t actions. We also introduce the novel algorithm Short-Sighted Probabilistic Planner (SSiPP), which solves SSPs through depth-based short-sighted SSPs and guarantees that at least t actions can be executed without replanning. Therefore, SSiPP can compute both open and closed policies: as t increases, the returned policy approaches the behavior of a closed policy, and for t large enough, the returned policy is closed. Moreover, we present two extensions to SSiPP: Labeled-SSiPP and SSiPP-FF. The former incorporates a labeling mechanism to avoid revisiting states that have already converged. The latter combines SSiPP and determinizations to improve the performance of SSiPP in problems without dead ends. We also performed an extensive empirical evaluation of SSiPP and its extensions in several problems against state-of-the-art planners. The results show that (i) Labeled-SSiPP outperforms SSiPP and the considered planners in the task of finding the optimal solution when the problems have a low percentage of relevant states; and (ii) SSiPP-FF outperforms SSiPP in the task of quickly finding suboptimal solutions to problems without dead ends while performing similarly in problems with dead ends.
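The labeling mechanism mentioned above can be sketched in the spirit of LRTDP-style labeling (illustrative names, not the article's implementation): a state is marked solved once its Bellman residual, and those of all states reachable from it under the greedy policy, fall below a threshold, so later trials skip it.

```python
# Sketch: label states as converged ("solved") so they are not
# revisited. A state is solved when every state reachable from it
# under the greedy policy has a Bellman residual below eps.

def check_solved(s, residual, greedy_succs, solved, eps):
    # residual: state -> current Bellman residual
    # greedy_succs: state -> successors under the greedy action
    open_, closed = [s], set()
    while open_:
        u = open_.pop()
        if u in closed or u in solved:
            continue
        closed.add(u)
        if residual(u) > eps:
            return False  # not yet converged; keep updating
        open_.extend(greedy_succs(u))
    solved.update(closed)  # label the whole explored envelope
    return True
```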