Results 1–10 of 87
The LAMA planner: Guiding cost-based anytime planning with landmarks, 2010
Cited by 140 (5 self)
LAMA is a classical planning system based on heuristic forward search. Its core feature is the use of a pseudo-heuristic derived from landmarks, propositional formulas that must be true in every solution of a planning task. LAMA builds on the Fast Downward planning system, using finite-domain rather than binary state variables and multi-heuristic search. The latter is employed to combine the landmark heuristic with a variant of the well-known FF heuristic. Both heuristics are cost-sensitive, focusing on high-quality solutions in the case where actions have non-uniform cost. A weighted A* search is used with iteratively decreasing weights, so that the planner continues to search for plans of better quality until the search is terminated. LAMA showed best performance among all planners in the sequential satisficing track of the International Planning Competition 2008. In this paper we present the system in detail and investigate which features of LAMA are crucial for its performance. We present individual results for some of the domains used at the competition, demonstrating good and bad cases for the techniques implemented in LAMA. Overall, we find that using landmarks improves performance, whereas the incorporation of action costs into the heuristic estimators proves not to be beneficial. We show that in some domains a search that ignores cost solves far more problems, raising the question of how to deal with action costs more effectively in the future. The iterated weighted A* search greatly improves results, and shows synergy effects with the use of landmarks.
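The iterated weighted A* scheme described above can be sketched as follows: each pass runs weighted A* with f = g + w·h, the cost of each solution bounds the next pass, and the weight shrinks between passes. This is a minimal illustration, not LAMA's implementation; the graph, heuristic values, and weight schedule are invented for the example.

```python
import heapq

def weighted_astar(start, is_goal, successors, h, w, bound=float("inf")):
    """One weighted-A* pass with f = g + w*h; paths whose cost reaches `bound` are pruned."""
    open_list = [(w * h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        _, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < bound and g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + w * h(nxt), g2, nxt, path + [nxt]))
    return None

def iterated_wastar(start, is_goal, successors, h, weights=(5, 3, 2, 1)):
    """Re-run the search with decreasing weights; each solution's cost bounds the next pass."""
    best, bound = None, float("inf")
    for w in weights:
        result = weighted_astar(start, is_goal, successors, h, w, bound)
        if result is not None:
            best, bound = result, result[0]
    return best
```

On a small graph, an early high-weight pass may return a greedy, costlier plan; the later low-weight passes then improve on it within the bound.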
Concise finite-domain representations for PDDL planning tasks, 2009
Cited by 62 (13 self)
We introduce an efficient method for translating planning tasks specified in the standard PDDL formalism into a concise grounded representation that uses finite-domain state variables instead of the straightforward propositional encoding. Translation is performed in four stages. Firstly, we transform the input task into an equivalent normal form expressed in a restricted fragment of PDDL. Secondly, we synthesize invariants of the planning task that identify groups of mutually exclusive propositions which can be represented by a single finite-domain variable. Thirdly, we perform an efficient relaxed reachability analysis using logic programming techniques to obtain a grounded representation of the input. Finally, we combine the results of the preceding stages to generate the final grounded finite-domain representation. The presented approach was originally implemented as part of the Fast Downward planning system for the 4th International Planning Competition (IPC-4). Since then, it has been used in a number of other contexts with considerable success, and the use of concise finite-domain representations has become a common feature of state-of-the-art planners.
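The invariant stage can be illustrated with a toy encoder: each group of mutually exclusive propositions becomes a single finite-domain variable. This is only a sketch of the encoding idea with invented inputs, not the paper's translator.

```python
def build_variables(mutex_groups, all_props):
    """Turn each mutex group into one finite-domain variable whose domain is the
    group's propositions plus a 'none of these' value; propositions not covered
    by any group remain plain binary variables."""
    variables = []
    covered = set()
    for group in mutex_groups:
        variables.append(tuple(group) + ("<none>",))
        covered.update(group)
    for p in all_props:
        if p not in covered:
            variables.append((p, "not-" + p))  # binary true/false variable
    return variables
```

For a truck that is at exactly one of three locations, the three "at" propositions collapse into one variable with four values rather than three separate binary variables.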
The Joy of Forgetting: Faster Anytime Search via Restarting
Cited by 46 (15 self)
Anytime search algorithms solve optimisation problems by quickly finding a usually suboptimal solution and then finding improved solutions when given additional time. To deliver a solution quickly, they are typically greedy with respect to the heuristic cost-to-go estimate h. In this paper, we first show that this low-h bias can cause poor performance if the heuristic is inaccurate. Building on this observation, we then present a new anytime approach that restarts the search from the initial state every time a new solution is found. We demonstrate the utility of our method via experiments in PDDL planning as well as other domains. We show that it is particularly useful for hard optimisation problems like planning, where heuristics may be quite inaccurate and inadmissible, and where the greedy solution makes early mistakes.
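The restarting idea can be sketched as a greedy best-first search that, whenever a solution is found, is rerun from scratch with that solution's cost as a pruning bound. This toy version restarts a purely greedy search and omits the heuristic-value caching the paper's algorithm reuses across restarts; the graph and heuristic are invented.

```python
import heapq

def bounded_greedy(start, is_goal, successors, h, bound):
    """Greedy best-first search ordered by h alone; paths costing >= bound are discarded."""
    open_list = [(h(start), 0, start, [start])]
    seen = {start: 0}
    while open_list:
        _, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < bound and g2 < seen.get(nxt, float("inf")):
                seen[nxt] = g2
                heapq.heappush(open_list, (h(nxt), g2, nxt, path + [nxt]))
    return None

def restarting_anytime(start, is_goal, successors, h):
    """Each time a solution is found, restart from the initial state with its cost as bound."""
    best, bound = None, float("inf")
    while True:
        sol = bounded_greedy(start, is_goal, successors, h, bound)
        if sol is None:
            return best
        best, bound = sol, sol[0]
```

The restart lets the search avoid the early commitments of the previous greedy pass, since paths as expensive as the incumbent solution are now pruned.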
Preferred Operators and Deferred Evaluation in Satisficing Planning
Cited by 40 (13 self)
Heuristic forward search is the dominant approach to satisficing planning to date. Most successful planning systems, however, go beyond plain heuristic search by employing various search-enhancement techniques. One example is the use of helpful actions or preferred operators, providing information which may complement heuristic values. A second example is deferred heuristic evaluation, a search variant which can reduce the number of costly node evaluations. Despite the widespread use of these search-enhancement techniques, however, few results have been published examining their usefulness. In particular, while various ways of using, and possibly combining, these techniques are conceivable, no work to date has studied the performance of such variations. In this paper, we address this gap by examining the use of preferred operators and deferred evaluation in a variety of settings within best-first search. Our findings are consistent with and help explain the good performance of the winners of the satisficing tracks at IPC 2004 and 2008.
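Deferred heuristic evaluation can be sketched as follows: a successor is queued with its parent's heuristic value and only evaluated itself once it is expanded, so nodes that are never expanded cost no evaluation at all. This is a toy illustration on an invented graph, not a planner implementation.

```python
import heapq

def deferred_greedy(start, is_goal, successors, h):
    """Greedy best-first search with deferred evaluation: children inherit the
    parent's h-value in the queue; h is computed only upon expansion."""
    evaluations = 0
    open_list = [(0, start, [start])]  # root queued with a dummy value
    closed = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return path, evaluations
        hval = h(state)           # evaluated only now, on expansion
        evaluations += 1
        for nxt, _cost in successors(state):
            if nxt not in closed:
                heapq.heappush(open_list, (hval, nxt, path + [nxt]))
    return None, evaluations
```

An eager variant would instead evaluate every node as it is generated; when the branching factor is high and evaluations are expensive, the deferred variant can save substantial time at the price of less informed ordering.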
Probabilistic plan recognition using off-the-shelf classical planners. In Proc. AAAI 2010
Cited by 37 (5 self)
Plan recognition is the problem of inferring the goals and plans of an agent after observing its behavior. Recently, it has been shown that this problem can be solved efficiently, without the need of a plan library, using slightly modified planning algorithms. In this work, we extend this approach to the more general problem of probabilistic plan recognition, where a probability distribution over the set of goals is sought under the assumptions that actions have deterministic effects and both agent and observer have complete information about the initial state. We show that this problem can be solved efficiently using classical planners, provided that the probability of a partially observed execution given a goal is defined in terms of the cost difference of achieving the goal under two conditions: complying with the observations, and not complying with them. These costs, and hence the posterior goal probabilities, are computed by means of two calls to a classical planner that no longer has to be modified in any way. A number of examples are considered to illustrate the quality, flexibility, and scalability of the approach.
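The cost-difference formulation can be made concrete: for each candidate goal, a planner is called twice to obtain the cost of reaching the goal while complying with the observations and while not complying, and the differences are turned into a posterior via Bayes' rule. The sigmoid likelihood and the β parameter below are the standard Boltzmann-style choice, and the cost numbers are invented; this is a sketch of the idea, not the paper's system.

```python
import math

def goal_posterior(costs, beta=1.0, prior=None):
    """costs maps each goal G to (cost complying with observations O, cost not complying).
    P(O|G) is a sigmoid of the cost difference: the cheaper it is to comply,
    the better G explains the observations. Posterior by Bayes' rule."""
    goals = list(costs)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}  # uniform prior
    likelihood = {}
    for g, (c_comply, c_not) in costs.items():
        delta = c_comply - c_not
        likelihood[g] = 1.0 / (1.0 + math.exp(beta * delta))
    z = sum(likelihood[g] * prior[g] for g in goals)
    return {g: likelihood[g] * prior[g] / z for g in goals}
```

A goal for which complying is 3 units cheaper than not complying receives most of the posterior mass relative to one for which complying is more expensive.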
Automatic Derivation of Memoryless Policies and Finite-State Controllers Using Classical Planners
Cited by 36 (11 self)
Finite-state and memoryless controllers are simple action selection mechanisms widely used in domains such as video games and mobile robotics. Memoryless controllers stand for functions that map observations into actions, while finite-state controllers generalize memoryless ones with a finite amount of memory. In contrast to the policies obtained from MDPs and POMDPs, finite-state controllers have two advantages: they are often extremely compact, involving a small number of controller states or none at all, and they are general, applying to many problems and not just one. A limitation of finite-state controllers is that they must be written by hand. In this work, we address this limitation, and develop a method for deriving finite-state controllers automatically from models. These models represent a class of contingent problems where actions are deterministic and some fluents are observable. The problem of deriving a controller from such models is converted into a conformant planning problem that is solved using classical planners, taking advantage of a complete translation introduced recently. The controllers derived in this way are 'general' in the sense that they do not solve the original problem only, but many variations as well, including changes in the size of the problem or in the uncertainty of the initial situation and action effects. Experiments illustrating the derivation of such controllers are presented.
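Executing such a controller is the easy part; deriving it (via the conformant-planning compilation) is what the paper addresses. Below is a toy executor for a finite-state controller on an invented one-dimensional walk problem; with a single controller state it degenerates to a memoryless controller.

```python
def run_fsc(fsc, q0, transition, observe, state, is_goal, max_steps=100):
    """Run a finite-state controller given as a table mapping
    (controller state, observation) -> (action, next controller state).
    Returns True if the goal is reached within max_steps."""
    q = q0
    for _ in range(max_steps):
        if is_goal(state):
            return True
        action, q = fsc[(q, observe(state))]  # look up action, update memory
        state = transition(state, action)
    return False
```

For an agent on positions 0..4 that can only observe whether it stands at a wall, a one-state controller that always moves right already solves the problem from any starting position, which is the kind of generality the abstract describes.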
The More, the Merrier: Combining Heuristic Estimators for Satisficing Planning (Extended Version), 2010
Cited by 25 (2 self)
The problem of effectively combining multiple heuristic estimators has been studied extensively in the context of optimal planning, but not in the context of satisficing planning. To narrow this gap, we empirically examine several ways of exploiting the information of multiple heuristics in a satisficing best-first search algorithm, comparing their performance in terms of coverage, plan quality, and runtime. Our empirical results indicate that using multiple heuristics for satisficing search is indeed useful and that the best results are not obtained by the most obvious combination methods.
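One standard way of combining estimators in this setting is alternation: keep one open list per heuristic and take expansions from the lists in round-robin order. The sketch below uses an invented graph with one informed and one blind heuristic; it illustrates the mechanism only, not the paper's experimental code.

```python
import heapq
from itertools import cycle

def alternating_search(start, is_goal, successors, heuristics):
    """Greedy best-first search that alternates expansions between one
    open list per heuristic estimator, round-robin."""
    opens = [[(h(start), 0, start, [start])] for h in heuristics]
    best_g = {start: 0}
    for i in cycle(range(len(heuristics))):
        if all(not o for o in opens):
            return None
        if not opens[i]:
            continue
        _, g, state, path = heapq.heappop(opens[i])
        if g > best_g.get(state, float("inf")):
            continue  # stale entry left behind by a cheaper rediscovery
        if is_goal(state):
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                for h, o in zip(heuristics, opens):
                    heapq.heappush(o, (h(nxt), g2, nxt, path + [nxt]))
```

Each heuristic periodically gets to pick the next node by its own ranking, so a single misleading estimator cannot dominate the search.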
Sound and Complete Landmarks for And/Or Graphs
Cited by 20 (8 self)
Landmarks for a planning problem are subgoals that are necessarily made true at some point in the execution of any plan. Since verifying that a fact is a landmark is PSPACE-complete, earlier approaches have focused on finding landmarks for the delete relaxation Π+. Furthermore, some of these approaches have approximated this set of landmarks, although it has been shown that the complete set of causal delete-relaxation landmarks can be identified in polynomial time by a simple procedure over the relaxed planning graph. Here, we give a declarative characterisation of this set of landmarks and show that the procedure computes the landmarks described by our characterisation. Building on this, we observe that the procedure can be applied to any delete-relaxation problem, and take advantage of a recent compilation of the m-relaxation of a problem into a problem with no delete effects to extract landmarks that take into account delete effects in the original problem. We demonstrate that this approach finds strictly more causal landmarks than previous approaches and discuss the relationship between increased computational effort and experimental performance, using these landmarks in a recently proposed admissible landmark-counting heuristic.
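The causal-landmark characterisation, roughly LM(p) = {p} together with the intersection, over all achievers a of p, of the union of LM(q) for q in pre(a), can be computed by a simple fixpoint over a delete-free task. The sketch below uses an invented three-fact task and shows the propagation idea only, not the paper's implementation.

```python
def causal_landmarks(init, actions, goals):
    """Fixpoint landmark computation for a delete-free (relaxed) STRIPS task.
    actions is a list of (precondition set, add-effect set) pairs."""
    # relaxed reachability: all facts reachable from init
    facts = set(init)
    while True:
        new = set()
        for pre, add in actions:
            if pre <= facts:
                new |= add - facts
        if not new:
            break
        facts |= new
    # start from the largest candidate sets and shrink to the fixpoint
    lm = {p: ({p} if p in init else set(facts)) for p in facts}
    changed = True
    while changed:
        changed = False
        for p in facts - set(init):
            achievers = [pre for pre, add in actions if p in add and pre <= facts]
            candidate = {p} | set.intersection(
                *[set().union(*[lm[q] for q in pre]) if pre else set()
                  for pre in achievers]
            )
            if candidate != lm[p]:
                lm[p] = candidate
                changed = True
    return set().union(*[lm[g] for g in goals])
```

When every path to the goal passes through an intermediate fact, that fact appears in the result; adding an alternative achiever that bypasses it removes it, since the intersection over achievers no longer contains it.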
Resource-constrained planning: A Monte Carlo random walk approach, 2012
Cited by 19 (11 self)
The need to economize limited resources, such as fuel or money, is a ubiquitous feature of planning problems. If the resources cannot be replenished, the planner must make do with the initial supply. It is then of paramount importance how constrained the problem is, i.e., whether and to what extent the initial resource supply exceeds the minimum need. While there is a large body of literature on numeric planning and planning with resources, such resource constrainedness has only been scantily investigated. We herein start to address this in more detail. We generalize the previous notion of resource constrainedness, characterized through a numeric problem feature C ≥ 1, to the case of multiple resources. We implement an extended benchmark suite controlling C. We conduct a large-scale study of the current state of the art as a function of C, highlighting which techniques contribute to success. We introduce two new techniques on top of a recent Monte Carlo Random Walk method, resulting in a planner that, in these benchmarks, outperforms previous planners when resources are scarce (C close to 1). We investigate the parameters influencing the performance of that planner, and we show that one of the two new techniques works well also on the regular IPC benchmarks.
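The constrainedness feature can be illustrated directly: for a single resource, C is the ratio of initial supply to the minimum amount any plan must consume, and with several resources there is one such ratio per resource. How the ratios are aggregated into one number is not stated in the abstract; reporting the tightest (minimum) ratio below is an assumption for illustration.

```python
def constrainedness(supply, min_need):
    """Per-resource constrainedness C_r = initial supply / minimum need.
    C_r close to 1 means the resource is scarce; large C_r means slack.
    Returns the tightest ratio (an assumed aggregation) and all ratios."""
    ratios = {r: supply[r] / min_need[r] for r in min_need if min_need[r] > 0}
    return min(ratios.values()), ratios
```

A task with 12 units of fuel where every plan needs at least 10 is much more constrained (C = 1.2) than one with twice the money any plan requires, which matches the paper's focus on the C-close-to-1 regime.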
Analyzing search topology without running any search: On the connection between causal graphs and h+. JAIR, 2011
Cited by 19 (3 self)
The ignoring delete lists relaxation is of paramount importance for both satisficing and optimal planning. In earlier work, it was observed that the optimal relaxation heuristic h+ has amazing qualities in many classical planning benchmarks, in particular pertaining to the complete absence of local minima. The proofs of this are hand-made, raising the question whether such proofs can be carried out automatically by domain analysis techniques. In contrast to earlier disappointing results – the analysis method has exponential runtime and succeeds only in two extremely simple benchmark domains – we herein answer this question in the affirmative. We establish connections between causal graph structure and h+ topology. This results in low-order polynomial time analysis methods, implemented in a tool we call TorchLight. Of the 12 domains where the absence of local minima has been proved, TorchLight gives strong success guarantees in 8 domains. Empirically, its analysis exhibits strong performance in a further 2 of these domains, plus in 4 more domains where local minima may exist but are rare. In this way, TorchLight can distinguish “easy” domains from “hard” ones. By summarizing structural reasons for analysis failure, TorchLight also provides diagnostic output indicating domain aspects that may cause local minima.