Results 1–10 of 91
Landmarks, Critical Paths and Abstractions: What’s the Difference Anyway?
, 2009
Abstract

Cited by 111 (28 self)
Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that admissible heuristics based on these ideas are in fact very closely related. Exploiting this relationship, we introduce a new admissible heuristic called the landmark cut heuristic, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
How good is almost perfect
 In ICAPS Workshop on Heuristics for Domain-Independent Planning
, 2007
Abstract

Cited by 65 (4 self)
Heuristic search using algorithms such as A* and IDA* is the prevalent method for obtaining optimal sequential solutions for classical planning tasks. Theoretical analyses of these classical search algorithms, such as the well-known results of Pohl, Gaschnig and Pearl, suggest that such heuristic search algorithms can obtain better than exponential scaling behaviour, provided that the heuristics are accurate enough. Here, we show that for a number of common planning benchmark domains, including ones that admit optimal solutions in polynomial time, general search algorithms such as A* must necessarily explore an exponential number of search nodes even under the optimistic assumption of almost perfect heuristic estimators, whose heuristic error is bounded by a small additive constant. Our results shed some light on the comparatively bad performance of optimal heuristic search approaches in “simple” planning domains such as GRIPPER. They suggest that in many applications, further improvements in runtime require changes to parts of the search algorithm other than the heuristic estimator.
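The search procedure this analysis concerns can be sketched minimally. The following is an illustrative A* in Python, not any planner's actual implementation; a toy line graph stands in for a planning state space, and the expansion counter mirrors the quantity the paper bounds:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Minimal A*: `neighbors(s)` yields (successor, cost) pairs and `h`
    is an admissible heuristic. Returns (plan cost, expanded node count)."""
    open_heap = [(h(start), 0, start)]   # entries are (f, g, state)
    best_g = {start: 0}
    expansions = 0
    while open_heap:
        f, g, s = heapq.heappop(open_heap)
        if g > best_g.get(s, float("inf")):
            continue  # stale heap entry
        if s == goal:
            return g, expansions
        expansions += 1
        for t, c in neighbors(s):
            if g + c < best_g.get(t, float("inf")):
                best_g[t] = g + c
                heapq.heappush(open_heap, (g + c + h(t), g + c, t))
    return None, expansions

# Toy line graph 0 -> 1 -> 2 -> 3 with unit costs and a perfect heuristic.
def neighbors(s):
    return [(s + 1, 1)] if s < 3 else []

cost, expanded = astar(0, 3, neighbors, lambda s: 3 - s)
```

Even with a perfect heuristic, A* still expands every state on the optimal path; the paper's point is that in some domains the number of expansions remains exponential even when the heuristic error is a small additive constant.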
Concise finite-domain representations for PDDL planning tasks
, 2009
Abstract

Cited by 62 (13 self)
We introduce an efficient method for translating planning tasks specified in the standard PDDL formalism into a concise grounded representation that uses finite-domain state variables instead of the straightforward propositional encoding. Translation is performed in four stages. Firstly, we transform the input task into an equivalent normal form expressed in a restricted fragment of PDDL. Secondly, we synthesize invariants of the planning task that identify groups of mutually exclusive propositions which can be represented by a single finite-domain variable. Thirdly, we perform an efficient relaxed reachability analysis using logic programming techniques to obtain a grounded representation of the input. Finally, we combine the results of the second and third stages to generate the final grounded finite-domain representation. The presented approach was originally implemented as part of the Fast Downward planning system for the 4th International Planning Competition (IPC4). Since then, it has been used in a number of other contexts with considerable success, and the use of concise finite-domain representations has become a common feature of state-of-the-art planners.
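The invariant-to-variable step described above can be illustrated with a small sketch (hypothetical names, not the Fast Downward translator): each group of mutually exclusive propositions becomes one finite-domain variable, with an extra value for "none of the group holds", and uncovered propositions fall back to binary variables:

```python
# Illustrative sketch: encode mutex groups of propositions as
# finite-domain variables instead of one boolean per proposition.
def to_finite_domain(propositions, mutex_groups):
    variables = {}
    covered = set()
    for i, group in enumerate(mutex_groups):
        # One variable per group; "<none>" covers states where no
        # proposition in the group is true.
        variables[f"var{i}"] = list(group) + ["<none>"]
        covered.update(group)
    for p in propositions:
        if p not in covered:
            variables[p] = [p, "<none>"]  # binary fallback
    return variables

props = ["at-A", "at-B", "at-C", "holding-key"]
groups = [["at-A", "at-B", "at-C"]]  # a robot is in at most one location
fdr = to_finite_domain(props, groups)
```

Here four propositions collapse into two variables, which is the source of the representation's conciseness.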
A general theory of additive state space abstractions
 JAIR
Abstract

Cited by 25 (15 self)
Informally, a set of abstractions of a state space S is additive if the distance between any two states in S is always greater than or equal to the sum of the corresponding distances in the abstract spaces. The first known additive abstractions, called disjoint pattern databases, were experimentally demonstrated to produce state-of-the-art performance on certain state spaces. However, previous applications were restricted to state spaces with special properties, which precludes disjoint pattern databases from being defined for several commonly used testbeds, such as Rubik’s Cube, TopSpin and the Pancake puzzle. In this paper we give a general definition of additive abstractions that can be applied to any state space and prove that heuristics based on additive abstractions are consistent as well as admissible. We use this new definition to create additive abstractions for these testbeds and show experimentally that well-chosen additive abstractions can reduce search time substantially for the (18,4)-TopSpin puzzle and by three orders of magnitude over state-of-the-art methods for the 17-Pancake puzzle. We also derive a way of testing if the heuristic value returned by additive abstractions is provably too low and show that the use of this test can reduce search time for the 15-puzzle and TopSpin by roughly a factor of two.
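The core additivity idea can be sketched as follows, assuming precomputed pattern databases (the names and toy data are illustrative): each abstraction projects a state and looks up an abstract distance, and because additivity guarantees the sum never overestimates, the summed lookups remain an admissible heuristic:

```python
# Sketch of an additive abstraction heuristic. Additivity requires each
# action's cost to be counted in at most one abstraction, so the sum of
# abstract distances never exceeds the true distance.
def additive_h(state, abstractions):
    """`abstractions` is a list of (project, pdb) pairs: `project` maps a
    concrete state to an abstract state, `pdb` maps that to a distance."""
    return sum(pdb[project(state)] for project, pdb in abstractions)

# Toy two-variable state (x, y); each pattern database tracks one
# variable's distance to its goal value 0.
pdb_x = {0: 0, 1: 1, 2: 2}
pdb_y = {0: 0, 1: 1}
h = additive_h((2, 1), [(lambda s: s[0], pdb_x),
                        (lambda s: s[1], pdb_y)])
```

Disjoint pattern databases are the special case where the projections partition the state variables.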
Scalable, Parallel Best-First Search for Optimal Sequential Planning
, 2009
Abstract

Cited by 25 (3 self)
Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. In particular, we focus on an approach which distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 128 processors, a significant increase compared to previous work in parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require hundreds of gigabytes of RAM. We also show that this approach scales well for a single, shared-memory multicore machine.
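The hash-based distribution scheme can be sketched minimally (illustrative code, not the authors' implementation): every process maps a given state to the same owning processor, so all duplicates of a state land in one processor's local open/closed lists and duplicate detection needs no global synchronization:

```python
import hashlib

# Sketch of hash-based work distribution: the owner of a state is
# determined by hashing the state, so every process computes the same
# owner for the same state.
def owner(state, num_procs):
    digest = hashlib.md5(repr(state).encode()).hexdigest()
    return int(digest, 16) % num_procs

# Simulate routing generated states to per-processor queues.
queues = {p: [] for p in range(4)}
for state in [(1, 2), (3, 4), (1, 2)]:   # note the duplicate (1, 2)
    queues[owner(state, 4)].append(state)
```

Because both copies of `(1, 2)` are routed to the same queue, that processor alone can detect and prune the duplicate against its local closed list.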
Optimal additive composition of abstraction-based admissible heuristics
 In ICAPS (this volume)
, 2008
Abstract

Cited by 22 (9 self)
We describe a procedure that takes a classical planning task, a forward-search state, and a set of abstraction-based admissible heuristics, and derives an optimal additive composition of these heuristics with respect to the given state. Most importantly, we show that this procedure is polynomial-time for arbitrary sets of all abstraction-based heuristics known to us, such as PDBs, constrained PDBs, merge-and-shrink abstractions, fork-decomposition structural patterns, and structural patterns based on tractable constraint optimization.
Computing perfect heuristics in polynomial time: On bisimulation and merge-and-shrink abstraction in optimal planning
 In Proc. of the 22nd International Joint Conference on AI (IJCAI’11)
, 2011
Abstract

Cited by 20 (9 self)
A* with admissible heuristics is a very successful approach to optimal planning. But how to derive such heuristics automatically? Merge-and-shrink abstraction (M&S) is a general approach to heuristic design whose key advantage is its capability to make very fine-grained choices in defining abstractions. However, little is known about how to actually make these choices. We address this via the well-known notion of bisimulation. When aggregating only bisimilar states, M&S yields a perfect heuristic. Alas, bisimulations are exponentially large even in trivial domains. We show how to apply label reduction – not distinguishing between certain groups of operators – without incurring any information loss, while potentially reducing bisimulation size exponentially. In several benchmark domains, the resulting algorithm computes perfect heuristics in polynomial time. Empirically, we show that approximating variants of this algorithm improve the state of the art in M&S heuristics. In particular, a hybrid of two such variants is competitive with the leading heuristic LM-cut.
Resource-constrained planning: A Monte Carlo random walk approach
, 2012
Abstract

Cited by 19 (11 self)
The need to economize limited resources, such as fuel or money, is a ubiquitous feature of planning problems. If the resources cannot be replenished, the planner must make do with the initial supply. It is then of paramount importance how constrained the problem is, i.e., whether and to what extent the initial resource supply exceeds the minimum need. While there is a large body of literature on numeric planning and planning with resources, such resource constrainedness has only been scantily investigated. We herein start to address this in more detail. We generalize the previous notion of resource constrainedness, characterized through a numeric problem feature C ≥ 1, to the case of multiple resources. We implement an extended benchmark suite controlling C. We conduct a large-scale study of the current state of the art as a function of C, highlighting which techniques contribute to success. We introduce two new techniques on top of a recent Monte Carlo random walk method, resulting in a planner that, in these benchmarks, outperforms previous planners when resources are scarce (C close to 1). We investigate the parameters influencing the performance of that planner, and we show that one of the two new techniques also works well on the regular IPC benchmarks.
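The constrainedness feature C, generalized to several resources, can be sketched as a simple ratio computation (the function name and numbers below are illustrative, not the paper's implementation): for each resource, divide the initial supply by the minimum amount any plan must consume, so C = 1 means exactly enough and larger values mean slack:

```python
# Sketch of per-resource constrainedness: initial supply divided by the
# minimum consumption of any plan. C >= 1 is required for solvability.
def constrainedness(initial_supply, minimum_need):
    return {r: initial_supply[r] / minimum_need[r] for r in minimum_need}

C = constrainedness({"fuel": 30.0, "money": 12.0},
                    {"fuel": 20.0, "money": 12.0})
# "money" is the tight resource here: no plan can waste any of it.
```

Problems with C close to 1 for some resource are the scarce-resource regime in which the paper's planner is reported to excel.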
To Max or not to Max: Online Learning for Speeding Up Optimal Planning
, 2010
Abstract

Cited by 17 (7 self)
It is well known that there cannot be a single “best” heuristic for optimal planning in general. One way of overcoming this is by combining admissible heuristics (e.g. by using their maximum), which requires computing numerous heuristic estimates at each state. However, there is a trade-off between the time spent on computing these heuristic estimates for each state and the time saved by reducing the number of expanded states. We present a novel method that reduces the cost of combining admissible heuristics for optimal search, while maintaining its benefits. Based on an idealized search-space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for that decision rule, and employ the learned model to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms each of the individual heuristics that were used, as well as their regular maximum.
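The trade-off this abstract describes can be sketched as follows (the heuristics and the decision rule below are toy stand-ins, not the paper's learned model): maximizing over all heuristics pays for every evaluation, while a per-state selector pays for only one:

```python
# Sketch of the max-versus-select trade-off for combining admissible
# heuristics. `choose` stands in for the paper's learned decision rule.
def max_heuristic(state, heuristics):
    return max(h(state) for h in heuristics)     # accurate, all evaluated

def selective_heuristic(state, heuristics, choose):
    return heuristics[choose(state)](state)      # one evaluation per state

h1 = lambda s: abs(s)            # toy cheap heuristic
h2 = lambda s: abs(s) + s % 2    # toy more informed heuristic
hmax = max_heuristic(5, [h1, h2])
hsel = selective_heuristic(5, [h1, h2], choose=lambda s: 1)
```

When the selector picks the dominating heuristic for a state, the selective value matches the maximum at roughly half the evaluation cost, which is the saving the online learning approach targets.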
Implicit abstraction heuristics
Abstract

Cited by 17 (8 self)
State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. These heuristics are inherently limited, nonetheless, because the size of the abstract space must be bounded by some constant, even if a very large one. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies to the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the trade-off between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a “database.” By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a “database” exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the “databased” fork-decomposition heuristics favorably competes with the state of the art of cost-optimal planning.