Results 1 - 10 of 65
Landmarks, Critical Paths and Abstractions: What’s the Difference Anyway? (2009)
"... Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that a ..."
Abstract
-
Cited by 112 (28 self)
- Add to MetaCart
Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that admissible heuristics based on these ideas are in fact very closely related. Exploiting this relationship, we introduce a new admissible heuristic called the landmark cut heuristic, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
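The landmark cut procedure itself is beyond the scope of this listing, but the principle underlying landmark-based admissible estimates is easy to illustrate: given pairwise disjoint disjunctive action landmarks (sets of actions of which any plan must use at least one), summing the cheapest cost within each landmark yields an admissible lower bound. A minimal sketch, assuming the disjoint landmarks are already given; the names are illustrative, not taken from the paper:

    def landmark_lower_bound(disjoint_landmarks, cost):
        """Admissible bound from pairwise disjoint disjunctive action landmarks.

        disjoint_landmarks: iterable of sets of action names; every plan must
        use at least one action from each set, and the sets share no actions.
        cost: dict mapping action name -> non-negative cost.
        """
        # Each landmark forces at least its cheapest member into any plan, and
        # disjointness guarantees no action cost is counted twice.
        return sum(min(cost[a] for a in lm) for lm in disjoint_landmarks)

    # Hypothetical toy instance: two disjoint landmarks over four actions.
    costs = {"load": 2, "drive-a": 3, "drive-b": 1, "unload": 2}
    landmarks = [{"load"}, {"drive-a", "drive-b"}]
    print(landmark_lower_bound(landmarks, costs))  # 2 + 1 = 3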
The Joy of Forgetting: Faster Anytime Search via Restarting
"... {jtd7, ruml} at cs.unh.edu Anytime search algorithms solve optimisation problems by quickly finding a usually suboptimal solution and then finding improved solutions when given additional time. To deliver a solution quickly, they are typically greedy with respect to the heuristic cost-to-go estimate ..."
Abstract
-
Cited by 47 (15 self)
- Add to MetaCart
(Show Context)
Anytime search algorithms solve optimisation problems by quickly finding a usually suboptimal solution and then finding improved solutions when given additional time. To deliver a solution quickly, they are typically greedy with respect to the heuristic cost-to-go estimate h. In this paper, we first show that this low-h bias can cause poor performance if the heuristic is inaccurate. Building on this observation, we then present a new anytime approach that restarts the search from the initial state every time a new solution is found. We demonstrate the utility of our method via experiments in PDDL planning as well as other domains. We show that it is particularly useful for hard optimisation problems like planning where heuristics may be quite inaccurate and inadmissible, and where the greedy solution makes early mistakes.
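The restarting idea can be sketched on top of plain weighted A*, assuming a standard successor/heuristic interface; the weight schedule, the duplicate handling, and the paper's actual algorithm are more refined than this, and all names below are illustrative:

    import heapq, itertools

    def weighted_a_star(start, goal, succ, h, w, incumbent_cost):
        """One greedy pass: weighted A* that prunes against the incumbent."""
        tie = itertools.count()
        open_list = [(w * h(start), next(tie), 0, start)]
        best_g = {start: 0}
        while open_list:
            _, _, g, s = heapq.heappop(open_list)
            if g > best_g.get(s, float("inf")):
                continue                              # stale queue entry
            if g + h(s) >= incumbent_cost:
                continue                              # cannot beat the incumbent
            if s == goal:
                return g
            for t, c in succ(s):
                g2 = g + c
                if g2 < best_g.get(t, float("inf")):
                    best_g[t] = g2
                    heapq.heappush(open_list, (g2 + w * h(t), next(tie), g2, t))
        return None

    def restarting_anytime_search(start, goal, succ, h,
                                  weights=(5.0, 3.0, 2.0, 1.5, 1.0)):
        """Each time a pass finds an improved solution, forget its open list
        and restart from the initial state, here with a lower weight."""
        best = float("inf")
        for w in weights:
            cost = weighted_a_star(start, goal, succ, h, w, best)
            if cost is not None:
                best = cost
        return best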
Combined Task and Motion Planning for Mobile Manipulation (2010)
Faster Than Weighted A*: An Optimistic Approach to Bounded Suboptimal Search
"... Planning, scheduling, and other applications of heuristic search often demand we tackle problems that are too large to solve optimally. In this paper, we address the problem of solving shortest-path problems as quickly as possible while guaranteeing that solution costs are bounded within a specified ..."
Abstract
-
Cited by 31 (12 self)
- Add to MetaCart
Planning, scheduling, and other applications of heuristic search often demand we tackle problems that are too large to solve optimally. In this paper, we address the problem of solving shortest-path problems as quickly as possible while guaranteeing that solution costs are bounded within a specified factor of optimal. Thirty-eight years after its publication, weighted A* remains the best-performing algorithm for general-purpose bounded suboptimal search. However, it typically returns solutions that are better than a given bound requires. We show how to take advantage of this behavior to speed up search while retaining bounded suboptimality. We present an optimistic algorithm that uses a weight higher than the user’s bound and then attempts to prove that the resulting solution adheres to the bound. While simple, we demonstrate that this algorithm consistently surpasses weighted A* in four different benchmark domains including temporal planning and gridworld pathfinding.
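A minimal sketch of the optimistic two-phase idea, assuming an admissible heuristic h: search with a weight above the user's bound to find an incumbent quickly, then expand remaining nodes by unweighted f = g + h until bound * f_min >= incumbent, which proves the incumbent is within the bound (f_min lower-bounds the optimal cost). The weight choice, the cleanup rule, and all names are illustrative simplifications of the paper's algorithm:

    import heapq, itertools

    def optimistic_search(start, goal, succ, h, bound):
        """Sketch: aggressively weighted pass, then a cleanup pass that proves
        the incumbent solution is within `bound` of optimal (h admissible)."""
        weight = 2.0 * bound - 1.0      # optimistic: above the bound
        tie = itertools.count()
        open_list = [(weight * h(start), next(tie), 0, start)]
        best_g = {start: 0}
        incumbent = float("inf")

        def expand(g, s, key):
            for t, c in succ(s):
                g2 = g + c
                if g2 < best_g.get(t, float("inf")):
                    best_g[t] = g2
                    heapq.heappush(open_list, (key(g2, t), next(tie), g2, t))

        # Phase 1: greedy pass ordered by g + weight * h; stop at the first solution.
        while open_list:
            _, _, g, s = heapq.heappop(open_list)
            if g > best_g.get(s, float("inf")):
                continue                                    # stale entry
            if s == goal:
                incumbent = g
                break
            expand(g, s, lambda g2, t: g2 + weight * h(t))

        # Phase 2: cleanup pass ordered by admissible f = g + h. The smallest f
        # on the open list lower-bounds the optimal cost, so once
        # bound * f_min >= incumbent, the incumbent is provably within the bound.
        open_list = [(g + h(s), next(tie), g, s)
                     for _, _, g, s in open_list if g <= best_g.get(s, float("inf"))]
        heapq.heapify(open_list)
        while open_list and bound * open_list[0][0] < incumbent:
            _, _, g, s = heapq.heappop(open_list)
            if g > best_g.get(s, float("inf")):
                continue
            if s == goal:
                incumbent = min(incumbent, g)
                continue
            expand(g, s, lambda g2, t: g2 + h(t))
        return incumbent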
Scalable, Parallel Best-First Search for Optimal Sequential Planning (2009)
"... Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distribut ..."
Abstract
-
Cited by 26 (4 self)
- Add to MetaCart
(Show Context)
Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed memory computing clusters. In particular, we focus on an approach which distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 128 processors, a significant increase compared to previous work in parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require hundreds of gigabytes of RAM. We also show that this approach scales well on a single, shared-memory multicore machine.
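The hash-based work distribution at the heart of this approach is easy to show in isolation: every state is owned by exactly one worker, determined by a hash of the state, and generated successors are sent to their owner's queue instead of being kept locally, so duplicate detection for a state always happens on the same worker. A minimal single-machine sketch, assuming a hashable state representation; the worker and queue names are illustrative, not taken from the planner:

    import hashlib

    def owner(state, num_workers):
        """Deterministically map a state to the worker that owns it."""
        digest = hashlib.md5(repr(state).encode()).hexdigest()
        return int(digest, 16) % num_workers

    def distribute_successors(state, g, succ, queues):
        """Send each generated successor to its owning worker's work queue.
        In the real setting these queues would be inter-process message
        buffers; here they are plain lists indexed by worker id."""
        for t, cost in succ(state):
            queues[owner(t, len(queues))].append((g + cost, t))

    # Toy usage: 4 workers; successors of a grid cell get scattered by hash.
    queues = [[] for _ in range(4)]
    succ = lambda s: [((s[0] + dx, s[1] + dy), 1)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    distribute_successors((0, 0), 0, succ, queues)
    print([len(q) for q in queues])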
To max or not to max: Online learning for speeding up optimal planning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2010)
"... Abstract It is well known that there cannot be a single "best" heuristic for optimal planning in general. One way of overcoming this is by combining admissible heuristics (e.g. by using their maximum), which requires computing numerous heuristic estimates at each state. However, there is ..."
Abstract
-
Cited by 18 (7 self)
- Add to MetaCart
It is well known that there cannot be a single "best" heuristic for optimal planning in general. One way of overcoming this is by combining admissible heuristics (e.g. by using their maximum), which requires computing numerous heuristic estimates at each state. However, there is a tradeoff between the time spent on computing these heuristic estimates for each state, and the time saved by reducing the number of expanded states. We present a novel method that reduces the cost of combining admissible heuristics for optimal search, while maintaining its benefits. Based on an idealized search space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for that decision rule, and employ the learned model to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms each of the individual heuristics that were used, as well as their regular maximum.
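The per-state decision can be sketched without the learning machinery: always compute the cheap heuristic, and ask a decision rule whether the expensive heuristic is worth its evaluation time at this state. A minimal sketch in which the decision rule is a caller-supplied stub standing in for the paper's learned model; all names are illustrative:

    def selective_max(state, h_cheap, h_expensive, worth_computing):
        """Per-state heuristic selection: take the max of two admissible
        heuristics, but pay for the expensive one only when the decision rule
        predicts it is likely to raise the estimate enough to save expansions."""
        value = h_cheap(state)
        if worth_computing(state, value):            # learned model in the paper;
            value = max(value, h_expensive(state))   # a simple stub here
        return value

    # Hypothetical stub: consult the expensive heuristic only near the goal,
    # where extra pruning tends to pay off; a learned classifier would replace it.
    def near_goal_rule(state, cheap_value):
        return cheap_value <= 3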
Completeness and Optimality Preserving Reduction for Planning
"... Traditional AI search methods search in a state space typically modelled as a directed graph. Prohibitively large sizes of state space graphs make complete or optimal search expensive. A key observation, as exemplified by the SAS+ formalism for planning, is that most commonly a state-space graph can ..."
Abstract
-
Cited by 15 (3 self)
- Add to MetaCart
(Show Context)
Traditional AI search methods search in a state space typically modelled as a directed graph. Prohibitively large sizes of state space graphs make complete or optimal search expensive. A key observation, as exemplified by the SAS+ formalism for planning, is that a state-space graph can most commonly be decomposed into subgraphs linked by constraints. We propose a novel space reduction algorithm that exploits such structure. Our analysis reveals that standard search algorithms may explore many redundant paths. Our method provides an automatic way to remove such redundancy. At each state, we expand only the subgraphs within a dependency closure satisfying certain sufficient conditions instead of all the subgraphs. Theoretically we prove that the proposed algorithm is completeness-preserving as well as optimality-preserving. We show that our reduction method can significantly reduce the search cost on a collection of planning domains.
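The flavor of the reduction can be sketched as restricting expansion to actions whose affected variables lie inside a dependency closure of the causal structure, deferring the rest; the paper's sufficient conditions for choosing the closure are what make this completeness- and optimality-preserving, and they are omitted here. All names below are illustrative assumptions:

    def dependency_closure(var, depends_on):
        """All variables that `var` transitively depends on.
        depends_on: dict mapping a variable to the set of variables it depends on."""
        closure, stack = {var}, [var]
        while stack:
            for u in depends_on.get(stack.pop(), ()):
                if u not in closure:
                    closure.add(u)
                    stack.append(u)
        return closure

    def restricted_expansion(applicable_actions, affected_vars, closure):
        """Expand only actions whose affected variables fall inside the chosen
        dependency closure; the remaining applicable actions are deferred."""
        return [a for a in applicable_actions if affected_vars(a) <= closure]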
Incremental Lower Bounds for Additive Cost Planning Problems
"... We present a novel method for computing increasing lower bounds on the cost of solving planning problems, based on repeatedly solving and strengthening the delete relaxation of the problem. Strengthening is done by compiling select conjunctions into new atoms, similar to theP m ⋆ construction. Becau ..."
Abstract
-
Cited by 15 (7 self)
- Add to MetaCart
We present a novel method for computing increasing lower bounds on the cost of solving planning problems, based on repeatedly solving and strengthening the delete relaxation of the problem. Strengthening is done by compiling select conjunctions into new atoms, similar to the P^m construction. Because it does not rely on search in the state space, this method does not suffer from some of the weaknesses of admissible search algorithms, and it is therefore able to prove higher lower bounds for many problems that are too hard for optimal planners to solve, narrowing the gap between the lower bound and the cost of the best known plan and providing better assurances of plan quality.
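The overall loop is simple to state even though each step hides a hard subproblem: solve the delete relaxation optimally to obtain a lower bound, pick a conjunction whose compilation would strengthen the relaxation, compile it into a new atom, and repeat until time runs out or the bound meets the best known plan cost. A structural sketch in which the solver, selection, and compilation steps are placeholder functions standing in for the paper's machinery:

    def incremental_lower_bound(task, best_known_plan_cost, time_left,
                                solve_relaxation, pick_conjunction, compile_into_task):
        """Repeatedly solve and strengthen the delete relaxation of `task`.
        solve_relaxation(task)        -> optimal delete-relaxation cost (a lower bound)
        pick_conjunction(task)        -> a conjunction of atoms to compile, or None
        compile_into_task(task, conj) -> task whose relaxation is at least as strong"""
        lower_bound = 0
        while time_left():
            lower_bound = max(lower_bound, solve_relaxation(task))
            if lower_bound >= best_known_plan_cost:   # gap closed: best plan is optimal
                break
            conj = pick_conjunction(task)
            if conj is None:
                break
            task = compile_into_task(task, conj)      # P^m-style strengthening
        return lower_bound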
Stratified Planning
"... Most planning problems have strong structures. They can be decomposed into subdomains with causal dependencies. The idea of exploiting the domain decomposition has motivated previous work such as hierarchical planning and factored planing. However, these algorithms require extensive backtracking and ..."
Abstract
-
Cited by 14 (5 self)
- Add to MetaCart
(Show Context)
Most planning problems have strong structures: they can be decomposed into subdomains with causal dependencies. The idea of exploiting domain decomposition has motivated previous work such as hierarchical planning and factored planning. However, these algorithms require extensive backtracking and have led to few efficient general-purpose planners. On the other hand, heuristic search has been a successful approach to automated planning. The domain decomposition of planning problems, unfortunately, is not directly and fully exploited by heuristic search. We propose a novel and general framework to exploit domain decomposition. Based on a structural analysis of the SAS+ planning formalism, we stratify the subdomains of a planning problem into dependency layers. By recognizing the stratification of a planning structure, we propose a space reduction method that expands only a subset of executable actions at each state. This reduction method can be combined with state-space search, allowing us to simultaneously employ the strength of domain decomposition and high-quality heuristics. We prove that the reduction preserves completeness and optimality of search, and we experimentally verify its effectiveness in space reduction.
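A rough sketch of a layer-based expansion filter of this kind, under the simplifying assumption that each action is assigned the dependency layer of the variables it affects and that, after applying an action from layer l, actions from strictly lower layers can be deferred because the swapped orderings are explored elsewhere; the paper's actual conditions are more refined, and all names here are illustrative:

    def stratified_successors(applicable_actions, last_layer, layer_of):
        """Space-reduction filter at expansion time: generate only actions whose
        dependency layer is not below that of the previously applied action.
        last_layer is None at the initial state (no restriction there)."""
        return [a for a in applicable_actions
                if last_layer is None or layer_of(a) >= last_layer]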
Predicting the Performance of IDA* using Conditional Distributions (2010)
"... Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expandon a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be ..."
Abstract
-
Cited by 11 (7 self)
- Add to MetaCart
(Show Context)
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula’s predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*’s performance even for inconsistent heuristics and even when the heuristic values at some levels do not obey the unconditional distribution. To achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
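For reference, the prediction formula of Korf, Reid, and Edelkamp that this paper generalizes can be written compactly; the paper's contribution replaces the unconditional distribution below with one conditioned on the parent's heuristic value. A sketch of the original formula, with notation slightly simplified:

    E[N(d)] \approx \sum_{i=0}^{d} N_i \, P(d - i)

where d is the IDA* cost bound of the iteration, N_i is the number of nodes at depth i of the brute-force search tree, and P(v) is the (unconditional) probability that a node's heuristic value is at most v.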