Results 1 – 10 of 44
The LAMA planner: Guiding cost-based anytime planning with landmarks
, 2010
Abstract

Cited by 140 (5 self)
LAMA is a classical planning system based on heuristic forward search. Its core feature is the use of a pseudo-heuristic derived from landmarks, propositional formulas that must be true in every solution of a planning task. LAMA builds on the Fast Downward planning system, using finite-domain rather than binary state variables and multi-heuristic search. The latter is employed to combine the landmark heuristic with a variant of the well-known FF heuristic. Both heuristics are cost-sensitive, focusing on high-quality solutions in the case where actions have non-uniform cost. A weighted A* search is used with iteratively decreasing weights, so that the planner continues to search for plans of better quality until the search is terminated. LAMA showed best performance among all planners in the sequential satisficing track of the International Planning Competition 2008. In this paper we present the system in detail and investigate which features of LAMA are crucial for its performance. We present individual results for some of the domains used at the competition, demonstrating good and bad cases for the techniques implemented in LAMA. Overall, we find that using landmarks improves performance, whereas the incorporation of action costs into the heuristic estimators proves not to be beneficial. We show that in some domains a search that ignores cost solves far more problems, raising the question of how to deal with action costs more effectively in the future. The iterated weighted A* search greatly improves results, and shows synergy effects with the use of landmarks.
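The iterated weighted A* scheme this abstract describes, re-running search with decreasing weights so that plan quality improves over time, can be sketched roughly as follows. This is a minimal illustration on an invented toy graph; the weight schedule, graph, and heuristic are assumptions for the example, not LAMA's actual implementation:

```python
import heapq

def weighted_astar(graph, h, start, goal, w):
    """Weighted A*: order the open list by f(n) = g(n) + w * h(n)."""
    open_list = [(w * h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + w * h[succ], g2, succ, path + [succ]))
    return None, float("inf")

def iterated_wastar(graph, h, start, goal, weights=(5, 3, 2, 1)):
    """Re-run the search with decreasing weights, keeping the cheapest plan found."""
    best_path, best_cost = None, float("inf")
    for w in weights:
        path, cost = weighted_astar(graph, h, start, goal, w)
        if path is not None and cost < best_cost:
            best_path, best_cost = path, cost
    return best_path, best_cost
```

High weights make the search greedy and fast; the final w = 1 pass (with an admissible heuristic) certifies optimality if it completes before the time limit.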
ArvandHerd: Parallel Planning with a Portfolio
 in IPC 2011 Deterministic Track
, 2011
Abstract

Cited by 15 (9 self)
ArvandHerd is a parallel planner that won the multi-core sequential satisficing track of the 2011 International Planning Competition (IPC 2011). It assigns processors to run different members of an algorithm portfolio which contains several configurations of each of two different planners: LAMA-2008 and Arvand. In this paper, we demonstrate that simple techniques for using different planner configurations can significantly improve the coverage of both of these planners. We then show that these two planners, when using multiple configurations, can be combined to construct a high-performance parallel planner. In particular, we will show that ArvandHerd can solve more IPC benchmark problems than even a perfect parallelization of LAMA-2011, which won the satisficing track at IPC 2011. We will also show that the coverage of ArvandHerd can be further improved if LAMA-2008 is replaced by LAMA-2011 in the portfolio.
Anytime Heuristic Search: Frameworks and Algorithms
Abstract

Cited by 11 (5 self)
Anytime search is a pragmatic approach for trading solution cost and solving time. It can also be used for solving problems within a time bound. Three frameworks for constructing anytime algorithms from bounded suboptimal search have been proposed: continuing search, repairing search, and restarting search. But what combination of suboptimal search and anytime framework performs best? An extensive empirical evaluation results in several novel algorithms and reveals that the relative performance of the frameworks is essentially fixed, with the repairing framework having the strongest overall performance. As part of our study, we present two enhancements to Anytime Window A* that allow it to solve a wider range of problems and hasten its convergence on optimal solutions.
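Of the three frameworks named above, continuing search is the simplest to illustrate: after the first (suboptimal) solution is found, the same search simply keeps going, pruning nodes that provably cannot beat the incumbent. A rough sketch under an admissible heuristic, using weighted A* as the underlying bounded suboptimal search; the graph, weight, and interface are invented for the example and are not the paper's code:

```python
import heapq

def anytime_continuing(graph, h, start, goal, w=2.0):
    """Continued weighted A*: keep expanding after the first solution is
    found, pruning any node whose g + h (admissible h) cannot beat the
    incumbent cost. Returns the improving solution costs in order found."""
    incumbent = float("inf")
    solutions = []
    open_list = [(w * h[start], 0, start)]
    best_g = {start: 0}
    while open_list:
        f, g, node = heapq.heappop(open_list)
        if g + h[node] >= incumbent:      # cannot improve on the incumbent
            continue
        if node == goal:
            incumbent = g                 # better solution: tighten the bound
            solutions.append(g)
            continue
        for succ, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")) and g2 + h[succ] < incumbent:
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + w * h[succ], g2, succ))
    return solutions
```

Repairing search would additionally reuse and re-order the open list with a lower weight after each solution, and restarting search would begin a fresh search; this sketch shows only the continuing variant.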
Avoiding and Escaping Depressions in Real-Time Heuristic Search
Abstract

Cited by 11 (3 self)
Heuristics used for solving hard real-time search problems have regions with depressions. Such regions are bounded areas of the search space in which the heuristic function is inaccurate compared to the actual cost to reach a solution. Early real-time search algorithms, like LRTA*, easily become trapped in those regions since the heuristic values of their states may need to be updated multiple times, which results in costly solutions. State-of-the-art real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s mechanism to update the heuristic, resulting in improved performance. Those algorithms, however, do not guide search towards avoiding depressed regions. This paper presents depression avoidance, a simple real-time search principle to guide search towards avoiding states that have been marked as part of a heuristic depression. We propose two ways in which depression avoidance can be implemented: mark-and-avoid and move-to-border. We implement these strategies on top of LSS-LRTA* and RTAA*, producing four new real-time heuristic search algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is to find a single solution by running the real-time search algorithm once, we show that daLSS-LRTA* and daRTAA* outperform their predecessors, sometimes by one order of magnitude. Of the four new algorithms, daRTAA* produces the best solutions given a fixed deadline on the average time allowed per planning episode. We prove all our algorithms have good theoretical properties: in finite search spaces, they find a solution if one exists, and converge to an optimal solution after a number of trials.
High-dimensional sequence transduction
 in ICASSP
, 2013
Abstract

Cited by 9 (3 self)
We investigate the problem of transforming an input sequence into a high-dimensional output sequence in order to transcribe polyphonic audio music into symbolic notation. We introduce a probabilistic model based on a recurrent neural network that is able to learn realistic output distributions given the input, and we devise an efficient algorithm to search for the global mode of that distribution. The resulting method produces musically plausible transcriptions even under high levels of noise and drastically outperforms previous state-of-the-art approaches on five datasets of synthesized sounds and real recordings, approximately halving the test error rate. Index Terms — sequence transduction, restricted Boltzmann machine, recurrent neural network, polyphonic transcription
Deadline-aware search using online measures of behavior
 in SOCS
, 2011
Abstract

Cited by 8 (6 self)
In many applications of heuristic search, insufficient time is available to find provably optimal solutions. We consider the contract search problem: finding the best solution possible within a given time limit. The conventional approach to this problem is to use an interruptible anytime algorithm. Such algorithms return a sequence of improving solutions until interrupted and do not consider the approaching deadline during the course of the search. We propose a new approach, Deadline Aware Search, that explicitly takes the deadline into account and attempts to use all available time to find a single high-quality solution. This algorithm is simple and fully general: it modifies best-first search with online pruning. Empirical results on variants of gridworld navigation, the sliding tile puzzle, and dynamic robot navigation show that our method can surpass the leading anytime algorithms across a wide variety of deadlines.
When does Weighted A* Fail?
Abstract

Cited by 8 (2 self)
Weighted A* is the most popular satisficing algorithm for heuristic search. Although there is no formal guarantee that increasing the weight on the heuristic cost-to-go estimate will decrease search time, it is commonly assumed that increasing the weight leads to faster searches, and that greedy search will provide the fastest search of all. As we show, however, in some domains, increasing the weight slows down the search. This has an important consequence on the scaling behavior of Weighted A*: increasing the weight ad infinitum will only speed up the search if greedy search is effective. We examine several plausible hypotheses as to why greedy search would sometimes expand more nodes than A* and show that each of the simple explanations has flaws. Our contribution is to show that greedy search is fast if and only if there is a strong correlation between h(n) and d*(n), the true distance-to-go, or if the heuristic is extremely accurate.
ANA*: Anytime Nonparametric A*
Abstract

Cited by 7 (0 self)
Anytime variants of Dijkstra's and A* shortest path algorithms quickly produce a suboptimal solution and then improve it over time. For example, ARA* introduces a weighting value (ε) to rapidly find an initial suboptimal path and then reduces ε to improve path quality over time. In ARA*, ε is based on a linear trajectory with ad-hoc parameters chosen by each user. We propose a new Anytime A* algorithm, Anytime Nonparametric A* (ANA*), that does not require ad-hoc parameters, and adaptively reduces ε to expand the most promising node per iteration, adapting the greediness of the search as path quality improves. We prove that each node expanded by ANA* provides an upper bound on the suboptimality of the current-best solution. We evaluate the performance of ANA* with experiments in the domains of robot motion planning, gridworld planning, and multiple sequence alignment. The results suggest that ANA* is as efficient as ARA* and in most cases: (1) ANA* finds an initial solution faster, (2) ANA* spends less time between solution improvements, (3) ANA* decreases the suboptimality bound of the current-best solution more gradually, and (4) ANA* finds the optimal solution faster. ANA* is freely available from Maxim Likhachev's Search-based Planning Library (SBPL).
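The selection rule behind "the most promising node" can be stated compactly: ANA* expands the open node s that maximizes e(s) = (G − g(s)) / h(s), where G is the cost of the current-best solution, and the e-value of each expanded node is the suboptimality bound the abstract mentions. A minimal sketch of that rule with an assumed dictionary interface for illustration, not the SBPL implementation:

```python
def ana_star_select(open_nodes, G, g, h):
    """Pick the open node s maximizing e(s) = (G - g(s)) / h(s).
    e(s) is the largest weight under which weighted A* would still expand
    s; as solutions improve, G shrinks and the search grows less greedy."""
    def e(s):
        if h[s] == 0:                  # a goal-valued node is maximally promising
            return float("inf")
        return (G - g[s]) / h[s]
    return max(open_nodes, key=e)
```

Before any solution exists G is infinite, so this order reduces to pure greedy search on h; afterwards it automatically balances greediness against the incumbent's cost.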
Adaptive K-Parallel Best-First Search: A Simple but Efficient Algorithm for Multi-Core Domain-Independent Planning
 THIRD INTERNATIONAL SYMPOSIUM ON COMBINATORIAL SEARCH (SOCS'10)
, 2010
Abstract

Cited by 6 (1 self)
Motivated by the recent hardware evolution towards multi-core machines, we investigate parallel planning techniques in a shared-memory environment. We consider, more specifically, parallel versions of a best-first search algorithm that run K threads, each expanding the next best node from the open list. We show that the proposed technique has a number of advantages. First, it is (reasonably) simple: we show how the algorithm can be obtained from a sequential version mostly by adding parallel annotations. Second, we conduct an extensive empirical study that shows that this approach is quite effective. It is also dynamic in the sense that the number of nodes expanded in parallel is adapted during the search. Overall we show that the approach is promising for parallel domain-independent, suboptimal planning.
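The core idea, K threads each removing the current best node from a shared open list, can be simulated sequentially: each step removes the K best nodes and expands them as a batch. A rough greedy best-first sketch on an invented toy graph; this is a simulation of the batch structure only, not a threaded implementation and not the authors' planner:

```python
import heapq

def k_parallel_gbfs(graph, h, start, goal, k=2):
    """Sequential simulation of K-parallel best-first search: each step
    pops the K best nodes from the shared open list and expands them as
    a batch, as K worker threads would in parallel."""
    open_list = [(h[start], start)]
    seen = {start}
    while open_list:
        batch = [heapq.heappop(open_list) for _ in range(min(k, len(open_list)))]
        for _h, node in batch:
            if node == goal:
                return True
            for succ, _cost in graph[node]:
                if succ not in seen:
                    seen.add(succ)
                    heapq.heappush(open_list, (h[succ], succ))
    return False
```

The real shared-memory version must additionally lock (or partition) the open and closed lists, and the paper's adaptive variant adjusts the batch size K during the search.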
A Comparison of Knowledge-Based GBFS Enhancements and Knowledge-Free Exploration
Abstract

Cited by 6 (4 self)
GBFS-based satisficing planners often augment their search with knowledge-based enhancements such as preferred operators and multiple heuristics. These techniques seek to improve planner performance by making the search more informed. In our work, we will focus on how these enhancements impact coverage, and we will use a simple technique called ε-greedy node selection to demonstrate that planner coverage can also be improved by introducing knowledge-free random exploration into the search. We then revisit the existing knowledge-based enhancements so as to determine if the knowledge these enhancements employ is offering necessary guidance, or if the impact of this knowledge is to add exploration which can be achieved more simply using randomness. This investigation provides further evidence of the importance of preferred operators and shows that the knowledge added when using an additional heuristic is crucial in certain domains, while not being as effective as random exploration in others. Finally, we demonstrate that random exploration can also improve the coverage of LAMA, a planner which already employs multiple enhancements. This suggests that knowledge-based enhancements need to be compared to appropriate knowledge-free random baselines so as to ensure the importance of the knowledge being used.
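The ε-greedy node selection referred to in this abstract admits a very small sketch: with probability ε expand a uniformly random open node, otherwise the heuristically best one. The open-list representation below, (h, node) pairs in a binary heap, is an assumption for illustration and not the authors' code:

```python
import heapq
import random

def epsilon_greedy_select(open_list, epsilon, rng=random):
    """ε-greedy node selection for GBFS: with probability ε expand a
    uniformly random open node (knowledge-free exploration); otherwise
    expand the node with the lowest heuristic value, as usual."""
    if rng.random() < epsilon:
        i = rng.randrange(len(open_list))
        open_list[i], open_list[-1] = open_list[-1], open_list[i]
        node = open_list.pop()
        heapq.heapify(open_list)       # restore the heap order after the swap
        return node
    return heapq.heappop(open_list)
```

With ε = 0 this is plain GBFS; small positive ε injects exactly the kind of knowledge-free exploration the paper uses as a baseline against knowledge-based enhancements.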