Results 1 - 10 of 141
Best-First Heuristic Search for Multi-Core Machines
Cited by 27 (7 self)
eaburns, seth.lemons, ruml at cs.unh.edu; rzhou at parc.com
Abstract: To harness modern multi-core processors, it is imperative to develop parallel versions of fundamental algorithms. In this paper, we present a general approach to best-first heuristic search in a shared-memory setting. Each thread attempts to expand the most promising open nodes. By using abstraction to partition the state space, we detect duplicate states without requiring frequent locking. We allow speculative expansions when necessary to keep threads busy. We identify and fix potential livelock conditions in our approach, verifying its correctness using temporal logic. In an empirical comparison on STRIPS planning, grid pathfinding, and sliding tile puzzle problems using an 8-core machine, we show that A* implemented in our framework yields faster search than improved versions of previous parallel search proposals. Our approach extends easily to other best-first searches, such as Anytime weighted A*.
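The duplicate-detection idea in this abstract can be made concrete with a minimal sketch (not the authors' implementation): states are cells of a hypothetical 16x16 grid, an assumed `abstraction` function maps each cell to a coarse block, and the closed list keeps one lock and one hash table per block, so threads expanding states in different blocks rarely contend on the same lock.

```python
import heapq
import threading

def abstraction(state, block=4):
    """Map a grid cell to a coarse block; one lock guards each block."""
    x, y = state
    return (x // block, y // block)

class PartitionedClosedList:
    """One lock + one closed table per abstract block, instead of a
    single global lock over the whole closed list."""
    def __init__(self):
        self._parts = {}
        self._guard = threading.Lock()  # taken briefly to find/create a partition

    def _partition(self, key):
        with self._guard:
            if key not in self._parts:
                self._parts[key] = (threading.Lock(), {})
            return self._parts[key]

    def try_close(self, state, g):
        """Return True iff `state` is new or reached with a cheaper g."""
        lock, closed = self._partition(abstraction(state))
        with lock:
            if state in closed and closed[state] <= g:
                return False  # duplicate with no improvement
            closed[state] = g
            return True

def astar(start, goal):
    """Sequential A* on a 16x16 grid using the partitioned closed list;
    several threads could share the same PartitionedClosedList."""
    closed = PartitionedClosedList()
    h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    open_list = [(h(start), 0, start)]
    while open_list:
        f, g, s = heapq.heappop(open_list)
        if not closed.try_close(s, g):
            continue
        if s == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = s[0] + dx, s[1] + dy
            if 0 <= nx < 16 and 0 <= ny < 16:
                heapq.heappush(open_list, (g + 1 + h((nx, ny)), g + 1, (nx, ny)))
    return None
```

The sketch omits the paper's speculative expansions and livelock fixes; it only illustrates why partitioning by abstraction reduces lock traffic compared to one global closed-list lock.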
Strengthening Landmark Heuristics via Hitting Sets
Cited by 21 (5 self)
Abstract: The landmark cut heuristic is perhaps the strongest known polytime admissible approximation of the optimal delete relaxation heuristic h+. Equipped with this heuristic, a best-first search was able to optimally solve 40% more benchmark problems than the winners of the sequential optimization track of IPC 2008. We show that this heuristic can be understood as a simple relaxation of a hitting set problem, and that stronger heuristics can be obtained by considering stronger relaxations. Based on these findings, we propose a simple polytime method for obtaining heuristics stronger than landmark cut, and evaluate them over benchmark problems. We also show that hitting sets can be used to characterize h+ and thus provide a fresh insight for better comprehension of the delete relaxation.
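The hitting-set view can be illustrated on a toy instance: each landmark is a set of actions of which any plan must use at least one, so the cost of a minimum-cost hitting set over the landmarks is an admissible lower bound. The following is a small exact solver by branch-and-bound, an illustrative toy rather than the polytime relaxations the paper actually proposes.

```python
def min_hitting_set_cost(landmarks, cost):
    """Exact minimum-cost hitting set: branch on the actions of the first
    landmark not yet hit. Exponential in the worst case; fine for tiny
    instances, and its value lower-bounds any plan's cost."""
    landmarks = [frozenset(l) for l in landmarks]
    best = [sum(cost.values())]  # buying every action always hits everything

    def branch(hit, spent):
        if spent >= best[0]:
            return  # prune: already as expensive as the best known set
        remaining = [l for l in landmarks if not (l & hit)]
        if not remaining:
            best[0] = spent
            return
        for a in sorted(remaining[0], key=lambda a: cost[a]):
            branch(hit | {a}, spent + cost[a])

    branch(frozenset(), 0)
    return best[0]
```

For landmarks {a,b}, {b,c}, {c} with costs a=1, b=2, c=1, the optimum picks c (hitting the last two landmarks) plus a, for cost 2.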
Sound and Complete Landmarks for And/Or Graphs
Cited by 19 (8 self)
Abstract: Landmarks for a planning problem are subgoals that are necessarily made true at some point in the execution of any plan. Since verifying that a fact is a landmark is PSPACE-complete, earlier approaches have focused on finding landmarks for the delete relaxation Π+. Furthermore, some of these approaches have approximated this set of landmarks, although it has been shown that the complete set of causal delete-relaxation landmarks can be identified in polynomial time by a simple procedure over the relaxed planning graph. Here, we give a declarative characterisation of this set of landmarks and show that the procedure computes the landmarks described by our characterisation. Building on this, we observe that the procedure can be applied to any delete-relaxation problem, and we take advantage of a recent compilation of the m-relaxation of a problem into a problem with no delete effects to extract landmarks that take into account delete effects in the original problem. We demonstrate that this approach finds strictly more causal landmarks than previous approaches, and discuss the relationship between increased computational effort and experimental performance, using these landmarks in a recently proposed admissible landmark-counting heuristic.
Analyzing search topology without running any search: On the connection between causal graphs and h+
- JAIR, 2011
Cited by 19 (3 self)
Abstract: The ignoring-delete-lists relaxation is of paramount importance for both satisficing and optimal planning. In earlier work, it was observed that the optimal relaxation heuristic h+ has amazing qualities in many classical planning benchmarks, in particular pertaining to the complete absence of local minima. The proofs of this are hand-made, raising the question whether such proofs can be led automatically by domain analysis techniques. In contrast to earlier disappointing results – the analysis method had exponential runtime and succeeded only in two extremely simple benchmark domains – we herein answer this question in the affirmative. We establish connections between causal graph structure and h+ topology. This results in low-order polynomial time analysis methods, implemented in a tool we call TorchLight. Of the 12 domains where the absence of local minima has been proved, TorchLight gives strong success guarantees in 8 domains. Empirically, its analysis exhibits strong performance in a further 2 of these domains, plus in 4 more domains where local minima may exist but are rare. In this way, TorchLight can distinguish “easy” domains from “hard” ones. By summarizing structural reasons for analysis failure, TorchLight also provides diagnostic output indicating domain aspects that may cause local minima.
Scaling Up Multiagent Planning: A Best-Response Approach
- In Proc. ICAPS, 2011
Cited by 19 (2 self)
Abstract: Multiagent planning is computationally hard in the general case due to the exponential blowup in the action space induced by the concurrent actions of different agents. At the same time, many scenarios require the computation of plans that are strategically meaningful for self-interested agents, in order to ensure that there are sufficient incentives for those agents to participate in a joint plan. In this paper, we present a multiagent planning and plan improvement method that is based on conducting iterative best-response planning using standard single-agent planning algorithms. In constrained types of planning scenarios that correspond to congestion games, this is guaranteed to converge to a plan that is a Nash equilibrium with regard to agents' preference profiles over the entire plan space. Our empirical evaluation beyond these restricted scenarios shows, however, that the algorithm has much broader applicability as a method for plan improvement in general multiagent planning problems. Extensive empirical experiments in various domains illustrate the scalability of our method in both cases.
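The best-response loop this abstract describes can be sketched on a toy congestion game, with single-agent "planning" reduced to picking the resource that minimizes an agent's own cost given everyone else's current choice. This is an illustrative sketch under that simplification, not the authors' planner.

```python
def best_response_planning(num_agents, options, cost, max_rounds=100):
    """Iterated best response: each agent in turn replans against the
    others' fixed choices; a full round with no change is a pure Nash
    equilibrium (guaranteed to be reached in congestion games)."""
    choice = {a: options[0] for a in range(num_agents)}
    for _ in range(max_rounds):
        changed = False
        for a in range(num_agents):
            others = [choice[b] for b in range(num_agents) if b != a]
            best = min(options, key=lambda o: cost(o, others))
            if best != choice[a]:
                choice[a] = best
                changed = True
        if not changed:
            return choice  # equilibrium: no agent can improve unilaterally
    return choice
```

With four agents, two resources, and load cost `1 + others.count(o)`, the loop converges to a 2/2 split, the equilibrium of this congestion game.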
Searching for Plans with Carefully Designed Probes
Cited by 15 (3 self)
Abstract: We define a probe to be a single action sequence computed greedily from a given state that either terminates in the goal or fails. We show that by designing these probes carefully using a number of existing and new polynomial techniques such as helpful actions, landmarks, commitments, and consistent subgoals, a single probe from the initial state solves by itself 683 out of 980 problems from previous IPCs, a number that compares well with the 627 problems solved by FF in EHC mode, with similar times and plan lengths. We also show that by launching one probe from each expanded state in a standard greedy best-first search informed by the additive heuristic, the number of problems solved jumps to 900 (92%), as opposed to FF, which solves 827 problems (84%), and LAMA, which solves 879 (89%). The success of probes suggests that many domains can be solved easily once a suitable serialization of the landmarks is found, an observation that may open new connections between recent work in planning and more classical work concerning goal serialization and problem decomposition in planning and search.
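The probe defined in this abstract can be reduced to a bare skeleton: greedily follow the heuristically best unvisited successor until the goal is reached or the probe fails. The carefully designed parts (landmarks, helpful actions, commitments, consistent subgoals) sit on top of this skeleton; the sketch below shows only the greedy core.

```python
def probe(state, goal, successors, h, max_len=100):
    """A minimal 'probe': a single greedy action sequence from `state`
    that either terminates in the goal or fails (dead end / too long)."""
    path = [state]
    visited = {state}
    for _ in range(max_len):
        if state == goal:
            return path  # the probe is itself a plan suffix
        succs = [s for s in successors(state) if s not in visited]
        if not succs:
            return None  # dead end: the probe fails
        state = min(succs, key=h)  # commit greedily, no backtracking
        visited.add(state)
        path.append(state)
    return None
```

On a number line with successors n-1 and n+1 and h(n) = |n - goal|, a probe from 0 toward 5 walks straight to the goal.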
COLIN: Planning with Continuous Linear Numeric Change
2012
Cited by 14 (3 self)
Abstract: In this paper we describe COLIN, a forward-chaining heuristic search planner capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state of the art in terms of the expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward-chaining search with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3 to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare it to existing state-of-the-art planners.
ArvandHerd: Parallel Planning with a Portfolio
- In IPC 2011 Deterministic Track, 2011
Cited by 14 (9 self)
Abstract: ArvandHerd is a parallel planner that won the multi-core sequential satisficing track of the 2011 International Planning Competition (IPC 2011). It assigns processors to run different members of an algorithm portfolio which contains several configurations of each of two different planners: LAMA-2008 and Arvand. In this paper, we demonstrate that simple techniques for using different planner configurations can significantly improve the coverage of both of these planners. We then show that these two planners, when using multiple configurations, can be combined to construct a high-performance parallel planner. In particular, we show that ArvandHerd can solve more IPC benchmark problems than even a perfect parallelization of LAMA-2011, which won the satisficing track at IPC 2011. We also show that the coverage of ArvandHerd can be further improved if LAMA-2008 is replaced by LAMA-2011 in the portfolio.
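The portfolio scheme this abstract describes, at its simplest, runs each configuration concurrently and stops as soon as any of them finds a plan. The sketch below illustrates that dispatch pattern with threads and callable stand-in planners; ArvandHerd itself manages separate planner processes with memory and time budgets, which this sketch does not model.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_portfolio(problem, configs):
    """Launch one worker per portfolio member; the first member to return
    a plan wins. `configs` maps a configuration name to a callable
    planner(problem) -> plan-or-None."""
    with ThreadPoolExecutor(max_workers=len(configs)) as pool:
        futures = {pool.submit(planner, problem): name
                   for name, planner in configs.items()}
        for fut in as_completed(futures):
            plan = fut.result()
            if plan is not None:
                return futures[fut], plan  # first success wins
    return None, None  # every configuration failed
```

Because members fail on disjoint sets of problems, the portfolio's coverage is the union of its members' coverage, which is how a portfolio can beat even a perfectly parallelized single planner.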
Diagnosis as planning: Two case studies
- In ICAPS Scheduling and Planning Applications Workshop, 2011
Cited by 13 (4 self)
Abstract: Diagnosis of discrete event systems amounts to finding good explanations, in the form of system trajectories consistent with a given set of partially ordered observations. This problem is closely related to planning, and in fact can be recast as a classical planning problem. We formulate a PDDL encoding of this diagnosis problem, and use it to evaluate planners representing a variety of planning paradigms on two realistic case studies. Results demonstrate that certain planning techniques have the potential to be very useful in diagnosis, but on the whole, current planners are far from a practical means of solving diagnosis problems.
The Roamer Planner: Random-Walk Assisted Best-First Search
Cited by 10 (0 self)
Abstract: Best-first search is one of the most fundamental techniques for planning. A heuristic function is used in best-first search to guide the search. A well-observed phenomenon in best-first search for planning is that, for most of the search, it explores a large number of states without reducing the heuristic function value. This phenomenon, called “plateau exploration”, has been extensively studied for heuristic search algorithms for satisfiability (SAT) and constraint satisfaction problems (CSP). In planning, plateau exploration accounts for most of the search time in state-of-the-art best-first search planners. Therefore, their performance can be improved if we can reduce the plateau exploration time by finding an exit state (a state with a better heuristic value than the best one found so far). In this paper, we present a random-walk assisted best-first search algorithm for planning, which invokes a random walk procedure to find exits when the best-first search is stuck on a plateau. The resulting planner, Roamer, built on the LAMA and Fast Downward planning systems, uses best-first search in the first iteration to find a plan, and then weighted A* search with iteratively decreasing weights to improve plan quality. Roamer is an anytime planner which continues to search for better-quality plans until it exhausts the whole state space or is terminated by the time limit.
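The exit-finding procedure this abstract describes can be sketched in a few lines: when best-first search stalls on a plateau, launch short random walks from the stuck state and return the first endpoint whose heuristic value beats the plateau's. This is an illustrative sketch of the idea, with hypothetical `walk_len`/`restarts` parameters, not Roamer's actual random-walk module.

```python
import random

def escape_plateau(state, h, successors, walk_len=10, restarts=50, seed=0):
    """Try to find an 'exit': a state with h strictly below the plateau's
    value. Returns the exit state, or None if all walks fail."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    target = h(state)
    for _ in range(restarts):
        s = state
        for _ in range(walk_len):
            succs = successors(s)
            if not succs:
                break  # dead end: restart the walk
            s = rng.choice(succs)
            if h(s) < target:
                return s  # exit found: resume best-first search from here
    return None  # give up; fall back to exhausting the plateau
```

Random walks are cheap per step (no open/closed list bookkeeping), so many can be tried in the time best-first search would spend systematically expanding the plateau.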