Results 1–10 of 53
Disjoint pattern database heuristics
Artificial Intelligence, 2002
"... We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases (Culberson & Schaeffer, 1998), which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heu ..."
Abstract

Cited by 141 (36 self)
 Add to MetaCart
(Show Context)
We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases (Culberson & Schaeffer, 1998), which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously (Korf & Felner, 2002) we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods and apply both methods to three different problem domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and finding an optimal vertex cover of a graph. We find that in some problem domains, static partitioning is most effective, while in others dynamic partitioning is a better choice. In each of these problem domains, either statically partitioned or dynamically partitioned pattern database heuristics are the best known heuristics for the problem.
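The additive combination the abstract describes can be sketched in a few lines. The tables below are toy stand-ins for real precomputed pattern databases, and all names are illustrative, not the authors' code:

```python
def additive_heuristic(state, partitions, pdbs):
    """Sum the pattern-database costs of disjoint tile groups.

    state      -- dict mapping tile -> current position
    partitions -- disjoint tile groups, e.g. [(1, 2), (3,)]
    pdbs       -- one lookup table per group, keyed by the positions of
                  that group's tiles; each table counts only moves of
                  its own tiles, so the group costs never share a move
                  and their sum stays admissible
    """
    total = 0
    for group, pdb in zip(partitions, pdbs):
        key = tuple(state[tile] for tile in group)
        total += pdb[key]
    return total

# Toy 3-tile example with two disjoint groups and hand-made tables.
partitions = [(1, 2), (3,)]
pdbs = [{(0, 1): 0, (1, 0): 4}, {(2,): 0, (0,): 2}]
print(additive_heuristic({1: 1, 2: 0, 3: 0}, partitions, pdbs))  # 4 + 2 = 6
```

Because each table counts only moves of its own tiles, the summed costs never double-count a move, which is exactly why the combined heuristic remains a lower bound.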
Breadth-first heuristic search
Artificial Intelligence
"... Recent work shows that the memory requirements of bestfirst heuristic search can be reduced substantially by using a divideandconquer method of solution reconstruction. We show that memory requirements can be reduced even further by using a breadthfirst instead of a bestfirst search strategy. We ..."
Abstract

Cited by 55 (11 self)
 Add to MetaCart
(Show Context)
Recent work shows that the memory requirements of best-first heuristic search can be reduced substantially by using a divide-and-conquer method of solution reconstruction. We show that memory requirements can be reduced even further by using a breadth-first instead of a best-first search strategy. We describe optimal and approximate breadth-first heuristic search algorithms that use divide-and-conquer solution reconstruction. Computational results show that they outperform other optimal and approximate heuristic search algorithms in solving domain-independent planning problems.
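A rough sketch of the layered strategy (omitting the paper's divide-and-conquer solution reconstruction): with an undirected search graph, keeping only the current and previous layers suffices for duplicate detection, and nodes whose f-value exceeds a known upper bound are pruned.

```python
def breadth_first_heuristic_search(start, goal, successors, h, upper_bound):
    """Layered breadth-first search that prunes any node whose
    f = depth + h(node) exceeds a known upper bound on the optimal
    cost; returns the optimal depth, or None if no solution fits
    under the bound. Only two layers are retained in memory."""
    frontier, previous, depth = {start}, set(), 0
    while frontier:
        if goal in frontier:
            return depth
        next_layer = set()
        for node in frontier:
            for child in successors(node):
                if child in frontier or child in previous:
                    continue  # duplicate in a recent layer
                if depth + 1 + h(child) <= upper_bound:
                    next_layer.add(child)
        previous, frontier, depth = frontier, next_layer, depth + 1
    return None

# Hypothetical demo: walk a number line from 0 to 5 in unit steps.
print(breadth_first_heuristic_search(
    0, 5, lambda n: [n - 1, n + 1], lambda n: abs(5 - n), 5))  # 5
```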
Taming Numbers and Durations in the Model Checking Integrated Planning System
Journal of Artificial Intelligence Research, 2002
"... The Model Checking Integrated Planning System (MIPS) has shown distinguished performance in the second and third international planning competitions. With its objectoriented framework architecture MIPS clearly separates the portfolio of explicit and symbolic heuristic search exploration algorith ..."
Abstract

Cited by 55 (13 self)
 Add to MetaCart
(Show Context)
The Model Checking Integrated Planning System (MIPS) has shown distinguished performance in the second and third international planning competitions. With its object-oriented framework architecture, MIPS clearly separates the portfolio of explicit and symbolic heuristic search exploration algorithms from different online and offline computed estimates and from the grounded planning problem representation.
Compressing pattern databases
In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04), 2004
"... A pattern database (PDB) is a heuristic function implemented as a lookup table that stores the lengths of optimal solutions for subproblem instances. Standard PDBs have a distinct entry in the table for each subproblem instance. In this paper we investigate compressing PDBs by merging several entrie ..."
Abstract

Cited by 45 (24 self)
 Add to MetaCart
(Show Context)
A pattern database (PDB) is a heuristic function implemented as a lookup table that stores the lengths of optimal solutions for subproblem instances. Standard PDBs have a distinct entry in the table for each subproblem instance. In this paper we investigate compressing PDBs by merging several entries into one, thereby allowing the use of PDBs that exceed available memory in their uncompressed form. We introduce a number of methods for determining which entries to merge and discuss their relative merits. These vary from domain-independent approaches that allow any set of entries in the PDB to be merged, to more intelligent methods that take into account the structure of the problem. The choice of the best compression method is based on domain-dependent attributes. We present experimental results on a number of combinatorial problems, including the four-peg Towers of Hanoi problem, the sliding-tile puzzles, and the TopSpin puzzle. For the Towers of Hanoi, we show that the search time can be reduced by up to three orders of magnitude by using compressed PDBs compared to uncompressed PDBs of the same size. More modest improvements were observed for the other domains.
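The admissibility-preserving merge can be illustrated with a hypothetical bucket-mapping function; storing the minimum of the merged entries is what keeps the compressed lookup a lower bound:

```python
def compress_pdb(pdb, num_buckets, bucket_of):
    """Merge pattern-database entries into num_buckets buckets,
    keeping the MINIMUM of each merged group. A lookup can then only
    under-estimate the true cost, so admissibility is preserved.
    The paper's domain-aware schemes amount to smarter choices of
    bucket_of than the simple modulo used in the demo below."""
    compressed = [float("inf")] * num_buckets
    for index, cost in pdb.items():
        bucket = bucket_of(index)
        compressed[bucket] = min(compressed[bucket], cost)
    return compressed

# Shrink a 4-entry toy PDB to 2 entries.
original = {0: 4, 1: 6, 2: 8, 3: 2}
print(compress_pdb(original, 2, lambda i: i % 2))  # [4, 2]
```

The trade-off is heuristic accuracy: the more dissimilar the merged entries, the more information the minimum throws away, which is why merging "nearby" entries (the structured methods) compresses with less loss.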
Breadth-first frontier search with delayed duplicate detection
In Proceedings of the IJCAI-03 Workshop on Model Checking and Artificial Intelligence
"... Bestfirst search is limited by the memory needed to store the Open and Closed lists, primarily to detect duplicate nodes. Magnetic disks provide vastly more storage, but random access of a disk is extremely slow. Instead of checking generated nodes immediately against existing nodes in a hash tab ..."
Abstract

Cited by 42 (6 self)
 Add to MetaCart
(Show Context)
Best-first search is limited by the memory needed to store the Open and Closed lists, primarily to detect duplicate nodes. Magnetic disks provide vastly more storage, but random access of a disk is extremely slow. Instead of checking generated nodes immediately against existing nodes in a hash table, delayed duplicate detection (DDD) appends them to a file, then periodically removes the duplicate nodes using only sequential disk accesses. Frontier search saves storage in a best-first search by storing only the Open list and not the Closed list. The main contributions of this paper are to provide a scalable implementation of DDD, to combine it with frontier search, and to extend it to more general best-first searches such as A*. We illustrate these ideas by performing complete breadth-first searches of sliding-tile puzzles up to the 3x5 Fourteen Puzzle. For the 4-peg Towers of Hanoi problem, we perform complete searches with up to 20 disks, searching a space of over a trillion nodes, and discover a surprising anomaly concerning the problem-space diameter of the 15- and 20-disk problems. We also verify the presumed optimal solution lengths for up to 24 disks. In addition, we implement A* with DDD on the Fifteen Puzzle. Finally, we present a scalable implementation of DDD based on hashing rather than sorting.
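A minimal single-machine sketch of the sorting-based variant of delayed duplicate detection, with an in-memory sort standing in for the external merge sort a genuinely disk-bound implementation would use (function and file names are illustrative):

```python
import os
import tempfile

def delayed_duplicate_detection(generated, closed_file):
    """Append newly generated nodes to a file instead of probing an
    in-memory hash table, then strip duplicates in one sequential
    pass over the sorted file (after sorting, duplicates are
    adjacent). Returns the deduplicated node list."""
    with open(closed_file, "a") as f:
        for node in generated:
            f.write(f"{node}\n")
    with open(closed_file) as f:
        nodes = sorted(f.read().split())
    unique = []
    for node in nodes:
        if not unique or unique[-1] != node:  # adjacent after sorting
            unique.append(node)
    with open(closed_file, "w") as f:
        f.write("\n".join(unique) + "\n")
    return unique

path = os.path.join(tempfile.mkdtemp(), "closed")
print(delayed_duplicate_detection(["b", "a", "b", "c"], path))  # ['a', 'b', 'c']
```

The point of the detour through the file is access pattern, not asymptotics: appends and the merge pass are sequential I/O, which disks serve orders of magnitude faster than the random probes of a hash table.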
Structured duplicate detection in external-memory graph search
In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04)
"... We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in statespace graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the se ..."
Abstract

Cited by 38 (14 self)
 Add to MetaCart
(Show Context)
We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in state-space graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the search graph based on an abstraction of the state space, and expanding the frontier nodes of the graph in an order that respects this partition. We demonstrate the effectiveness of this approach both analytically and empirically.
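A toy rendition of the idea, assuming a user-supplied state abstraction and an `in_memory_scope` function (both hypothetical names) that lists the abstract buckets whose nodes must be resident while a group is expanded:

```python
from collections import defaultdict

def expand_with_sdd(frontier, successors, abstract, in_memory_scope):
    """Group frontier nodes by abstract state and expand one group at
    a time. Duplicates of a group's children can only live in the
    buckets named by in_memory_scope(abstract_state), so only those
    buckets must be in RAM during the expansion; the rest could stay
    on disk. Returns the next frontier."""
    buckets = defaultdict(set)
    for node in frontier:
        buckets[abstract(node)].add(node)
    next_frontier = set()
    for a, nodes in buckets.items():
        resident = set()
        for b in in_memory_scope(a):  # the duplicate-detection scope
            resident |= buckets.get(b, set())
        for node in nodes:
            for child in successors(node):
                if child not in resident:  # duplicate check stays local
                    next_frontier.add(child)
    return next_frontier

# Demo on a number line abstracted by n mod 3: a node's children land
# in the next abstract bucket, so the scope of bucket a is {a, a + 1}.
print(expand_with_sdd({0, 3}, lambda n: [n + 1],
                      lambda n: n % 3,
                      lambda a: {a, (a + 1) % 3}))  # {1, 4}
```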
Beam-stack search: Integrating backtracking with beam search
In International Conference on Automated Planning and Scheduling (ICAPS), 2005
"... We describe a method for transforming beam search into a complete search algorithm that is guaranteed to find an optimal solution. Called beamstack search, the algorithm uses a new data structure, called a beam stack, that makes it possible to integrate systematic backtracking with beam search. The ..."
Abstract

Cited by 32 (3 self)
 Add to MetaCart
(Show Context)
We describe a method for transforming beam search into a complete search algorithm that is guaranteed to find an optimal solution. Called beam-stack search, the algorithm uses a new data structure, called a beam stack, that makes it possible to integrate systematic backtracking with beam search. The resulting search algorithm is an anytime algorithm that finds a good, suboptimal solution quickly, like beam search, and then backtracks and continues to find improved solutions until convergence to an optimal solution. We describe a memory-efficient implementation of beam-stack search, called divide-and-conquer beam-stack search, as well as an iterative-deepening version of the algorithm. The approach is applied to domain-independent STRIPS planning, and computational results show its advantages.
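A much-simplified anytime sketch of the idea: the real beam stack records only an (f_min, f_max) interval per layer, while this demo keeps the pruned nodes themselves on the stack and assumes an acyclic, unit-cost search graph:

```python
def beam_stack_search(start, goal, successors, h, width):
    """Beam search made complete by backtracking: each layer keeps its
    best `width` children (ranked by h) and pushes the rest onto a
    stack as a backtrack point. After a solution is found, popping the
    stack resumes search from pruned nodes, so the incumbent cost
    converges to optimal. Assumes an acyclic, unit-cost graph."""
    best = None
    stack = [([start], 0)]  # (layer of nodes, depth of that layer)
    while stack:
        layer, depth = stack.pop()
        if best is not None and all(depth + h(n) >= best for n in layer):
            continue  # cannot improve on the incumbent solution
        if goal in layer:
            best = depth if best is None else min(best, depth)
            continue
        children = sorted({c for n in layer for c in successors(n)}, key=h)
        kept, pruned = children[:width], children[width:]
        if pruned:
            stack.append((pruned, depth + 1))  # backtrack point
        if kept:
            stack.append((kept, depth + 1))
    return best

# Number line with steps of 1 or 2; admissible h = ceil((5 - n) / 2).
print(beam_stack_search(0, 5,
                        lambda n: [n + 1, n + 2] if n < 5 else [],
                        lambda n: max(0, (5 - n + 1) // 2), 1))  # 3
```

With an admissible h, the final pruning test guarantees no pruned branch that could beat the incumbent is ever abandoned, which is the completeness argument in miniature.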
Scalable, Parallel Best-First Search for Optimal Sequential Planning
2009
"... Largescale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distribut ..."
Abstract

Cited by 26 (4 self)
 Add to MetaCart
(Show Context)
Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. In particular, we focus on an approach which distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 128 processors, a significant increase compared to previous work in parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems which require hundreds of gigabytes of RAM to solve. We also show that this approach scales well for a single, shared-memory multi-core machine.
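The hash-based work distribution at the core of this approach fits in a few lines; `owner` and `distribute` are illustrative names, and CPython's built-in `hash` stands in for the planner's state hash:

```python
from collections import defaultdict

def owner(state, num_procs):
    """Every state is owned by exactly one processor, chosen by a
    hash of the state; duplicate detection for that state is
    therefore purely local to its owner."""
    return hash(state) % num_procs

def distribute(states, num_procs):
    """Route each generated state to the inbox of its owning
    processor. Two workers that generate the same state send it to
    the same owner, which detects the duplicate without any global
    synchronization."""
    inboxes = defaultdict(list)
    for state in states:
        inboxes[owner(state, num_procs)].append(state)
    return inboxes

# In CPython, hash(n) == n for small ints, so states 0..7 spread
# round-robin over 4 inboxes.
print(distribute(range(8), 4)[1])  # [1, 5]
```

Because ownership is a pure function of the state, no processor ever needs to consult another's Closed list, which is what lets the Open/Closed lists scale with aggregate cluster RAM.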
Protocol verification with heuristic search
2001
"... We present an approach to reconcile explicit state model checking and heuristic directed search. We provide experimental evidence that the model checking problem for concurrent systems, such as communications protocols, can be solved more efficiently, since finding a state violating a property ..."
Abstract

Cited by 26 (5 self)
 Add to MetaCart
We present an approach to reconcile explicit-state model checking and heuristic directed search. We provide experimental evidence that the model checking problem for concurrent systems, such as communications protocols, can be solved more efficiently, since finding a state violating a property can be understood as a directed search problem. In our work we combine the expressive power and implementation efficiency of the SPIN model checker with the HSF heuristic search workbench, yielding the HSF-SPIN tool that we have implemented. We start off from the A* algorithm and some of its derivatives and define heuristics for various system properties that guide the search so that it finds error states faster. In this paper we focus on safety properties and provide heuristics for invariant and assertion violation and deadlock detection. We provide experimental results for applying HSF-SPIN to two toy protocols and one real-world protocol, the CORBA GIOP protocol.
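Directed model checking of this kind is ordinary A* with the goal test replaced by a property-violation test; the transition system and heuristic below are toy stand-ins, not the HSF-SPIN implementation:

```python
import heapq

def directed_error_search(start, transitions, violates, h):
    """A*-style search whose 'goal' is any state violating the checked
    property; h estimates the distance to a violating state (the paper
    derives such estimates for deadlocks and assertion violations).
    Returns (error_state, path_cost) or (None, None) if the reachable
    state space contains no violation."""
    open_list = [(h(start), 0, start)]
    closed = set()
    while open_list:
        _, g, state = heapq.heappop(open_list)
        if violates(state):
            return state, g
        if state in closed:
            continue
        closed.add(state)
        for nxt in transitions(state):
            if nxt not in closed:
                heapq.heappush(open_list, (g + 1 + h(nxt), g + 1, nxt))
    return None, None

# Toy transition system: from n we can step to n + 1 or 2 * n; the
# "property violation" is reaching the value 6. h = 0 degenerates
# to Dijkstra, so the returned cost is the shortest error trail.
print(directed_error_search(
    0, lambda n: [n + 1, 2 * n] if n < 10 else [],
    lambda n: n == 6, lambda n: 0))  # (6, 4)
```

A short error trail matters in practice: the returned path is the counterexample the user must read, and directed search tends to find shorter ones than plain depth-first exploration.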