Results 1–10 of 13
The Computational Intelligence of MoGo Revealed in Taiwan’s Computer Go Tournaments
, 2009
Abstract

Cited by 28 (6 self)
In order to promote computer Go and stimulate further development and research in the field, two events, the “Computational Intelligence Forum” and the “World 9×9 Computer Go Championship,” were held in Taiwan. This study focuses on the invited games played in the tournament, “Taiwanese Go players
Predicting the Performance of IDA* using Conditional Distributions
, 2010
Abstract

Cited by 11 (7 self)
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula’s predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*’s performance for inconsistent heuristics and when the heuristic values at some levels do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
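The prediction idea the abstract builds on can be sketched in a few lines. This is an illustrative rendering of the Korf-Reid-Edelkamp-style formula under simplifying assumptions (a uniform brute-force branching factor b and a toy cumulative heuristic distribution P); the function name and the example distribution are mine, not the paper's.

```python
# Sketch of a KRE-style prediction: the number of nodes IDA* expands with
# cost threshold d is approximated by summing, over each depth i of the
# brute-force tree, the node count b^i times the probability P(d - i) that
# a node there satisfies g + h <= d (i.e. h <= d - i).
# Assumptions: uniform branching factor b; P given as a cumulative
# distribution P[v] = fraction of states with h <= v.

def kre_prediction(b, d, P):
    """Predict expansions for one IDA* iteration with threshold d."""
    def cdf(v):
        if v < 0:
            return 0.0
        return P[min(v, len(P) - 1)]
    return sum(b ** i * cdf(d - i) for i in range(d + 1))

# Toy cumulative heuristic distribution (illustrative values only).
P = [0.05, 0.20, 0.55, 0.90, 1.0]
print(kre_prediction(b=2, d=6, P=P))
```

With a zero heuristic (P = [1.0], every node passes the test) the formula degenerates to counting the whole brute-force tree, which is a quick sanity check on the implementation.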
Predicting optimal solution cost with bidirectional stratified sampling
 In ICAPS
, 2012
Abstract

Cited by 5 (5 self)
Optimal planning and heuristic search systems solve state-space search problems by finding a least-cost path from start to goal. As a byproduct of having an optimal path they also determine the optimal solution cost. In this paper we focus on the problem of determining the optimal solution cost for a state-space search problem directly, i.e., without actually finding a solution path of that cost. We present an efficient algorithm, BiSS, based on ideas of bidirectional search and stratified sampling that produces accurate estimates of the optimal solution cost. Our method is guaranteed to return the optimal solution cost in the limit as the sample size goes to infinity. We show empirically that our method makes accurate predictions in several domains. In addition, we show that our method scales to state spaces much larger than can be solved optimally. In particular, we estimate the average solution cost for the 6x6, 7x7, and 8x8 Sliding-Tile Puzzle and provide indirect evidence that these estimates are accurate.
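The stratified-sampling building block that BiSS applies in both directions can be sketched as follows. This is a minimal single-direction sketch in the style of Chen's stratified sampling, not the paper's BiSS algorithm itself; the type function and toy tree are illustrative assumptions.

```python
import random

# Stratified sampling sketch: estimate the size of each level of a search
# tree by keeping only one representative node per "type" per level, with
# a weight recording how many real nodes that representative stands for.

def stratified_sample(root, successors, type_of, max_depth):
    """Return estimated node counts per level, one entry per depth."""
    layer = {type_of(root): (root, 1)}   # type -> (representative, weight)
    sizes = [1]
    for _ in range(max_depth):
        nxt = {}
        for node, w in layer.values():
            for child in successors(node):
                t = type_of(child)
                if t in nxt:
                    rep, cw = nxt[t]
                    cw += w
                    # Weighted reservoir choice: each merged node stays
                    # the representative with probability proportional to
                    # the weight it contributed.
                    if random.random() < w / cw:
                        rep = child
                    nxt[t] = (rep, cw)
                else:
                    nxt[t] = (child, w)
        if not nxt:
            break
        layer = nxt
        sizes.append(sum(w for _, w in layer.values()))
    return sizes

# Toy complete binary tree of depth 3, typed by node value mod 2: the
# estimates are exact because the tree is homogeneous within each type.
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(stratified_sample(1, succ, lambda n: n % 2, 3))  # -> [1, 2, 4, 8]
```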
Single-frontier bidirectional search
 In AAAI
, 2010
Abstract

Cited by 5 (1 self)
On the surface, bidirectional search (BDS) is an attractive idea with the potential for significant asymptotic reductions in search effort. However, the results in practice often fall far short of expectations. We introduce a new bidirectional search algorithm, Single-Frontier Bidirectional Search (SFBDS). Unlike traditional BDS, which keeps two frontiers, SFBDS uses a single frontier. Each node in the tree can be seen as an independent task of finding the shortest path between the current start and current goal. At a particular node we can decide to search from start to goal or from goal to start, choosing the direction with the highest potential for minimizing the total work done. Theoretical results give insights as to when this approach will work, and experimental data validates the algorithm for a broad range of domains.
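The core SFBDS idea, treating every node as a (start, goal) task and picking a direction per node, can be sketched on an undirected graph. The direction rule below (expand whichever endpoint has fewer successors) is one simple instance of the "choose the cheaper side" principle, not the paper's exact policy, and the toy graph is an assumption.

```python
# SFBDS-style sketch: each search node is a (start, goal) pair; at each
# step we expand the endpoint with the smaller branching factor. On an
# undirected graph the cost of the task (a, b) equals that of (b, a), so
# swapping endpoints is safe.

def sfbds_cost(graph, start, goal, bound):
    """Cost of a shortest start-goal path, or None if it exceeds bound."""
    if start == goal:
        return 0
    if bound == 0:
        return None
    # Pick the side with fewer successors to expand next.
    forward = len(graph[start]) <= len(graph[goal])
    frontier, other = (start, goal) if forward else (goal, start)
    best = None
    for child in graph[frontier]:
        sub = sfbds_cost(graph, child, other, bound - 1)
        if sub is not None and (best is None or sub + 1 < best):
            best = sub + 1
    return best

# Undirected toy graph: the 5-cycle 0-1-2-3-4-0.
G = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(sfbds_cost(G, 0, 2, bound=5))  # shortest path 0-1-2 has cost 2
```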
1.6-Bit Pattern Databases
Abstract

Cited by 5 (0 self)
We present a new technique to compress consistent pattern databases without loss of information by storing the heuristic estimate modulo three, requiring only two bits per entry, or in a more compact representation only 1.6 bits. This enables us to store a pattern database with four or five times as many entries in the same amount of memory as an uncompressed pattern database. We compare both methods to the best existing compression methods for the Top-Spin puzzle, Rubik’s Cube, the 4-peg Towers of Hanoi problem, and the 24-puzzle. For the Top-Spin puzzle and Rubik’s Cube we also compare our best implementation to the respective state-of-the-art solvers. This compression technique is most useful where methods for lossy compression fail, for example where patterns mapping to adjacent entries in the pattern database are not reachable from each other by one move, such as in the Top-Spin puzzle and Rubik’s Cube.
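The mod-3 trick the abstract describes can be shown concretely. The sketch below makes two assumptions stated in the abstract: the heuristic is consistent (so h changes by at most 1 between neighbouring patterns), and during search we always know the exact h of a neighbour, which lets us rebuild the exact value from its residue. Packing five base-3 digits per byte (3^5 = 243 <= 256) yields the 1.6 bits/entry rate; the function names are mine.

```python
# Lossless mod-3 pattern-database compression sketch.

def pack_mod3(values):
    """Pack h-values mod 3, five base-3 digits per byte."""
    out = bytearray()
    for i in range(0, len(values), 5):
        byte = 0
        for v in reversed(values[i:i + 5]):
            byte = byte * 3 + (v % 3)
        out.append(byte)
    return bytes(out)

def lookup_mod3(packed, index):
    """Return the stored residue (h mod 3) at position index."""
    byte = packed[index // 5]
    for _ in range(index % 5):
        byte //= 3
    return byte % 3

def reconstruct(stored_mod3, neighbour_h):
    """Recover the exact h from its residue and a neighbour's exact h.

    Consistency bounds h to {neighbour_h - 1, neighbour_h, neighbour_h + 1},
    and exactly one of those three candidates matches the stored residue."""
    for h in (neighbour_h - 1, neighbour_h, neighbour_h + 1):
        if h >= 0 and h % 3 == stored_mod3:
            return h
    raise ValueError("inconsistent inputs")

pdb = [0, 1, 2, 3, 4, 5, 6]          # toy exact pattern database
packed = pack_mod3(pdb)               # 7 entries fit in 2 bytes
print(reconstruct(lookup_mod3(packed, 4), neighbour_h=5))  # -> 4
```

The reason three candidate values always disambiguate is exactly why modulo three (and not two) is the smallest usable base: three consecutive integers cover all three residues exactly once.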
Stratified tree search: a novel suboptimal heuristic search algorithm
 In Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems
, 2013
Abstract

Cited by 3 (3 self)
Traditional heuristic search algorithms use the ranking of states that a heuristic function provides to guide the search. In this paper, with the objective of improving the suboptimality and runtime of search algorithms when only weak heuristics are available, we present Stratified Tree Search (STS), a suboptimal heuristic search algorithm that uses a heuristic to partition the state space to guide the search. We call this partition a type system. STS assumes that nodes of the same type will lead to solutions of the same cost. Thus, STS expands only one node of each type in every level of search. We show that in general STS offers a good tradeoff between solution quality and search speed by varying the size of the type system. However, in some cases, STS might not provide a fine adjustment of this tradeoff. We present a variant of STS, Beam STS (BSTS), that allows one to make fine adjustments of this tradeoff. BSTS combines the ideas of STS with those of Beam Search. Our empirical results in benchmark domains show that both STS and BSTS can find solutions of lower suboptimality in less time than standard heuristic search algorithms for finding suboptimal solutions.
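The "one node per type per level" rule is simple enough to sketch. The type system below (state value mod 4) and the toy domain are illustrative assumptions; the paper derives type systems from the heuristic, and the representative chosen per type matters for solution quality.

```python
# Stratified Tree Search sketch: at every level of the search tree, keep
# only one representative node per type, assuming same-type nodes lead to
# solutions of similar cost.

def stratified_tree_search(start, successors, is_goal, type_of, max_depth):
    """Return the depth of a (possibly suboptimal) solution, or None."""
    layer = {type_of(start): start}       # one representative per type
    for depth in range(max_depth + 1):
        for node in layer.values():
            if is_goal(node):
                return depth
        nxt = {}
        for node in layer.values():
            for child in successors(node):
                nxt.setdefault(type_of(child), child)  # keep first per type
        if not nxt:
            return None
        layer = nxt
    return None

# Toy domain: reach 10 from 0 using +1 / +3 moves; type = value mod 4.
succ = lambda n: [n + 1, n + 3] if n < 10 else []
print(stratified_tree_search(0, succ, lambda n: n == 10,
                             lambda n: n % 4, max_depth=10))
```

On this toy instance the sketch returns a depth-8 solution while the optimal is 4 moves (0, 3, 6, 9, 10), which illustrates the quality/speed tradeoff the abstract discusses: a coarser type system prunes more but can miss cheaper paths.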
Relative-Order Abstractions for the Pancake Problem
Abstract

Cited by 2 (0 self)
The pancake problem is a famous search problem where the objective is to sort a sequence of objects (pancakes) through a minimal number of prefix reversals (flips). The best approaches for the problem are based on heuristic search with abstraction (pattern database) heuristics. We present a new class of abstractions for the pancake problem called relative-order abstractions. Relative-order abstractions have three advantages over the object-location abstractions considered in previous work. First, they are size-independent, i.e., they do not need to be tailored to a particular instance size of the pancake problem. Second, they are more compact in that they can represent a larger number of pancakes within abstractions of bounded size. Finally, they can exploit symmetries in the problem specification to allow multiple heuristic lookups, significantly improving search performance over a single lookup. Our experiments show that compared to object-location abstractions, our new techniques lead to an improvement of one order of magnitude in runtime and up to three orders of magnitude in the number of generated states.
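The problem itself is compact enough to state in code. The sketch below shows the prefix-reversal move operator and a brute-force optimal solver for tiny instances; it is the exhaustive baseline that the paper's pattern-database abstractions are designed to replace, and the function names are mine.

```python
# Pancake problem: sort a stack by reversing prefixes.

def flip(stack, k):
    """Reverse the first k pancakes of the stack (a tuple)."""
    return stack[:k][::-1] + stack[k:]

def pancake_distance(stack):
    """Minimal number of flips to sort the stack, via breadth-first search."""
    goal = tuple(sorted(stack))
    start = tuple(stack)
    frontier, seen, depth = {start}, {start}, 0
    while goal not in frontier:
        frontier = {flip(s, k) for s in frontier
                    for k in range(2, len(s) + 1)} - seen
        seen |= frontier
        depth += 1
    return depth

print(pancake_distance((3, 1, 2)))  # -> 2 (flip all 3, then flip the top 2)
```

BFS is exact but only feasible for a handful of pancakes; the abstractions in the paper make much larger instances tractable by looking up precomputed heuristic values instead.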