
## Fast approximate nearest neighbors with automatic algorithm configuration (2009)

Venue: VISAPP International Conference on Computer Vision Theory and Applications

Citations: 441 (2 self)

### Citations

8753 | Distinctive image features from scale-invariant keypoints
- Lowe
Citation Context: ...computer vision algorithms consists of searching for the closest matches to high-dimensional vectors. Examples of such problems include finding the best matches for local image features in large datasets (Lowe, 2004; Philbin et al., 2007), clustering local features into visual words using the k-means or similar algorithms (Sivic and Zisserman, 2003), or performing normalized cross-correlation to compare image patches in large datasets (Torralba et al., 2008)...

1604 | Video Google: A text retrieval approach to object matching in videos
- Sivic, Zisserman
- 2003
Citation Context: ...problems include finding the best matches for local image features in large datasets (Lowe, 2004; Philbin et al., 2007), clustering local features into visual words using the k-means or similar algorithms (Sivic and Zisserman, 2003), or performing normalized cross-correlation to compare image patches in large datasets (Torralba et al., 2008). The nearest neighbor search problem is also of major importance in many other applications...

1029 | Scalable recognition with a vocabulary tree
- Nistér, Stewénius
- 2006
Citation Context: ...an overlap between the children of each node, called the spill-tree. However, our experiments so far have found that randomized kd-trees provide higher performance while requiring less memory. Nister and Stewenius (Nister and Stewenius, 2006) present a fast method for nearest-neighbor feature search in very large databases. Their method is based on accessing a single leaf node of a hierarchical k-means tree...

970 | An optimal algorithm for approximate nearest neighbor searching fixed dimensions
- Arya, Mount, et al.
- 1998
Citation Context: ...nearest-neighbor search is the kd-tree (Freidman et al., 1977), which works well for exact nearest neighbor search in low-dimensional data, but quickly loses its effectiveness as dimensionality increases. Arya et al. (Arya et al., 1998) modify the original kd-tree algorithm to use it for approximate matching. They impose a bound on the accuracy of a solution using the notion of ε-approximate nearest neighbor: a point p ∈ X is an ε-approximate nearest neighbor of a query q if dist(p, q) ≤ (1 + ε) dist(p*, q), where p* is the true nearest neighbor...
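The ε-approximate criterion described in this snippet can be checked against a brute-force reference; a minimal sketch in Python (the function name is ours, not from the ANN library):

```python
import numpy as np

def is_eps_approximate_nn(p, q, X, eps):
    """True if p satisfies the epsilon-approximate nearest-neighbor
    condition for query q over dataset X:
    dist(q, p) <= (1 + eps) * dist(q, p*), with p* the true NN."""
    dists = np.linalg.norm(X - q, axis=1)   # exact distances to every point
    true_nn_dist = dists.min()              # distance to the true nearest neighbor
    return float(np.linalg.norm(q - p)) <= (1.0 + eps) * float(true_nn_dist)

# Toy data: the true NN of q is at distance 1.0; the candidate is at 1.1,
# so it is accepted for eps = 0.2 but rejected for eps = 0.05.
X = np.array([[1.0, 0.0], [1.1, 0.0], [9.0, 9.0]])
q = np.array([0.0, 0.0])
```

Loosening ε widens the set of acceptable answers, which is exactly the accuracy/speed knob the snippet refers to.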

507 | Object retrieval with large vocabularies and fast spatial matching
- Philbin, Chum, et al.
- 2007
Citation Context: ...algorithms consists of searching for the closest matches to high-dimensional vectors. Examples of such problems include finding the best matches for local image features in large datasets (Lowe, 2004; Philbin et al., 2007), clustering local features into visual words using the k-means or similar algorithms (Sivic and Zisserman, 2003), or performing normalized cross-correlation to compare image patches in large datasets (Torralba et al., 2008)...

436 | Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions
- Andoni, Indyk
Citation Context: ...the best at finding fast approximate nearest neighbors (the multiple randomized kd-trees and the hierarchical k-means tree) with existing approaches, the ANN (Arya et al., 1998) and LSH algorithms (Andoni, 2006) on the first dataset of 100,000 SIFT features. (Footnotes: we have used the publicly available implementations; dataset from http://www.vis.uky.edu/~stewe/ukbench/data/.) Since the LSH implementation (the E2LSH package)...

305 | Shape indexing using approximate nearest-neighbor search in highdimensional spaces
- Beis, Lowe
- 1997
Citation Context: ...the true nearest neighbor. The authors also propose the use of a priority queue to speed up the search in a tree by visiting tree nodes in order of their distance from the query point. Beis and Lowe (Beis and Lowe, 1997) describe a similar kd-tree based algorithm, but use a stopping criterion based on examining a fixed number Emax of leaf nodes, which can give better performance than the ε-approximate cutoff. Silpa-Anan and Hartley...
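The priority-queue search with an Emax leaf budget described in this snippet can be sketched compactly; this is a simplified model of best-bin-first search, not the Beis–Lowe implementation:

```python
import heapq
import numpy as np

class Node:
    def __init__(self, point, dim, left=None, right=None):
        self.point, self.dim, self.left, self.right = point, dim, left, right

def build_kdtree(points, depth=0):
    """Median-split kd-tree, cycling through dimensions."""
    if len(points) == 0:
        return None
    dim = depth % points.shape[1]
    points = points[points[:, dim].argsort()]
    mid = len(points) // 2
    return Node(points[mid], dim,
                build_kdtree(points[:mid], depth + 1),
                build_kdtree(points[mid + 1:], depth + 1))

def bbf_search(root, q, emax):
    """Best-bin-first: pop subtrees in order of a lower bound on their
    distance to q; stop after examining emax leaves."""
    best, best_d = None, float("inf")
    heap, tiebreak, leaves = [(0.0, 0, root)], 1, 0
    while heap and leaves < emax:
        bound, _, node = heapq.heappop(heap)
        if node is None or bound >= best_d:   # prune: cannot beat current best
            continue
        d = float(np.linalg.norm(q - node.point))
        if d < best_d:
            best, best_d = node.point, d
        if node.left is None and node.right is None:
            leaves += 1                        # one more leaf examined
            continue
        diff = float(q[node.dim] - node.point[node.dim])
        near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
        heapq.heappush(heap, (bound, tiebreak, near))
        # every point in the far child is at least |diff| away (split plane)
        heapq.heappush(heap, (max(bound, abs(diff)), tiebreak + 1, far))
        tiebreak += 2
    return best, best_d
```

With emax large enough this degenerates to exact search; a small emax trades accuracy for speed, which is the stopping criterion the snippet contrasts with the ε-approximate cutoff.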

214 | Near neighbor search in large metric spaces
- Brin
- 1995
Citation Context: ...the k-means algorithm into k disjoint groups and then recursively doing the same for each of the groups. The tree they propose requires a vector space because they compute the mean of each cluster. Brin (Brin, 1995) proposes a similar tree, called GNAT, Geometric Near-neighbor Access Tree, in which he uses some of the data points as the cluster centers instead of computing the cluster mean points. This change allows the tree to be defined in a general metric space.

175 | City-scale location recognition
- Schindler, Brown, et al.
Citation Context: ...hierarchical k-means trees constructed using the same 100K SIFT features dataset with different branching factors: 2, 4, 8, 16, 32, 128. The projections are constructed using the same technique as in (Schindler et al., 2007). The gray values indicate the ratio between the distances to the nearest and the second-nearest cluster center at each tree level, so that the darkest values (ratio ≈ 1) fall near the boundaries between k-means regions.

144 | A branch and bound algorithm for computing k-nearest neighbors
- Fukunaga, Narendra
- 1975
Citation Context: ...randomized kd-trees as a means to speed up approximate nearest-neighbor search. They perform only limited tests, but we have found this to work well over a wide range of problems. Fukunaga and Narendra (Fukunaga and Narendra, 1975) propose that nearest-neighbor matching be performed with a tree structure constructed by clustering the data points with the k-means algorithm into k disjoint groups and then recursively doing the same for each of the groups.
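The recursive clustering construction this snippet attributes to Fukunaga and Narendra can be sketched as follows; a toy Lloyd's iteration stands in for a production k-means, and all names here are ours:

```python
import numpy as np

def kmeans(points, k, iters=10, rng=None):
    """Minimal Lloyd's k-means: returns (centers, labels)."""
    rng = rng or np.random.default_rng(0)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center, then recompute means
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def build_kmeans_tree(points, branching=4, leaf_size=8, rng=None):
    """Recursively partition points into `branching` disjoint groups
    with k-means, then do the same for each group."""
    if len(points) <= max(leaf_size, branching):
        return {"leaf": points}
    centers, labels = kmeans(points, branching, rng=rng)
    children = []
    for j in range(branching):
        sub = points[labels == j]
        if len(sub) == len(points):       # degenerate split; stop recursing
            return {"leaf": points}
        children.append(build_kmeans_tree(sub, branching, leaf_size, rng))
    return {"centers": centers, "children": children}

data = np.random.default_rng(2).standard_normal((100, 2))
tree = build_kmeans_tree(data, branching=3, leaf_size=10)
```

Because each internal node stores cluster means, this construction needs a vector space; replacing the means with representative data points (as in Brin's GNAT, described above) lifts it to general metric spaces.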

113 | An Investigation of Practical Approximate Nearest Neighbor Algorithms
- Liu, Moore, et al.
- 2004
Citation Context: ...Access Tree, in which he uses some of the data points as the cluster centers instead of computing the cluster mean points. This change allows the tree to be defined in a general metric space. Liu et al. (Liu et al., 2004) propose a new kind of metric tree that allows an overlap between the children of each node, called the spill-tree. However, our experiments so far have found that randomized kd-trees provide higher performance while requiring less memory.

89 | Optimised KD-trees for fast image descriptor matching
- Silpa-Anan, Hartley
- 2008
Citation Context: ...a similar kd-tree based algorithm, but use a stopping criterion based on examining a fixed number Emax of leaf nodes, which can give better performance than the ε-approximate cutoff. Silpa-Anan and Hartley (Silpa-Anan and Hartley, 2008) propose the use of multiple randomized kd-trees as a means to speed up approximate nearest-neighbor search. They perform only limited tests, but we have found this to work well over a wide range of problems.
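The multiple-randomized-kd-trees idea in this snippet rests on making each tree's split choices differ. A common construction (a sketch under our assumptions, not the Silpa-Anan and Hartley code) draws the split dimension at random from the few highest-variance dimensions:

```python
import numpy as np

def build_randomized_kdtree(points, rng, top_d=5):
    """One randomized kd-tree: split on a dimension drawn at random from
    the top_d highest-variance dimensions, so each tree in a forest
    partitions the data differently."""
    if len(points) <= 1:
        return {"leaf": points}
    variances = points.var(axis=0)
    candidates = np.argsort(variances)[-top_d:]   # top-variance dimensions
    dim = int(rng.choice(candidates))
    points = points[points[:, dim].argsort()]     # median split on that dim
    mid = len(points) // 2
    return {"dim": dim, "split": float(points[mid, dim]),
            "left": build_randomized_kdtree(points[:mid], rng, top_d),
            "right": build_randomized_kdtree(points[mid:], rng, top_d)}

# A forest of such trees is queried together, conventionally with a single
# priority queue shared across all trees; their errors tend to be
# complementary because the partitions differ.
rng = np.random.default_rng(1)
data = rng.standard_normal((64, 8))
forest = [build_randomized_kdtree(data, rng) for _ in range(4)]
```

The number of trees is one of the parameters whose optimal value the paper's automatic configuration is meant to find.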

53 | An algorithm for finding best matches in logarithmic expected time
- Freidman, Bentley, et al.
- 1977
Citation Context: ...a factor of 1,000 times relative to linear search while still identifying 95% of the correct nearest neighbors. [2 Previous Research] The most widely used algorithm for nearest-neighbor search is the kd-tree (Freidman et al., 1977), which works well for exact nearest neighbor search in low-dimensional data, but quickly loses its effectiveness as dimensionality increases. Arya et al. (Arya et al., 1998) modify the original kd-tree algorithm to use it for approximate matching.

35 | Efficient Clustering and Matching for Object Class Recognition
- Leibe, Mikolajczyk, et al.
- 2006
Citation Context: ...a hierarchical k-means tree similar to that proposed by Fukunaga and Narendra (Fukunaga and Narendra, 1975). In (Leibe et al., 2006) the authors propose an efficient method for clustering and matching features in large datasets. They compare several clustering methods: k-means clustering, agglomerative clustering, and a combined partitional-agglomerative algorithm.

17 | Improving descriptors for fast tree matching by optimal linear projection
- Mikolajczyk, Matas
- 2007
Citation Context: ...clustering and matching features in large datasets. They compare several clustering methods: k-means clustering, agglomerative clustering, and a combined partitional-agglomerative algorithm. Similarly, (Mikolajczyk and Matas, 2007) evaluates the nearest neighbor matching performance for several tree structures, including the kd-tree, the hierarchical k-means tree, and the agglomerative tree. We have used these experiments to ...

16 | Localisation using an image-map
- Silpa-Anan, Hartley
- 2004
Citation Context: ...has the highest known performance. For other datasets, we have found that an algorithm that uses multiple randomized kd-trees provides the best results. This algorithm has only been proposed recently (Silpa-Anan and Hartley, 2004; Silpa-Anan and Hartley, 2008) and has not been widely tested. Our results show that once optimal parameter values have been determined this algorithm often gives an order of magnitude improvement compared ...