Results 1 - 3 of 3
Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions
, 2008
Abstract

Cited by 457 (7 self)
In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas.
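The preprocess-then-query setup this abstract describes is what locality-sensitive hashing schemes implement in practice. As an illustration only (not the authors' specific algorithm, and with all names hypothetical), a minimal sign-random-projection sketch in Python: vectors on the same side of each random hyperplane get the same hash bit, so similar vectors tend to land in the same bucket.

```python
import random

def random_hyperplane_hash(dim, n_bits, seed=0):
    """Generate n_bits random hyperplanes (Gaussian normals) for sign hashing."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def signature(planes, vec):
    """Hash a vector to a bit tuple: one bit per hyperplane side."""
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def build_index(planes, dataset):
    """Bucket dataset vectors by signature; a query probes its own bucket."""
    index = {}
    for i, vec in enumerate(dataset):
        index.setdefault(signature(planes, vec), []).append(i)
    return index
```

At query time one computes the query's signature and scans only the matching bucket, trading exactness for speed; real schemes use several independent hash tables to boost recall.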
Lower Bound Techniques for Data Structures
, 2008
Abstract

Cited by 8 (0 self)
We describe new techniques for proving lower bounds on data-structure problems, with the following broad consequences:
• the first Ω(lg n) lower bound for any dynamic problem, improving on a bound that had been standing since 1989;
• for static data structures, the first separation between linear and polynomial space. Specifically, for some problems that have constant query time when polynomial space is allowed, we can show Ω(lg n / lg lg n) bounds when the space is O(n · polylog n).
Using these techniques, we analyze a variety of central data-structure problems, and obtain improved lower bounds for the following:
• the partial-sums problem (a fundamental application of augmented binary search trees);
• the predecessor problem (which is equivalent to IP lookup in Internet routers);
• dynamic trees and dynamic connectivity;
• orthogonal range stabbing;
• orthogonal range counting, and orthogonal range reporting;
• the partial match problem (searching with wildcards);
• (1 + ε)-approximate near neighbor on the hypercube;
• approximate nearest neighbor in the ℓ∞ metric.
Our new techniques lead to surprisingly non-technical proofs. For several problems, we obtain simpler proofs for bounds that were already known.
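The partial-sums problem in the list above asks for point updates and prefix-sum queries over an array. The classical O(log n)-per-operation upper bound that such Ω(lg n) lower bounds are measured against can be sketched (one standard realization, not the paper's construction) with a Fenwick, or binary indexed, tree:

```python
class Fenwick:
    """Binary indexed tree: O(log n) point update and O(log n) prefix sum."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node covering index i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s
```

The lower bounds in the abstract show that, in the cell-probe model, this logarithmic cost per operation cannot be beaten for the dynamic problem.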
Hardness of Nearest Neighbor under L-infinity
Abstract

Cited by 3 (0 self)
Recent years have seen a significant increase in our understanding of high-dimensional nearest neighbor search (NNS) for distances like the ℓ1 and ℓ2 norms. By contrast, our understanding of the ℓ∞ norm is now where it was (exactly) 10 years ago. In FOCS'98, Indyk proved the following unorthodox result: there is a data structure (in fact, a decision tree) of size O(n^ρ), for any ρ > 1, which achieves approximation O(log_ρ log d) for NNS in the d-dimensional ℓ∞ metric. In this paper, we provide results that indicate that Indyk's unconventional bound might in fact be optimal. Specifically, we show a lower bound for the asymmetric communication complexity of NNS under ℓ∞, which proves that this space/approximation trade-off is optimal for decision trees and for data structures with constant cell-probe complexity.
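For concreteness, the ℓ∞ (max-norm) distance under discussion, together with the exact linear-scan baseline that Indyk's decision-tree data structure is meant to improve on, can be sketched as follows (illustrative code, not from the paper):

```python
def linf_dist(a, b):
    """l-infinity (Chebyshev) distance: the largest coordinate difference."""
    return max(abs(x - y) for x, y in zip(a, b))

def nearest_linf(dataset, query):
    """Exact nearest neighbor under l-infinity by linear scan.

    This O(n * d) query baseline is what sublinear-query data structures
    (at the cost of approximation, as in the abstract above) try to beat.
    Ties go to the earliest index.
    """
    return min(range(len(dataset)),
               key=lambda i: linf_dist(dataset[i], query))
```

The hardness result above says that beating this scan under ℓ∞ apparently forces either large space or an approximation factor on the order of log_ρ log d.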