
## The pyramid match kernel: Discriminative classification with sets of image features (2005)

### Download Links

- [www.cs.utexas.edu]
- [www.vision.jhu.edu]
- [vision.jhu.edu]
- [www.vision.caltech.edu]
- [pages.cs.wisc.edu]
- [web.eecs.umich.edu]
- [www.vis.uky.edu]
- [publications.csail.mit.edu]
- DBLP

### Other Repositories/Bibliography

Venue: ICCV

Citations: 537 (29 self)

### Citations

12873 | Statistical Learning Theory
- Vapnik
- 1998
Citation Context: ...tures, approximating the optimal correspondences between the sets’ features. Discriminative methods are known to represent complex decision boundaries very efficiently and generalize well to unseen data [24, 21]. For example, the Support Vector Machine (SVM) is a widely used approach to discriminative classification that finds the optimal separating hyperplane between two classes. Kernel functions, which mea...

10437 | Introduction to Algorithms
- Cormen, Leiserson, et al.
- 2009
Citation Context: ...h the same index are summed to form one entry. This sorting requires only O(dm + kd) time using the radix-sort algorithm, a linear-time sorting algorithm that is applicable to the integer bin indices [6]. The histogram pyramid that results is high-dimensional, but very sparse, with only O(m log D) non-zero entries that need to be stored. The complexity of K∆ is O(dm log D), since computing the inters...
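
The binning-and-intersection computation described in this context can be sketched in a few lines. This is a hedged illustration, not the authors' code: it assumes 1-D integer features in [0, D), and the function names, the `Counter`-based sparse pyramid, and the per-level weighting 1/2^i follow the paper's K∆ only in outline.

```python
from collections import Counter

def pyramid_histogram(points, D):
    """Multi-resolution histograms over [0, D): bin side doubles per level.

    Counters keep the pyramid sparse, matching the O(m log D) non-zero
    entries noted in the context above.
    """
    levels, side = [], 1
    while side <= D:  # roughly log D + 1 levels
        levels.append(Counter(p // side for p in points))
        side *= 2
    return levels

def pyramid_match(X, Y, D):
    """Weighted sum of *new* matches found at each level.

    A minimal sketch: histogram intersection per level, with matches
    first found at level i weighted by 1 / 2^i (coarser = cheaper).
    """
    PX, PY = pyramid_histogram(X, D), pyramid_histogram(Y, D)
    score, prev = 0.0, 0
    for i, (hx, hy) in enumerate(zip(PX, PY)):
        inter = sum(min(hx[b], hy[b]) for b in hx)  # histogram intersection
        score += (inter - prev) / (2 ** i)          # count only new matches
        prev = inter
    return score
```

A set matched against itself scores its own cardinality, since every feature matches at the finest level.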

8750 | Distinctive image features from scale-invariant keypoints
- Lowe
- 2004
Citation Context: ...earance variation. We used the pyramid match kernel with a one-versus-all SVM classifier on the latest version of the database (which does not contain duplicated images). We used the SIFT detector of [13] and 10-dimensional PCA-SIFT descriptors [11] to form the input feature sets, which ranged in size from 14 to 4,118 features, with an average of 454 features per image. We set T = 2. We trained our algo...

6250 | LIBSVM: a library for support vector machines, 2001. Available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
- Chang, Lin
Citation Context: ...xamples’ relative positions in an embedded space, and quadratic programming is used to find the optimal separating hyperplane in this space between the two classes. We use the implementation given by [5]. Local affine- or scale-invariant feature descriptors extracted from a sparse set of interest points in an image have been shown to be an effective, compact representation (e.g. [17, 19]). This is a...

2320 | An introduction to Support Vector Machines and other kernel-based learning methods
- Cristianini, Shawe-Taylor
- 1999
Citation Context: ...challenging. While generative methods have had some success, kernel-based discriminative methods are known to represent complex decision boundaries very efficiently and generalize well to unseen data [26, 8]. For example, the Support Vector Machine (SVM) is a widely used approach to discriminative classification that finds the optimal separating hyperplane between two classes. Kernel functions, which mea...

1769 | Shape matching and object recognition using shape contexts
- Belongie, Puzicha
Citation Context: ...es of the same object or scene under different viewpoints, poses, or lighting conditions. Most approaches, however, perform recognition with local feature representations using nearest-neighbor (e.g., [1, 8, 22, 2]) or voting-based classifiers followed by an alignment step (e.g., [13, 15]); both may be impractical for large training sets, since their classification times increase with the number of training exa...

1604 | Video Google: A text retrieval approach to object matching in videos - Sivic, Zisserman - 2003

1602 | Color Indexing
- Swain, Ballard
- 1991
Citation Context: ...hich is computed by binning data points into discrete regions of increasingly larger size. Single-level histograms have been used in various visual recognition systems, one of the first being that of [23], where the intersection of global color histograms was used to compare images. Pyramids have been shown to be a useful representation in a wide variety of image processing tasks – see [9] for a summa...
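
The global color-histogram comparison attributed to [23] reduces to a one-liner. A minimal sketch (the bin layout and the normalization by the model histogram's total mass are illustrative assumptions, not details taken from this page):

```python
def histogram_intersection(image_hist, model_hist):
    """Swain & Ballard-style match score: mass shared by two color
    histograms, normalized by the model histogram's total mass, so a
    perfect match scores 1.0."""
    shared = sum(min(i, m) for i, m in zip(image_hist, model_hist))
    return shared / sum(model_hist)
```

For example, `histogram_intersection([2, 3, 5], [4, 1, 5])` shares 2 + 1 + 5 = 8 of the model's 10 counts, scoring 0.8.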

1221 | Kernel Methods for Pattern Analysis
- Shawe-Taylor, Cristianini
- 2004
Citation Context: ...tures, approximating the optimal correspondences between the sets’ features. Discriminative methods are known to represent complex decision boundaries very efficiently and generalize well to unseen data [24, 21]. For example, the Support Vector Machine (SVM) is a widely used approach to discriminative classification that finds the optimal separating hyperplane between two classes. Kernel functions, which mea...

762 | Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories
- Fei-Fei, Fergus, Perona
- 2004
Citation Context: ...ass recognition accuracy of 48% on this database using a nearest-neighbor classifier with a correspondence-based distance and manually segmented training examples. A generative model approach given in [10] obtained a recognition rate of 16%. Chance performance would be less than 1% on this database. We ran our method on the 101 Objects database with a one-versus-all SVM classifier. We used the SIFT int...

703 | The earth mover’s distance as a metric for image retrieval
- Rubner, Tomasi, et al.
Citation Context: ...ed similarity between corresponding points. We compared our kernel’s outputs to those produced by the optimal partial matching obtained via a linear programming solution to the transportation problem [19]. This optimal solution requires time exponential in the number of features in the worst case, although it often exhibits polynomial-time behavior in practice. In contrast, the pyramid kernel’s co...
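
The optimal matching used here as a baseline can be made concrete with a brute-force version for tiny equal-size sets of 1-D features. This sketch only illustrates the exponential worst case the context mentions; it is not the linear-programming transportation solver the comparison actually used:

```python
from itertools import permutations

def optimal_matching_cost(X, Y):
    """Exact least-cost one-to-one matching between two equal-size
    1-D feature sets, found by trying every pairing. The factorial
    search space shows why the worst case is exponential; practical
    baselines solve the transportation problem by linear programming."""
    assert len(X) == len(Y)
    return min(sum(abs(x - y) for x, y in zip(X, perm))
               for perm in permutations(Y))
```

With `X = [1, 5]` and `Y = [6, 2]`, pairing 1↔2 and 5↔6 gives the minimum total cost of 2.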

571 | Pca-sift: A more distinctive representation for local image descriptors
- Ke, Sukthankar
- 2004
Citation Context: ...e the desired number of features. We evaluated our method on this same subset of the ETH-80 database under the same conditions provided in [7], and it achieved a recognition rate of 83% using PCA-SIFT [11] features from all Harris-detected interest points (averages 153 points per image) and T = 8. Restricting ourselves to an average of 40 interest points yields a recognition rate of 73%. Thus our method...

413 | Shape matching and object recognition using low distortion correspondence
- Berg, Berg, et al.
Citation Context: ...es of the same object or scene under different viewpoints, poses, or lighting conditions. Most approaches, however, perform recognition with local feature representations using nearest-neighbor (e.g., [1, 8, 22, 2]) or voting-based classifiers followed by an alignment step (e.g., [13, 15]); both may be impractical for large training sets, since their classification times increase with the number of training exa...

400 | Indexing based on scale invariant interest points
- Mikolajczyk, Schmid
- 2001
Citation Context: ...nditions. Most approaches, however, perform recognition with local feature representations using nearest-neighbor (e.g., [1, 8, 22, 2]) or voting-based classifiers followed by an alignment step (e.g., [13, 15]); both may be impractical for large training sets, since their classification times increase with the number of training examples. An SVM, on the other hand, identifies a sparse subset of the trainin...

373 | Learning to detect objects in images via a sparse, part-based representation
- Agarwal, Awan, et al.
Citation Context: ...ypes, choosing how many there should be, and updating the prototypes properly when new types of data are encountered. A learning architecture based on a sparse network of linear functions is given in [1] and applied to object detection using binary feature vectors that encode the presence or absence of a pre-established set of object parts. The method only makes binary decisions (detection, not multi...

287 | Object recognition with features inspired by visual cortex - Serre, Wolf, et al. - 2005

175 | Recognition with Local Features: the Kernel Recipe
- Wallraven, Caputo, et al.
- 2003
Citation Context: ...ally, ignoring potential dependencies conveyed by features within one set, our similarity measure captures the features’ joint statistics. Other approaches to this problem have recently been proposed [25, 14, 3, 12, 27, 16, 20], but unfortunately each of these techniques suffers from some number of the following drawbacks: computational complexities that make large feature set sizes infeasible; limitations to parametric dis...

144 | A survey of kernels for structured data
- Gärtner
Citation Context: ...ecialized kernels that can more fully leverage these tools for situations where the data cannot be naturally represented by a Euclidean vector space, such as graphs, strings, or trees. See the survey [11] for an overview of different types of kernels. Several researchers have designed similarity measures that operate on sets of unordered features. See Table 1 for a concise comparison of the approaches...

132 | A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications
- Moreno, Ho, et al.
- 2003
Citation Context: ...ally, ignoring potential dependencies conveyed by features within one set, our similarity measure captures the features’ joint statistics. Other approaches to this problem have recently been proposed [25, 14, 3, 12, 27, 16, 20], but unfortunately each of these techniques suffers from some number of the following drawbacks: computational complexities that make large feature set sizes infeasible; limitations to parametric dis...

127 | A Kernel between Sets of Vectors
- Kondor, Jebara
- 2003
Citation Context: ...ally, ignoring potential dependencies conveyed by features within one set, our similarity measure captures the features’ joint statistics. Other approaches to this problem have recently been proposed [25, 14, 3, 12, 27, 16, 20], but unfortunately each of these techniques suffers from some number of the following drawbacks: computational complexities that make large feature set sizes infeasible; limitations to parametric dis...

104 | Learning Over Sets Using Kernel Principal Angles
- Wolf, Shashua

96 | SVMs for histogram-based image classification
- Chapelle, Haffner, et al.
Citation Context: ...t recognition, they have generally used global image features – ordered features of equal length measured from the image as a whole, such as color or grayscale histograms or vectors of raw pixel data [5, 18, 17]. Such global representations are known to be sensitive to real-world imaging conditions, such as occlusions, pose...

89 | Fast Contour Matching Using Approximate Earth Mover's Distance
- Grauman, Darrell
- 2004
Citation Context: ...es of the same object or scene under different viewpoints, poses, or lighting conditions. Most approaches, however, perform recognition with local feature representations using nearest-neighbor (e.g., [1, 8, 22, 2]) or voting-based classifiers followed by an alignment step (e.g., [13, 15]); both may be impractical for large training sets, since their classification times increase with the number of training exa...

71 | Fast Image Retrieval via Embeddings
- Indyk, Thaper
Citation Context: ... the intersection of global color histograms was used to compare images. Pyramids have been shown to be a useful representation in a wide variety of image processing tasks – see [9] for a summary. In [10], multi-resolution histograms are compared with L1 distance to approximate a least-cost matching of equal-mass global color histograms for nearest neighbor image retrievals. This work inspired our use ...

62 | Mercer Kernels for Object Recognition with Local Features
- Lyu
- 2005
Citation Context: ...ally, ignoring potential dependencies conveyed by features within one set, our similarity measure captures the features’ joint statistics. Other approaches to this problem have recently been proposed [27, 18, 4, 16, 28, 20], but unfortunately each of these techniques suffers from some number of the following drawbacks: computational complexities that make large feature set...

59 | Multiresolution histograms and their use for recognition
- Hadjidemetriou, Grossberg, et al.
Citation Context: ...ng that of [23], where the intersection of global color histograms was used to compare images. Pyramids have been shown to be a useful representation in a wide variety of image processing tasks – see [9] for a summary. In [10], multi-resolution histograms are compared with L1 distance to approximate a least-cost matching of equal-mass global color histograms for nearest neighbor image retrievals. This...

53 | Building kernels from binary strings for image matching
- Odone, Barla, et al.
- 2005
Citation Context: ...t recognition, they have generally used global image features – ordered features of equal length measured from the image as a whole, such as color or grayscale histograms or vectors of raw pixel data [5, 18, 17]. Such global representations are known to be sensitive to real-world imaging conditions, such as occlusions, pose...

48 | View-based 3D object recognition with support vector machines
- Roobaert, Van Hulle
- 1999
Citation Context: ...t recognition, they have generally used global image features – ordered features of equal length measured from the image as a whole, such as color or grayscale histograms or vectors of raw pixel data [5, 18, 17]. Such global representations are known to be sensitive to real-world imaging conditions, such as occlusions, pose...

44 | Learning a discriminative classifier using shape context distances
- Zhang, Malik
- 2003
Citation Context: ...l examples from each class, and then represent examples in terms of their distances to each prototype; standard algorithms that handle vectors in a Euclidean space are then applicable. The authors of [28] build such a classifier for handwritten digits, and use the shape context distance of [1] as the measure of similarity. The issues faced by such a prototype-based method are determining which example...

29 | Object Categorization with SVM: Kernels for Local Features
- Eichhorn, Chapelle
- 2004
Citation Context: ...tional cost than other state-of-the-art approaches. All run-times reported below include the time needed to compute both the pyramids and the weighted intersections. A performance evaluation given in [7] compares the methods of [12, 27, 25] in the context of an object categorization task using images from the publicly available ETH-80 database. The experiment uses eight object classes, with 10 uniqu...

22 | Shape Matching and Object Recognition
- Berg
- 2005
Citation Context: ...ber of examples in that class. This is an improvement over the 45% performance achieved by the correspondence-based method of Berg et al. [3], and close to the (best) 52% performance reported in Berg [2] using a voting approach. The method of Berg et al. [3] measures similarity between sets of geometric blur descriptors by approximating the optimal low-distortion correspondences via linear programmin...

20 | LIBSVM: a library for SVMs
- Chang, Lin
- 2011
Citation Context: ...xamples’ relative positions in an embedded space, and quadratic programming is used to find the optimal separating hyperplane between the two classes in this space. We use the implementation given by [4]. When kernel matrices have dominant diagonals we use the transformation suggested in [26]: a subpolynomial kernel is applied to the original kernel values, followed by an empirical kernel mapping th...

15 | Mercer kernels for object recognition with local features
- Lyu
- 2005

14 | Dealing with large diagonals in kernel matrices
- Weston, Schölkopf, et al.
- 2003
Citation Context: ...nd the optimal separating hyperplane between the two classes in this space. We use the implementation given by [4]. When kernel matrices have dominant diagonals we use the transformation suggested in [26]: a subpolynomial kernel is applied to the original kernel values, followed by an empirical kernel mapping that embeds the distance measure into a feature space. Local affine- or scale-invariant fea...
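
The dominant-diagonal fix described in this context can be sketched as follows. This is a hedged reading of the recipe, not the papers' exact code: the power p and the choice of kernel-matrix rows as the empirical map are illustrative assumptions.

```python
import numpy as np

def flatten_dominant_diagonal(K, p=0.5):
    """Shrink a dominant diagonal by applying a subpolynomial kernel
    K -> K^p (0 < p < 1) element-wise, then build an empirical kernel
    map: example i is represented by its row of similarities to all
    training examples, and the new kernel is the inner product of
    those row representations."""
    Ks = np.sign(K) * np.abs(K) ** p  # element-wise subpolynomial transform
    return Ks @ Ks.T                  # Gram matrix of the empirical map
```

On a toy matrix with a dominant diagonal, e.g. `[[4, 1], [1, 4]]` with p = 0.5, the transform shrinks the diagonal-to-off-diagonal ratio from 4:1 to 5:4.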

14 | Exploiting unlabelled data for hybrid object classification - Holub, Welling, et al. - 2005

12 | Algebraic set kernels with applications to inference over local image representations
- Shashua, Hazan
- 2005

7 | Non-Mercer Kernels for SVM Object Recognition
- Boughorbel, Tarel, et al.
- 2004