Information-theoretic metric learning (2007)

by Jason Davis, Brian Kulis, Suvrit Sra, Inderjit Dhillon
Venue: NIPS 2006 Workshop on Learning to Compare Examples
Citations: 359 (15 self)

BibTeX

@INPROCEEDINGS{Davis07information-theoreticmetric,
    author = {Jason Davis and Brian Kulis and Suvrit Sra and Inderjit Dhillon},
    title = {Information-theoretic metric learning},
    booktitle = {NIPS 2006 Workshop on Learning to Compare Examples},
    year = {2007}
}

Abstract

We formulate the metric learning problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the Mahalanobis distance function. Via a surprising equivalence, we show that this problem can be solved as a low-rank kernel learning problem. Specifically, we minimize the Burg divergence of a low-rank kernel to an input kernel, subject to pairwise distance constraints. Our approach has several advantages over existing methods. First, we present a natural information-theoretic formulation for the problem. Second, the algorithm utilizes the methods developed by Kulis et al. [6], which do not involve any eigenvector computation; in particular, our method runs faster than most existing techniques. Third, the formulation offers insights into connections between metric learning and kernel learning.
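
The abstract states the formulation but not the optimization problem itself. Under notation commonly used for Mahalanobis metric learning (the symbols A, A_0, u, l, S, and D below are assumptions for illustration, not taken from the text above), the problem it describes can be sketched as:

    \min_{A \succeq 0} \ \mathrm{KL}\bigl(p(x; A_0)\,\|\,p(x; A)\bigr)
        \;=\; \tfrac{1}{2}\, D_{\mathrm{Burg}}(A, A_0),
    \qquad
    D_{\mathrm{Burg}}(A, A_0) = \operatorname{tr}(A A_0^{-1}) - \log\det(A A_0^{-1}) - n,

    \text{subject to}\quad
    d_A(x_i, x_j) = (x_i - x_j)^{\top} A\, (x_i - x_j) \le u \ \ \text{for } (i,j) \in S,
    \qquad
    d_A(x_i, x_j) \ge \ell \ \ \text{for } (i,j) \in D,

where n is the data dimensionality and p(x; A) denotes a Gaussian whose covariance is A^{-1}. The equality between the differential relative entropy of two equal-mean Gaussians and half the Burg (LogDet) matrix divergence of their precision matrices is the equivalence the abstract refers to; in the kernel view, K = X^T A X plays the role of the low-rank kernel learned from the input kernel K_0 = X^T A_0 X.

The claim that no eigenvector computation is needed comes from the rank-one structure of Bregman projections under the Burg divergence. Below is a minimal, simplified Python sketch that cycles through violated constraints and projects onto one at a time; it omits the dual-variable bookkeeping of the full method of Kulis et al. [6], so it is illustrative rather than a faithful implementation, and the function and parameter names are invented for the example.

import numpy as np

def logdet_metric_sketch(X, similar, dissimilar, u, ell, n_sweeps=50):
    """Sketch of cyclic Bregman projections for Burg/LogDet-regularized
    Mahalanobis metric learning. Illustrative only; not the paper's exact
    algorithm (no slack or dual-variable handling).

    X          : (n_points, d) data matrix
    similar    : (i, j) pairs that should satisfy d_A(x_i, x_j) <= u
    dissimilar : (i, j) pairs that should satisfy d_A(x_i, x_j) >= ell
    """
    d = X.shape[1]
    A = np.eye(d)  # prior A_0 = I; updates keep A symmetric positive definite
    constraints = [(i, j, u, True) for (i, j) in similar] + \
                  [(i, j, ell, False) for (i, j) in dissimilar]
    for _ in range(n_sweeps):
        for i, j, bound, is_similar in constraints:
            v = X[i] - X[j]
            p = float(v @ A @ v)          # current distance d_A(x_i, x_j)
            if p <= 1e-12:
                continue
            violated = p > bound if is_similar else p < bound
            if not violated:
                continue
            # Bregman projection of A onto {A : v^T A v = bound} under the
            # LogDet divergence is the rank-one update A <- A + beta A v v^T A,
            # with beta chosen so the constraint holds exactly afterwards.
            # Only matrix-vector products are needed -- no eigendecomposition.
            beta = (bound - p) / (p * p)
            Av = A @ v
            A = A + beta * np.outer(Av, Av)
    return A

Each projection costs O(d^2), which is consistent with the running-time remark in the abstract; a usable solver would add the dual-variable corrections required for inequality constraints and a convergence check.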

Keyphrases

information-theoretic metric learning, kernel learning, surprising equivalence, differential relative entropy, low-rank kernel learning problem, natural information-theoretic formulation, Burg divergence, low-rank kernel, input kernel, metric learning problem, metric learning, multivariate Gaussians, running time, distance constraints, several advantages, eigenvector computation, Mahalanobis distance function
