Online Learning with Kernels (2003)

Download Links

  • [mlg.anu.edu.au]
  • [alex.smola.org]
  • [omega.albany.edu:8008]
  • [axiom.anu.edu.au]
  • [users.cecs.anu.edu.au]
  • [www-2.cs.cmu.edu]
  • [eprints.pascal-network.org]
  • [books.nips.cc]
  • [papers.nips.cc]

  • Other Repositories/Bibliography

  • DBLP

by Jyrki Kivinen, Alexander J. Smola, Robert C. Williamson
Citations: 2826 (123 self)

BibTeX

@MISC{Kivinen03onlinelearning,
    author = {Jyrki Kivinen and Alexander J. Smola and Robert C. Williamson},
    title = {Online Learning with Kernels},
    year = {2003}
}

Abstract

Kernel-based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting, where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little use of these methods in an online setting suitable for real-time applications. In this paper we consider online learning in a Reproducing Kernel Hilbert Space. By considering classical stochastic gradient descent within a feature space, and the use of some straightforward tricks, we develop simple and computationally efficient algorithms for a wide range of problems such as classification, regression, and novelty detection. In addition to allowing the exploitation of the kernel trick in an online setting, we examine the value of large margins for classification in the online setting with a drifting target. We derive worst-case loss bounds and, moreover, we show the convergence of the hypothesis to the minimiser of the regularised risk functional. We present some experimental results that support the theory as well as illustrate the power of the new algorithms for online novelty detection.
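
The abstract describes stochastic gradient descent carried out directly in a reproducing kernel Hilbert space, with the hypothesis maintained as a kernel expansion f(x) = sum_i alpha_i k(x_i, x). The following is a minimal sketch of that idea in Python/NumPy for classification with a hinge loss and a Gaussian RBF kernel; the class name, loss choice, and parameter values are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)  (illustrative choice)
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return float(np.exp(-gamma * np.sum((x - z) ** 2)))

class KernelOnlineSGD:
    # Online kernel classifier: one stochastic gradient step per example on the
    # regularised hinge loss, with the hypothesis stored as a kernel expansion.
    def __init__(self, eta=0.1, lam=0.01, gamma=1.0):
        self.eta, self.lam, self.gamma = eta, lam, gamma  # step size, regularisation, kernel width
        self.support = []   # stored examples x_i
        self.alpha = []     # expansion coefficients alpha_i

    def predict(self, x):
        # f(x) = sum_i alpha_i k(x_i, x)
        return sum(a * rbf_kernel(xi, x, self.gamma)
                   for a, xi in zip(self.alpha, self.support))

    def update(self, x, y):
        # SGD step on  lam/2 * ||f||^2 + max(0, 1 - y f(x)):
        # the regulariser shrinks every coefficient, and a margin violation
        # adds a new expansion term centred at the current example.
        margin = y * self.predict(x)
        self.alpha = [(1.0 - self.eta * self.lam) * a for a in self.alpha]
        if margin < 1.0:
            self.support.append(np.asarray(x, dtype=float))
            self.alpha.append(self.eta * y)

A toy usage, streaming points from two synthetic blobs one at a time:

rng = np.random.default_rng(0)
model = KernelOnlineSGD(eta=0.2, lam=0.01, gamma=0.5)
for t in range(200):
    y = 1 if t % 2 == 0 else -1
    x = rng.normal(loc=2.0 * y, scale=1.0, size=2)
    model.update(x, y)
print(model.predict([2.0, 2.0]), model.predict([-2.0, -2.0]))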

Keyphrases

online learning, online setting, support vector machine, feature space, worst case loss bound, so-called kernel trick, classical stochastic gradient descent, various problem, online novelty detection, wide range, straightforward trick, efficient algorithm, real-time application, kernel trick, considerable success, training data, large margin idea, large margin, little use, new algorithm, reproducing kernel hilbert space, risk functional, experimental result
