Map-Reduce for Machine Learning on Multicore (2006)

by Cheng-tao Chu, Yuanyuan Yu, Sang Kyun Kim, Gary Bradski, Kunle Olukotun, Yi-an Lin, Andrew Y. Ng
Venue: Advances in Neural Information Processing Systems 19 (NIPS 2006)

BibTeX

@INPROCEEDINGS{Chu06mapreduce,
    author = {Cheng-tao Chu and Yuanyuan Yu and Sang Kyun Kim and Gary Bradski and Kunle Olukotun and Yi-an Lin and Andrew Y. Ng},
    title = {Map-Reduce for Machine Learning on Multicore},
    booktitle = {Advances in Neural Information Processing Systems 19 (NIPS)},
    year = {2006}
}


Abstract

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed-up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speed-up technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression …
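The "summation form" idea the abstract describes can be illustrated with ordinary least-squares regression, whose sufficient statistics A = Σ xᵢxᵢᵀ and b = Σ xᵢyᵢ decompose over data chunks. The sketch below is not the paper's implementation: names such as map_stats and parallel_linear_regression are invented for illustration, and the map phase is run sequentially rather than dispatched to actual cores.

```python
import numpy as np
from functools import reduce

def map_stats(chunk):
    """Map phase: partial sufficient statistics for one data chunk."""
    X, y = chunk
    return X.T @ X, X.T @ y  # partial A and partial b

def reduce_stats(s1, s2):
    """Reduce phase: combine two partial statistics by summation."""
    return s1[0] + s2[0], s1[1] + s2[1]

def parallel_linear_regression(X, y, n_cores=4):
    # Shard the data into one chunk per (simulated) core.
    chunks = list(zip(np.array_split(X, n_cores),
                      np.array_split(y, n_cores)))
    # Map: per-chunk partial sums. A real multicore implementation
    # would run these on separate cores; here they run in a loop.
    partials = [map_stats(c) for c in chunks]
    # Reduce: merge partials, then do the single O(d^3) solve
    # of the normal equations A theta = b.
    A, b = reduce(reduce_stats, partials)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    theta_true = np.array([1.0, -2.0, 0.5])
    y = X @ theta_true + 0.01 * rng.normal(size=1000)
    print(parallel_linear_regression(X, y))  # close to theta_true
```

The same decomposition carries over to the other algorithms the abstract lists: whenever the statistics an algorithm needs are sums over training examples, the sums can be sharded across cores in the map phase and merged in the reduce phase.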

Keyphrases

machine learning, potential speed-up, linear regression, good programming framework, distinct contrast, single algorithm, summation form, multicore era, many cores, unified way, logistic regression, multicore computer, applicable parallel programming method, statistical query model
