Connectionist Learning Procedures (1989)

by Geoffrey E. Hinton
Venue: Artificial Intelligence
Citations: 406 (9 self)

BibTeX

@MISC{Hinton89connectionistlearning,
    author = {Geoffrey E. Hinton},
    title = {Connectionist Learning Procedures},
    year = {1989}
}

Abstract

A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple gradient-descent learning procedures work well for small tasks, and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.
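
The abstract describes the gradient-descent weight update in words: each connection strength w is moved by a small step against the derivative dE/dw of a global error measure E. The sketch below illustrates that idea for one common instantiation, assuming a single hidden layer of logistic units, a squared-error measure, and the XOR task; the architecture, data, learning rate, and all variable names are illustrative choices, not details taken from the paper.

    # Minimal sketch: gradient descent on the connection strengths of a
    # small network with internal (hidden) units. Each strength is adjusted
    # in the direction that decreases a global error measure E.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy task (XOR), chosen because it cannot be solved without hidden units.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden strengths
    W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output strengths
    epsilon = 1.0                             # illustrative learning rate

    for epoch in range(10000):
        # Forward pass.
        H = sigmoid(X @ W1)                   # hidden-unit activities
        Y = sigmoid(H @ W2)                   # output-unit activities
        E = 0.5 * np.sum((Y - T) ** 2)        # global error measure

        # Backward pass: dE/dw for every connection (backpropagation).
        dY = (Y - T) * Y * (1 - Y)
        dW2 = H.T @ dY
        dH = (dY @ W2.T) * H * (1 - H)
        dW1 = X.T @ dH

        # Adjust each strength down the gradient of the error.
        W1 -= epsilon * dW1
        W2 -= epsilon * dW2

    print("final error:", E)
    print("outputs:", sigmoid(sigmoid(X @ W1) @ W2).ravel())

On a task this small the procedure usually converges in a few thousand full-batch steps; the convergence-rate and generalization issues raised in the abstract only become pressing on larger tasks.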

Keyphrases

connectionist learning procedure, connection strength, complex internal representation, new challenge, gradient-descent learning procedure, efficient learning procedure, learning procedure, several interesting gradient-descent procedure, convergence rate, small task, realistic task, generalization ability, internal unit, major goal, global measure, task domain, important feature
