Parallel Networks that Learn to Pronounce English Text (1987)

by Terrence J. Sejnowski, Charles R. Rosenberg
Venue: Complex Systems
Citations: 542 (5 self)
BibTeX

@MISC{Sejnowski87parallelnetworks,
    author = {Terrence J. Sejnowski and Charles R. Rosenberg},
    title = {Parallel Networks that Learn to Pronounce English Text},
    year = {1987}
}


Abstract

This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed human performance. (i) The learning follows a power law. (ii) The more words the network learns, the better it is at generalizing and correctly pronouncing new words. (iii) The performance of the network degrades very slowly as connections in the network are damaged: no single link or processing unit is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular task, but differ completely at the levels of synaptic strengths and single-unit responses. However, hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units. This suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
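The text-to-phoneme mapping the abstract describes — a fixed window of letters fed through shared hidden units to produce a phoneme for the window's centre letter — can be sketched as a toy backpropagation network. The window size, alphabet, phoneme inventory, and training data below are illustrative assumptions for a minimal sketch, not the paper's actual corpus or hyperparameters:

```python
# Toy sketch of a NETtalk-style network: a one-hot-coded window of
# letters is mapped through one hidden layer to the phoneme of the
# window's centre letter.  All sizes and data here are illustrative
# assumptions, not the paper's actual corpus or hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz_"   # '_' pads the window edges
PHONEMES = ["k", "ae", "t"]                # toy phoneme inventory
WINDOW = 7                                 # letters presented at once

def encode(window: str) -> np.ndarray:
    """One-hot encode a WINDOW-letter string into a flat input vector."""
    vec = np.zeros(WINDOW * len(ALPHABET))
    for i, ch in enumerate(window):
        vec[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

# Each sample: a window whose centre letter gets the given phoneme index,
# e.g. the centre of "___cat_" is 'c', pronounced /k/.
data = [("___cat_", 0), ("__cat__", 1), ("_cat___", 2)]
X = np.stack([encode(w) for w, _ in data])
y = np.array([p for _, p in data])

n_in, n_hidden, n_out = X.shape[1], 20, len(PHONEMES)
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

def forward(x):
    """Hidden tanh layer followed by a softmax output layer."""
    h = np.tanh(x @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return h, e / e.sum(axis=-1, keepdims=True)

for _ in range(2000):                      # plain batch gradient descent
    h, p = forward(X)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0   # softmax cross-entropy grad
    grad_h = grad_logits @ W2.T
    W2 -= 0.1 * h.T @ grad_logits
    W1 -= 0.1 * X.T @ (grad_h * (1.0 - h ** 2))

_, probs = forward(X)
preds = probs.argmax(axis=-1)
print([PHONEMES[i] for i in preds])
```

Because the hidden units are shared across all windows, the learned weights form distributed memory representations of letter-to-sound regularities, which is what makes the network's performance degrade gracefully when individual connections are removed.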

Keyphrases

pronounce english text, parallel network, observed human performance, power law, network model, particular task, processing unit, different network, massively-parallel network system, single-unit response, synaptic strength, english text, long-term retention, similar internal representation, new word, original training, single link, massed practice, invariant internal representation, letter-to-sound correspondence, memory representation, spaced practice, distributed representation
