CiteSeerX
Results 1 - 10 of 862,695

A tutorial on learning with Bayesian networks

by David Heckerman - LEARNING IN GRAPHICAL MODELS , 1995
"... ..."
Abstract - Cited by 1063 (3 self) - Add to MetaCart
Abstract not found

Parallel Networks that Learn to Pronounce English Text

by Terrence J. Sejnowski, Charles R. Rosenberg - COMPLEX SYSTEMS , 1987
"... This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed h ..."
Cited by 548 (5 self)
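
A minimal sketch of the NETtalk idea, classifying the phoneme for the centre letter of a sliding 7-letter window with a small feed-forward network. The 7-letter window follows the paper; the toy lexicon, the one-phoneme-per-letter alignment, the layer sizes, and the use of scikit-learn's MLPClassifier are assumptions made for illustration only.

    # Rough sketch of a NETtalk-style letter-window -> phoneme classifier.
    # Toy lexicon and scikit-learn MLP are assumptions, not the original setup.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    LETTERS = "abcdefghijklmnopqrstuvwxyz_"          # '_' pads word boundaries
    WINDOW = 7                                        # centre letter plus 3 of context each side

    def encode_window(word, i):
        """One-hot encode the 7-letter window centred on position i."""
        padded = "_" * 3 + word + "_" * 3
        win = padded[i:i + WINDOW]
        vec = np.zeros(WINDOW * len(LETTERS))
        for k, ch in enumerate(win):
            vec[k * len(LETTERS) + LETTERS.index(ch)] = 1.0
        return vec

    # Tiny illustrative "aligned" lexicon: one phoneme symbol per letter.
    lexicon = {"cat": "k@t", "cab": "k@b", "bat": "b@t", "tab": "t@b"}
    X, y = [], []
    for word, phones in lexicon.items():
        for i, ph in enumerate(phones):
            X.append(encode_window(word, i))
            y.append(ph)

    net = MLPClassifier(hidden_layer_sizes=(80,), max_iter=2000, random_state=0)
    net.fit(np.array(X), y)
    print(net.predict([encode_window("bab", 0)]))     # phoneme guess for 'b' in "bab"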

Learning and development in neural networks: The importance of starting small

by Jeffrey L. Elman - Cognition , 1993
"... It is a striking fact that in humans the greatest learnmg occurs precisely at that point in time- childhood- when the most dramatic maturational changes also occur. This report describes possible synergistic interactions between maturational change and the ability to learn a complex domain (language ..."
Cited by 518 (18 self)

Learning Bayesian networks: The combination of knowledge and statistical data

by David Heckerman, David M. Chickering - Machine Learning , 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simpl ..."
Cited by 1142 (36 self)
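
Metrics of this kind are decomposable: the network score is a sum of per-family terms, so a structure search only re-scores the families an edge change touches. A rough sketch of that decomposability over binary data follows; a BIC-style family score stands in for the paper's BDe metric, which combines a prior network and an equivalent sample size with the data and is not reproduced here.

    # Sketch of a decomposable structure score for a Bayesian network over
    # binary variables. BIC stands in for the paper's BDe metric (assumption).
    import itertools
    import numpy as np

    def family_score(data, child, parents):
        """Log-likelihood of 'child' given 'parents' minus a BIC penalty."""
        n = len(data)
        ll = 0.0
        for pa_vals in itertools.product([0, 1], repeat=len(parents)):
            mask = np.all(data[:, parents] == pa_vals, axis=1) if parents else np.ones(n, bool)
            rows = data[mask]
            if len(rows) == 0:
                continue
            for v in (0, 1):
                c = np.sum(rows[:, child] == v)
                if c:
                    ll += c * np.log(c / len(rows))
        penalty = 0.5 * np.log(n) * (2 ** len(parents))   # one free parameter per parent configuration
        return ll - penalty

    def network_score(data, structure):
        """structure: dict child -> list of parent indices; score is a sum over families."""
        return sum(family_score(data, c, p) for c, p in structure.items())

    rng = np.random.default_rng(0)
    x0 = rng.integers(0, 2, 500)
    x1 = (x0 ^ (rng.random(500) < 0.1)).astype(int)        # x1 is a noisy copy of x0
    data = np.column_stack([x0, x1])
    print(network_score(data, {0: [], 1: [0]}))            # true structure scores higher
    print(network_score(data, {0: [], 1: []}))             # empty structure scores lower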

Dynamic Bayesian Networks: Representation, Inference and Learning

by Kevin Patrick Murphy , 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have bee ..."
Abstract - Cited by 758 (3 self) - Add to MetaCart
been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete
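
The factored state space is easy to see in code: with D binary state variables, a flat HMM needs a 2^D x 2^D transition matrix, while a DBN keeps one small conditional per variable and leaves the joint transition implicit. A toy sketch, assuming each variable at time t depends only on itself and one neighbour at time t-1:

    # Sketch: factored transition model (DBN-style) vs. a flat HMM transition.
    # Toy assumption: each binary state variable X_d(t) depends on X_d(t-1)
    # and on its neighbour X_{d-1}(t-1) (indices wrap around).
    import itertools
    import numpy as np

    D = 4                                   # number of binary state variables
    rng = np.random.default_rng(0)

    # One small CPT per variable: P(X_d(t)=1 | X_d(t-1), X_{d-1}(t-1)) -> shape (2, 2)
    cpts = [rng.uniform(0.1, 0.9, size=(2, 2)) for _ in range(D)]

    def factored_transition_prob(prev, nxt):
        """Joint P(next | prev) as a product of per-variable conditionals."""
        p = 1.0
        for d in range(D):
            p1 = cpts[d][prev[d], prev[d - 1]]          # probability that X_d(t) = 1
            p *= p1 if nxt[d] == 1 else 1.0 - p1
        return p

    # The equivalent flat HMM transition matrix has 2^D x 2^D entries.
    states = list(itertools.product([0, 1], repeat=D))
    T = np.array([[factored_transition_prob(s, t) for t in states] for s in states])
    print(T.shape)                          # (16, 16): the explicit matrix a DBN never builds
    print(np.allclose(T.sum(axis=1), 1.0))  # rows are proper distributions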

The cascade-correlation learning architecture

by Scott E. Fahlman, Christian Lebiere - Advances in Neural Information Processing Systems 2 , 1990
"... Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creatin ..."
Cited by 796 (6 self)
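
The growth loop described above, fit the output weights, then recruit a hidden unit whose activation correlates with the remaining error and freeze its input weights, can be sketched roughly as follows. This single-candidate version with plain gradient steps is a simplification; the original trains a pool of candidates with Quickprop.

    # Simplified single-candidate Cascade-Correlation sketch (regression, tanh hidden units).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) * X[:, 1]                 # toy regression target

    def fit_output(H, y, l2=1e-3):
        """Linear output layer on the current (frozen) features H, bias included."""
        Hb = np.column_stack([H, np.ones(len(H))])
        w = np.linalg.solve(Hb.T @ Hb + l2 * np.eye(Hb.shape[1]), Hb.T @ y)
        return w, Hb @ w

    H = X.copy()                                      # start with no hidden units
    for unit in range(5):
        w_out, pred = fit_output(H, y)
        resid = y - pred
        # Train one candidate unit to correlate its activation with the residual error.
        Hb = np.column_stack([H, np.ones(len(H))])
        v = rng.normal(scale=0.1, size=Hb.shape[1])
        for _ in range(500):                          # plain gradient ascent on the covariance
            a = np.tanh(Hb @ v)
            cov = np.mean((a - a.mean()) * (resid - resid.mean()))
            grad = Hb.T @ ((resid - resid.mean()) * (1 - a ** 2)) / len(H)
            v += 0.5 * np.sign(cov) * grad
        H = np.column_stack([H, np.tanh(Hb @ v)])     # freeze the new unit's input weights
        print(f"unit {unit + 1}: mse = {np.mean((y - fit_output(H, y)[1]) ** 2):.4f}")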

Designing Learning

by Ann C. Baker, Patricia J. Jensen, David A. Kolb , 2004
"... …Truth [is] being involved in an eternal conversation about things that matter, conducted with passion and discipline…truth is not in the conclusions so much as in the process of conversation itself…if you want to be in truth you must be in conversation. Parker Palmer ..."
Cited by 555 (9 self)

A Learning Algorithm for Continually Running Fully Recurrent Neural Networks

by Ronald J. Williams, David Zipser , 1989
"... The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precis ..."
Cited by 529 (4 self)
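
The algorithm derived in the paper is what is now called real-time recurrent learning: the sensitivities of every unit's output to every weight are carried forward in time alongside the activations, so a gradient step is available at each time step without unrolling the network. A minimal numpy sketch for a small fully recurrent tanh network; the task, sizes, and learning rate are illustrative assumptions.

    # Minimal real-time recurrent learning (RTRL) sketch for a fully recurrent tanh network.
    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_in, lr = 4, 1, 0.05
    W = rng.normal(scale=0.3, size=(n_units, n_units + n_in + 1))   # +1 column for bias

    y = np.zeros(n_units)
    # P[k, i, j] = d y_k / d W[i, j], carried forward through time.
    P = np.zeros((n_units, n_units, n_units + n_in + 1))

    xs = np.sin(np.linspace(0, 20, 400))            # toy task: predict the next sample
    for t in range(len(xs) - 1):
        z = np.concatenate([y, [xs[t]], [1.0]])     # previous outputs, input, bias
        s = W @ z
        y_new = np.tanh(s)
        fprime = 1.0 - y_new ** 2

        # Sensitivity recursion: only the recurrent part of W multiplies the old P.
        prop = np.einsum('kl,lij->kij', W[:, :n_units], P)
        prop[np.arange(n_units), np.arange(n_units), :] += z
        P = fprime[:, None, None] * prop

        err = xs[t + 1] - y_new[0]                  # unit 0 serves as the output unit
        W += lr * err * P[0]                        # gradient step straight from the sensitivities
        y = y_new

    print("final abs error:", abs(err))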

Active Learning with Statistical Models

by David A. Cohn, Zoubin Ghahramani, Michael I. Jordan , 1995
"... For manytypes of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992# Cohn, 1994]. We then showhow the same principles may be used to select data for two alternative, statist ..."
Abstract - Cited by 677 (12 self) - Add to MetaCart
, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
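
For locally weighted regression the relevant quantities are available in closed form from the weighted design matrix, which is why these techniques can be efficient. The sketch below uses a simplified variance-based rule, querying wherever the locally weighted variance estimate is largest; the paper's actual criterion, choosing the query that minimises the learner's expected variance, is not reproduced.

    # Sketch: variance-based query selection for locally weighted regression (LWR).
    # Simplified stand-in for the paper's expected-variance-minimisation criterion.
    import numpy as np

    rng = np.random.default_rng(0)

    def lwr_predict(Xq, X, y, h=0.3):
        """Return the LWR mean and a local variance estimate at each query point."""
        means, variances = [], []
        for xq in Xq:
            w = np.exp(-((X - xq) ** 2) / (2 * h ** 2)).ravel()
            A = np.column_stack([X.ravel(), np.ones(len(X))])
            WA = A * w[:, None]
            beta = np.linalg.solve(A.T @ WA + 1e-6 * np.eye(2), WA.T @ y)
            mu = np.array([xq, 1.0]) @ beta
            resid = y - A @ beta
            var = np.sum(w * resid ** 2) / np.sum(w)     # weighted residual variance
            means.append(mu)
            variances.append(var)
        return np.array(means), np.array(variances)

    f = lambda x: np.sin(4 * x)
    X = rng.uniform(0, 1, (5, 1))                        # small initial training set
    y = f(X).ravel() + 0.05 * rng.normal(size=5)
    pool = np.linspace(0, 1, 50)                         # candidate query locations

    for step in range(10):
        _, var = lwr_predict(pool, X, y)
        xq = pool[np.argmax(var)]                        # query where the model is most uncertain
        X = np.vstack([X, [[xq]]])
        y = np.append(y, f(xq) + 0.05 * rng.normal())
    print("queried points:", np.round(np.sort(X.ravel()), 2))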

Learning low-level vision

by William T. Freeman, Egon C. Pasztor - International Journal of Computer Vision , 2000
"... We show a learning-based method for low-level vision problems. We set-up a Markov network of patches of the image and the underlying scene. A factorization approximation allows us to easily learn the parameters of the Markov network from synthetic examples of image/scene pairs, and to e ciently prop ..."
Cited by 586 (31 self)
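
The propagation step referred to above can be sketched on a one-dimensional chain of patch nodes with max-product message passing. The candidate scene patches per node, the random compatibility matrices, and the chain topology are toy assumptions; the paper uses a 2-D grid with compatibilities learned from image/scene training pairs.

    # Sketch: max-product belief propagation on a 1-D chain of patch nodes.
    # Each node holds K candidate scene patches; phi = local evidence from the
    # image patch, psi = pairwise compatibility between neighbouring candidates.
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, K = 6, 4
    phi = rng.uniform(0.1, 1.0, size=(n_nodes, K))          # evidence per candidate
    psi = [rng.uniform(0.1, 1.0, size=(K, K)) for _ in range(n_nodes - 1)]

    fwd = [np.ones(K) for _ in range(n_nodes)]              # message arriving from the left
    bwd = [np.ones(K) for _ in range(n_nodes)]              # message arriving from the right
    for i in range(1, n_nodes):                              # left-to-right pass
        m = (phi[i - 1] * fwd[i - 1])[:, None] * psi[i - 1]
        fwd[i] = m.max(axis=0)
        fwd[i] /= fwd[i].sum()
    for i in range(n_nodes - 2, -1, -1):                     # right-to-left pass
        m = psi[i] * (phi[i + 1] * bwd[i + 1])[None, :]
        bwd[i] = m.max(axis=1)
        bwd[i] /= bwd[i].sum()

    beliefs = phi * np.array(fwd) * np.array(bwd)            # max-marginal per candidate
    print("chosen candidate per node:", beliefs.argmax(axis=1))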