Results 1 - 10 of 566

A scaled conjugate gradient algorithm for fast supervised learning

by Martin F. Møller - Neural Networks, 1993
"... A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate is introduced. The algorithm is based upon a class of optimization techniques well known in numerical analysis as the Conjugate Gradient Methods. SCG uses second order information from the neural netwo ..."
Abstract - Cited by 451 (0 self)
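
As a rough illustration of the conjugate-gradient style of training the abstract refers to, the sketch below fits a tiny one-hidden-layer network with SciPy's nonlinear CG optimizer. Møller's SCG additionally scales each step with a Levenberg-Marquardt style parameter so that no line search is needed, which this sketch omits; the network size and toy data are assumptions, not taken from the paper.

```python
# Minimal sketch: training a tiny one-hidden-layer network with a
# nonlinear conjugate-gradient optimizer (SciPy's "CG" method).
# Moller's SCG additionally scales the step with a Levenberg-Marquardt
# style parameter to avoid a line search; that refinement is omitted.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy inputs (assumed)
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # toy XOR-like targets

n_in, n_hid = 2, 8

def unpack(w):
    """Split the flat parameter vector into the two weight matrices."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, 1)
    return W1, W2

def loss(w):
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)                        # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid output
    return float(np.mean((p.ravel() - y) ** 2))

w0 = rng.normal(scale=0.1, size=n_in * n_hid + n_hid)
# Nonlinear conjugate gradient with a line search; gradients are
# estimated numerically here for brevity.
result = minimize(loss, w0, method="CG")
print("final training MSE:", result.fun)
```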

Hierarchical mixtures of experts and the EM algorithm

by Michael I. Jordan, Robert A. Jacobs, 1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood ..."
Abstract - Cited by 885 (21 self)
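
For orientation, here is a minimal single-level mixture-of-experts EM loop in the spirit of the abstract: linear experts, a softmax gate, responsibilities in the E-step, weighted least squares for the experts in the M-step. The hierarchical nesting of gates and the IRLS fit of each GLIM gate described in the paper are simplified away (the gate is updated with a few plain gradient steps), and the toy data are assumed.

```python
# Minimal sketch of a single-level mixture of linear experts trained
# with EM.  Jordan and Jacobs' model is hierarchical (gates of gates)
# and fits each gate by IRLS as a GLIM; here there is one softmax gate
# fitted by a few plain gradient steps, which is a simplification.
import numpy as np

rng = np.random.default_rng(1)
N, K = 400, 2
X = np.column_stack([np.ones(N), rng.uniform(-2, 2, size=N)])  # bias + x
# Toy data: two linear regimes, one per half of the input space.
true_slope = np.where(X[:, 1] > 0, 2.0, -1.0)
y = true_slope * X[:, 1] + 0.1 * rng.normal(size=N)

W = rng.normal(scale=0.1, size=(K, 2))    # expert regression weights
V = rng.normal(scale=0.1, size=(K, 2))    # gate weights
sigma2 = np.ones(K)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for it in range(50):
    # E-step: posterior responsibility of each expert for each point.
    gate = softmax(X @ V.T)                                   # (N, K)
    mu = X @ W.T                                              # (N, K)
    lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) / np.sqrt(
        2 * np.pi * sigma2)
    r = gate * lik
    r /= r.sum(axis=1, keepdims=True)

    # M-step for the experts: weighted least squares per expert.
    for k in range(K):
        sw = np.sqrt(r[:, k])
        W[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        resid = y - X @ W[k]
        sigma2[k] = np.sum(r[:, k] * resid ** 2) / np.sum(r[:, k])

    # M-step for the gate: a few gradient steps on the expected
    # complete-data log-likelihood (the paper uses IRLS here).
    for _ in range(10):
        gate = softmax(X @ V.T)
        V += 0.1 * (r - gate).T @ X / N

print("learned expert slopes:", W[:, 1])
```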

The cascade-correlation learning architecture

by Scott E. Fahlman, Christian Lebiere - Advances in Neural Information Processing Systems 2, 1990
"... Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creatin ..."
Abstract - Cited by 801 (6 self)
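
A minimal sketch of the growth loop the abstract describes, under several simplifying assumptions: the output layer is linear and refit by least squares, and a single candidate unit is trained by plain gradient ascent on its covariance with the residual error, whereas Fahlman and Lebiere train a pool of candidates with Quickprop and a correlation score. The toy task and unit count are assumptions.

```python
# Minimal sketch of the Cascade-Correlation idea: grow the network by
# training one candidate hidden unit at a time to covary with the
# current residual error, then freeze it as a new input feature and
# refit the (linear, least-squares) output layer.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sign(X[:, 0] * X[:, 1])              # XOR-like target in {-1, +1}

def fit_output(F, y):
    """Least-squares output weights on the current feature matrix F."""
    return np.linalg.lstsq(F, y, rcond=None)[0]

F = np.column_stack([np.ones(len(X)), X])   # bias + raw inputs
w_out = fit_output(F, y)

for unit in range(4):                       # add up to 4 hidden units
    resid = y - F @ w_out
    rc = resid - resid.mean()               # centred residual error

    # Train one candidate unit; its inputs are everything the network
    # has produced so far (inputs plus frozen hidden units).
    v = rng.normal(scale=0.5, size=F.shape[1])
    for _ in range(500):
        h = np.tanh(F @ v)
        cov = np.dot(h - h.mean(), rc)      # correlation-like score S
        # Gradient ascent on |S| with respect to the candidate weights.
        grad = np.sign(cov) * ((1 - h ** 2) * rc) @ F
        v += 0.01 * grad / len(X)

    # Freeze the unit: its output becomes a new column of features.
    F = np.column_stack([F, np.tanh(F @ v)])
    w_out = fit_output(F, y)
    mse = np.mean((y - F @ w_out) ** 2)
    print(f"after {unit + 1} hidden unit(s): MSE = {mse:.3f}")
```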

Greedy layer-wise training of deep networks

by Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, 2006
"... Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allow ..."
Abstract - Cited by 394 (48 self)
and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps
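
The following sketch shows the greedy layer-wise strategy in its autoencoder form: each layer is pretrained to reconstruct the codes of the layer below, then the whole stack is fine-tuned on labels. Layer sizes, toy data, and the use of plain autoencoders rather than RBMs are assumptions for illustration (the paper studies both variants).

```python
# Minimal sketch of greedy layer-wise pretraining: each layer is first
# trained as a small autoencoder on the previous layer's codes, then
# the stack is fine-tuned on the supervised task.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 20)                       # toy unlabeled inputs
y = (X.sum(dim=1) > 10).long()                # toy binary labels

sizes = [20, 16, 8]                           # input and hidden widths
encoders = []
codes = X
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
    for _ in range(200):                      # unsupervised stage
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(codes)), codes)
        loss.backward()
        opt.step()
    encoders.append(enc)
    codes = enc(codes).detach()               # feed codes to next layer

# Supervised fine-tuning of the whole stack plus a classifier head.
model = nn.Sequential(*encoders, nn.Linear(sizes[-1], 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()
print("train accuracy:", (model(X).argmax(dim=1) == y).float().mean().item())
```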

Growing Cell Structures - A Self-organizing Network for Unsupervised and Supervised Learning

by Bernd Fritzke - Neural Networks, 1993
"... We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the m ..."
Abstract - Cited by 300 (11 self)
of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the abovementioned self

Supervised Neural Networks for the Classification of Structures

by Alessandro Sperduti, Antonina Starita - IEEE Transactions on Neural Networks, 1997
"... Until now neural networks have been used for classifying unstructured patterns and sequences. However, standard neural networks and statistical methods are usually believed to be inadequate when dealing with complex structures because of their feature-based approach. In fact, feature-based approache ..."
Abstract - Cited by 92 (14 self)

Retrieving Visual Concepts in Image Databases

by Stefania Gentili - thesis, supervised by Prof. Goffredo G. Pieroni
"... This thesis addresses the problem of extracting and retrieving visual concepts in images databases, with the aim of developing a generic CBIR system applicable to a wide range of images. The system proposed in this thesis is scale and rotation invariant and easily updateable. The proposed approach i ..."
Abstract
by this method characterize both the color and the shape of the regions. • A supervised updateable neural structure, based on neural trees. This algorithm

A novel method of protein secondary structure prediction with high segment overlap measure: support vector machine approach

by Sujun Hua, Zhirong Sun - J Mol Biol, 2001
"... We have introduced a new method of protein secondary structure prediction which is based on the theory of support vector machine (SVM). SVM represents a new approach to supervised pattern classification which has been successfully applied to a wide range of pattern recognition problems, including ob ..."
Abstract - Cited by 177 (3 self)
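
To make the setup concrete, a minimal sliding-window sketch with scikit-learn: each residue is represented by a one-hot encoding of its sequence window and fed to a single binary SVM. The synthetic sequences, the toy labelling rule, and the window size are assumptions; Hua and Sun train on curated secondary-structure datasets and combine several binary SVMs into a three-class predictor.

```python
# Minimal sketch of a windowed SVM for per-residue classification:
# one-hot encode a sliding window of residues and train one binary SVM.
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)

def window_features(seq, pos, half=3):
    """One-hot encode the window of residues centred at pos (zero-padded)."""
    vec = np.zeros((2 * half + 1, len(AMINO)))
    for i, p in enumerate(range(pos - half, pos + half + 1)):
        if 0 <= p < len(seq):
            vec[i, AMINO.index(seq[p])] = 1.0
    return vec.ravel()

# Synthetic data: pretend a few "helix-former" residues mark helix ('H').
seqs = ["".join(rng.choice(list(AMINO), size=60)) for _ in range(20)]
X, y = [], []
for seq in seqs:
    for pos in range(len(seq)):
        X.append(window_features(seq, pos))
        y.append("H" if seq[pos] in "AELM" else "C")

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # one binary SVM (H vs C)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```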

A General Framework for Adaptive Processing of Data Structures

by Paolo Frasconi, Marco Gori, Alessandro Sperduti - IEEE Transactions on Neural Networks, 1998
"... A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive ..."
Abstract - Cited by 150 (61 self)
be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed
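
As a sketch of the kind of structural transduction the abstract describes, the code below applies one shared state-transition cell bottom-up over a small labelled tree, the structural analogue of unrolling a recurrent network over a sequence. Only the forward encoding is shown; the state size, label set, and example tree are assumptions.

```python
# Minimal sketch of adaptive processing of data structures: a shared
# state-transition cell is applied bottom-up over a labelled tree,
# extending a recurrent network from sequences to acyclic graphs.
import numpy as np

rng = np.random.default_rng(4)
STATE, LABEL, MAX_CHILDREN = 6, 3, 2          # assumed sizes

W_label = rng.normal(scale=0.3, size=(STATE, LABEL))
W_child = rng.normal(scale=0.3, size=(MAX_CHILDREN, STATE, STATE))
w_out = rng.normal(scale=0.3, size=STATE)

def encode(node):
    """Return the state vector for a node = (label_vector, [children])."""
    label, children = node
    s = W_label @ label
    for i, child in enumerate(children):      # position-dependent child weights
        s = s + W_child[i] @ encode(child)
    return np.tanh(s)

def leaf(label_id):
    x = np.zeros(LABEL)
    x[label_id] = 1.0
    return (x, [])

# A small labelled binary tree: root (label 0) with two subtrees.
tree = (np.eye(LABEL)[0], [leaf(1), (np.eye(LABEL)[2], [leaf(0), leaf(1)])])
root_state = encode(tree)
print("root state:", root_state)
print("scalar output at the root:", float(w_out @ root_state))
```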

Information Update On Neural Tree Networks

by Stefania Gentili - 2001 International Conference on Image Processing (ICIP 2001), Thessaloniki, Greece, 7-10 October 2001
"... In this paper, a method for information update on a supervised neural structure is presented. The method, applied to neural trees, combines the advantages of classical and neural classifiers, allowing both the update of the system without destroying previous information, and the use all the availabl ..."
Abstract - Cited by 3 (1 self)