Results 1–10 of 34,717
Enhanced Robustness of Multilayer Perceptron Training
In Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, 2002
"... Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically nonincreasing function of the number of hidden units. An initialization and training methodology is developed to significantly increase the probability that the training error is mon ..."
Multilayer Perceptron Trained with Numerical Gradient
Proc. of Int. Conf. on Artificial Neural Networks (ICANN), 2003
"... Abstract—An application of numerical gradient (NG) to training of MLP networks is presented. Several versions of the algorithm and the influence of various parameters on the training process are discussed. Optimization of network parameters based on global search with numerical gradient is presented ..."
Cited by 7 (5 self)
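The numerical-gradient idea can be illustrated with a central-difference estimate. This is a generic finite-difference sketch, not the specific NG algorithm from the paper; the function and variable names are hypothetical:

```python
import numpy as np

def numerical_gradient(loss, w, eps=1e-6):
    """Central-difference estimate of the gradient of `loss` at `w`."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        step = np.zeros_like(w)
        step[i] = eps
        grad[i] = (loss(w + step) - loss(w - step)) / (2.0 * eps)
    return grad

# Check against the analytic gradient of a quadratic: d/dw (w·w) = 2w
w = np.array([1.0, -2.0, 0.5])
g = numerical_gradient(lambda v: float(v @ v), w)
```

Each coordinate costs two loss evaluations, which is why numerical gradients are mostly useful for small networks or for checking analytic derivatives.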
HadoopPerceptron: a Toolkit for Distributed Perceptron Training and Prediction with MapReduce
"... We propose a set of open-source software modules to perform structured Perceptron Training, Prediction and Evaluation within the Hadoop framework. Apache Hadoop is a freely available environment for running distributed applications on a computer cluster. The software is designed within the MapReduc ..."
Cited by 2 (0 self)
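One common way to distribute perceptron training over a cluster is parameter mixing: train independently on data shards (the map side) and average the resulting weight vectors (the reduce side). The abstract does not say this is HadoopPerceptron's exact scheme, so treat the following as an illustrative sketch with hypothetical names and toy data:

```python
import numpy as np

def train_shard(X, y, epochs=10):
    """Plain binary perceptron on one data shard (the 'map' side)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:   # mistake-driven additive update
                w += yi * xi
    return w

def mix(weights):
    """Uniform parameter mixing of the per-shard models (the 'reduce' side)."""
    return np.mean(weights, axis=0)

# Two hypothetical shards of a linearly separable problem
shards = [
    (np.array([[2.0, 1.0], [-2.0, -1.0]]), np.array([1, -1])),
    (np.array([[1.0, 2.0], [-1.0, -2.0]]), np.array([1, -1])),
]
w = mix([train_shard(Xs, ys) for Xs, ys in shards])
```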
Multiple Layer Perceptron Training Using Genetic Algorithms
2001
"... This paper describes an approach to substitute it completely by a genetic algorithm. By means of some benchmark applications characteristic properties of both the genetic algorithm and the neural network are explained. 1 Introduction Multiple layer perceptrons (MLP) [1] commonly trained with bac ..."
Cited by 8 (1 self)
Directed Random Search For Multiple Layer Perceptron Training
In: D.J. Miller et al. (Eds.): Neural Networks for Signal Processing XI, Piscataway: IEEE, 2001
"... Although Backpropagation (BP) is commonly used to train Multiple Layer Perceptron (MLP) neural networks and its original algorithm has been significantly improved several times, it still suffers from some drawbacks like being slow, getting stuck in local minima or being bound to constraints regardin ..."
Cited by 2 (1 self)
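A minimal random-search baseline of the kind this paper builds on can be sketched as follows. This is the plain (undirected) accept-if-better scheme, not the paper's directed variant, and the objective and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_search(loss, w0, sigma=0.1, iters=500):
    """Accept a Gaussian perturbation of the weights whenever it lowers
    the loss. Derivative-free, so it sidesteps BP's gradient machinery."""
    w, best = w0.copy(), loss(w0)
    for _ in range(iters):
        cand = w + rng.normal(scale=sigma, size=w.shape)
        c = loss(cand)
        if c < best:
            w, best = cand, c
    return w, best

# Toy quadratic objective with a known optimum at [1, 2]
target = np.array([1.0, 2.0])
w, best = random_search(lambda v: float(((v - target) ** 2).sum()), np.zeros(2))
```

Because only loss values are compared, such methods are insensitive to local gradient structure, which is the property the abstract contrasts with BP's tendency to get stuck in local minima.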
Multilayer Perceptron Training with Inaccurate Derivative Information
Proc. 1995 IEEE International Conference on Neural Networks (ICNN'95), 1995
"... In this contribution we present an algorithm for using possibly inaccurate knowledge of model derivatives as a part of the training data for a multilayer perceptron network (MLP). In many practical process control problems there are many well-known rules about the effect of control variables to the ..."
Cited by 2 (2 self)
Regularization by Early Stopping in Single-Layer Perceptron Training
1996
"... Adaptive training of the nonlinear single-layer perceptron can lead to the Euclidean distance classifier and later to the standard Fisher linear discriminant function. On the way between these two classifiers one has a regularized discriminant analysis. That is equivalent to the "weight de ..."
Cited by 1 (0 self)
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Cited by 641 (16 self)
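The additive update at the heart of these algorithms can be sketched in miniature. For illustration only: this replaces the paper's Viterbi decoding over tag sequences with an exhaustive argmax over a handful of classes, and the feature map, data, and names are hypothetical:

```python
import numpy as np

N_CLASSES, N_FEATS = 3, 4

def phi(x, y):
    """Joint feature map: x copied into the block belonging to class y."""
    f = np.zeros(N_CLASSES * N_FEATS)
    f[y * N_FEATS:(y + 1) * N_FEATS] = x
    return f

def predict(w, x):
    """Exhaustive argmax over classes; Viterbi plays this role for sequences."""
    return max(range(N_CLASSES), key=lambda y: w @ phi(x, y))

def train(X, Y, epochs=50):
    """Mistake-driven additive updates on the feature difference."""
    w = np.zeros(N_CLASSES * N_FEATS)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            y_hat = predict(w, x)
            if y_hat != y:
                w += phi(x, y) - phi(x, y_hat)
    return w

# Hypothetical toy data: one informative feature per class plus a shared feature
X = np.array([[1.0, 0, 0, 1], [0, 1.0, 0, 1], [0, 0, 1.0, 1]])
Y = [0, 1, 2]
w = train(X, Y)
```

The only decoding-dependent step is the argmax, which is why the same update applies unchanged once Viterbi supplies the highest-scoring tag sequence.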
Large Margin Classification Using the Perceptron Algorithm
Machine Learning, 1998
"... We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large ..."
Cited by 518 (2 self)
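Rosenblatt's perceptron update underlying this line of work can be sketched as follows. This is the basic mistake-driven rule on hypothetical toy data, not the voted/averaged variant the paper analyzes:

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Rosenblatt's perceptron: additive update on each mistake; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:                # converged on separable data
            break
    return w, b

# Hypothetical linearly separable data: label follows the first coordinate
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
```

On separable data the number of mistakes is bounded by the classical margin argument, which is the starting point for the large-margin analysis in the paper above.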