Results 21 - 30 of 51,226
Self-Organizing Multilayered Neural Networks of Optimal Complexity
, 1998
"... The principles of self-organizing neural networks of optimal complexity are considered under an unrepresentative learning set. A method of self-organizing multilayered neural networks is offered and used to train logical neural networks, which were applied to medical diagnostics. ..."
A training method for discrete multilayer neural networks
in Mathematics of Neural Networks: Models, Algorithms & Applications
, 1997
"... In this contribution a new training method is proposed for neural networks that are based on neurons whose output can be in a particular state. This method minimises the well-known least-squares criterion by using information concerning only the signs of the error function and inaccurate gradient values. ..."
Cited by 5 (4 self)
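A rule in this spirit can be illustrated with sign-based gradient descent on a least-squares objective. This is a generic sketch under assumed data, not the paper's exact method:

```python
import numpy as np

# Generic sketch (assumed data, not the paper's method): minimise a
# least-squares criterion using only the SIGN of the gradient, so that
# inaccurate gradient magnitudes do not matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -0.5, 2.0])    # hypothetical target weights
y = X @ w_true
w = np.zeros(3)
step = 0.01                            # fixed step size per weight
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(X)  # least-squares gradient
    w -= step * np.sign(grad)          # update uses only the signs
```

Each weight drifts toward the least-squares solution at a fixed rate and then oscillates within roughly one step of it, so gradient magnitudes never enter the update.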
Incorporating LCLV Non-Linearities in Optical Multilayer Neural Networks
, 1996
"... Sigmoid-like activation functions as available in analog hardware differ in various ways from the standard sigmoidal function, as they are usually asymmetric, truncated, and have a non-standard gain. We present an adaptation of the backpropagation learning rule to compensate for these non-standard sigmoids. This method is applied to multilayer neural networks with all-optical forward propagation and liquid crystal light valves (LCLV) as optical thresholding devices. In this paper, the results of simulations of a backpropagation neural network with five different LCLV response curves as activation functions ..."
Cited by 2 (1 self)
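The adaptation described amounts, in textbook terms, to running backpropagation with the hardware's own response curve and its derivative in the delta rule. A minimal sketch, assuming a hypothetical asymmetric, truncated, non-standard-gain logistic (the parameters lo, hi, g below are illustrative, not the paper's LCLV curves):

```python
import numpy as np

# Hedged sketch, not the paper's LCLV model: backpropagation through an
# asymmetric, truncated activation with non-standard gain,
# f(x) = lo + (hi - lo) / (1 + exp(-g * x)).
lo, hi, g = 0.1, 0.8, 2.5  # illustrative response-curve parameters

def f(x):
    return lo + (hi - lo) / (1.0 + np.exp(-g * x))

def fprime(x):
    s = 1.0 / (1.0 + np.exp(-g * x))
    return (hi - lo) * g * s * (1.0 - s)

rng = np.random.default_rng(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # bias col
Y = np.array([[0.2], [0.7], [0.7], [0.2]])  # XOR-like targets inside [lo, hi]
W1 = rng.normal(scale=0.5, size=(4, 3))     # 4 hidden units
W2 = rng.normal(scale=0.5, size=(1, 5))     # 4 hidden outputs + bias

losses = []
for _ in range(20000):
    a1 = X @ W1.T
    h = np.hstack([f(a1), np.ones((len(X), 1))])  # forward pass
    a2 = h @ W2.T
    out = f(a2)
    losses.append(float(np.mean((out - Y) ** 2)))
    d2 = (out - Y) * fprime(a2)                   # delta uses the true f'
    d1 = (d2 @ W2[:, :4]) * fprime(a1)
    W2 -= 0.5 * d2.T @ h / len(X)
    W1 -= 0.5 * d1.T @ X / len(X)
```

The only change from textbook backpropagation is that f and fprime describe the (assumed) hardware response rather than the standard logistic.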
Error and Variance Bounds in Multilayer Neural Networks
"... Recently, upper bounds have been derived on the expectation and variance of errors at the output of a multilayer feedforward neural network with input and weight errors. We investigate the validity of these bounds, in particular the suggestion that the expected value of the output error increases exponentially ..."
Synthesis and Performance Analysis of Multilayer Neural Network Architectures
, 1992
"... Empirical Studies on the Speed of Convergence of Neural Network Training Using Genetic Algorithms, Proc. of the National Conf. of the American Association of Artificial Intelligence (AAAI), pp. 789-795; Heistermann J., 1990: Learning in Neural Nets by Genetic Algorithms, Proc. of Parallel Processing ..."
Modelling of High Performance of Multilayer Neural Networks
"... ABSTRACT: This paper models and analyses how the performance of a network can be maintained or improved so that it reaches an optimal result. Neural networks are fast processing units that compute results so that the system performs well. The performance is computed by making ..."
Annealed Online Learning in Multilayer Neural Networks
, 1998
"... In this article we will examine online learning with an annealed learning rate. Annealing the learning rate is necessary if online learning is to reach its optimal solution. With a fixed learning rate, the system will approximate the best solution only up to some fluctuations. These fluctuations are ... Even the simplest multilayer network, the committee machine, shows an additional symptom which makes straightforward annealing ineffective. This is because, at the beginning of learning, the committee machine is attracted by a metastable, suboptimal solution (section 4). The system stays ..."
Cited by 1 (0 self)
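The role of annealing can be illustrated on the simplest online problem: estimating a mean by stochastic gradient descent on a squared loss. This is a toy sketch of the fixed-rate versus annealed-rate behaviour, not the paper's committee-machine analysis:

```python
import random

# Toy sketch (assumed setup, not the paper's model): online SGD on the
# loss (w - y)^2 / 2 where y = mu + Gaussian noise.
# Fixed eta: w keeps fluctuating around mu.
# Annealed eta_t = 1/t: w_t equals the running sample mean, which converges.
random.seed(0)
mu = 3.0
w_fixed, w_annealed = 0.0, 0.0
for t in range(1, 100001):
    y = mu + random.gauss(0.0, 1.0)
    w_fixed -= 0.1 * (w_fixed - y)              # fixed learning rate
    w_annealed -= (1.0 / t) * (w_annealed - y)  # annealed learning rate
print(round(w_annealed, 1))  # prints 3.0 (fluctuations annealed away)
```

With the fixed rate, w keeps a residual fluctuation of order sqrt(eta); with the 1/t schedule, the fluctuation itself decays to zero.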
Local Minima and Plateaus in Multilayer Neural Networks
in 9th International Conference on Artificial Neural Networks
, 1999
"... Local minima and plateaus pose a serious problem in the learning of neural networks. We investigate the geometric structure of the parameter space of three-layer perceptrons in order to show the existence of local minima and plateaus. It is proved that a critical point of the model with H - 1 hidden units ..."
Cited by 1 (0 self)
Dynamical Multilayer Neural Networks That Learn Continuous Trajectories
"... over a bounded time interval or trains non-fixed-point attractors. Williams and Zipser [8] and Meert and Ludik [9] have constructed a gradient-descent learning rule which they call real-time recurrent learning (RTRL) and which can deal with time sequences of arbitrary length. A stochastic search method based on an adaptive simulated annealing algorithm has been used by Cohen et al. [10] to efficiently train recurrent neural networks with time delays (TDRNN). An effort was made in the above investigation to implement several benchmark tasks using minimum-size networks. In all the above investigations ..."
Generalization ability of a multilayer neural network
, 1997
"... We investigate the generalization ability of a perceptron with a non-monotonic transfer function of the reversed-wedge type in online mode. This network is identical to a parity machine, a multilayer network. We consider several learning algorithms. With the perceptron algorithm, the generalization error ..."
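For illustration, a reversed-wedge transfer function is commonly written as sign(x) outside a wedge of half-width a and -sign(x) inside it; the exact definition used by the paper may differ, so the form below is an assumption:

```python
import numpy as np

# Assumed form of the reversed-wedge transfer function (half-width a):
# sign(x) outside the wedge, -sign(x) inside it, hence non-monotonic.
def reversed_wedge(x, a=1.0):
    return np.where(np.abs(x) < a, -np.sign(x), np.sign(x))

print(reversed_wedge(np.array([-2.0, -0.5, 0.5, 2.0])))  # -1, +1, -1, +1
```

The sign flip inside the wedge is what makes the single-layer unit behave like a multilayer parity machine.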