Results 1 - 10 of 886
Supervised Training via Error Backpropagation:
"... t must be very small for each q. If this were all that there is to it, it would be a simple process, provided that we had a strategy that would adjust the weights properly. Unfortunately, the MLP architecture must be designed properly for the particular dataset to assure that the network will ..."
Abstract
 Add to MetaCart
...t must be very small for each q. If this were all there is to it, it would be a simple process, provided we had a strategy to adjust the weights properly. Unfortunately, the MLP architecture must be designed properly for the particular dataset to ensure that the network learns robustly and is reasonably efficient. The main questions in laying out the architecture and then training the MLP are listed below.
1. How many layers of neurodes should we use?
2. How many input nodes should we use?
3. How many neurodes should we use in the hidden layers?
4. How many neurodes should we use in the output layer?
5. What should the target (identifier) vectors be?
6. How do we proceed to train the MLP?
7. How can we test whether or not the MLP is properly trained?
8. How do we select parameters (such as the learning rate) to speed up and improve the training?
9. What should be the range of the weights and of the network inputs and outputs?
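The questions above can be grounded in a minimal sketch. The following illustrative NumPy program (not from the text; the layer sizes, learning rate, and XOR task are arbitrary choices) shows a one-hidden-layer MLP trained by error backpropagation with sigmoid units and squared error:

```python
import numpy as np

# Illustrative sketch only: a 2-8-1 MLP trained by error backpropagation.
# Bias terms are handled by appending a constant-1 column to each layer's input.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs in [0, 1] (question 9)
T = np.array([[0.], [1.], [1.], [0.]])                  # target vectors (question 5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    return np.hstack([A, np.ones((A.shape[0], 1))])     # append constant-1 column

W1 = rng.normal(0.0, 0.5, (3, 8))   # (2 inputs + bias) -> 8 hidden neurodes
W2 = rng.normal(0.0, 0.5, (9, 1))   # (8 hidden + bias) -> 1 output neurode
eta = 0.5                           # learning rate (question 8)

for epoch in range(20000):
    H = sigmoid(add_bias(X) @ W1)                # forward pass: hidden layer
    Y = sigmoid(add_bias(H) @ W2)                # forward pass: output layer
    d_out = (Y - T) * Y * (1.0 - Y)              # output deltas for E = 0.5*sum((T-Y)^2)
    d_hid = (d_out @ W2[:-1].T) * H * (1.0 - H)  # deltas backpropagated to hidden layer
    W2 -= eta * add_bias(H).T @ d_out            # gradient-descent weight updates
    W1 -= eta * add_bias(X).T @ d_hid

loss = 0.5 * np.sum((T - Y) ** 2)
print(np.round(Y.ravel(), 2), float(loss))       # outputs should approach 0, 1, 1, 0
```

Testing on held-out data (question 7) would use the same forward pass with the trained weights frozen.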
Error-Backpropagation in Temporally Encoded Networks of Spiking Neurons
 NEUROCOMPUTING, 2000
"... For a network of spiking neurons that encodes information in the timing of individual spiketimes, we derive a supervised learning rule, SpikeProp, akin to traditional errorbackpropagation and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate h ..."
Abstract

Cited by 36 (1 self)
 Add to MetaCart
For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate ...
Error backpropagation in multivalued logic systems
 IN: PROC. INTERNATIONAL CONF. ON COMPUTATIONAL INTELLIGENCE AND MULTIMEDIA APPLICATIONS (ICCIMA 2007), SIVAKASI, 2007
"... Error backpropagation—and its many variations—has been used extensively to train neural networks. A multilayer system cannot be trained in a supervised learning scheme because data are usually provided only as endtoend inputoutput pairs for the global system. The central idea of error backprop ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Error backpropagation—and its many variations—has been used extensively to train neural networks. A multilayer system cannot be trained in a supervised learning scheme when data are provided only as end-to-end input-output pairs for the global system. The central idea of error backpropagation ...
Learning web page scores by error backpropagation
 In IJCAI, 2005
"... In this paper we present a novel algorithm to learn a score distribution over the nodes of a labeled graph (directed or undirected). Markov Chain theory is used to define the model of a random walker that converges to a score distribution which depends both on the graph connectivity and on the node ..."
Abstract

Cited by 21 (0 self)
 Add to MetaCart
labels. A supervised learning task is defined on the given graph by assigning a target score for some nodes and a training algorithm based on error backpropagation through the graph is devised to learn the model parameters. The trained model can assign scores to the graph nodes generalizing the criteria
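The random-walker model summarized here can be sketched as a damped power iteration. In this illustrative snippet the graph, damping factor, and tolerance are hypothetical choices (and no learned parameters are involved); it only shows how the walker's distribution converges to scores shaped by the connectivity:

```python
import numpy as np

# Illustrative sketch: a random walker on a small directed graph converging to a
# stationary score distribution via damped power iteration (PageRank-style).
# The graph, damping factor, and tolerance are hypothetical, not from the paper.
adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # node -> outgoing links
n, d = 4, 0.85                              # node count, damping factor

# Column-stochastic transition matrix of the random walker.
P = np.zeros((n, n))
for src, outs in adj.items():
    for dst in outs:
        P[dst, src] = 1.0 / len(outs)

scores = np.full(n, 1.0 / n)                # start from the uniform distribution
for _ in range(200):
    new = (1 - d) / n + d * P @ scores      # one step of the damped walk
    if np.abs(new - scores).sum() < 1e-12:  # stop once the distribution is stable
        break
    scores = new

print(np.round(scores, 3))                  # scores form a probability distribution
```

In the paper's supervised setting, gradients of an error on the target-scored nodes would be backpropagated through iterations like these to adjust the model parameters.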
Stabilizing and Robustifying the Error Backpropagation Method in Neurocontrol Applications
 2000
"... This paper discusses the stabilizability of artificial neural networks trained by utilizing the gradient information. The method proposed constructs a dynamic model of the conventional update mechanism and derives the stabilizing values of the learning rate. This is achieved by integrating the Error ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
the Error Backpropagation (EBP) technique with Variable Structure Systems (VSS) methodology, which is well known with its robustness to environmental disturbances. In the simulations, control of a three degrees of freedom anthropoid robot is chosen for the evaluation of the performance. For this purpose, a
Error-backpropagation in Networks of Fractionally Predictive Spiking Neurons
"... Abstract. We develop a learning rule for networks of spiking neurons where signals are encoded using fractionally predictive spikecoding. In this paradigm, neural output signals are encoded as a sum of shifted powerlaw kernels. Simple greedy thresholding can compute this encoding, and spiketrains ..."
Abstract
 Add to MetaCart
that properly tuning the decoding kernel at receiving neurons can implement spectral filtering; the applicability to general temporal filtering was left open. Here, we present an errorbackpropagation algorithm to learn decoding these filters, and we show that networks of fractionally predictive spiking neurons
Spike-timing error backpropagation in theta neuron networks
 Neural Comput, 2009
"... The main contribution of this paper is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons; a onedimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent tim ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
The main contribution of this paper is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multilayer training rule are shown to have similar generalization abilities for spike latency pattern classification as Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appear to be capable of.
Error backpropagation algorithm for classification of imbalanced data
 Neurocomputing 74 (6), 2011
"... Abstract Classification of imbalanced data is pervasive but it is a difficult problem to solve. In order to improve the classification of imbalanced data, this letter proposes a new error function for the error backpropagation algorithm of multilayer perceptrons. The error function intensifies wei ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
Classification of imbalanced data is pervasive, but it is a difficult problem to solve. In order to improve the classification of imbalanced data, this letter proposes a new error function for the error backpropagation algorithm of multilayer perceptrons. The error function intensifies ...
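One common way to make an error function emphasize the rare class—a plausible illustration of the general idea, not necessarily the paper's exact formulation—is to weight each example's squared error inversely to its class frequency:

```python
import numpy as np

# Hypothetical sketch of a class-weighted error function for imbalanced data:
# each example's squared error is scaled inversely to its class frequency, so
# minority-class mistakes contribute more to the error (and hence the gradient).
# This illustrates the general idea only, not the paper's specific error function.
def weighted_sse(y_pred, y_true):
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    n = len(y_true)
    n_pos = y_true.sum()
    n_neg = n - n_pos
    # Inverse-frequency weights: the rarer class gets the larger weight.
    w = np.where(y_true == 1, n / (2 * n_pos), n / (2 * n_neg))
    return 0.5 * np.sum(w * (y_true - y_pred) ** 2)

# 9 majority-class (0) vs 1 minority-class (1) examples, all predicted 0.5:
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
y_pred = np.full(10, 0.5)
print(weighted_sse(y_pred, y_true))  # -> 1.25; each class contributes equally
```

With these weights the single minority example carries as much total error as the nine majority examples, so training no longer ignores the rare class.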
Trainable & Dynamic Computing: Error Backpropagation Through Physical Media
 2014
"... Machine learning algorithms, and more in particular neural networks, arguably experience a revolution in terms of performance. Currently, the best systems we have for speech recognition, computer vision and similar problems are based on neural networks, trained using the halfcentury old backpropag ..."
Abstract
 Add to MetaCart
for dynamic, neuroinspired analog computing. We show that a crucial advantage of this setup is that the error backpropagation can