Results 1–10 of 389
Face recognition by independent component analysis
IEEE Transactions on Neural Networks, 2002
"... Abstract—A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such ..."
Cited by 348 (5 self)
be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET
Training a Single Sigmoidal Neuron Is Hard
"... We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture containing only a single neuron that applies a sigmoidal activation function : < ! [; ], satisfying certain natural axio ..."
Cited by 5 (0 self)
Deep Sparse Rectifier Neural Networks
"... While logistic sigmoid neurons are more biologically plausible than hyperbolic tangent neurons, the latter work better for training multilayer neural networks. This paper shows that rectifying neurons are an even better model of biological neurons and yield equal or better performance than hyperbol ..."
Cited by 57 (17 self)
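The contrast this abstract draws can be sketched with the three activation functions it names. A minimal illustration, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: maps the reals into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: maps the reals into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Rectifier ("rectifying neuron"): zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))  # values in (0, 1)
print(tanh(x))     # values in (-1, 1)
print(relu(x))     # [0. 0. 2.]
```

The rectifier's exact zero on negative inputs is what produces the sparse activations the title refers to.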
Networks of Spiking Neurons: The Third Generation of Neural Network Models
, 1996
"... The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powe ..."
Cited by 192 (14 self)
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
"... . We rst present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture containing only a single neuron that applies the standard (logistic) activation function to the weighted sum of n inputs is h ..."
Cited by 2 (0 self)
On the Derivatives of the Sigmoid
, 1992
"... AbstractThe sigmoid fimction & very widely used as a neuron activation fimction in artificial neural networks. which makes its attributes a matter of some interest. This paper presents some general results on the derivatives of the sigmoid. These results relate the coefficients of various deri ..."
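The paper's general results concern higher derivatives; the best-known special case is the first-derivative identity s′(x) = s(x)(1 − s(x)), which can be checked numerically (an illustrative sketch, not the paper's derivation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # First-derivative identity of the logistic sigmoid: s'(x) = s(x) * (1 - s(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Verify the identity against a central finite difference
x = np.linspace(-4.0, 4.0, 9)
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
print(np.allclose(sigmoid_prime(x), numeric, atol=1e-8))  # True
```

Each higher derivative is likewise a polynomial in s(x), which is what makes the coefficient relations the abstract mentions possible.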
Parameter Estimation of Sigmoid Superpositions
, 2008
"... Superposition of sigmoid function over a finite time interval is shown to be equivalent to the linear combination of the solutions of a linearly parameterized system of logistic differential equations. Due to the linearity with respect to the parameters of the system, it is possible to design an eff ..."
Robust Trainability of Single Neurons
, 1995
"... It is well known that (McCullochPitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linearprogramming methods. We want to analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex conce ..."
Cited by 99 (0 self)
for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristical learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
Approximation of Sigmoid Function and the Derivative for Artificial Neurons
"... A piecewise linear recursive approximation scheme is applied to the computation of the sigmoid function and its derivative in artificial neurons with learning capability. The scheme provides high approximation accuracy with very low memory requirements. The recursive nature of this method allows fo ..."
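The idea of a table-based piecewise linear sigmoid can be sketched as follows. This is a plain knot-interpolation illustration, not the paper's recursive scheme (which achieves the low memory footprint the abstract claims):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Knots for a simple piecewise linear approximation over the active range
xp = np.linspace(-8.0, 8.0, 33)   # 33 knots, spacing 0.5
fp = sigmoid(xp)                   # exact sigmoid values at the knots

def sigmoid_pwl(x):
    # Linear interpolation between knots; np.interp clamps to the
    # endpoint values outside [-8, 8], approximating saturation at 0 and 1
    return np.interp(x, xp, fp)

x = np.linspace(-10.0, 10.0, 1001)
err = np.max(np.abs(sigmoid_pwl(x) - sigmoid(x)))
print(err < 1e-2)  # True: this coarse table is already accurate to ~3e-3
```

Halving the knot spacing roughly quarters the worst-case error, since linear-interpolation error scales with the square of the spacing.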
Sigmoid Neurons Are the Safest Against Additive Errors
"... Abstract. In this paper, we provide one more explanation of why sigmoid 1/(1 + exp(−x)) is successfully used in neural data processing. We are looking for a nonlinear transformation device x → s(x) with the following property: if we have not corrected some systematic error in the input, then we wil ..."