Results 1–10 of 304,005
Representation and Learning in Feedforward Neural Networks
1993
"... This paper gives an introduction to feedforward neural networks. The aim ..."
Visualization and Implementation of Feedforward Neural Networks
Humboldt Universität Berlin, 1996
"... Feedforward neural networks are often used methods for regression and classification. But mostly they are treated as black boxes, that will find the "right" model by themselves. The advantage of flexibility is then compensated by nontransparency of the training process and the final model. ..."
Cited by 2 (0 self)
Circuit Complexity and Feedforward Neural Networks
in Mathematical Perspectives on Neural Networks, 1996
"... Circuit complexity, a subfield of computational complexity theory, can be used to analyze how the resource usage of neural networks scales with problem size. The computational complexity of discrete feedforward neural networks is surveyed, with a comparison of classical circuits to circuits const ..."
Cited by 3 (0 self)
A backpropagation learning framework for feedforward neural networks
IEEE Int. Symposium Circuits Sys. (ISCAS)
"... In this paper, a general backpropagation learning framework for the training of feedforward neural networks is proposed. The convergence to global minimum under the framework is investigated using the Lyapunov stability theory. It is shown the existing feedforward neural networks training algorithms ..."
Cited by 5 (1 self)
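The entry above concerns backpropagation training of feedforward networks. As a point of reference, here is a minimal numpy sketch of one-hidden-layer backpropagation on a toy regression task (illustrative only; the network, data, and learning rate are hypothetical, not the framework proposed in the paper):

```python
import numpy as np

# Minimal sketch: train a one-hidden-layer feedforward network by
# backpropagation on a toy target (product of the two input features).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))            # toy inputs
y = X[:, :1] * X[:, 1:2]                   # toy target

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)               # forward: hidden activations
    out = h @ W2 + b2                      # forward: linear output
    err = out - y                          # gradient of 0.5 * squared error
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2         # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1

# The training loss should decrease over the run.
```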
On Interference of Signals and Generalization in Feedforward Neural Networks
2003
"... Interference of signals in a feedforward neural network may improve generalization. In this paper it is discussed that the interference may also cause highly random generalization. ..."
Statistical Physics of Feedforward Neural Networks
"... The article is a lightly edited version of my habilitation thesis at the University Würzburg. My aim is to give a self contained, if concise, introduction to the formal methods used when offline learning in feedforward networks is analyzed by statistical physics. However, due to its origin, the article is not a comprehensive review of the field but is highly skewed towards reporting my own research. ..."
On Efficiently Monitoring the Learning Process of Feedforward Neural Networks
In ICANN95, Proceedings of the International Conference on Artificial Neural Networks, 1995
"... We propose a characteristic structure number interrelating the weight vectors of a feedforward neural network. It allows the monitoring of the learning process of feedforward neural networks and the identification of characteristic points/phases during the learning process. Some properties are gi ..."
Cited by 2 (2 self)
Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network
1989
"... A new approach to unsupervised learning in a singlelayer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units. An algorithm for unsupervised learning based upon a Hebbian learning rule, which achie ..."
Cited by 290 (2 self)
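The entry above describes unsupervised Hebbian learning in a single-layer linear network. A minimal sketch of a normalized Hebbian update (Oja's rule, which extracts the first principal component with a single linear unit) gives the flavor; the covariance and learning rate below are hypothetical, and the paper's algorithm generalizes this idea to multiple output units:

```python
import numpy as np

# Illustrative sketch: Oja's rule, a Hebbian update with a weight-decay
# term, drives a single linear unit's weight vector toward the leading
# eigenvector of the input covariance (the first principal component).
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 1.0]])   # hypothetical data covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=2000)

w = rng.normal(size=2)                   # weight vector of the linear unit
eta = 0.01                               # learning rate
for x in X:
    y = w @ x                            # unit output
    w += eta * y * (x - y * w)           # Hebbian term minus decay term

# Compare against the leading eigenvector of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, -1]                     # eigh returns ascending order
alignment = abs(w @ pc1) / np.linalg.norm(w)
# alignment should be close to 1: w points along the first principal component
```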
Feedforward Neural Networks in the Classification of Financial Information
The European Journal of Finance, 1997
"... ... In this paper, a practical case based on data from Spanish companies, shows, in an empirical form, the strengths and weaknesses of feedforward neural networks. The desirability of carrying out an exploratory data analysis of the financial ratios in order to study their statistical properties, wi ..."
Cited by 5 (1 self)
Feedforward Neural Networks for Nonparametric Regression
1998
"... Feed forward neural networks (FFNN) with an unconstrained random number of hidden neurons define flexible nonparametric regression models. In Müller and Rios Insua (1998) we have argued that variable architecture models with random size hidden layer significantly reduce posterior multimodality typi ..."
Cited by 23 (0 self)