Task-Dependent Evolution of Modularity in Neural Networks - Connection Science, 2002
"... There exist many ideas and assumptions concerning the development and meaning of modularity in biological and technical neural systems. Nevertheless, this wide field is far from being understood; quantitative simulations and investigations are rare. In our contribution, we empirically study the deve ..."
Abstract - Cited by 10 (4 self)
There exist many ideas and assumptions concerning the development and meaning of modularity in biological and technical neural systems. Nevertheless, this wide field is far from being understood; quantitative simulations and investigations are rare. In our contribution, we empirically study the development of connectionist models in the context of the evolution of artificial neural networks for highly modular problems. We define two measures for the degree of modularity and monitor their values during the evolutionary process. We identify two different reasons for the development of modular structures: the modularity of the task is reflected in the modularity of the adapted structure, and the demand for fast-learning structures increases the selective pressure towards modularity. However, learning can also counterbalance some imperfections of the underlying structure.
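The two modularity measures themselves are not reproduced in this listing. As a rough illustration of what a structural measure of this kind can look like, the following Python sketch computes the fraction of connection weight that stays inside prescribed modules; the function name, the module assignment, and the ratio itself are assumptions for illustration, not the authors' definitions.

import numpy as np

def intra_module_ratio(weights, module_of):
    # weights[i, j]: connection strength from unit i to unit j.
    # module_of[i]: module index assigned to unit i.
    # Returns the share of total weight mass that is intra-module:
    # near 1.0 for a perfectly modular net, lower for mixed wiring.
    total = np.abs(weights).sum()
    if total == 0.0:
        return 0.0
    same = module_of[:, None] == module_of[None, :]  # True where i and j share a module
    return np.abs(weights[same]).sum() / total

# Example: two 2-unit modules joined by a single weak cross connection.
w = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
print(intra_module_ratio(w, np.array([0, 0, 1, 1])))  # about 0.976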
Rprop Using the Natural Gradient, 2005
"... Gradient-based optimization algorithms are the standard methods for adapting the weights of neural networks. The natural gradient gives the steepest descent direction based on a non-Euclidean, from a theoretical point of view more appropriate metric in the weight space. While the natural gradient ..."
Abstract - Cited by 8 (1 self)
Gradient-based optimization algorithms are the standard methods for adapting the weights of neural networks. The natural gradient gives the steepest-descent direction with respect to a non-Euclidean metric on the weight space that is, from a theoretical point of view, more appropriate. While the natural gradient has already proven to be advantageous for online learning, we explore its benefits for batch learning: we empirically compare Rprop (resilient backpropagation), one of the best-performing first-order learning algorithms, using the Euclidean and the non-Euclidean metric, respectively. As batch steepest descent on the natural gradient is closely related to Levenberg-Marquardt optimization, we add this method to our comparison. It turns ...
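For orientation, here is a minimal Python sketch of the standard Rprop update rule (the Rprop- variant without weight backtracking) that serves as the paper's baseline; the hyperparameter defaults are the commonly cited ones, and the natural-gradient variant is not shown.

import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    # Per-weight step sizes adapted from the sign of the gradient:
    # grow the step while the sign agrees, shrink it after a sign flip.
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)  # Rprop-: skip the update after a flip
    return -np.sign(grad) * step, step, grad

# Minimise f(w) = w^2 starting from w = 3.0; w oscillates toward 0.
w, g_prev, step = np.array([3.0]), np.zeros(1), np.full(1, 0.1)
for _ in range(40):
    g = 2.0 * w                          # gradient of w^2
    dw, step, g_prev = rprop_step(g, g_prev, step)
    w += dw
print(w)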
A Neural Model for Multi-Expert Architectures, 2002
"... We present a generalization of conventional artificial neural networks that allows for a functional equivalence to multi-expert systems. The new model provides an architectural freedom going beyond existing multi-expert models and an integrarive formalism to compare and combine various techniques of ..."
Abstract - Cited by 7 (2 self)
We present a generalization of conventional artificial neural networks that allows for a functional equivalence to multi-expert systems. The new model provides an architectural freedom going beyond existing multi-expert models and an integrative formalism to compare and combine various techniques of learning. (We consider gradient, EM, reinforcement, and unsupervised learning.) Its uniform representation aims at a simple genetic encoding and evolutionary structure optimization of multi-expert systems. This paper contains a detailed description of the model and learning rules, empirically validates its functionality, and discusses future perspectives.
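As background, the classical mixture-of-experts computation that such a model generalizes can be sketched in a few lines of Python; the linear experts, the softmax gate, and all layer sizes here are illustrative assumptions rather than the paper's formulation.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

n_in, n_out, n_experts = 4, 2, 3
experts = [rng.normal(size=(n_out, n_in)) for _ in range(n_experts)]  # linear experts
gate = rng.normal(size=(n_experts, n_in))                             # gating network

def moe_forward(x):
    g = softmax(gate @ x)                # expert responsibilities, sum to 1
    y = sum(g[k] * (experts[k] @ x) for k in range(n_experts))
    return y, g

y, g = moe_forward(rng.normal(size=n_in))
print(y.shape, g.round(3))               # (2,) and three gate weights summing to 1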
unknown title, 2002
"... We present a generalization of conventional artificial neural networks that allows for a functional equivalence to multi-expert systems. The new model provides an architectural freedom going beyond existing multi-expert models and an integrative formalism to compare and combine various techniques of ..."
Abstract
We present a generalization of conventional artificial neural networks that allows for a functional equivalence to multi-expert systems. The new model provides an architectural freedom going beyond existing multi-expert models and an integrative formalism to compare and combine various techniques of learning. (We consider gradient, EM, reinforcement, and unsupervised learning.) Its uniform representation aims at a simple genetic encoding and evolutionary structure optimization of multi-expert systems. This paper contains a detailed description of the model and learning rules, empirically validates its functionality, and discusses future perspectives.