Results 1–10 of 10
Efficient representation of recurrent neural networks for Markovian/non-Markovian nonlinear control problems
In Proceedings of the 10th International Conference on Intelligent Systems Design and Applications (ISDA 2010), 2010, pp. 615–620
Cited by 7 (5 self)
A novel representation of recurrent artificial neural networks is proposed for nonlinear Markovian and non-Markovian control problems. The network architecture is inspired by Cartesian Genetic Programming, which is used to encode the network's attributes, namely its weights, topology, and functions. The proposed algorithm is applied to a standard benchmark control problem, double pole balancing, in both the Markovian and non-Markovian cases. Results demonstrate that the method can generate neural architectures and parameters that solve these problems in substantially fewer evaluations than earlier neuroevolutionary techniques. The power of the Recurrent Cartesian Genetic Programming Artificial Neural Network (RCGPANN) lies in its representation, which enables a thorough evolutionary search that produces generalized networks.
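The abstract above describes encoding a network's weights, topology, and per-node functions in a Cartesian Genetic Programming genome. A minimal sketch of that idea, assuming a feed-forward decoding, a two-function set, and gene layout of my own choosing (the paper's actual RCGPANN encoding differs in detail):

```python
import math
import random

# Hypothetical CGP-style neural genome (illustrative, not the authors' exact
# RCGPANN layout): each node gene holds two input indices, two weights, and a
# transfer-function id; indices may point at network inputs or earlier nodes.
FUNCS = [math.tanh, lambda x: 1.0 / (1.0 + math.exp(-x))]  # tanh, sigmoid

def evaluate(genome, inputs, n_outputs):
    """Decode the genome and evaluate it on one input vector."""
    values = list(inputs)                     # node outputs, inputs first
    for in1, in2, w1, w2, f in genome:
        values.append(FUNCS[f](w1 * values[in1] + w2 * values[in2]))
    return values[-n_outputs:]                # last nodes serve as outputs

random.seed(0)
n_in, n_nodes = 2, 4
# Each gene's connection indices only reference inputs or earlier nodes.
genome = [(random.randrange(n_in + i), random.randrange(n_in + i),
           random.uniform(-1, 1), random.uniform(-1, 1),
           random.randrange(len(FUNCS)))
          for i in range(n_nodes)]
out = evaluate(genome, [0.5, -0.3], 1)
```

Because weights, wiring, and function ids all live in one flat genome, a single mutation operator can change any of the three attributes the abstract lists.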
Compressed Network Complexity Search
, 2012
Cited by 6 (2 self)
Indirect encoding schemes for neural network phenotypes can represent large networks compactly. In previous work, we presented an approach in which networks are encoded indirectly as a set of Fourier-type coefficients that decorrelate the weight matrices, so that they can often be represented by a small number of genes, effectively reducing the dimensionality of the search space and speeding up search. Until now, the complexity of networks using this encoding was fixed a priori, both in terms of (1) the number of free parameters (topology) and (2) the number of coefficients. In this paper, we introduce a method, called Compressed Network Complexity Search (CNCS), for automatically determining network complexity that favors parsimonious solutions. CNCS maintains a probability distribution over complexity classes that it uses to select which class to optimize. Class probabilities are adapted based on their expected fitness. Starting from a prior biased toward the simplest networks, the distribution grows gradually until a solution is found. Experiments on two benchmark control problems, including a challenging nonlinear version of the helicopter hovering task, demonstrate that the method consistently finds simple solutions.
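A toy sketch of the CNCS loop as I read the abstract (the toy objective, learning rate, and re-weighting rule are my own stand-ins, not the paper's update equations): sample a complexity class from a distribution, evaluate it, track a running estimate of expected fitness per class, and re-weight classes by a simplicity-biased prior times their estimated fitness.

```python
import math
import random

random.seed(1)
classes = [1, 2, 4, 8]                    # e.g. number of coefficients
prior = [0.7, 0.2, 0.07, 0.03]            # prior biased toward simplicity
expected_fitness = [0.0] * len(classes)

def toy_fitness(k):
    # Stand-in objective: fitness improves with complexity up to a point.
    return 1.0 - abs(k - 4) / 8.0 + random.gauss(0, 0.01)

probs = prior[:]
for step in range(200):
    i = random.choices(range(len(classes)), weights=probs)[0]
    f = toy_fitness(classes[i])
    expected_fitness[i] += 0.1 * (f - expected_fitness[i])  # running mean
    # Re-weight: prior times an exponential of estimated expected fitness.
    exps = [math.exp(5 * e) for e in expected_fitness]
    probs = [p * e for p, e in zip(prior, exps)]
    total = sum(probs)
    probs = [p / total for p in probs]

best = classes[max(range(len(classes)), key=lambda i: expected_fitness[i])]
```

The key property, mirroring the abstract, is that search starts almost entirely in the simplest class and only spends evaluations on larger classes once their estimated fitness justifies it.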
Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning
Cited by 4 (1 self)
The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes, and scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale up our "compressed" network encoding, in which network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very large networks due to the high dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task, requiring networks with over 3,000 weights, and (2) a version of the TORCS driving game, in which networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.
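The compression idea above can be sketched as expanding a small grid of low-frequency coefficients into a full weight matrix with an inverse cosine transform. The function below is illustrative (a plain unnormalized inverse DCT-II; the papers use Fourier-type coefficients in the same spirit, with their own transform and normalization):

```python
import math

def idct2(coeffs, rows, cols):
    """Expand a small coefficient grid into a rows x cols weight matrix
    via an unnormalized inverse 2-D DCT (illustrative sketch)."""
    kr, kc = len(coeffs), len(coeffs[0])
    W = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for u in range(kr):
                for v in range(kc):
                    s += (coeffs[u][v]
                          * math.cos(math.pi * u * (2 * i + 1) / (2 * rows))
                          * math.cos(math.pi * v * (2 * j + 1) / (2 * cols)))
            W[i][j] = s
    return W

# 4 genes decode into a 6x6 weight matrix (36 weights): the genome size is
# fixed by the number of coefficients, not by the size of the network.
genes = [[0.5, -0.2],
         [0.1, 0.3]]
W = idct2(genes, 6, 6)
```

This is what makes million-weight networks searchable: evolution operates on the handful of coefficients while the decoded matrix can be made arbitrarily large.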
Intrinsically Motivated Neuroevolution for Vision-Based Reinforcement Learning
, 2011
Cited by 3 (0 self)
Neuroevolution, the artificial evolution of neural networks, has shown great promise on continuous reinforcement learning tasks that require memory. However, it is not yet directly applicable to realistic embedded agents using high-dimensional (e.g. raw video image) inputs, which require very large networks. In this paper, neuroevolution is combined with an unsupervised sensory preprocessor, or compressor, that is trained on images generated from the environment by the population of evolving recurrent neural network controllers. The compressor not only reduces the input cardinality of the controllers, but also biases the search toward novel controllers by rewarding those that discover images it reconstructs poorly. The method is successfully demonstrated on a vision-based version of the well-known mountain car benchmark, where controllers receive only single high-dimensional visual images of the environment, from a third-person perspective, instead of the standard two-dimensional state vector that includes velocity information.
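The coupling between compressor and search described above can be sketched very roughly: controllers earn a novelty bonus proportional to how badly the compressor reconstructs the images they reach, and those images then retrain the compressor. The "compressor" below is a stand-in mean-image model purely for illustration, not the unsupervised preprocessor used in the paper:

```python
class MeanImageCompressor:
    """Toy compressor: its 'reconstruction' of any image is the running
    mean of all images it has been trained on (illustrative stand-in)."""

    def __init__(self, size):
        self.mean = [0.0] * size
        self.n = 0

    def reconstruction_error(self, image):
        # High error = unfamiliar image = novelty bonus for the controller.
        return sum((p - m) ** 2 for p, m in zip(image, self.mean))

    def train(self, image):
        # Incremental update of the running mean image.
        self.n += 1
        self.mean = [m + (p - m) / self.n for m, p in zip(self.mean, image)]

comp = MeanImageCompressor(4)
comp.train([0.0, 0.0, 0.0, 0.0])          # compressor has seen dark images
familiar, novel = [0.0] * 4, [1.0] * 4
bonus_familiar = comp.reconstruction_error(familiar)
bonus_novel = comp.reconstruction_error(novel)
```

A controller that reaches the `novel` image would be rewarded over one that stays in `familiar` territory, which is the intrinsic-motivation bias the abstract describes.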
Generalized Compressed Network Search
Cited by 1 (1 self)
This paper presents initial results of Generalized Compressed Network Search (GCNS), a method for automatically identifying the important frequencies for neural networks encoded as Fourier-type coefficients (i.e. "compressed" networks [7]). GCNS is a general search procedure in this coefficient space: both the number of frequencies and their values are determined automatically by employing variable-length chromosomes, inspired by messy genetic algorithms. The method achieves better compression than our previous approach, and promises improved generalization for evolved controllers. Results on a high-dimensional octopus-arm control problem show that a high-fitness 3,680-weight network can be encoded using fewer than 10 coefficients at the frequencies identified by GCNS.
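A hedged sketch of the variable-length chromosome idea mentioned above (gene layout, rates, and operators are my own illustrative choices, inspired by messy GAs rather than taken from the paper): each gene is a (frequency index, coefficient value) pair, and mutation may perturb a value, add a frequency, or drop one, so both the number of frequencies and their values are under evolutionary control.

```python
import random

random.seed(3)

def mutate(chromosome, max_freq=32):
    """One mutation step on a variable-length list of
    [frequency_index, coefficient_value] genes."""
    child = [list(g) for g in chromosome]
    op = random.random()
    if op < 0.2 and len(child) > 1:           # delete a frequency gene
        child.pop(random.randrange(len(child)))
    elif op < 0.4:                            # insert a new frequency gene
        child.append([random.randrange(max_freq), random.gauss(0, 1)])
    else:                                     # perturb one coefficient value
        g = random.choice(child)
        g[1] += random.gauss(0, 0.1)
    return child

chromo = [[0, 0.8], [3, -0.4]]                # start with two frequencies
for _ in range(50):
    chromo = mutate(chromo)
```

With a fitness function that penalizes chromosome length, such a search can converge on the few frequencies that matter, which is the compression result the abstract reports.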
New Millennium AI and the Convergence of History: Update of 2012
, 2012
...millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. There has also been rapid progress in not quite universal, but still rather general and practical, artificial recurrent neural networks for learning sequence-processing programs, now yielding state-of-the-art results in real-world applications. And computing power per Euro is still growing by a factor of 100–1000 per decade, greatly increasing the feasibility of neural networks in general, which have started to yield human-competitive results in challenging pattern recognition competitions. Finally, a recent formal theory of fun and creativity identifies basic principles of curious and creative machines, laying foundations for artificial scientists and artists. Here I briefly review some of the new results of my lab at IDSIA and speculate about future developments, pointing out that the time intervals between the most notable events in over 40,000 years, or 2^9 human lifetimes, have sped up exponentially, apparently converging to zero within the next few decades. Or is this impression just a byproduct of the way humans allocate memory space to past events?
Complexity Search for Compressed Neural Networks
In this paper, we introduce a method, called Compressed Network Complexity Search (CNCS), for automatically determining the complexity of compressed networks (neural networks encoded indirectly by Fourier-type coefficients) that favors parsimonious solutions. CNCS maintains a probability distribution over complexity classes that it uses to select which class to optimize. Class probabilities are adapted based on their expected fitness, starting from a prior biased toward the simplest networks. Experiments on a challenging nonlinear version of the helicopter hovering task show that the method consistently finds simple solutions.
Kernel Representations for Evolving Continuous Functions
To parameterize continuous functions for evolutionary learning, we use kernel expansions in nested sequences of function spaces of growing complexity. This approach is particularly powerful when dealing with non-convex constraints and discontinuous objective functions. Kernel methods offer a number of properties that make them attractive for parameterizing continuous functions, such as smoothness and locality, which also make them a good basis for mutation operators. Beyond such practical considerations, kernel methods make heavy use of inner products in function space and offer a well-established regularization framework. We show how evolutionary computation can profit from these properties. Searching function spaces of iteratively increasing complexity allows the solution to evolve from a simple first guess into a complex, highly refined function. At transition points, where the evolution strategy is confronted with the next level of functional complexity, the kernel framework can be used to project the search distribution into the extended search space. The feasibility of the method is demonstrated on challenging trajectory planning problems in which redundant robots have to avoid obstacles.
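The transition between complexity levels described above can be sketched as follows (illustrative choices throughout: a Gaussian kernel, fixed centers, and zero-initialization of new coefficients; the paper's projection of the full search distribution is more involved): a function is represented as a kernel expansion f(x) = sum_i alpha_i * k(x, c_i), and moving to the next level adds a center whose coefficient starts at zero, so the represented function is unchanged at the transition point.

```python
import math

def gauss_kernel(x, c, sigma=0.5):
    # Gaussian kernel: smooth and local, convenient for mutation operators.
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def f(x, alphas, centers):
    # Kernel expansion: f(x) = sum_i alpha_i * k(x, c_i).
    return sum(a * gauss_kernel(x, c) for a, c in zip(alphas, centers))

centers = [0.0, 1.0]
alphas = [1.0, -0.5]
y_before = f(0.3, alphas, centers)

# Move to the next complexity level: add a center with a zero coefficient,
# leaving the represented function unchanged at the transition.
centers.append(0.5)
alphas.append(0.0)
y_after = f(0.3, alphas, centers)
```

Evolution can then perturb the enlarged coefficient vector, refining the function in the extended space without discarding what was learned in the simpler one.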
NeuroEvolution: The Importance of Transfer Function Evolution
NeuroEvolution is the application of evolutionary algorithms to the training of artificial neural networks. Currently, the vast majority of neuroevolutionary methods create homogeneous networks of user-defined transfer functions, despite NeuroEvolution being capable of creating heterogeneous networks in which each neuron's transfer function is not chosen by the user but selected or optimised during evolution. This paper demonstrates how NeuroEvolution can be used to select or optimise each neuron's transfer function, and shows empirically that doing so significantly aids training. This finding is important because most neuroevolutionary methods are capable of creating heterogeneous networks using the methods described.
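A toy sketch of the heterogeneous-network idea above (the function set, gene layout, and mutation rate are my own illustrative choices, not the paper's setup): each neuron gene carries its own transfer-function id alongside its weights, and mutation can swap the function as well as perturb the weights.

```python
import math
import random

# Illustrative transfer-function set; evolution picks per neuron.
FUNCS = {"tanh": math.tanh,
         "relu": lambda x: max(0.0, x),
         "sin": math.sin}

def activate(gene, inputs):
    """Evaluate one neuron: weighted sum passed through its own function."""
    weights, fname = gene
    return FUNCS[fname](sum(w * x for w, x in zip(weights, inputs)))

def mutate_neuron(gene, rate=0.1):
    """Perturb weights; occasionally swap the transfer function too."""
    weights, fname = gene
    weights = [w + random.gauss(0, 0.1) for w in weights]
    if random.random() < rate:
        fname = random.choice(list(FUNCS))
    return (weights, fname)

random.seed(4)
gene = ([0.5, -0.2], "tanh")
out = activate(gene, [1.0, 1.0])          # tanh(0.5 - 0.2) = tanh(0.3)
child = mutate_neuron(gene)
```

Because the function id is just another gene, no extra machinery is needed: the same evolutionary loop that tunes weights also discovers a per-neuron mix of transfer functions.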