Results 1–10 of 54
Towards a modern theory of adaptive networks: expectation and prediction
 Psychol. Rev.
, 1981
"... Many adaptive neural network theories are based on neuronlike adaptive elements that can behave as single unit analogs of associative conditioning. In this article we develop a similar adaptive element, but one which is more closely in accord with the facts of animal learning theory than elements co ..."
Abstract

Cited by 282 (18 self)
Many adaptive neural network theories are based on neuronlike adaptive elements that can behave as single-unit analogs of associative conditioning. In this article we develop a similar adaptive element, but one which is more closely in accord with the facts of animal learning theory than elements commonly studied in adaptive network research. We suggest that an essential feature of classical conditioning that has been largely overlooked by adaptive network theorists is its predictive nature. The adaptive element we present learns to increase its response rate in anticipation of increased stimulation, producing a conditioned response before the occurrence of the unconditioned stimulus. The element also is in strong agreement with the behavioral data regarding the effects of stimulus context, since it is a temporally refined extension of the Rescorla-Wagner model. We show by computer simulation that the element becomes sensitive to the most reliable, nonredundant, and earliest predictors of reinforcement. We also point out that the model solves many of the stability and saturation problems encountered in network simulations. Finally, we discuss our model in light of recent advances in the physiology and biochemistry of synaptic mechanisms.

One way to bridge the gap between behavioral and neural views of learning is to postulate neural analogs of behavioral modification paradigms. Hebb's suggestion that when a cell A repeatedly and persistently takes part in firing another cell B, then A's efficiency in firing B is increased, is the most familiar of these postulates (Hebb, 1949). This rule for synaptic plasticity is a neural analog of associative conditioning and continues to exert a powerful influence on theoretical and experimental research in learning and memory. Neural network models designed to explore the behavioral possibilities of modifiable structures typically em …
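The Rescorla-Wagner rule that the element temporally extends can be sketched in a few lines. The learning rate, reward value, and trial counts below are illustrative assumptions, not values from the paper; the classic "blocking" effect falls out of the shared prediction error: a cue trained first absorbs the prediction, so a later, redundant cue gains almost no associative strength — exactly the sensitivity to reliable, nonredundant predictors the abstract describes.

```python
# Minimal Rescorla-Wagner sketch (illustrative parameters): V[c] is the
# associative strength of cue c; every cue present on a trial is updated
# by a share of the common prediction error.
def rw_trial(V, present, reward, alpha=0.3):
    error = reward - sum(V[c] for c in present)  # shared prediction error
    for c in present:
        V[c] += alpha * error
    return V

V = {"A": 0.0, "B": 0.0}
for _ in range(50):                  # phase 1: cue A alone predicts reward
    rw_trial(V, ["A"], reward=1.0)
for _ in range(50):                  # phase 2: compound AB, same reward
    rw_trial(V, ["A", "B"], reward=1.0)
# A has already absorbed the prediction, so B is "blocked" and stays near 0
print(round(V["A"], 3), round(V["B"], 3))
```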
Exact Schema Theory for Genetic Programming and Variable-Length Genetic Algorithms with One-Point Crossover
, 2001
"... A few schema theorems for Genetic Programming (GP) have been proposed in the literature in the last few years. Since they consider schema survival and disruption only, they can only provide a lower bound for the expected value of the number of instances of a given schema at the next generation rathe ..."
Abstract

Cited by 37 (16 self)
A few schema theorems for Genetic Programming (GP) have been proposed in the literature in the last few years. Since they consider schema survival and disruption only, they can only provide a lower bound for the expected value of the number of instances of a given schema at the next generation rather than an exact value. This paper presents theoretical results for GP with one-point crossover which overcome this problem. Firstly, we give an exact formulation for the expected number of instances of a schema at the next generation in terms of microscopic quantities. Thanks to this formulation we are then able to provide an improved version of an earlier GP schema theorem in which some (but not all) schema creation events are accounted for. Then, we extend this result to obtain an exact formulation in terms of macroscopic quantities which makes all the mechanisms of schema creation explicit. This theorem allows the exact formulation of the notion of effective fitness in GP and opens the way to future work on GP convergence, population sizing, operator biases, and bloat, to mention only some of the possibilities.
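The gap between survival/disruption bounds and exact expectations is easy to see numerically. The sketch below uses the classic fixed-length GA setting rather than the GP tree case the paper treats, with an illustrative population size, string length, and schema: under flat fitness, one-point crossover disrupts instances of a maximal-defining-length schema, but it also creates new ones, so the realised count stays far above the disruption-only lower bound (which here collapses to zero).

```python
import random

random.seed(0)
L, N = 8, 2000                      # string length, number of parent pairs

def in_schema(s):                   # schema 1******1: fixed bits at both ends
    return s[0] == 1 and s[-1] == 1

def one_point(p1, p2):
    cut = random.randint(1, L - 1)  # crossover point strictly inside the string
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(2 * N)]
before = sum(map(in_schema, pop))
kids = []
for i in range(N):
    c1, c2 = one_point(pop[2 * i], pop[2 * i + 1])
    kids += [c1, c2]
after = sum(map(in_schema, kids))

# Disruption-only bound: m * (1 - pc * d(H)/(L-1)); with pc = 1 and the
# schema's defining length d(H) = L-1, the bound is zero instances.
bound = before * (1 - (L - 1) / (L - 1))
print(before, after, bound)         # creation events keep `after` near `before`
```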
Exact Schema Theorems for GP with One-Point and Standard Crossover Operating on Linear Structures and their Application to the Study of the Evolution of Size
 In Genetic Programming, Proceedings of EuroGP 2001, LNCS
, 2001
"... In this paper, firstly we specialise the exact GP schema theorem for onepoint crossover to the case of linear structures of variable length, for example binary strings or programs with arity1 primitives only. Secondly, we extend this to an exact schema theorem for GP with standard crossover app ..."
Abstract

Cited by 32 (18 self)
In this paper, firstly we specialise the exact GP schema theorem for one-point crossover to the case of linear structures of variable length, for example binary strings or programs with arity-1 primitives only. Secondly, we extend this to an exact schema theorem for GP with standard crossover applicable to the case of linear structures. Then we study, both mathematically and numerically, the schema equations and their fixed points for infinite populations for both a constant and a length-related fitness function. This allows us to characterise the bias induced by standard crossover, which is very peculiar. In the case of a constant fitness function, at the fixed point, structures of any length are present with nonzero probability; however, shorter structures are sampled exponentially more frequently than longer ones.
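The crossover bias on linear structures can be checked with a small Monte Carlo sketch (population size, generation count, and initial length below are illustrative assumptions). On a flat landscape only sizes matter: standard crossover glues a random prefix of one parent to a random suffix of the other, and pairing parents without replacement keeps the total amount of genetic material exactly constant, so any change in the size distribution is pure operator bias. The run ends with far more structures below the initial length than above it, the exponential over-sampling of short structures the abstract describes.

```python
import random

random.seed(1)
POP, GENS, INIT_LEN = 1000, 100, 10

pop = [INIT_LEN] * POP              # track sizes only: fitness is flat
for _ in range(GENS):
    random.shuffle(pop)
    nxt = []
    for a, b in zip(pop[::2], pop[1::2]):
        i, j = random.randint(0, a), random.randint(0, b)
        c1, c2 = i + (b - j), (a - i) + j   # prefix of one + suffix of the other
        if c1 == 0 or c2 == 0:
            nxt += [a, b]                   # reject degenerate empty offspring
        else:
            nxt += [c1, c2]
    pop = nxt

mean = sum(pop) / POP               # total length is conserved exactly
shorter = sum(1 for n in pop if n < INIT_LEN)
longer = sum(1 for n in pop if n > INIT_LEN)
print(mean, shorter, longer)        # mean stays put; the distribution skews short
```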
A Schema Theory Analysis of the Evolution of Size in Genetic Programming With Linear Representations
, 2001
"... In this paper we use the schema theory presented in [20] to better understand the changes in size distribution when using GP with standard crossover and linear structures. Applications of the theory to problems both with and without fitness suggest that standard crossover induces specific biases ..."
Abstract

Cited by 30 (17 self)
In this paper we use the schema theory presented in [20] to better understand the changes in size distribution when using GP with standard crossover and linear structures. Applications of the theory to problems both with and without fitness suggest that standard crossover induces specific biases in the distributions of sizes, with a strong tendency to over-sample small structures, and indicate the existence of strong redistribution effects that may be a major force in the early stages of a GP run. We also present two important theoretical results: an exact theory of bloat, and a general theory of how average size changes on flat landscapes with glitches. The latter implies the surprising result that a single program glitch in an otherwise flat fitness landscape is sufficient to drive the average program size of an infinite population, which may have important implications for the control of code growth.
Generalisation of the limiting distribution of program sizes in tree-based genetic programming and analysis of its effects on bloat
 in GECCO ’07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation
, 2007
"... Abstract. We provide strong theoretical and experimental evidence that standard subtree crossover with uniform selection of crossover points pushes a population of aary GP trees towards a distribution of tree sizes of the form: Pr{n} =(1−apa) an +1 ..."
Abstract

Cited by 27 (10 self)
We provide strong theoretical and experimental evidence that standard subtree crossover with uniform selection of crossover points pushes a population of a-ary GP trees towards a distribution of tree sizes of the form: Pr{n} = (1 − ap_a) …
Schema Theory Analysis of Mutation Size Biases in Genetic Programming With Linear Representations
, 2001
"... Understanding operator bias in evolutionary computation is important because it is possible for the operator's biases to work against the intended biases induced by the fitness function. In recent work we showed how developments in GP schema theory can be used to better understand the biases in ..."
Abstract

Cited by 22 (17 self)
Understanding operator bias in evolutionary computation is important because it is possible for the operator's biases to work against the intended biases induced by the fitness function. In recent work we showed how developments in GP schema theory can be used to better understand the biases induced by the standard subtree crossover when genetic programming is applied to variable length linear structures. In this paper we use the schema theory to better understand the biases induced on linear structures by two common GP subtree mutation operators: FULL and GROW mutation. In both cases we find that the operators do have quite specific biases and typically strongly oversample shorter strings.
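The kind of size bias the paper analyses can be illustrated for the linear-structure case: with arity-1 primitives, GROW keeps appending a function node with some probability q and stops on drawing a terminal, so the length of the structures it produces is geometric. The parameter q = 0.5 below is an illustrative assumption, not a value from the paper; the point is simply that such a generator heavily over-samples the shortest strings.

```python
import random

random.seed(2)

def grow_length(q=0.5):
    """Length of a linear structure built GROW-style: keep appending an
    arity-1 function with probability q, stop on drawing a terminal."""
    n = 1                           # the terminal that ends the chain
    while random.random() < q:
        n += 1
    return n

samples = [grow_length() for _ in range(20000)]
mean = sum(samples) / len(samples)
# Pr{len = n} = q**(n-1) * (1-q): a geometric law with mean 1/(1-q) = 2,
# so roughly half of all generated structures are a single node.
frac_len1 = samples.count(1) / len(samples)
print(round(mean, 2), round(frac_len1, 2))
```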
Exact GP Schema Theory for Headless Chicken Crossover and Subtree Mutation
 In Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), Seoul, Korea
, 2001
"... Here a new general GP schema theory for headless chicken crossover and subtree mutation is presented. The theory gives an exact formulation for the expected number of instances of a schema at the next generation either in terms of microscopic quantities or in terms of macroscopic ones. The paper giv ..."
Abstract

Cited by 20 (14 self)
Here a new general GP schema theory for headless chicken crossover and subtree mutation is presented. The theory gives an exact formulation for the expected number of instances of a schema at the next generation either in terms of microscopic quantities or in terms of macroscopic ones. The paper gives examples which show how the theory can be specialised to specific operators.
Markov chain models for GP and variable-length GAs with homologous crossover
 In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001)
, 2001
"... In this paper we present a Markov chain model for GP and variablelength GAs with homologous crossover: a set of GP operators where the offspring are created preserving the position of the genetic material taken from the parents. We obtain this result by using the core of Vose’s model for GAs in con ..."
Abstract

Cited by 18 (9 self)
In this paper we present a Markov chain model for GP and variable-length GAs with homologous crossover: a set of GP operators where the offspring are created preserving the position of the genetic material taken from the parents. We obtain this result by using the core of Vose’s model for GAs in conjunction with a specialisation of recent GP schema theory for such operators. The model is then specialised for the case of GP operating on 0/1 trees: a tree-like generalisation of the concept of binary string. For these, symmetries exist that can be exploited to obtain further simplifications. In the absence of mutation, the theory presented here generalises Vose’s GA model to GP and variable-length GAs.
Backward-chaining evolutionary algorithms
, 2006
"... Starting from some simple observations on a popular selection method in Evolutionary Algorithms (EAs)tournament selectionwe highlight a previouslyunknown source of inefficiency. This leads us to rethink the order in which operations are performed within EAs, and to suggest an algorithmthe ..."
Abstract

Cited by 15 (2 self)
Starting from some simple observations on a popular selection method in Evolutionary Algorithms (EAs), tournament selection, we highlight a previously unknown source of inefficiency. This leads us to rethink the order in which operations are performed within EAs, and to suggest an algorithm, the EA with efficient macro-selection, that avoids the inefficiencies associated with tournament selection. This algorithm has the same expected behaviour as the standard EA but yields considerable savings in terms of fitness evaluations. Since fitness evaluation typically dominates the resources needed to solve any non-trivial problem, these savings translate into a reduction in computer time. Noting the connection between the algorithm and rule-based systems, we then further modify the order of operations in the EA, effectively turning the evolutionary search into an inference process operating in backward-chaining mode. The resulting backward-chaining EA creates and evaluates individuals recursively, backward from the last generation to the first, using depth-first search and backtracking. It is even more powerful than the EA with efficient macro-selection in that it shares all its benefits, but it also provably finds fitter solutions sooner, i.e., it is a faster algorithm. These algorithms can be applied to any form of population-based search, any representation, fitness function, crossover and mutation, provided they use tournament selection. We analyse their behaviour and benefits both theoretically, using Markov chain theory and space/time complexity analysis, and empirically, by performing a variety of experiments with standard and backward-chaining versions of genetic algorithms and genetic programming.
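The inefficiency the abstract alludes to can be checked directly. To fill one generation, a standard EA runs N tournaments of size t, drawing N·t competitors uniformly with replacement; an individual is therefore never drawn into any tournament with probability (1 − 1/N)^(N·t) ≈ e^(−t), and its fitness evaluation was wasted. The sketch below estimates that fraction by simulation and compares it with the e^(−t) approximation; population and tournament sizes are illustrative assumptions.

```python
import math
import random

random.seed(3)
N, T = 1000, 2                      # population size, tournament size

def unused_fraction(n, t):
    """Fraction of the population never drawn into any tournament while
    selecting n parents with tournaments of size t (with replacement)."""
    drawn = set()
    for _ in range(n * t):          # n tournaments, t competitors each
        drawn.add(random.randrange(n))
    return 1 - len(drawn) / n

# Average over repeated generations to smooth out sampling noise.
est = sum(unused_fraction(N, T) for _ in range(50)) / 50
print(round(est, 3), round(math.exp(-T), 3))   # empirical vs e^{-t}
```

With t = 2, roughly one individual in seven is evaluated but never competes, which is the evaluation budget a backward-chaining evaluation order can avoid spending.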