Results 1 - 7 of 7
Evolutionary computation: Comments on the history and current state
IEEE Transactions on Evolutionary Computation, 1997
Cited by 280 (0 self)
Abstract
Evolutionary computation has started to receive significant attention during the last decade, although its origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with links to genetic programming (GP) and classifier systems (CS)], evolution strategies (ES), and evolutionary programming (EP), by analyzing and comparing their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
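The constituents the survey names (representation, variation, reproduction, selection) plug into one shared loop across GA, ES, and EP. As an illustration only, and not any specific algorithm from the article, here is a minimal (mu+lambda) evolutionary loop with a real-valued representation, Gaussian mutation, and truncation selection; all names and parameter values are assumptions made for the sketch.

```python
import random

def evolve(fitness, dim=5, mu=10, lam=20, gens=50, sigma=0.1, seed=0):
    """Minimal (mu+lambda) evolutionary loop: the representation is a
    real vector, variation is Gaussian mutation, and selection is
    truncation on fitness (lower is better)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)                           # reproduction
            child = [x + rng.gauss(0, sigma) for x in parent]  # variation
            offspring.append(child)
        pop = sorted(pop + offspring, key=fitness)[:mu]        # selection
    return pop[0]

# Illustrative objective: minimize the sphere function.
sphere = lambda v: sum(x * x for x in v)
best = evolve(sphere)
```

Replacing the representation with bit strings, the mutation with crossover, or the truncation with tournament selection yields the GA, ES, and EP variants the survey compares.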
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
SIAM Journal on Optimization, 2004
Cited by 55 (6 self)
Abstract
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
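The filter and mixed-variable machinery are too involved for a short listing, but the poll step underlying pattern search is simple: evaluate the objective at mesh points around the current iterate, move on success, refine the mesh on failure. The following is a sketch of that basic step only, with illustrative names, not the Audet-Dennis method itself.

```python
def pattern_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
    """Basic coordinate pattern search: poll x +/- delta*e_i along each
    coordinate; accept any improving poll point, otherwise halve delta."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(len(x)):
            for step in (delta, -delta):
                y = list(x)
                y[i] += step
                fy = f(y)
                if fy < fx:              # successful poll: move the iterate
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                 # unsuccessful poll: refine the mesh
    return x, fx

# Illustrative smooth objective with minimum at (1, -2).
x, fx = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

The filter extension replaces the single acceptance test `fy < fx` with a dominance test on the pair (objective, constraint violation), which is what allows general nonlinear constraints without penalty parameters.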
Applying evolutionary programming to selected traveling salesman problems
Cybernetics and Systems, 1993
Cited by 52 (0 self)
Abstract
Evolutionary programming is a stochastic optimization procedure that can be applied to difficult combinatorial problems. Experiments are conducted with three standard optimal control problems (linear-quadratic, harvest, and push-cart). The results are compared to those obtained with genetic algorithms and the General Algebraic Modeling System (GAMS), a numerical optimization software package. The results indicate that evolutionary programming generally outperforms genetic algorithms. Evolutionary programming also compares well with GAMS on certain problems for which GAMS is specifically designed and outperforms GAMS on other problems. The computational requirements for each procedure are briefly discussed.
Learning Finite State Transducers: Evolution Versus Heuristic State Merging
Cited by 1 (0 self)
Abstract
Finite State Transducers (FSTs) are Finite State Machines (FSMs) that map strings in a source domain into strings in a target domain. While there are many reports in the literature of evolving FSMs, there has been much less work on evolving FSTs. In particular, the fitness functions required for evolving FSTs are generally different from those used for FSMs. In this paper three string distance based fitness functions are evaluated, in order of increasing computational complexity: string equality, Hamming distance, and edit distance. The fitness-distance correlation (FDC) and evolutionary performance of each fitness function are analysed when used within a random mutation hill climber (RMHC). Edit distance has the strongest FDC and also provides the best evolutionary performance, in that it is more likely to find the target FST within a given number of fitness function evaluations. Edit distance is also the most expensive to compute, but in most cases this extra computation is more than justified by its performance. The RMHC was compared with the best known heuristic method for learning FSTs, the Onward Subsequential Transducer Inference Algorithm (OSTIA). On noise-free data the RMHC performs best on problems with sparse training sets and small target machines. The RMHC and OSTIA offer similar performance for large target machines and denser data sets. When noise-corrupted data is used for training, the RMHC still performs well, while OSTIA performs poorly given even small amounts of noise. The RMHC is also shown to outperform a genetic algorithm. Hence, for certain classes of FST induction problem, the RMHC presented in this paper offers the best performance of any known algorithm.
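Of the three fitness functions compared, edit distance is the most expensive because it is computed by dynamic programming over the two strings, whereas equality and Hamming distance are linear scans. The sketch below, with illustrative function names (the paper's actual implementation is not shown here), illustrates the idea of a distance-based fitness.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming: the minimum number
    of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fst_fitness(outputs, targets, dist=edit_distance):
    """Fitness of a candidate transducer: total string distance between
    its outputs and the target strings (lower is better). Passing string
    equality or Hamming distance as `dist` gives the cheaper variants."""
    return sum(dist(o, t) for o, t in zip(outputs, targets))
```

A smoother distance gives the hill climber gradient information that plain equality (a 0/1 signal) cannot, which is consistent with edit distance having the strongest fitness-distance correlation.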
Foundations of evolutionary computation
Proceedings of the SPIE, Volume 6228, 2006
Cited by 1 (0 self)
Abstract
Evolutionary computation is a rapidly expanding field of research with a long history. Much of that history remains unknown to most practitioners and researchers. This paper offers a review of selected foundational efforts in evolutionary computation. A brief initial overview of the essential components of evolutionary algorithms is presented, followed by a review of early research in artificial life, evolving programs, and evolvable hardware. Comments on theoretical developments and future directions conclude the review.
Pattern Search Algorithms for Mixed Variable General Constrained Optimization Problems
Rice University, Department of, 2002
Cited by 1 (0 self)
Playing the Rock-Paper-Scissors Game with a Genetic Algorithm
Abstract
This paper describes a strategy to follow whilst playing the Rock-Paper-Scissors game. Instead of making a biased decision, a rule is adopted where the outcomes of the game in the last few turns are observed and then a deterministic decision is made. Such a strategy is encoded into a genetic string, and a genetic algorithm (GA) works on a population of such strings. Good strings are produced in later generations. This strategy is found to be successful, and its efficiency is demonstrated by testing it against systematic as well as human strategies.

1. Introduction
Many concepts and examples in game theory can provide good models for constructing abstract evolutionary systems. Though game theory was originally developed by von Neumann and Morgenstern [13] for application to economic theory, it has since spread to many other disciplines. Maynard Smith and Price [17] opened the door to the wide use of game theory in evolutionary ecology. In our current work, we construct an evolutionary system to be applied to the Rock-Paper-Scissors (RPS) game. Rock-Paper-Scissors is a classical two-person game used to quickly decide on a winner. It is a game that children as well as adults play, mathematicians analyze, and a certain species of lizard in California takes very seriously [14]. We use a genetic algorithm [1][2] to train a player that makes use of the historical behavior of the opponent during the past few games to guide its current decision. Rock-Paper-Scissors is a good model for experimental and theoretical investigations of cooperative short-memory behavior.

2. Rock-Paper-Scissors Rule
In its simplest form, each of two players has a choice of Scissors, Paper, or Rock. The two players simultaneously make a choice each. Depending on the two players' choices, a winner is decided according to the rule in Table 1.
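One way to realize "observe the last few turns, then decide deterministically" is to index a genome by the window of the opponent's last k moves, so that a string of 3^k symbols fully specifies the strategy. This encoding and all names below are illustrative assumptions, not necessarily the paper's exact scheme.

```python
from itertools import product

MOVES = "RPS"
BEATS = {"R": "S", "P": "R", "S": "P"}   # each key beats its value

def play(genome, opp_history, k=2):
    """Deterministic strategy: the genome assigns one move to every
    possible window of the opponent's last k moves (3**k entries)."""
    if len(opp_history) < k:
        return "R"                        # fixed opening before history fills
    keys = ["".join(w) for w in product(MOVES, repeat=k)]
    index = keys.index("".join(opp_history[-k:]))
    return genome[index]

def score(mine, theirs):
    """Payoff of one round: +1 win, 0 draw, -1 loss."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1
```

A GA then treats each 3^k-length genome as an individual and uses the accumulated `score` over a series of games as its fitness, so strings that exploit the opponent's history survive into later generations.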