Results 1–10 of 11
Genetics-based Machine Learning
Grzegorz Rozenberg, Thomas Bäck, and Joost Kok, editors, Handbook of Natural Computing: Theory, Experiments, and …, 2010
Learning Classifier Systems: Looking Back and Glimpsing Ahead
Abstract

Cited by 2 (0 self)
In recent years, research on Learning Classifier Systems (LCSs) has become increasingly pronounced and diverse. There have been significant advances in the LCS field on various fronts, including system understanding, representations, computational models, and successful applications. In comparison to other machine learning techniques, the advantages of LCSs have become more pronounced: (1) rule comprehensibility, and thus knowledge extraction, is straightforward; (2) online learning is possible; (3) local minima are avoided due to the evolutionary learning component; (4) distributed solution representations evolve; and (5) larger problem domains can be handled. After the tenth edition of the International Workshop on LCSs, more than ever before, we are looking towards an exciting future. More diverse and challenging applications, efficiency enhancements, studies of dynamical systems, and applications to cognitive control approaches appear imminent. The aim of this paper is to look back at the LCS field, with emphasis on recent advances, and to glimpse ahead by discussing future challenges and opportunities for successful system applications in various domains.
Improving Classifier Error Estimate in XCSF
Abstract

Cited by 1 (1 self)
We study the current definition of classifier error in XCSF and discuss the limitations of the algorithm that is currently used to compute classifier error from online experience. We introduce a new definition of classifier error and study the performance of two novel estimation algorithms based on this definition. Our results suggest that the new estimation algorithms can be more robust and improve system generalization.
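As background for the estimators this abstract compares against, the conventional incremental error update in XCS-style systems is a Widrow-Hoff (delta-rule) step toward the current absolute prediction error. The following is a minimal sketch of that baseline, not of the paper's proposed algorithms; the learning rate beta and all numbers are illustrative.

```python
# Minimal sketch of the incremental (Widrow-Hoff) classifier error update
# conventionally used in XCS-style systems; beta and the example values
# are illustrative assumptions, not the paper's proposed estimators.

def update_error(error, prediction, target, beta=0.2):
    """Move the running error estimate toward the current absolute error."""
    return error + beta * (abs(target - prediction) - error)

# A classifier predicting a constant 0.5 against targets oscillating around
# it: the error estimate drifts toward the true mean absolute error of 0.1.
error = 0.0
for target in [0.4, 0.6, 0.4, 0.6]:
    error = update_error(error, 0.5, target)
```

Because each sample pulls the estimate only a fraction beta of the way, the estimate lags and depends on sample order, which is the kind of limitation an alternative estimator can address.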
Mixing Independent Classifiers, 2006
Abstract

Cited by 1 (1 self)
In this study we deal with the mixing problem, which concerns combining the predictions of independently trained local models into a global prediction. We approach it from the perspective of Learning Classifier Systems, where a set of classifiers provides the local models. Firstly, we formalise the mixing problem and provide both analytical and heuristic approaches to solving it. The analytical approaches are shown not to scale well with the number of local models, but are nevertheless compared to heuristic models in a set of function approximation tasks. These experiments show that we can design heuristics that exceed the performance of the current state-of-the-art Learning Classifier System XCS, and are competitive when compared to analytical solutions. Additionally, we provide an upper bound on the prediction errors for the heuristic mixing approaches.
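The mixing problem described above can be sketched in a few lines: each local model predicts for the same input, and the global prediction is a normalised weighted average. Fitness-weighted averaging in the style of XCS is one such heuristic; the weights and predictions below are illustrative, not the paper's heuristics.

```python
# Sketch of the mixing problem: combine the predictions of independently
# trained local models into one global prediction by normalised weighting.
# The weights (e.g. classifier fitness in XCS) are illustrative.

def mix(predictions, weights):
    """Combine local predictions into a global one by normalised weights."""
    total = sum(weights)
    if total == 0:
        raise ValueError("at least one local model must have positive weight")
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Three local models; the most trusted one dominates the global prediction.
global_prediction = mix([1.0, 2.0, 4.0], [0.5, 0.25, 0.25])
```

The design question the paper studies is precisely how to choose these weights, analytically or heuristically, so that the mixed prediction error is small.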
A Formal Framework for Reinforcement Learning with Function Approximation in Learning Classifier Systems, 2006
Abstract

Cited by 1 (1 self)
To fully understand the properties of Accuracy-based Learning Classifier Systems, we need a formal framework that captures all components of classifier systems, that is, function approximation, reinforcement learning, and classifier replacement, and permits modelling them separately and in their interaction. In this paper we extend our previous work on function approximation [22] to reinforcement learning and to the interaction between reinforcement learning and function approximation. After giving an overview of and derivations for common reinforcement learning methods from first principles, we show how they apply to Learning Classifier Systems. At the same time, we present a new algorithm that is expected to outperform all current methods, discuss the use of XCS with gradient descent and TD(λ), and give an in-depth discussion of how to study the convergence of Learning Classifier Systems with a time-invariant population.
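The setting this framework formalises, reinforcement learning on top of function approximation, can be sketched as one gradient-descent Q-learning step on a linear approximator, where each action's value is a weighted sum of state features. This is a generic textbook sketch, not the paper's new algorithm; the feature vectors, rates, and reward are illustrative.

```python
# Sketch of Q-learning with linear function approximation: each action's
# value is a weighted sum of state features, updated by gradient descent
# on the temporal-difference error.  All numbers are illustrative.

def q_update(weights, phi, action, reward, phi_next, alpha=0.1, gamma=0.9):
    """One gradient-descent Q-learning step on the chosen action's weights."""
    q_next = max(sum(w * x for w, x in zip(wa, phi_next))
                 for wa in weights.values())
    q_sa = sum(w * x for w, x in zip(weights[action], phi))
    delta = reward + gamma * q_next - q_sa        # temporal-difference error
    weights[action] = [w + alpha * delta * x
                       for w, x in zip(weights[action], phi)]
    return delta

# Two actions over a 2-feature state representation, one update step.
weights = {0: [0.0, 0.0], 1: [0.0, 0.0]}
td_error = q_update(weights, [1.0, 0.0], 0, 1.0, [0.0, 1.0])
```

In an LCS, the features would come from the matching classifiers' conditions, which is what makes the interaction between the learning and replacement components non-trivial.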
Abstract
We propose an algorithm for function approximation that evolves a set of hierarchical piecewise linear regressors. The algorithm, named HIRELin, follows the iterative rule learning approach: a genetic algorithm is iteratively called to find a partition of the search space in which a linear regressor can accurately fit the objective function. The resulting rule set approximates the objective function through a hierarchy of locally trained linear regressors. The approach is evaluated on a set of objective functions and compared to other regression techniques.
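The core structure described above, a condition that selects a region of the input space paired with a linear regressor fit only on the matched points, can be sketched as follows. The GA search for partitions is omitted (fixed 1-D intervals stand in for it), and the intervals and data are illustrative, not HIRELin's operators.

```python
# Sketch of the rule structure behind piecewise linear rule learning:
# each rule pairs a condition (here a fixed 1-D interval, standing in for
# the GA-evolved partition) with a least-squares linear regressor fit on
# the points the condition matches.  Data and intervals are illustrative.

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def learn_rules(points, intervals):
    """One regressor per partition cell that matches at least two points."""
    rules = []
    for lo, hi in intervals:
        matched = [(x, y) for x, y in points if lo <= x < hi]
        if len(matched) >= 2:
            xs, ys = zip(*matched)
            rules.append(((lo, hi), fit_linear(xs, ys)))
    return rules

# |x| is piecewise linear, so one regressor per half-line fits it exactly.
data = [(i / 4.0, abs(i / 4.0)) for i in range(-8, 9)]
rules = learn_rules(data, [(-2.0, 0.0), (0.0, 2.1)])
```

In the full algorithm, the quality of such a local fit is what drives the GA's search for good partitions, and rules found in earlier iterations shape the hierarchy applied to later ones.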
Analysis and Improvements of the Classifier Error Estimate in XCSF
Abstract
The estimation of the classifier error plays a key role in accuracy-based learning classifier systems. In this paper we study the current definition of the classifier error in XCSF and discuss the limitations of the algorithm that is currently used to compute the classifier error estimate from online experience. Subsequently, we introduce a new definition for the classifier error and apply the Bayes Linear Analysis framework to find a more accurate and reliable error estimate. This results in two incremental error estimate update algorithms that we compare empirically to the currently applied approach. Our results suggest that the new estimation algorithms can improve the generalization capabilities of XCSF, especially when the action-set subsumption operator is used.
Online, GA-based Mixture of Experts: a Probabilistic Model of UCS
Abstract
In recent years there have been efforts to develop a probabilistic framework to explain the workings of a Learning Classifier System. This direction of research has met with limited success due to the intractability of the complicated heuristic training rules used by learning classifier systems. In this paper, we derive a learning classifier system from a mixture of experts that is similar to the sUpervised Classifier System (UCS) in terms of its training and prediction routines. We start by framing the learning model as a mixture of experts that uses an Expectation Maximisation (EM) procedure to learn its parameters. The batch updates of the EM procedure are then converted into online updates, and finally into a GA-based sampled online update, ending up with a classifier system similar to UCS. We show the effectiveness of such a system as compared to UCS through a series of comparative studies on test datasets.
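The starting point of such a derivation, the E-step of a mixture of experts, can be sketched briefly: gating weights and per-expert likelihoods combine into responsibilities, which the M-step (batch, online, or GA-sampled) then uses to reweight each expert. This is the generic mixture-of-experts computation, not the paper's derivation; the Gaussian experts and all numbers are illustrative.

```python
import math

# Sketch of the mixture-of-experts E-step: gate values and expert
# likelihoods combine into per-expert responsibilities.  Gaussian experts
# and all numbers here are illustrative.

def gaussian(y, mean, var=1.0):
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(gates, likelihoods):
    """Posterior p(expert k | x, y), proportional to gate_k * likelihood_k."""
    joint = [g * l for g, l in zip(gates, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

# Two equally gated experts predicting means 0 and 1 for observation y = 1:
# the second expert explains the data better and takes most responsibility.
r = responsibilities([0.5, 0.5], [gaussian(1.0, 0.0), gaussian(1.0, 1.0)])
```

The intractability mentioned above enters when heuristic rule discovery replaces this clean posterior computation, which is why converting EM into a GA-sampled online update is the interesting step.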
Abstract
Many successful applications have proven the potential of Learning Classifier Systems, and the XCS classifier system in particular, in data mining, reinforcement learning, and function approximation tasks. Recent research has shown that XCS is a highly flexible system, which can be adapted to the task at hand by adjusting its condition structures, learning operators, and prediction mechanisms. However, fundamental theory concerning the scalability of XCS as a function of these enhancements and of problem difficulty is still rather sparse and mainly restricted to Boolean function problems. In this article we develop a learning scalability theory for XCSF, the XCS system applied to real-valued function approximation problems. We determine crucial dependencies on functional properties and on the developed solution representation, and derive a theoretical scalability model from these constraints. The theoretical model is verified with empirical evidence: we show that, given a particular problem difficulty and particular representational constraints, XCSF scales optimally. In consequence, we discuss the importance of appropriate prediction and condition structures for a given problem, and show that scalability can be improved by polynomial orders, given an appropriate, problem-suitable representation.
Towards Convergence of Learning Classifier Systems Value Iteration, 2006 (ISSN 1740-9497)
Abstract
In this paper we extend our previous work on analysing Learning Classifier Systems (LCS) in the reinforcement learning framework [4] to deepen the theoretical analysis of Value Iteration with LCS function approximation. After introducing our formal framework and some mathematical preliminaries, we demonstrate convergence of the algorithm for fixed classifier mixing weights, and show that if the weights are not fixed, the choice of the mixing function is significant. Furthermore, we discuss accuracy-based mixing and outline a proof that shows convergence of LCS Value Iteration with accuracy-based classifier mixing. This work is a significant step towards convergence of accuracy-based LCS that use Q-Learning as the reinforcement learning component.
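The two ingredients whose interaction this analysis studies, the Bellman backup of value iteration and a fixed-weight mixing of local value estimates, can be sketched on a toy problem. The 2-state MDP, the weights, and the rewards below are illustrative, not the paper's construction.

```python
# Sketch of value iteration combined with fixed-weight mixing of local
# (per-classifier) value estimates.  The 2-state MDP and all numbers are
# illustrative, not the paper's construction.

GAMMA = 0.9
REWARDS = [[0.0, 1.0], [0.0, 2.0]]  # REWARDS[s][a]; action a moves to state a

def backup(v):
    """One sweep of value iteration; a gamma-contraction toward V*."""
    return [max(REWARDS[s][a] + GAMMA * v[a] for a in (0, 1)) for s in (0, 1)]

def mix_values(estimates, weights):
    """Fixed-weight mixing of per-classifier value estimates, state by state."""
    return [sum(w * e[s] for w, e in zip(weights, estimates))
            for s in range(len(estimates[0]))]

# Both local estimates track the backup exactly here, so iterating on the
# mixed estimate reduces to standard value iteration and converges to V*.
v_a, v_b = [0.0, 0.0], [0.0, 0.0]
for _ in range(300):
    v = mix_values([v_a, v_b], [0.3, 0.7])
    v_a = v_b = backup(v)
v = mix_values([v_a, v_b], [0.3, 0.7])
```

The interesting cases the paper addresses are exactly those this sketch sidesteps: local estimates that disagree, and mixing weights that change with classifier accuracy during learning.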