Results 1–10 of 2,637
Ensemble Learning, 2011
Cited by 9 (0 self)
"... This note presents a chronological review of the literature on ensemble learning which has accumulated over the past twenty years. The idea of ensemble learning is to employ multiple learners and combine their predictions. If we have a committee of M models with uncorrelated errors, simply by averaging ..."
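The committee result mentioned in this snippet can be checked numerically (a simulation sketch, not code from the cited note; the Gaussian noise model and committee size are illustrative assumptions): averaging M models whose errors are zero-mean and uncorrelated reduces the expected squared error by roughly a factor of M.

```python
import random
import statistics

random.seed(0)
M, n = 10, 20_000  # committee size, number of test points (illustrative)

# Each committee member predicts the truth (taken as 0) plus its own
# independent, zero-mean noise, so errors are uncorrelated across members.
errors = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(M)]

mse_single = statistics.fmean(e * e for e in errors[0])
committee = [statistics.fmean(errors[m][i] for m in range(M)) for i in range(n)]
mse_committee = statistics.fmean(e * e for e in committee)

# mse_committee comes out close to mse_single / M, as the averaging
# argument predicts for uncorrelated errors.
print(f"single: {mse_single:.3f}  committee of {M}: {mse_committee:.3f}")
```

With correlated errors the gain shrinks, which is why diversity among committee members matters in practice.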
Ensemble Learning, 2000
Cited by 70 (2 self)
"... Introduction. When we say we are making a model of a system, we are setting up a tool which can be used to make inferences, predictions and decisions. Each model can be seen as a hypothesis, or explanation, which makes assertions about the quantities which are directly observable and which can only be inferred from their effect on observable quantities. In the Bayesian framework, knowledge is contained in the conditional probability distributions of the models. We can use Bayes' theorem to evaluate the conditional probability distributions for the unknown parameters, y, given the set of observed quantities, x, using

    p(y|x) = p(x|y) p(y) / p(x)    (1)

The prior distribution p(y) contains our knowledge of the unknown variables before we make any observations ..."
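Equation (1) can be made concrete with a minimal discrete example (the values are invented for illustration, not from the cited paper): two hypotheses y in {0, 1} and a single observation x.

```python
prior = {0: 0.5, 1: 0.5}        # p(y): knowledge before any observation
likelihood = {0: 0.2, 1: 0.8}   # p(x|y) for the observed x (assumed values)

# p(x) = sum over y of p(x|y) p(y), the normalizing evidence
evidence = sum(likelihood[y] * prior[y] for y in prior)

# Bayes' theorem, equation (1): p(y|x) = p(x|y) p(y) / p(x)
posterior = {y: likelihood[y] * prior[y] / evidence for y in prior}
print(posterior)  # posterior mass shifts toward the hypothesis explaining x
```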
Ensemble Learning and Evidence Maximization
Proc. NIPS, 1995
Cited by 22 (1 self)
"... Ensemble learning by variational free energy minimization is a tool introduced to neural networks by Hinton and van Camp in which learning is described in terms of the optimization of an ensemble of parameter vectors. The optimized ensemble is an approximation to the posterior probability distribution ..."
MultiStrategy Ensemble Learning: Reducing Error by Combining Ensemble Learning Techniques
IEEE Transactions on Knowledge and Data Engineering, 2004
Cited by 26 (0 self)
"... Ensemble learning strategies, especially Boosting and Bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. In this paper, we ..."
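Bagging, one of the two strategies named here, can be sketched in a few lines (a toy illustration on assumed data, using a hypothetical one-threshold "stump" learner as the base model, not the paper's method): train each base model on a bootstrap resample and combine by majority vote.

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_fit(X, y):
    # Hypothetical weak learner: threshold at the median of the resample.
    t = np.median(X)
    flip = np.mean((X > t).astype(int) == y) < 0.5
    return t, flip

def stump_predict(model, X):
    t, flip = model
    pred = (X > t).astype(int)
    return 1 - pred if flip else pred

# Toy 1-D data: the label is 1 when x > 0, corrupted by 10% label noise.
X = rng.normal(size=500)
y = (X > 0).astype(int) ^ (rng.random(500) < 0.1)

# Bagging: fit each stump on a bootstrap resample, then majority-vote.
models = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    models.append(stump_fit(X[idx], y[idx]))

votes = np.mean([stump_predict(m, X) for m in models], axis=0)
accuracy = np.mean((votes > 0.5).astype(int) == y)
print(f"bagged accuracy: {accuracy:.2f}")
```

Boosting would instead reweight training points sequentially; combining the two families is the subject of the paper.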
Ensemble Methods in Machine Learning
MULTIPLE CLASSIFIER SYSTEMS, LBCS-1857, 2000
Cited by 625 (3 self)
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting ..."
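The "(weighted) vote" combination rule described here is simple to state in code (a generic sketch, not a specific algorithm from the chapter; the labels and weights are made up):

```python
from collections import Counter

def weighted_vote(predictions, weights):
    # Sum each classifier's weight behind the label it predicted,
    # then return the label with the largest total weight.
    tally = Counter()
    for label, weight in zip(predictions, weights):
        tally[label] += weight
    return tally.most_common(1)[0][0]

# Three classifiers: two say "a", but the highly weighted one says "b".
print(weighted_vote(["a", "b", "a"], [0.5, 2.0, 0.6]))  # -> b
```

With equal weights this reduces to ordinary majority voting.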
Ensemble learning for independent component analysis
IN ADVANCES IN INDEPENDENT COMPONENT ANALYSIS, 2000
Cited by 59 (3 self)
"... This thesis is concerned with the problem of Blind Source Separation. Specifically, we consider the Independent Component Analysis (ICA) model, in which a set of observations are modelled by

    x_t = A s_t    (1)

where A is an unknown mixing matrix and s_t is a vector of hidden source components at time t. The ICA problem is to find the sources given only a set of observations. In chapter 1, the blind source separation problem is introduced. In chapter 2 the method of Ensemble Learning is explained. Chapter 3 applies Ensemble Learning to the ICA model and chapter 4 assesses the use of Ensemble Learning ..."
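The generative model x_t = A s_t from this snippet is easy to simulate (a sketch with an invented 2x2 mixing matrix; the actual unmixing requires an ICA algorithm and is not shown here):

```python
import numpy as np

rng = np.random.default_rng(2)

T = 1000
s = rng.laplace(size=(2, T))   # two independent, non-Gaussian hidden sources
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])     # mixing matrix, unknown to the separator
x = A @ s                      # observations x_t = A s_t for t = 1..T

# Sanity check: with A known, the sources are recoverable exactly;
# the ICA problem is to manage without knowing A.
recovered = np.linalg.solve(A, x)
print(np.allclose(recovered, s))  # -> True
```

Non-Gaussian sources (Laplace here) are essential: for Gaussian sources the mixing is not identifiable beyond a rotation.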
Ensemble learning Ordered aggregation, 2014
Keywords: Ensemble margin; Classification confidence
"... Ensemble learning has attracted considerable attention owing to its good generalization performance. The main issues in constructing a powerful ensemble include training a set of diverse and accurate base classifiers, and effectively combining ..."
Multilabel ensemble learning
In ECML/PKDD, 2011
Cited by 5 (2 self)
"... Multilabel learning aims at predicting potentially multiple labels for a given instance. Conventional multilabel learning approaches focus on exploiting the label correlations to improve the accuracy of the learner by building an individual multilabel learner or a combined learner based upon a group of single-label learners. However, the generalization ability of such an individual learner can be weak. It is well known that ensemble learning can effectively improve the generalization ability of learning systems by constructing multiple base learners and the performance ..."
Ensemble Learning for Hidden Markov Models, 1997
Cited by 94 (0 self)
"... The standard method for training Hidden Markov Models optimizes a point estimate of the model parameters. This estimate, which can be viewed as the maximum of a posterior probability density over the model parameters, may be susceptible to overfitting, and contains no indication of parameter uncertainty. Also, this maximum may be unrepresentative of the posterior probability distribution. In this paper we study a method in which we optimize an ensemble which approximates the entire posterior probability distribution. The ensemble learning algorithm requires the same resources as the traditional ..."
Cooperative Coevolutionary Ensemble Learning
"... A new optimization technique is proposed for classifier fusion — Cooperative Coevolutionary Ensemble Learning (CCEL). It is based on a specific multipopulational evolutionary algorithm — cooperative coevolution. It can be used as a wrapper over any kind of weak algorithms, learning procedures ..."