Results 1–10 of 26,598
Inference for the Generalization Error
, 2001
"... In order to compare learning algorithms, experimental results reported in the machine learning literature often use statistical tests of significance to support the claim that a new learning algorithm generalizes better. Such tests should take into account the variability due to the choice of training ..."
Cited by 184 (3 self)
of the variance of a cross-validation estimator of the generalization error that takes into account the variability due to the randomness of the training set as well as test examples. Our analysis shows that all the variance estimators that are based only on the results of the cross-validation experiment must
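The estimator the abstract critiques can be sketched as follows: a plain k-fold cross-validation run, with the fold-to-fold sample variance as the naive variance estimate. The learner and data below are deliberately trivial illustrations, not the paper's setup.

```python
import random

def kfold_cv_errors(data, k, fit, predict):
    """Plain k-fold cross-validation; returns the per-fold error rates."""
    data = list(data)
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    per_fold = []
    for i in range(k):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        model = fit(train)
        errs = [predict(model, x) != y for x, y in folds[i]]
        per_fold.append(sum(errs) / len(errs))
    return per_fold

# Deliberately trivial learner: always predict the training majority label.
def fit(train):
    return 1 if sum(y for _, y in train) * 2 >= len(train) else 0

def predict(model, x):
    return model

random.seed(0)
data = [(random.random(), random.randint(0, 1)) for _ in range(100)]
per_fold = kfold_cv_errors(data, 5, fit, predict)
mean_err = sum(per_fold) / len(per_fold)
# Naive estimator: sample variance across folds. It observes only this one
# training set, which is exactly the variability the abstract says it misses.
naive_var = sum((e - mean_err) ** 2 for e in per_fold) / (len(per_fold) - 1)
```

Because every fold shares most of its training data with every other fold, the spread of `per_fold` understates how much the estimate would change under a fresh training set, which is the paper's point.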
On the Training Error and Generalization Error of
"... In this article, we analyzed the expected training error and the expected generalization error for neural networks in the unidentifiable case, in which a set of output data is assumed to be a Gaussian noise sequence. Firstly, the results on the bounds of the expected training error and the expected ..."
Stacked generalization
 NEURAL NETWORKS
, 1992
"... This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second sp ..."
Cited by 731 (9 self)
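The two-level scheme can be sketched as: level-0 generalizers predict folds they were not trained on, and a level-1 generalizer is fit in that "second space" of held-out predictions. The two base learners and the data below are toy assumptions, not the paper's.

```python
import random

def out_of_fold_preds(data, k, learners):
    """Level-0 step: each learner predicts the examples it was not trained on."""
    folds = [data[i::k] for i in range(k)]
    rows = []  # (level-0 predictions, true y) pairs -- the "second space"
    for i in range(k):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        models = [fit(train) for fit in learners]
        rows += [([m(x) for m in models], y) for x, y in folds[i]]
    return rows

# Two toy level-0 generalizers (illustrative only).
def fit_mean(train):
    mu = sum(y for _, y in train) / len(train)
    return lambda x, mu=mu: mu

def fit_prop(train):  # least-squares line through the origin
    c = sum(x * y for x, y in train) / (sum(x * x for x, _ in train) or 1.0)
    return lambda x, c=c: c * x

random.seed(1)
data = [(i / 20, 2.0 * (i / 20) + random.gauss(0.0, 0.1)) for i in range(1, 41)]
rows = out_of_fold_preds(data, 5, [fit_mean, fit_prop])

# Level-1 generalizer: one least-squares blending weight w on the held-out
# predictions, i.e. y_hat = p0 + w * (p1 - p0). Fitting w on held-out rather
# than resubstitution predictions is what "deduces the biases" of level 0.
num = sum((p1 - p0) * (y - p0) for (p0, p1), y in rows)
den = sum((p1 - p0) ** 2 for (p0, p1), y in rows) or 1.0
w = num / den  # lands near 1 here: the proportional learner has the right bias
```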
Minimum Error Rate Training in Statistical Machine Translation
, 2003
"... Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training cri ..."
Cited by 757 (7 self)
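The criterion shift the abstract describes, tuning weights against the error metric itself rather than likelihood, can be sketched with made-up candidate lists and a single feature weight; the paper's exact line search is replaced by a coarse grid here.

```python
# Each source sentence has a list of candidate translations; a candidate is
# (feature1, feature2, error_count). All numbers are invented for illustration.
sentences = [
    [(1.0, 0.2, 3), (0.8, 0.9, 0), (1.2, 0.1, 2)],
    [(0.5, 0.7, 1), (0.9, 0.3, 4), (0.4, 1.0, 0)],
]

def total_error(lam):
    """Error of the 1-best candidates under the score f1 + lam * f2."""
    err = 0
    for cands in sentences:
        best = max(cands, key=lambda c: c[0] + lam * c[1])
        err += best[2]
    return err

# Pick the weight that minimizes translation error directly, instead of a
# likelihood-style criterion that is only loosely tied to final quality.
best_lam = min([l / 10 for l in range(51)], key=total_error)
```

Note that `total_error` is piecewise constant in the weight, which is why the paper needs a specialized line search rather than gradient methods.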
Generalization Error of Combined Classifiers
 Journal of Computer and System Sciences
, 1997
"... this paper we present an upper bound on the generalization error of any thresholded convex combination of functions which are themselves thresholded convex combinations of functions in terms of the margin and the average complexity of the combined functions. Furthermore, by considering a single hidd ..."
Cited by 4 (0 self)
Solving multiclass learning problems via error-correcting output codes
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1995
"... Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass l ..."
Cited by 726 (8 self)
output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range
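The distributed output representation works roughly as follows: each class gets a binary codeword, one binary learner is trained per bit column, and a new example is assigned the class whose codeword is nearest in Hamming distance. The codewords below are hand-picked for illustration (minimum distance 3, so one bit error is corrected); the paper studies principled error-correcting codes.

```python
# One codeword per class; a separate binary classifier would predict each bit.
codes = {
    0: (0, 0, 1, 1, 0),
    1: (0, 1, 0, 1, 1),
    2: (1, 0, 0, 0, 1),
    3: (1, 1, 1, 0, 0),
}

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def decode(bits):
    """Assign the class whose codeword is nearest to the predicted bit string."""
    return min(codes, key=lambda c: hamming(codes[c], bits))

# Class 0's codeword with its first bit flipped: one bit-classifier may err,
# yet nearest-codeword decoding still recovers the correct class.
corrupted = (1, 0, 1, 1, 0)
```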
Random forests
 Machine Learning
, 2001
"... Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the fo ..."
Cited by 3613 (2 self)
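The two randomizations the abstract names, a bootstrap sample per tree plus an independently sampled random vector (here, a random feature subset), can be sketched with depth-1 trees for brevity; everything below is a toy illustration, not Breiman's full procedure.

```python
import random

def majority(ys):
    """Majority label, defaulting to 0 on an empty split."""
    return 1 if sum(ys) * 2 >= max(len(ys), 1) else 0

def fit_stump(data, feat_ids):
    """Best single-feature threshold split over a random feature subset."""
    best = None
    for f in feat_ids:
        for xv, _ in data:
            t = xv[f]
            ml = majority([y for x, y in data if x[f] <= t])
            mr = majority([y for x, y in data if x[f] > t])
            err = sum((ml if x[f] <= t else mr) != y for x, y in data)
            if best is None or err < best[0]:
                best = (err, f, t, ml, mr)
    _, f, t, ml, mr = best
    return lambda x, f=f, t=t, ml=ml, mr=mr: ml if x[f] <= t else mr

def random_forest(data, n_trees, n_feats):
    """Bagged stumps on random feature subsets, combined by majority vote."""
    trees = []
    n_dims = len(data[0][0])
    for _ in range(n_trees):
        boot = [random.choice(data) for _ in data]     # bootstrap sample
        feats = random.sample(range(n_dims), n_feats)  # random subspace
        trees.append(fit_stump(boot, feats))
    return lambda x: 1 if sum(t(x) for t in trees) * 2 >= len(trees) else 0

random.seed(0)
xs = [tuple(random.random() for _ in range(3)) for _ in range(40)]
data = [(x, int(x[0] > 0.5)) for x in xs]  # label depends on feature 0 only
forest = random_forest(data, n_trees=25, n_feats=2)
train_acc = sum(forest(x) == y for x, y in data) / len(data)
```

The almost-sure convergence claim in the abstract concerns the vote as `n_trees` grows, since each tree is an i.i.d. draw given the data.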
A GENERALIZED ERROR DISTRIBUTION
"... ABSTRACT. We review the properties of a univariate probability distribution that is a possible candidate for the description of financial market price changes. This distribution is an “error” distribution that represents a generalized form of the Normal, possesses a natural multivariate form, has ..."
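The family being reviewed appears to be the generalized normal (generalized error) distribution; a sketch of its standard density follows, with parameter names (`mu`, `alpha`, `beta`) chosen here for illustration rather than taken from the paper.

```python
import math

def gen_normal_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """Generalized error (generalized normal) density:
    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta)
    beta = 2 recovers the Normal family; beta < 2 yields the heavier tails
    often proposed for financial returns.
    """
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x - mu) / alpha) ** beta))

def std_normal_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# With beta = 2 and alpha = sqrt(2), the density reduces to the standard Normal.
```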
Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms
 IEEE Transactions on Information Theory
, 2005
"... Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems t ..."
Cited by 585 (13 self)
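On a tree-structured factor graph, sum-product message passing reproduces the exact marginals; the paper's contribution concerns loopy graphs, where BP fixed points instead correspond to stationary points of free energy approximations. A minimal sketch on a three-variable chain, with illustrative factor tables, checked against brute-force marginalization:

```python
import itertools

# Binary chain x1 - x2 - x3 with two pairwise factors (numbers are made up).
psi12 = [[1.0, 0.5], [0.5, 2.0]]  # factor on (x1, x2)
psi23 = [[2.0, 1.0], [1.0, 0.5]]  # factor on (x2, x3)

# Sum-product messages into x2: m_{1->2}(x2) = sum_{x1} psi12[x1][x2], etc.
m12 = [sum(psi12[a][b] for a in (0, 1)) for b in (0, 1)]
m32 = [sum(psi23[b][c] for c in (0, 1)) for b in (0, 1)]

# Belief at x2: normalized product of incoming messages.
belief = [m12[b] * m32[b] for b in (0, 1)]
z = sum(belief)
belief = [v / z for v in belief]

# Brute-force marginal for comparison; on a tree the two agree exactly.
joint = {s: psi12[s[0]][s[1]] * psi23[s[1]][s[2]]
         for s in itertools.product((0, 1), repeat=3)}
zz = sum(joint.values())
brute = [sum(p for s, p in joint.items() if s[1] == b) / zz for b in (0, 1)]
```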
A proximal point algorithm converging strongly for general errors
 Optimization Letters
"... for general errors ..."