Results 11 – 20 of 156
MDL Convergence Speed for Bernoulli Sequences
, 2006
"... The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We ..."
Abstract
Cited by 8 (3 self)
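The contrast this abstract draws, between a two-part MDL estimator and a Bayes mixture over a discrete class of Bernoulli models, can be sketched in a few lines. The parameter grid, the uniform prior, and the cumulative square loss below are illustrative choices, not the paper's exact construction:

```python
import math
import random

def mdl_vs_bayes(theta_true=0.35, n=500, seed=0):
    """Compare two-part MDL and Bayes-mixture next-bit predictions for a
    Bernoulli source over a discrete model class (an illustrative sketch)."""
    rng = random.Random(seed)
    thetas = [k / 20 for k in range(1, 20)]   # discrete class of Bernoulli models
    w = [1 / len(thetas)] * len(thetas)       # prior weights (code length -log w_k)
    ones = 0
    sq_loss_mdl = sq_loss_bayes = 0.0
    for t in range(n):
        # log-likelihood of the data seen so far under each model
        loglik = [ones * math.log(th) + (t - ones) * math.log(1 - th)
                  for th in thetas]
        # Two-part MDL: minimise total code length -log w_k - log P_k(data),
        # then predict with the selected model
        k_mdl = min(range(len(thetas)),
                    key=lambda k: -math.log(w[k]) - loglik[k])
        p_mdl = thetas[k_mdl]
        # Bayes mixture: posterior-weighted prediction (shift for stability)
        mx = max(loglik)
        post = [w[k] * math.exp(loglik[k] - mx) for k in range(len(thetas))]
        z = sum(post)
        p_bayes = sum(post[k] * thetas[k] for k in range(len(thetas))) / z
        sq_loss_mdl += (p_mdl - theta_true) ** 2
        sq_loss_bayes += (p_bayes - theta_true) ** 2
        ones += rng.random() < theta_true
    return sq_loss_mdl, sq_loss_bayes
```

With the true parameter contained in the class, both cumulative square losses stay bounded as n grows, in line with claim (a) of the abstract; the MDL loss is typically the larger of the two, reflecting the weaker loss bounds the paper proves.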
Statistical Analysis of Regularization Constant: From Bayes, MDL and NIC Points of View
"... In order to avoid overfitting in neural learning, a regularization term is added to the loss function to be minimized. It is naturally derived from the Bayesian standpoint. The present paper studies how to determine the regularization constant from the points of view of the empirical Bayes approach, the minimum description length (MDL) approach, and the network information criterion (NIC) approach. The asymptotic statistical analysis is given to elucidate their differences. These approaches are tightly connected with the method of model selection. The superiority of the NIC is shown from this analysis. ..."
Abstract
Cited by 1 (0 self)
Region Competition: Unifying Snakes, Region Growing, Energy/Bayes/MDL for Multiband Image Segmentation
"... We present a novel statistical and variational approach to image segmentation based on a new algorithm named region competition. This algorithm is derived by minimizing a generalized Bayes/MDL (Minimum Description Length) criterion ..."
Abstract
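The idea of regions competing under a Bayes/MDL criterion can be illustrated on a toy 1-D signal. The Gaussian region model, the fixed per-region penalty, and the exhaustive boundary search below are simplifying assumptions; the paper's algorithm operates on multiband images with iterative boundary motion:

```python
import math
import random

def region_competition_1d(signal, penalty=2.0):
    """Toy 1-D analogue of a Bayes/MDL region criterion (a sketch, not the
    paper's algorithm): place one boundary b so that the description length
        sum over regions of n_i/2 * log(var_i) + penalty per region
    is minimal, each region being modelled as a Gaussian."""
    def cost(seg):
        n = len(seg)
        mu = sum(seg) / n
        var = sum((x - mu) ** 2 for x in seg) / n + 1e-9   # guard against log(0)
        return 0.5 * n * math.log(var) + penalty

    # Exhaustive search over boundaries, keeping both regions non-trivial
    return min(range(2, len(signal) - 2),
               key=lambda b: cost(signal[:b]) + cost(signal[b:]))

random.seed(1)
sig = ([random.gauss(0.0, 0.1) for _ in range(30)] +
       [random.gauss(1.0, 0.1) for _ in range(30)])
boundary = region_competition_1d(sig)   # lands near index 30
```

A misplaced boundary forces one Gaussian to cover samples from both levels, inflating its variance term; the criterion therefore favours the true change point.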
Convergence of Discrete MDL for Sequential Prediction
, 2004
"... We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to all algorithms for some fixed universal Turing machine (this correspondence is by enumerable semimeasures, hence the resulting models are stochastic). We prove convergence theorems similar to Solomonoff’s theorem of universal induction, which also holds for general Bayes mixtures. The bound ..."
Abstract
MDL convergence speed for Bernoulli sequences
 Statistics and Computing
, 2006
"... The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We ..."
Abstract
Skeleton-based Region Competition for Automated Gray Matter and White Matter Segmentation of Human Brain MR Images
"... Image segmentation is an essential process for quantitative analysis. Segmentation of brain tissues in magnetic resonance (MR) images is very important for understanding the structural-functional relationship for various pathological conditions, such as dementia vs. normal brain aging. Different bra ... generalized Bayes/MDL criterion. However, it is sensitive to initial conditions – the “seeds”; therefore an optimal choice of “seeds” is necessary for accurate segmentation. In this paper, we present a new skeleton-based region competition algorithm for automated gray and white matter segmentation. Skeletons ..."
Abstract
MDL, Penalized Likelihood, and Statistical Risk
"... We determine, for both countable and uncountable collections of functions, information-theoretic conditions on a penalty pen(f) such that the optimizer f̂ of the penalized log likelihood criterion log 1/likelihood(f) + pen(f) has risk not more than the index of resolvability corresponding to the accuracy of the optimizer of the expected value of the criterion. If F is the linear span of a dictionary of functions, traditional description-length penalties are based on the number of nonzero terms (the ℓ0 norm of the coefficients). We specialize our general conclusions to show the ℓ1 norm ..."
Abstract
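The criterion log 1/likelihood(f) + pen(f) from this abstract can be sketched for a countable class of Bernoulli models. The particular Kraft-style penalty pen(f_k) = log(k+2) + 2 log log(k+2) is a hypothetical choice for illustration, not the penalty the paper analyzes:

```python
import math

def penalized_ml(data, models):
    """Pick f minimising  log 1/likelihood(f) + pen(f)  over a countable
    class of Bernoulli models, as in the abstract's criterion.  The
    Kraft-style penalty below is a hypothetical illustrative choice."""
    ones, n = sum(data), len(data)

    def neg_loglik(p):          # log 1/likelihood(p) for Bernoulli data
        return -(ones * math.log(p) + (n - ones) * math.log(1 - p))

    def pen(k):                 # summable code-length-style penalty on the index
        return math.log(k + 2) + 2 * math.log(math.log(k + 2))

    return min(enumerate(models),
               key=lambda kp: neg_loglik(kp[1]) + pen(kp[0]))[1]

data = [1] * 30 + [0] * 70
f_hat = penalized_ml(data, [i / 10 for i in range(1, 10)])  # -> 0.3
```

The abstract's risk bound then says the statistical risk of f̂ is controlled by the index of resolvability, i.e. the best achievable trade-off over the class between approximation error and the (normalized) penalty.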
On the Convergence Speed of MDL Predictions for Bernoulli Sequences
, 2004
"... We consider the Minimum Description Length principle for online sequence prediction. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is bounded, implying convergence with probability one, and (b) it additionally specifies a rate of convergence. Generally, for MDL only exponential loss bounds hold, as opposed to the linear bounds for a Bayes mixture. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable ..."
Abstract
Cited by 9 (8 self)
THE MDL PRINCIPLE, PENALIZED LIKELIHOODS, AND STATISTICAL RISK
"... We determine, for both countable and uncountable collections of functions, information-theoretic conditions on a penalty pen(f) such that the optimizer f̂ of the penalized log likelihood criterion log 1/likelihood(f) + pen(f) has statistical risk not more than the index of resolvability co ..."
Abstract
Cited by 4 (3 self)