Results 1–10 of 67
MDL Procedures with ℓ1 Penalty and their Statistical Risk
"... We review recently developed theory for the Minimum Description Length principle, penalized likelihood, and its statistical risk. An information-theoretic condition on a penalty pen(f) yields the conclusion that the optimizer of the penalized log-likelihood criterion log 1/likelihood(f) + pen(f) has risk not more than the index of resolvability, corresponding to the accuracy of the optimizer of the expected value of the criterion. For the linear span of a dictionary of candidate terms, we develop the validity of description-length penalties based on the ℓ1 norm of the coefficients. New ..."
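The criterion this abstract describes can be made concrete with a small numerical sketch (an illustration only, not the paper's procedure; the data, λ scaling, and noise level below are invented): for Gaussian regression, log 1/likelihood(f) is the negative log-likelihood of the residuals, and an ℓ1 penalty charges λ per unit of coefficient mass, so spurious nonzero coefficients must buy enough likelihood to pay their penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian regression: y = X @ beta_true + noise, with known sigma.
n, p, sigma = 200, 5, 1.0
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + sigma * rng.standard_normal(n)

def criterion(beta, lam):
    """Penalized log-likelihood: log(1/likelihood) + pen, pen = lam * ||beta||_1."""
    resid = y - X @ beta
    neg_log_lik = 0.5 * n * np.log(2 * np.pi * sigma**2) + resid @ resid / (2 * sigma**2)
    return neg_log_lik + lam * np.sum(np.abs(beta))

lam = np.sqrt(2 * n * np.log(p))  # one common sqrt(n log p)-scale choice of lambda

sparse = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # matches the true support
dense = np.array([2.0, 0.3, -1.5, 0.2, -0.1])   # adds spurious small coefficients
print(criterion(sparse, lam), criterion(dense, lam))
```

On this toy data the sparse candidate scores lower: the small spurious coefficients reduce the residual only slightly while paying 0.6·λ in penalty.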
THE MDL PRINCIPLE, PENALIZED LIKELIHOODS, AND STATISTICAL RISK
"... We determine, for both countable and uncountable collections of functions, information-theoretic conditions on a penalty pen(f) such that the optimizer f̂ of the penalized log-likelihood criterion log 1/likelihood(f) + pen(f) has statistical risk not more than the index of resolvability, corresponding to the accuracy of the optimizer of the expected value of the criterion. If F is the linear span of a dictionary of functions, traditional description-length penalties are based on the number of nonzero terms of candidate fits (the ℓ0 norm of the coefficients), as we review. We specialize our ..."
Cited by 4 (3 self)
IEEE TRANSACTIONS ON IMAGE PROCESSING: Quadtree Structured Image Approximation for Denoising
"... The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. In [3] Shukla et al. proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In this paper we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. Moreover, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address ..."
Coil sensitivity encoding for fast MRI. In: Proceedings of the ISMRM 6th Annual Meeting, 1998
"... New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementary ... of this section gives a practical description of the Cartesian case. The following parts are dedicated to general theory, SNR and error considerations, and sensitivity assessment. Sensitivity Encoding With Cartesian Sampling of k-Space: In two-dimensional (2D) Fourier imaging with common Cartesian sampling of k-space ..."
Cited by 193 (3 self)
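The core Cartesian-SENSE step sketched in this abstract — unfolding R superimposed pixel values from R (or more) coil samples using known coil sensitivities — reduces to a small linear solve per aliased pixel. A toy sketch with invented sensitivity values (the paper's full formulation also weights by the receiver noise covariance, which is omitted here):

```python
import numpy as np

# Acceleration factor R = 2 with two coils: each coil sees an aliased pixel
# that is the sensitivity-weighted sum of two true pixel values.
v_true = np.array([1.0, 0.4])   # true image values at the two superimposed positions
S = np.array([[0.9, 0.3],       # coil-1 sensitivity at the two positions
              [0.2, 0.8]])      # coil-2 sensitivity at the two positions
a = S @ v_true                  # aliased value observed by each coil

# Unfold: recover both pixel values by (least-squares) inversion of S.
v_hat = np.linalg.lstsq(S, a, rcond=None)[0]
print(v_hat)
```

With more coils than the acceleration factor, S becomes tall and the same least-squares solve averages down noise, which is where the SNR considerations in the abstract enter.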
Minimum Description Length and Cognitive Modeling
"... The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. In the field of cognitive science, mathematical models are increasingly being advanced as explanations of cognitive behavior. In the application of Minimum Description Length (MDL) ..."
Boosting Classifiers with Tightened L0-Relaxation Penalties
"... We propose a novel boosting algorithm which improves on current algorithms for weighted voting classification by striking a better balance between classification accuracy and the sparsity of the weight vector. In order to justify our optimization formulations, we first consider a novel integer linear ... selection of parameters using a minimum description length, compression interpretation of learning. ..."
Minimum Description Length Principle for Linear Mixed Effects Models
"... The minimum description length (MDL) principle originated from the data compression literature and has been considered for deriving statistical model selection procedures. Most of the existing methods that use the MDL principle focus on models with independent data, particularly in the context ..."
The flexibility of models of recognition memory: An analysis by the minimum description length principle. Journal of Mathematical Psychology, 2011
"... Ten continuous, discrete, and hybrid models of recognition memory are considered in the traditional paradigm with manipulation of response bias via base rates or payoff schedules. We present an efficient method for computing the Fisher information approximation (FIA) to the normalized maximum likelihood index (NML) for these models, and a relatively efficient method for computing NML itself. This leads to a comparative evaluation of the complexity of the different models from the minimum-description-length perspective. Furthermore, we evaluate the goodness of the approximation ..."
Cited by 2 (1 self)
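For reference, the FIA mentioned in this abstract is usually written in Rissanen's standard form, for a model with k free parameters, n observations, and unit Fisher information matrix I(θ); the paper's contribution is an efficient way to evaluate it (notably the integral term) for these memory models:

```latex
\mathrm{FIA} \;=\; -\ln f\!\left(x \mid \hat{\theta}\right)
  \;+\; \frac{k}{2}\ln\frac{n}{2\pi}
  \;+\; \ln \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta
```

The first term is the maximized log-likelihood fit; the remaining two terms measure model complexity, which is how the comparison of model flexibility in the abstract is carried out.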
Model Selection for Sinusoids in Noise: Statistical Analysis and a New Penalty Term
"... Detection of the number of sinusoids embedded in noise is a fundamental problem in statistical signal processing. Most parametric methods minimize the sum of a data fit (likelihood) term and a complexity penalty term. The latter is often derived via information-theoretic criteria, such as minimum description length (MDL), or via Bayesian approaches including the Bayesian information criterion (BIC) or maximum a posteriori (MAP). While the resulting estimators are asymptotically consistent, empirically their finite-sample performance is strongly dependent on the specific penalty term chosen ..."
Cited by 6 (0 self)
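As a generic illustration of the kind of criterion this abstract discusses (not the paper's new penalty term; the signal, the bin-ranking heuristic, and the textbook count of 3 parameters per real sinusoid below are assumptions), one can rank DFT bins by periodogram power and choose the sinusoid count that minimizes an MDL/BIC-style cost:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)

# Two real sinusoids (at DFT bin frequencies) in white Gaussian noise.
signal = 3.0 * np.cos(2 * np.pi * 5 * t / n) + 2.0 * np.sin(2 * np.pi * 12 * t / n)
y = signal + rng.standard_normal(n)

# Rank DFT bins by periodogram power; each retained bin adds a cos/sin pair.
power = np.abs(np.fft.rfft(y)[1 : n // 2]) ** 2
bins = 1 + np.argsort(power)[::-1]

def mdl(k):
    """MDL/BIC-style cost for the k strongest candidate sinusoids:
    (n/2) log(RSS/n) + (#params/2) log n, using the concentrated Gaussian
    likelihood and 3 parameters (amplitude, phase, frequency) per sinusoid."""
    if k == 0:
        rss = y @ y
    else:
        f = bins[:k]
        A = np.concatenate([np.cos(2 * np.pi * np.outer(t, f) / n),
                            np.sin(2 * np.pi * np.outer(t, f) / n)], axis=1)
        coef = np.linalg.lstsq(A, y, rcond=None)[0]
        rss = np.sum((y - A @ coef) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * (3 * k) * np.log(n)

best = min(range(6), key=mdl)
print(best)
```

The data-fit term keeps falling as k grows, so the penalty term alone stops the criterion from overfitting — which is exactly why, as the abstract notes, finite-sample behavior hinges on the penalty chosen.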
A Source Coding Approach to Classification by Vector Quantization and the Principle of Minimum Description Length
"... An algorithm for supervised classification using vector quantization and entropy coding is presented. The classification rule is formed from a set of training data {(X_i, Y_i)}_{i=1}^n, which are independent samples from a joint distribution P_XY. Based on the principle of Minimum Description Length (MDL) ..."
Cited by 4 (0 self)
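A stripped-down sketch of classification by vector quantization, in the spirit of this abstract: fit one codebook per class (here with plain Lloyd's k-means) and assign a test point to the class whose codebook quantizes it with the smallest error. The entropy coding and MDL-based codebook sizing that the paper develops are omitted, and all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: return a k-codeword codebook for X."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Two well-separated 2-D classes; one small codebook per class.
X0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(200, 2))
books = [kmeans(X0, 4), kmeans(X1, 4)]

def classify(x):
    # Decision rule: class whose codebook contains the nearest codeword,
    # i.e. smallest quantization error for x.
    return int(np.argmin([((b - x) ** 2).sum(-1).min() for b in books]))

print(classify(np.array([0.2, -0.1])), classify(np.array([2.8, 3.1])))
```

In the MDL view, codebook size trades description length of the model against description length of the quantized data, which is what would govern the choice of k here.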