Results 1-10 of 441
MDL, Penalized Likelihood, and Statistical Risk
"... Abstract: We determine, for both countable and uncountable collections of functions, information-theoretic conditions on a penalty pen(f) such that the optimizer f̂ of the penalized log likelihood criterion log 1/likelihood(f) + pen(f) has risk not more than the index of resolvability corresponding t ..."
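The criterion quoted above can be made concrete with a small sketch. Everything below is illustrative rather than the paper's construction: a hypothetical countable family of unit-variance Gaussian means, and a made-up Kraft-style codelength `penalty(k)` in nats.

```python
import math

# Toy sketch (not the paper's construction): select from a countable
# family of unit-variance Gaussian means by minimizing
#     log 1/likelihood(f) + pen(f),
# where pen(f) is a codelength-style penalty on the candidates.

def neg_log_lik(mean, data):
    # -log likelihood of the data under N(mean, 1)
    return sum(0.5 * math.log(2 * math.pi) + 0.5 * (x - mean) ** 2 for x in data)

def penalty(k):
    # Hypothetical codelength (in nats) for the k-th candidate;
    # any Kraft-summable assignment would do for this illustration.
    return math.log(k + 1) + 1.0

def mdl_select(candidates, data):
    # The optimizer f-hat of the penalized log-likelihood criterion.
    scores = [neg_log_lik(m, data) + penalty(k) for k, m in enumerate(candidates)]
    best = min(range(len(candidates)), key=scores.__getitem__)
    return candidates[best]

data = [0.9, 1.2, 1.1, 0.8, 1.0]
print(mdl_select([0.0, 0.5, 1.0, 1.5], data))  # selects the mean nearest the data
```

The point of the quoted results is that the risk of this selected f̂ is controlled by the index of resolvability, the best trade-off of approximation error against penalty over the family.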
THE MDL PRINCIPLE, PENALIZED LIKELIHOODS, AND STATISTICAL RISK
"... ABSTRACT. We determine, for both countable and uncountable collections of functions, information-theoretic conditions on a penalty pen(f) such that the optimizer f̂ of the penalized log likelihood criterion log 1/likelihood(f) + pen(f) has statistical risk not more than the index of resolvability co ..."
Cited by 4 (3 self)
Iterative decoding of binary block and convolutional codes
 IEEE TRANS. INFORM. THEORY
, 1996
"... Iterative decoding of two-dimensional systematic convolutional codes has been termed “turbo” (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs (including a priori values) and delivers soft outputs that can be split into three terms: the soft chann ..."
Cited by 610 (43 self)
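The three-term split of the soft output is the heart of the log-likelihood algebra mentioned in the snippet. A minimal sketch (not the paper's decoder): for BPSK over AWGN with noise variance sigma², the channel log-likelihood ratio of a bit is 2y/sigma², and the extrinsic term passed between constituent decoders is the soft output minus the channel and a priori terms. The numeric values below are made up.

```python
# Minimal log-likelihood-algebra sketch (illustrative only):
# a soft-in/soft-out decoder's output decomposes as
#     L_out = L_channel + L_apriori + L_extrinsic,
# so the extrinsic information is obtained by subtraction.

def channel_llr(y, sigma2):
    # Channel LLR of a BPSK symbol received as y over AWGN.
    return 2.0 * y / sigma2

def extrinsic(l_out, l_channel, l_apriori):
    # Extrinsic term handed to the other constituent decoder.
    return l_out - l_channel - l_apriori

lc = channel_llr(0.8, 1.0)    # 2 * 0.8 / 1.0 = 1.6
le = extrinsic(2.5, lc, 0.4)  # 2.5 - 1.6 - 0.4 = 0.5 (up to float rounding)
print(lc, le)
```

In iterative ("turbo") decoding, this extrinsic value becomes the a priori input of the other decoder on the next iteration.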
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2008
"... We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added ℓ1-norm penalty term. The problem as formulated is convex but the memor ..."
Cited by 334 (2 self)
"... be interpreted as recursive ℓ1-norm penalized regression. Our second algorithm, based on Nesterov’s first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright ..."
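The objective being maximized in this line of work is easy to state even though the solvers are not. A sketch of the criterion only, with a made-up 2x2 covariance: maximize, over precision matrices Θ, log det(Θ) - tr(SΘ) - λ‖Θ‖₁, where S is the empirical covariance and λ weights the ℓ1 penalty that induces sparsity.

```python
import numpy as np

# Sketch of the sparse maximum-likelihood objective (criterion only,
# not the authors' block-coordinate or Nesterov solvers).

def penalized_loglik(theta, S, lam):
    sign, logdet = np.linalg.slogdet(theta)
    if sign <= 0:
        return -np.inf  # Theta must be positive definite
    return logdet - np.trace(S @ theta) - lam * np.abs(theta).sum()

S = np.array([[1.0, 0.3], [0.3, 1.0]])
dense  = np.linalg.inv(S)                    # unpenalized MLE of the precision
sparse = np.array([[1.0, 0.0], [0.0, 1.0]])  # candidate with zero off-diagonals
lam = 0.5
# With a large enough penalty, the sparse candidate scores higher:
print(penalized_loglik(dense, S, lam) < penalized_loglik(sparse, S, lam))
```

Comparing candidates like this shows why a large λ drives off-diagonal entries of the estimated precision matrix to zero, which is exactly the graphical-model sparsity the paper is after.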
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
"... This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking a ..."
Cited by 352 (22 self)
"... process requiring O(N log N) operations per iteration. Thus, it is the first image restoration algorithm that optimizes a wavelet-based penalized likelihood criterion and has computational complexity comparable to that of standard wavelet denoising or frequency domain deconvolution methods. The convergence ..."
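Under an ℓ1-type prior on wavelet coefficients, the maximization step of EM algorithms in this family reduces to soft-thresholding the current coefficient estimates. A minimal sketch of that single step on hypothetical coefficients (the surrounding E-step and the wavelet transform itself are omitted):

```python
import numpy as np

# Soft-thresholding: the coefficient update that such penalized-likelihood
# EM iterations share with standard wavelet denoising.

def soft_threshold(w, t):
    # Shrink each coefficient toward zero by t, setting small ones exactly to 0.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([3.0, -0.2, 0.5, -1.5, 0.05])
print(soft_threshold(w, 0.3))  # coefficients with |w| <= 0.3 become 0
```

Because this shrinkage is the same O(N) operation used in wavelet denoising, the per-iteration cost of the full algorithm is dominated by the fast transforms, consistent with the O(N log N) figure quoted above.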
MDL Procedures with ℓ1 Penalty and their Statistical Risk
"... Abstract: We review recently developed theory for the Minimum Description Length principle, penalized likelihood, and its statistical risk. An information-theoretic condition on a penalty pen(f) yields the conclusion that the optimizer of the penalized log likelihood criterion log 1/likelihood(f) + ..."
A paraboloidal surrogates algorithm for convergent penalized-likelihood emission image reconstruction
 In Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf
, 1998
"... We present a new algorithm for penalized-likelihood emission image reconstruction. The algorithm monotonically increases the objective function, converges globally to the unique maximizer, and easily accommodates the nonnegativity constraint and non-quadratic but convex penalty functions. The algorit ..."
Cited by 37 (22 self)
Rate of convergence of penalized likelihood context tree estimators
, 2007
"... Abstract: We find upper bounds for the probability of error of penalized likelihood context tree estimators, including the well-known Bayesian Information Criterion (BIC). Our bounds are all explicit and apply to trees of bounded and unbounded depth. We show that the maximal decay for the probabilit ..."
Cited by 5 (1 self)
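BIC is itself a penalized likelihood criterion, and a toy version of the estimation problem is easy to sketch. The example below restricts to full trees of a fixed depth (i.e., plain Markov order selection) rather than general context trees, and the input string is made up: choose the order k of a binary sequence by minimizing the negative log-likelihood plus the BIC penalty (|A|^k (|A|-1)/2) log n.

```python
import math
from collections import Counter

# Toy BIC-penalized order selection for a binary sequence
# (full trees of fixed depth, not general context trees).

def neg_log_lik(seq, k):
    ctx, trans = Counter(), Counter()
    for i in range(k, len(seq)):
        c = seq[i - k:i]          # length-k context preceding position i
        ctx[c] += 1
        trans[(c, seq[i])] += 1
    # Maximized log-likelihood uses empirical transition probabilities.
    return -sum(n * math.log(n / ctx[c]) for (c, _), n in trans.items())

def bic_order(seq, max_k, alphabet=2):
    n = len(seq)
    def score(k):
        pen = (alphabet ** k) * (alphabet - 1) / 2 * math.log(n)
        return neg_log_lik(seq, k) + pen
    return min(range(max_k + 1), key=score)

seq = "01" * 200  # strongly order-1: each symbol flips the previous one
print(bic_order(seq, 3))
```

The bounds discussed in the abstract concern how fast the probability that such a criterion picks the wrong tree decays with the sample size n.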
Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV
, 1998
"... this paper we very briefly review some of these results. RKHS can be chosen tailored to the problem at hand in many ways, and we review a few of them, including radial basis function and smoothing spline ANOVA spaces. Girosi (1997), Smola and Scholkopf (1997), Scholkopf et al (1997) and others have ..."
Cited by 189 (11 self)
"... noted the relationship between SVMs and penalty methods as used in the statistical theory of nonparametric regression. In Section 1.2 we elaborate on this, and show how replacing the likelihood functional of the logit (log odds ratio) in penalized likelihood methods for Bernoulli [yes/no] data ..."
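The penalized likelihood method for Bernoulli data mentioned here pairs the negative log-likelihood of the logit with a roughness or norm penalty. A deliberately simplified sketch, not the paper's RKHS machinery: a one-parameter logistic model with a quadratic penalty standing in for the squared RKHS norm, fit by plain gradient descent on made-up data.

```python
import numpy as np

# Penalized-likelihood sketch for Bernoulli (yes/no) responses:
# minimize  -log-likelihood(logit model) + lam * penalty(w).

def neg_penalized_loglik(w, x, y, lam):
    p = 1.0 / (1.0 + np.exp(-w * x))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) + lam * w ** 2

def fit(x, y, lam, lr=0.1, steps=500):
    w = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-w * x))
        # Gradient of the NLL is sum((p - y) * x); the penalty adds 2*lam*w.
        w -= lr * (np.sum((p - y) * x) + 2 * lam * w)
    return w

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])
w = fit(x, y, lam=1.0)
print(w > 0)  # positive slope: larger x means higher P(y = 1)
```

Swapping this likelihood functional for a hinge loss, as the abstract suggests, is what connects the penalized likelihood view to SVMs.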
RATE OF CONVERGENCE OF PENALIZED-LIKELIHOOD CONTEXT TREE ESTIMATORS
, 2007
"... Abstract. We find upper bounds for the probability of error of penalized-likelihood context tree estimators, where the trees are not assumed to be finite. These estimators include the well-known Bayesian Information Criterion (BIC). We show that the maximal decay for the probability of erro ..."