Results 1 - 10 of 1,771
Bounded Approximations for Marginal Likelihoods
, 2010
"... We discuss novel approaches to evaluation of both upper and lower bounds on log marginal likelihoods for model comparison in Bayesian analysis. From posterior Monte Carlo samples, we show how existing variational approximation methods defining lower bounds on marginal likelihoods can be extended to ..."
Cited by 3 (0 self)
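The variational lower-bound idea this abstract refers to can be illustrated on a toy conjugate Gaussian model where the exact log marginal likelihood is available for comparison. This is a minimal sketch of the general bound (ELBO), not the authors' method; the model and all parameter values are made up for illustration:

```python
import numpy as np

# Toy illustration of a variational lower bound (ELBO) on a log marginal
# likelihood, using a conjugate Normal model where the exact answer is known:
#   y_i ~ N(theta, sigma^2),  theta ~ N(mu0, tau0^2).

def log_marginal(y, mu0, tau0, sigma):
    """Exact log m(y): y is jointly Gaussian once theta is integrated out."""
    n = len(y)
    cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
    diff = y - mu0
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(cov, diff))

def elbo(y, mu0, tau0, sigma, m, s):
    """Lower bound on log m(y) for the variational family q(theta) = N(m, s^2):
    E_q[log p(y|theta)] + E_q[log p(theta)] + entropy of q."""
    n = len(y)
    e_lik = -0.5 * n * np.log(2 * np.pi * sigma**2) \
            - np.sum((y - m)**2 + s**2) / (2 * sigma**2)
    e_prior = -0.5 * np.log(2 * np.pi * tau0**2) \
              - ((m - mu0)**2 + s**2) / (2 * tau0**2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s**2)
    return e_lik + e_prior + entropy

y = np.array([0.5, 1.2, -0.3])
mu0, tau0, sigma = 0.0, 1.0, 1.0

# Any q gives a lower bound; choosing q equal to the exact (conjugate)
# posterior N(m_star, s_star^2) makes the bound tight.
s_star = (1 / tau0**2 + len(y) / sigma**2) ** -0.5
m_star = s_star**2 * (mu0 / tau0**2 + y.sum() / sigma**2)
```

Since ELBO = log m(y) minus the KL divergence from q to the posterior, the gap between the two functions above measures how far a candidate q is from the true posterior.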
Marginal Likelihood from the Metropolis-Hastings Output
Journal of the American Statistical Association
, 2001
"... This article provides a framework for estimating the marginal likelihood for the purpose of Bayesian model comparisons. The approach extends and completes the method presented in Chib (1995) by overcoming the problems associated with the presence of intractable full conditional densities. The propos ..."
Cited by 217 (16 self)
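The starting point of Chib's approach is the basic marginal likelihood identity, log m(y) = log f(y|θ*) + log π(θ*) − log π(θ*|y), which holds at any parameter point θ*. It can be checked exactly on a toy Beta-Binomial model where the posterior ordinate is known in closed form; this is a hypothetical sketch of the identity only, not the article's Metropolis-Hastings construction:

```python
import math

def log_beta_fn(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def chib_log_marginal(y, n, a, b, theta):
    """log m(y) via the identity log f(y|t) + log pi(t) - log pi(t|y),
    evaluated at an arbitrary point theta in (0, 1).
    Model: y ~ Binomial(n, theta), theta ~ Beta(a, b)."""
    log_choose = math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
    log_lik = log_choose + y * math.log(theta) + (n - y) * math.log(1 - theta)
    log_prior = (a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta) \
                - log_beta_fn(a, b)
    # Conjugacy: the posterior is Beta(a + y, b + n - y), so its ordinate
    # at theta is available exactly.
    log_post = (a + y - 1) * math.log(theta) + (b + n - y - 1) * math.log(1 - theta) \
               - log_beta_fn(a + y, b + n - y)
    return log_lik + log_prior - log_post

def direct_log_marginal(y, n, a, b):
    """Closed-form log m(y) for the Beta-Binomial model, for comparison."""
    log_choose = math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
    return log_choose + log_beta_fn(a + y, b + n - y) - log_beta_fn(a, b)
```

In the article's setting the posterior ordinate π(θ*|y) is not available in closed form, which is exactly where the estimate from Metropolis-Hastings output comes in.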
Marginal Likelihood for Distance Matrices
 Statistica Sinica
"... A Wishart model is proposed for random distance matrices in which the components are correlated gamma random variables, all having the same degrees of freedom. The marginal likelihood is obtained in closed form. Its use is illustrated by multidimensional scaling, by rooted tree models for ..."
Cited by 4 (1 self)
Marginal likelihood for parallel series
, 810
"... Suppose that k series, all having the same autocorrelation function, are observed in parallel at n points in time or space. From a single series of moderate length, the autocorrelation parameter β can be estimated with limited accuracy, so we aim to increase the information by formulating a suitable ..."
"... likelihood for the model with k(k + 1)/2 covariance parameters behaves anomalously in two respects. On the one hand, it is a log likelihood, so the derivatives satisfy the Bartlett identities. On the other hand, the Fisher information for β increases to a maximum at k = n/2, decreasing to zero for k ..."
Cited by 2 (0 self)
Efficient marginal likelihood optimization in . . .
"... In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k|y) and not only its mode. This leads to a distinction between MAPx ..."
"... MAPx,k strategies which estimate the mode pair x, k and often lead to undesired results, and MAPk strategies which select the best k while marginalizing over all possible x images. The MAPk principle is significantly more robust than the MAPx,k one, yet it involves a challenging marginalization over ..."
Cited by 2 (0 self)
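The difference between the two estimators the abstract contrasts is just a difference in how the posterior is reduced: take the joint mode, or sum out x first. A toy scalar stand-in (a made-up grid model, not the paper's blind-deconvolution algorithm) makes the two reductions concrete:

```python
import numpy as np

# Toy scalar stand-in for blind deconvolution: observe y ~ k * x + noise,
# with "image" x and "kernel" k restricted to small grids and flat priors.
def posterior_grid(y, xs, ks, sigma):
    """Normalized p(x, k | y) over the grid (rows index x, columns index k)."""
    logp = -0.5 * ((y - np.outer(xs, ks)) / sigma) ** 2
    p = np.exp(logp - logp.max())          # subtract max for numerical stability
    return p / p.sum()

y = 2.0
xs = np.linspace(-3.0, 3.0, 61)            # candidate "images"
ks = np.array([0.5, 1.0, 2.0])             # candidate "kernels"
post = posterior_grid(y, xs, ks, 0.5)

# MAP_{x,k}: the mode of the joint posterior (a single (x, k) pair).
ix, ik = np.unravel_index(post.argmax(), post.shape)
x_joint, k_joint = xs[ix], ks[ik]

# MAP_k: marginalize x out, then pick the k with largest posterior mass.
p_k = post.sum(axis=0)
k_marginal = ks[p_k.argmax()]
```

In the paper's setting x ranges over all images rather than a 61-point grid, which is why the marginalization over x is the challenging step.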
Efficient marginal likelihood . . .
, 2011
"... In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k|y) and not only its mode. This leads to a distinction between MAPx ..."
"... MAPx,k strategies which estimate the mode pair x, k and often lead to undesired results, and MAPk strategies which select the best k while marginalizing over all possible x images. The MAPk principle is significantly more robust than the MAPx,k one, yet it involves a challenging marginalization over ..."
Classifier Learning with Supervised Marginal Likelihood
"... It has been argued that in supervised classification tasks it may be more sensible to perform model selection with respect to a more focused model selection score, like the supervised (conditional) marginal likelihood, than with respect to the standard unsupervised marginal likelihood criterion ..."
Cited by 9 (4 self)
Efficient Forward Regression with Marginal Likelihood
"... Abstract. We propose an efficient forward regression algorithm based on greedy optimization of marginal likelihood. It can be understood as a forward selection procedure which adds a new basis vector at each step with the largest increment to the marginal likelihood. The computational cost of our al ..."
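The greedy scheme this abstract describes can be sketched for Bayesian linear regression, where the marginal likelihood (evidence) has a closed form. The prior scale `alpha`, noise precision `beta`, and the synthetic data below are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def log_evidence(Phi, y, alpha, beta):
    """log p(y | Phi) for y = Phi w + noise, with w ~ N(0, I/alpha)
    and Gaussian noise of precision beta (variance 1/beta)."""
    n = Phi.shape[0]
    C = np.eye(n) / beta + Phi @ Phi.T / alpha   # marginal covariance of y
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def forward_select(Phi, y, alpha, beta, n_basis):
    """Greedily add, at each step, the basis column whose inclusion
    gives the largest increase in the marginal likelihood."""
    chosen = []
    remaining = list(range(Phi.shape[1]))
    for _ in range(n_basis):
        scores = [log_evidence(Phi[:, chosen + [j]], y, alpha, beta)
                  for j in remaining]
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 6))                    # candidate basis vectors
y = 2.0 * Phi[:, 1] - 1.5 * Phi[:, 4] + 0.2 * rng.normal(size=40)
selected = forward_select(Phi, y, alpha=1.0, beta=25.0, n_basis=2)
```

Recomputing the full evidence for every candidate, as this naive sketch does, is expensive; the abstract's point is precisely that the per-step increment can be evaluated more cheaply.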
Generalized Marginal Likelihood for Gaussian Mixtures
 LSS Internal Report GPI94 01
, 1994
"... The dominant approach in Bernoulli-Gaussian myopic deconvolution consists in the joint maximization of a single Generalized Likelihood with respect to the input signal and the hyperparameters. The aim of this correspondence is to assess the theoretical properties of a related Generalized Marginal ..."
"... Marginal Likelihood criterion in a simplified framework where the filter is reduced to identity. Then the output is a mixture of Gaussian populations. Under a single reasonable assumption we prove that the maximum generalized marginal likelihood estimator always converges asymptotically. Then numerical ..."
Cited by 1 (1 self)