Results 1–10 of 344,577
Bounded Approximations for Marginal Likelihoods, 2010
Cited by 2 (0 self)
Abstract: "We discuss novel approaches to evaluation of both upper and lower bounds on log marginal likelihoods for model comparison in Bayesian analysis. From posterior Monte Carlo samples, we show how existing variational approximation methods defining lower bounds on marginal likelihoods can be extended to ..."
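The variational lower bounds this entry builds on can be illustrated with a minimal Monte Carlo sketch of the evidence lower bound (ELBO): for any approximating density q, E_q[log p(y, θ) − log q(θ)] ≤ log m(y). This is a generic illustration, not the paper's method; the one-observation conjugate normal model and every parameter value below are chosen purely so the exact log marginal is available for comparison.

```python
import random
from math import log, pi

def log_norm_pdf(x, mean, var):
    # log density of N(mean, var) at x
    return -0.5 * (log(2 * pi * var) + (x - mean) ** 2 / var)

def elbo(y, sigma2, mu0, tau02, m, s2, n_samples=20000, seed=0):
    # Monte Carlo estimate of E_q[log p(y|theta) + log p(theta) - log q(theta)]
    # with q(theta) = N(m, s2); this is a lower bound on log m(y).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        theta = rng.gauss(m, s2 ** 0.5)
        total += (log_norm_pdf(y, theta, sigma2)      # likelihood term
                  + log_norm_pdf(theta, mu0, tau02)   # prior term
                  - log_norm_pdf(theta, m, s2))       # entropy term
    return total / n_samples

y, sigma2, mu0, tau02 = 1.5, 1.0, 0.0, 4.0
# exact log marginal for this conjugate model: y ~ N(mu0, sigma2 + tau02)
exact = log_norm_pdf(y, mu0, sigma2 + tau02)
# a deliberately non-optimal q, so the bound sits strictly below the truth
bound = elbo(y, sigma2, mu0, tau02, m=0.8, s2=0.5)
```

The gap between `exact` and `bound` is the KL divergence from q to the true posterior, which is what variational optimisation shrinks.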
Marginal Likelihood for Distance Matrices, Statistica Sinica
Cited by 4 (1 self)
Abstract: "A Wishart model is proposed for random distance matrices in which the components are correlated gamma random variables, all having the same degrees of freedom. The marginal likelihood is obtained in closed form. Its use is illustrated by multidimensional scaling, by rooted tree models for ..."
Marginal Likelihood From the Metropolis-Hastings Output, Journal of the American Statistical Association, 2001
Cited by 209 (16 self)
Abstract: "This article provides a framework for estimating the marginal likelihood for the purpose of Bayesian model comparisons. The approach extends and completes the method presented in Chib (1995) by overcoming the problems associated with the presence of intractable full conditional densities. ..."
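Chib's approach rests on the basic marginal likelihood identity m(y) = f(y | θ*) π(θ*) / π(θ* | y), which holds at any ordinate θ*. A minimal sketch in a conjugate normal model with known variance, where the posterior ordinate is available in closed form (the data and hyperparameter values below are illustrative, and in real use the posterior ordinate is what the MCMC output estimates):

```python
from math import log, pi

def log_norm_pdf(x, mean, var):
    # log density of N(mean, var) at x
    return -0.5 * (log(2 * pi * var) + (x - mean) ** 2 / var)

def log_marginal_chib(y, sigma2, mu0, tau02, theta_star):
    # Basic marginal likelihood identity:
    #   log m(y) = log f(y | theta*) + log pi(theta*) - log pi(theta* | y)
    n = len(y)
    ybar = sum(y) / n
    # exact posterior for the conjugate normal-normal model
    taun2 = 1.0 / (1.0 / tau02 + n / sigma2)
    mun = taun2 * (mu0 / tau02 + n * ybar / sigma2)
    loglik = sum(log_norm_pdf(yi, theta_star, sigma2) for yi in y)
    return (loglik
            + log_norm_pdf(theta_star, mu0, tau02)
            - log_norm_pdf(theta_star, mun, taun2))

y = [1.2, 0.7, 1.9, 1.1]
# the identity holds at every theta*, so two different choices must agree
a = log_marginal_chib(y, sigma2=1.0, mu0=0.0, tau02=4.0, theta_star=1.0)
b = log_marginal_chib(y, sigma2=1.0, mu0=0.0, tau02=4.0, theta_star=0.3)
```

The 2001 article's contribution is estimating the denominator π(θ* | y) from Metropolis-Hastings output when no closed form exists; the sketch above only shows the identity itself.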
Marginal likelihood for parallel series, 810
Cited by 2 (0 self)
Abstract: "Suppose that k series, all having the same autocorrelation function, are observed in parallel at n points in time or space. From a single series of moderate length, the autocorrelation parameter β can be estimated with limited accuracy, so we aim to increase the information by formulating a suitable ..."
"... likelihood for the model with k(k + 1)/2 covariance parameters behaves anomalously in two respects. On the one hand, it is a log likelihood, so the derivatives satisfy the Bartlett identities. On the other hand, the Fisher information for β increases to a maximum at k = n/2, decreasing to zero for k ..."
Classifier Learning with Supervised Marginal Likelihood
Cited by 9 (4 self)
Abstract: "It has been argued that in supervised classification tasks it may be more sensible to perform model selection with respect to a more focused model selection score, like the supervised (conditional) marginal likelihood, than with respect to the standard unsupervised marginal likelihood criterion ..."
Efficient Forward Regression with Marginal Likelihood
Abstract: "We propose an efficient forward regression algorithm based on greedy optimization of marginal likelihood. It can be understood as a forward selection procedure which adds a new basis vector at each step with the largest increment to the marginal likelihood. ..."
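The forward selection procedure described in this entry can be sketched for Bayesian linear regression with an isotropic Gaussian weight prior, where the log marginal likelihood log N(y; 0, σ²I + α⁻¹ΦΦᵀ) is available in closed form. This is a naive illustration, not the authors' algorithm: the names, the prior (with assumed precision `alpha` and noise variance `sigma2`), and the brute-force recomputation of the determinant at every step are all assumptions for clarity; the paper's point is precisely to make this loop efficient.

```python
import numpy as np

def log_marginal(y, Phi, alpha, sigma2):
    # log N(y; 0, sigma2*I + (1/alpha) * Phi @ Phi.T)
    N = len(y)
    C = sigma2 * np.eye(N) + (1.0 / alpha) * Phi @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def greedy_forward(y, candidates, alpha=1.0, sigma2=0.1, tol=1e-6):
    # At each step, add the candidate basis column with the largest
    # increment to the log marginal likelihood; stop when none improves it.
    chosen = []
    current = log_marginal(y, np.zeros((len(y), 0)), alpha, sigma2)
    remaining = list(range(candidates.shape[1]))
    while remaining:
        scores = [(log_marginal(y, candidates[:, chosen + [j]], alpha, sigma2), j)
                  for j in remaining]
        best, j = max(scores)
        if best <= current + tol:
            break
        chosen.append(j)
        remaining.remove(j)
        current = best
    return chosen, current

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
# only columns 1 and 3 carry signal; the rest are noise
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=30)
chosen, lm = greedy_forward(y, X)
```

Because the marginal likelihood integrates out the weights, it penalises superfluous basis vectors automatically, which is why greedy selection on it tends to recover only the informative columns.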
Exact Evaluation of Marginal Likelihood Integrals
Abstract: "In Bayesian statistics, marginal likelihood integrals are important for model selection. Unfortunately, they are generally difficult to compute. We present algebraic algorithms for evaluating such integrals exactly for small sample sizes, and compare them with some existing approximations. ..."
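For conjugate models the marginal likelihood integral m(y) = ∫ p(y | θ) p(θ) dθ is already exact in closed form, which is the baseline such algebraic methods generalise. A minimal sketch for the Beta-Binomial case (function names are illustrative):

```python
from math import comb, lgamma, log, exp

def log_beta(a, b):
    # log Beta function via log-gamma: B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal_beta_binomial(y, n, a, b):
    # m(y) = C(n, y) * B(a + y, b + n - y) / B(a, b), computed in log space
    return log(comb(n, y)) + log_beta(a + y, b + n - y) - log_beta(a, b)

# sanity check: with a uniform Beta(1, 1) prior, m(y) = 1/(n + 1) for every y
m = exp(log_marginal_beta_binomial(3, 10, 1.0, 1.0))
```

The uniform-prior check follows from B(1 + y, 1 + n − y)/B(1, 1) = y!(n − y)!/(n + 1)!, which cancels the binomial coefficient.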
Generalized Marginal Likelihood for Gaussian Mixtures, LSS Internal Report GPI94 01, 1994
Cited by 1 (1 self)
Abstract: "The dominant approach in Bernoulli-Gaussian myopic deconvolution consists in the joint maximization of a single Generalized Likelihood with respect to the input signal and the hyperparameters. The aim of this correspondence is to assess the theoretical properties of a related Generalized Marginal Likelihood criterion in a simplified framework where the filter is reduced to identity. Then the output is a mixture of Gaussian populations. Under a single reasonable assumption we prove that the maximum generalized marginal likelihood estimator always converges asymptotically. ..."
Fast Marginal Likelihood Maximisation for Sparse Bayesian Models, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003
Cited by 115 (0 self)
Abstract: "The 'sparse Bayesian' modelling approach, as exemplified by the 'relevance vector machine', enables sparse classification and regression functions to be obtained by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular 'support vector machine', but the necessary 'training' procedure (optimisation of the marginal likelihood function) is typically much slower. We describe a new and highly accelerated algorithm which ..."