Results 1–10 of 37
Default priors for Bayesian and frequentist inference
J. Royal Statist. Soc. B, 2010
Cited by 20 (6 self)
Abstract
We investigate the choice of default prior for use with likelihood to facilitate Bayesian and frequentist inference. Such a prior is a density or relative density that weights an observed likelihood function, leading to the elimination of parameters not of interest and accordingly providing a density-type assessment for a parameter of interest. For regular models with independent coordinates we develop a second-order prior for the full parameter based on an approximate location relation from near a parameter value to near the observed data point; this derives directly from the coordinate distribution functions and is closely linked to the original Bayes approach. We then develop a modified prior that is targeted on a component parameter of interest and avoids the marginalization paradoxes of Dawid, Stone and Zidek (1973); this uses some extensions of Welch–Peers theory that modify the Jeffreys prior and builds more generally on the approximate location property. A third type of prior is then developed that targets a vector interest parameter in the presence of a vector nuisance parameter and is based more directly on the original Jeffreys approach. Examples are given to clarify the computation of the priors and the flexibility of the approach.
Inferential models: A framework for prior-free posterior probabilistic inference
J. Amer. Statist. Assoc., 2013
Default Bayesian estimation of the fundamental frequency
IEEE Trans. on ASLP, 2013
Cited by 4 (3 self)
Abstract
Abstract—Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real- and complex-valued discrete-time signals which may have missing samples or may have been sampled at a non-uniform sampling frequency. The observation model and prior distributions corresponding to the prior information are derived in a consistent fashion using maximum entropy and invariance arguments. Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance of the approximations is evaluated in a small-scale simulation study on both synthetic and real-world signals. The simulations indicate that the proposed algorithm yields more accurate results than previous algorithms. The simulation code is available online. Index Terms—Fundamental frequency estimation, Bayesian model comparison, Zellner’s g-prior.
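The grid-posterior flavour of such an approach can be illustrated with a minimal sketch. This is not the paper's algorithm: here the model order and noise variance are assumed known, the prior on a frequency grid is flat, and the harmonic amplitudes are simply profiled out by least squares.

```python
import numpy as np

# Toy Bayesian fundamental-frequency estimation on a frequency grid.
# Assumptions (not from the paper): known model order, known noise variance,
# flat prior on the grid, amplitudes profiled out by least squares.

rng = np.random.default_rng(0)
fs = 1000.0                       # sampling frequency [Hz]
n = 200
t = np.arange(n) / fs
f0_true = 55.0                    # true fundamental [Hz]
order = 3                         # number of harmonics (assumed known)
x = sum(np.cos(2 * np.pi * k * f0_true * t) for k in range(1, order + 1))
x = x + 0.5 * rng.standard_normal(n)

grid = np.linspace(40.0, 80.0, 801)   # candidate fundamentals
sigma2 = 0.25                          # assumed noise variance
log_post = np.empty_like(grid)
for i, f0 in enumerate(grid):
    # Design matrix of harmonic cosines and sines at the candidate f0.
    Z = np.column_stack(
        [np.cos(2 * np.pi * k * f0 * t) for k in range(1, order + 1)]
        + [np.sin(2 * np.pi * k * f0 * t) for k in range(1, order + 1)]
    )
    amp, *_ = np.linalg.lstsq(Z, x, rcond=None)   # profiled amplitudes
    resid = x - Z @ amp
    # Gaussian log-likelihood (profiled) plus a flat log-prior on the grid.
    log_post[i] = -0.5 * np.sum(resid**2) / sigma2

log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()
f0_map = grid[np.argmax(post)]
print(f"MAP fundamental: {f0_map:.2f} Hz")
```

The posterior mass concentrates near the true 55 Hz fundamental; the paper's contribution lies in deriving the priors and the model-order posterior rather than assuming them, as this toy sketch does.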
A note on p-values interpreted as plausibilities
Statist. Sinica, 2014
Analysis of neonatal brain lacking ATRX or MeCP2 reveals changes in nucleosome density, CTCF binding and chromatin looping
, 2014
Cited by 3 (3 self)
CONSISTENCY OF SEQUENCE CLASSIFICATION WITH ENTROPIC PRIORS
Cited by 1 (1 self)
Abstract
Entropic priors, recently revisited within the context of theoretical physics, were originally introduced for image processing and for general statistical inference. Entropic priors seem to represent a very promising approach to “objective” prior determination when prior information is not available. Attention has mostly been limited to continuous parameter spaces, and our focus in this work is on the application of the entropic prior idea to Bayesian inference with discrete classes in signal processing problems. Unfortunately, it is well known that entropic priors, when applied to sequences, may lead to excessive spreading of the entropy as the number of samples grows. In this paper we show that the spreading of the entropy may be tolerated if the posterior probabilities remain consistent. We derive a condition based on conditional entropies and KL divergences for posterior consistency using the Asymptotic Equipartition Property (AEP). Furthermore, we show that entropic priors can be modified to force posterior consistency by adding a constraint to joint entropy maximization. Simulations on the application of entropic priors to a coin flipping experiment are included.
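A toy version of the coin-flipping setting mentioned in the abstract: a uniform prior over a few candidate biases stands in for the prior (this sketch does not implement the entropic-prior construction itself), and the posterior over the discrete classes concentrates on the data-generating bias as the sequence grows, which is the consistency property at issue.

```python
import numpy as np

# Posterior consistency toy: discrete classes of coin biases, uniform prior
# (a stand-in -- NOT the entropic prior of the paper), sequential Bernoulli
# updates. The posterior should pile onto the true bias as flips accumulate.

rng = np.random.default_rng(1)
classes = np.array([0.2, 0.5, 0.8])        # candidate coin biases
true_p = 0.8                                # data-generating bias
prior = np.full(len(classes), 1.0 / len(classes))

flips = rng.random(500) < true_p            # Bernoulli(true_p) sequence
log_post = np.log(prior)
for flip in flips:
    # Add the log-likelihood of this flip under each candidate class.
    log_post += np.where(flip, np.log(classes), np.log(1.0 - classes))
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

print("posterior over classes:", np.round(post, 4))
```

After 500 flips essentially all posterior mass sits on the 0.8 class; the paper's question is under which prior constructions this concentration survives even though the joint entropy of the sequence keeps growing.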
Cluster Analysis, Model Selection, and Prior Distributions on Models
, 2011
Cited by 1 (1 self)
Abstract
Clustering is an important and challenging statistical problem for which there is an extensive literature. Modeling approaches include mixture models and product partition models. Here we develop a product partition model and a model selection procedure based on Bayes factors from intrinsic priors. We also find that the choice of the prior on model space is of utmost importance, almost overshadowing the other parts of the clustering problem, and we examine the behavior of posterior odds based on different model space priors. We find, somewhat surprisingly, that procedures based on the often-used uniform prior (in which all models are given the same prior probability) lead to inconsistent model selection procedures. We examine other priors, and find that a new prior, the hierarchical uniform prior, leads to consistent model selection procedures and has other desirable properties. Lastly, we examine our procedures, and competitors, on a range of examples.
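The contrast between a uniform prior over all models and a size-balanced alternative can be sketched on set partitions. This is illustrative only: the "hierarchical uniform" here means uniform on the number of clusters k, then uniform over partitions with exactly k clusters, which may differ in detail from the paper's construction.

```python
from math import comb, factorial

# Uniform prior over ALL partitions of n items vs. a "hierarchical uniform"
# alternative (uniform on cluster count k, then uniform within size k).
# Illustrative sketch; the paper's prior may differ in detail.

def stirling2(n, k):
    # Stirling number of the second kind: number of ways to partition
    # n labelled items into k non-empty blocks.
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

n = 5
bell = sum(stirling2(n, k) for k in range(1, n + 1))   # Bell number (52 for n=5)

def uniform_mass_on_size(k):
    # Flat-over-all-partitions prior: total mass on size k is proportional
    # to how many partitions have k clusters, so crowded sizes dominate.
    return stirling2(n, k) / bell

def hierarchical_uniform_mass_on_size(k):
    # Hierarchical uniform: every cluster count k gets equal total mass 1/n,
    # no matter how many partitions of that size exist.
    return 1.0 / n

for k in range(1, n + 1):
    print(k, stirling2(n, k),
          round(uniform_mass_on_size(k), 3),
          round(hierarchical_uniform_mass_on_size(k), 3))
```

For n = 5 the flat prior puts nearly half its mass on three-cluster models (25 of the 52 partitions), which is the kind of implicit size preference the abstract warns can drive inconsistency.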
APPROXIMATION OF IMPROPER PRIOR BY VAGUE
Submitted
Abstract
Abstract. We propose a convergence mode for prior distributions which allows a sequence of probability measures to have an improper limiting measure. We define a sequence of vague priors as a sequence of probability measures that converges to a non-informative prior. We consider some cases where vague priors necessarily have large variances and other cases where they do not. We give some constructions of vague priors that approximate the Haar measures or the Jeffreys priors. Then, we study the consequences of the convergence of prior distributions on the posterior analysis. We also revisit the Jeffreys–Lindley paradox.
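The flavour of the idea shows up already in the simplest conjugate case, a hedged sketch under assumptions not taken from the paper: for a normal mean with known unit variance, proper N(0, v) priors with growing v play the role of a vague sequence, and the posterior mean approaches the answer obtained from the improper flat prior.

```python
import numpy as np

# Vague-prior sketch: N(0, v) priors on the mean of N(mu, 1) data, with
# v growing, "approximate" the improper flat prior in the sense that the
# posterior means converge to the flat-prior posterior mean (the sample mean).
# Illustrative only; the paper defines the convergence mode in general.

rng = np.random.default_rng(2)
x = rng.normal(1.5, 1.0, size=20)      # data: N(mu, 1) with mu = 1.5
n, xbar = len(x), float(x.mean())

def posterior_mean(v):
    # Conjugate update: posterior precision n + 1/v, posterior mean below.
    return n * xbar / (n + 1.0 / v)

for v in [1.0, 10.0, 100.0, 1e6]:
    print(f"v = {v:>9}: posterior mean = {posterior_mean(v):.6f}")
print(f"flat-prior posterior mean (sample mean): {xbar:.6f}")
```

The shrinkage toward the prior mean 0 vanishes as v grows, which is the elementary instance of a vague sequence recovering the non-informative-prior posterior.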
Inference in Two-Piece Location-Scale Models
Abstract
This paper addresses the use of Jeffreys priors in the context of univariate three-parameter location-scale models, where skewness is introduced through differing scale parameters on either side of the location. We focus on various commonly used parameterizations of these models. Jeffreys priors are shown not to allow for posterior inference in the wide and practically relevant class of distributions obtained by skewing scale mixtures of normals. Easily checked conditions under which independence Jeffreys priors can be used for valid inference are derived. We empirically investigate the posterior coverage for a number of Bayesian models, which are also used to conduct inference on real data.
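One common parameterization of such a two-piece model is the split normal, with a different scale on each side of the location; the paper studies several parameterizations, and the particular normalisation below is an assumption of this sketch rather than taken from it.

```python
import numpy as np

# Split (two-piece) normal: scale s1 below the location mu, s2 above it.
# The factor 2 / (s1 + s2) makes the two half-densities integrate to one.

def two_piece_normal_pdf(x, mu, s1, s2):
    x = np.asarray(x, dtype=float)
    core = np.where(
        x < mu,
        np.exp(-0.5 * ((x - mu) / s1) ** 2),   # left piece, scale s1
        np.exp(-0.5 * ((x - mu) / s2) ** 2),   # right piece, scale s2
    )
    return 2.0 / ((s1 + s2) * np.sqrt(2.0 * np.pi)) * core

# Sanity check: the density should integrate to ~1 (Riemann sum, fine grid).
g = np.linspace(-20.0, 20.0, 400001)
dx = g[1] - g[0]
mass = float(two_piece_normal_pdf(g, 0.0, 1.0, 3.0).sum() * dx)
print(round(mass, 6))
```

With s1 < s2 the density is right-skewed while remaining continuous at the location, which is exactly the three-parameter family (location plus two scales) whose Jeffreys-prior behaviour the paper examines.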