Results 1–10 of 39
Default priors for Bayesian and frequentist inference
J. Royal Statist. Soc. B, 2010
Cited by 20 (6 self)
Abstract
We investigate the choice of default prior for use with likelihood to facilitate Bayesian and frequentist inference. Such a prior is a density or relative density that weights an observed likelihood function, leading to the elimination of parameters not of interest and accordingly providing a density-type assessment for a parameter of interest. For regular models with independent coordinates we develop a second-order prior for the full parameter based on an approximate location relation from near a parameter value to near the observed data point; this derives directly from the coordinate distribution functions and is closely linked to the original Bayes approach. We then develop a modified prior that is targeted on a component parameter of interest and avoids the marginalization paradoxes of Dawid, Stone and Zidek (1973); this uses some extensions of Welch–Peers theory that modify the Jeffreys prior and builds more generally on the approximate location property. A third type of prior is then developed that targets a vector interest parameter in the presence of a vector nuisance parameter and is based more directly on the original Jeffreys approach. Examples are given to clarify the computation of the priors and the flexibility of the approach.
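As a minimal illustration of the Jeffreys prior that the Welch–Peers extensions above modify (a sketch for a single Bernoulli observation, not the paper's second-order construction): the Fisher information is I(θ) = 1/(θ(1−θ)), and π(θ) ∝ √I(θ) normalizes to the Beta(1/2, 1/2) density.

```python
import math

def fisher_info_bernoulli(theta):
    # Fisher information of one Bernoulli(theta) observation:
    # I(theta) = 1 / (theta * (1 - theta))
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_density(theta):
    # Jeffreys prior pi(theta) proportional to sqrt(I(theta));
    # the normalizing constant is pi, giving the Beta(1/2, 1/2) density
    return math.sqrt(fisher_info_bernoulli(theta)) / math.pi
```

The density is symmetric about θ = 1/2, where it equals 2/π, and it diverges at the endpoints.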
Inferential models: A framework for prior-free posterior probabilistic inference
J. Amer. Statist. Assoc., 2013
Default Bayesian estimation of the fundamental frequency
IEEE Trans. on ASLP, 2013
Cited by 4 (3 self)
Abstract
Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real- and complex-valued discrete-time signals which may have missing samples or may have been sampled at a non-uniform sampling frequency. The observation model and the prior distributions corresponding to the prior information are derived in a consistent fashion using maximum entropy and invariance arguments. Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance of the approximations is evaluated in a small-scale simulation study on both synthetic and real-world signals. The simulations indicate that the proposed algorithm yields more accurate results than previous algorithms. The simulation code is available online. Index Terms: Fundamental frequency estimation, Bayesian model comparison, Zellner's g-prior.
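For context on the kind of joint estimator the abstract refers to, one classical approach is approximate nonlinear least squares by harmonic summation: pick the candidate frequency whose harmonics capture the most periodogram power. A minimal sketch (function and parameter names are illustrative, not the paper's algorithm; assumes a uniformly sampled real signal):

```python
import math

def pitch_grid_search(x, fs, f_lo, f_hi, n_harm, n_grid=200):
    """Grid search over fundamental frequency candidates in [f_lo, f_hi] Hz.

    For each candidate f0 sums the signal's DTFT power at the first
    n_harm harmonics and returns the candidate with the largest sum.
    """
    N = len(x)
    best_f, best_cost = f_lo, -1.0
    for g in range(n_grid):
        f0 = f_lo + (f_hi - f_lo) * g / (n_grid - 1)
        cost = 0.0
        for l in range(1, n_harm + 1):
            w = 2.0 * math.pi * l * f0 / fs
            re = sum(x[n] * math.cos(w * n) for n in range(N))
            im = sum(x[n] * math.sin(w * n) for n in range(N))
            cost += re * re + im * im
        if cost > best_cost:
            best_cost, best_f = cost, f0
    return best_f
```

Summing power over harmonics, rather than looking at a single spectral peak, is what guards against picking a harmonic instead of the fundamental.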
A note on p-values interpreted as plausibilities
Statist. Sinica, 2014
Analysis of neonatal brain lacking ATRX or MeCP2 reveals changes in nucleosome density, CTCF binding and chromatin looping
2014
Cited by 3 (3 self)
Cluster Analysis, Model Selection, and Prior Distributions on Models
2011
Cited by 1 (1 self)
Abstract
Clustering is an important and challenging statistical problem for which there is an extensive literature. Modeling approaches include mixture models and product partition models. Here we develop a product partition model and a model selection procedure based on Bayes factors from intrinsic priors. We also find that the choice of the prior on model space is of utmost importance, almost overshadowing the other parts of the clustering problem, and we examine the behavior of posterior odds based on different model space priors. We find, somewhat surprisingly, that procedures based on the often-used uniform prior (in which all models are given the same prior probability) lead to inconsistent model selection procedures. We examine other priors, and find that a new prior, the hierarchical uniform prior, leads to consistent model selection procedures and has other desirable properties. Lastly, we examine our procedures, and competitors, on a range of examples.
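To make the contrast between the two model-space priors concrete, here is one common construction (assumed for illustration; the paper's exact definitions may differ): the uniform prior gives every partition of n items the same probability 1/B_n, where B_n is the Bell number, whereas a hierarchical uniform prior first puts a uniform prior on the number of clusters k and then a uniform prior over the S(n, k) partitions with exactly k blocks.

```python
def stirling2(n, k):
    # Stirling number of the second kind: count of partitions of n items
    # into exactly k non-empty blocks, via the standard recurrence
    # S(i, j) = j * S(i-1, j) + S(i-1, j-1)
    if k == 0:
        return 1 if n == 0 else 0
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def uniform_prior(n, k):
    # Every partition gets 1 / B_n regardless of its number of blocks k
    bell = sum(stirling2(n, j) for j in range(1, n + 1))
    return 1.0 / bell

def hierarchical_uniform_prior(n, k):
    # Uniform over k in {1, ..., n}, then uniform over the S(n, k)
    # partitions with exactly k blocks
    return (1.0 / n) * (1.0 / stirling2(n, k))
```

For n = 4, the uniform prior gives every partition probability 1/15, while the hierarchical version gives each 2-block partition (1/4)(1/7) = 1/28; the hierarchical prior downweights block counts that admit many partitions, which is the kind of reallocation the consistency argument turns on.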
CONSISTENCY OF SEQUENCE CLASSIFICATION WITH ENTROPIC PRIORS
"... Entropic priors, recently revisited within the context of theoretical physics, were originally introduced for image processing and for general statistical inference. Entropic priors seem to represent a very promising approach to “objective ” prior determination when such information is not available ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Entropic priors, recently revisited within the context of theoretical physics, were originally introduced for image processing and for general statistical inference. Entropic priors seem to represent a very promising approach to “objective ” prior determination when such information is not available. The attention has been mostly limited to continuous parameter spaces and our focus in this work is on the application of the entropic prior idea to Bayesian inference with discrete classes in signal processing problems. Unfortunately, it is well known that entropic priors, when applied to sequences, may lead to excessive spreading of the entropy as the number of samples grows. In this paper we show that the spreading of the entropy may be tolerated if the posterior probabilities remain consistent. We derive a condition based on conditional entropies and KLdivergences for posterior consistency using the Asymptotic Equipartition Property (AEP). Furthermore, we show that entropic priors can be modified to force posterior consistency by adding a constraint to joint entropy maximization. Simulations on the application of entropic priors to a coin flipping experiment are included. 1
Two dogmas of strong objective Bayesianism
International Studies in the Philosophy of Science, 2010
Hidden Skewness: On the Difficulty of Multiplicative Compounding under Random Shocks
Discussion Papers, 1337
A BAYESIAN FRAMEWORK FOR THE RATIO OF TWO POISSON RATES IN THE CONTEXT OF VACCINE EFFICACY TRIALS
2010
Abstract
In many applications, we assume that two random observations x and y are generated according to independent Poisson distributions P(λS) and P(µT), and we are interested in performing statistical inference on the ratio φ = λ/µ of the two incidence rates. In vaccine efficacy trials, x and y are typically the numbers of cases in the vaccine and the control groups respectively, φ is called the relative risk, and the statistical model is called the ‘partial immunity model’. In this paper we start by defining a natural semi-conjugate family of prior distributions for this model, allowing straightforward computation of the posterior inference. Following the theory of reference priors, we define the reference prior for the partial immunity model when φ is the parameter of interest. We also define a family of reference priors with partial information on µ while remaining uninformative about φ. We notice that these priors belong to the semi-conjugate family. We then demonstrate, using numerical examples, that Bayesian credible intervals for φ enjoy attractive frequentist properties when reference priors are used, a typical property of reference priors.
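A minimal numerical sketch of inference on φ in this two-Poisson setup (using the standard conditional-binomial reduction with a Beta prior for illustration, not the paper's semi-conjugate or reference priors): given n = x + y, x is Binomial(n, p) with p = φS/(φS + T), so φ = (p/(1 − p))(T/S), and a Beta(a, b) prior on p yields a Beta(x + a, y + b) posterior that can be sampled and transformed.

```python
import random

def ratio_credible_interval(x, y, S, T, a=0.5, b=0.5, n_draws=20000, level=0.95, seed=0):
    """Monte Carlo credible interval for phi = lambda / mu.

    Conditional on n = x + y, x ~ Binomial(n, p) with p = phi*S / (phi*S + T),
    so phi = (p / (1 - p)) * (T / S).  With a Beta(a, b) prior on p
    (Jeffreys by default) the posterior is Beta(x + a, y + b); we sample p
    from it, map each draw to phi, and read off empirical quantiles.
    """
    rng = random.Random(seed)
    draws = sorted(
        (p / (1.0 - p)) * (T / S)
        for p in (rng.betavariate(x + a, y + b) for _ in range(n_draws))
    )
    lo = draws[int((1.0 - level) / 2.0 * n_draws)]
    hi = draws[int((1.0 + level) / 2.0 * n_draws) - 1]
    return lo, hi
```

For example, with x = 2 cases in the vaccine group, y = 20 in the control group, and equal person-time S = T, the interval concentrates well below φ = 1, consistent with a protective vaccine.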