Results 1–10 of 18
Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models
, 2012
"... Summary: Imaginary training samples are often used in Bayesian statistics to develop prior distributions, with appealing interpretations, for use in model comparison. Expectedposterior priors are defined via imaginary training samples coming from a common underlying predictive distribution m ∗ , us ..."
Abstract

Cited by 3 (1 self)
Summary: Imaginary training samples are often used in Bayesian statistics to develop prior distributions, with appealing interpretations, for use in model comparison. Expected-posterior priors are defined via imaginary training samples coming from a common underlying predictive distribution m∗, using an initial baseline prior distribution. These priors can have subjective and also default Bayesian implementations, based on different choices of m∗ and of the baseline prior. One of the main advantages of the expected-posterior priors is that impropriety of baseline priors causes no indeterminacy of Bayes factors; but at the same time they strongly depend on the selection and the size of the training sample. Here we combine ideas from the power-prior and the unit-information-prior methodologies to greatly diminish the effect of training samples on a Bayesian variable-selection problem using the expected-posterior prior approach: we raise the likelihood involved in the expected-posterior prior distribution to a power that produces a prior information content equivalent to one data point. The result is that in practice our power-expected-posterior (PEP) methodology is sufficiently insensitive to the size n∗ of the training sample that one may take n∗ equal to the full-data sample size and dispense with training samples altogether; this promotes stability of the resulting Bayes factors, removes the arbitrariness arising from individual …
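The construction summarized in this abstract can be written schematically as follows; the notation here is assumed for illustration, not taken from the entry:

```latex
% PEP prior for model M_l: average the power-posterior over imaginary data y*
\pi^{PEP}_{\ell}(\beta_\ell \mid \delta)
  = \int \pi^{N}_{\ell}(\beta_\ell \mid y^{*}, \delta)\,
         m^{N}_{0}(y^{*} \mid \delta)\, dy^{*},
\qquad
\pi^{N}_{\ell}(\beta_\ell \mid y^{*}, \delta)
  \propto f_{\ell}(y^{*} \mid \beta_\ell)^{1/\delta}\,
          \pi^{N}_{\ell}(\beta_\ell).
```

Raising the likelihood of the imaginary sample to the power 1/δ with δ = n∗ scales the training-sample information down to roughly one data point, which is why the method becomes insensitive to n∗ and allows n∗ = n.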
Bayesian inference of multiple Gaussian graphical models
 Journal of the American Statistical Association
, 2015
"... In this paper, we propose a Bayesian approach to inference on multiple Gaussian graphical models. Specifically, we address the problem of inferring multiple undirected networks in situations where some of the networks may be unrelated, while others share common features. We link the estimation of th ..."
Abstract

Cited by 3 (1 self)
In this paper, we propose a Bayesian approach to inference on multiple Gaussian graphical models. Specifically, we address the problem of inferring multiple undirected networks in situations where some of the networks may be unrelated, while others share common features. We link the estimation of the graph structures via a Markov random field (MRF) prior which encourages common edges. We learn which sample groups have a shared graph structure by placing a spike-and-slab prior on the parameters that measure network relatedness. This approach allows us to share information between sample groups, when appropriate, as well as to obtain a measure of relative network similarity across groups. Our modeling framework incorporates relevant prior knowledge through an edge-specific informative prior and can encourage similarity to an established network. Through simulations, we demonstrate the utility of our method in summarizing relative network similarity and compare its performance against related methods. We find improved accuracy of network estimation, particularly when the sample sizes within each subgroup are moderate. We also illustrate the application of our model to infer protein networks for various cancer subtypes and under different experimental conditions.
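A minimal sketch of the kind of MRF prior described, written for the inclusion indicators g_{ij} = (g_{ij1}, …, g_{ijK})′ of edge (i, j) across K sample groups; the notation is assumed for illustration:

```latex
% MRF prior linking edge inclusion across groups (notation assumed)
p(g_{ij} \mid \nu_{ij}, \Theta)
  \propto \exp\!\left( \nu_{ij}\, \mathbf{1}' g_{ij}
                       + g_{ij}' \,\Theta\, g_{ij} \right)
```

Here ν_{ij} carries edge-specific prior information, while the off-diagonal entries of Θ measure pairwise network relatedness; placing a spike-and-slab prior on those entries lets the model choose between "groups unrelated" (entry exactly zero) and "groups share structure" (positive entry).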
Nonparametric Bayesian testing for monotonicity. Unpublished manuscript
, 2013
"... This paper studies the problem of testing whether a function is monotone from a nonparametric Bayesian perspective. Two new families of tests are constructed. The first uses constrained smoothing splines, together with a hierarchical stochasticprocess prior that explicitly controls the prior proba ..."
Abstract

Cited by 3 (1 self)
This paper studies the problem of testing whether a function is monotone from a nonparametric Bayesian perspective. Two new families of tests are constructed. The first uses constrained smoothing splines, together with a hierarchical stochastic-process prior that explicitly controls the prior probability of monotonicity. The second uses regression splines, together with two proposals for the prior over the regression coefficients. The finite-sample performance of the tests is shown via simulation to improve upon existing frequentist and Bayesian methods. The asymptotic properties of the Bayes factor for comparing monotone versus non-monotone regression functions in a Gaussian model are also studied. Our results significantly extend those currently available, which chiefly focus on determining the dimension of a parametric linear model.
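This is not the paper's constrained-spline test, but a simple illustration of the underlying idea of comparing a monotone-restricted fit against an unrestricted one: when the monotone model is the unrestricted model confined to the monotonicity region, the Bayes factor equals the posterior over prior probability of that region (the encompassing-prior identity). The basis, prior settings, and data below are all my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-linear regression via a truncated-power basis on knots t_k:
#   f(x) = sum_k beta_k * max(x - t_k, 0)
# The slope on segment k is the partial sum beta_1 + ... + beta_k, so
# f is monotone increasing iff every partial sum of beta is >= 0.
def is_monotone(beta):
    return np.all(np.cumsum(beta, axis=-1) >= 0, axis=-1)

n, sigma, tau = 50, 0.05, 1.0
x = np.linspace(0.0, 1.0, n)
knots = np.array([0.0, 0.25, 0.5, 0.75])
X = np.maximum(x[:, None] - knots[None, :], 0.0)

# Monotone increasing truth f(x) = x, plus noise.
y = x + rng.normal(0.0, sigma, size=n)

# Conjugate Gaussian posterior for beta under the prior N(0, tau^2 I).
V = np.linalg.inv(X.T @ X / sigma**2 + np.eye(len(knots)) / tau**2)
m = V @ X.T @ y / sigma**2

draws = 20000
prior_draws = rng.normal(0.0, tau, size=(draws, len(knots)))
post_draws = rng.multivariate_normal(m, V, size=draws)

# Encompassing-prior identity: BF(monotone vs unrestricted) equals the
# posterior probability of the constraint region over its prior probability.
bf = is_monotone(post_draws).mean() / is_monotone(prior_draws).mean()
print(bf)  # > 1 here: the data favour the monotone restriction
```

With monotone data the posterior mass sits almost entirely inside the constraint region, while an exchangeable Gaussian prior puts only modest mass there, so the Bayes factor favours monotonicity.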
Uniformly most powerful Bayesian tests. Ann Stat 41:1716–1741
, 2013
"... Abstract Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerf ..."
Abstract

Cited by 3 (0 self)
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor in favor of the alternative hypothesis exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed.
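For the one-sample z-test of a normal mean with known σ, the UMPBT(γ) alternative has a closed form: the point alternative maximizing P(BF10 > γ) is μ0 + σ√(2 log γ / n). A small check of that case; the function names and numerical settings are my own:

```python
import math

def umpbt_alternative(mu0, sigma, n, gamma):
    """UMPBT(gamma) alternative for a one-sided z-test of H0: mu = mu0.

    For a normal mean with known sigma, the point alternative that
    maximizes P(BF10 > gamma) is mu0 + sigma * sqrt(2 log(gamma) / n).
    """
    return mu0 + sigma * math.sqrt(2.0 * math.log(gamma) / n)

def bayes_factor(xbar, mu0, mu1, sigma, n):
    """BF10 for the simple-vs-simple normal test (a likelihood ratio)."""
    return math.exp((n / sigma**2) * (xbar * (mu1 - mu0) - (mu1**2 - mu0**2) / 2.0))

mu1 = umpbt_alternative(mu0=0.0, sigma=1.0, n=25, gamma=10.0)
print(round(mu1, 4))                                              # 0.4292
print(bayes_factor(xbar=mu1, mu0=0.0, mu1=mu1, sigma=1.0, n=25))  # 10.0
```

A sanity check of the construction: when the observed mean lands exactly on the UMPBT alternative, BF10 = exp(n μ1²/2σ²) = γ, so the evidence threshold is met exactly at the alternative the test is designed around.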
Bayesian adaptive phase II screening design for combination trials
 Clinical Trials 2013; 0: 1–10
A.A. Physics, 1989,
, 2014
"... Bayesian hypothesis testing provides an attractive alternative that overcomes the difficulties with interpreting the pvalue or calibrating it. Bayesian hypothesis testing also provides measures of evidence for not only the alternate but the null hypothesis as well. Bayesian hypothesis testing has ..."
Abstract
Bayesian hypothesis testing provides an attractive alternative that overcomes the difficulties with interpreting the p-value or calibrating it. Bayesian hypothesis testing also provides measures of evidence not only for the alternate but for the null hypothesis as well. Bayesian hypothesis testing has its own challenges, including the choice of prior distributions for the model parameters. Researchers have suggested various choices of prior distributions. In Dynamic Factor Volatility Modeling: A Bayesian Latent Threshold Approach, Nakajima and West used a threshold-based prior for model selection and prediction [NW13b]. In this dissertation, we investigate the properties of this "Threshold prior method" as applied to testing a point null hypothesis. We show how the answers using this prior compare to the traditional g-prior method. We also investigate the convergence rates of the Bayes factors in favor of the null and alternate hypotheses as the sample size goes to infinity when the null and alternate hypotheses are true. In addition, we compare the Threshold prior and g-prior methods for a multivariate regression model. We also investigate the alternative approach of using a loss function along with traditional priors, instead of the threshold approach.
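The latent-threshold structure referred to can be sketched as follows (notation assumed here, not taken from the entry): a coefficient is set exactly to zero whenever a latent value falls below its threshold,

```latex
% Latent-threshold coefficient (schematic)
\beta_j \;=\; b_j \,\mathbf{1}\{\, |b_j| \ge d_j \,\},
```

so testing a point null about β_j amounts to asking whether the latent b_j clears its threshold d_j. The benchmark it is compared against, Zellner's g-prior, takes β | g ~ N(0, g σ² (X′X)⁻¹).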
The Whetstone and the Alum Block: Balanced Objective Bayesian Comparison of Nested Models for Discrete Data
"... ar ..."
Bayes Factor Single Arm Time-to-event User’s Guide
"... 1.1 Bayesian hypothesis test The BayesFactorTTE software implements a Bayesian hypothesis testbased method for trials with a single arm timetoevent (TTE) as described in [1]. ..."
Abstract
1.1 Bayesian hypothesis test. The BayesFactorTTE software implements a Bayesian hypothesis-test-based method for trials with a single-arm time-to-event (TTE) endpoint, as described in [1].
Bayes Factor Single Arm Binary User’s Guide
"... 1.2 Bayesian hypothesis test..................... 2 ..."