Results 1–10 of 20
Is Bayes posterior just quick and dirty confidence?
, 2010
Abstract

Cited by 26 (5 self)
Bayes (1763) introduced the observed likelihood function to statistical inference and provided a weight function to calibrate the parameter; he also introduced a confidence distribution on the parameter space but restricted attention to models now called location models; of course the names likelihood and confidence did not appear until much later: Fisher (1922) for likelihood and Neyman (1937) for confidence. Lindley (1958) showed that the Bayes and the confidence results were different when the model was not location. This paper examines the occurrence of true statements from the Bayes approach and from the confidence approach, and shows that the proportion of true statements in the Bayes case depends critically on the presence of linearity in the model; and with departure from this linearity the Bayes approach can be seriously misleading. Bayesian integration of weighted likelihood provides a first order linear approximation to confidence, but without linearity can give substantially incorrect results.
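The paper's linearity point can be seen in the simplest location case, where the flat-prior Bayes posterior and the confidence distribution coincide and credible intervals attain their nominal frequentist coverage. A minimal simulation sketch (my own illustration, not the paper's computation), assuming an i.i.d. N(θ, 1) sample with a flat prior:

```python
import numpy as np
from scipy import stats

# Location model: x_i ~ N(theta, 1).  The flat-prior posterior is
# N(xbar, 1/n), identical to the confidence distribution, so a 95%
# credible interval should cover theta about 95% of the time.
rng = np.random.default_rng(0)
theta_true, n, reps = 2.0, 10, 5000
covered = 0
for _ in range(reps):
    x = rng.normal(theta_true, 1.0, n)
    lo, hi = stats.norm.interval(0.95, loc=x.mean(), scale=1.0 / np.sqrt(n))
    covered += lo <= theta_true <= hi
print(round(covered / reps, 2))  # close to the nominal 0.95
```

Replacing the location parameter by a nonlinear function of it (the departure from linearity the paper discusses) is exactly where this agreement breaks down.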
Inferential models: A framework for prior-free posterior probabilistic inference
 J. Amer. Statist. Assoc
, 2013
The bootstrap and Markov chain Monte Carlo
Abstract

Cited by 3 (0 self)
This note concerns the use of parametric bootstrap sampling to carry out Bayesian inference calculations. This is only possible in a subset of those problems amenable to MCMC analysis, but when feasible the bootstrap approach offers both computational and theoretical advantages. The discussion here is in terms of a simple example, with no attempt at a general analysis.
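The bootstrap-for-Bayes idea can be sketched in a hypothetical normal-location example (my own illustration under stated assumptions, not the note's own example): draw parametric bootstrap replications of the estimator, then reweight them by importance weights so the weighted sample approximates the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
x = rng.normal(1.0, 1.0, n)      # data assumed from a N(theta, 1) model
xbar = x.mean()

# Parametric bootstrap: resample from the fitted N(xbar, 1) model and
# recompute the estimator each time.
B = 20000
theta_star = np.array([rng.normal(xbar, 1.0, n).mean() for _ in range(B)])

# Importance weights = posterior density / bootstrap density.  In this
# normal location model the likelihood cancels the bootstrap density of
# theta_star, so up to a constant the weight is just the prior density,
# here taken to be N(0, 1).
w = np.exp(-0.5 * theta_star**2)
post_mean = np.sum(w * theta_star) / np.sum(w)

exact = n * xbar / (n + 1)       # conjugate posterior mean under the N(0,1) prior
print(post_mean, exact)          # the two agree closely
```

The computational appeal is that the bootstrap replications are drawn independently from a fitted model, with no Markov chain tuning or convergence diagnostics.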
A note on p-values interpreted as plausibilities
 Statist. Sinica
, 2014
Coherent frequentism
, 2009
Abstract

Cited by 2 (2 self)
By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions.
Bayesian Analysis or Evidence Based Statistics
Abstract

Cited by 2 (1 self)
The original Bayes proposal leads to likelihood and confidence for many simple examples. More generally it gives approximate confidence, but to achieve exact confidence reliability it needs refinement of the argument and more than just the usual minimum of the likelihood function from observed data. A general …
Vector exponential models and second order analysis
 Pakistan Journal of Statistics and Operation Research
, 2012
Abstract

Cited by 1 (0 self)
For an exponential model with scalar parameter, Welch and Peers (1963) examined the role of Bayesian analysis in statistical inference, more specifically the use of the Jeffreys (1946) prior. They determined that Bayesian intervals, and thus in effect Bayesian quantiles, had second order confidence accuracy. We use a Taylor series expansion of the log-model to develop a second order version of the vector exponential model; this is developed as a contribution to theory in statistics at a time when algorithms are prominent, and it provides a basis for generalizing the Welch–Peers approach to the vector parameter context. (Pak. J. Stat. Oper. Res., Vol. VIII, No. 3, 2012, pp. 293–299; D.A.S. Fraser, Uyen Hoang, Kexin Ji, Xufei Li, Li Li, Wei Lin, Jie Su.)
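The Welch–Peers phenomenon can be illustrated with a small coverage check (my own sketch, not the paper's computation) in the scalar exponential model, where the Jeffreys prior in fact yields exactly matching confidence because the model is a scale model; the second-order theory concerns how closely such matching holds more generally.

```python
import numpy as np
from scipy import stats

# For i.i.d. Exponential(rate = theta) data, the Jeffreys prior
# pi(theta) ∝ 1/theta gives the posterior theta | x ~ Gamma(n, rate = sum(x)).
# Credible intervals from this posterior are checked for frequentist coverage.
rng = np.random.default_rng(2)
theta_true, n, reps = 1.5, 15, 4000
covered = 0
for _ in range(reps):
    x = rng.exponential(1.0 / theta_true, n)
    lo, hi = stats.gamma.interval(0.95, a=n, scale=1.0 / x.sum())
    covered += lo <= theta_true <= hi
print(round(covered / reps, 2))  # matches the nominal 0.95
```

In this scale model the match is exact for any sample size; for a general scalar exponential model the Jeffreys prior gives the second-order agreement that Welch and Peers established.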
, 2011
Abstract
Creating accurate, current digital maps and 3D scenes is a high priority in today’s fast-changing environment. The nation’s maps are in a constant state of revision, with many alterations or new additions each day. Digital maps have become quite common; Google Maps, MapQuest and others are examples, and these also have 3D viewing capability. Many details, such as the height of low bridges, are now included in the attribute data for the objects displayed on digital maps and scenes. To expedite the updating of these datasets, they should be created autonomously, without human intervention, from data streams. Though systems exist that attain fast, or even real-time, performance in mapping and reconstruction, they are typically restricted to creating sketches from the data stream, not accurate maps or scenes. The ever-increasing amount of image data available from private companies, governments and the internet suggests that the development of an automated system is of utmost importance. The proposed framework can create 3D views autonomously, which extends the functionality of digital mapping. The first step to creating 3D views is to recon…
unknown title
, 2013
Abstract
The differential prior gives second order inference for scalar and vector parameters
On default priors and approximate location models
© Brazilian Statistical Association, 2011
Abstract
A prior for statistical inference can be one of three basic types: a mathematical prior originally proposed in Bayes [Philos. Trans. R. Soc. Lond. 53 (1763) 370–418; 54 (1764) 269–325], a subjective prior presenting an opinion, or a truly objective prior based on an identified frequency reference. In this note we consider a method for deriving a mathematical prior based on approximate location models. This produces a mathematical posterior, and any practical interpretation of such a posterior is in terms of exact or approximate confidence under the postulated model. We describe how a proposed prior can be simply checked for consistency with confidence methods, using expansions about the maximum likelihood estimator.