Results 1–10 of 257
Adaptive Sparseness Using Jeffreys Prior (2001)
"... In this paper we introduce a new sparseness-inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly av ..."
Cited by 49 (2 self)
BAYESIAN ESTIMATION OF A BIVARIATE COPULA USING THE JEFFREYS PRIOR
"... A bivariate distribution with continuous margins can be uniquely decomposed via a copula and its marginal distributions. We consider the problem of estimating the copula function and adopt a Bayesian approach. On the space of copula functions, we construct a finite-dimensional approximation subspace which is parametrized by a doubly stochastic matrix. A major problem here is the selection of a prior distribution on the space of doubly stochastic matrices, also known as the Birkhoff polytope. The main contributions of this paper are the derivation of a simple formula for the Jeffreys ..."
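To make the doubly stochastic parametrization concrete: a standard way to map a positive matrix into the Birkhoff polytope is Sinkhorn-Knopp normalization (alternate row and column rescaling). This is only an illustrative sketch of that generic construction, not the paper's own prior or approximation scheme; the function name and iteration count are arbitrary choices.

```python
import numpy as np

def sinkhorn(mat, iters=500):
    """Push a strictly positive matrix toward the Birkhoff polytope
    (doubly stochastic matrices) by alternately normalizing rows and
    columns; Sinkhorn-Knopp converges for any strictly positive input."""
    m = mat.astype(float).copy()
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)  # make each row sum to 1
        m /= m.sum(axis=0, keepdims=True)  # make each column sum to 1
    return m

# Example: a random positive 4x4 matrix becomes (numerically) doubly stochastic
rng = np.random.default_rng(1)
ds = sinkhorn(rng.uniform(0.1, 1.0, size=(4, 4)))
```

The loop ends with a column normalization, so column sums are exact and row sums converge geometrically to 1.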
Jeffreys Priors versus Experienced Physicist Priors: Arguments against Objective Bayesian Theory (1998)
"... I review the problem of the choice of priors from the point of view of a physicist interested in measuring a physical quantity, and I try to show that the reference priors often recommended for the purpose (Jeffreys priors) do not fit the problem. Although it may seem surprising, it is easier ..."
Cited by 5 (1 self)
The Optimality of Jeffreys Prior for Online Density Estimation and the Asymptotic Normality of Maximum Likelihood Estimators
"... We study online learning under logarithmic loss with regular parametric models. We show that a Bayesian strategy predicts optimally only if it uses Jeffreys prior. This result was known for canonical exponential families; we extend it to parametric models for which the maximum likelihood estimator i ..."
Cited by 4 (2 self)
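The simplest instance of the setting this abstract describes is a Bernoulli source, where the Bayes mixture under Jeffreys prior Beta(1/2, 1/2) has a closed-form sequential predictor, the Krichevsky-Trofimov estimator. This is a minimal sketch of that well-known special case only (function names are illustrative, not from the paper):

```python
import math

def kt_prob_next_one(ones, zeros):
    """KT predictive probability of a 1 after `ones` ones and `zeros` zeros:
    the posterior mean of a Bernoulli parameter under a Beta(1/2, 1/2)
    (Jeffreys) prior."""
    return (ones + 0.5) / (ones + zeros + 1.0)

def cumulative_log_loss(bits):
    """Total log loss (in bits) of the KT predictor on a binary sequence."""
    ones = zeros = 0
    loss = 0.0
    for b in bits:
        p1 = kt_prob_next_one(ones, zeros)
        loss += -math.log2(p1 if b == 1 else 1.0 - p1)
        if b == 1:
            ones += 1
        else:
            zeros += 1
    return loss

# On 0,0,0,0 the predictor assigns probability (1/2)(3/4)(5/6)(7/8) = 35/128,
# so the cumulative log loss is 7 - log2(35) bits.
loss = cumulative_log_loss([0, 0, 0, 0])
```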
RAYLEIGH DISTRIBUTION REVISITED VIA EXTENSION OF JEFFREYS PRIOR INFORMATION AND A NEW LOSS FUNCTION
"... In this paper we present Bayes estimators of the parameter of the Rayleigh distribution that stem from an extension of the Jeffreys prior (Al-Kutubi (2005)) with a new loss function (Al-Bayyati (2002)). The performance of the proposed estimators has been compared in terms of bias and the mean squared ..."
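For reference, the standard (unextended) Jeffreys prior that such extensions build on: with the Rayleigh density f(x|σ) = (x/σ²) exp(−x²/(2σ²)), the Fisher information is I(σ) = 4/σ², so π(σ) ∝ √I(σ) ∝ 1/σ. A quick Monte Carlo sketch checks the Fisher-information value (sample size and seed are arbitrary; this is not the paper's extended prior):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.rayleigh(scale=sigma, size=200_000)  # draws from Rayleigh(sigma)

# Score function d log f(x|sigma) / d sigma for the Rayleigh density:
# log f = log x - 2 log sigma - x^2 / (2 sigma^2)
score = -2.0 / sigma + x**2 / sigma**3

# Fisher information I(sigma) = E[score^2]; closed form is 4 / sigma^2
fisher_mc = np.mean(score**2)
```

The Jeffreys prior is then proportional to sqrt(4/σ²) = 2/σ, i.e. the familiar 1/σ scale prior.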
Commentary on “The Optimality of Jeffreys Prior for Online Density Estimation and the Asymptotic Normality of Maximum Likelihood Estimators”
"... In the field of prediction with expert advice, a standard goal is to sequentially predict data as well as the best expert in some reference set of ‘expert predictors’. Universal data compression, a subfield of information theory, can be thought of as a special case. Here, the set of expert predictor ..."
"... , which is often taken to be Jeffreys’ prior — in that case we abbreviate it to J.B. The text below has been written so as to be (hopefully) understandable for readers who do not know too many details of these concepts; for such details, see e.g. Grünwald ..."
Posterior Distributions in Limited Information Analysis of the Simultaneous Equations Model Using the Jeffreys Prior
Journal of Econometrics, 1998
Cited by 25 (3 self)
Jeffreys prior analysis of the simultaneous equations model in the case with n + 1 endogenous variables
Journal of Econometrics, 2005
"... COWLES FOUNDATION DISCUSSION PAPER NO. 1198 ..."
Bayesian Data Analysis (1995)
"... I actually own a copy of Harold Jeffreys’s Theory of Probability but have only read small bits of it, most recently over a decade ago to confirm that, indeed, Jeffreys was not too proud to use a classical chi-squared p-value when he wanted to check the misfit of a model to data (Gelman, Meng and Ste ..."
Cited by 2194 (63 self)
"... the following: (1) in thinking about prior distributions, we should go beyond Jeffreys’s principles and move toward weakly informative priors; (2) it is natural for those of us who work in social and computational sciences to favor complex models, contra Jeffreys’s preference for simplicity; and (3) a key ..."