### Quantified Naturalness from Bayesian Statistics


Abstract

We present a formulation of naturalness in the framework of Bayesian statistics, which resolves the conceptual problems of previous approaches. Among other things, the relative interpretation of the measure of naturalness turns out to be unambiguously established by Jeffreys’ scale. Also, the usual sensitivity formulation (the so-called Barbieri-Giudice measure) appears to be embedded in our formulation in an extended form. We derive the general sensitivity formula applicable to an arbitrary number of observables. Several consequences and developments are further discussed. As a final illustration, we work out the map of combined fine-tuning associated with the gauge hierarchy problem and neutralino dark matter in a classic supersymmetric model.
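
The Barbieri-Giudice measure referenced in this abstract is a logarithmic sensitivity, Δ_i = |∂ ln O / ∂ ln p_i|, of an observable O with respect to fundamental parameters p_i. A minimal numerical sketch (the toy observable and parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def bg_sensitivity(observable, params, eps=1e-6):
    """Barbieri-Giudice-style sensitivity: max_i |d ln O / d ln p_i|,
    estimated here by central finite differences (toy sketch)."""
    params = np.asarray(params, dtype=float)
    deltas = []
    for i in range(len(params)):
        up, dn = params.copy(), params.copy()
        up[i] *= 1 + eps
        dn[i] *= 1 - eps
        d_lnO = np.log(observable(up)) - np.log(observable(dn))
        d_lnp = np.log1p(eps) - np.log1p(-eps)
        deltas.append(abs(d_lnO / d_lnp))
    return max(deltas)

# Toy "observable" O = a - b, which is fine-tuned when a is close to b,
# loosely mimicking a cancellation in m_Z^2 between large contributions.
mz2 = lambda p: p[0] - p[1]
print(bg_sensitivity(mz2, [1.0, 0.99]))  # large value signals fine-tuning
```

A near-cancellation (a = 1.0, b = 0.99) yields a sensitivity of order 100, the standard signal of fine-tuning in this measure.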

### SUMMARY


Abstract

Dramatically expanded routine adoption of the Bayesian approach has substantially increased the need to assess both the confirmatory and contradictory information in our prior distribution with regard to the information provided by our likelihood function. We propose a diagnostic approach that starts with the familiar posterior matching method. For a given likelihood model, we identify the difference in information needed to form two likelihood functions that, when combined respectively with a given prior and a baseline prior, will lead to the same posterior uncertainty. In cases with independent, identically distributed samples, sample size is the natural measure of information, and this difference can be viewed as the prior data size M(k), with regard to a likelihood function based on k observations. When there is no detectable prior-likelihood conflict relative to the baseline, M(k) is roughly constant over k, a constant that captures the confirmatory information. Otherwise M(k) tends to decrease with k because the contradictory prior detracts information from the likelihood function. In the case of extreme contradiction, M(k)/k will approach its lower bound −1, representing a complete cancellation of prior and likelihood information due to conflict. We also report an intriguing super-informative phenomenon where the prior effectively gains an extra (1 + r)^{-1} percent of prior data size relative to its nominal size when the prior mean coincides with the truth, where r is the percentage of the nominal prior data size relative to the total data size underlying the posterior. We demonstrate our method via several examples, including an application exploring the effect of immunoglobulin levels on lupus nephritis. We also provide a theoretical foundation of our method for virtually all likelihood-prior pairs that possess asymptotic conjugacy.
Some key words: confirmatory information; contradictory information; non-informative prior; prior distribution; prior-likelihood conflict; super-informative prior.
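
The "prior data size" idea is easiest to see in the conjugate normal case with known variance, where a N(μ₀, σ²/m) prior carries the information of m pseudo-observations. The sketch below covers only the no-conflict (confirmatory) case, where M(k) is constant in k; all numbers are illustrative and not from the paper:

```python
# Toy sketch of the "prior data size" M(k) in the normal-known-variance
# conjugate model: a N(mu0, sigma^2/m) prior contributes precision
# m/sigma^2, so the posterior from k observations has precision
# (m + k)/sigma^2 -- the same as a flat-prior posterior from k + m
# observations. Matching posterior uncertainties therefore gives
# M(k) = (k + m) - k = m, constant in k (the no-conflict case).
def prior_data_size(k, m):
    """M(k) = k' - k, where k' is the flat-prior sample size whose
    posterior variance matches the informative-prior posterior from k."""
    k_prime = k + m
    return k_prime - k

for k in (5, 50, 500):
    print(k, prior_data_size(k, m=10))  # constant 10: confirmatory prior
```

Under conflict the paper's diagnostic would instead show M(k) decreasing with k; exhibiting that requires a model where posterior uncertainty depends on the observed data, which this minimal sketch deliberately avoids.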

### Generalized Fiducial Inference for Ultrahigh Dimensional Regression ∗

, 2013



### Mutual Information is Critically Dependent on Prior Assumptions: Would the Correct Estimate of Mutual Information Please Identify Itself?


Abstract

Motivation: Mutual Information (MI) is a quantity that measures the dependence between two arbitrary random variables and has been repeatedly used to solve a wide variety of bioinformatic problems. Recently, when attempting to quantify the effects of sampling variance on computed values of MI in proteins, we encountered striking differences among various novel estimates of MI. These differences revealed that estimating the “true” value of MI is not a straightforward procedure, and minor variations of assumptions yielded remarkably different estimates. Results: We describe four formally equivalent estimates of MI, three of which explicitly account for sampling variance, that yield non-equal values of MI even given exact frequencies. These MI estimates are essentially non-predictive of each other, converging only in the limit of implausibly large data sets. Lastly, we show that all four estimates are biologically reasonable estimates of MI, despite their disparity, since each is actually the Kullback-Leibler divergence between random variables conditioned on equally plausible hypotheses. Conclusions: For sparse contingency tables of the type universally observed in protein coevolution studies, our results show that estimates of MI, and hence inferences about physical phenomena such as coevolution, are critically dependent on at least three prior assumptions: (a) how observation counts relate to expected frequencies, (b) the relationship between joint and marginal frequencies, and (c) how non-observed categories are interpreted. In any biologically relevant data, these assumptions will affect the MI estimate as much as or more than the observed data, and they are independent of uncertainty in frequency parameters. Contact: Andrew D. Fernandes
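
The abstract's central point, that prior assumptions dominate MI estimates on sparse tables, can be illustrated by comparing a naive plug-in estimate against a Dirichlet-pseudocount-smoothed one on a tiny 2×2 table. This is a generic sketch, not one of the paper's four estimators, and the smoothing choice α = 1 is one arbitrary prior among many:

```python
import numpy as np

def plugin_mi(counts):
    """Naive plug-in MI (in nats) from a joint contingency table:
    treats observed relative frequencies as the true joint distribution."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over rows
    py = p.sum(axis=0, keepdims=True)   # marginal over columns
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (px * py)), 0.0)
    return terms.sum()

def smoothed_mi(counts, alpha=1.0):
    """Plug-in MI after adding a uniform Dirichlet pseudocount alpha to
    every cell -- one defensible prior assumption among many possible."""
    return plugin_mi(counts + alpha)

sparse = np.array([[3, 0], [0, 2]])     # tiny, sparse table
print(plugin_mi(sparse), smoothed_mi(sparse))  # estimates differ sharply
```

On this table the plug-in estimate reports near-maximal dependence while the smoothed estimate is several times smaller, even though both are computed from exactly the same five observations; only the treatment of the zero cells differs.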