Results 1–10 of 33
Game Theory, Maximum Entropy, Minimum Discrepancy and Robust Bayesian Decision Theory
 ANNALS OF STATISTICS
, 2004
Probability Update: Conditioning vs. Cross-Entropy
 In Proc. Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI)
, 1997
Abstract

Cited by 19 (2 self)
Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we reexamine such a case: van Fraassen's Judy Benjamin problem [1987], which in essence asks how one might update given the value of a conditional probability. We argue that, contrary to the suggestions in the literature, it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update. 1 INTRODUCTION How should one update one's beliefs, represented as a probability distribution Pr over some ...
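The cross-entropy update that the abstract contrasts with conditionalization can be computed directly. A minimal sketch in plain Python, with assumed toy numbers: a uniform prior over three regions (Blue, Red headquarters, Red second company), then the Judy Benjamin constraint Pr(HQ | Red) = 3/4 imposed by minimizing KL divergence to the prior over a one-parameter grid:

```python
import math

# Judy Benjamin setup (illustrative numbers): uniform prior over the
# Blue region, Red-HQ region, and Red-2nd-Company region.
prior = [1/3, 1/3, 1/3]

def kl(q, p):
    """Kullback-Leibler divergence D(q || p)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# The constraint Pr(HQ | Red) = 3/4 forces q = (1-r, 3r/4, r/4), so only
# the Red mass r is free; scan it for the minimum-cross-entropy posterior.
r = min((i / 100000 for i in range(1, 100000)),
        key=lambda r: kl([1 - r, 3 * r / 4, r / 4], prior))
posterior = [1 - r, 3 * r / 4, r / 4]
```

The minimizer puts Pr(Blue) at roughly 0.363, above the prior value 1/3, even though the new information concerned only the Red side — the kind of counterintuitive answer the abstract alludes to when it says cross-entropy "can give unsatisfactory answers".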
A Field Guide to Recent Work on the Foundations of Statistical Mechanics
 FORTHCOMING IN DEAN RICKLES (ED.): THE ASHGATE COMPANION TO CONTEMPORARY PHILOSOPHY OF PHYSICS. LONDON: ASHGATE.
, 2008
Relative entropy and inductive inference
 in AIP Conference Proceedings on Bayesian Inference and Maximum Entropy Methods in Science and Engineering
, 2004
The Constraint Rule of the Maximum Entropy Principle
, 1995
Abstract

Cited by 14 (0 self)
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule of equating the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, and these can make the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory and investigates several alternative choices. The choice of a constraint rule is also show...
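The usual constraint rule described above can be made concrete with Jaynes's dice example (a sketch with an assumed empirical average of 4.5 for a six-sided die): equating that average with the expectation and maximizing entropy yields an exponential-family distribution p_k proportional to exp(lam * k), with the multiplier lam pinned down by the constraint:

```python
import math

faces = range(1, 7)
empirical_mean = 4.5  # assumed average of many observed rolls

def mean(lam):
    """Expectation of the face value under p_k proportional to exp(lam*k)."""
    w = [math.exp(lam * k) for k in faces]
    return sum(k * wk for k, wk in zip(faces, w)) / sum(w)

# mean(lam) is strictly increasing in lam, so bisection finds the
# multiplier that matches the empirical average.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean(mid) < empirical_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

z = sum(math.exp(lam * k) for k in faces)
p = [math.exp(lam * k) / z for k in faces]  # the MaxEnt assignment
```

A different constraint rule — say, matching some other function of the data instead of the sample mean — would produce a different assignment from the same rolls, which is precisely the ambiguity the paper examines.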
Probability distribution and entropy as a measure of uncertainty, arXiv:cond-mat/0612076
Abstract

Cited by 10 (4 self)
 Add to MetaCart
The relationship between three probability distributions and their maximizable entropy forms is discussed without postulating any entropy property. For this purpose, the entropy I is defined as a measure of uncertainty of the probability distribution of a random variable x by a variational relationship $dI = d\bar{x} - \overline{dx}$, a definition underlying the maximization of entropy for the corresponding distribution.
Optimal query forgery for private information retrieval
 IEEE Trans. Inform. Theory
, 2010
Abstract

Cited by 10 (4 self)
 Add to MetaCart
Abstract—We present a mathematical formulation for the optimization of query forgery for private information retrieval, in the sense that the privacy risk is minimized for a given traffic and processing overhead. The privacy risk is measured as an information-theoretic divergence between the user’s query distribution and the population’s, which includes the entropy of the user’s distribution as a special case. We carefully justify and interpret our privacy criterion from diverse perspectives. Our formulation poses a mathematically tractable problem that bears substantial resemblance with rate-distortion theory. Index Terms—Entropy, Kullback–Leibler divergence, privacy risk, private information retrieval, query forgery.
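The divergence-based privacy criterion can be illustrated numerically (hypothetical category frequencies, plain Python): measure the risk as the KL divergence between the user's query distribution and the population's, and watch it fall as forged queries drawn from the population distribution are mixed in at rate rho:

```python
import math

def kl(q, p):
    """D(q || p): privacy risk of query distribution q against population p."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

population = [0.5, 0.3, 0.2]   # hypothetical population query categories
user = [0.8, 0.1, 0.1]         # the user's genuine query distribution

def apparent(rho):
    """Observed distribution when a fraction rho of queries is forged
    by sampling from the population distribution."""
    return [(1 - rho) * u + rho * p for u, p in zip(user, population)]

risks = [kl(apparent(rho), population) for rho in (0.0, 0.25, 0.5, 1.0)]
# the risk shrinks with the forgery rate and vanishes at rho = 1,
# at the cost of the extra traffic the abstract calls overhead
```

The names `apparent` and the three-category frequencies are illustrative assumptions, not the paper's notation; the monotone trade-off between risk and forgery overhead is the point.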
Generalizing the lottery paradox
 The British Journal for the Philosophy of Science
Abstract

Cited by 9 (0 self)
This paper is concerned with formal solutions to the lottery paradox on which high probability defeasibly warrants acceptance. It considers some recently proposed solutions of this type and presents an argument showing that these solutions are trivial in that they boil down to the claim that perfect probability is sufficient for rational acceptability. The argument is then generalized, showing that a broad class of similar solutions faces the same problem. Over the past decades, there has been a steadily growing interest in utilizing probability theory to elucidate, or even analyze, concepts central to traditional epistemology. Special attention in this regard has been given to the notion of rational acceptability. Many have found the following thesis at least prima facie a promising starting point for a probabilistic elucidation of that notion: Sufficiency Thesis (ST) A proposition ϕ is rationally acceptable if Pr(ϕ) > t, where Pr is a probability distribution over propositions and t is a threshold value close to 1. Another plausible constraint is that when some propositions are rationally ...
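The arithmetic behind both the paradox and the triviality worry fits in a few lines (a sketch with an assumed 1000-ticket fair lottery and threshold t = 0.99):

```python
n, t = 1000, 0.99            # assumed lottery size and ST threshold
p_loses = 1 - 1 / n          # Pr("ticket i loses") = 0.999 for each i

# Under ST, every proposition "ticket i loses" clears the threshold ...
each_acceptable = p_loses > t

# ... yet the conjunction "every ticket loses" is certainly false, since
# exactly one ticket wins: accepting all n claims is jointly inconsistent.
p_all_lose = 0.0

# To block acceptance of the individual claims, the threshold would need
# t >= 1 - 1/n, which tends to 1 as n grows: only perfect probability
# suffices -- the triviality the abstract diagnoses in proposed solutions.
min_blocking_t = 1 - 1 / n
```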
Information in statistical physics
 Studies in History and Philosophy of Modern Physics 36 (2005), 323–353. cond-mat/0501322
Abstract

Cited by 4 (1 self)
We review with a tutorial scope the information theory foundations of quantum statistical physics. Only a small proportion of the variables that characterize a system at the microscopic scale can be controlled, for both practical and theoretical reasons, and a probabilistic description involving the observers is required. The criterion of maximum von Neumann entropy is then used for making reasonable inferences. It means that no spurious information is introduced besides the known data. Its outcomes can be given a direct justification based on the principle of indifference of Laplace. We introduce the concept of relevant entropy associated with some set of relevant variables; it characterizes the information that is missing at the microscopic level when only these variables are known. For equilibrium problems, the relevant variables are the conserved ones, and the Second Law is recovered as a second step of the inference process. For nonequilibrium problems, the increase of the relevant entropy expresses an irretrievable loss of information from the relevant variables towards the irrelevant ones. Two examples illustrate the flexibility of the choice of relevant variables and the multiplicity of the associated entropies: the thermodynamic entropy (satisfying the Clausius–Duhem inequality) and the Boltzmann entropy (satisfying the H-theorem). The identification of entropy with missing information is also supported by the paradox of Maxwell’s demon. Spin-echo experiments show that irreversibility itself is not an absolute concept: use of hidden information may overcome the arrow of time.