
## Maximum entropy and the glasses you are looking through (2000)


### Download Links

- [www-stat.wharton.upenn.edu]
- [www.cwi.nl]
- DBLP

### Other Repositories/Bibliography

Venue: Proceedings of the Sixteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI 2000)

Citations: 14 (6 self)

### Citations

12389 | Elements of Information Theory - Cover, Thomas - 1991
Citation Context: ...experimental situation'? It is this question we will partially answer through our game-theoretic reinterpretation of MaxEnt, which we proceed to discuss. 5 MAXENT AS A GAME: The information inequality (Cover & Thomas, 1991) tells us that for all distributions P and Q over X, E_P[−ln P(X)] ≤ E_P[−ln Q(X)] (5), with equality iff P = Q. This implies inf_{Q∈P_X} E_P[−ln(Q(X)/M(X))] = E_P[−ln(P(X)/M(X))], and hence entropy ca...
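The information inequality (5) quoted in this excerpt can be checked numerically. A minimal sketch in Python; the sample space {0, 1, 2} and the particular distributions P and Q are illustrative choices, not from the paper:

```python
import numpy as np

# Illustrative finite sample space X = {0, 1, 2} with two example distributions.
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.3, 0.5])

def cross_entropy(p, q):
    """E_P[-ln Q(X)] in nats."""
    return -np.sum(p * np.log(q))

# Information inequality (5): E_P[-ln P(X)] <= E_P[-ln Q(X)], equality iff P = Q.
assert cross_entropy(P, P) <= cross_entropy(P, Q)

# Hence the infimum of cross-entropy over all Q is attained at Q = P,
# which is what lets entropy be written as an infimum; spot-check
# against randomly drawn alternative distributions Q.
rng = np.random.default_rng(0)
for _ in range(1000):
    q = rng.dirichlet(np.ones(3))
    assert cross_entropy(P, P) <= cross_entropy(P, q) + 1e-12
print("information inequality verified")
```

This is exactly the step the excerpt uses to rewrite entropy as inf over Q, which sets up the game-theoretic reading in the next entry.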

1880 | Statistical Decision Theory and Bayesian Analysis, 2nd ed - Berger - 1985
Citation Context: ...(X))]. The maximum attainable entropy for distributions in a set C is therefore given by sup_{P∈C} H_M(P) = sup_{P∈C} inf_{Q∈P_X} E_P[−ln(Q(X)/M(X))] (6). Readers familiar with game theory (see e.g. Berger, 1985) will recognize (6) as the maximin gain of a two-player zero-sum game. If they are acquainted with Von Neumann's minimax theorem, they may further suspect that the following equality holds: inf_{Q∈P_X} ...
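The maximin gain (6) can be illustrated numerically: with a uniform reference M, the inner infimum over Q is attained at Q = P (by the information inequality), so the maximin value is just the maximum entropy over the constraint set C. A sketch in Python, where the constraint set C = {P on {0,1,2} : E_P[X] = 0.7} and the mean value 0.7 are hypothetical choices for illustration:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Maximin gain (6) with uniform M: sup_{P in C} H(P), computed by a grid
# search over C parameterized by p2 (then p1 = 0.7 - 2*p2, p0 = 1 - p1 - p2).
best = -np.inf
for p2 in np.linspace(0.0, 0.35, 100001):
    p1 = 0.7 - 2 * p2
    p0 = 1 - p1 - p2
    best = max(best, entropy(np.array([p0, p1, p2])))

# The MaxEnt solution is the Gibbs distribution P*(x) proportional to
# exp(lam * x); find lam matching the mean constraint by bisection.
def mean(lam):
    w = np.exp(lam * xs)
    return np.dot(xs, w / w.sum())

lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean(mid) < 0.7 else (lo, mid)
w = np.exp(lo * xs)
p_maxent = w / w.sum()

# The grid-search maximin value coincides with the MaxEnt entropy.
assert abs(best - entropy(p_maxent)) < 1e-6
print("maximin gain equals MaxEnt entropy")
```

The grid search never exceeds the MaxEnt entropy because every grid point satisfies the constraint, which is the "maximin equals MaxEnt" reading the excerpt attributes to the game-theoretic view.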

794 | Bayesian network classifiers - Friedman, Geiger, et al. - 1997

76 | The Minimum Description Length Principle and Reasoning under Uncertainty - Grünwald - 1998
Citation Context: ...At the time the author was supported by a TALENT-grant awarded by the Netherlands Organization for Scientific Research (NWO). A very preliminary version of some of the work reported here appeared in (Grünwald, 1998). several different games with different worst-case optimal strategies. We use this insight to formally distinguish between qualitatively different ways of applying a MaxEnt distribution, ranging fro...

68 | Papers on probability, statistics, and statistical physics. D - Jaynes - 1983
Citation Context: ...y been criticized on the grounds that it leads to highly representation-dependent results. Our distinction allows us to avoid this problem in many cases. 1 INTRODUCTION: The Maximum Entropy Principle (Jaynes, 1989) is an often successful yet controversial method for inductive inference. It has been justified and criticized in many different ways (Jaynes, 1989; Grove et al., 1994; Halpern & Koller, 1995). Here ...

55 | Random worlds and maximum entropy - Grove, Halpern, et al. - 1994
Citation Context: ...ODUCTION The Maximum Entropy Principle (Jaynes, 1989) is an often successful yet controversial method for inductive inference. It has been justified and criticized in many different ways (Jaynes, 1989; Grove et al., 1994; Halpern & Koller, 1995). Here we give a novel game-theoretic justification that is fundamentally different from previous ones: we show that the MaxEnt distribution for a given constraint is the distrib...

44 | A general minimax result for relative entropy - Haussler - 1997
Citation Context: ...t, minimax P^me_M gives P^me_M(X = 1) = 0.5 (one can show that it coincides with the traditional MaxEnt distribution over the convex hull of C), which -- to us -- seems more reasonable. Related Work: (Haussler, 1997) has given a related (but still essentially different) minimax result involving logarithmic regret rather than loss. (Halpern & Koller, 1995) note that MaxEnt can be made representation independent f...

25 | Bayesian network classifiers - Friedman, Geiger, et al. - 1997
Citation Context: ...obability distributions defined over discrete random variables X1, …, Xk, Y of a certain parametric form. They usually perform exceedingly well when used to predict values of Y conditional on X1, …, Xk (Friedman et al., 1997) under the 0/1 (classification) loss function. Yet they make all kinds of unwarranted independence assumptions that might lead to disastrous results if they were used to predict, say, the value of X2 ...

23 | Representation Dependence in Probabilistic Inference - Halpern, Koller - 2004
Citation Context: ... Entropy Principle (Jaynes, 1989) is an often successful yet controversial method for inductive inference. It has been justified and criticized in many different ways (Jaynes, 1989; Grove et al., 1994; Halpern & Koller, 1995). Here we give a novel game-theoretic justification that is fundamentally different from previous ones: we show that the MaxEnt distribution for a given constraint is the distribution that minimizes the...

1 | Robust Bayes and maximum generalised entropy - Dawid - 2000
Citation Context: ... strategy', i.e. for all P ∈ C, E_P[−ln(P^me_M(X)/M(X))] = E_{P^me_M}[−ln(P^me_M(X)/M(X))] (9). A similar theorem with much less conditions on Ω_X and C will be provided in (Grünwald & Dawid, 2000). Basic Interpretation: Consider the decision-theoretic setting where an Agent has to make decisions about the outcomes in some space Ω_X. Agent's decisions come from a decision space D and the lo...
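The equalizer property (9) excerpted above says the MaxEnt distribution yields the same expected log loss against every P in the constraint set. A minimal numeric sketch in Python, again using a hypothetical mean constraint E_P[X] = 0.7 on {0, 1, 2} with uniform reference M (choices made for illustration, not from the paper):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])
M = np.full(3, 1 / 3)  # uniform reference measure (illustrative)

# MaxEnt distribution P^me under E[X] = 0.7 has the Gibbs form
# P^me(x) proportional to exp(lam * x); find lam by bisection on the mean.
def mean(lam):
    w = np.exp(lam * xs)
    return np.dot(xs, w / w.sum())

lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean(mid) < 0.7 else (lo, mid)
w = np.exp(lo * xs)
p_me = w / w.sum()

def loss(p):
    """E_P[-ln(P^me(X)/M(X))]: the log loss of playing P^me against P."""
    return -np.dot(p, np.log(p_me / M))

# Two different members of C (both satisfy E_P[X] = 0.7):
P1 = np.array([0.3, 0.7, 0.0])        # mass only on {0, 1}
P2 = np.array([0.475, 0.35, 0.175])   # 0.35 + 2*0.175 = 0.7

# Equalizer property (9): the loss is the same for every P in C,
# and equals the loss under P^me itself.
assert abs(loss(P1) - loss(P2)) < 1e-9
assert abs(loss(P1) - loss(p_me)) < 1e-9
print("equalizer property verified")
```

The property holds because −ln(P^me(x)/M(x)) is affine in x for a Gibbs distribution, so its expectation depends on P only through the constrained mean E_P[X].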