Results 1–10 of 31
Representing partial ignorance
 IEEE Trans. on Systems, Man and Cybernetics
, 1996
"... Ignorance is precious, for once lost it can never be regained. This paper advocates the use of nonpurely probabilistic approaches to higherorder uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decisiondriven and as a consequenc ..."
Abstract

Cited by 36 (11 self)
Ignorance is precious, for once lost it can never be regained. This paper advocates the use of non-purely probabilistic approaches to higher-order uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decision-driven and, as a consequence, uncertainty should be represented by probability. Here we argue that representing partial ignorance is not always decision-driven. Other reasoning tasks, such as belief revision for instance, are more naturally carried out at the purely cognitive level. Conceiving knowledge representation and decision-making as separate concerns opens the way to non-purely probabilistic representations of incomplete knowledge. It is pointed out that within a numerical framework, two numbers are needed to account for partial ignorance about events, because on top of truth and falsity, the state of total ignorance must be encoded independently of the number of underlying alternatives. The paper also points out that it is consistent to accept a Bayesian view of decision-making and a non-Bayesian view of knowledge representation, because it is possible to map non-probabilistic degrees of belief to betting probabilities when needed. Conditioning rules in non-Bayesian settings are reviewed,
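The abstract's "two numbers" idea can be illustrated with a belief/plausibility pair from Dempster–Shafer theory, one of the non-Bayesian settings in this literature. This is a minimal sketch, not the paper's own formalism; the frame and mass assignments below are invented for illustration.

```python
def belief(mass, event):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for focal, m in mass.items() if set(focal) <= set(event))

def plausibility(mass, event):
    """Pl(A): total mass on focal sets that intersect A (not ruled out by A)."""
    return sum(m for focal, m in mass.items() if set(focal) & set(event))

frame = {"a", "b", "c"}

# Total ignorance: all mass on the whole frame. For ANY event A this yields
# the pair (Bel, Pl) = (0, 1), independently of how many alternatives exist.
ignorance = {frozenset(frame): 1.0}

# Partial knowledge: some mass pinned on {a}, the rest still uncommitted.
# The gap between the two numbers encodes the remaining ignorance.
partial = {frozenset({"a"}): 0.3, frozenset(frame): 0.7}
pair = (belief(partial, {"a"}), plausibility(partial, {"a"}))  # (0.3, 1.0)
```

A single probability number cannot distinguish the two states above; the interval [Bel, Pl] can, which is the point the abstract makes about needing two numbers.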
Can the Maximum Entropy Principle Be Explained as a Consistency Requirement?
, 1997
"... The principle of maximumentropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathe ..."
Abstract

Cited by 33 (1 self)
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathematical formulation and in intended scope, into the principle of maximum relative entropy or of minimum information. It has been claimed that these principles are singled out as unique methods of statistical inference that agree with certain compelling consistency requirements. This paper reviews these consistency arguments and the surrounding controversy. It is shown that the uniqueness proofs are flawed, or rest on unreasonably strong assumptions. A more general class of inference rules, maximizing the so-called Rényi entropies, is exhibited which also fulfills the reasonable part of the consistency assumptions.
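A minimal sketch of the maximum entropy principle on a finite space, using Jaynes' dice example (the numbers are illustrative, not from this paper): under a mean constraint, the maximum-entropy distribution has Gibbs form p_i ∝ exp(λx_i), and λ can be found by bisection.

```python
import math

def maxent_with_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on `values` subject to E[X] = target_mean.
    The solution has Gibbs form p_i ∝ exp(lam * x_i); the map lam -> mean is
    monotone increasing, so lam can be found by bisection."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' loaded-die example: faces 1..6 with observed mean 4.5
# (a fair die would give 3.5). With no constraint at all (lam = 0),
# the principle reduces to insufficient reason: the uniform distribution.
p = maxent_with_mean([1, 2, 3, 4, 5, 6], 4.5)
```

With a mean above 3.5 the resulting probabilities increase monotonically across the faces, as the Gibbs form requires.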
Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility
 MIND
, 2006
"... According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p new, is to be set equal to her prior conditional probability p old($X). Bayesians can be challenged to ..."
Abstract

Cited by 26 (3 self)
According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(· | X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent’s epistemic utility is to depend both upon the actual state of the world and on the agent’s credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
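Conditionalization, and the sense in which it optimises an epistemic accuracy score, can be sketched as follows. The worlds, the prior, and the use of the Brier score are illustrative assumptions, not the paper's actual proof.

```python
def conditionalize(prior, evidence):
    """p_new(w) = p_old(w | evidence): zero out excluded worlds, renormalise."""
    px = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / px if w in evidence else 0.0) for w, p in prior.items()}

def expected_brier(q, prob_A):
    """Expected Brier penalty for reporting credence q in A when P(A) = prob_A."""
    return prob_A * (1 - q) ** 2 + (1 - prob_A) * q ** 2

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
evidence = {"w1", "w2"}            # the agent learns: the world is w1 or w2
post = conditionalize(prior, evidence)

# For A = {w1}, the conditional credence is p_old(A | X) = 0.2 / 0.5 = 0.4.
# Scanning a grid of candidate credences shows the expected Brier penalty
# (computed by the prior's own lights) is minimised exactly at that value.
A_prob = post["w1"]
penalties = {q / 10: expected_brier(q / 10, A_prob) for q in range(11)}
best = min(penalties, key=penalties.get)   # 0.4
```

This is the flavour of the paper's argument: the conditional credence is the report that a proper scoring rule expects to do best, which is what makes conditionalization the expected-epistemic-utility-maximising update.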
Probability and time
, 2013
"... Probabilistic reasoning is often attributed a temporal meaning, in which conditioning is regarded as a normative rule to compute future beliefs out of current beliefs and observations. However, the wellestablished ‘updating interpretation’ of conditioning is not concerned with beliefs that evolve i ..."
Abstract

Cited by 14 (11 self)
Probabilistic reasoning is often attributed a temporal meaning, in which conditioning is regarded as a normative rule to compute future beliefs out of current beliefs and observations. However, the well-established ‘updating interpretation’ of conditioning is not concerned with beliefs that evolve in time, and in particular with future beliefs. On the other hand, a temporal justification of conditioning was already proposed by De Moivre and Bayes, by requiring that current and future beliefs be consistent. We reconsider the latter approach while dealing with a generalised version of the problem, using a behavioural theory of imprecise probability in the form of coherent lower previsions as well as coherent sets of desirable gambles, and letting the possibility space be finite or infinite. We find that using conditioning is normative, in the imprecise case, only if one establishes future behavioural commitments at the same time as current beliefs. In this case it is also normative that present beliefs be conglomerable, a result that touches on a long-standing controversy at the foundations of probability. In the remaining case, where one commits to some future behaviour after establishing present beliefs, we characterise the possibilities for defining consistent future assessments; this shows in particular that temporal consistency does not preclude changes of mind. And yet, our analysis does not support the claim that rationality requires consistency in general, even though pursuing consistency makes sense and is useful, at least as a way to guide and evaluate the assessment process. These considerations narrow down in the special case of precise
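The coherent lower previsions the abstract mentions can be illustrated as lower envelopes of expectations over a credal set (a set of candidate probability distributions). The two-point credal set below is an invented example, not from the paper.

```python
def expectation(p, gamble):
    """Expected payoff of a gamble (a map world -> payoff) under distribution p."""
    return sum(p[w] * gamble[w] for w in p)

def lower_prevision(credal_set, gamble):
    """Lower prevision: the infimum of expectations over the credal set,
    read behaviourally as a supremum acceptable buying price for the gamble."""
    return min(expectation(p, gamble) for p in credal_set)

# Imprecise beliefs about a coin: the bias for heads lies somewhere in
# [0.4, 0.7]; for a linear gamble the two extreme points suffice (assumption).
credal_set = [
    {"heads": 0.4, "tails": 0.6},
    {"heads": 0.7, "tails": 0.3},
]
gamble = {"heads": 1.0, "tails": -1.0}   # win 1 on heads, lose 1 on tails

lp = lower_prevision(credal_set, gamble)                                # -0.2
up = -lower_prevision(credal_set, {w: -x for w, x in gamble.items()})   #  0.4
```

The gap between the lower prevision (−0.2) and the upper prevision (0.4) is the behavioural counterpart of the imprecision in the agent's beliefs; a precise Bayesian agent would have the two coincide.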
Objective Bayesian nets
 We Will Show Them! Essays in Honour of Dov Gabbay
, 2005
"... I present a formalism that combines two methodologies: objective Bayesianism and Bayesian nets. According to objective Bayesianism, an agent’s degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwi ..."
Abstract

Cited by 13 (11 self)
I present a formalism that combines two methodologies: objective Bayesianism and Bayesian nets. According to objective Bayesianism, an agent’s degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwise be as non-committal as possible (i.e. have maximum entropy). Bayesian nets offer an efficient way of representing and updating probability functions. An objective Bayesian net is a Bayesian net representation of the maximum entropy probability function. I show how objective Bayesian nets can be constructed, updated and combined, and how they can deal with cases in which the agent’s background knowledge includes knowledge of qualitative influence relationships, e.g. causal influences. I then sketch a number of applications of the resulting formalism, showing how it can shed light on probability logic, causal modelling, logical reasoning, semantic reasoning, argumentation
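A tiny illustration (with assumed numbers, not from the paper) of why maximum entropy induces Bayesian net structure: if background knowledge constrains only the marginals of two variables A and B, the maximum-entropy joint makes them independent, so the objective Bayesian net needs no edge between them and the joint factorises as a product of the two node tables.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a distribution given as a list of atoms."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Assumed background knowledge: P(A) = 0.7 and P(B) = 0.2, nothing linking them.
pA, pB = 0.7, 0.2

# Atoms ordered (A&B, A&~B, ~A&B, ~A&~B). Under marginal-only constraints the
# maximum-entropy joint is the independent (product) distribution.
maxent_joint = [pA * pB, pA * (1 - pB), (1 - pA) * pB, (1 - pA) * (1 - pB)]

# Any joint with the same marginals but some correlation has strictly lower
# entropy; shift mass by eps while preserving both marginals to check this.
eps = 0.05
skewed = [pA * pB + eps, pA * (1 - pB) - eps,
          (1 - pA) * pB - eps, (1 - pA) * (1 - pB) + eps]
assert entropy(maxent_joint) > entropy(skewed)
```

This is the simplest case of the construction the abstract describes: constraints that touch disjoint sets of variables leave those variables unconnected in the net representing the maximum entropy function.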
Objective Bayesianism, Bayesian Conditionalisation
, 2008
"... Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between ..."
Abstract

Cited by 12 (7 self)
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective
Constrained Bayesian Inference for Low Rank Multitask Learning
"... We present a novel approach for constrained Bayesian inference. Unlike current methods, our approach does not require convexity of the constraint set. We reduce the constrained variational inference to a parametric optimization over the feasible set of densities and propose a general recipe for such ..."
Abstract

Cited by 5 (2 self)
We present a novel approach for constrained Bayesian inference. Unlike current methods, our approach does not require convexity of the constraint set. We reduce the constrained variational inference to a parametric optimization over the feasible set of densities and propose a general recipe for such problems. We apply the proposed constrained Bayesian inference approach to multi-task learning subject to rank constraints on the weight matrix. Further, constrained parameter estimation is applied to recover the sparse conditional independence structure encoded by prior precision matrices. Our approach is motivated by reverse inference for high-dimensional functional neuroimaging, a domain where the high dimensionality and small number of examples require the use of constraints to ensure meaningful and effective models. For this application, we propose a model that jointly learns a weight matrix and the prior inverse covariance structure between different tasks. We present experimental validation showing that the proposed approach outperforms strong baseline models in terms of predictive performance and structure recovery.
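The rank constraint in the abstract defines a non-convex feasible set. A standard ingredient for such problems (a generic sketch, not necessarily the paper's algorithm) is Frobenius-norm projection onto the rank-r set via truncated SVD, which by the Eckart–Young theorem gives the closest low-rank matrix.

```python
import numpy as np

def project_to_rank(W, r):
    """Project a weight matrix onto the (non-convex) set of rank-<=r matrices.
    The Frobenius-closest point keeps the top r singular values (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[r:] = 0.0                      # zero out all but the r largest
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))      # e.g. 6 tasks sharing a 4-dim weight space
W_low = project_to_rank(W, 2)        # enforce the rank-2 constraint
```

Iterating an unconstrained update followed by this projection is the usual projected-optimization template for non-convex feasible sets like this one.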
Objective Bayesianism with predicate languages
 Synthese
, 2008
"... Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to firstorder logical languages. It is argued that the objective Bayesian should choose a probability func ..."
Abstract

Cited by 5 (5 self)
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
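Carnap's λ-continuum, whose λ = 0 member the abstract refers to, is easy to state concretely (the counts below are invented for illustration).

```python
def carnap_predict(counts, i, lam):
    """Carnap's continuum of inductive methods over k outcome types:
    P(next observation is of type i) = (n_i + lam/k) / (n + lam),
    where n_i is the count for type i and n the total sample size."""
    n = sum(counts)
    k = len(counts)
    return (counts[i] + lam / k) / (n + lam)

counts = [3, 1, 0]                              # 4 observations over 3 types
straight = carnap_predict(counts, 0, lam=0)     # λ=0 "straight rule": 3/4
laplace = carnap_predict(counts, 0, lam=3)      # λ=k recovers Laplace: 4/7
```

The λ = 0 endpoint predicts with the observed relative frequency n_i/n alone, which is why the abstract describes the target function as frequency-induced; larger λ mixes in more of the uniform prior.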
Quantum theory as inductive inference
, 2010
"... We present the elements of a new approach to the foundations of quantum theory and information theory which is based on the algebraic approach to integration, information geometry, and maximum relative entropy methods. It enables us to deal with conceptual and mathematical problems of quantum theory ..."
Abstract

Cited by 4 (4 self)
We present the elements of a new approach to the foundations of quantum theory and information theory which is based on the algebraic approach to integration, information geometry, and maximum relative entropy methods. It enables us to deal with conceptual and mathematical problems of quantum theory without any appeal to the Hilbert space framework and without a frequentist or subjective interpretation of probability. PACS: 89.70.Cf, 02.50.Cw, 03.67.-a, 03.65.-w