Results 1 - 10 of 55
The Good Old Davis-Putnam Procedure Helps Counting Models
Journal of Artificial Intelligence Research, 1999
"... As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in a worst case. Based on the Davis-Putnam procedure, we present an algor ..."
Cited by 51 (2 self)
Abstract:
As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F; then the average running time of CDP is shown to be O(m^d n), where d = ⌈-1 / log₂(1-p)⌉. The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas. 1. Introduction Given a propositional formula F in CNF or DNF, one may want to know the number of its models, that is, assignments of truth values to its variables that satisfy F. This problem of counting models ...
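To make the flavor of the procedure concrete, here is a minimal Python sketch of Davis-Putnam-style exact model counting: split on a variable, drop satisfied clauses, shorten falsified ones, and credit 2^k models once all clauses are satisfied with k variables still unassigned. It is an illustrative toy in the spirit of CDP, not the authors' implementation; the clause representation and the variable-selection choice are my own.

```python
# Toy Davis-Putnam-style exact model counter. Clauses are frozensets of
# integer literals: 3 means x3, -3 means the negation of x3.

def count_models(clauses, variables):
    """Count assignments over `variables` (a frozenset of ints) satisfying all clauses."""
    # All clauses satisfied: every remaining variable may take either value.
    if not clauses:
        return 2 ** len(variables)
    # An empty clause can never be satisfied.
    if any(len(c) == 0 for c in clauses):
        return 0
    # Split on some variable occurring in the first clause.
    v = abs(next(iter(clauses[0])))
    rest = variables - {v}
    total = 0
    for lit in (v, -v):  # try v = True, then v = False
        new_clauses = []
        for c in clauses:
            if lit in c:                    # clause satisfied, drop it
                continue
            new_clauses.append(c - {-lit})  # falsified literal removed
        total += count_models(new_clauses, rest)
    return total


if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) has 4 models over {x1, x2, x3}.
    cnf = [frozenset({1, 2}), frozenset({-1, 3})]
    print(count_models(cnf, frozenset({1, 2, 3})))  # -> 4
```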
Statistical Foundations for Default Reasoning
1993
"... We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all w ..."
Cited by 49 (7 self)
Abstract:
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
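A toy propositional illustration of the indifference idea (the paper itself works with first-order and statistical knowledge, which this sketch does not capture): enumerate all worlds, keep those consistent with a hypothetical knowledge base, and take the degree of belief in a query to be the fraction of those worlds in which it holds. The atoms and knowledge base below are invented for the example.

```python
# Degree of belief by indifference over possible worlds (toy propositional case).
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

def worlds():
    for values in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

# Hypothetical knowledge base: penguins are birds, penguins do not fly,
# and the individual in question is a bird.
def kb(w):
    return (not w["penguin"] or w["bird"]) and \
           (not w["penguin"] or not w["flies"]) and \
           w["bird"]

def degree_of_belief(query):
    consistent = [w for w in worlds() if kb(w)]
    return sum(query(w) for w in consistent) / len(consistent)

# Fraction of KB-consistent worlds in which the bird flies.
print(degree_of_belief(lambda w: w["flies"]))
```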
From Statistics to Beliefs
1992
"... An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief ..."
Cited by 48 (13 self)
Abstract:
An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterizations: there are connections to the principle of maximum entropy, a principle of maximal independence, an...
Probabilistic Default Reasoning with Conditional Constraints
Ann. Math. Artif. Intell., 2000
"... We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of -, lexicographic, ..."
Cited by 39 (18 self)
Abstract:
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in System Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show similar behavior to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties to their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
Interval-Valued Probabilities
1998
"... 0 =h 0 in the diagram. The sawtooth line reflects the fact that even when the principle of indifference can be applied, there may be arguments whose strength can be bounded no more precisely than by an adjacent pair of indifference arguments. Note that a=h in the diagram is bounded numerically on ..."
Cited by 27 (1 self)
Abstract:
… a′/h′ in the diagram. The sawtooth line reflects the fact that even when the principle of indifference can be applied, there may be arguments whose strength can be bounded no more precisely than by an adjacent pair of indifference arguments. Note that a/h in the diagram is bounded numerically only by 0.0 and the strength of a″/h″. Keynes' ideas were taken up by B. O. Koopman [14, 15, 16], who provided an axiomatization for Keynes' probability values. The axioms are qualitative, and reflect what Keynes said about probability judgment. (It should be remembered that for Keynes probability judgment was intended to be objective in the sense that logic is objective. Although different people may accept different premises, whether or not a conclusion follows logically from a given set of premises is objective. Though Ramsey [26] attacked this aspect of Keynes' theory, it can be argued
Efficiency and Nash equilibria in a scrip system for P2P networks
In ACM Conference on Electronic Commerce, 2006
"... A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welf ..."
Cited by 26 (5 self)
Abstract:
A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.
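The following toy simulation is my own sketch, not the authors' game-theoretic model: agents hold scrip, each round a random agent requests a unit of service and pays one unit to a randomly chosen provider if it can afford to, and efficiency is read off as the fraction of requests served. It only illustrates the "too little money" side of the trade-off; the paper's equilibrium analysis also covers how too much money undermines the incentive to provide service, which this sketch ignores.

```python
# Toy scrip-economy simulation (hypothetical parameters throughout).
import random

def simulate(num_agents, total_scrip, rounds=20000, seed=0):
    rng = random.Random(seed)
    scrip = [total_scrip // num_agents] * num_agents  # even initial endowment
    served = 0
    for _ in range(rounds):
        requester = rng.randrange(num_agents)
        if scrip[requester] == 0:
            continue  # a broke agent cannot pay, so the request goes unserved
        provider = rng.randrange(num_agents - 1)
        if provider >= requester:
            provider += 1  # pick a provider distinct from the requester
        scrip[requester] -= 1
        scrip[provider] += 1
        served += 1
    return served / rounds

if __name__ == "__main__":
    for money_per_agent in (1, 2, 4, 8):
        eff = simulate(num_agents=100, total_scrip=100 * money_per_agent)
        print(f"avg scrip per agent = {money_per_agent}: fraction served = {eff:.2f}")
```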
Pruning Redundant Association Rules Using Maximum Entropy Principle
In Advances in Knowledge Discovery and Data Mining, 6th Pacific-Asia Conference, PAKDD’02, 2002
"... Data mining algorithms produce huge sets of rules, practically impossible to analyze manually. It is thus important to develop methods for removing redundant rules from those sets. We present a solution to the problem using the Maximum Entropy approach. The problem of eciency of Maximum Entropy comp ..."
Cited by 24 (4 self)
Abstract:
Data mining algorithms produce huge sets of rules, practically impossible to analyze manually. It is thus important to develop methods for removing redundant rules from those sets. We present a solution to the problem using the Maximum Entropy approach. The problem of efficiency of Maximum Entropy computations is addressed by using closed-form solutions for the most frequent cases. Analytical and experimental evaluation of the proposed technique indicates that it efficiently produces small sets of interesting association rules.
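As a rough sketch of the pruning idea (my reading, not the paper's exact algorithm): predict each rule's confidence from a maximum-entropy model built on simpler constraints, here only the single-item marginals, whose maximum-entropy solution is the independence model, and prune rules whose observed confidence matches the prediction within a tolerance. The item names and numbers below are hypothetical.

```python
# Prune association rules that a simpler maximum-entropy model already explains.

def prune_redundant(rules, item_prob, tolerance=0.05):
    """rules: list of (antecedent_items, consequent_item, observed_confidence)."""
    kept = []
    for antecedent, consequent, observed_conf in rules:
        # Max-entropy prediction under marginal constraints only:
        # the consequent is treated as independent of the antecedent.
        predicted_conf = item_prob[consequent]
        if abs(observed_conf - predicted_conf) > tolerance:
            kept.append((antecedent, consequent, observed_conf))  # genuinely informative
        # otherwise the rule adds nothing beyond the marginals and is pruned
    return kept

item_prob = {"beer": 0.3, "diapers": 0.25, "chips": 0.5}
rules = [
    ({"diapers"}, "beer", 0.62),   # far from 0.3 -> kept
    ({"chips"}, "beer", 0.31),     # close to 0.3 -> pruned as redundant
]
print(prune_redundant(rules, item_prob))
```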
Probabilistic Logic Programming under Maximum Entropy
In Proc. ECSQARU-99, LNCS 1638, 1999
"... . In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming char ..."
Cited by 22 (5 self)
Abstract:
In this paper, we focus on the combination of probabilistic logic programming with the principle of maximum entropy. We start by defining probabilistic queries to probabilistic logic programs and their answer substitutions under maximum entropy. We then present an efficient linear programming characterization for the problem of deciding whether a probabilistic logic program is satisfiable. Finally, and as a central contribution of this paper, we introduce an efficient technique for approximative probabilistic logic programming under maximum entropy. This technique reduces the original entropy maximization task to solving a modified and relatively small optimization problem. 1 Introduction Probabilistic propositional logics and their various dialects are thoroughly studied in the literature (see especially [19] and [5]; see also [15] and [16]). Their extensions to probabilistic first-order logics can be classified into first-order logics in which probabilities are defined over the do...
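The linear programming view of satisfiability can be illustrated on a tiny propositional example: a set of probabilistic constraints is satisfiable exactly when some probability distribution over the possible worlds meets them, which is a linear feasibility problem. The sketch below (assuming NumPy and SciPy are available) enumerates worlds explicitly, which the paper's efficient characterization is designed to avoid; the atoms and constraints are hypothetical.

```python
# Satisfiability of probabilistic constraints as a linear feasibility check.
from itertools import product
import numpy as np
from scipy.optimize import linprog

atoms = ["bird", "flies"]
worlds = list(product([False, True], repeat=len(atoms)))  # 4 possible worlds

def event_vector(predicate):
    """Indicator vector of the worlds in which `predicate` holds."""
    return np.array([1.0 if predicate(dict(zip(atoms, w))) else 0.0 for w in worlds])

# Hypothetical constraints: P(bird) = 1, and 0.8 <= P(flies | bird),
# i.e. 0.8 * P(bird) <= P(flies & bird).
bird = event_vector(lambda w: w["bird"])
flies_and_bird = event_vector(lambda w: w["flies"] and w["bird"])

A_eq = np.vstack([np.ones(len(worlds)), bird])   # probabilities sum to 1, P(bird) = 1
b_eq = np.array([1.0, 1.0])
A_ub = np.vstack([0.8 * bird - flies_and_bird])  # 0.8*P(bird) - P(flies & bird) <= 0
b_ub = np.array([0.0])

res = linprog(c=np.zeros(len(worlds)), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(worlds))
print("satisfiable" if res.status == 0 else "unsatisfiable")
```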
Combining probabilistic logic programming with the power of maximum entropy
Artif. Intell., 2004
"... This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic progra ..."
Cited by 21 (4 self)
Abstract:
This paper is on the combination of two powerful approaches to uncertain reasoning: logic programming in a probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to probabilistic logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of probabilistic logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical logic programming. It is only defined for the more restricted case of probabilistic logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of techniques from probabilistic
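To illustrate the maximum-entropy entailment that both approaches build on, here is a small sketch (assuming NumPy and SciPy) that picks, among all distributions over a tiny world space satisfying some hypothetical linear constraints, the one of maximum entropy, and then reads the probability of a query off it. This shows the bare principle only; it does not implement the paper's logic-programming semantics or algorithms.

```python
# Maximum-entropy distribution over possible worlds under linear constraints.
from itertools import product
import numpy as np
from scipy.optimize import minimize

atoms = ["bird", "penguin", "flies"]
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]
n = len(worlds)

def indicator(pred):
    return np.array([float(pred(w)) for w in worlds])

# Hypothetical constraints: P(bird) = 1 and P(flies & bird) = 0.9.
constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ indicator(lambda w: w["bird"]) - 1.0},
    {"type": "eq", "fun": lambda p: p @ indicator(lambda w: w["flies"] and w["bird"]) - 0.9},
]

def neg_entropy(p):
    q = np.clip(p, 1e-12, None)  # avoid log(0)
    return float(np.sum(q * np.log(q)))

res = minimize(neg_entropy, x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n, constraints=constraints)
p = res.x
print("P(penguin) under maximum entropy:", p @ indicator(lambda w: w["penguin"]))
```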