Probabilistic Modelling, Inference and Learning using Logical Theories
Abstract

Cited by 9 (3 self)
This paper provides a study of probabilistic modelling, inference and learning in a logic-based setting. We show how probability densities, being functions, can be represented and reasoned with naturally and directly in higher-order logic, an expressive formalism not unlike the (informal) everyday language of mathematics. We give efficient inference algorithms and illustrate the general approach with a diverse collection of applications. Some learning issues are also considered.
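The abstract's point that densities "being functions" compose naturally can be sketched outside the logic in ordinary code: a density is just a function from reals to reals, and operations such as mixing are higher-order functions over densities. Everything below (names, distributions, weights) is our illustrative assumption, not the paper's formalism.

```python
import math

def gaussian(mu, sigma):
    """Return the density function of N(mu, sigma^2) as a first-class value."""
    def density(x):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return density

def mixture(weights, densities):
    """Combine densities into a mixture density -- a higher-order operation."""
    def density(x):
        return sum(w * d(x) for w, d in zip(weights, densities))
    return density

# A bimodal density built by composing two Gaussians.
bimodal = mixture([0.5, 0.5], [gaussian(-1.0, 0.5), gaussian(1.0, 0.5)])
print(bimodal(0.0))
```

Because densities are ordinary values here, the same combinators could be reused for conditioning or marginalization, which is the flavor of reasoning the paper carries out inside higher-order logic.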
Probabilities on Sentences in an Expressive Logic
, 2012
Abstract

Cited by 5 (2 self)
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish list into technical requirements for a prior probability
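As a sketch, items (iii) and (vi) of the wish list can be written as constraints on the prior $P$; the notation below is ours, and the paper's formalization may differ.

```latex
% (iii) reduction to deductive logic in the limit:
\text{if } KB \models \varphi \text{ then } P(\varphi \mid KB) = 1,
\qquad
\text{if } KB \models \neg\varphi \text{ then } P(\varphi \mid KB) = 0.

% (vi) confirmation of universal hypotheses: observing instances
% must be able to raise the probability of the universal, e.g.
\lim_{n \to \infty}
P\bigl(\forall x\,\varphi(x) \,\big|\, \varphi(a_1), \dots, \varphi(a_n)\bigr) > 0.
```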
Multi-Agent Filtering with Infinitely Nested Beliefs
Abstract

Cited by 4 (0 self)
In partially observable worlds with many agents, nested beliefs are formed when agents simultaneously reason about the unknown state of the world and the beliefs of the other agents. The multi-agent filtering problem is to efficiently represent and update these beliefs through time as the agents act in the world. In this paper, we formally define an infinite sequence of nested beliefs about the state of the world at the current time t, and present a filtering algorithm that maintains a finite representation which can be used to generate these beliefs. In some cases, this representation can be updated exactly in constant time; we also present a simple approximation scheme to compact beliefs if they become too complex. In experiments, we demonstrate efficient filtering in a range of multi-agent domains.
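The zeroth level of the filtering problem described above is an ordinary discrete Bayes filter over the world state; the paper's contribution is nesting such beliefs (beliefs about beliefs), which is not shown here. The predict-then-update step can be sketched as follows; the two-state domain, transition model, and observation model are illustrative assumptions.

```python
def normalize(belief):
    """Rescale a state -> probability map so it sums to one."""
    total = sum(belief.values())
    return {s: p / total for s, p in belief.items()}

def filter_step(belief, transition, likelihood):
    """One Bayes-filter step. belief: dict state -> P(state);
    transition: state -> dict of next-state probabilities;
    likelihood: state -> P(observation | state)."""
    predicted = {}
    for s, p in belief.items():               # predict: push belief through dynamics
        for s2, t in transition(s).items():
            predicted[s2] = predicted.get(s2, 0.0) + p * t
    # update: weight by observation likelihood and renormalize
    return normalize({s: p * likelihood(s) for s, p in predicted.items()})

# Two-state example: the weather persists with probability 0.9,
# and we observe an umbrella (more likely when rainy).
belief = {"sunny": 0.5, "rainy": 0.5}
trans = lambda s: {s: 0.9, ("rainy" if s == "sunny" else "sunny"): 0.1}
obs_umbrella = lambda s: 0.2 if s == "sunny" else 0.9
belief = filter_step(belief, trans, obs_umbrella)
print(belief)
```

In the multi-agent setting of the paper, the object being filtered is not a single distribution like `belief` but an infinite sequence of such distributions, one per nesting depth, maintained through a finite representation.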
Factored Models for Probabilistic Modal Logic
 Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence
, 2008
Abstract

Cited by 4 (1 self)
Modal logic represents knowledge that agents have about other agents' knowledge. Probabilistic modal logic further captures probabilistic beliefs about probabilistic beliefs. Models in those logics are useful for understanding and decision making in conversations, bargaining situations, and competitions. Unfortunately, probabilistic modal structures are impractical for large real-world applications because they represent their state space explicitly. In this paper we scale up probabilistic modal structures by giving them a factored representation. This representation applies conditional independence for factoring the probabilistic aspect of the structure (as in Bayesian networks (BNs)). We also present two exact algorithms and one approximate algorithm for reasoning about the truth value of probabilistic modal logic queries over a model encoded in factored form. The first exact algorithm applies inference in BNs to answer a limited class of queries. Our second exact method applies a variable elimination scheme and is applicable without restrictions. Our approximate algorithm uses sampling and can be used for applications with very large models. Given a query, it computes an answer and its confidence level efficiently.
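The variable-elimination scheme mentioned in the abstract is, at its core, the standard factor calculus of Bayesian networks: multiply factors pointwise, then sum out a variable. A minimal sketch over ordinary Boolean factors (not modal structures; the representation and example numbers are our own assumptions) looks like this:

```python
from itertools import product

def multiply(f1, f2):
    """Pointwise product of two factors, each a (variables, table) pair."""
    v1, t1 = f1
    v2, t2 = f2
    vs = list(v1) + [v for v in v2 if v not in v1]
    table = {}
    for vals in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, vals))
        table[vals] = t1[tuple(a[v] for v in v1)] * t2[tuple(a[v] for v in v2)]
    return (vs, table)

def sum_out(f, var):
    """Eliminate var from factor f by summing over its values."""
    vs, t = f
    i = vs.index(var)
    new_t = {}
    for vals, p in t.items():
        key = vals[:i] + vals[i + 1:]
        new_t[key] = new_t.get(key, 0.0) + p
    return (vs[:i] + vs[i + 1:], new_t)

# Tiny two-node network P(a), P(b | a); query P(b) by eliminating a.
pa = (["a"], {(0,): 0.4, (1,): 0.6})
pb_given_a = (["a", "b"], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7})
pb = sum_out(multiply(pa, pb_given_a), "a")
print(pb[1])  # marginal table for b
```

The paper's contribution is applying this kind of factored computation to the probabilistic aspect of modal structures, so that the state space never has to be enumerated explicitly.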
Probabilistic and logical beliefs
 Proceedings of the Workshop on Languages, Methodologies and Development Tools for Multi-Agent Systems (LADS'007)
, 2007
Abstract

Cited by 3 (3 self)
This paper proposes a method of integrating two different concepts of belief in artificial intelligence: belief as a probability distribution and belief as a logical formula. The setting for the integration is a highly expressive logic. The integration is explained in detail, as is its comparison to other approaches to integrating logic and probability. An illustrative example is given to motivate the usefulness of the ideas in agent applications.
Reasoning under the principle of maximum entropy for modal logics K45, KD45, and S5
 In Theoretical Aspects of Rationality and Knowledge (TARK)
, 2013
Abstract

Cited by 1 (1 self)
We propose modal Markov logic as an extension of propositional Markov logic to reason under the principle of maximum entropy for modal logics K45, KD45, and S5. Analogous to propositional Markov logic, the knowledge base consists of weighted formulas, whose weights are learned from data. However, in contrast to Markov logic, in our framework we use the knowledge base to define a probability distribution over non-equivalent epistemic situations (pointed Kripke structures) rather than over atoms, and use this distribution to assign probabilities to modal formulas. As in all probabilistic representations, the central task in our framework is inference. Although the size of the state space grows doubly exponentially in the number of propositions in the domain, we provide an algorithm that scales only exponentially in the size of the knowledge base. Finally, we briefly discuss the case of languages with an infinite number of propositions.
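The propositional Markov logic that this abstract builds on is a log-linear model: each world (truth assignment) gets a score of exp of the summed weights of the formulas it satisfies, and probabilities are the normalized scores. The modal extension replaces worlds with pointed Kripke structures, which is not shown here; the formulas and weights below are illustrative assumptions.

```python
import math
from itertools import product

props = ["p", "q"]
# Knowledge base: (feature function over a world, weight).
kb = [
    (lambda w: w["p"] or w["q"], 1.5),       # weighted formula: p OR q
    (lambda w: (not w["p"]) or w["q"], 0.8),  # weighted formula: p IMPLIES q
]

# Enumerate all worlds and their unnormalized log-linear scores.
worlds = [dict(zip(props, vals)) for vals in product([False, True], repeat=len(props))]
scores = [math.exp(sum(w for f, w in kb if f(world))) for world in worlds]
z = sum(scores)  # partition function

def prob(query):
    """Probability assigned to an arbitrary query (a predicate on worlds)."""
    return sum(s for world, s in zip(worlds, scores) if query(world)) / z

print(prob(lambda w: w["q"]))
```

The inference challenge the abstract addresses is exactly the cost of the enumeration above: over epistemic situations the state space grows doubly exponentially in the number of propositions, so their algorithm instead scales with the size of the knowledge base.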
Modal Markov Logic for Multiple Agents
Abstract
Modal Markov logic for a single agent has previously been proposed as an extension of propositional Markov logic. While that framework allowed reasoning under the principle of maximum entropy for various modal logics, it is not feasible to apply its counting-based inference to reason about the beliefs and knowledge of multiple agents, due to the magnitude of the numbers involved. We propose a modal extension of propositional Markov logic that avoids this problem by coarsening the state space. The problem stems from the fact that in the single-agent setting the state space is only doubly exponential in the number of propositions in the domain, but it can potentially become infinite in the multi-agent setting. In addition, the proposed framework adds only the overhead of deciding satisfiability for the chosen modal logic on top of the complexity of exact inference in propositional Markov logic. The proposed framework allows one to find a distribution that matches probabilities of formulas obtained from training data (or provided by an expert). Finally, we show how one can compute lower and upper bounds on the probabilities of arbitrary formulas.
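The bounds mentioned at the end follow a generic pattern for coarsened state spaces: if the states are partitioned into blocks with known probability mass, a formula's probability is bounded below by the mass of blocks on which it holds everywhere, and above by the mass of blocks on which it holds somewhere. The blocks and numbers below are illustrative assumptions, not the paper's construction.

```python
def bounds(blocks, holds):
    """blocks: list of (states, mass) pairs partitioning the state space;
    holds: predicate deciding whether the formula is true in a state.
    Returns (lower, upper) bounds on the formula's probability."""
    lower = sum(mass for states, mass in blocks if all(holds(s) for s in states))
    upper = sum(mass for states, mass in blocks if any(holds(s) for s in states))
    return lower, upper

# Three coarse blocks over states 0..5, masses summing to 1.
blocks = [([0, 1], 0.5), ([2, 3], 0.3), ([4, 5], 0.2)]
lo, hi = bounds(blocks, lambda s: s >= 1)  # "formula" true on states 1..5
print(lo, hi)
```

When every block is homogeneous with respect to the formula (it holds on all of a block's states or none of them), the two bounds coincide and the coarse representation loses nothing for that query.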