Results 1 - 9 of 9
A Tractable First-Order Probabilistic Logic
Abstract

Cited by 8 (3 self)
Tractable subsets of first-order logic are a central topic in AI research. Several of these formalisms have been used as the basis for first-order probabilistic languages. However, these are intractable, losing the original motivation. Here we propose the first non-trivially tractable first-order probabilistic language. It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity. We call it TML (Tractable Markov Logic). We show that TML knowledge bases allow for efficient inference even when the corresponding graphical models have very high treewidth. We also show how probabilistic inheritance, default reasoning, and other inference patterns can be carried out in TML. TML opens up the prospect of efficient large-scale first-order probabilistic inference.
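The decomposition this abstract alludes to — summing over subclass choices and multiplying over independent parts — is what keeps TML inference polynomial despite high treewidth. A minimal sketch of that sum-product recursion over a class/part tree, in log space (the data layout here is hypothetical, not TML's actual syntax):

```python
import math

def partition_function(node):
    """Log partition function of a class/part tree, bottom-up.

    node = {"subclasses": [(log_weight, [part_nodes]), ...]}
    Each subclass choice contributes its weight times the product of
    its parts' partition functions; choices are summed. Because parts
    are independent given the class, the recursion is linear in the
    tree size rather than exponential in treewidth.
    """
    totals = []
    for log_w, parts in node["subclasses"]:
        totals.append(log_w + sum(partition_function(p) for p in parts))
    # log-sum-exp over the subclass choices
    m = max(totals)
    return m + math.log(sum(math.exp(t - m) for t in totals))
```

For example, an object that is either of two subclasses with weights 1 and 2 has Z = 3; an object composed of two such independent parts has Z = 9.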
Efficient Lifting of MAP LP Relaxations Using k-Locality
Abstract

Cited by 3 (1 self)
Inference in large-scale graphical models is an important task in many domains, and in particular for probabilistic relational models (e.g., Markov logic networks). Such models often exhibit considerable symmetry, and it is a challenge to devise algorithms that exploit this symmetry to speed up inference. Here we address this task in the context of the MAP inference problem and its linear programming relaxations. We show that symmetry in these problems can be discovered using an elegant algorithm known as the k-dimensional Weisfeiler-Lehman (k-WL) algorithm. We run k-WL on the original graphical model, and not on the far larger graph of the linear program (LP) as proposed in earlier work in the field. Furthermore, the algorithm is polynomial and thus far more practical than other previous approaches which rely on orbit partitions that are GI-complete to find. The fact that k-WL can be used in this manner follows from the recently introduced notion of k-local LPs and their relation to Sherali-Adams relaxations of graph automorphisms. Finally, for relational models such as Markov logic networks, the benefits of our approach are even more dramatic, as we can discover symmetries in the original domain graph, as opposed to running lifting on the much larger grounded model.
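The k = 1 case of the Weisfeiler-Lehman algorithm (classical color refinement) already illustrates the idea: it runs in polynomial time and over-approximates the orbit partition that exact, GI-hard methods would compute, so nodes ending with the same color are candidates for lifting. A self-contained sketch:

```python
def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman (color refinement).

    adj: {node: [neighbors]}. Returns a coloring of the nodes; nodes
    with equal colors are candidate symmetric nodes. This refines
    toward (but does not exactly compute) the automorphism orbits.
    """
    color = {v: 0 for v in adj}
    for _ in range(rounds):
        # new signature = (own color, sorted multiset of neighbor colors)
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # canonicalize signatures to small integers
        table = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: table[sig[v]] for v in adj}
    return color
```

On a 3-node path, the two endpoints receive the same color and the middle node a different one, matching the graph's actual symmetry.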
Filtering with abstract particles
 In International Conference on Machine Learning (ICML), 2014.
Abstract

Cited by 3 (1 self)
By using particles, beam search and sequential Monte Carlo can approximate distributions in an extremely flexible manner. However, they can suffer from sparsity and inadequate coverage on large state spaces. We present a new filtering method for discrete spaces that addresses this issue by using “abstract particles,” each of which represents an entire region of state space. These abstract particles are combined into a hierarchical decomposition, yielding a compact and flexible representation. Empirically, our method outperforms beam search and sequential Monte Carlo on both a text reconstruction task and a multiple object tracking task.
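A toy illustration of the region idea, showing one filtering step. Each particle is a whole set of states rather than a single state, so a small beam can still cover much of the space; the paper's hierarchical decomposition is considerably richer than this flat beam of regions, and the interface below is my own:

```python
def abstract_filter_step(particles, transition, likelihood, beam=4):
    """One update with region-valued ("abstract") particles.

    particles: list of (set_of_states, weight). `transition(s)` returns
    the set of successor states of s; `likelihood(s)` is the
    observation likelihood of state s.
    """
    # propagate: image of each region under the transition relation
    moved = [({t for s in region for t in transition(s)}, w)
             for region, w in particles]
    # reweight each region by its total observation likelihood
    weighted = [(r, w * sum(likelihood(s) for s in r)) for r, w in moved]
    weighted = [(r, w) for r, w in weighted if w > 0]
    # keep the `beam` heaviest regions and renormalize
    weighted.sort(key=lambda rw: -rw[1])
    kept = weighted[:beam]
    z = sum(w for _, w in kept)
    return [(r, w / z) for r, w in kept]
```

A single region particle here tracks every state an ordinary particle filter would need many samples to cover.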
Tractable Markov Logic
Abstract

Cited by 1 (0 self)
Tractable subsets of first-order logic are a central topic in AI research. Several of these formalisms have been used as the basis for first-order probabilistic languages. However, these are intractable, losing the original motivation. Here we propose the first non-trivially tractable first-order probabilistic language. It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity. We call it TML (Tractable Markov Logic). We show that TML knowledge bases allow for efficient inference even when the corresponding graphical models have very high treewidth. We also show how probabilistic inheritance, default reasoning, and other inference patterns can be carried out in TML. TML opens up the prospect of efficient large-scale first-order probabilistic inference.
Exploiting Determinism to Scale Relational Inference
Abstract
One key challenge in statistical relational learning (SRL) is scalable inference. Unfortunately, most real-world problems in SRL have expressive models that translate into large grounded networks, representing a bottleneck for any inference method and weakening its scalability. In this paper we introduce Preference Relaxation (PR), a two-stage strategy that uses the determinism present in the underlying model to improve the scalability of relational inference. The basic idea of PR is that if the underlying model involves mandatory (i.e. hard) constraints as well as preferences (i.e. soft constraints) then it is potentially wasteful to allocate memory for all constraints in advance when performing inference. To avoid this, PR starts by relaxing preferences and performing inference with hard constraints only. It then removes variables that violate hard constraints, thereby avoiding irrelevant computations involving preferences. In addition it uses the removed variables to enlarge the evidence database. This reduces the effective size of the grounded network. Our approach is general and can be applied to various inference methods in relational domains. Experiments on real-world applications show how PR substantially scales relational inference with a minor impact on accuracy.
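The two-stage control flow described above can be sketched as follows. The interface is a deliberate simplification (unary hard constraints, a caller-supplied soft inference routine) of the grounded relational networks the paper actually targets:

```python
def preference_relaxation(domains, hard_ok, soft_score, infer):
    """Two-stage Preference Relaxation (PR) sketch.

    domains: {var: [candidate values]}. hard_ok(var, val) tests a
    (here unary, for simplicity) hard constraint; soft preferences are
    handled only by the caller-supplied `infer` routine in stage 2.
    """
    # Stage 1: relax preferences -- prune each domain with the hard
    # constraints alone, and move newly-determined variables into the
    # evidence database, shrinking the effective network.
    pruned = {v: [x for x in vals if hard_ok(v, x)]
              for v, vals in domains.items()}
    evidence = {v: vals[0] for v, vals in pruned.items() if len(vals) == 1}
    open_vars = {v: vals for v, vals in pruned.items() if len(vals) > 1}
    # Stage 2: run soft inference over the reduced network only.
    return infer(open_vars, evidence, soft_score)
```

In a real system the savings come from never grounding soft constraints over the variables eliminated in stage 1.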
Representations and Algorithms
, 2015
Abstract
This thesis develops probabilistic programming as a productive metaphor for understanding cognition, both with respect to mental representations and the manipulation of such representations. In the first half of the thesis, I demonstrate the representational power of probabilistic programs in the domains of concept learning and social reasoning. I provide examples of richly structured concepts, defined in terms of systems of relations, subparts, and recursive embeddings, that are naturally expressed as programs and show initial experimental evidence that they match human generalization patterns. I then proceed to models of reasoning about reasoning, a domain where the expressive power of probabilistic programs is necessary to formalize our intuitive domain understanding due to the fact that, unlike previous formalisms, probabilistic programs allow
Knowledge Extraction and Joint Inference Using Tractable Markov Logic
Abstract
The development of knowledge base creation systems has mainly focused on information extraction without considering how to effectively reason over their databases of facts. One reason for this is that the inference required to learn a probabilistic knowledge base from text at any realistic scale is intractable. In this paper, we propose formulating the joint problem of fact extraction and probabilistic model learning in terms of Tractable Markov Logic (TML), a subset of Markov logic in which inference is low-order polynomial in the size of the knowledge base. Using TML, we can tractably extract new information from text while simultaneously learning a probabilistic knowledge base. We will also describe a testbed for our proposal: creating a biomedical knowledge base and making it available for querying on the Web.
KnowRob: A knowledge processing infrastructure for cognition-enabled robots
© The Author(s) 2013. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
Abstract
KnowRob: A knowledge processing infrastructure for cognition-enabled robots