Results 11-20 of 144
New advances in logic-based probabilistic modeling by PRISM
 Probabilistic Inductive Logic Programming
, 2008
"... Abstract. We review a logicbased modeling language PRISM and report recent developments including belief propagation by the generalized insideoutside algorithm and generative modeling with constraints. The former implies PRISM subsumes belief propagation at the algorithmic level. We also compare t ..."
Abstract

Cited by 19 (6 self)
 Add to MetaCart
(Show Context)
We review the logic-based modeling language PRISM and report recent developments, including belief propagation by the generalized inside-outside algorithm and generative modeling with constraints. The former implies that PRISM subsumes belief propagation at the algorithmic level. We also compare the performance of PRISM with state-of-the-art systems in statistical natural language processing and in probabilistic inference for Bayesian networks, and show that PRISM is reasonably competitive.
A top-down interpreter for LPAD and CP-logic
 In Congress of the Italian Association for Artificial Intelligence. Number 4733 in LNAI
"... are two different but related languages for expressing probabilistic information in logic programming. The paper presents a top down interpreter for computing the probability of a query from a program in one of these two languages when the program is acyclic. The algorithm is based on the one availa ..."
Abstract

Cited by 17 (12 self)
 Add to MetaCart
(Show Context)
LPAD and CP-logic are two different but related languages for expressing probabilistic information in logic programming. The paper presents a top-down interpreter for computing the probability of a query from a program in one of these two languages when the program is acyclic. The algorithm is based on the one available for ProbLog. The performance of the algorithm is compared with that of a Bayesian reasoner and of the ProbLog interpreter. On programs that have a small grounding, the Bayesian reasoner is more scalable, but programs with a large grounding require the top-down interpreter. The comparison with ProbLog shows that, even if the added expressiveness effectively requires more computational resources, the top-down interpreter can still solve problems of significant size.
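As a rough illustration of what such an interpreter computes, the following Python sketch derives a query's probability from its proofs by enumerating possible worlds. The facts, proofs, and probabilities are invented for illustration; actual interpreters avoid this exponential enumeration, e.g. by compiling explanations to BDDs.

```python
from itertools import product

# Hypothetical probabilistic facts and their probabilities (illustrative).
facts = {"f1": 0.4, "f2": 0.3}

# Proofs of the query: each proof succeeds when all of its facts are true.
proofs = [{"f1"}, {"f2"}]

def query_prob(facts, proofs):
    """Exact probability that at least one proof succeeds, obtained by
    enumerating every truth assignment (possible world) over the facts."""
    names = list(facts)
    total = 0.0
    for world in product([True, False], repeat=len(names)):
        truth = dict(zip(names, world))
        weight = 1.0
        for name in names:
            weight *= facts[name] if truth[name] else 1.0 - facts[name]
        if any(all(truth[f] for f in proof) for proof in proofs):
            total += weight
    return total

# query_prob(facts, proofs) = P(f1 or f2) = 1 - 0.6 * 0.7 = 0.58
```

Note that simply summing the proof probabilities (0.4 + 0.3) would overcount worlds where both proofs succeed, which is exactly the disjoint-sum problem these interpreters must handle.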
Inference in probabilistic logic programs using weighted CNFs.
, 2011
"... Abstract Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is ..."
Abstract

Cited by 16 (10 self)
 Add to MetaCart
(Show Context)
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the probabilistic logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting. To solve such tasks, we employ state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state-of-the-art in probabilistic logic programming.
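To make the reduction concrete, here is a minimal brute-force weighted model counter over a hypothetical two-variable CNF (the clause set and weights below are invented; practical systems use knowledge compilation or component caching rather than enumeration):

```python
from itertools import product

# Hypothetical weighted CNF: one clause (a or b); a literal is (variable, polarity).
clauses = [[("a", True), ("b", True)]]

# Literal weights; for probabilistic facts the two weights of a variable sum to 1.
weights = {("a", True): 0.5, ("a", False): 0.5,
           ("b", True): 0.2, ("b", False): 0.8}

def wmc(clauses, weights):
    """Weighted model count: the sum, over satisfying assignments, of the
    product of the weights of the literals made true."""
    variables = sorted({v for (v, _) in weights})
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if all(any(assign[v] == pol for (v, pol) in clause) for clause in clauses):
            weight = 1.0
            for v in variables:
                weight *= weights[(v, assign[v])]
            total += weight
    return total

# With these weights, wmc(clauses, weights) equals P(a or b) = 1 - 0.5 * 0.8 = 0.6
```

Marginals and MAP then become weighted model counting (or weighted maximization) over the same CNF with the query or evidence conjoined as extra clauses.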
Tabling and Answer Subsumption for Reasoning on Logic Programs with Annotated Disjunctions
 In Technical Communications of the International Conference on Logic Programming. LIPIcs, vol. 7. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik
"... Abstract Probabilistic Logic Programming is an active field of research, with many proposals for languages, semantics and reasoning algorithms. One such proposal, Logic Programming with Annotated Disjunctions (LPADs) represents probabilistic information in a sound and simple way. This paper present ..."
Abstract

Cited by 16 (9 self)
 Add to MetaCart
(Show Context)
Probabilistic Logic Programming is an active field of research, with many proposals for languages, semantics and reasoning algorithms. One such proposal, Logic Programming with Annotated Disjunctions (LPADs), represents probabilistic information in a sound and simple way. This paper presents the algorithm "Probabilistic Inference with Tabling and Answer subsumption" (PITA) for computing the probability of queries. Answer subsumption is a feature of tabling that allows different answers for the same subgoal to be combined when a partial order can be defined over them. We have applied it in our case since probabilistic explanations (stored as BDDs in PITA) possess a natural lattice structure. PITA has been implemented in XSB and compared with ProbLog, cplint and CVE. The results show that, in almost all cases, PITA is able to solve larger problems and is faster than competing algorithms.
Programming with Personalized PageRank: A Locally Groundable First-Order Probabilistic Logic
"... Many informationmanagement tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic firstorder logic. However, most probabilistic firstorder logics are not efficient enough for realisticallysize ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
(Show Context)
Many information-management tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic first-order logic. However, most probabilistic first-order logics are not efficient enough for realistically-sized instances of these tasks. One key problem is that queries are typically answered by "grounding" the query, i.e., mapping it to a propositional representation and then performing propositional inference; with a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate "local" grounding: in particular, every query Q can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well on an entity resolution task, a classification task, and a joint inference task; that the cost of inference is independent of database size; and that speedup in learning is possible by multithreading.
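The underlying primitive can be sketched as plain personalized PageRank by power iteration; the toy graph below is invented, and the paper's "local" grounding uses an approximate variant that touches only a small neighbourhood of the seed rather than the whole graph.

```python
# Toy personalized PageRank by power iteration with restart to a seed node.
# The directed graph is hypothetical, purely for illustration.
graph = {"q": ["a", "b"], "a": ["b"], "b": ["a"]}

def personalized_pagerank(graph, seed, alpha=0.15, iters=100):
    """Each step, every node passes (1 - alpha) of its mass to its
    out-neighbours, and an alpha share of the total mass restarts at the seed."""
    nodes = sorted(set(graph) | {n for outs in graph.values() for n in outs})
    rank = {n: 1.0 if n == seed else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: (alpha if n == seed else 0.0) for n in nodes}
        for n in nodes:
            outs = graph.get(n, [])
            if outs:
                share = (1 - alpha) * rank[n] / len(outs)
                for m in outs:
                    nxt[m] += share
            else:
                nxt[seed] += (1 - alpha) * rank[n]  # dangling mass restarts
        rank = nxt
    return rank

ranks = personalized_pagerank(graph, "q")  # scores concentrate near the seed
```

In the local-grounding setting, the high-rank nodes around the query seed form the small graph on which propositional inference is then performed.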
ILP turns 20: Biography and future challenges
 MACH LEARN
, 2011
"... Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characteri ..."
Abstract

Cited by 14 (10 self)
 Add to MetaCart
Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography, this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel and challenging real-world applications. Lastly, by projection, we suggest directions for research which will help the subject come of age.
Meta-Interpretive Learning of Higher-Order Dyadic Datalog: Predicate Invention revisited
"... In recent years Predicate Invention has been underexplored within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular an ..."
Abstract

Cited by 14 (10 self)
 Add to MetaCart
In recent years, Predicate Invention has been under-explored within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of abduction with respect to a meta-interpreter. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. In this paper we generalise the approach of Meta-Interpretive Learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H^2_2 has universal Turing expressivity, though H^2_2 is decidable given a finite signature. Additionally, we show that Knuth-Bendix ordering of the hypothesis space together with logarithmic clause bounding allows our dyadic MIL implementation MetagolD to PAC-learn minimal-cardinality H^2_2 definitions. This result is consistent with our experiments, which indicate that MetagolD efficiently learns compact H^2_2 definitions involving predicate invention for robotic strategies and higher-order concepts in the NELL language-learning domain.
Propositionalizing the EM algorithm by BDDs
"... Abstract. We propose an EM algorithm working on binary decision diagrams (BDDs). It opens a way to applying BDDs to statistical inference in general and machine learning in particular. We also present the complexity analysis of noisyOR models. 1 ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
(Show Context)
We propose an EM algorithm working on binary decision diagrams (BDDs). It opens a way to applying BDDs to statistical inference in general and machine learning in particular. We also present the complexity analysis of noisy-OR models.
Revising Probabilistic Prolog Programs
"... Abstract. The ProbLog (probabilistic prolog) language has been introduced in [1], where various algorithms have been developed for solving and approximating ProbLog queries. Here, we define and study the problem of revising ProbLog theories from examples. 1 ProbLog: Probabilistic Prolog A ProbLog pr ..."
Abstract

Cited by 12 (5 self)
 Add to MetaCart
(Show Context)
The ProbLog (probabilistic Prolog) language has been introduced in [1], where various algorithms have been developed for solving and approximating ProbLog queries. Here, we define and study the problem of revising ProbLog theories from examples.

1 ProbLog: Probabilistic Prolog
A ProbLog program consists, as Prolog, of a set of definite clauses. However, in ProbLog every clause ci is labeled with the probability pi that it is true.
Example 1. Within bibliographic data analysis, the similarity structure among items can improve information retrieval results. Consider a collection of papers {a, b, c, d} and some pairwise similarities similar(a, b), e.g., based on keyword analysis. Two items X and Y are related(X, Y) if they are similar (such as a and c) or if X is similar to some item Z which is related to Y. Uncertainty in the data and in the inference can elegantly be represented by the attached probabilities:
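The semantics of this example can be sketched in Python: P(related(x, y)) is the probability that y is reachable from x in a random graph where each similar edge is kept independently with its probability. The specific pairs and probabilities below are invented; ProbLog itself avoids full world enumeration by working on compact representations of the explanations.

```python
from itertools import product

# Invented probabilities for the pairwise similarity facts of the example.
similar = {("a", "b"): 0.8, ("b", "c"): 0.6, ("a", "c"): 0.5}

def prob_related(similar, x, y):
    """P(related(x, y)), reading related as reachability over similar edges:
    enumerate every possible world (subset of edges), weight it by the
    product of edge probabilities, and sum the worlds where y is reachable."""
    edges = list(similar)
    total = 0.0
    for world in product([True, False], repeat=len(edges)):
        weight = 1.0
        for edge, kept in zip(edges, world):
            weight *= similar[edge] if kept else 1.0 - similar[edge]
        present = [e for e, kept in zip(edges, world) if kept]
        reached, frontier = {x}, [x]
        while frontier:  # depth-first reachability in this world
            u = frontier.pop()
            for (s, t) in present:
                if s == u and t not in reached:
                    reached.add(t)
                    frontier.append(t)
        if y in reached:
            total += weight
    return total

# P(related(a, c)) = P(ac) + P(ab)P(bc) - P(ac)P(ab)P(bc) = 0.74
```

Revision, the topic of the paper, then asks how to change the clauses or their probabilities so that such query probabilities fit a set of examples.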
Experimentation of an expectation maximization algorithm for probabilistic logic programs
 Intelligenza Artificiale
, 2012
"... Statistical Relational Learning and Probabilistic Inductive Logic Programming are two emerging fields that use representation languages able to combine logic and probability. In the field of Logic Programming, the distribution semantics is one of the prominent approaches for representing uncertainty ..."
Abstract

Cited by 11 (5 self)
 Add to MetaCart
(Show Context)
Statistical Relational Learning and Probabilistic Inductive Logic Programming are two emerging fields that use representation languages able to combine logic and probability. In the field of Logic Programming, the distribution semantics is one of the prominent approaches for representing uncertainty and underlies many languages such as ICL, PRISM, ProbLog and LPADs. Learning the parameters for such languages requires an Expectation Maximization algorithm, since their equivalent Bayesian networks contain hidden variables. EMBLEM (EM over BDDs for probabilistic Logic programs Efficient Mining) is an EM algorithm for languages following the distribution semantics that computes expectations directly on the Binary Decision Diagrams that are built for inference. In this paper we present experiments comparing EMBLEM with LeProbLog, Alchemy, CEM, RIB and LFI-ProbLog on six real-world datasets. The results show that EMBLEM is able to solve problems on which the other systems fail, and it often achieves significantly higher areas under the Precision-Recall and ROC curves in a similar amount of time.
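Why EM is needed here can be shown on the simplest possible case. The toy model below is invented for illustration (EMBLEM itself computes these expectations on BDDs): a query q succeeds iff a hidden probabilistic fact f with unknown probability p holds, or a background fact b with known probability 0.3 holds; only q's outcome is observed, so f's truth value is a hidden variable.

```python
# Hand-rolled EM for a one-parameter probabilistic program (illustrative).
B = 0.3                                  # known background probability
observations = [True] * 7 + [False] * 3  # invented observed outcomes of q

def em_estimate(observations, b=B, iters=100, p=0.5):
    for _ in range(iters):
        # E-step: E[f | q]. Since f = true forces q = true,
        # P(f | q=true) = p / (p + (1 - p) * b) and P(f | q=false) = 0.
        expected = [p / (p + (1 - p) * b) if q else 0.0 for q in observations]
        # M-step: the new p is the average expected count of f
        p = sum(expected) / len(observations)
    return p

p_hat = em_estimate(observations)  # converges to the MLE 4/7 for this data
```

With many interdependent parameters and shared hidden facts, these per-example expectations are exactly what EMBLEM reads off the BDDs built for inference.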