Results 1–10 of 41
Runtime composite event recognition
In Proceedings of the International Conference on Distributed Event-Based Systems (DEBS). ACM, 2012
Cited by 13 (6 self)
Abstract: Events are particularly important pieces of knowledge, as they represent activities of special significance within an organisation: the automated recognition of events is of utmost importance. We present RTEC, an Event Calculus dialect for runtime event recognition, and its Prolog implementation. RTEC includes a number of novel techniques allowing for efficient runtime recognition, scalable to large data streams. It can be used in applications where data might arrive with a delay from, or might be revised by, the underlying event sources. We evaluate RTEC using a real-world application.
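RTEC itself is a Prolog system; purely as a language-neutral illustration of the Event Calculus idea it builds on (events initiate and terminate fluents, and recognition amounts to computing the maximal intervals during which a fluent holds), here is a minimal Python sketch. The function name and the interval-as-pair representation are assumptions for illustration, not RTEC's actual API.

```python
def holds_intervals(initiations, terminations, horizon):
    """Compute maximal intervals [start, end) during which a fluent holds,
    given timestamps of initiating and terminating events.
    Illustrative only: RTEC additionally handles windowing, delayed
    event arrival, and retraction of events."""
    events = sorted([(t, 'init') for t in initiations] +
                    [(t, 'term') for t in terminations])
    intervals, start = [], None
    for t, kind in events:
        if kind == 'init' and start is None:
            start = t                        # fluent becomes true
        elif kind == 'term' and start is not None:
            intervals.append((start, t))     # fluent ceases to hold
            start = None
    if start is not None:
        intervals.append((start, horizon))   # still holding at the horizon
    return intervals

# Initiated at 1 and 8, terminated at 5 -> holds on [1,5) and [8,10).
spans = holds_intervals([1, 8], [5], 10)
```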
F.: Experimentation of an expectation maximization algorithm for probabilistic logic programs
Intelligenza Artificiale, 2012
Cited by 11 (5 self)
Abstract: Statistical Relational Learning and Probabilistic Inductive Logic Programming are two emerging fields that use representation languages able to combine logic and probability. In the field of Logic Programming, the distribution semantics is one of the prominent approaches for representing uncertainty and underlies many languages, such as ICL, PRISM, ProbLog and LPADs. Learning the parameters for such languages requires an Expectation Maximization algorithm, since their equivalent Bayesian networks contain hidden variables. EMBLEM (EM over BDDs for probabilistic Logic programs Efficient Mining) is an EM algorithm for languages following the distribution semantics that computes expectations directly on the Binary Decision Diagrams that are built for inference. In this paper we present experiments comparing EMBLEM with LeProbLog, Alchemy, CEM, RIB and LFI-ProbLog on six real-world datasets. The results show that EMBLEM is able to solve problems on which the other systems fail, and it often achieves significantly higher areas under the Precision-Recall and ROC curves in a similar time.
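Under the distribution semantics these systems target, each probabilistic fact is an independent Boolean choice, and the probability of a query is the total weight of the possible worlds that entail it. A brute-force Python sketch of that definition follows; the function names and the toy rule are hypothetical, and real systems compile to BDDs precisely to avoid this exponential enumeration.

```python
from itertools import product

def query_probability(prob_facts, entails):
    """Exact inference under the distribution semantics by enumerating
    possible worlds: each probabilistic fact is independently true with
    its probability; P(query) sums the weights of entailing worlds."""
    facts = list(prob_facts)
    total = 0.0
    for world in product([True, False], repeat=len(facts)):
        weight = 1.0
        for (fact, prob), truth in zip(facts, world):
            weight *= prob if truth else (1 - prob)
        chosen = {f for (f, _), t in zip(facts, world) if t}
        if entails(chosen):
            total += weight
    return total

# Toy program: the query succeeds if a, or if both b and c, hold --
# a placeholder rule, not taken from any of the cited systems.
facts = [('a', 0.3), ('b', 0.5), ('c', 0.4)]
p = query_probability(facts, lambda w: 'a' in w or {'b', 'c'} <= w)
```

By hand: P = P(a) + P(not a) * P(b) * P(c) = 0.3 + 0.7 * 0.5 * 0.4 = 0.44, which the enumeration reproduces.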
Well-definedness and efficient inference for probabilistic logic programming under the distribution semantics. Theory and Practice of Logic Programming
Dyna: Extending Datalog for Modern AI
Cited by 8 (0 self)
Abstract: Modern statistical AI systems are quite large and complex; this interferes with research, development, and education. We point out that most of the computation involves database-like queries and updates on complex views of the data. Specifically, recursive queries look up and aggregate relevant or potentially relevant values. If the results of these queries are memoized for reuse, the memos may need to be updated through change propagation. We propose a declarative language, which generalizes Datalog, to support this work in a generic way. Through examples, we show that a broad spectrum of AI algorithms can be concisely captured by writing down systems of equations in our notation. Many strategies could be used to actually solve those systems. Our examples motivate certain extensions to Datalog, which are connected to functional and object-oriented programming paradigms.
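The "systems of equations with aggregation" idea can be sketched in plain Python: a shortest-path equation in roughly Dyna's spirit, dist(V) min= dist(U) + w(U, V) (notation paraphrased, not verbatim from the paper), solved to fixpoint by propagating changes into a memo table.

```python
def shortest_paths(edges, source):
    """Solve dist(v) = min over edges (u, v, w) of dist(u) + w to fixpoint
    by repeated relaxation (Bellman-Ford style). The dict 'dist' plays the
    role of the memo table that change propagation keeps up to date.
    Assumes no negative-weight cycles, otherwise this never terminates."""
    dist = {source: 0.0}
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            if u in dist and dist[u] + w < dist.get(v, float('inf')):
                dist[v] = dist[u] + w   # update the memo; propagate
                changed = True
    return dist

# Hypothetical three-node graph for illustration.
edges = [('s', 'a', 2.0), ('a', 'b', 3.0), ('s', 'b', 7.0)]
dist_from_s = shortest_paths(edges, 's')
```

The direct edge s -> b of weight 7 is beaten by the path through a (2 + 3 = 5), which the fixpoint computation discovers on its second pass.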
Event Processing Under Uncertainty
Cited by 7 (1 self)
Abstract: Big data is recognized as one of the three technology trends at the leading edge that a CEO cannot afford to overlook in 2012. Big data is characterized by volume, velocity, variety and veracity (“data in doubt”). Like other big data applications, many of the emerging event processing applications must process events that arrive from sources such as sensors and social media, which have inherent uncertainties associated with them. Consider, for example, the possibility of incomplete data streams and streams including inaccurate data. In this tutorial we classify the different types of uncertainty found in event processing applications and discuss the implications for event representation and reasoning. One area of research in which uncertainty has been studied is Artificial Intelligence. We therefore discuss the main Artificial Intelligence-based event processing systems that support probabilistic reasoning. The presented approaches are illustrated using an example concerning crime detection.
The Principles and Practice of Probabilistic Programming
Cited by 6 (0 self)
Abstract: Probabilities describe degrees of belief, and probabilistic inference describes rational reasoning under uncertainty. It is no wonder, then, that probabilistic models have exploded onto the scene of modern artificial intelligence, cognitive science, and applied statistics: these are all sciences of inference under uncertainty. But as probabilistic models have become more sophisticated, the tools to formally describe them and to perform probabilistic inference have wrestled with new complexity. Just as programming beyond the simplest algorithms requires tools for abstraction and composition, complex probabilistic modeling requires new progress in model representation: probabilistic programming languages. These languages provide compositional means for describing complex probability distributions; implementations of these languages provide generic inference engines: tools for performing efficient probabilistic inference.
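A probabilistic program in miniature: write a generative model as ordinary code, condition on an observation, and query a posterior. The two-coin example below is invented for illustration and uses plain Python with rejection sampling rather than any particular probabilistic programming language.

```python
import random

def flip(p):
    """A Bernoulli draw: True with probability p."""
    return random.random() < p

def model():
    """Hypothetical model: two fair coins; we observe that at least one
    came up heads and ask how likely the first coin was heads."""
    a, b = flip(0.5), flip(0.5)
    return a, (a or b)          # (latent value, observed condition)

def rejection_query(model, samples=100_000, seed=0):
    """Generic conditioning by rejection: run the model forward and keep
    only the runs consistent with the observation."""
    random.seed(seed)
    hits = kept = 0
    for _ in range(samples):
        value, observed = model()
        if observed:
            kept += 1
            hits += value
    return hits / kept

estimate = rejection_query(model)
```

The exact posterior P(first coin heads | at least one head) is 2/3, and the estimate converges there. The appeal of probabilistic programming languages is that this separation between model and generic inference engine scales to far richer models than rejection sampling can handle.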
E.: Probabilistic Datalog+/- under the Distribution Semantics. International Workshop on Description Logics
Cited by 6 (4 self)
Abstract: We apply the distribution semantics for probabilistic ontologies (named DISPONTE) to the Datalog+/- language. In DISPONTE the formulas of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the formula, while the statistical probability considers the populations to which the formula is applied. The probability of a query is defined in terms of a finite set of finite explanations for the query, where an explanation is a set of possibly instantiated formulas that is sufficient for entailing the query. The probability of a query is computed from the set of explanations by making them mutually exclusive. We also compare the DISPONTE approach for Datalog+/- ontologies with that of Probabilistic Datalog+/-, where an ontology is composed of a Datalog+/- theory whose formulas are associated with an assignment of values for the random variables of a companion Markov Logic Network.
The Magic of Logical Inference in Probabilistic Programming
Under consideration for publication in Theory and Practice of Logic Programming, 2011
Cited by 6 (0 self)
Abstract: Today, many different probabilistic programming languages exist, and even more inference mechanisms for these languages. Still, most languages based on logic programming use backward reasoning based on SLD resolution for inference. While these methods are typically computationally efficient, they often can handle neither infinite and/or continuous distributions, nor evidence. To overcome these limitations, we introduce distributional clauses, a variation and extension of Sato’s distribution semantics. We also contribute a novel approximate inference method that integrates forward reasoning with importance sampling, a well-known technique for probabilistic inference. To achieve efficiency, we integrate two logic programming techniques to direct forward sampling. Magic sets are used to focus on relevant parts of the program, while the integration of backward reasoning allows one to identify and avoid regions of the sample space that are inconsistent with the evidence.
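The forward-reasoning-plus-importance-sampling combination can be illustrated with likelihood weighting on a continuous variable, exactly the kind of model that pure SLD-based backward reasoning struggles with. The model below (a standard Gaussian prior observed through unit Gaussian noise) is a made-up example, not taken from the paper, and the code does not attempt magic sets or backward pruning.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of Normal(mu, sigma) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_mean(y_obs, n=100_000, seed=1):
    """Likelihood weighting: sample x forward from its prior Normal(0, 1),
    then weight each sample by the likelihood of the evidence
    y_obs = x + Normal(0, 1) noise, instead of rejecting samples."""
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)       # forward-sample the prior
        w = normal_pdf(y_obs, x, 1.0)    # weight by evidence likelihood
        num += w * x
        den += w
    return num / den

est = posterior_mean(1.0)
```

For this conjugate Gaussian setup the exact posterior mean given y = 1 is 0.5, so the weighted estimate should land close to it. With continuous evidence, rejection would discard every sample; weighting is what makes the evidence usable at all.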
kLog: A Language for Logical and Relational Learning with Kernels
Cited by 4 (3 self)
Abstract: kLog is a logical and relational language for kernel-based learning. It allows users to specify logical and relational learning problems at a high level in a declarative way. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming and deductive databases (Prolog and Datalog), and graph kernels. kLog is a statistical relational learning system but, unlike other statistical relational learning models, it does not represent a probability distribution directly. It is rather a kernel-based approach to learning that employs features derived from a grounded entity/relationship diagram. These features are derived using a novel technique called graphicalization, which is used to transform the relational representations into graph-based representations. Once the graphs are computed, kLog employs graph kernels for defining feature spaces. kLog can use numerical and symbolic data, background knowledge in the form of Prolog or Datalog programs (as in inductive logic programming systems), and several statistical procedures can be used to fit the model parameters. The kLog framework can, in principle, be applied to tackle the same range ...
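Once graphicalization has turned relational interpretations into labeled graphs, any graph kernel can define the feature space. As a deliberately minimal stand-in for the richer kernels kLog actually employs, here is a node-label histogram kernel in Python; the graphs and labels are invented for illustration.

```python
from collections import Counter

def label_histogram_kernel(g1, g2):
    """Simplest possible graph kernel: the inner product of the two
    graphs' node-label histograms. Edges are ignored entirely here;
    real graph kernels compare structure as well as labels."""
    h1, h2 = Counter(g1.values()), Counter(g2.values())
    return sum(h1[label] * h2[label] for label in h1)

# Graphs represented as node -> label maps (hypothetical example).
g1 = {1: 'person', 2: 'person', 3: 'movie'}
g2 = {1: 'person', 2: 'movie', 3: 'movie'}
k = label_histogram_kernel(g1, g2)   # 2*1 (person) + 1*2 (movie) = 4
```

Any learner that consumes a kernel matrix (an SVM, kernel ridge regression, and so on) can then be trained on top, which is the division of labor the abstract describes.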
An algebraic Prolog for reasoning about possible worlds
In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI), 2011
Cited by 4 (2 self)
Abstract: We introduce aProbLog, a generalization of the probabilistic logic programming language ProbLog. An aProbLog program consists of a set of definite clauses and a set of algebraic facts; each such fact is labeled with an element of a semiring. A wide variety of labels is possible, ranging from probability values to reals (representing costs or utilities), polynomials, Boolean functions or data structures. The semiring is then used to calculate labels of possible worlds and of queries. We formally define the semantics of aProbLog and study the aProbLog inference problem, which is concerned with computing the label of a query. Two conditions are introduced that allow one to simplify the inference problem, resulting in four different algorithms and settings. Representative basic problems for each of these four settings are: is there a possible world where a query is true (SAT), how many such possible worlds are there (#SAT), what is the probability of a query being true (PROB), and what is the most likely world where the query is true (MPE). We further illustrate these settings with a number of tasks requiring more complex semirings.
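The semiring parameterization can be sketched by brute-force enumeration: one evaluator, handed different semiring operations and fact labels, yields PROB with the probability semiring and #SAT with the counting semiring. Function names and the toy query are assumptions for illustration; aProbLog itself evaluates compiled representations rather than explicit worlds.

```python
from itertools import product

def semiring_eval(facts, entails, zero, one, plus, times, label, neg_label):
    """aProbLog-flavoured inference by enumeration: combine fact labels
    with 'times' inside a world, then 'plus' the labels of the worlds
    that entail the query."""
    total = zero
    for world in product([True, False], repeat=len(facts)):
        w = one
        for f, t in zip(facts, world):
            w = times(w, label(f) if t else neg_label(f))
        if entails({f for f, t in zip(facts, world) if t}):
            total = plus(total, w)
    return total

facts = ['a', 'b']
probs = {'a': 0.3, 'b': 0.5}
query = lambda world: 'a' in world or 'b' in world   # query: a or b

# PROB: probability semiring (+, *) with each fact labeled by its probability.
prob = semiring_eval(facts, query, 0.0, 1.0,
                     lambda x, y: x + y, lambda x, y: x * y,
                     lambda f: probs[f], lambda f: 1 - probs[f])
# #SAT: counting semiring -- every literal is labeled 1, so worlds are counted.
count = semiring_eval(facts, query, 0, 1,
                      lambda x, y: x + y, lambda x, y: x * y,
                      lambda f: 1, lambda f: 1)
```

Here prob comes out to 0.65 (three of the four worlds entail a or b, weighted 0.15 + 0.15 + 0.35) and count to 3, from the same traversal with different semirings, which is the unification the paper formalizes.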