Bucket elimination: A unifying framework for reasoning (1999)

by R Dechter
Venue: Artif. Intell.
Results 11 - 20 of 298

Probabilistic Theorem Proving

by Vibhav Gogate, Pedro Domingos
"... Many representation schemes combining firstorder logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logic ..."
Abstract - Cited by 70 (23 self)
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, the generalization and unification of the two, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
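The reduction at the heart of this approach bottoms out in weighted model counting. For reference, a minimal brute-force sketch of propositional weighted model counting (not the authors' lifted algorithm, which works at the first-order level precisely to avoid this kind of enumeration; the function name and DIMACS-style encoding are illustrative):

from itertools import product

def weighted_model_count(clauses, weights):
    # clauses: list of lists of signed ints (DIMACS style, e.g. [1, -2]);
    # weights: var -> (weight_if_false, weight_if_true).
    variables = sorted({abs(l) for c in clauses for l in c})
    total = 0.0
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v in variables:
                w *= weights[v][1] if assign[v] else weights[v][0]
            total += w
    return total

# P(a or b) for independent a, b with P(a)=0.3, P(b)=0.6:
print(weighted_model_count([[1, 2]], {1: (0.7, 0.3), 2: (0.4, 0.6)}))  # 0.72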

Context specific multiagent coordination and planning with factored MDPs

by Carlos Guestrin - In AAAI , 2002
"... We present an algorithm for coordinated decision making in cooperative multiagent settings, where the agents ’ value function can be represented as a sum of context-specific value rules. The task of finding an optimal joint action in this setting leads to an algorithm where the coordination structur ..."
Abstract - Cited by 64 (3 self)
We present an algorithm for coordinated decision making in cooperative multiagent settings, where the agents' value function can be represented as a sum of context-specific value rules. The task of finding an optimal joint action in this setting leads to an algorithm where the coordination structure between agents depends on the current state of the system and even on the actual numerical values assigned to the value rules. We apply this framework to the task of multiagent planning in dynamic systems, showing how a joint value function of the associated Markov Decision Process can be approximated as a set of value rules using an efficient linear programming algorithm. The agents then apply the coordination graph algorithm at each iteration of the process to decide on the highest-value joint action, potentially leading to a different coordination pattern at each step of the plan.

Citation Context

...s never larger and in many cases exponentially smaller than the complexity bounds on the table-based coordination graph in GKP, which, in turn, was exponential only in the induced width of the graph (Dechter 1999). However, the computational costs involved in managing sets of rules usually imply that the computational advantage of the rule-based approach will only manifest in problems that possess a fair amou...
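The induced width mentioned here is the largest number of neighbors any variable has at the moment it is eliminated, after earlier eliminations have connected its remaining neighbors. A minimal sketch of computing it for a given elimination ordering (the graph representation and names are illustrative, not from either paper):

def induced_width(neighbors, ordering):
    # neighbors: dict node -> set of adjacent nodes; ordering: elimination order.
    adj = {v: set(ns) for v, ns in neighbors.items()}
    width = 0
    for v in ordering:
        rest = adj[v]
        width = max(width, len(rest))
        for u in rest:                      # connect v's remaining neighbors
            adj[u] |= rest - {u}
        for u in adj:                       # then remove v from the graph
            adj[u].discard(v)
    return width

# Eliminating a chain a-b-c-d endpoint-first keeps the induced width at 1.
chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(induced_width(chain, ["a", "b", "c", "d"]))  # 1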

Clp(bn): Constraint logic programming for probabilistic knowledge

by Vítor Santos Costa, James Cussens - In Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence (UAI-03), 2003
"... Abstract. In Datalog, missing values are represented by Skolem constants. More generally, in logic programming missing values, or existentially quantified variables, are represented by terms built from Skolem functors. The CLP(BN) language represents the joint probability distribution over missing v ..."
Abstract - Cited by 63 (7 self)
Abstract. In Datalog, missing values are represented by Skolem constants. More generally, in logic programming missing values, or existentially quantified variables, are represented by terms built from Skolem functors. The CLP(BN) language represents the joint probability distribution over missing values in a database or logic program by using constraints to represent Skolem functions. Algorithms from inductive logic programming (ILP) can be used with only minor modification to learn CLP(BN) programs. An implementation of CLP(BN) is publicly available as part of YAP Prolog at

Citation Context

...rk as a store: both constraint stores and Bayesian networks are graphs; in fact, it is well known that there is a strong connection between both [12]. It is natural to see the last step of probabilistic inference as constraint solving. And it is natural to see marginalization as projection. Moreover, because constraint stores are opaque to the act...

Algorithms and Complexity Results for #SAT and Bayesian Inference

by Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi - In 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2004
"... Bayesian inference is an important problem with numerous applications in probabilistic reasoning. Counting satisfying assignments is a closely related problem of fundamental theoretical importance. In this paper, we show that plain old DPLL equipped with memoization (an algorithm we call #DPLLCache) ..."
Abstract - Cited by 62 (6 self)
Bayesian inference is an important problem with numerous applications in probabilistic reasoning. Counting satisfying assignments is a closely related problem of fundamental theoretical importance. In this paper, we show that plain old DPLL equipped with memoization (an algorithm we call #DPLLCache) can solve both of these problems with time complexity that is at least as good as state-of-the-art exact algorithms, and that it can also achieve the best known time-space tradeoff. We then proceed to show that there are instances where #DPLLCache can achieve an exponential speedup over existing algorithms.
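A simplified cousin of the idea, for flavor: DPLL-style branching for #SAT with the residual formula memoized, so that identical subformulas reached along different branches are counted only once. This is an illustrative sketch, not the paper's #DPLLCache algorithm, which uses more careful caching schemes and complexity analysis:

from functools import lru_cache

def count_models(clauses, n_vars):
    # clauses: iterable of iterables of signed ints over variables 1..n_vars.
    @lru_cache(maxsize=None)
    def count(residual, free):
        if frozenset() in residual:          # empty clause: contradiction
            return 0
        if not residual:                     # all clauses satisfied
            return 2 ** free                 # leftover variables are free
        var = abs(next(iter(next(iter(residual)))))   # pick a branch variable
        total = 0
        for lit in (var, -var):
            new = frozenset(c - {-lit} for c in residual if lit not in c)
            total += count(new, free - 1)
        return total
    return count(frozenset(frozenset(c) for c in clauses), n_vars)

# (x1 or x2) and (not x1 or x3) has 4 of the 8 assignments as models.
print(count_models([[1, 2], [-1, 3]], 3))  # 4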

Topological Parameters for time-space tradeoff

by Rina Dechter - Artificial Intelligence, 1996
"... In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure it will be p ..."
Abstract - Cited by 57 (10 self)
In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure, it is possible to select from a spectrum of algorithms the one that best meets a given time-space specification.

Optimization by learning and simulation of Bayesian and Gaussian networks

by P. Larrañaga, R. Etxeberria, J. A. Lozano, J. M. Peña, 1999
"... Estimation of Distribution Algorithms (EDA) constitute an example of stochastics heuristics based on populations of individuals every of which encode the possible solutions to the optimization problem. These populations of individuals evolve in succesive generations as the search progresses -- organ ..."
Abstract - Cited by 56 (7 self)
Estimation of Distribution Algorithms (EDAs) are stochastic heuristics based on populations of individuals, each of which encodes a possible solution to the optimization problem. These populations evolve over successive generations as the search progresses, organized in the same way as most evolutionary computation heuristics. In contrast to most evolutionary computation paradigms, which rely on crossover and mutation operators to generate new populations, EDAs replace those operators with the estimation and simulation of the joint probability distribution of the selected individuals. In this work, after reviewing the different EDA approaches for combinatorial optimization as well as for optimization in continuous domains, we propose new approaches based on the theory of probabilistic graphical models to solve problems in both domains. More precisely, we propose to adapt algorit...
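The loop described here is compact enough to sketch. A minimal univariate EDA (UMDA-style, one of the simplest variants in this family; all parameter choices are illustrative):

import random

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=30):
    # Instead of crossover/mutation: sample from per-bit marginals, select,
    # then re-estimate the marginals from the selected individuals.
    probs = [0.5] * n_bits
    best = None
    for _ in range(generations):
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        selected = pop[:n_select]
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]
        probs = [min(0.95, max(0.05, p)) for p in probs]  # keep some diversity
    return best

# OneMax: the all-ones string maximizes the bit sum.
print(sum(umda(sum, n_bits=20)))  # typically 20, or close to it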

Learning partially observable deterministic action models

by Eyal Amir - In Proc. Nineteenth International Joint Conference on Artificial Intelligence (IJCAI '05), 2005
"... We present exact algorithms for identifying deterministic-actions ’ effects and preconditions in dynamic partially observable domains. They apply when one does not know the action model (the way actions affect the world) of a domain and must learn it from partial observations over time. Such scenari ..."
Abstract - Cited by 55 (2 self)
We present exact algorithms for identifying deterministic actions' effects and preconditions in dynamic partially observable domains. They apply when one does not know the action model (the way actions affect the world) of a domain and must learn it from partial observations over time. Such scenarios are common in real world applications. They are challenging for AI tasks because traditional domain structures that underlie tractability (e.g., conditional independence) fail there (e.g., world features become correlated). Our work departs from traditional assumptions about partial observations and action models. In particular, it focuses on problems in which actions are deterministic and of simple logical structure, and observation models have all features observed with some frequency. This yields tractable algorithms for the modified problem in such domains. Our algorithms take sequences of partial observations over time as input, and output deterministic action models that could have led to those observations. The algorithms output all or one of those models (depending on our choice), and are exact in that no model is misclassified given the observations. Our algorithms take polynomial time in the number of time steps and state features for some traditional action classes examined in the AI-planning literature, e.g., STRIPS actions. In contrast, traditional approaches for HMMs and Reinforcement Learning are inexact and exponentially intractable for such domains. Our experiments verify the theoretical tractability guarantees, and show that we identify action models exactly. Several applications in planning, autonomous exploration, and adventure-game playing already use these results. They are also promising for probabilistic settings, partially observable reinforcement learning, and diagnosis.

Citation Context

...ence in the combined probabilistic-logical system is a probabilistic inference. For example, one can consider variable elimination (e.g., Dechter, 1999) in which there are additional potential functions. Parametrized Actions: In many systems and situations it is natural to use parametrized actions. These are action schemas whose effects depend on thei...

AND/OR Branch-and-Bound for Graphical Models

by Radu Marinescu, Rina Dechter , 2005
"... ..."
Abstract - Cited by 51 (16 self)
Abstract not found

Interestingness of Frequent Itemsets Using Bayesian Networks as Background Knowledge

by Szymon Jaroszewicz - In Proceedings of the SIGKDD Conference on Knowledge Discovery and Data Mining , 2004
"... ..."
Abstract - Cited by 47 (5 self)
Abstract not found

Citation Context

...r computing the marginals. Approximate methods like Gibbs sampling are an interesting topic for future work. The best-known approaches to exact marginalization are join trees [12] and bucket elimination [5]. We chose the bucket elimination method, which is easier to implement and, according to [5], as efficient as join-tree-based methods. Also, join trees are mainly useful for computing marginals for single at...
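For reference, the scheme chosen here in its simplest sum-product form: each variable is eliminated by multiplying the factors that mention it and summing it out, producing one new factor. A minimal sketch with an illustrative factor representation (not the paper's implementation):

from itertools import product

def eliminate(factors, elim_order, domains):
    # A factor is (scope_tuple, {assignment_tuple: value}).
    factors = list(factors)
    for v in elim_order:
        bucket = [f for f in factors if v in f[0]]          # v's bucket
        factors = [f for f in factors if v not in f[0]]
        scope = tuple(sorted({u for f in bucket for u in f[0]} - {v}))
        table = {}
        for assign in product(*(domains[u] for u in scope)):
            env = dict(zip(scope, assign))
            total = 0.0
            for val in domains[v]:                           # sum v out
                env[v] = val
                prod = 1.0
                for fvars, ftab in bucket:
                    prod *= ftab[tuple(env[u] for u in fvars)]
                total += prod
            table[assign] = total
        factors.append((scope, table))
    return factors

# P(a) and P(b|a); eliminating a leaves the marginal P(b).
pa = (("a",), {(0,): 0.4, (1,): 0.6})
pb_a = (("a", "b"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(eliminate([pa, pb_a], ["a"], {"a": [0, 1], "b": [0, 1]}))
# [(('b',), {(0,): 0.48, (1,): 0.52})]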

Value Elimination: Bayesian Inference via Backtracking Search

by Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi - In UAI-03, 2003
"... We present Value Elimination, a new algorithm for Bayesian Inference. Given the same variable ordering information, Value Elimination can achieve performance that is within a constant factor of variable elimination or recursive conditioning, and on some problems it can perform exponentially bet ..."
Abstract - Cited by 43 (2 self)
We present Value Elimination, a new algorithm for Bayesian Inference. Given the same variable ordering information, Value Elimination can achieve performance that is within a constant factor of variable elimination or recursive conditioning, and on some problems it can perform exponentially better, irrespective of the variable ordering used by these algorithms. Value Elimination

Citation Context

...ination involves summing out individual variables, in the process creating new functions over typically larger sets of variables. Variable elimination can be used to solve a number of other problems (Dechter 1999). It has a close relationship to backtracking that is most apparent when we examine its application to SAT. SAT is the problem of determining whether or not a satisfying assignment exists for a CNF f...
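On SAT that relationship is concrete: bucket elimination specializes to directional resolution (the original Davis-Putnam procedure), where eliminating a variable means resolving every clause containing it against every clause containing its negation. A minimal sketch, with an illustrative clause encoding:

def directional_resolution(clauses, order):
    # clauses: iterables of signed ints; order: every variable, in elimination order.
    cnf = {frozenset(c) for c in clauses}
    for v in order:
        pos = [c for c in cnf if v in c]
        neg = [c for c in cnf if -v in c]
        cnf -= set(pos) | set(neg)                   # v is eliminated
        for p in pos:
            for n in neg:
                resolvent = (p - {v}) | (n - {-v})
                if not resolvent:
                    return False                     # empty clause: UNSAT
                if not any(-l in resolvent for l in resolvent):  # skip tautologies
                    cnf.add(resolvent)
    return True                                      # no empty clause: SAT

# (x1 or x2), (not x1 or x2), (not x2) is unsatisfiable.
print(directional_resolution([{1, 2}, {-1, 2}, {-2}], [1, 2]))  # False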
