Results 1–10 of 30
IPE and L2U: Approximate algorithms for credal networks
 In Proceedings of the Second Starting AI Researcher Symposium
, 2004
Controlled Generation of Hard and Easy Bayesian Networks: Impact on Maximal Clique Tree in Tree Clustering
 Artificial Intelligence
, 2006
Abstract

Cited by 9 (8 self)
This article presents and analyzes algorithms that systematically generate random Bayesian networks of varying difficulty levels, with respect to inference using tree clustering. The results are relevant to research on efficient Bayesian network inference, such as computing a most probable explanation or belief updating, since they allow controlled experimentation to determine the impact of improvements to inference algorithms. The results are also relevant to research on machine learning of Bayesian networks, since they support controlled generation of a large number of data sets at a given difficulty level. Our generation algorithms, called BPART and MPART, support controlled but random construction of bipartite and multipartite Bayesian networks. The Bayesian network parameters that we vary are the total number of nodes, degree of connectivity, the ratio of the number of non-root nodes to the number of root nodes, regularity of the underlying graph, and characteristics of the conditional probability tables. The main dependent parameter is the size of the maximal clique as generated by tree clustering. This article presents extensive empirical analysis using the HUGIN tree clustering approach as well as theoretical analysis related to the random generation of Bayesian networks using BPART and MPART.
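The bipartite construction and the moralization step that drive clique growth in this entry can be sketched in a few lines. The sketch below is only a loose illustration of a BPART-style generator under a uniform-parent-sampling assumption; the function names and parameter choices are my own, not the article's:

```python
import itertools
import random

def bpart_sketch(n_roots, ratio, degree, seed=0):
    """Generate a random bipartite DAG: root nodes have no parents;
    each of the (ratio * n_roots) non-root nodes picks `degree`
    distinct random roots as parents."""
    rng = random.Random(seed)
    roots = list(range(n_roots))
    n_leaves = int(ratio * n_roots)
    parents = {n_roots + i: rng.sample(roots, degree) for i in range(n_leaves)}
    return roots, parents

def moral_edges(parents):
    """Edges of the moral graph: original parent-child edges plus
    'marriage' edges between co-parents of each child."""
    edges = set()
    for child, pars in parents.items():
        for p in pars:
            edges.add(frozenset((p, child)))
        for a, b in itertools.combinations(pars, 2):
            edges.add(frozenset((a, b)))
    return edges

roots, parents = bpart_sketch(n_roots=10, ratio=2.0, degree=3)
print(len(parents), len(moral_edges(parents)))
```

Triangulating the moral graph and reading off the largest clique (as tree clustering does) is the expensive step the article's dependent parameter measures; that part is omitted here.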
Adapting Bayes Network Structures to Nonstationary Domains
Abstract

Cited by 9 (0 self)
When an incremental structural learning method gradually modifies a Bayesian network (BN) structure to fit observations, as they are read from a database, we call the process structural adaptation. Structural adaptation is useful when the learner is set to work in an unknown environment, where a BN is to be gradually constructed as observations of the environment are made. Existing algorithms for incremental learning assume that the samples in the database have been drawn from a single underlying distribution. In this paper we relax this assumption, so that the underlying distribution can change during the sampling of the database. The method that we present can thus be used in unknown environments, where it is not even known whether the dynamics of the environment are stable. We briefly state formal correctness results for our method, and demonstrate its feasibility experimentally.
Dominance Testing via Model Checking
Abstract

Cited by 9 (3 self)
Dominance testing, the problem of determining whether one outcome is preferred over another, is of fundamental importance in many applications. Hence, there is a need for algorithms and tools for dominance testing. CP-nets and TCP-nets are among the most widely studied languages for representing and reasoning with preferences. We reduce dominance testing in TCP-nets to reachability analysis in a graph of outcomes. We provide an encoding of TCP-nets in the form of a Kripke structure for CTL. We show how to compute dominance using NuSMV, a model checker for CTL. We present results of experiments that demonstrate the feasibility of our approach to dominance testing.
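The reduction described here, dominance as reachability in a graph of outcomes, can be illustrated generically without any model checker. The toy CP-net and its preference rules below are hypothetical, chosen only to make the flip graph concrete:

```python
from collections import deque

def dominates(better, worse, improving_flips):
    """`better` dominates `worse` iff `better` is reachable from
    `worse` in the graph of single-variable improving flips (BFS)."""
    seen, frontier = {worse}, deque([worse])
    while frontier:
        o = frontier.popleft()
        if o == better:
            return True
        for nxt in improving_flips(o):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy acyclic CP-net over binary A, B (hypothetical preferences):
# a > a'; given a: b > b'; given a': b' > b.  Encoding: 1 = a (or b).
def improving_flips(outcome):
    a, b = outcome
    flips = []
    if a == 0:                      # a' -> a is always improving
        flips.append((1, b))
    pref_b = 1 if a == 1 else 0     # preferred value of B depends on A
    if b != pref_b:
        flips.append((a, pref_b))
    return flips

print(dominates((1, 1), (0, 0), improving_flips))  # True
```

A model-checking formulation as in the paper would instead encode the flip relation as a Kripke structure and pose reachability as a CTL query.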
Approximate Algorithms for Credal Networks with Binary Variables
, 2007
Abstract

Cited by 8 (1 self)
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the IPE and SV2U algorithms are respectively based on Localized Partial Evaluation and variational techniques.
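The core idea of credal inference, computing lower and upper bounds rather than point probabilities, can be shown on a single binary arc. This is a generic vertex-enumeration sketch, not the paper's IPE, L2U, or SV2U algorithms: P(y=1) is multilinear in the interval-valued inputs, so its extrema over the box of intervals occur at the box's vertices.

```python
from itertools import product

def bounds_p_y(px_lo, px_hi, py_x1, py_x0):
    """Bounds on P(y=1) = p * P(y=1|x=1) + (1-p) * P(y=1|x=0)
    when p = P(x=1) lies in [px_lo, px_hi] and both conditionals
    are interval-valued. Being multilinear, the expression attains
    its extrema at vertices of the box of interval endpoints."""
    vals = [p * a + (1 - p) * b
            for p, a, b in product((px_lo, px_hi), py_x1, py_x0)]
    return min(vals), max(vals)

lo, hi = bounds_p_y(0.3, 0.5, (0.8, 0.9), (0.1, 0.2))
print(round(lo, 3), round(hi, 3))  # 0.31 0.55
```

For whole networks this enumeration blows up exponentially, which is exactly why the paper's polynomial-time approximations on binary polytrees matter.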
Updating credal networks is approximable in polynomial time
 International Journal of Approximate Reasoning
Cited by 6 (5 self)
Macroscopic models of clique tree growth for Bayesian networks
 In Proceedings of the Twenty-Second National Conference on Artificial Intelligence (AAAI-07)
, 2007
Abstract

Cited by 5 (5 self)
In clique tree clustering, inference consists of propagation in a clique tree compiled from a Bayesian network. In this paper, we develop an analytical approach to characterizing clique tree growth as a function of increasing Bayesian network connectedness, specifically: (i) the expected number of moral edges in their moral graphs or (ii) the ratio of the number of non-root nodes to the number of root nodes. In experiments, we systematically increase the connectivity of bipartite Bayesian networks, and find that clique tree size growth is well-approximated by Gompertz growth curves. This research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference algorithms, and presents an aid for analytical tradeoff studies of tree clustering using growth curves.
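The Gompertz curve that this entry fits to clique tree growth has the closed form y(x) = a·exp(−b·exp(−c·x)): slow start, rapid middle growth, saturation at the asymptote a. A quick sketch with purely hypothetical parameters (not values from the paper):

```python
import math

def gompertz(x, a, b, c):
    """Gompertz growth curve with asymptote a, displacement b,
    and growth rate c."""
    return a * math.exp(-b * math.exp(-c * x))

# Illustrative (hypothetical) parameters: asymptote 100, b = 5, c = 0.5.
ys = [round(gompertz(x, 100.0, 5.0, 0.5), 1) for x in range(0, 21, 5)]
print(ys)  # monotonically increasing, approaching the asymptote 100
```

In the paper's setting, x would be a connectivity parameter of the network and y a clique tree size measure, with a, b, c estimated by curve fitting.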
Improving the Reliability of Causal Discovery from Small Data Sets Using Argumentation
Abstract

Cited by 5 (3 self)
We address the problem of improving the reliability of independence-based causal discovery algorithms, which suffer from the low reliability of statistical independence tests executed on small data sets. We model the problem as a knowledge base containing a set of independence facts that are related through Pearl's well-known axioms. Statistical tests on finite data sets may introduce errors into these facts and inconsistencies into the knowledge base. We resolve these inconsistencies through argumentation, an instance of the class of defeasible logics, augmented with a preference function and used to reason about and possibly correct errors in these tests. This results in a more robust conditional independence test, called an argumentative independence test. Our experimental evaluation shows clear improvements in the accuracy of argumentative over purely statistical tests. We also demonstrate significant improvements in the accuracy of causal structure discovery from the outcomes of independence tests, both on data sampled from randomly generated causal models and on real-world data sets.
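A typical building block of such independence-based discovery is a statistical test on contingency counts. The G-test below is a generic sketch of that building block only; the paper's argumentation layer and preference function, which sit on top of tests like this, are not shown:

```python
import math

def g_test(table):
    """G-statistic for independence on a 2x2 contingency table.
    On small samples such tests are unreliable, which is the error
    source the argumentative layer is designed to correct."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            if obs > 0:
                g += 2.0 * obs * math.log(obs / exp)
    return g

# A 2x2 table has 1 degree of freedom; the chi-square critical value
# at alpha = 0.05 is about 3.84.
g = g_test([[30, 10], [10, 30]])
print(g > 3.84)  # True: dependence detected
```

The argumentative test would treat outcomes like this one as defeasible claims and resolve conflicts among them using Pearl's axioms.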
Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves
Abstract

Cited by 4 (4 self)
Bayesian networks (BNs) are used to represent and efficiently compute with multivariate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical tradeoff studies of clique tree clustering using growth curves.
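The "expected number of moral edges" in (ii) has a simple closed form under a uniform-parent-sampling assumption. The derivation below is my own illustration, not necessarily the article's model: if each of L children independently draws d distinct parents uniformly from r roots, a fixed root pair is married by one child with probability C(d,2)/C(r,2), so the expected number of distinct marriage edges is C(r,2)·(1 − (1 − C(d,2)/C(r,2))^L). A Monte Carlo check:

```python
import math
import random
from itertools import combinations

def expected_moral_edges(r, leaves, d):
    """Closed-form expected number of distinct parent-parent
    'marriage' edges when each of `leaves` children draws d distinct
    parents uniformly from r roots."""
    q = math.comb(d, 2) / math.comb(r, 2)
    return math.comb(r, 2) * (1.0 - (1.0 - q) ** leaves)

def simulate(r, leaves, d, trials=2000, seed=0):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        married = set()
        for _ in range(leaves):
            pars = rng.sample(range(r), d)
            married.update(map(frozenset, combinations(pars, 2)))
        total += len(married)
    return total / trials

print(round(expected_moral_edges(10, 20, 3), 2), round(simulate(10, 20, 3), 2))
```

The two numbers agree closely, which is the sanity check one would want before using such a parameter as a predictor of clique tree growth.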
TAN classifiers based on decomposable distributions
 Machine Learning
, 2005
Abstract

Cited by 3 (0 self)
In this paper we present several Bayesian algorithms for learning Tree Augmented Naive Bayes (TAN) models. We extend the results in Meila & Jaakkola (2000a) to TANs by proving that, assuming a decomposable prior distribution over TANs, we can compute the exact Bayesian model averaging over TAN structures and parameters in polynomial time. Furthermore, we prove that the k maximum a posteriori (MAP) TAN structures can also be computed in polynomial time. We use these results to correct minor errors in Meila & Jaakkola (2000a) and to construct several TAN-based classifiers. We show that these classifiers provide consistently better predictions over Irvine datasets and artificially generated data than TAN-based classifiers proposed in the literature.
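TAN structures are trees over the attributes, so structure search reduces to finding a maximum-weight spanning tree over pairwise attribute scores. The Kruskal-style sketch below uses hypothetical weights standing in for (conditional) mutual-information scores; the paper's exact Bayesian model averaging over structures is not shown:

```python
def max_spanning_tree(n, weight):
    """Kruskal-style maximum-weight spanning tree over nodes 0..n-1,
    using a union-find with path halving. In TAN learning, weight[i][j]
    would be a (conditional) mutual-information score."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted(((weight[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                       # keep edge if it joins components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Hypothetical pairwise scores between 4 attributes (symmetric matrix).
w = [[0, 5, 1, 2],
     [5, 0, 3, 1],
     [1, 3, 0, 4],
     [2, 1, 4, 0]]
print(max_spanning_tree(4, w))  # [(0, 1), (2, 3), (1, 2)]
```

The paper's k-MAP result generalizes this single-best-tree computation to the k highest-posterior tree structures, still in polynomial time.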