Results 1–5 of 5
Constraint relaxation for learning the structure of Bayesian networks, 2009
Abstract

Cited by 2 (2 self)
This paper introduces constraint relaxation, a new strategy for learning the structure of Bayesian networks. Constraint relaxation identifies and "relaxes" possibly inaccurate independence constraints on the structure of the model. We describe a heuristic algorithm for constraint relaxation that combines greedy search in the space of undirected skeletons with edge orientation based on the constraints. This approach produces significant improvements in the structural accuracy of the learned models compared to four well-known structure learning algorithms in an empirical evaluation using data sampled from both real-world and randomly generated networks.
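The core idea in the abstract above can be illustrated with a minimal sketch: each independence constraint forbids a skeleton edge, and "relaxing" a constraint whose test looks unreliable re-admits that edge as a candidate for greedy search. The `constraints` map, `reliability` scores, and threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch of the constraint-relaxation idea: an independence
# constraint rules a skeleton edge out only if the constraint itself
# is considered reliable; unreliable constraints are "relaxed".
# All names and the threshold value are hypothetical.

def candidate_edges(variables, constraints, reliability, threshold=0.5):
    """Return skeleton edges not ruled out by a *reliable* constraint."""
    edges = []
    for i, x in enumerate(variables):
        for y in variables[i + 1:]:
            c = constraints.get(frozenset((x, y)))
            # Keep the edge if no constraint forbids it, or if the
            # forbidding constraint is too unreliable to trust (relaxed).
            if c is None or reliability[c] < threshold:
                edges.append((x, y))
    return edges

# One constraint: A is independent of C given B, trusted at 0.9.
constraints = {frozenset(("A", "C")): "A_indep_C_given_B"}
reliability = {"A_indep_C_given_B": 0.9}
print(candidate_edges(["A", "B", "C"], constraints, reliability))
# With a reliable constraint, the A-C edge is excluded from the skeleton.
```

Dropping the reliability of the same constraint below the threshold relaxes it, and the A-C edge returns to the candidate skeleton.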
A Grow-Shrink variant for improving the quality of Markov blankets
Abstract
Abstract. This work introduces Grow-Shrink with Search (GSS), a novel adaptation of the Grow-Shrink (GS) algorithm that learns the set of direct dependences of a random variable, called the Markov Blanket (MB) of the variable. We focus on the use of MBs for learning undirected probabilistic graphical models (a.k.a. Markov networks). As in the GS algorithm, GSS learns the MB by executing a series of statistical tests of conditional independence. The reliability of these tests decreases as the amount of available data decreases. While GS ignores this fact, deciding on potentially incorrect MBs, GSS decides through a novel quality measure, also introduced in this work, based on the posterior probability of an MB given the data. GSS proceeds as an optimization search over all possible independence assignments of the tests performed, looking for the assignment that maximizes this quality measure. This is in direct contrast to GS, which performs a greedy optimization based on local decisions, i.e., the individual independence tests. Experimental results show improvements of up to 10%.
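For context on what GSS adapts, here is a minimal sketch of the classic Grow-Shrink (GS) Markov blanket search that the abstract contrasts against (not the GSS variant itself). The `ci_test` argument stands in for any conditional-independence test run on data; a toy independence oracle over the chain A → B → C is used here purely for illustration.

```python
# Hedged sketch of the classic Grow-Shrink (GS) Markov blanket search.
# `ci_test(x, y, cond)` returns True iff x and y are judged independent
# given the conditioning set `cond`; in practice this is a statistical
# test on data, here a hand-coded oracle for the chain A -> B -> C.

def grow_shrink(target, variables, ci_test):
    mb = []
    # Grow phase: repeatedly add any variable that is dependent on
    # `target` given the current MB candidate.
    changed = True
    while changed:
        changed = False
        for v in variables:
            if v != target and v not in mb and not ci_test(target, v, mb):
                mb.append(v)
                changed = True
    # Shrink phase: remove false positives, i.e. variables that are
    # independent of `target` given the rest of the MB.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if ci_test(target, v, rest):
            mb.remove(v)
    return mb

# Toy oracle for the chain A -> B -> C: B separates A from C.
def oracle(x, y, cond):
    pair = {x, y}
    if pair == {"A", "B"} or pair == {"B", "C"}:
        return False          # adjacent variables are dependent
    if pair == {"A", "C"}:
        return "B" in cond    # independent once B is conditioned on
    return True

print(grow_shrink("A", ["A", "B", "C"], oracle))  # -> ['B']
```

Each greedy decision here trusts a single test outcome unconditionally; the GSS variant described above instead searches over independence assignments for all tests and scores whole MB candidates.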
Efficient Independence-Based MAP Approach for Robust Markov Networks Structure Discovery