
## Tractable Bayesian Learning of Tree Augmented Naive Bayes Classifiers (2003)


Venue: In Proceedings of the Twentieth International Conference on Machine Learning

Citations: 4 (1 self)

### Citations

3469 | UCI repository of machine learning databases - Blake, Merz - 1998
Citation Context: ...g stage for tbmatan (calculating every Nv,u,C(j, i, c)) is only the first step of the TAN learning process. 5. Empirical Results We tested four algorithms over 16 datasets from the Irvine repository (Blake et al., 1998). The dataset characteristics are described in Table 1. To discretize continuous attributes we used equal frequency discretization with 5 intervals. For each dataset and algorithm we tested both accu...
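The equal-frequency discretization mentioned in the excerpt above can be sketched in a few lines. This rank-based variant is only an illustration of the idea, not the authors' exact procedure; the function name is hypothetical and ties are broken by position rather than by quantile cut points:

```python
def equal_frequency_bins(values, n_bins=5):
    """Discretize a continuous attribute into n_bins intervals that each
    receive (roughly) the same number of observations.

    Illustrative sketch: observations are ranked and ranks are split into
    n_bins equal-sized groups, so ties are broken by position."""
    n = len(values)
    # Indices of the observations, ordered by attribute value.
    order = sorted(range(n), key=lambda i: values[i])
    bins = [0] * n
    for rank, idx in enumerate(order):
        # Map rank r to bin floor(r * n_bins / n), capped at the last bin.
        bins[idx] = min(rank * n_bins // n, n_bins - 1)
    return bins

# Ten values split into 5 equal-frequency intervals of two values each.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(equal_frequency_bins(x, 5))  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```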

1157 | Learning Bayesian networks: The combination of knowledge and statistical data - Heckerman, Geiger, et al. - 1995

880 | Approximating discrete probability distributions with dependence trees - Chow, Liu - 1968

818 | On the optimality of the simple Bayesian classifier under zero-one loss - Domingos, Pazzani - 1997

796 | Bayesian network classifiers - Friedman, Geiger, et al. - 1997

439 | An analysis of Bayesian classifiers - Langley, Iba, et al. - 1992

265 | Induction of selective Bayesian classifiers - Langley, Sage - 1994

123 | Semi-naive Bayesian classifier - Kononenko - 1991
Citation Context: ...assumptions that are made and keep the "way of reasoning", we can get a more accurate classifier. This has been tried in different ways (Friedman et al., 1997; Keogh & Pazzani, 1999; Kohavi & John; Kononenko, 1991; Langley & Sage, 1994; Pazzani, 1995). From our point of view, TAN is the most coherent and best performing enhancement to Naive Bayes up to now. TAN models are a restricted family of Bayesian network...

93 | NTL: A library for doing number theory. http://www.shoup.net/ntl - Shoup - 2009
Citation Context: ...wledge, such accurate computation does not exist. Therefore, we have used a brute force solution to accurately implement TBMATAN. More concretely, we have calculated the determinants by means of NTL (Shoup, 2003), a library that allows us to calculate determinants with the desired precision arithmetic. This solution makes the time for classifying a new observation grow faster than O(#C · n³), and hence mak...
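The exact-determinant computation the excerpt delegates to NTL can be illustrated in pure Python with rational arithmetic. This is a hedged stand-in for arbitrary-precision determinant routines, not the paper's actual implementation, and the same cubic cost per determinant applies:

```python
from fractions import Fraction

def exact_det(matrix):
    """Exact determinant via Gaussian elimination over the rationals.

    A pure-Python illustration of the arbitrary-precision determinants
    a library like NTL provides; exact, but far slower than floats."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # Find a nonzero pivot; an all-zero column means the determinant is 0.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # each row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

print(exact_det([[2, 1], [1, 3]]))  # 5
```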

74 | Searching for dependencies in Bayesian classifiers - Pazzani - 1995
Citation Context: ..."way of reasoning", we can get a more accurate classifier. This has been tried in different ways (Friedman et al., 1997; Keogh & Pazzani, 1999; Kohavi & John; Kononenko, 1991; Langley & Sage, 1994; Pazzani, 1995). From our point of view, TAN is the most coherent and best performing enhancement to Naive Bayes up to now. TAN models are a restricted family of Bayesian networks in which the class variable has no ...

71 | Learning augmented Bayesian classifiers: A comparison of distribution-based and classification-based approaches - Keogh, Pazzani - 1999

66 | BMA: Bayesian model averaging - Raftery, Hoeting, et al. - 2005
Citation Context: ...the model underlying the data is known to be in M we have that: P(V = S, C = sC | D, ξ) = Σ_{M∈M} P(V = S, C = sC | M) P(M | D, ξ) (8). Applying this equation is commonly known as Bayesian model averaging (Hoeting et al., 1998). In practice, the problem is that for most models it is very hard to find a closed form for the integral. This has led to the introduction of methods such as Local Bayesian Model Averaging (LBMA) (C...
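The averaging in equation (8) is simply a posterior-weighted mixture of per-model predictions. A minimal sketch, where the function name, the `(weight, predict)` pair representation, and the toy models are illustrative assumptions rather than anything from the paper:

```python
def bma_predict(models, x):
    """Bayesian model averaging over a finite model family:
    P(c | x, D) = sum over M of P(c | x, M) * P(M | D).

    `models` is a list of (posterior_weight, predict) pairs, where
    predict(x) returns a dict of class probabilities and the weights
    are assumed to sum to 1."""
    averaged = {}
    for weight, predict in models:
        for cls, prob in predict(x).items():
            averaged[cls] = averaged.get(cls, 0.0) + weight * prob
    return averaged

# Two toy models that disagree; the posterior favors the first 3:1.
m1 = (0.75, lambda x: {"a": 0.9, "b": 0.1})
m2 = (0.25, lambda x: {"a": 0.2, "b": 0.8})
print(bma_predict([m1, m2], None))  # {'a': 0.725, 'b': 0.275}
```

The hard part the excerpt alludes to is computing the posterior weights P(M | D, ξ) in closed form, which is exactly what motivates approximations such as LBMA.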

44 | Tractable Bayesian learning of tree belief networks - Meila, Jaakkola - 2006

35 | Mining complex models from arbitrarily large databases in constant time - Hulten, Domingos - 2002


9 | Bayes optimal instance-based learning - Kontkanen, Myllymaki, et al. - 1998
Citation Context: ...& López de Màntaras, 2003a; Kontkanen, Myllymaki, Silander, & Tirri, 1998) that Naive Bayes predictions and probability estimations can benefit from incorporating uncertainty in model selection. In (Kontkanen et al., 1998), Kontkanen et al. introduce an approach named Bayesian Instance-Based Learning that can be seen as a version of Bayesian model averaging (Hoeting, Madigan, Raftery, & Volinsky, 1998) and demonstrate ...

5 | Applying general Bayesian techniques to improve TAN induction - Cerquides - 1999
Citation Context: ...se of Naive distributions and the principle of indifference is presented in (Cerquides & López de Màntaras, 2003a, 2003b). In the case of TAN, a development inspired by the same idea is presented in (Cerquides, 1999a), where, to overcome the difficulty of exactly calculating the averaged classifier, the idea of local Bayesian model averaging is introduced to calculate an approximation. In this case predictions are...

2 | Counting spanning trees. Diplomarbeit - Rubey - 2000
Citation Context: ...g row u and column v. A = [[deg v1, −a1,2, −a1,3, ..., −a1,n], [−a2,1, deg v2, −a2,3, ..., −a2,n], ..., [−an,1, −an,2, −an,3, ..., deg vn]] (35). Proof: See (West, 1999; Rubey, 2000). [Figure 4: Comparison of SSTBMATAN and TBMATAN; panels (a) Error rate, (b) Log score.] A.2 The Matrix Tree Theorem for Decomposable Distributions. Let P(E) be a distribution ...


1 | The indifferent naive Bayes classifier - Cerquides, Màntaras, et al. - 2003

1 | Technical report IIIA-2003-01, Institut d'Investigació en Intel·ligència Artificial
