## Estimating Continuous Distributions in Bayesian Classifiers (1995)

Citations: 498 (2 self)

### Citations

11951 | Maximum likelihood from incomplete data via the EM algorithm - Dempster, Laird, et al. - 1977 |

3797 | Density Estimation for Statistics and Data Analysis - Silverman - 1986 |

Citation Context: ...this problem whenever it must estimate p(X|C) for some continuous attribute X. This is a general problem in statistics, and a variety of methods are available for solving it (Venables & Ripley 1994, Silverman 1986). In this section we discuss the theoretical properties of kernel density estimation and their implications for the Flexible Bayes algorithm. Statisticians are principally concerned with the consiste...
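The Silverman context above concerns kernel density estimation, the method the paper's Flexible Bayes algorithm applies to estimate p(X|C) for continuous attributes. As a minimal sketch of the technique (the function name, bandwidth, and sample values here are illustrative, not taken from the paper), a Gaussian-kernel estimate can be written as:

```python
import math

def gaussian_kernel_density(x, samples, h):
    """Kernel density estimate at point x:
    f_n(x) = (1 / (n * h)) * sum_i K((x - x_i) / h),
    with a standard Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2*pi)."""
    n = len(samples)
    return sum(
        math.exp(-0.5 * ((x - xi) / h) ** 2) / math.sqrt(2 * math.pi)
        for xi in samples
    ) / (n * h)

# Illustrative sample and bandwidth (not from the paper):
samples = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(round(gaussian_kernel_density(0.0, samples, h=0.5), 4))  # → 0.3963
```

The bandwidth h controls smoothing; the consistency results discussed in these contexts concern how such estimates behave as n grows and h shrinks appropriately.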

2451 | Generalized Additive Models - Hastie, Tibshirani - 1990 |

1396 | A Bayesian method for the induction of probabilistic networks from data - Cooper, Herskovits - 1991 |

1154 | Learning Bayesian networks: The combination of knowledge and statistical data - Heckerman, Geiger, et al. - 1994 |

890 | The CN2 induction algorithm - Clark, Niblett - 1989 |

855 | UCI repository of machine learning databases - Murphy, Aha - 1992 |

755 | Irrelevant Features and the Subset Selection Problem - John, Kohavi, et al. - 1994 |

540 | Supervised and unsupervised discretization of continuous features - Dougherty, Kohavi, et al. - 1995 |

438 | An analysis of Bayesian classifiers - Langley, Iba, et al. - 1992 |

273 | Operations for learning with graphical models - Buntine - 1994 |

Citation Context: ...process. Thus, when depicted graphically, a naive Bayesian classifier has the form shown in Figure 1, in which all arcs are directed from the class attribute to the observable, predictive attributes (Buntine 1994). These assumptions support very efficient algorithms for both classification and learning. To see this, let C be the random variable denoting the class of an instance and let X be a vector of random...

271 | AutoClass: Bayesian classification system. - Cheeseman, Kelly, et al. - 1988 |

265 | Induction of selective Bayesian classifiers - Langley, Sage - 1994 |

154 | Learning Gaussian networks - Geiger, Heckerman - 1994 |

129 | Recent developments in nonparametric density estimation - Izenman - 1991 |

Citation Context: ...ss the theoretical properties of kernel density estimation and their implications for the Flexible Bayes algorithm. Statisticians are principally concerned with the consistency of a density estimate (Izenman 1991). Definition 1 (Strong Pointwise Consistency): If f is a probability density function and f_n is an estimate of f based on n examples, then f_n is strongly pointwise consistent if f_n(x) → f(x) almost surel...
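The definition of strong pointwise consistency quoted in the Izenman context (truncated in the extraction) is a standard one and can be stated in full as:

```latex
\textbf{Definition (Strong Pointwise Consistency).}
Let $f$ be a probability density function and let $f_n$ be an estimate of $f$
based on $n$ examples. Then $f_n$ is strongly pointwise consistent if
\[
  f_n(x) \to f(x) \quad \text{almost surely, for every } x .
\]
```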

123 | Semi-naive Bayesian classifier - Kononenko - 1991 |

90 | Inductive and Bayesian learning in medical diagnosis - Kononenko - 1993 |

68 | Machine learning as an experimental science - Kibler, Langley - 1988 |

34 | The equivalence of weak, strong and complete convergence in L1 for kernel density estimates, The Annals of Statistics - Devroye - 1983 |

18 | Learning Bayesian networks using feature selection - Provan, Singh - 1995 |

11 | Searching for attribute dependencies in Bayesian classifiers - Pazzani - 1995 |

4 | Experience with adaptive probabilistic neural networks and adaptive general regression neural networks - Specht, Romsdahl - 1994 |

1 | W. & Freeman, D. (1988), AutoClass: A Bayesian classification system, in Machine Learning: Proceedings of the Fifth International Workshop - Cheeseman, Kelly, et al. |

1 | Semi-naive Bayesian classifier, in Proceedings of the Sixth European Working Session on Learning, Porto, Portugal - Kononenko |