## A Kurtosis-Based Dynamic Approach to Gaussian Mixture Modeling (1999)

### Download Links

- [www.cs.uoi.gr]
- [carol.wins.uva.nl]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Trans. Systems, Man, and Cybernetics, Part A

Citations: 20 (6 self)

### BibTeX

```bibtex
@ARTICLE{Vlassis99akurtosis-based,
  author  = {Nikos Vlassis and Aristidis Likas},
  title   = {A Kurtosis-Based Dynamic Approach to Gaussian Mixture Modeling},
  journal = {IEEE Trans. Systems, Man, and Cybernetics, Part A},
  year    = {1999},
  volume  = {29},
  pages   = {393--399}
}
```

### Abstract

We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well the Gaussian mixture fits the data. Then we propose a new dynamic algorithm for Gaussian mixture density estimation which monitors the total kurtosis at each step of the EM algorithm in order to decide dynamically on the correct number of kernels and possibly escape from local maxima. We show the potential of our technique in approximating unknown densities through a series of examples with several density estimation problems.
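The total-kurtosis measure described in the abstract can be sketched in code. This is a hypothetical reading, not the paper's exact definition: it assumes each kernel's weighted sample kurtosis is computed from EM responsibilities, and that the total is the mixing-weight average of each kernel's deviation from the Gaussian reference value of 3 (a Gaussian has kurtosis exactly 3, so a value near zero suggests a good fit).

```python
import numpy as np

def weighted_kurtosis(x, r, mu, var):
    """Weighted sample kurtosis of one kernel.

    x   : (N,) data samples
    r   : (N,) responsibilities of this kernel for each sample
    mu  : weighted mean of the kernel
    var : weighted variance of the kernel
    """
    w = r / r.sum()
    m4 = np.sum(w * (x - mu) ** 4)   # weighted fourth central moment
    return m4 / var ** 2             # equals 3 for an exact Gaussian

def total_kurtosis(x, R, pi, mu, var):
    """One plausible reading of the paper's measure: the mixing-weight
    average of each kernel's deviation from the Gaussian value 3.

    R : (N, K) responsibility matrix, pi/mu/var : (K,) mixture parameters.
    """
    ks = np.array([weighted_kurtosis(x, R[:, j], mu[j], var[j])
                   for j in range(len(pi))])
    return np.sum(pi * np.abs(ks - 3.0))
```

Under this reading, a single kernel fitted to genuinely Gaussian data drives the measure toward its lower bound of zero.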

### Citations

9054 | Maximum likelihood from incomplete data via the EM algorithm - Dempster, Laird, et al. - 1977

1212 | Pattern Recognition and Neural Networks - Ripley - 1996

Citation Context: ...ts of the EM algorithm. However, in most approaches the number K of kernels of the mixture is considered known in advance, and it turns out that the automatic estimation of K is a difficult problem [1], [11]. Statistical methods or neural network models for estimating the number of kernels of a Gaussian mixture have been proposed in the literature [1], [12], [13], [9]. However, most of them usually cannot ...

1077 | The EM Algorithm and Extensions - McLachlan, Krishnan - 1997

Citation Context: ...ds exist for the estimation of the 3K parameters of the mixture from a set of samples of x, the automatic estimation of K remains a difficult problem [11]. B. The EM algorithm The EM algorithm [3], [2], [14] is a powerful statistical tool for finding maximum likelihood solutions to problems involving observed and hidden variables. The algorithm applies in cases where we ask for maximum likelihood estimates ...
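The E- and M-steps this excerpt refers to can be sketched for a one-dimensional mixture. This is the standard EM update for a K-kernel Gaussian mixture (responsibilities, then re-estimation of the 3K parameters), not the paper's specific implementation:

```python
import numpy as np

def em_step(x, pi, mu, var):
    """One EM iteration for a 1-D Gaussian mixture.

    x : (N,) samples; pi, mu, var : (K,) mixing weights, means, variances.
    """
    # E-step: responsibilities r[i, j] = P(kernel j | sample x_i)
    d = x[:, None] - mu[None, :]
    dens = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the 3K parameters from the weighted samples
    nk = r.sum(axis=0)
    pi_new = nk / len(x)
    mu_new = (r * x[:, None]).sum(axis=0) / nk
    var_new = (r * (x[:, None] - mu_new)**2).sum(axis=0) / nk
    return pi_new, mu_new, var_new
```

Iterating `em_step` from a rough initialization converges to a local maximum of the likelihood, which is precisely why the paper's kurtosis monitoring is used to detect and escape poor local maxima.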

828 | On estimation of a probability density function and mode - Parzen - 1962

Citation Context: ...kernel is constant and known and the mixing weights are equal to the reciprocal of the total number of inputs. The network can be regarded as a distributed implementation of the Parzen windows method [6]. Some of the limitations of the original network model were relaxed in subsequent works [7], [8], [9], [10], leading to network models that implement some variants of the EM algorithm. However, in most ...
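The Parzen windows method mentioned in this excerpt places one Gaussian kernel on every input sample, with a fixed bandwidth and equal weights 1/N; a minimal sketch (the bandwidth `h` is a free parameter, not a value from the paper):

```python
import numpy as np

def parzen_density(x_eval, samples, h):
    """Parzen-window density estimate with a Gaussian kernel:
    one kernel per sample, fixed bandwidth h, equal weights 1/N.

    x_eval  : (M,) points at which to evaluate the estimate
    samples : (N,) observed data
    h       : kernel bandwidth (standard deviation of each kernel)
    """
    d = x_eval[:, None] - samples[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k.mean(axis=1)   # average over the N per-sample kernels
```

This is exactly the degenerate mixture the Probabilistic Neural Network implements; the subsequent works cited here relax the one-kernel-per-sample restriction.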

724 | Statistical Analysis of Finite Mixture Distributions - Titterington, Smith, et al. - 1985

Citation Context: ...aximization (EM) algorithm, Gaussian mixture modeling, number of mixing kernels, probability density function estimation, total kurtosis, weighted kurtosis. I. Introduction The Gaussian mixture model [1] has been proposed as a general model for estimating an unknown probability density function, or simply density. The virtues of the model lie mainly in its good approximation properties and the variety ...

549 | Mixture densities, maximum likelihood and the EM algorithm - Redner, Walker - 1984

Citation Context: ...unknown probability density function, or simply density. The virtues of the model lie mainly in its good approximation properties and the variety of estimation algorithms that exist in the literature [2], [1]. The model assumes that the unknown density can be written as a weighted finite sum of Gaussian kernels, with different mixing weights and different parameters, namely, means and covariance matrices ...
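The weighted finite sum of Gaussian kernels described in this excerpt is, in the usual notation for the one-dimensional case with K kernels:

```latex
p(x) = \sum_{j=1}^{K} \pi_j \, \mathcal{N}(x;\, \mu_j, \sigma_j^2),
\qquad \pi_j \ge 0, \qquad \sum_{j=1}^{K} \pi_j = 1 ,
```

so each kernel contributes three free quantities ($\pi_j$, $\mu_j$, $\sigma_j^2$), giving the 3K mixture parameters mentioned elsewhere on this page.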

281 | Probabilistic Neural Networks - Specht - 1990

Citation Context: ...ormation about the density to be approximated is available beforehand. In the neural networks literature, a feed-forward network that implements a Gaussian mixture is the Probabilistic Neural Network [5]. The network uses one Gaussian kernel for each input sample, while the variance of each kernel is constant and known and the mixing weights are equal to the reciprocal of the total number of inputs. ...

92 | On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture - McLachlan - 1987

Citation Context: ...omatic estimation of K is a difficult problem [1], [11]. Statistical methods or neural network models for estimating the number of kernels of a Gaussian mixture have been proposed in the literature [1], [12], [13], [9]. However, most of them usually cannot satisfy the necessary regularity conditions for estimating the asymptotic distributions of the underlying tests, and thus have to resort to costly heuristic ...

29 | Maximum-likelihood training of probabilistic neural networks - Streit, Luginbuhl - 1994

Citation Context: ...er of inputs. The network can be regarded as a distributed implementation of the Parzen windows method [6]. Some of the limitations of the original network model were relaxed in subsequent works [7], [8], [9], [10], leading to network models that implement some variants of the EM algorithm. However, in most approaches the number K of kernels of the mixture is considered known in advance, and it turns ...

12 | The probabilistic growing cell structures algorithm - Vlassis, Dimopoulos, et al. - 1327

Citation Context: ...ts. The network can be regarded as a distributed implementation of the Parzen windows method [6]. Some of the limitations of the original network model were relaxed in subsequent works [7], [8], [9], [10], leading to network models that implement some variants of the EM algorithm. However, in most approaches the number K of kernels of the mixture is considered known in advance, and it turns out that the ...

11 | A neural-network approach to statistical pattern classification by semiparametric estimation of probability density functions - Travén - 1991

Citation Context: ...number of inputs. The network can be regarded as a distributed implementation of the Parzen windows method [6]. Some of the limitations of the original network model were relaxed in subsequent works [7], [8], [9], [10], leading to network models that implement some variants of the EM algorithm. However, in most approaches the number K of kernels of the mixture is considered known in advance, and it ...

7 | Mixture density estimation based on maximum likelihood and test statistics - Vlassis, Papakonstantinou, et al. - 1999

Citation Context: ...input data. This new measure is computed from the individual weighted sample kurtoses of the mixing kernels, defined by analogy to the weighted means and variances of the kernels and first introduced in [4] for on-line density estimation. Based on the progressive change of the total kurtosis, our algorithm performs kernel splitting and increases the number of kernels of the mixture. This splitting aims ...
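The kernel splitting this excerpt describes can be illustrated with a hypothetical rule: replace one kernel by two kernels placed symmetrically about its mean and halve its weight. The split offset `delta` and the halved weights are illustrative assumptions; the paper's actual splitting decision is driven by monitoring the total kurtosis:

```python
import numpy as np

def split_kernel(pi, mu, var, j, delta=1.0):
    """Hypothetical split of kernel j of a 1-D Gaussian mixture.

    Replaces kernel j by two kernels at mu[j] +/- delta * std, each with
    half the original mixing weight and the same variance. Illustrative
    only; the paper's exact splitting rule may differ.
    """
    s = np.sqrt(var[j])
    pi_new = np.concatenate([np.delete(pi, j), [pi[j] / 2, pi[j] / 2]])
    mu_new = np.concatenate([np.delete(mu, j), [mu[j] - delta * s, mu[j] + delta * s]])
    var_new = np.concatenate([np.delete(var, j), [var[j], var[j]]])
    return pi_new, mu_new, var_new
```

After a split, EM is resumed on the K+1 kernels; the split is kept if it lowers the total kurtosis, i.e. improves the Gaussianity of each kernel's local fit.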

6 | Testing for the number of components in a mixture of normal distributions using moment estimators - Furman, Lindsay - 1994

Citation Context: ...estimation of K is a difficult problem [1], [11]. Statistical methods or neural network models for estimating the number of kernels of a Gaussian mixture have been proposed in the literature [1], [12], [13], [9]. However, most of them usually cannot satisfy the necessary regularity conditions for estimating the asymptotic distributions of the underlying tests, and thus have to resort to costly heuristic ...

6 | A comparison between the simulated annealing and the EM algorithms in normal mixture decompositions - Ingrassia - 1992

Citation Context: ...er of kernels, but we cannot be sure whether the solution constitutes an acceptable approximation to the unknown density; the two densities may differ significantly based on other distance measures [16]. On the other hand, we know that a lower bound for the total kurtosis is the zero value. Therefore, we can expect that the lower the total kurtosis value of the obtained solution is, the better is the ...

4 | Self-organizing Neural Networks based on Gaussian mixture model for PDF estimation and pattern classification - Shimoji - 1994

Citation Context: ...inputs. The network can be regarded as a distributed implementation of the Parzen windows method [6]. Some of the limitations of the original network model were relaxed in subsequent works [7], [8], [9], [10], leading to network models that implement some variants of the EM algorithm. However, in most approaches the number K of kernels of the mixture is considered known in advance, and it turns out ...
