Results 1 - 2 of 2
Segmental Modeling Using a Continuous Mixture of Nonparametric Models
IEEE Transactions on Speech and Audio Processing, 1997
Abstract

Cited by 9 (1 self)
The aim of the research described in this paper is to overcome the modeling limitations of conventional hidden Markov models. We present a segmental model that consists of two elements. The first is a nonparametric representation of both the mean and variance trajectories, which describes the local dynamics. The second element is some parameterized transformation (e.g., a random shift) of the trajectory that is global to the segment and models long-term variations such as speaker identity.

Introduction: Speech sounds are produced by a time-varying dynamic system. Consequently, speech signals are highly correlated and nonstationary. In spite of this fact, in most applications of hidden Markov models (HMMs) to speech recognition, the assumption that successive observations in a state are independent and identically distributed is inherent to the model. These limitations of the HMM are due to the fact that the HMM is a frame-based approach. An alternative approach is segmental modeling, w...
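The two-part structure the abstract describes can be illustrated with a minimal sketch: a diagonal-Gaussian score around a nonparametric mean/variance trajectory, combined with a segment-level shift chosen from a set of candidates. The function name, the discrete set of candidate shifts, and the max-over-shifts scoring are illustrative assumptions, not the paper's actual estimation procedure.

```python
import numpy as np

def segment_log_likelihood(obs, mean_traj, var_traj, shifts):
    """Score one segment under a trajectory model with a global shift.

    obs:       (T, D) observed feature frames for the segment
    mean_traj: (T, D) nonparametric mean trajectory (local dynamics)
    var_traj:  (T, D) nonparametric variance trajectory
    shifts:    (K, D) candidate global shifts; a hypothetical stand-in for
               the segment-level transformation (e.g., speaker variation)
    """
    best = -np.inf
    for b in shifts:
        # per-frame diagonal-Gaussian log-likelihood around the shifted trajectory
        diff = obs - (mean_traj + b)
        ll = -0.5 * np.sum(np.log(2 * np.pi * var_traj) + diff**2 / var_traj)
        best = max(best, ll)
    return best
```

Because the shift is shared by every frame in the segment, it captures variation that is global to the segment, while the trajectory itself carries the frame-to-frame dynamics.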
A Maximum-entropy Solution to the Frame-dependency Problem in Speech Recognition
, 2001
Abstract

Cited by 5 (0 self)
The HMM assumption of conditional independence of observations causes a variety of problems for speech-recognition applications. Previous attempts to construct acoustic models that remove this assumption have suffered from a significant increase in the number of parameters to train. Another weakness of current acoustic models is that they do not account for the origin of derived features (estimated derivatives). We show how to both remove the independence assumption and properly account for derived features, with little or no increase in the number of parameters to train, by applying the principle of maximum entropy. We also show that ignoring the origins of derived features in training HMM acoustic models can lead to severe distortions of the effective language model. Evaluation of our maximum-entropy model on a simple problem cuts an already-low error rate in half compared to an equivalent HMM with the same number of parameters.
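The "derived features" the abstract refers to are typically delta coefficients: linear-regression estimates of the feature derivative computed from neighbouring static frames. A minimal sketch of the standard delta computation (the function name and window width are assumptions for illustration) makes the framing of the problem concrete: each delta is a deterministic function of surrounding frames, so an HMM that treats static and delta streams as independent observations double-counts evidence from those frames.

```python
import numpy as np

def append_deltas(static, w=2):
    """Append regression-based delta features to static features.

    static: (T, D) static feature frames (e.g., cepstra)
    w:      regression window half-width

    Uses the common formula: delta_t = sum_k k*(c_{t+k} - c_{t-k}) / (2*sum_k k^2),
    with edge frames replicated at the boundaries.
    """
    T, D = static.shape
    padded = np.pad(static, ((w, w), (0, 0)), mode="edge")
    num = sum(k * (padded[w + k : w + k + T] - padded[w - k : w - k + T])
              for k in range(1, w + 1))
    denom = 2 * sum(k * k for k in range(1, w + 1))
    deltas = num / denom
    return np.concatenate([static, deltas], axis=1)
```

For a linearly increasing static feature, interior deltas come out to the slope, exactly as a derivative estimate should; the point of the paper's critique is that these values carry no information beyond the static frames they were computed from.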