Results 1 - 4 of 4
Generalized Linear Discriminant Sequence Kernels For Speaker Recognition
, 2002
Abstract

Cited by 95 (23 self)
Support Vector Machines have recently shown dramatic performance gains in many application areas. We show that the same gains can be realized in the area of speaker recognition via sequence kernels. A sequence kernel provides a numerical comparison of speech utterances as entire sequences rather than a probability at the frame level. We introduce a novel sequence kernel derived from generalized linear discriminants. The kernel has several advantages. First, the kernel uses an explicit expansion into "feature space"; this property allows all of the support vectors to be collapsed into a single vector, creating a small speaker model. Second, the kernel retains the computational advantage of generalized linear discriminants trained using mean-squared error training. Finally, the kernel shows dramatic reductions in equal error rates over standard mean-squared error training in matched and mismatched conditions on a NIST speaker recognition task.
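The collapse of support vectors into a single vector described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the expansion `phi`, the support vectors, and the weights are all hypothetical, and the point is only that an explicit inner-product kernel lets the SVM score be computed with one dot product regardless of how many support vectors the model has.

```python
import numpy as np

def phi(x):
    """Hypothetical explicit degree-2 polynomial expansion of a 2-D frame:
    [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

# Hypothetical trained SVM: support vectors with combined weights alpha_i * y_i.
support_vectors = [np.array([0.5, -1.0]), np.array([1.2, 0.3]), np.array([-0.7, 0.9])]
weights = [0.8, -0.5, 0.3]
bias = 0.1

# Because the kernel is an explicit inner product K(x, y) = phi(x) . phi(y),
# all support vectors collapse into one model vector:
#   w = sum_i (alpha_i * y_i) * phi(sv_i)
w = sum(a * phi(sv) for a, sv in zip(weights, support_vectors))

def score_collapsed(x):
    # One dot product per input, independent of the number of support vectors.
    return float(w @ phi(x)) + bias

def score_expanded(x):
    # Equivalent kernel-form score: a sum over every support vector.
    return sum(a * float(phi(sv) @ phi(x))
               for a, sv in zip(weights, support_vectors)) + bias

x = np.array([0.2, 0.4])
assert abs(score_collapsed(x) - score_expanded(x)) < 1e-12
```

The two scoring functions agree by linearity of the dot product; the collapsed form is what yields the small speaker model.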
A Sequence Kernel and its Application to Speaker Recognition
 in Neural Information Processing Systems 14
, 2001
Abstract

Cited by 9 (0 self)
A novel approach for comparing sequences of observations using an explicit-expansion kernel is demonstrated. The kernel is derived using the assumption of the independence of the sequence of observations and a mean-squared error training criterion. The use of an explicit-expansion kernel reduces classifier model size and computation dramatically, resulting in model sizes and computation one hundred times smaller in our application. The explicit expansion also preserves the computational advantages of an earlier architecture based on mean-squared error training.
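The computational saving this abstract claims follows from a simple identity. Under the independence assumption, a sequence kernel built from an explicit inner-product expansion reduces to a dot product of per-utterance average feature vectors, so two utterances are compared with one dot product instead of one kernel evaluation per frame pair. A minimal sketch, with a hypothetical scalar expansion `phi` standing in for the paper's feature expansion:

```python
import numpy as np

def phi(x):
    """Hypothetical explicit expansion of a scalar frame: [1, x, x^2]."""
    return np.array([1.0, x, x * x])

def utterance_model(frames):
    """Average of the expanded feature vectors over the sequence; this
    single vector summarizes the whole utterance."""
    return np.mean([phi(x) for x in frames], axis=0)

def sequence_kernel(X, Y):
    """Compare two utterances as entire sequences with one dot product of
    their average expansions, rather than |X| * |Y| kernel evaluations."""
    return float(utterance_model(X) @ utterance_model(Y))

X = [0.1, 0.5, -0.3]  # hypothetical frame sequences
Y = [0.2, 0.0]

# Equivalent brute-force form: the mean of all pairwise kernel values.
brute = float(np.mean([[phi(x) @ phi(y) for y in Y] for x in X]))
assert abs(sequence_kernel(X, Y) - brute) < 1e-12
```

The equality holds by bilinearity of the inner product: the mean of pairwise dot products equals the dot product of the means, which is why the model collapses to a single stored vector per utterance.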
Using Polynomial Networks For Speech Recognition
 in Neural Networks for Signal Processing X, Proceedings of the 2000 IEEE Workshop
, 2000
Abstract

Cited by 3 (2 self)
We consider the problem of using polynomial networks for speech recognition. Previous applications of polynomials to speech recognition have yielded systems which are difficult to train and have only moderate accuracy. We show that through a novel training algorithm, a probabilistic interpretation, and a novel scoring method, polynomial networks can be applied to speech recognition in a manner that is accurate and straightforward.
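The basic polynomial-network setup can be sketched as follows: expand each input into its monomials and fit the output weights by a mean-squared error criterion, which reduces to ordinary least squares. This is a hedged illustration on made-up scalar data, not the paper's training algorithm; the degree, the data, and `network_output` are all assumptions for the sketch.

```python
import numpy as np

def poly_expand(x, degree=2):
    """All monomials of a 1-D input up to the given degree: [1, x, x^2, ...]."""
    return np.array([x ** d for d in range(degree + 1)])

# Hypothetical training data: scalar features with 0/1 class targets.
rng = np.random.default_rng(0)
X = rng.normal(size=50)
y = (X > 0).astype(float)

# Mean-squared-error training of the polynomial network reduces to ordinary
# least squares on the expanded features: w = argmin_w ||Phi w - y||^2.
Phi = np.stack([poly_expand(x) for x in X])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def network_output(x):
    """Score of the trained polynomial network on a new input."""
    return float(w @ poly_expand(x))
```

Because the objective is a convex least-squares problem, training is a single closed-form solve, which is the computational appeal of mean-squared error training mentioned in the abstracts above.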
Multimodality image integration for radiotherapy treatment, an easy approach
, 2001