Results 1–8 of 8
Statistical Image Object Recognition using Mixture Densities
2000
Abstract

Cited by 18 (11 self)
In this paper, we present a mixture density based approach to invariant image object recognition. To allow for a reliable estimation of the mixture parameters, the dimensionality of the feature space is optionally reduced by applying a robust variant of linear discriminant analysis. Invariance to affine transformations is achieved by incorporating invariant distance measures such as tangent distance. We propose an approach to estimating covariance matrices with respect to image variabilities, as well as a new approach to combined classification, called the virtual test sample method. Application of the proposed classifier to the well-known US Postal Service handwritten digits recognition task (USPS) yields an excellent error rate of 2.2%. We also propose a simple but effective approach to compensate for local image transformations, which significantly increases the performance of tangent distance on a database of 1,617 medical radiographs taken from clinical daily routine. Keywords: statistical pattern recognition, density estimation, invariant image object recognition, combined classification
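The abstract names the "virtual test sample method" but does not spell out the combination rule. A minimal sketch of the idea, under the assumption that the virtual samples are small translations of the test image and that per-class log-scores are simply summed over them (the paper's exact combination rule is not given in this snippet; `classify_virtual` and `shifts` are hypothetical helper names):

```python
import numpy as np

def shifts(img):
    """Virtual test samples: the image plus its four one-pixel shifts.
    np.roll wraps cyclically, standing in for proper edge-padded
    translations in this toy sketch."""
    out = [img]
    for axis in (0, 1):
        out.append(np.roll(img, 1, axis=axis))
        out.append(np.roll(img, -1, axis=axis))
    return out

def classify_virtual(img, log_score, classes=(0, 1)):
    """Sum the class-conditional log-score over all virtual samples
    and return the best-scoring class."""
    totals = {c: 0.0 for c in classes}
    for v in shifts(img):
        for c in classes:
            totals[c] += log_score(v.ravel(), c)
    return max(totals, key=totals.get)

# Toy scores: negative squared distance to a per-class mean vector.
mu = {0: np.zeros(16), 1: np.ones(16)}
log_score = lambda x, c: -np.sum((x - mu[c]) ** 2)
img = np.full((4, 4), 0.1)  # much closer to class 0 than class 1
print(classify_virtual(img, log_score))  # -> 0
```

In the actual classifier the per-sample score would be a Gaussian mixture log-likelihood rather than this toy distance.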
Learning of variability for invariant statistical pattern recognition
 In ECML 2001, 12th European Conference on Machine Learning
2001
Combined Classification of Handwritten Digits using the 'Virtual Test Sample Method'
2001
Abstract

Cited by 12 (2 self)
In this paper, we present a combined classification approach
Structured Covariance Matrices for Statistical Image Object Recognition
 In 22nd Symposium of the German Association for Pattern Recognition
2000
Abstract

Cited by 6 (3 self)
In this paper we present different approaches to structuring covariance matrices within statistical classifiers. This is motivated by the fact that the use of full covariance matrices is infeasible in many applications: on the one hand, because of the high number of model parameters that have to be estimated; on the other hand, because the computational complexity of a classifier based on full covariance matrices is very high. We propose the use of diagonal and band matrices to replace full covariance matrices, and we also show that computation of tangent distance is equivalent to using a structured covariance matrix within a statistical classifier. In the last few years, the use of Bayesian classifiers based on Gaussian mixture densities or kernel densities proved to be very efficient for many pattern recognition tasks, among them speech recognition, machine translation and object recognition in images [1, 2, 3, 7]. One drawback of this approach is the fact that ...
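The parameter-count motivation in the abstract is easy to make concrete. A minimal sketch, assuming d-dimensional feature vectors and a symmetric band structure with a given number of off-diagonals (the function names and the example dimensionality are illustrative, not from the paper):

```python
import numpy as np

def n_cov_params(d, structure="full", band=1):
    """Free parameters of one covariance matrix for d-dim features:
    full symmetric, diagonal, or symmetric banded with `band`
    sub/super-diagonals kept."""
    if structure == "full":
        return d * (d + 1) // 2
    if structure == "diagonal":
        return d
    if structure == "band":
        # main diagonal plus `band` off-diagonals (symmetric matrix)
        return sum(d - k for k in range(band + 1))
    raise ValueError(structure)

def diag_gauss_logpdf(x, mu, var):
    """Log-density of N(mu, diag(var)): O(d) per evaluation, versus
    O(d^2) for a full-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

d = 256  # e.g. 16x16 image features (illustrative size)
print(n_cov_params(d, "full"))      # 32896 per density
print(n_cov_params(d, "diagonal"))  # 256 per density
print(n_cov_params(d, "band", 2))   # 765 per density
```

With hundreds of mixture densities per class, the gap between 32,896 and 256 parameters per density is what makes full covariance matrices hard to estimate reliably.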
Tangent Distance Models
Abstract
We trace the evolution of pattern recognition models based on the tangent distance. We compare these models with the nearest neighbour classifier and with comparable attempts to use invariant manifolds in support vector machines.
Improving Automatic Speech Recognition Using Tangent Distance
Abstract
In this paper we present a new approach to variance modelling in automatic speech recognition (ASR) that is based on tangent distance (TD). Using TD, classifiers can be made invariant w.r.t. small transformations of the data. Such transformations generate a manifold in a high-dimensional feature space when applied to an observation vector. While conventional classifiers determine the distance between an observation and a prototype vector, TD approximates the minimum distance between their manifolds, resulting in classification that is invariant w.r.t. the underlying transformation. Recently, this approach was successfully applied in image object recognition. In this paper we describe how TD can be incorporated into ASR systems based on Gaussian mixture densities (GMD). The proposed method is embedded into a probabilistic framework. Experiments performed on the SieTill corpus for telephone-line recorded German digit strings show a significant improvement in comparison with a conventional GMD approach using a comparable amount of model parameters.
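The abstract describes TD as the minimum distance between transformation manifolds; in practice the manifold is linearised and the minimisation becomes a least-squares projection onto the tangent subspace. A minimal sketch of the one-sided variant, assuming the tangent vectors are already given as columns of a matrix (a toy setup, not the paper's ASR system):

```python
import numpy as np

def tangent_distance(x, mu, T):
    """One-sided tangent distance between observation x and prototype mu.

    T is a (d, k) matrix whose columns span the tangent subspace of the
    transformation manifold at x.  We minimise ||x + T a - mu|| over a,
    i.e. project the residual (mu - x) onto the tangent subspace."""
    a, *_ = np.linalg.lstsq(T, mu - x, rcond=None)
    return np.linalg.norm(x + T @ a - mu)

# Toy check: if mu is reachable from x along a tangent direction, the
# tangent distance is ~0 while the Euclidean distance is not.
x = np.zeros(3)
T = np.array([[1.0], [1.0], [0.0]])   # a single tangent vector
mu = x + 2.0 * T.ravel()              # mu lies on the tangent line
print(np.linalg.norm(mu - x))         # Euclidean distance, ~2.83
print(tangent_distance(x, mu, T))     # ~0.0
```

The two-sided variant additionally allows the prototype to move along its own tangent subspace; the structure of the minimisation is the same.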