Results 1–10 of 122
Generalized Relevance Learning Vector Quantization
 Neural Networks
, 2002
"... We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specif ..."
Abstract

Cited by 68 (23 self)
 Add to MetaCart
(Show Context)
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which suggests a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the Iris data from the UCI repository.
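The relevance-weighted distance at the core of this scheme can be illustrated with a minimal sketch (names and the toy data are hypothetical, not taken from the paper):

```python
import numpy as np

def weighted_distance(x, w, lam):
    """Relevance-weighted squared Euclidean distance d_lambda(x, w)."""
    return float(np.sum(lam * (x - w) ** 2))

# Toy setup: two prototypes and a relevance vector. A near-zero
# relevance factor effectively prunes that input dimension.
x = np.array([1.0, 9.0])
w1 = np.array([1.0, 0.0])   # prototype of class A
w2 = np.array([5.0, 9.0])   # prototype of class B
lam = np.array([1.0, 0.0])  # dimension 1 judged irrelevant

d1 = weighted_distance(x, w1, lam)  # 0.0: matches on the relevant dimension
d2 = weighted_distance(x, w2, lam)  # 16.0
winner = 'A' if d1 < d2 else 'B'
```

In the full method, both prototypes and the factors `lam` are adapted by stochastic gradient descent on the GLVQ cost; the sketch only shows how the learned factors rescale the metric.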
Adaptive relevance matrices in learning vector quantization
, 2009
"... We propose a new matrix learning scheme to extend relevance learning vector quantization (RLVQ), an efficient prototypebased classification algorithm, towards a general adaptive metric. By introducing a full matrix of relevance factors in the distance measure, correlations between different feature ..."
Abstract

Cited by 55 (31 self)
 Add to MetaCart
(Show Context)
We propose a new matrix learning scheme to extend relevance learning vector quantization (RLVQ), an efficient prototype-based classification algorithm, towards a general adaptive metric. By introducing a full matrix of relevance factors in the distance measure, correlations between different features and their importance for the classification scheme can be taken into account, and automated, general metric adaptation takes place during training. In comparison to the weighted Euclidean metric used in RLVQ and its variations, a full matrix is more powerful and can represent the internal structure of the data more appropriately. Large margin generalization bounds can be transferred to this case, leading to bounds which are independent of the input dimensionality. This also holds for local metrics attached to each prototype, which correspond to piecewise quadratic decision boundaries. The algorithm is tested in comparison to alternative LVQ schemes using an artificial data set, a benchmark multi-class problem from the UCI repository, and a problem from bioinformatics, the recognition of splice sites for C. elegans.
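The full-matrix distance described here has the general form d(x, w) = (x − w)ᵀ Λ (x − w). A common way to keep Λ positive semi-definite is the parameterization Λ = ΩᵀΩ; a minimal sketch (toy values, hypothetical names):

```python
import numpy as np

def matrix_distance(x, w, omega):
    """d_Lambda(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega,
    which guarantees Lambda is positive semi-definite."""
    diff = omega @ (x - w)      # (x-w)^T Omega^T Omega (x-w) = ||Omega (x-w)||^2
    return float(diff @ diff)

omega = np.array([[1.0, 1.0],
                  [0.0, 1.0]])  # off-diagonal entries capture feature correlations
x = np.array([2.0, 1.0])
w = np.array([1.0, 0.0])

d = matrix_distance(x, w, omega)
```

The off-diagonal entries of Λ are what let the metric exploit correlations between features, which a diagonal relevance vector cannot express.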
A general framework for unsupervised processing of structured data
 NEUROCOMPUTING
, 2004
"... ..."
(Show Context)
Supervised Neural Gas with General Similarity Measure
 Neural Processing Letters
, 2003
"... Prototype based classi cation oers intuitive and sparse models with excellent generalization ability. However, these models usually crucially depend on the underlying Euclidian metric; moreover, online variants likely suer from the problem of local optima. We here propose a generalization of learni ..."
Abstract

Cited by 37 (21 self)
 Add to MetaCart
(Show Context)
Prototype-based classification offers intuitive and sparse models with excellent generalization ability. However, these models usually depend crucially on the underlying Euclidean metric; moreover, online variants are likely to suffer from the problem of local optima. We here propose a generalization of learning vector quantization with three additional features: (I) it directly integrates neighborhood cooperation, hence is less affected by local optima; (II) the method can be combined with any differentiable similarity measure, whereby metric parameters such as relevance factors of the input dimensions can be adapted automatically according to the given data; (III) it obeys a gradient dynamics, hence shows very robust behavior, and the chosen objective is related to margin optimization.
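The neighborhood cooperation mentioned in point (I) can be sketched as a rank-based update in the style of neural gas, where every prototype of the correct class moves toward the example with a strength that decays with its distance rank (a simplified illustration with hypothetical names, not the paper's algorithm):

```python
import numpy as np

def sng_step(x, prototypes, lr=0.1, gamma=1.0):
    """One simplified supervised-neural-gas-style update: all prototypes
    of the correct class move toward x, weighted by their distance rank
    via h(k) = exp(-k / gamma). Spreading the update over several
    prototypes reduces the dependence on a single, possibly badly
    initialized, winner (local optima)."""
    dists = np.sum((prototypes - x) ** 2, axis=1)
    ranks = np.argsort(np.argsort(dists))        # rank 0 = closest prototype
    h = np.exp(-ranks / gamma)
    return prototypes + lr * h[:, None] * (x - prototypes)

protos = np.array([[0.0, 0.0], [4.0, 0.0]])      # correct-class prototypes
x = np.array([1.0, 0.0])
updated = sng_step(x, protos)
```

As `gamma` is annealed toward zero, `h` concentrates on the winner and the update reduces to winner-takes-all LVQ.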
Dynamics and generalization ability of LVQ algorithms
 Journal of Machine Learning Research
, 2006
"... Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics with numerous successful applications but, so far, limited theoretical background. We study LVQ rigorously within a simplifying model situation: two competing prototypes are trained from a sequence of ..."
Abstract

Cited by 29 (16 self)
 Add to MetaCart
(Show Context)
Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics with numerous successful applications but, so far, limited theoretical background. We study LVQ rigorously within a simplifying model situation: two competing prototypes are trained from a sequence of examples drawn from a mixture of Gaussians. Concepts from statistical physics and the theory of online learning allow for an exact description of the training dynamics in high-dimensional feature space. The analysis yields typical learning curves, convergence properties, and achievable generalization abilities. This is also possible for heuristic training schemes which do not relate to a cost function. We compare the performance of several algorithms, including Kohonen's LVQ1 and LVQ+/-, a limiting case of LVQ2.1. The former shows close to optimal performance, while LVQ+/- displays divergent behavior. We investigate how early stopping can overcome this difficulty. Furthermore, we study a crisp version of robust soft LVQ, which was recently derived from a statistical formulation. Surprisingly, it exhibits relatively poor generalization. Performance improves if a window for the selection of data is introduced; the resulting algorithm corresponds to cost function based LVQ2. The dependence of these results on the model parameters, for example, prior class probabilities, is investigated systematically; simulations confirm our analytical findings.
Keywords: prototype-based classification, learning vector quantization, Winner-Takes-All algorithms, online learning, competitive learning
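For reference, the classical LVQ1 heuristic analyzed here is a winner-takes-all rule: the closest prototype moves toward the example if their labels agree and away otherwise. A minimal sketch (toy one-dimensional data, hypothetical names):

```python
import numpy as np

def lvq1_step(x, y, prototypes, labels, lr=0.1):
    """One LVQ1 update: the winner-takes-all prototype moves toward the
    example if labels agree, away from it otherwise."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    j = int(np.argmin(d))                        # winning prototype
    sign = 1.0 if labels[j] == y else -1.0
    prototypes = prototypes.copy()
    prototypes[j] += sign * lr * (x - prototypes[j])
    return prototypes, j

protos = np.array([[0.0], [4.0]])
labels = [0, 1]
new_protos, j = lvq1_step(np.array([1.0]), 0, protos, labels)
```

LVQ2.1-type schemes update two prototypes per example (closest correct and closest wrong); the divergence of LVQ+/- discussed above stems from the repulsive part of such updates acting without restriction.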
Neural Maps in Remote Sensing Image Analysis
 Neural Networks
, 2003
"... We study the application of SelfOrganizing Maps for the analyses of remote sensing spectral images. Advanced airborne and satellitebased imaging spectrometers produce very highdimensional spectral signatures that provide key information to many scientific inves tigations about the surface and at ..."
Abstract

Cited by 25 (13 self)
 Add to MetaCart
We study the application of Self-Organizing Maps for the analysis of remote sensing spectral images. Advanced airborne and satellite-based imaging spectrometers produce very high-dimensional spectral signatures that provide key information to many scientific investigations about the surface and atmosphere of Earth and other planets. These new, sophisticated data demand new and advanced approaches to cluster detection, visualization, and supervised classification. In this article we concentrate on the issue of faithful topological mapping in order to avoid false interpretations of cluster maps created by an SOM. We describe several extensions of the standard SOM, developed in the past few years: the Growing Self-Organizing Map, magnification control, and Generalized Relevance Learning Vector Quantization, and demonstrate their effect on both low-dimensional traditional multispectral imagery and 200-dimensional hyperspectral imagery.
On the Generalization Ability of GRLVQ networks
 NEURAL PROCESSING LETTERS
"... We derive a generalization bound for prototypebased classifiers with adaptive metric. The bound depends on the margin of the classifier and is independent of the dimensionality of the data. It holds for classifiers based on the Euclidean metric extended by adaptive relevance terms. In particular, ..."
Abstract

Cited by 21 (16 self)
 Add to MetaCart
(Show Context)
We derive a generalization bound for prototype-based classifiers with adaptive metric. The bound depends on the margin of the classifier and is independent of the dimensionality of the data. It holds for classifiers based on the Euclidean metric extended by adaptive relevance terms. In particular, the result holds for relevance learning vector quantization [3] and generalized relevance learning vector quantization [11].
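The margin quantity that such bounds depend on is, for a given example, the gap between its distance to the closest wrong-class prototype and its distance to the closest correct-class prototype. A minimal sketch under the relevance-weighted metric (toy data and names are hypothetical):

```python
import numpy as np

def margin(x, protos, labels, y, lam):
    """Relevance-weighted margin of example (x, y): distance to the closest
    wrong-class prototype minus distance to the closest correct-class
    prototype. Positive margin means the example is classified correctly,
    and larger margins yield tighter generalization bounds."""
    d = np.sum(lam * (protos - x) ** 2, axis=1)
    d_plus = d[labels == y].min()    # closest correct-class prototype
    d_minus = d[labels != y].min()   # closest wrong-class prototype
    return float(d_minus - d_plus)

protos = np.array([[0.0, 0.0], [2.0, 0.0]])
labels = np.array([0, 1])
lam = np.array([0.5, 0.5])           # relevance factors, normalized
m = margin(np.array([0.5, 0.0]), protos, labels, 0, lam)
```

Dimensionality-independence of the bound is what makes relevance adaptation attractive for very high-dimensional data such as the hyperspectral imagery discussed elsewhere in this list.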
Distance learning in discriminative vector quantization
 Neural Computation
"... Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers which are based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the as ..."
Abstract

Cited by 19 (12 self)
 Add to MetaCart
(Show Context)
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance, corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this article, we consider full matrix adaptation in advanced LVQ schemes; in particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, which is a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation yields a significant improvement in classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
Relevancebased feature extraction for hyperspectral images
 IEEE Trans. on Neural Networks
, 2008
"... Abstract—Hyperspectral imagery affords researchers all discriminating details needed for fine delineation of many material classes. This delineation is essential for scientific research ranging from geologic to environmental impact studies. In a data mining scenario, one cannot blindly discard infor ..."
Abstract

Cited by 18 (8 self)
 Add to MetaCart
(Show Context)
Hyperspectral imagery affords researchers all discriminating details needed for fine delineation of many material classes. This delineation is essential for scientific research ranging from geologic to environmental impact studies. In a data mining scenario, one cannot blindly discard information because it can destroy discovery potential. In a supervised classification scenario, however, the preselection of classes presents one with an opportunity to extract a reduced set of meaningful features without degrading classification performance. Given the complex correlations found in hyperspectral data and the potentially large number of classes, meaningful feature extraction is a difficult task. We turn to the recent neural paradigm of generalized relevance learning vector quantization (GRLVQ) [B. Hammer and T. Villmann, Neural Networks, vol. 15, pp. 1059–1068, 2002], which is based on, and substantially extends, learning vector quantization (LVQ).
Regularization in Matrix Relevance Learning
, 2008
"... We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in th ..."
Abstract

Cited by 18 (10 self)
 Add to MetaCart
We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can have negative effects on the performance and may lead to instabilities in the training. Complementing the standard GMLVQ cost function by an appropriate regularization term prevents this unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated on artificial model data. Furthermore, we apply the scheme to a benchmark classification problem from the medical domain. For both data sets, we demonstrate the usefulness of regularization also in the case of rank-limited relevance matrices, i.e., GMLVQ with an implicit, low-dimensional representation of the data.
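One regularization term of the kind described, penalizing a (near-)singular relevance matrix via −ln det(ΩΩᵀ), can be sketched as follows (a plausible illustration under the parameterization Λ = ΩᵀΩ, not necessarily the paper's exact formulation; names are hypothetical):

```python
import numpy as np

def relevance_penalty(omega):
    """Regularization term -ln det(Omega Omega^T): it grows without bound
    as the relevance matrix Lambda = Omega^T Omega approaches singularity,
    so adding mu * penalty to the GMLVQ cost discourages collapsing all
    relevance onto a few feature-space dimensions (oversimplification)."""
    return -float(np.log(np.linalg.det(omega @ omega.T)))

balanced = np.eye(2) / np.sqrt(2)    # relevance spread evenly over dimensions
skewed = np.diag([0.99, 0.01])       # almost rank-one: one dimension dominates
p_balanced = relevance_penalty(balanced)
p_skewed = relevance_penalty(skewed)
```

The penalty is larger for the nearly rank-one matrix, so gradient descent on the regularized cost pushes the eigenvalues of Λ away from zero, counteracting the over-pronounced elimination of dimensions described in the abstract.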