Results 1 - 6 of 6
Distance learning in discriminative vector quantization
 Neural Computation
"... Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers which are based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the as ..."
Abstract

Cited by 19 (12 self)
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers which are based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance, corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this article, we consider full matrix adaptation in advanced LVQ schemes; in particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, which is a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
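The matrix adaptation described above replaces the Euclidean distance with an adaptive quadratic form d(x, w) = (x - w)^T Omega^T Omega (x - w), where the matrix Omega is learned from the data. A minimal sketch of classification under such a learned metric (the function names and the prototype set are illustrative, not from the cited papers):

```python
import numpy as np

def matrix_distance(x, w, omega):
    # Adaptive squared distance d(x, w) = (x - w)^T Omega^T Omega (x - w);
    # omega = identity recovers the ordinary squared Euclidean distance
    diff = omega @ (x - w)
    return float(diff @ diff)

def classify(x, prototypes, labels, omega):
    # Assign x the label of the closest prototype under the learned metric
    dists = [matrix_distance(x, w, omega) for w in prototypes]
    return labels[int(np.argmin(dists))]
```

In training, Omega would itself be updated by gradient steps on the LVQ cost function; the sketch only shows how the learned matrix enters the decision rule.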
Regularization in Matrix Relevance Learning
, 2008
"... We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in th ..."
Abstract

Cited by 18 (10 self)
We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can have negative effects on the performance and may lead to instabilities in the training. Complementing the standard GMLVQ cost function by an appropriate regularization term prevents this unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated in terms of artificial model data. Furthermore, we apply the scheme to a benchmark classification problem from the medical domain. For both data sets, we demonstrate the usefulness of regularization also in the case of rank-limited relevance matrices, i.e., GMLVQ with an implicit, low-dimensional representation of the data.
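One way such a regularization term can be realized, sketched here under the assumption that the penalty is a log-determinant term of the form -(mu/2) ln det(Omega Omega^T) added to the GMLVQ cost (this specific form is an assumption for illustration; the function name is hypothetical):

```python
import numpy as np

def regularization_term(omega, mu):
    # Penalty -(mu/2) * ln det(Omega Omega^T): the determinant shrinks
    # towards zero when the relevance matrix collapses onto few dimensions,
    # so the penalty grows and counteracts oversimplification
    sign, logdet = np.linalg.slogdet(omega @ omega.T)
    return -0.5 * mu * logdet
```

For an orthonormal Omega the term vanishes; as Omega becomes nearly singular the penalty diverges, which is what keeps the adapted metric from eliminating feature-space dimensions entirely.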
Distance measures for prototype based classification
"... The basic concepts of distance based classification are introduced in terms of clearcut example systems. The classical kNearestNeigbhor (kNN) classifier serves as the starting point of the discussion. Learning Vector Quantization (LVQ) is introduced, which represents the reference data by a few ..."
Abstract

Cited by 3 (3 self)
The basic concepts of distance-based classification are introduced in terms of clear-cut example systems. The classical k-Nearest-Neighbor (k-NN) classifier serves as the starting point of the discussion. Learning Vector Quantization (LVQ) is introduced, which represents the reference data by a few prototypes. This requires a data-driven training process; examples of heuristic and cost-function-based prescriptions are presented. While the most popular measure of dissimilarity in this context is the Euclidean distance, this choice is frequently made without justification. Alternative distances can yield better performance in practical problems. Several examples are discussed, including more general Minkowski metrics and statistical divergences for the comparison of, e.g., histogram data. Furthermore, the framework of relevance learning in LVQ is presented. There, parameters of adaptive distance measures are optimized in the training phase. A practical application of Matrix Relevance LVQ in the context of tumor classification illustrates the approach.
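The two starting points named above, a general Minkowski distance and the k-NN decision rule built on it, can be sketched as follows (a minimal illustration; the function names and the tiny data set are invented for the example):

```python
import numpy as np
from collections import Counter

def minkowski(x, y, p=2):
    # General Minkowski distance; p=2 recovers Euclidean, p=1 Manhattan
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

def knn_classify(x, data, labels, k=3, p=2):
    # Majority vote among the k reference samples closest to x
    order = np.argsort([minkowski(x, d, p) for d in data])
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

LVQ then compresses `data` into a handful of trained prototypes, so the same nearest-distance decision is taken against far fewer reference points.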
Fast Committee-Based Structure Learning
"... Current methods for causal structure learning tend to be computationally intensive or intractable for large datasets. Some recent approaches have speeded up the process by first making hard decisions about the set of parents and children for each variable, in order to break largescale problems into ..."
Abstract
Current methods for causal structure learning tend to be computationally intensive or intractable for large datasets. Some recent approaches have sped up the process by first making hard decisions about the set of parents and children for each variable, in order to break large-scale problems into sets of tractable local neighbourhoods. We use this principle in order to apply a structure learning committee for orienting edges between variables. We find that a combination of weak structure learners can be effective in recovering causal dependencies. Though such a formulation would be intractable for large problems at the global level, we show that it can run quickly when processing local neighbourhoods in turn. Experimental results show that this localized, committee-based approach has advantages over standard causal discovery algorithms both in terms of speed and accuracy.
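The committee idea, several weak learners each voting on how to orient an edge within a local neighbourhood, can be sketched in a few lines (a schematic illustration, not the paper's algorithm; the learner interface and vote symbols are assumptions):

```python
from collections import Counter

def committee_orient(edge, learners):
    # Each weak learner votes '->' or '<-' for the given edge, or None to
    # abstain; the committee returns the majority orientation, or None if
    # every learner abstained
    votes = Counter(v for v in (learn(edge) for learn in learners) if v is not None)
    if not votes:
        return None
    return votes.most_common(1)[0][0]
```

Run per local neighbourhood, each vote involves only a small variable set, which is what keeps the committee tractable where a global formulation would not be.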
External Supervisor:
, 2012
"... Dr. Marco Wiering (Artificial Intelligence,University of Groningen) ..."
Machine Learning Reports, Research Group on Computational Intelligence
"... We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in th ..."
Abstract
We present a regularization method which extends the recently introduced Generalized Matrix LVQ. This learning algorithm extends the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, relevance learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can have negative effects on the performance and may lead to instabilities in the training. Complementing the standard GMLVQ cost function by an appropriate regularization term prevents this unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated in terms of artificial model data. Furthermore, we apply the scheme to a benchmark classification problem from the medical domain. For both data sets, we demonstrate the usefulness of regularization also in the case of rank-limited relevance matrices, i.e., GMLVQ with an implicit, low-dimensional representation of the data.