### Citations

4590 | Time-delayed self-organizing maps - Kangas - 1990
Citation Context: ...ction The motivation for our earlier work on SOMs that learn metrics was that in data exploration (visualization, clustering) there often is no rigorous basis for choosing a metric. The SOM algorithm [7] can use many kinds of metrics, but the user has to make the choice, usually by fixing the distance measure of SOM to be Euclidean (or some other global alternative), and selecting, transforming, and sca...

3416 | UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html - Merz, Murphy - 1996
Citation Context: ...the winner units of test samples. This measure has a slightly unintuitive corollary: since it requires a density estimator, even though traditional SOMs are not [Table 1 interjected: Landsat Satellite Data [1], n=36, C=6, N=6435; Letter Recognition Data [1], n=16, C=26, N=20000; LVQ_PAK (Phoneme) [6], n=20, C=13, N=3656; TIMIT Data [11], n=12, C=41, N=14994. "Table 1: The data sets and their dimensionality (n), number of classes (C) and samples (..."]

545 | A nonlinear mapping for data structure analysis - Sammon - 1969
Citation Context: ...nce in the beginning. We introduce a more accurate distance computation algorithm for such cases; it combines graph search with the earlier methods. The algorithm is demonstrated for Sammon's mapping [10], a variant of multidimensional scaling. 2 The learning metrics principle: The data to be analyzed are vector-valued samples x ∈ R^n, called here primary data. The difference from the standard SOM sett...
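The snippet describes combining graph search with locally computed metric lengths to obtain longer distances. One standard way to realize that idea is to build a nearest-neighbour graph whose edge weights come from a local distance function, then run Dijkstra's algorithm over it. The sketch below is an illustrative reconstruction under that assumption, not the paper's code; `local_dist` and `k` are placeholder names.

```python
import heapq
import numpy as np

def graph_distances(points, local_dist, k=5):
    """Approximate global distances as shortest paths over a k-NN graph.

    points: (N, n) array of samples.
    local_dist: callable giving a locally valid distance between two points
                (e.g. a locally computed learning-metric length).
    Graph search then handles paths along which the metric varies.
    """
    N = len(points)
    # k nearest neighbours by Euclidean distance (index 0 is the point itself).
    eucl = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    nbrs = np.argsort(eucl, axis=1)[:, 1:k + 1]
    adj = [dict() for _ in range(N)]
    for a in range(N):
        for b in nbrs[a]:
            w = local_dist(points[a], points[b])
            adj[a][b] = w  # symmetric edges
            adj[b][a] = w
    # Dijkstra from every node over the locally weighted graph.
    dist = np.full((N, N), np.inf)
    for src in range(N):
        dist[src, src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[src, u]:
                continue
            for v, w in adj[u].items():
                if d + w < dist[src, v]:
                    dist[src, v] = d + w
                    heapq.heappush(heap, (d + w, v))
    return dist
```

For points on a line with `local_dist` equal to the coordinate difference, the shortest-path distance between the endpoints recovers the full separation even when `k` only connects immediate neighbours.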

62 | LVQ_PAK: A Program Package for the Correct Application of Learning Vector Quantization Algorithms - Kohonen, Kangas, et al. - 1992
Citation Context: ...ve corollary: since it requires a density estimator, even though traditional SOMs are not [Table 1 interjected: Landsat Satellite Data [1], n=36, C=6, N=6435; Letter Recognition Data [1], n=16, C=26, N=20000; LVQ_PAK (Phoneme) [6], n=20, C=13, N=3656; TIMIT Data [11], n=12, C=41, N=14994. "Table 1: The data sets and their dimensionality (n), number of classes (C) and samples (N)."] trained with such estimators, different estimators yield different re...

52 | Bankruptcy analysis with self-organizing maps in learning metrics - Kaski, Sinkkonen, et al. - 2001
Citation Context: ...he choice, usually by fixing the distance measure of SOM to be Euclidean (or some other global alternative), and selecting, transforming, and scaling the input variables. The learning metrics principle [4, 5] was developed to formalize the idea that it is possible to learn a metric from dependencies between the primary data and another set called auxiliary data. The SOM computed in learning metrics become...

22 | Flexible Discriminant by Mixture Models - Hastie, Tibshirani - 1996
Citation Context: ...onditional density generated by such a model is p̂(c|x) = Σ_j ψ_jc exp(−‖x − θ_j‖²/2σ²) / Σ_j ψ_j exp(−‖x − θ_j‖²/2σ²) (3). Both standard Parzen estimators and a joint density model called MDA2 [2] belong to the model family. However, instead of optimizing the model to maximize the joint likelihood of primary and auxiliary data, as in the above-mentioned algorithms, we directly optimize the con...
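Equation (3) in the snippet is the conditional class density produced by a Gaussian-mixture joint model. A minimal NumPy sketch of such an estimator, assuming component centers θ_j, component-class weights ψ_jc, and a shared isotropic width σ (the names `theta`, `psi`, and `sigma` are illustrative, not taken from the cited papers' code):

```python
import numpy as np

def conditional_density(x, theta, psi, sigma):
    """Estimate p(c|x) from a Gaussian-mixture joint model (eq. 3 sketch).

    x:     (n,)   query point.
    theta: (J, n) component centers.
    psi:   (J, C) component-class weights; psi[j].sum() plays the role of psi_j.
    sigma: shared isotropic kernel width.
    """
    # Unnormalized Gaussian responsibility of each component for x.
    g = np.exp(-np.sum((theta - x) ** 2, axis=1) / (2 * sigma ** 2))  # (J,)
    num = psi.T @ g            # (C,): sum_j psi_jc * g_j
    den = psi.sum(axis=1) @ g  # scalar: sum_j psi_j * g_j
    return num / den           # p(c|x); sums to 1 over classes
```

Because the denominator is the sum of the numerator over classes, the result is a proper conditional distribution regardless of how the weights are scaled.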

15 | Informative discriminant analysis - Kaski, Peltonen
Citation Context: ...ted by estimating p(c|x) explicitly. An alternative, not discussed here, is to define for the whole system a cost function so that the method can be asymptotically shown to use the learning metric (1) [3, 4]. 3 Computation of distances: In practice we need to resort to approximations since only a finite data set and limited computational resources are available. In this section we review approximation metho...
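In the learning-metrics literature the metric (1) referred to in the snippet is defined through the Fisher information of p(c|x) with respect to x, so that locally d²(x, x+dx) ≈ dxᵀ J(x) dx. A hedged sketch of that local quantity using a finite-difference gradient; `p_c_given_x` and all other names are illustrative assumptions, not the papers' code:

```python
import numpy as np

def fisher_metric(x, p_c_given_x, eps=1e-5):
    """Local learning-metric matrix J(x) (a sketch, not the papers' code).

    p_c_given_x: callable mapping an n-vector to class probabilities (C,).
    Uses central finite differences for the gradient of log p(c|x).
    """
    n = x.size
    p = p_c_given_x(x)                # (C,)
    grad_log = np.zeros((p.size, n))  # row c holds d log p(c|x) / dx
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        grad_log[:, i] = (np.log(p_c_given_x(x + e)) -
                          np.log(p_c_given_x(x - e))) / (2 * eps)
    # J(x) = sum_c p(c|x) * grad_c grad_c^T  (Fisher information w.r.t. x)
    return (grad_log.T * p) @ grad_log

def local_distance_sq(x, dx, p_c_given_x):
    """Squared local learning-metric distance d^2(x, x+dx) ≈ dx^T J(x) dx."""
    return dx @ fisher_metric(x, p_c_given_x) @ dx
```

For a one-dimensional logistic p(c|x) the Fisher information is p(0|x)·p(1|x), which gives J(0) = 0.25 at the decision boundary; this makes a convenient sanity check for the finite-difference version.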

14 | Principle of learning metrics for data analysis - Kaski, Sinkkonen - 2004
Citation Context: ...he choice, usually by fixing the distance measure of SOM to be Euclidean (or some other global alternative), and selecting, transforming, and scaling the input variables. The learning metrics principle [4, 5] was developed to formalize the idea that it is possible to learn a metric from dependencies between the primary data and another set called auxiliary data. The SOM computed in learning metrics become...

11 | A model-based distance for clustering - Rattray - 2000

9 | Learning more accurate metrics for self-organizing maps - Peltonen, Klami, et al. - 2002
Citation Context: ...that the clusters discriminate between companies with high and low bankruptcy risk [5]. The first work used a coarse estimate for the metric that was fast to compute but only accurate locally. We later [8] developed more accurate ways to compute longer distances. In this work the details have been finalized, and the resulting algorithms are compared to provide a recommendation of which variants to use. C...