A Growing Neural Gas Network Learns Topologies
Advances in Neural Information Processing Systems 7, 1995
"... An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebblike learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this m ..."
Abstract

Cited by 401 (5 self)
An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation. 1 INTRODUCTION In unsupervised learning settings only input data is available but no information on the desired output. What can the goal of learning be in this situation? One possible objective is dimensionality reduction: finding a low-dimensional subspace of the input vector space containing most or all of the input data. Linear subspaces with this property can be computed directly by principal component analysis or iteratively with a number of network models …
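The growing-neural-gas procedure this abstract describes can be illustrated with a toy NumPy sketch. The function name and all parameter values below are illustrative assumptions, not the paper's; unit removal on edge loss is omitted for brevity. Units compete for inputs, a Hebb-like edge joins the two nearest units, and new units are inserted where accumulated error is largest:

```python
import numpy as np

def growing_neural_gas(data, max_units=25, lam=100, eps_b=0.05, eps_n=0.006,
                       a_max=50, alpha=0.5, d=0.995, seed=0):
    """Toy GNG sketch: Hebb-like edges between the two nearest units,
    insertion near the highest-error unit, aging-based edge pruning."""
    rng = np.random.default_rng(seed)
    W = [data[rng.integers(len(data))].copy() for _ in range(2)]  # unit positions
    E = [0.0, 0.0]                       # accumulated error per unit
    edges = {}                           # (i, j) -> age, with i < j
    for step in range(1, 20 * lam):
        x = data[rng.integers(len(data))]
        dists = [np.sum((w - x) ** 2) for w in W]
        s1, s2 = np.argsort(dists)[:2]              # two nearest units
        E[s1] += dists[s1]
        W[s1] += eps_b * (x - W[s1])                # move winner toward input
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                  # age edges at the winner
                other = j if i == s1 else i
                W[other] += eps_n * (x - W[other])  # drag topological neighbors
        edges[tuple(sorted((s1, s2)))] = 0          # Hebb-like edge, age reset
        edges = {e: a for e, a in edges.items() if a <= a_max}  # prune old edges
        if step % lam == 0 and len(W) < max_units:
            q = int(np.argmax(E))                   # unit with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: E[n])
                W.append(0.5 * (W[q] + W[f]))       # insert between q and f
                E[q] *= alpha
                E[f] *= alpha
                E.append(E[q])
                r = len(W) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        E = [e * d for e in E]                      # global error decay
    return np.array(W), set(edges)
```

Because insertion continues until `max_units` is reached rather than a fixed schedule ending, the network keeps growing as long as resources allow, mirroring the "no parameters which change over time" property the abstract emphasizes.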
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Journal of Machine Learning Research, 2003
"... The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. ..."
Abstract

Cited by 385 (10 self)
The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation.
Principal manifolds and nonlinear dimensionality reduction via tangent space alignment
SIAM Journal on Scientific Computing, 2004
"... Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized ..."
Abstract

Cited by 261 (15 self)
Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.
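The two-stage scheme in this abstract (local tangent fitting, then global alignment) can be sketched compactly. This is a simplified reading of the tangent-space-alignment idea, not the authors' implementation; the function name and the dense full eigendecomposition (rather than a partial one) are assumptions made for brevity:

```python
import numpy as np

def ltsa(X, n_neighbors=8, n_components=2):
    """Sketch of local tangent space alignment: fit a local affine (PCA)
    subspace around each point, then align the local frames globally via
    the bottom eigenvectors of an alignment matrix."""
    n = X.shape[0]
    # k nearest neighbors of each point (including the point itself)
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    nbrs = np.argsort(D, axis=1)[:, :n_neighbors]
    B = np.zeros((n, n))
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)            # center the neighborhood
        # left singular vectors span the local tangent coordinates
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.ones((n_neighbors, 1)) / np.sqrt(n_neighbors),
                       U[:, :n_components]])         # orthonormal local frame
        # accumulate the orthogonal-projection alignment term
        B[np.ix_(idx, idx)] += np.eye(n_neighbors) - G @ G.T
    # bottom eigenvectors (skipping the constant one) give the embedding
    vals, vecs = np.linalg.eigh(B)
    return vecs[:, 1:n_components + 1]
```

For data lying exactly on a d-dimensional affine subspace, the bottom d+1 eigenvalues of the alignment matrix vanish, which is what makes the "second-order accuracy" error analysis mentioned above possible.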
Constructive Incremental Learning From Only Local Information
Neural Computation
"... We introduce a constructive, incremental learning system for regression problems that models data by means of spatially localized linear models. In contrast to other approaches, the size and shape of the receptive field of each locally linear model as well as the parameters of the locally linear mod ..."
Abstract

Cited by 208 (40 self)
We introduce a constructive, incremental learning system for regression problems that models data by means of spatially localized linear models. In contrast to other approaches, the size and shape of the receptive field of each locally linear model as well as the parameters of the locally linear model itself are learned independently, i.e., without the need for competition or any other kind of communication. Independent learning is accomplished by incrementally minimizing a weighted local cross-validation error. As a result, we obtain a learning system that can allocate resources as needed while dealing with the bias-variance dilemma in a principled way. The spatial localization of the linear models increases robustness towards negative interference. Our learning system can be interpreted as a nonparametric adaptive bandwidth smoother, as a mixture of experts where the experts are trained in isolation, and as a learning system which profits from combining independent expert knowledge on the same problem. This paper illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.
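The core idea of spatially localized linear models can be sketched in a few lines. This sketch fixes the receptive-field centers and widths in advance, whereas the paper adapts them incrementally via local cross-validation; the function names and the Gaussian kernel form are assumptions:

```python
import numpy as np

def fit_local_models(X, y, centers, width=0.25):
    """Fit one weighted-least-squares linear model per Gaussian receptive
    field. Each model is trained independently, with no communication."""
    models = []
    for c in centers:
        w = np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * width ** 2))
        A = np.hstack([X - c, np.ones((len(X), 1))])   # local linear basis
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        models.append((c, beta))
    return models

def predict_local(models, X, width=0.25):
    """Blend the local predictions, weighted by receptive-field activation.
    `width` must match the value used when fitting."""
    num = np.zeros(len(X))
    den = np.zeros(len(X))
    for c, beta in models:
        w = np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * width ** 2))
        A = np.hstack([X - c, np.ones((len(X), 1))])
        num += w * (A @ beta)
        den += w
    return num / np.maximum(den, 1e-12)
```

Because each local fit sees only its own weighted data, a new receptive field can be added anywhere without retraining the others, which is the "negative interference" robustness the abstract refers to.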
Situs: A package for docking crystal structures into low-resolution maps from electron microscopy
J. Struct. Biol., 1999
"... Threedimensional image reconstructions of largescale protein aggregates are routinely determined by electron microscopy (EM). We combine lowresolution EM data with highresolution structures of proteins determined by xray crystallography. A set of visualization and analysis procedures, termed the ..."
Abstract

Cited by 111 (7 self)
Three-dimensional image reconstructions of large-scale protein aggregates are routinely determined by electron microscopy (EM). We combine low-resolution EM data with high-resolution structures of proteins determined by x-ray crystallography. A set of visualization and analysis procedures, termed the Situs package, has been developed to provide an efficient and robust method for the localization of protein subunits in low-resolution data. Topology-representing neural networks are employed to vector-quantize and to correlate features within the structural data sets. Microtubules decorated with kinesin-related ncd motors are used as model aggregates to demonstrate the utility of this package of routines. The precision of the docking has allowed for the extraction of unique conformations of the macromolecules and is limited only by the reliability of the underlying structural data. © 1999 Academic Press. Key Words: topology representing neural networks; multiresolution; visualization; macromolecular …
Mapping a manifold of perceptual observations
Advances in Neural Information Processing Systems 10, 1998
"... Nonlinear dimensionality reduction is formulated here as the problem of trying to find a Euclidean featurespace embedding of a set of observations that preserves as closely as possible their intrinsic metric structure – the distances between points on the observation manifold as measured along geod ..."
Abstract

Cited by 88 (2 self)
Nonlinear dimensionality reduction is formulated here as the problem of trying to find a Euclidean feature-space embedding of a set of observations that preserves as closely as possible their intrinsic metric structure – the distances between points on the observation manifold as measured along geodesic paths. Our isometric feature mapping procedure, or isomap, is able to reliably recover low-dimensional nonlinear structure in realistic perceptual data sets, such as a manifold of face images, where conventional global mapping methods find only local minima. The recovered map provides a canonical set of globally meaningful features, which allows perceptual transformations such as interpolation, extrapolation, and analogy – highly nonlinear transformations in the original observation space – to be computed with simple linear operations in feature space.
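The geodesic-preserving embedding this abstract describes reduces to three steps: build a neighborhood graph, compute shortest-path distances along it, and apply classical MDS. The sketch below assumes a connected graph and uses an O(n³) Floyd-Warshall pass for clarity; the function name and parameters are illustrative, not the authors' code:

```python
import numpy as np

def isomap(X, n_neighbors=4, n_components=2):
    """Sketch of isometric feature mapping: geodesic distances along a
    k-nearest-neighbor graph, embedded by classical MDS."""
    n = X.shape[0]
    D = np.sqrt(np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
    # neighborhood graph: keep each point's k nearest neighbors
    G = np.full((n, n), np.inf)
    order = np.argsort(D, axis=1)
    for i in range(n):
        for j in order[i, 1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    np.fill_diagonal(G, 0.0)
    # geodesic distances via Floyd-Warshall (fine for a small sketch)
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # classical MDS on the squared geodesic distance matrix
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (G ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

On a one-dimensional curve in the plane, the dominant recovered coordinate tracks arc length along the curve, which is the "intrinsic metric structure" the abstract asks the embedding to preserve.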
A Survey of Fuzzy Clustering Algorithms for Pattern Recognition – Part II
"... the concepts of fuzzy clustering and soft competitive learning in clustering algorithms is proposed on the basis of the existing literature. Moreover, a set of functional attributes is selected for use as dictionary entries in the comparison of clustering algorithms. In this paper, five clustering a ..."
Abstract

Cited by 81 (2 self)
… the concepts of fuzzy clustering and soft competitive learning in clustering algorithms is proposed on the basis of the existing literature. Moreover, a set of functional attributes is selected for use as dictionary entries in the comparison of clustering algorithms. In this paper, five clustering algorithms taken from the literature are reviewed, assessed and compared on the basis of the selected properties of interest. These clustering models are 1) self-organizing map (SOM); 2) fuzzy learning vector quantization (FLVQ); 3) fuzzy adaptive resonance theory (fuzzy ART); 4) growing neural gas (GNG); 5) fully self-organizing simplified adaptive resonance theory (FOSART). Although our theoretical comparison is fairly simple, it yields observations that may appear paradoxical. First, only FLVQ, fuzzy ART, and FOSART exploit concepts derived from fuzzy set theory (e.g., relative and/or absolute fuzzy membership functions). Secondly, only SOM, FLVQ, GNG, and FOSART employ soft competitive learning mechanisms, which are affected by asymptotic misbehaviors in the case of FLVQ, i.e., only SOM, GNG, and FOSART are considered effective fuzzy clustering algorithms. Index Terms—Ecological net, fuzzy clustering, modular architecture, relative and absolute membership function, soft and hard competitive learning, topologically correct mapping.
Generalized Relevance Learning Vector Quantization
Neural Networks, 2002
"... We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specif ..."
Abstract

Cited by 68 (23 self)
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which suggests a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the Iris data from the UCI repository.
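The adaptive-metric idea can be illustrated with a deliberately simplified sketch: an LVQ1-style prototype update combined with a heuristic relevance update for a diagonal metric. This is in the spirit of GRLVQ but is not the paper's exact GLVQ cost-function gradient; all names, learning rates, and the relevance-update rule are assumptions:

```python
import numpy as np

def train_relevance_lvq(X, y, n_epochs=30, lr_w=0.05, lr_l=0.01, seed=0):
    """Simplified relevance-weighted LVQ sketch: one prototype per class,
    a diagonal relevance metric lam adapted alongside the prototypes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    lam = np.ones(X.shape[1]) / X.shape[1]      # relevance per input dimension
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            d = ((protos - X[i]) ** 2 * lam).sum(axis=1)   # weighted distances
            w = int(np.argmin(d))
            sign = 1.0 if classes[w] == y[i] else -1.0
            protos[w] += sign * lr_w * lam * (X[i] - protos[w])
            # lower the relevance of dimensions with large error on correct wins
            lam -= sign * lr_l * (X[i] - protos[w]) ** 2
            lam = np.clip(lam, 1e-6, None)
            lam /= lam.sum()                    # keep relevances normalized
    return protos, lam

def predict(protos, lam, classes, X):
    """Classify by nearest prototype under the learned diagonal metric."""
    d = ((protos[None] - X[:, None]) ** 2 * lam).sum(axis=-1)
    return classes[np.argmin(d, axis=1)]
```

On data where one dimension carries the class signal and another is noise, the learned relevance for the noisy dimension shrinks toward zero, which is the pruning signal the abstract describes.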
Using Humanoid Robots to Study Human Behavior
2000
"... xcept the eye DOFs, which have no load sensing. The robot is currently mounted at the pelvis, so that we do not have to worry about balance and can focus our studies on upperbody movement. We plan to explore fullbody motion in the future, probably with a new robot design. Inverse kinematics and ..."
Abstract

Cited by 67 (17 self)
… except the eye DOFs, which have no load sensing. The robot is currently mounted at the pelvis, so that we do not have to worry about balance and can focus our studies on upper-body movement. We plan to explore full-body motion in the future, probably with a new robot design. Inverse kinematics and trajectory formation One problem that robots with eyes face is visually guided manipulation: for example, choosing appropriate joint angles that let it reach out and touch a visual target. We use learning algorithms (described later in the article) to learn the relationship between where the robot senses its limb is using joint sensors and where the robot sees its limb (referred to in robotics as a model of the forward kinematics) …
Building "Fungus Eaters": Design Principles of Autonomous Agents
In Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior SAB96 (From Animals to Animats), 1996
"... We describe a set of design principles for building "Fungus Eaters". "Fungus Eaters" are complete autonomous systems. The goal is to extract and describe in a compact way a large part of the insights which have been acquired in the animats field. The principles have been develope ..."
Abstract

Cited by 62 (6 self)
We describe a set of design principles for building "Fungus Eaters". "Fungus Eaters" are complete autonomous systems. The goal is to extract and describe in a compact way a large part of the insights which have been acquired in the animats field. The principles have been developed from a cognitive science perspective. Although they represent only a very modest beginning, they make immediately clear what sort of ideas about intelligence and cognition they endorse. They all contrast sharply with classical thinking. Moreover, they provide powerful heuristics for design. 1 Introduction In their review paper of the first SAB conference in 1990, Jean-Arcady Meyer and Agnès Guillot argue that the animat approach will play an important role in resolving some of the fundamental controversies in the study of intelligence or cognition (Meyer and Guillot, 1991). Four years later, at the third SAB conference, they propose three types of goals for animat research: short term, intermediate term, and …