Results 1–10 of 20
Extreme learning machine: Theory and applications
, 2006
"... It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: (1) the slow gradientbased learning algorithms are extensively used to train neural net ..."
Abstract

Cited by 121 (9 self)
It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. Experimental results on a few artificial and real benchmark function approximation and classification problems, including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.
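The two-step recipe this abstract describes (pick the hidden-layer parameters at random, then solve a single least-squares problem for the output weights) can be sketched in a few lines of NumPy. The tanh activation, the node count, and all function names below are illustrative choices for the sketch, not details taken from the paper:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a single-hidden-layer network the ELM way:
    random hidden parameters, analytically determined output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer parameters are chosen randomly and never tuned.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    # Minimum-norm least-squares solution of H @ beta = T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one pseudoinverse, there is no iterative tuning at all, which is where the claimed speed advantage over gradient-based training comes from.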
Universal approximation using incremental constructive feedforward networks with . . .
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2005
"... According to conventional neural network theories, singlehiddenlayer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed adjustable. However, as observed in most neural network implem ..."
Abstract

Cited by 97 (15 self)
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjusted. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves, by an incremental constructive method, that in order to let SLFNs work as universal approximators, one may simply choose hidden nodes randomly and then only needs to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation function for additive nodes can be any bounded nonconstant piecewise continuous function g: R -> R, and the activation function for RBF nodes can be any integrable piecewise continuous function g: R -> R with ∫_R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a new network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.
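A minimal sketch of the incremental construction described above: each newly added random node is given the output weight that best cancels the current residual error, so the residual norm never increases as nodes accumulate. The tanh node type and all names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ielm_fit(X, T, max_nodes, seed=0):
    """Incrementally add random hidden nodes; each node's output weight
    is fixed analytically from the current residual and never revisited."""
    rng = np.random.default_rng(seed)
    e = T.astype(float)               # residual error, starts at the target
    nodes = []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)        # activation vector of the new node
        beta = (e @ h) / (h @ h)      # least-squares weight for this node alone
        e = e - beta * h              # shrink the residual
        nodes.append((w, b, beta))
    return nodes

def ielm_predict(X, nodes):
    return sum(beta * np.tanh(X @ w + b) for w, b, beta in nodes)
```

Each step is a one-dimensional least-squares fit of the residual onto the new node's activations, which is why the network stays a universal approximator even though no node is ever tuned.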
A fast and accurate online sequential learning algorithm for feedforward networks
 IEEE Trans. Neural Netw
, 2006
"... Abstract—In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OSELM) and ca ..."
Abstract

Cited by 49 (7 self)
In this paper, we develop an online sequential learning algorithm for single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast while generalizing better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification, and time-series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance. Index Terms—Extreme learning machine (ELM), growing and pruning RBF network (GAP-RBF), GGAP-RBF, minimal resource allocation network (MRAN), online sequential ELM (OS-ELM), resource allocation network (RAN), resource allocation network via extended Kalman filter (RAN-EKF), stochastic gradient descent backpropagation (SGBP).
Multicategory Classification Using An Extreme Learning Machine for Microarray Gene Expression Cancer Diagnosis
 no. 3, pp. 485–495
, 2007
"... Abstract—In this paper, the recently developed Extreme Learning Machine (ELM) is used for directing multicategory classification problems in the cancer diagnosis area. ELM avoids problems like local minima, improper learning rate and overfitting commonly faced by iterative learning methods and compl ..."
Abstract

Cited by 16 (0 self)
In this paper, the recently developed Extreme Learning Machine (ELM) is applied to multicategory classification problems in the cancer diagnosis area. ELM avoids problems like local minima, improper learning rates, and overfitting commonly faced by iterative learning methods, and completes training very fast. We have evaluated the multicategory classification performance of ELM on three benchmark microarray data sets for cancer diagnosis, namely the GCM, Lung, and Lymphoma data sets. The results indicate that ELM produces comparable or better classification accuracies, with reduced training time and implementation complexity, than artificial neural network methods like conventional backpropagation ANN and Linder's SANN, and Support Vector Machine methods like SVM-OVO and Ramaswamy's SVM-OVA. ELM also achieves better accuracies for classification of individual categories.
Incremental extreme learning machine with fully complex hidden nodes
, 2007
"... Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks 17(4) (2006) 879–892] has recently proposed an incremental extreme learning machine (IELM), which randomly adds hidden nodes incrementally and analytically ..."
Abstract

Cited by 12 (3 self)
Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks 17(4) (2006) 879–892] have recently proposed an incremental extreme learning machine (I-ELM), which randomly adds hidden nodes incrementally and analytically determines the output weights. Although the hidden nodes are generated randomly, the network constructed by I-ELM remains a universal approximator. This paper extends I-ELM from the real domain to the complex domain. We show that, as long as the hidden-layer activation function is complex continuous discriminatory or complex bounded nonlinear piecewise continuous, I-ELM can still approximate any target function in the complex domain. The universal approximation capability of I-ELM in the complex domain is further verified on two function approximation problems and one channel equalization problem.
Fast modular network implementation for support vector machines
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2005
"... Support vector machines (SVMs) have been extensively used. However, it is known that SVMs face difficulty in solving large complex problems due to the intensive computation involved in their training algorithms, which are at least quadratic with respect to the number of training examples. This paper ..."
Abstract

Cited by 11 (3 self)
Support vector machines (SVMs) have been extensively used. However, it is known that SVMs face difficulty in solving large complex problems due to the intensive computation involved in their training algorithms, which are at least quadratic in the number of training examples. This paper proposes a new, simple, and efficient network architecture consisting of several SVMs, each trained on a small subregion of the whole data sampling space, and the same number of simple neural quantizer modules, which inhibit the outputs of all the remote SVMs and allow only a single local SVM to fire (produce actual output) at any time. In principle, this region-computing-based modular network method can significantly reduce the learning time of SVM algorithms without sacrificing much generalization performance. Experiments on a few large complex real benchmark problems demonstrate that our method can be significantly faster than single SVMs without losing much generalization performance.
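The region-based idea (a quantizer routes each input to exactly one local expert) can be sketched as follows. For a self-contained illustration, local linear least-squares models stand in for the per-region SVMs and nearest-centroid routing stands in for the neural quantizer modules, so every name and modeling choice here is an assumption of the sketch, not the paper's method:

```python
import numpy as np

def fit_modular(X, T, n_regions, seed=0):
    """Partition the sampling space into regions and fit one local expert
    per region; local affine least-squares fits stand in for the SVMs."""
    rng = np.random.default_rng(seed)
    # Crude quantizer: centroids sampled from the data themselves.
    centers = X[rng.choice(len(X), n_regions, replace=False)]
    assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    experts = []
    for r in range(n_regions):
        Xr, Tr = X[assign == r], T[assign == r]
        A = np.hstack([Xr, np.ones((len(Xr), 1))])   # affine features
        coef, *_ = np.linalg.lstsq(A, Tr, rcond=None)
        experts.append(coef)
    return centers, experts

def predict_modular(x, centers, experts):
    # Quantizer: only the nearest region's expert is allowed to fire.
    r = np.argmin(((centers - x) ** 2).sum(-1))
    return np.append(x, 1.0) @ experts[r]
```

Each expert sees only its own subregion's data, so training cost per expert drops sharply, which is the source of the speedup the abstract claims for the SVM case.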
Protein Sequence Classification Using Extreme Learning Machine
 In: IJCNN'05
, 2005
"... Traditionally, two protein sequences are classified into the same class if they have high homology in terms of feature patterns extracted through sequence alignment algorithms. These algorithms compare an unseen protein sequence with all the identified protein sequences and returned the higher score ..."
Abstract

Cited by 10 (1 self)
Traditionally, two protein sequences are classified into the same class if they have high homology in terms of feature patterns extracted through sequence alignment algorithms. These algorithms compare an unseen protein sequence with all the identified protein sequences and return the highest-scoring ones. As protein sequence databases are very large, exhaustive comparison against all existing protein sequences is very time-consuming. Therefore, there is a need to build an improved classification system for effectively identifying protein sequences. In this paper, a recently developed machine learning algorithm referred to as the Extreme Learning Machine (ELM) is used to classify protein sequences from ten classes of superfamilies downloaded from a public domain database. A comparative study of system performance is conducted between ELM and the main conventional neural network classifier, backpropagation (BP) neural networks. Results show that ELM needs up to four orders of magnitude less training time than the BP network. The classification accuracy of ELM is also higher than that of the BP network. For a given network architecture, ELM has no control parameters (e.g., stopping criteria, learning rate, learning epochs) to be manually tuned and can be implemented easily.
A New Machine Learning Paradigm for Terrain Reconstruction
"... Abstract—Terrain models that permit multiresolution access are essential for model predictive control of unmanned aerial vehicles in lowlevel flights. The authors present the extreme learning machine (ELM), a recently proposed learning paradigm, as a mechanism for learning the stored digital elevat ..."
Abstract

Cited by 2 (1 self)
Abstract—Terrain models that permit multiresolution access are essential for model predictive control of unmanned aerial vehicles in lowlevel flights. The authors present the extreme learning machine (ELM), a recently proposed learning paradigm, as a mechanism for learning the stored digital elevation information to allow multiresolution access. We give results of simulations designed to compare the performance of our approach with two other approaches for multiresolution access, namely: 1) linear interpolation on Delaunay triangles of the sampled terrain data points and 2) terrain learning using support vector machines (SVMs). The results show that to achieve the same mean square error during access, the memory needed in our approach is significantly lower. Additionally, the offline training time for the ELM network is much less than that for the SVM. Index Terms—Delaunay triangulation, extreme learning machine, radial basis function (RBF) networks, support vector machine (SVM), terrain mapping. I.