## Hybrid Learning Enhancement of RBF Network with Particle Swarm Optimization

### Citations

79 |
Particle swarm optimization
- Kennedy, Eberhart
- 1995
Citation Context: ...aining algorithms in RBF models: supervised and unsupervised learning. 4 Particle Swarm Optimization The Particle Swarm Optimization (PSO) algorithm, originally introduced by Kennedy and Eberhart in 1995 [5], simulates the knowledge evolvement of a social organism, in which each individual is treated as an infinitesimal particle in the n-dimensional space, with the position vector and velocity vector of p... |

44 |
Document clustering using particle swarm optimization
- Cui, Potok, et al.
- 2005
Citation Context: ...restricts the real applications of the algorithms. In addition, Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Self-Organizing Maps (SOM) have also been considered in the clustering process [4]. In this study, PSO is explored to enhance the RBF learning mechanism. The paper is structured as follows. In Section 2, related work about RBF Network training is introduced. Section 3 presents RBF Network... |

42 |
Radial basis function networks for classifying process faults
- Leonard, Kramer
- 1991
Citation Context: ...earity in the hidden layer neurons. The output layer has no nonlinearity and its connections are only weighted; the connections from the input to the hidden layer are not weighted [1]. Due to their better approximation capabilities, simpler network structures and faster learning algorithms, RBF Networks have been widely applied in many science and engineering fields. It is three l... |

14 |
Particle Swarm Optimization for Evolving Artificial Neural Network
- Zhang, Shao, et al.
- 2000
Citation Context: ...benign examples and 241 are malignant examples. The first 349 examples of the whole data set were used for training, the following 175 examples for validation, and the final 175 examples for testing [8]. The ending conditions of PSO-RBFN are set to a minimum error of 0.005 or a maximum iteration of 10000. Alternatively, the stopping conditions for BP-RBFN are set to a minimum error of 0.005 or maximum i... |

8 |
Training Feedforward Neural Network Using Multi-phase Particle Swarm Optimization
- Al-kazemi, Mohan
- 2002
Citation Context: ...orward propagations = 2 × maximum number of iterations), while the maximum number of iterations in PSO-RBFN is set to 10000 (number of forward propagations = swarm size × maximum number of iterations) [9]. The stopping criteria are reaching the maximum number of iterations or the minimum error. The architecture of the RBF Network was fixed at one hidden layer (number of inpu... |

7 |
A modified error function for the backpropagation algorithm
- Wang, Tang, et al.
- 2004
Citation Context: ...(t)) and Vi(t) = (Vi1(t), Vi2(t), ..., Vin(t)). The particles move according to the following equations:
Vid(t+1) = w · Vid(t) + c1 r1 (Pid(t) − Xid(t)) + c2 r2 (Pgd(t) − Xid(t))   (7)
Xid(t+1) = Xid(t) + Vid(t+1)   (8)
i = 1, 2, ..., M; d = 1, 2, ..., n
where c1 and c2 are the acceleration coefficients, and the vector Pi = (Pi1, Pi2, ..., Pin) is the best previous position (the po... |
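The standard PSO update rules (7)-(8) quoted in this excerpt can be sketched in a few lines of Python. This is a minimal illustration only; the function name `pso_step` and the plain-list representation of the swarm are our own, not from the paper.

```python
import random

def pso_step(X, V, P, g, w=0.9, c1=2.0, c2=2.0):
    """One PSO update for M particles in n dimensions.

    X, V -- current positions and velocities (lists of n-element lists)
    P    -- each particle's best previous position
    g    -- index of the swarm's globally best particle
    """
    M, n = len(X), len(X[0])
    for i in range(M):
        for d in range(n):
            r1, r2 = random.random(), random.random()  # fresh r1, r2 per dimension
            # V_id(t+1) = w*V_id(t) + c1*r1*(P_id - X_id) + c2*r2*(P_gd - X_id)   (7)
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (P[i][d] - X[i][d])
                       + c2 * r2 * (P[g][d] - X[i][d]))
            # X_id(t+1) = X_id(t) + V_id(t+1)                                     (8)
            X[i][d] += V[i][d]
    return X, V
```

Note that a particle sitting at both its personal best and the global best with zero velocity stays put, since both attraction terms vanish.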

6 |
Training Radial Basis Function Networks with Particle Swarms
- Liu
Citation Context: ...The second stage involves weight establishment by connecting the hidden layer with the output layer. This is determined by Singular Value Decomposition (SVD) or Least Mean Squares (LMS) algorithms [2]. Clustering algorithms have been successfully used in training RBF Networks, such as the Optimal Partition Algorithm (OPA), to determine the centers and widths of RBFs. In most traditional algorithms, such... |

6 |
A modified particle swarm
- Shi, Eberhart
Citation Context: ..., 1). Generally, the value of Vid is restricted to the interval [−Vmax, Vmax]. The inertia weight w was first introduced by Shi and Eberhart in order to accelerate the convergence speed of the algorithm [6]. 5 BP-RBF Network In our paper, the standard BP is selected as the simplest and most widely used algorithm to train feed-forward RBF Networks and considered for the full-training... |
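The velocity restriction to [−Vmax, Vmax] described in this excerpt amounts to a per-component clamp. A small sketch (the name `clamp_velocity` and the list-of-lists layout are our own):

```python
def clamp_velocity(V, vmax):
    """Restrict each velocity component to the interval [-vmax, vmax]."""
    return [[max(-vmax, min(vmax, v)) for v in row] for row in V]
```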

5 |
Training RBF Network with Selective Backpropagation
- Vakil, Pavesic
- 2004
Citation Context: ...If one epoch of training is finished, repeat the training for another epoch. The BP-RBF Network does not need the momentum term that is common for the MLP; it does not help in training the RBF Network [14]. 6 PSO-RBF Network PSO has been applied to improve the RBF Network in various aspects, such as network connections (centers, weights), network architecture and learning algorithm. The main process in this... |

5 |
Training RBF neural network via quantum-behaved particle swarm optimization
- Sun, Xu, et al.
- 2006
Citation Context: ...d Chaos. An innovative Hybrid Recursive Particle Swarm Optimization (HRPSO) learning algorithm with normalized fuzzy c-means (NFCM) clustering, PSO and Recursive Least Squares (RLS) has been presented [16] to generate RBF network modeling systems with small numbers of descriptive RBFs for fast approximation of two complex and nonlinear functions. On the other hand, a new evolutionary search technique called... |

2 |
Training RBF neural networks with PSO and improved subtractive clustering algorithms
- Chen, Qin
Citation Context: ...ach term Φk(.) forms the activation function in a unit of the hidden layer. The output layer then implements a linear combination of this new space.
Φk(x, ck, σk) = ∏_{i=1..m} φ(xi, cki, σki)   (3)
Moreover, the most popular choice for φ(.) is the Gaussian form, defined in this case as
φ(xi, cki, σki) = exp[−(xi − cki)² / (2 σki²)]   (4)
so that the output in equation (1) becomes... |
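A hidden-unit activation of the kind in equations (3)-(4), a per-dimension Gaussian combined across the m input dimensions, can be sketched as follows. This is an assumption-laden illustration: the product form of the combination, the function name `rbf_activation`, and the argument layout are ours, not taken from the cited paper.

```python
import math

def rbf_activation(x, c_k, sigma_k):
    """Activation of hidden unit k: a product of per-dimension Gaussians
    phi(x_i, c_ki, s_ki) = exp(-(x_i - c_ki)^2 / (2 s_ki^2)).
    (Product form across dimensions is our assumption.)"""
    phi = 1.0
    for xi, cki, ski in zip(x, c_k, sigma_k):
        phi *= math.exp(-((xi - cki) ** 2) / (2.0 * ski ** 2))
    return phi
```

At the center (x = c_k) every factor is 1, so the activation peaks at 1 and decays as the input moves away from the center.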

2 |
Training RBF neural network with hybrid particle swarm optimization
- Gao, Feng, et al.
- 2006
Citation Context: ...rb routine that was included in the Matlab neural networks toolbox as the standard training algorithm for the RBF network. A hybrid PSO (HPSO) was proposed [15] with simulated annealing and a Chaos search technique to train the RBF Network. The HPSO algorithm combined the strong abilities of PSO, SA, and Chaos. An innovative Hybrid Recursive Particle Swarm Optimizat... |

1 |
UCI Repository of Machine Learning Databases
- Blake, Merz
Citation Context: ...his work included the standard PSO and BP for RBF Network training. For evaluating all of these algorithms, we used five benchmark classification problems obtained from the machine learning repository [10].

Table 1. Execution parameters for PSO

| Parameter | Value |
| --- | --- |
| Population Size | 20 |
| Iterations | 10000 |
| W | [0.9, 0.4] |
| C1 | 2.0 |
| C2 | 2.0 |

The parameters of the PSO algorithm were set as: weight w decreasing linearly betwe... |
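The Table 1 entry W = [0.9, 0.4] with "weight w decreasing linearly" suggests an inertia weight interpolated from 0.9 down to 0.4 over the run. A small sketch of that schedule (the function name and parameter names are our own):

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start to w_end
    as iteration t goes from 0 to t_max."""
    return w_start - (w_start - w_end) * t / t_max
```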

1 |
Evolving RBF Neural Networks for Pattern Classification
- Qin, Chen, et al.
- 2005
Citation Context: ...PSO is still fresh. This section presents some existing work on training RBF Networks based on Evolutionary Algorithms (EAs) such as PSO, especially based on unsupervised learning only (clustering). In [11], a PSO learning algorithm was proposed to automate the design of RBF Networks to solve pattern classification problems. Thus, PSO-RBF finds the size of the network and the parameters that conf... |