Results 1 – 8 of 8
Particle swarm optimization feedforward neural network for modeling runoff
, 2010
"... ..."
(Show Context)
Training Neural Networks Using Multiobjective Particle Swarm Optimization
"... Abstract. This paper suggests an approach to neural network training through the simultaneous optimization of architectures and weights with a Particle Swarm Optimization (PSO)based multiobjective algorithm. Most evolutionary computationbased training methods formulate the problem in a single obje ..."
Abstract

Cited by 1 (0 self)
Abstract. This paper suggests an approach to neural network training through the simultaneous optimization of architectures and weights with a Particle Swarm Optimization (PSO)-based multiobjective algorithm. Most evolutionary-computation-based training methods formulate the problem in a single-objective manner by taking a weighted sum of the objectives, from which a single neural network model is generated. Our goal is to determine whether Multiobjective Particle Swarm Optimization can train neural networks involving two objectives: accuracy and complexity. We propose rules for automatic deletion of unnecessary nodes from the network based on the following idea: a connection is pruned if its weight is less than the value of the smallest bias of the entire network. Experiments performed on benchmark datasets obtained from the UCI machine learning repository show that this approach provides an effective means for training neural networks that is competitive with other evolutionary-computation-based methods.
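The pruning rule quoted above can be sketched directly. A minimal illustration follows, assuming the comparison is made on magnitudes (the abstract does not spell this out); the function name `prune_connections` and the toy two-layer network are ours, not the paper's:

```python
import numpy as np

def prune_connections(weights, biases):
    """Zero out connections whose magnitude falls below the
    smallest bias magnitude anywhere in the network."""
    threshold = min(np.abs(b).min() for b in biases)
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

# Toy two-layer network: weight matrices and bias vectors.
weights = [np.array([[0.8, 0.05], [0.4, 0.9]]),
           np.array([[0.02, 0.7]])]
biases = [np.array([0.1, 0.3]), np.array([0.2])]

# Smallest bias magnitude is 0.1, so 0.05 and 0.02 are pruned.
pruned = prune_connections(weights, biases)
```

A hidden node whose incoming or outgoing weights are all zeroed this way can then be deleted outright, which is what turns weight pruning into the architecture simplification the paper targets.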
Hybrid Learning Enhancement of RBF Network with Particle Swarm Optimization
"... Abstract. This study proposes RBF Network hybrid learning with Particle Swarm Optimization (PSO) for better convergence, error rates and classification results. In conventional RBF Network structure, different layers perform different tasks. Hence, it is useful to split the optimization process of h ..."
Abstract
Abstract. This study proposes RBF Network hybrid learning with Particle Swarm Optimization (PSO) for better convergence, error rates and classification results. In a conventional RBF Network structure, different layers perform different tasks. Hence, it is useful to split the optimization of the hidden layer and the output layer of the network accordingly. RBF Network hybrid learning involves two phases. The first phase is structure identification, in which unsupervised learning is exploited to determine the RBF centers and widths. This is done by executing algorithms such as k-means clustering and standard deviation respectively. The second phase is parameter estimation, in which supervised learning is implemented to establish the connection weights between the hidden layer and the output layer. This is done by performing algorithms such as Least Mean Squares (LMS) and gradient-based methods. The incorporation of PSO into RBF Network hybrid learning is accomplished by optimizing the centers, the widths and the weights of the RBF Network. The results for training, testing and validation of five datasets (XOR, Balloon, Cancer, Iris and Ionosphere) illustrate the effectiveness of PSO in enhancing RBF Network learning compared to conventional Backpropagation.
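The two-phase split described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: centers are fixed per pattern instead of via k-means, a single assumed width replaces the standard-deviation step, and the supervised phase uses a closed-form least-squares solve in place of iterative LMS. XOR is one of the benchmark datasets the abstract names:

```python
import numpy as np

def train_rbf(X, y, centers, width):
    """Phase 2 of the hybrid scheme: given centers/widths from an
    unsupervised phase, solve the hidden-to-output weights by
    linear least squares."""
    # Gaussian hidden-layer activations, one column per center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w, H

# XOR toy task; one center per pattern for the sketch
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, H = train_rbf(X, y, centers=X.copy(), width=0.5)
pred = H @ w
```

PSO's role in the hybrid scheme would then be to move the centers, widths and weights jointly, using the network error as fitness, rather than fixing them phase by phase as done here.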
HIOPGA: A New Hybrid Metaheuristic Algorithm to Train Feedforward Neural Networks for Prediction
"... Abstract Most of neural network training algorithms make use of gradientbased search and because of their disadvantages, researchers always interested in using alternative methods. In this paper to train feedforward, neural network for prediction problems a new Hybrid Improved Oppositionbased Par ..."
Abstract
Abstract. Most neural network training algorithms make use of gradient-based search, and because of their disadvantages researchers are always interested in alternative methods. In this paper, a new Hybrid Improved Opposition-based Particle Swarm Optimization and Genetic Algorithm (HIOPGA) is proposed to train feedforward neural networks for prediction problems. The opposition-based PSO is utilized to search the solution space more effectively. In addition, to restrain the model from overfitting the training patterns, a new cross-validation method is proposed. Several benchmark problems with varying dimensions are chosen to investigate the capabilities of the proposed algorithm as a training algorithm. The result of HIOPGA is compared with the standard backpropagation algorithm with a momentum term.
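The "opposition-based" idea the abstract relies on is simple to sketch: for a candidate x in a box [lo, hi], its opposite is lo + hi − x, and evaluating both doubles the chance of starting near a good region. The sketch below uses an assumed objective (distance to a target point) purely for illustration; the paper's actual fitness and variant are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(n_particles, dim, lo, hi):
    """Opposition-based initialization: generate random particles,
    also evaluate each one's opposite lo + hi - x, and keep
    whichever of the pair scores better."""
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    opp = lo + hi - x
    # Assumed stand-in fitness: squared distance to the point (1,...,1)
    fitness = lambda p: ((p - 1.0) ** 2).sum(axis=1)
    keep = fitness(x) <= fitness(opp)
    return np.where(keep[:, None], x, opp)

swarm = opposition_init(10, 3, -5.0, 5.0)
```

The same opposite-point check can also be applied periodically during the run, not only at initialization, which is the usual way opposition-based PSO maintains exploration.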
PSOGP: A GENETIC PROGRAMMING BASED ADAPTABLE EVOLUTIONARY HYBRID PARTICLE SWARM OPTIMIZATION
, 2008
"... Abstract. In this study we describe a method for extending particle swarm optimization. We have presented a novel approach for avoiding premature convergence to local minima by the introduction of diversity in the swarm. The swarm is made more diverse and is encouraged to explore by employing a mec ..."
Abstract
Abstract. In this study we describe a method for extending particle swarm optimization. We present a novel approach for avoiding premature convergence to local minima through the introduction of diversity in the swarm. The swarm is made more diverse and is encouraged to explore by employing a mechanism which allows each particle to use a different equation to update its velocity. This equation is also continuously evolved through the use of genetic programming to ensure adaptability. We compare two variations of our algorithm, one utilizing random initialization, while in the second we utilize partial non-random initialization which forces some particles to use the standard PSO velocity update equation. Results from experimentation suggest that the modified PSO with complete random initialization shows promise and has potential for improvement. It is particularly good at finding the exact optimum.
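For reference, the "standard PSO velocity update equation" that some particles fall back to in the partially non-random variant is the canonical rule below; the coefficients w = 0.7, c1 = c2 = 1.5 are common textbook defaults, not values taken from this paper, and it is this expression that the GP layer would replace with an evolved one per particle:

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO rule: inertia term plus cognitive pull toward
    the particle's own best and social pull toward the swarm best."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

x = np.zeros(2)
v = np.zeros(2)
pbest = np.array([1.0, -1.0])   # particle's personal best position
gbest = np.array([2.0, 0.5])    # swarm's global best position
v_new = velocity_update(v, x, pbest, gbest)
x_new = x + v_new
```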
Evolutionary Algorithms for Neural Network Learning Enhancement
"... Artificial Neural Network (ANN) is one of the modern computational methods proposed to solve the majority of real world problems. BackPropagation (BP) algorithm (as a gradient descend method) is one of the most popular methods for ANN training. However, there are some unavoidable disadvantages such ..."
Abstract
Artificial Neural Network (ANN) is one of the modern computational methods proposed to solve the majority of real-world problems. The Back-Propagation (BP) algorithm, a gradient descent method, is one of the most popular methods for ANN training. However, it has some unavoidable disadvantages, such as slow convergence speed, easy entrapment in local extrema, and weak global search capability. To solve these problems, many solutions have been presented so far. Among them, Evolutionary Algorithms (EAs) have shown good performance in this regard. EAs use mechanisms inspired by biological evolution to find an optimal solution. Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Imperialist Competitive Algorithm (ICA) are in this class of algorithms. In this study, the mentioned optimization algorithms are chosen and applied to a feedforward neural network to enhance the learning process in terms of convergence rate and classification accuracy.
PARTICLE SWARM OPTIMIZATION FOR NEURAL NETWORK LEARNING ENHANCEMENT
"... Abstract. Backpropagation (BP) algorithm is widely used to solve many real world problems by using the concept of Multilayer Perceptron (MLP). However, major disadvantages of BP are its convergence rate is relatively slow and always being trapped at the local minima. To overcome this problem, Geneti ..."
Abstract
Abstract. The Backpropagation (BP) algorithm is widely used to solve many real-world problems using the Multilayer Perceptron (MLP). However, the major disadvantages of BP are its relatively slow convergence rate and its tendency to become trapped in local minima. To overcome these problems, the Genetic Algorithm (GA) has been used to determine optimal values for BP parameters such as the learning rate and momentum rate, and also for weight optimization. Although GA has successfully improved Backpropagation Neural Network (BPNN) learning, some issues remain, such as longer training time to produce the output and the use of complex functions in the selection, crossover and mutation calculations. In this study, the Particle Swarm Optimization (PSO) algorithm has been chosen and applied to a feedforward neural network to enhance the learning process in terms of convergence rate and classification accuracy. Two experiments have been conducted; Particle
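Several of the entries above share the same underlying scheme: flatten a network's weights into a particle position and let PSO minimize the training error directly, with no gradients. A minimal sketch under assumed settings follows (a 2-4-1 architecture, inertia 0.7, acceleration coefficients 1.5, and XOR as the task are all illustrative choices, none taken from these papers):

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp_loss(flat, X, y, n_hidden=4):
    """Decode a flat particle into a 2-4-1 MLP's weights and
    return its mean squared error (the fitness PSO minimizes)."""
    n_in = X.shape[1]
    i = n_in * n_hidden
    W1 = flat[:i].reshape(n_in, n_hidden)
    b1 = flat[i:i + n_hidden]
    W2 = flat[i + n_hidden:i + 2 * n_hidden]
    b2 = flat[-1]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return ((out - y) ** 2).mean()

# XOR as a toy classification task
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
dim = 2 * 4 + 4 + 4 + 1                     # 17 weights and biases
pos = rng.uniform(-1, 1, (30, dim))
vel = np.zeros((30, dim))
pbest = pos.copy()
pcost = np.array([mlp_loss(p, X, y) for p in pos])
gbest = pbest[pcost.argmin()].copy()
init_best = pcost.min()
for _ in range(200):
    r1, r2 = rng.random((30, dim)), rng.random((30, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mlp_loss(p, X, y) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()
```

Because the personal bests are only ever replaced by strictly better positions, the best loss found is monotonically non-increasing over iterations, which is the convergence-rate property these papers compare against BP.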