Results 1–10 of 194
Adaptive Particle Swarm Optimization
, 2008
Abstract

Cited by 67 (2 self)
This paper proposes an adaptive particle swarm optimization (APSO) with adaptive parameters and an elitist learning strategy (ELS), based on the evolutionary state estimation (ESE) approach. The ESE approach develops an ‘evolutionary factor’ using the population distribution information and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state, and taking into account the various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and acceleration coefficients to achieve faster convergence. Further, an adaptive ‘elitist learning strategy’ (ELS) is designed for the best particle to jump out of possible local optima and/or to refine its accuracy, resulting in substantially improved quality of global solutions. The APSO algorithm is tested on six unimodal and multimodal functions, and the experimental results demonstrate that APSO generally outperforms the compared PSOs in terms of solution accuracy, convergence speed, and algorithm reliability.
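As a rough illustration of the state-driven inertia adaptation this abstract describes, the sketch below maps an evolutionary factor f in [0, 1] to an inertia weight and plugs it into a standard PSO velocity update. The sigmoid mapping and its coefficients are illustrative assumptions; the paper's full APSO additionally performs fuzzy state classification and adapts the acceleration coefficients.

```python
import math
import random

def apso_inertia(f):
    """Map the evolutionary factor f in [0, 1] to an inertia weight w.

    Larger f (a more scattered swarm, i.e. an exploration state) yields a
    larger w; the mapping here is an assumed sigmoid for illustration.
    """
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))  # w ranges over roughly [0.4, 0.9]

def velocity_update(v, x, pbest, gbest, f, c1=2.0, c2=2.0):
    """Standard PSO velocity update using the adaptive inertia weight."""
    w = apso_inertia(f)
    return [w * vi
            + c1 * random.random() * (pi - xi)
            + c2 * random.random() * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

A small f (converging swarm) thus damps the momentum term, while a large f keeps particles moving.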
A note on the learning automata based algorithms for adaptive parameter selection in PSO
 Applied Soft Computing
, 2011
Frankenstein’s PSO: A composite particle swarm optimization algorithm
 IRIDIA, CoDE, Université Libre de Bruxelles
, 2007
Abstract

Cited by 26 (2 self)
During the last decade, many variants of the original particle swarm optimization (PSO) algorithm have been proposed. In many cases, the difference between two variants can be seen as an algorithmic component being present in one variant but not in the other. In the first part of the paper, we present the results and insights obtained from a detailed empirical study of several PSO variants from a component-difference point of view. In the second part, we propose a new PSO algorithm that combines a number of algorithmic components that showed distinct advantages in the experimental study concerning optimization speed and reliability. We call this composite algorithm Frankenstein’s PSO, in analogy to the character of Mary Shelley’s novel. The performance evaluation of Frankenstein’s PSO shows that effective optimizers can be designed by integrating components in novel ways. Index Terms: Continuous optimization, experimental analysis, integration of algorithmic components, particle swarm optimization (PSO), runtime distributions, swarm intelligence.
Editorial survey: swarm intelligence for data mining
 MACH LEARN (2011) 82: 1–42
, 2011
Abstract

Cited by 26 (0 self)
This paper surveys the intersection of two fascinating and increasingly popular domains: swarm intelligence and data mining. Whereas data mining has been a popular academic topic for decades, swarm intelligence is a relatively new subfield of artificial intelligence which studies the emergent collective intelligence of groups of simple agents. It is based on social behavior that can be observed in nature, such as ant colonies, flocks of birds, fish schools and bee hives, where a number of individuals with limited capabilities are able to come to intelligent solutions for complex problems. In recent years, the swarm intelligence paradigm has received widespread attention in research, mainly as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). These are also the most popular swarm intelligence metaheuristics for data mining. In addition to an overview of these nature-inspired computing methodologies, we discuss popular data mining techniques based on these principles and schematically list the main differences in our literature tables. Further, we provide a unifying framework that categorizes the swarm-intelligence-based data mining algorithms into two approaches: effective search and data organizing. Finally, we list interesting issues for future research, hereby identifying methodological gaps in current research as well as mapping opportunities provided by swarm intelligence to current challenges within data mining research.
Adaptive Computational Chemotaxis in Bacterial Foraging Optimization: An Analysis
 IEEE Computer Society Press, ISBN 0769531091
, 2008
Abstract

Cited by 23 (6 self)
Researchers have illustrated how individual bacteria and groups of bacteria forage for nutrients, and have modeled this behavior as a distributed optimization process, called Bacterial Foraging Optimization (BFOA). One of the major driving forces of BFOA is the chemotactic movement of a virtual bacterium, which models a trial solution of the optimization problem. In this article, we analyze the chemotactic step of a one-dimensional BFOA in the light of the classical gradient descent algorithm (GDA). Our analysis points out that the chemotaxis employed in BFOA may result in sustained oscillation, especially on a flat fitness landscape, when a bacterium cell is very near the optimum. To accelerate convergence near the optimum, we have made the chemotactic step size C adaptive. Computer simulations over several numerical benchmarks indicate that BFOA with the new chemotactic operation shows better convergence behavior than the classical BFOA.
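The adaptive step-size idea in this abstract can be sketched in a few lines: shrink C as the cost approaches the optimum (taken here as cost 0), so the bacterium stops overshooting. The adaptation rule and the parameter lam below are illustrative assumptions, not the paper's exact formula.

```python
import random

def adaptive_step(J, lam=10.0):
    """Adaptive chemotactic step size: C -> 0 as the cost J -> 0,
    damping the sustained oscillation near the optimum; lam sets how fast."""
    J = abs(J)
    return J / (J + lam)

def chemotactic_step(theta, J, cost, lam=10.0):
    """One tumble-and-move step of a one-dimensional virtual bacterium."""
    direction = random.choice([-1.0, 1.0])              # random tumble direction
    theta_new = theta + adaptive_step(J, lam) * direction
    J_new = cost(theta_new)
    # keep the move only if it improved the cost, else stay put
    return (theta_new, J_new) if J_new < J else (theta, J)
```

With a fixed C the bacterium keeps stepping over a nearby optimum; here the step length decays with the remaining cost instead.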
Multi-Objective Particle Swarm Optimization with time-variant inertia and acceleration coefficients
, 2007
A modified PSO structure resulting in high exploration ability with convergence guaranteed
 IEEE Transactions on Systems, Man, and Cybernetics, Part B
, 2007
Abstract

Cited by 16 (2 self)
Particle swarm optimization (PSO) is a population-based stochastic recursion procedure that simulates the social behavior of a swarm of ants or a school of fish. Based upon the general representation of individual particles, this paper introduces a decreasing coefficient into the updating principle, so that PSO can be viewed as a regular stochastic approximation algorithm. To improve exploration ability, a random velocity is added to the velocity update in order to balance exploration behavior and convergence rate across different optimization problems. To emphasize the role of this additional velocity, the modified PSO paradigm is named PSO with controllable random exploration velocity (PSO-CREV). Its convergence is proved using Lyapunov theory for stochastic processes. From the proof, some properties brought by the stochastic components are obtained, such as “divergence before convergence” and “controllable exploration.” Finally, a series of benchmarks is used to verify the feasibility of PSO-CREV. Index Terms: Lyapunov theory, particle swarm optimization with controllable random exploration velocity (PSO-CREV), stochastic approximation, supermartingale convergence.
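The two modifications this abstract names, a decreasing coefficient on the attraction terms and a bounded random exploration velocity, can be sketched as below. The gain sequence eps = 1/(k+1) and the uniform noise are illustrative assumptions standing in for the paper's exact choices.

```python
import random

def psocrev_step(x, v, pbest, gbest, k, c1=2.0, c2=2.0, r_max=0.1):
    """One PSO-CREV-style update (sketch).

    eps is a decreasing gain, as in stochastic approximation, and r is the
    controllable random exploration velocity added to each component.
    """
    eps = 1.0 / (k + 1.0)                  # decreasing coefficient
    new_v, new_x = [], []
    for vi, xi, pi, gi in zip(v, x, pbest, gbest):
        r = random.uniform(-r_max, r_max)  # bounded exploration velocity
        vi = vi + eps * (c1 * random.random() * (pi - xi)
                         + c2 * random.random() * (gi - xi)) + r
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```

As k grows the deterministic attraction fades, so the residual motion is governed by the exploration velocity r, which the user can shrink to force convergence.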
Orthogonal Learning Particle Swarm Optimization
 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
, 2010
Abstract

Cited by 15 (1 self)
Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood’s best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with …
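The core of the OL idea, combining the pbest and gbest experiences dimension by dimension rather than by linear summation, can be illustrated with a toy exemplar builder. For clarity this sketch evaluates all 2**D mixtures exhaustively, which is feasible only for small D; the actual OL strategy uses an orthogonal array from experimental design so that only a handful of candidate mixtures need evaluating. The function name and exhaustive search are assumptions for illustration.

```python
from itertools import product

def ol_exemplar(pbest, gbest, cost):
    """Build a learning exemplar by mixing pbest and gbest per dimension,
    keeping the mixture with the lowest cost (minimization assumed)."""
    best, best_cost = None, float("inf")
    for mask in product((0, 1), repeat=len(pbest)):
        # mask[d] == 0 takes dimension d from pbest, 1 takes it from gbest
        cand = [g if m else p for m, p, g in zip(mask, pbest, gbest)]
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best
```

For example, with pbest = [0, 3] and gbest = [2, 0] on the sphere function, the best mixture takes dimension 1 from pbest and dimension 2 from gbest, a combination neither parent vector contains on its own.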
Synergy of PSO and Bacterial Foraging Optimization: A Comparative Study on Numerical Benchmarks
 Second International Symposium on Hybrid Artificial Intelligent Systems (HAIS 2007), Advances in Soft Computing Series
, 2007
Abstract

Cited by 14 (7 self)
Social foraging behavior of Escherichia coli bacteria has recently been explored to develop a novel algorithm for distributed optimization and control. The Bacterial Foraging Optimization Algorithm (BFOA), as it is now called, is gaining popularity in the research community for its effectiveness in solving certain difficult real-world optimization problems. Until now, very little research has been undertaken to improve the convergence speed and accuracy of the basic BFOA over multimodal fitness landscapes. This article presents a hybrid approach involving Particle Swarm Optimization (PSO) and BFOA for optimizing multimodal and high-dimensional functions. The proposed hybrid algorithm has been extensively compared with the original BFOA, the classical g_best PSO algorithm, and a state-of-the-art version of PSO. The new method is shown to be statistically significantly better on a five-function testbed and on one difficult engineering optimization problem: spread spectrum radar polyphase code design.
A Novel Set-Based Particle Swarm Optimization Method for Discrete Optimization Problems
, 2010
Abstract

Cited by 13 (0 self)
Particle swarm optimization (PSO) is predominately used to find solutions for continuous optimization problems. As the operators of PSO are originally designed in an n-dimensional continuous space, the advancement of using PSO to find solutions in a discrete space is at a slow pace. In this paper, a novel set-based PSO (SPSO) method for the solutions of some combinatorial optimization problems (COPs) in discrete space is presented. The proposed SPSO features the following characteristics. First, it is based on a set-based representation scheme that enables SPSO to characterize the discrete search space of COPs. Second, the candidate solution and velocity are defined as a crisp set, and a set with possibilities, respectively. All arithmetic operators in the velocity and position updating rules used in the original PSO are replaced by the operators and procedures defined on crisp sets, and sets with possibilities in SPSO. The SPSO method can thus follow a similar structure to the original PSO for searching in a discrete space. Based on the proposed SPSO method, most of the existing PSO variants, such as the global version PSO, the local version PSO with different topologies, and the comprehensive learning PSO (CLPSO), can be extended to their corresponding discrete versions. These discrete PSO versions based on SPSO are tested on two famous COPs: the traveling salesman problem and the multidimensional knapsack problem. Experimental results show that the discrete version of the CLPSO algorithm based on SPSO is promising.