Radius Particle Swarm Optimization
Mana Anantathanavit
Faculty of Information Science and Technology
Mahanakorn University of Technology
Bangkok, Thailand

Mud-Armeen Munlin
Faculty of Information Science and Technology
Mahanakorn University of Technology
Bangkok, Thailand
Abstract—Particle Swarm Optimization (PSO) is a swarm-intelligence-based stochastic algorithm for solving optimization problems. However, traditional PSO suffers from premature convergence when searching for the global optimum. To avoid falling into a local optimum, we propose Radius Particle Swarm Optimization (R-PSO), which extends PSO by regrouping the agent particles that lie within a given radius of a circle. R-PSO initializes the groups of particles, evaluates the fitness function, and finds the best particle in each group. By sharing information through the agent particles, the group swarm preserves diversity during evolution and maintains the balance between global exploration and local exploitation. The agent particle therefore guides its neighbouring particles to jump out of local optima and reach the global best. The proposed method is tested on well-known benchmark functions. The results show that R-PSO outperforms traditional PSO on complex multimodal problems.
Keywords—Particle Swarm Optimization (PSO); Local Optimum; Global Optimum; Radius Particle Swarm Optimization (R-PSO)
I. INTRODUCTION
Particle Swarm Optimization (PSO) was designed by Kennedy and Eberhart [1]. The method imitates swarm behaviour such as fish schooling and bird flocking. A bird can find food and share the information with others; the flock then converges on a promising position or region, and other birds are likely to move into the shared location. PSO imitates this behaviour with a population called a swarm, whose members are particles. Each particle represents a candidate solution of the problem. The particles share their experience, update their positions accordingly, and search for the solution themselves.
PSO has been used in many applications [2, 3]. It is simple to implement and has few parameters. However, the performance and efficiency of PSO depend on the number of iterations and the swarm size, so a complex problem may require more iterations or more particles before the result becomes stable and accurate.
PSO suffers from premature convergence when solving complex problems. A PSO with particle migration was given by Ma Gang et al. [4]; inspired by migratory behaviour in nature, the population is randomly partitioned into several sub-swarms and tournament selection drives the particle migration. Zhang and Ding [5] propose a PSO with four sub-swarms that applies a fitness-adaptive strategy to update the inertia weight and shares information to update the current records adaptively and cooperatively.
In this paper, we present Radius Particle Swarm Optimization (R-PSO) to solve well-known complex multimodal problems. We extend the traditional PSO algorithm by grouping the particles that lie within the same radius around a new agent particle and iteratively finding the best solution under the given objective functions.
II. PARTICLE SWARM OPTIMIZATION (PSO)
PSO is a population-based stochastic algorithm. It initializes a population with random positions and searches for the best position under a given fitness function. At each iteration, the velocity and position of every particle in the swarm are updated according to the particle's previous best position (pbest i,j) and the global best position (gbest i,j). PSO uses two update functions, for the velocity (v) and the position (x), together with the inertia weight (w), as given in equations (1), (2), and (3) respectively:

vi,j(t+1) = w vi,j(t) + c1 R1 (pbest i,j - xi,j(t)) + c2 R2 (gbest i,j - xi,j(t))    (1)

xi,j(t+1) = xi,j(t) + vi,j(t+1)    (2)

w = wmax - (wmax - wmin) t / tmax    (3)

where i is the index of a particle in the swarm (i = 1,…,n); n is the population size; j is the index of a position in the particle (j = 1,…,m); m is the dimension size; t is the iteration number; vi,j is the velocity of the ith particle; and xi,j is its position. Note that R1 and R2 are random numbers between 0 and 1, c1 and c2 are the acceleration coefficients, and w is the positive inertia weight.
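The update rules above can be sketched in Python. This is a minimal illustration, assuming the standard PSO formulation with a linearly decreasing inertia weight; the function names are ours, not the paper's.

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=1.42694, c2=1.42694):
    """One PSO update for a single particle (positions as lists of floats).
    The new velocity mixes inertia, attraction to the personal best (pbest),
    and attraction to the global best (gbest); the position then moves by it."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()  # R1, R2 in [0, 1)
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, as recommended by Shi and Eberhart."""
    return w_max - (w_max - w_min) * t / t_max
```

With pbest = gbest = x the attraction terms vanish and only the inertia term w·v survives, which makes the update easy to check by hand.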
2013 International Computer Science and Engineering Conference (ICSEC): ICSEC 2013 English Track Full Papers
978-1-4673-5324-3/13/$31.00 ©2013 IEEE 126
In the swarm population there are two topologies [6], gbest and lbest, as shown in Fig.1. In the gbest topology, every particle is fully connected: its trajectory oscillates between the best solution the particle itself has found and the best solution found by any particle in the swarm. The lbest topology lets each individual be influenced only by some number of adjacent members. Two lbest topologies are common: a von Neumann topology, in which every particle in the swarm is connected to its four neighbouring particles, and a ring topology, in which each particle is connected to its two neighbouring particles.
Fig.1 Particle Swarm Topologies: the gbest topology (left) and the lbest topology (center and right)
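The two lbest neighbourhood shapes can be expressed as index functions. This is an illustrative sketch; the wrapped 2-D grid layout used for the von Neumann case is our assumption, not something the paper specifies.

```python
def ring_neighbors(i, n):
    """lbest ring topology: particle i is connected to its two adjacent
    particles on a ring of n particles (indices wrap around)."""
    return [(i - 1) % n, (i + 1) % n]

def von_neumann_neighbors(i, n, cols):
    """lbest von Neumann topology: particles laid out on a wrapped 2-D grid
    of shape (n // cols, cols); particle i sees its four grid neighbours."""
    rows = n // cols
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c,   # up
            ((r + 1) % rows) * cols + c,   # down
            r * cols + (c - 1) % cols,     # left
            r * cols + (c + 1) % cols]     # right
```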
In traditional PSO, each particle in the swarm moves toward its personal best and the global best position [7, 8]. Although this mechanism can produce a rapid convergence rate, it suffers from the premature convergence problem: the swarm may be trapped in a local minimum when solving multimodal complex problems. When particles fly in the direction of the gbest topology, the diversity between particles gradually decreases and the solution positions contract into a small area containing nearly identical particles. The efficiency of PSO is tied to the diversity of its particles, especially when trying to avoid premature convergence and escape from local optima. To improve the performance of PSO, different types of lbest topologies such as the ring lattice and von Neumann have therefore been proposed [9]. In this paper, we propose another lbest topology that uses a circle to find the agent particle based on the distance, or radius, between particles.
III. RADIUS PARTICLE SWARM OPTIMIZATION (R-PSO)
The main concept of R-PSO is based on the lbest circle topology: it finds the agent particle within the radius of a circle, as shown in Fig.2. A particle lying where radii overlap can belong to multiple groups. Once a group is defined, we find the best particle in that swarm group and assign it to the agent particle, denoted abest i,j, as shown in Fig.3. Finally, the agent particles are the candidates for finding the optimal solution, the gbest position, as shown in Fig.4.
Fig. 2 The lbest circle topology and candidate agent particles.
Fig.3 The local neighborhood search for the best agent particle (abest ).
Fig.4 The global neighborhood search for the gbest.
Thus, the distance (d) between particles and the radius (r) of the neighbourhood are computed in equations (4) and (5) respectively:

d(xi, xp) = sqrt( Σ j=1..m (xi,j - xp,j)^2 )    (4)

r = μ · vmax    (5)
where d is the distance; p is a particle in the radius-neighbourhood search (p = 1,…,Ø); Ø is the population size; j is the index of a position in the particle (j = 1,…,m); m is the dimension size; r is the radius value; μ is a constant (μ = [0.1,0.9]); and vmax limits each particle's velocity to a specified maximum. The maximum velocity is calculated from the distance between the feasible bounds of the benchmark function. In this paper, we discuss the algorithm that searches for the optimum solution using the agent particle abest i,j, a radius local optimum, as given in equation (6):

abest i,j = x ß*,j  with  ß* = arg min ß=1..Ø f(x ß)    (6)
where f( ) is the fitness function and ß is the index of a particle in the radius-neighbourhood (ß = 1,…,Ø). The agent position then enters the velocity update equation:

vi,j(t+1) = w vi,j(t) + c1 R1 (pbest i,j - xi,j(t)) + c2 R2 (abest i,j - xi,j(t))    (7)
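The radius-neighbourhood pieces can be sketched in Python. This is an illustrative reading, assuming Euclidean distance between particles, a radius scaled as μ·vmax, and a minimisation objective; the helper names are ours.

```python
import math

def distance(a, b):
    """Euclidean distance between two particle positions (the reading of Eq. 4)."""
    return math.sqrt(sum((aj - bj) ** 2 for aj, bj in zip(a, b)))

def radius(mu, vmax):
    """Neighbourhood radius scaled by mu in [0.1, 0.9] (the reading of Eq. 5)."""
    return mu * vmax

def abest(i, positions, fitness, r):
    """Best position among all particles within radius r of particle i
    (the reading of Eq. 6, minimisation assumed). Particle i always lies
    in its own neighbourhood, so the group is never empty."""
    group = [p for p in positions if distance(p, positions[i]) <= r]
    return min(group, key=fitness)
```

Shrinking the radius shrinks each group, so with a small r the agent particle is just the particle's own position, while a large r makes abest approach the swarm-wide best.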
The pseudo-code of the R-PSO algorithm is given below.

for each time step t up to t max do
    for each particle i in the swarm do
        update velocity vi,j(t+1) with Eq.(7) and position xi,j(t+1) with Eq.(2)
        calculate the particle's fitness f(xi,j(t+1))
        if f(xi,j(t+1)) is better than f(pbest i,j) then
            update pbest i,j
        end if
        find the agent particle
        if the agent particle's fitness f(abest i,j(t+1)) is better than f(gbest) then
            update gbest
        end if
    end for
end for
Fig.5. R-PSO Algorithm
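The loop above can be sketched as a self-contained Python toy. This is not the authors' implementation: grouping by current positions, the fixed inertia weight, and the velocity clamp are our assumptions, made so the sketch runs end to end.

```python
import math
import random

def rpso(fitness, dim, n=20, t_max=200, bounds=(-100.0, 100.0),
         mu=0.4, w=0.7, c1=1.42694, c2=1.42694, seed=1):
    """Toy R-PSO loop: each particle is attracted to its personal best and
    to the best known position (abest) within radius r = mu * vmax of it."""
    rng = random.Random(seed)
    lo, hi = bounds
    vmax = hi - lo                       # max velocity from the feasible bounds
    r = mu * vmax                        # neighbourhood radius (Eq. 5)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=fitness)[:]
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    for _ in range(t_max):
        for i in range(n):
            # agent particle: fittest personal best within the radius (Eq. 6)
            group = [p for p, x in zip(pbest, xs) if dist(x, xs[i]) <= r]
            ab = min(group, key=fitness)
            for j in range(dim):
                vj = (w * vs[i][j]
                      + c1 * rng.random() * (pbest[i][j] - xs[i][j])
                      + c2 * rng.random() * (ab[j] - xs[i][j]))   # Eq. (7)
                vs[i][j] = max(-vmax, min(vmax, vj))              # clamp to vmax
                xs[i][j] += vs[i][j]                              # Eq. (2)
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = xs[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(xi * xi for xi in x)
```

Running `rpso(sphere, 2)` drives the swarm toward the origin; with the default μ = 0.4 most particles fall inside each other's radius, so the behaviour resembles a gbest swarm with local escape routes.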
IV. EXPERIMENTS AND RESULTS
The performance of R-PSO depends on the radius value. Experiments are therefore carried out to find the most suitable radius, fine-tuning it on the five well-known test functions given in Table I.
A. Benchmark functions
The benchmark functions commonly used in the literature [4, 5, 11] are employed to test the performance of R-PSO. They are chosen for their variety [12]: Sphere (f1), Schwefel 1.2 (f2), and Generalized Rosenbrock (f3) are simple unimodal problems, while Generalized Schwefel 2.6 (f4) and Generalized Rastrigin (f5) are highly complex multimodal problems with many local minima. The functions, the variable sizes, and the solutions are shown in Table I and Table II. Table I lists the characteristics of each function. The initialization bounds in Table II cover an area equal to one quarter of the search area and do not contain the optimum solution.
TABLE I. BENCHMARK FUNCTIONS

F    Common name
f1   Sphere
f2   Schwefel 1.2
f3   Generalized Rosenbrock
f4   Generalized Schwefel 2.6
f5   Generalized Rastrigin
TABLE II. OPTIMA AND INITIALIZATION RANGES FOR ALL FUNCTIONS

F    Feasible bounds    Dimension   Optimum   Initialization bounds
f1   (-100, 100)        30          0         (50, 100)
f2   (-100, 100)        30          0         (50, 100)
f3   (-30, 30)          30          0         (15, 30)
f4   (-500, 500)        30          0         (250, 500)
f5   (-5.12, 5.12)      30          0         (2.56, 5.12)
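Four of these benchmarks can be written out in their standard textbook forms, as a reference sketch. We assume the conventional definitions; Schwefel 2.6 is omitted because its exact shifted form is not reproduced here and varies across the literature.

```python
import math

def sphere(x):
    """f1: unimodal, optimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def schwefel_1_2(x):
    """f2: sum of squared partial sums, optimum 0 at the origin."""
    total, partial = 0.0, 0.0
    for xi in x:
        partial += xi
        total += partial * partial
    return total

def rosenbrock(x):
    """f3: narrow curved valley, optimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """f5: highly multimodal, optimum 0 at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```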
B. Parameter selection
We test R-PSO against the traditional PSO algorithm. Each algorithm runs on all the benchmark functions given in Table I. The parameter w follows the value recommended by Shi and Eberhart [10], decreasing linearly from 0.9 to 0.4. The acceleration constants c1 and c2 are both 1.42694, xmax and vmax are set equal, the swarm population size is 60, and the iteration number is 5000, so the number of function evaluations is 300,000. A total of 30 runs is conducted for each experimental setting, and the average fitness of the best solution throughout the run is recorded. Any gbest value below 10^-8 is rounded to 0.0.
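The aggregation protocol above (repeated runs, rounding values below 10^-8 to 0.0) can be sketched as a small helper. The function name and the dictionary layout are ours.

```python
import statistics

def summarize(run_results, eps=1e-8):
    """Aggregate best-fitness values over repeated runs: any value below
    eps is rounded to 0.0, then best/mean/worst/std are reported."""
    vals = [0.0 if v < eps else v for v in run_results]
    return {"best": min(vals), "mean": statistics.mean(vals),
            "worst": max(vals), "std": statistics.stdev(vals)}
```

For example, a run list of [1e-10, 2.0, 4.0, 1e-9] rounds the two tiny values to 0.0 before computing the statistics.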
The value of the radius may affect the performance of R-PSO: an improper radius may trap the algorithm in a local minimum. To select a suitable radius, we therefore analyze values of the parameter μ ranging from 0.1 to 0.9. μ is a scale factor between 0 and 1, where μ = 0 means random particle selection and μ = 1 means traditional PSO particle selection. We ran the experiment on the five functions above to calibrate μ and obtained the best value μ = 0.4, as
shown in Table III. We will employ this number in our R-PSO
to measure the performance against traditional PSO.
TABLE III. SOLUTION FOR EACH RADIUS VALUE

μ     f1        f2         f3        f4        f5
0.1   0.00      2830.79    90.47     5569.33   133.63
0.2   0.00      6998.51    100.25    5355.56   167.05
0.3   0.00      536.85     3040.54   6025.58   116.49
0.4   0.00      0.012      0.68      4051.17   18.00
0.5   0.00      5062.98    572.89    5123.18   197.71
0.6   0.00      279.98     225.78    5891.35   31.83
0.7   0.00      9032.17    25.22     5825.92   19.91
0.8   1000.00   25058.84   3040.65   4049.57   22.88
0.9   2000.00   10405.57   25.51     4445.10   80.66
C. Results
We compare the performance of R-PSO and PSO. Table IV and Figures 6-10 present the best fitness results of the algorithms, averaged over the 300,000 function evaluations.
TABLE IV. MEAN FITNESS AFTER 30 TRIALS OF 300,000 EVALUATIONS

F    Common name                Evaluation   R-PSO      PSO
f1   Sphere                     Best         0          0
                                Mean         0          0
                                Worst        0          0
                                Std.         ±0.0       ±0.0
f2   Schwefel 1.2               Best         0.04       26532.42
                                Mean         0.12       26666.66
                                Worst        0.13       26714.23
                                Std.         ±0.002     ±33.78
f3   Generalized Rosenbrock     Best         0.65       6.82
                                Mean         0.68       8.13
                                Worst        0.78       8.39
                                Std.         ±0.178     ±0.273
f4   Generalized Schwefel 2.6   Best         4049.57    5180.10
                                Mean         4051.17    5121.67
                                Worst        4065.93    5201.65
                                Std.         ±2.830     ±10.480
f5   Generalized Rastrigin      Best         16.13      152.13
                                Mean         18.00      157.27
                                Worst        19.68      157.28
                                Std.         ±0.461     ±0.922
Fig.6 Algorithm Convergence for Sphere ( f1)
Fig. 7 Algorithm Convergence for Schwefel 1.2 ( f2)
Fig. 8 Algorithm Convergence for Generalized Rosenbrock ( f3)
Fig. 9 Algorithm Convergence for Generalized Schwefel 2.6 ( f4)
Fig. 10 Algorithm Convergence for Generalized Rastrigin ( f5)
Table IV presents the mean fitness of the minimization results on the functions. R-PSO outperforms PSO on functions f2-f5. Function f1 is the simple sphere model, and the mean fitness results are identical. However, PSO converges faster than R-PSO because it follows a single lbest, while R-PSO keeps trying different lbest positions even when one is already the gbest, as shown in Fig.6.
For function f2, R-PSO gets very close to the solution, while PSO diverges without approaching the solution from its best local optimum (the gbest), as shown in Fig.7. PSO is trapped in an lbest, whereas R-PSO can escape and find a new lbest using the agent particle. R-PSO thus practically prevents premature convergence and notably improves convergence during the evolution procedure.
For function f3, a saddle point contains several lbest positions, so the optimum solution is hard to find. PSO eventually diverges, while R-PSO still demonstrates a better result and convergence precision, as shown in Fig.8.
Functions f4 and f5 are multimodal functions whose number of local optima grows exponentially with the dimension size. They are complex problems whose solutions are very hard to find. PSO can be trapped in a local optimum on its route to the global optimum, while R-PSO can eventually still find a solution, as shown in Fig.9 and Fig.10. This shows that R-PSO is a better optimization method than PSO for such complex problems in terms of both convergence and speed. The proposed lbest circle topology with radius-neighbourhood search is thus very effective for most test functions.
V. CONCLUSION
We propose R-PSO, which regroups the particles within a given radius and determines the agent particle, the best particle of each group, for each local optimum. R-PSO maintains appropriate swarm diversity and uses the agent particle to jump out of local optima and reach the global optimum. We evaluate the performance of R-PSO on well-known complex problems. The results show that the proposed method solves complex problems more effectively and efficiently than traditional PSO.
REFERENCES
[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948, 1995.
[2] C-Y. Tsai and I-W. Kao, "Particle swarm optimization with selective particle regeneration for data clustering," Expert Systems with Applications, Vol. 38, pp. 6565-6576, 2011.
[3] R-M. Chen, "Particle swarm optimization with justification and designed mechanisms for resource-constrained project scheduling problem," Expert Systems with Applications, Vol. 38, pp. 7102-7111, 2011.
[4] M. Gang, Z. Wei, and C. Xiaolin, "A novel particle swarm optimization based on particle migration," Applied Mathematics and Computation, Vol. 218, pp. 6620-6636, 2012.
[5] J. Zhang and X. Ding, "Multi-swarm self-adaptive and cooperative particle swarm optimization," Engineering Applications of Artificial Intelligence, Vol. 24, pp. 958-967, 2011.
[6] J. Kennedy and R. Eberhart, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1931-1938, 1999.
[7] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, Vol. 6, pp. 58-73, 2002.
[8] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, pp. 317-325, 2002.
[9] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671-1676, 2002.
[10] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 1945-1950, 1999.
[11] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 120-127, 2007.
[12] Available from: http://www2.denizyuret.com/pub/aitr1569/node19.html