
Particle Swarm Optimization in Dynamic Environments

Tim Blackwell

Department of Computing, Goldsmiths College, London SE14 6NW, UK. [email protected]

1 Introduction

Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms (EAs). PSO has been shown to perform well for many static problems [30]. However, many real-world problems are dynamic in the sense that the global optimum location and value may change with time. The task for the optimization algorithm is to track this shifting optimum. It has been argued [14] that EAs are potentially well suited to such tasks, and a review of EA variants tested on dynamic problems is given in [13, 15]. It might be wondered, therefore, what promise PSO holds for dynamic problems.

Optimization with particle swarms has two major ingredients, the particle dynamics and the particle information network. The particle dynamics are derived from swarm simulations in computer graphics [21], and the information sharing component is inspired by social networks [32, 25]. These ingredients combine to make PSO a robust and efficient optimizer of real-valued objective functions (although PSO has also been successfully applied to combinatorial and discrete problems). PSO is an accepted computational intelligence technique, sharing some qualities with evolutionary computation [1].

The application of PSO to dynamic problems has been explored by various authors [30, 23, 17, 9, 6, 24]. The overall conclusion of this work is that PSO, just like EAs, must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB). (Moving peaks, arguably representative of real-world problems, consist of a number of peaks of changing width and height, in lateral motion [12, 7].) The origin of the difficulty lies in the dual problems of outdated memory, due to environment dynamism, and diversity loss, due to convergence.

Of these two problems, diversity loss is by far the more serious; it has been demonstrated that the time taken for a partially converged swarm to re-diversify, find the shifted peak, and then re-converge is quite deleterious to performance [3]. Clearly, either a re-diversification mechanism must be employed at (or before) function change, and/or a measure of diversity can be maintained throughout the run. There are four principal mechanisms for either re-diversification or diversity maintenance: randomization [23], repulsion [5], dynamic networks [24, 36] and multi-populations [29, 6].

Multi-swarms combine repulsion with multi-populations [6, 7]. Interestingly, the repulsion occurs between particles, and between swarms. The multi-population in this case is an interacting super-swarm of charged swarms. A charged swarm is inspired by models of the atom: a conventional PSO nucleus is surrounded by a cloud of 'charged' particles. The charged particles are responsible for maintaining the diversity of the swarm. Furthermore, and in analogy to the exclusion principle in atomic physics, each swarm is subject to an exclusion pressure that operates when the swarms collide. This prohibits two or more swarms from surrounding a single peak, thereby enabling swarms to watch secondary peaks in the eventuality that these peaks might become optimal. This strategy has proven to be very effective for MPB environments.

This chapter starts with a description of the canonical PSO algorithm and then, in Section 3, explains why dynamic environments pose particular problems for unmodified PSO. The MPB framework is also introduced in this section. The following section describes some PSO variants that have been proposed to deal with diversity loss. Section 5 outlines the multi-swarm approach, and the subsequent section presents new results for a self-adapting multi-swarm, a multi-population with swarm birth and death.

2 Canonical PSO

In PSO, population members (particles) possess a memory of the best (with respect to an objective function) location that they have visited in the past, pbest, and of its fitness. In addition, particles have access to the best location of any other particle in their own network. These two locations (which will coincide for the best particle in any network) become attractors in the search space of the swarm. Each particle will be repeatedly drawn back to spatial neighborhoods close to these two attractors, which themselves will be updated if the global best and/or particle best is bettered at each particle update. Several network topologies have been tried, with the star or fully connected network remaining a popular choice for unimodal functions. In this network, every particle shares information with every other particle in the swarm, so that there is a single global best attractor, gbest, representing the best location found by the entire swarm.

Particles possess a velocity which influences position updates according to a simple discretization of particle motion

v(t + 1) = v(t) + a(t + 1) (1)

x(t + 1) = x(t) + v(t + 1) (2)


where a, v, x and t are acceleration, velocity, position and time (iteration counter) respectively. Eqs. 1 and 2 are similar to particle dynamics in swarm simulations, but PSO particles do not follow a smooth trajectory, instead moving in jumps, in a motion known as a flight [28] (notice that the time increment dt is missing from these rules). The particles experience a linear or spring-like attraction, weighted by a random number, towards each attractor (particle mass is set to unity). Convergence towards a good solution will not follow from these dynamics alone; the particle flight must progressively contract. This contraction is implemented by Clerc and Kennedy with a constriction factor χ < 1 [20]. For our purposes here, the Clerc-Kennedy PSO will be taken as the canonical swarm; χ replaces other energy-draining factors extant in the literature such as a decreasing 'inertial weight' and velocity clamping. Moreover, the constricted swarm comes with a convergence proof, albeit about a static attractor (although there is some experimental and theoretical support for convergence in the fully interacting swarm, where particles can move attractors [10]).

Explicitly, the acceleration of particle i in Eq. 1 is given by

ai = χ[cε · (pg − xi) + cε · (pi − xi)] − (1 − χ)vi (3)

where each ε is a vector of random numbers drawn independently from the uniform distribution U[0, 1], c > 2 is the spring constant and pi, pg are the particle and global attractors. This formulation of the particle dynamics has been chosen to demonstrate explicitly that constriction acts as a frictional force, opposite in direction and proportional to velocity. Clerc and Kennedy derive a relation for χ(c); standard values are c = 2.05 and χ = 0.729843788. The complete PSO algorithm for maximizing an objective function f is summarized as Algorithm 1.
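
Before the pseudocode, a minimal Python sketch may help make Eqs. 1-3 concrete. This is an illustrative sketch only: the swarm size, iteration budget and search bounds are assumptions, not values from the chapter.

```python
import numpy as np

def canonical_pso(f, d, n=20, iters=1000, chi=0.729843788, c=2.05,
                  lo=0.0, hi=100.0, rng=None):
    """Minimal Clerc-Kennedy PSO for maximizing f over [lo, hi]^d."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(lo, hi, (n, d))       # positions
    v = np.zeros((n, d))                  # velocities
    p = x.copy()                          # personal bests (pbest)
    fp = np.array([f(xi) for xi in x])    # pbest fitness values
    g = int(np.argmax(fp))                # index of global best (gbest)
    for _ in range(iters):
        e1 = rng.random((n, d))           # fresh uniform vectors each update
        e2 = rng.random((n, d))
        # Eq. 3: spring-like pulls towards gbest and pbest, minus friction
        a = chi * (c * e1 * (p[g] - x) + c * e2 * (p - x)) - (1 - chi) * v
        v = v + a                         # Eq. 1
        x = x + v                         # Eq. 2
        fx = np.array([f(xi) for xi in x])
        better = fx > fp                  # update personal bests ...
        p[better], fp[better] = x[better], fx[better]
        g = int(np.argmax(fp))            # ... and the global best
    return p[g], fp[g]
```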

Algorithm 1 Canonical PSO

FOR EACH particle i
    Randomly initialize vi, xi; set pi = xi
    Evaluate f(pi)
g = argmax f(pi)
REPEAT
    FOR EACH particle i
        Update particle position xi according to Eqs. 1, 2 and 3
        Evaluate f(xi)
        // Update personal best
        IF f(xi) > f(pi) THEN
            pi = xi
        // Update global best
        IF f(xi) > f(pg) THEN
            pg = xi
UNTIL termination criterion reached

3 PSO problems with moving peaks

As has been mentioned in Sect. 1, PSO must be modified for optimal results on dynamic environments typified by the moving peaks benchmark (MPB). These modifications must solve the problems of outdated memory and of lost diversity. This section explains the origins of these problems in the context of the MPB, and shows how outdated memory is easily addressed. The following section then considers the second, more severe, problem.

3.1 Moving Peaks

The dynamic objective function of the MPB, f(x, t), is optimized at 'peak' locations x* and has a global optimum at x** = argmax{f(x*)} (once more, assuming optimization means maximizing). Dynamism entails a small movement of magnitude s, in a random direction, of each x*. This happens every K evaluations and is accompanied by small changes of peak height and width. There are p peaks in total, although some peaks may become obscured. The peaks are constrained to move in a search space of extent X in each of the d dimensions, [0, X]^d.

This scenario, which is not the most general, has nevertheless been put forward as representative of real-world dynamic problems [12], and a benchmark function is publicly available for download from [11]. Note that small changes in f(x*) can still invoke large changes in x** due to peak promotion, so the many peaks model is far from trivial.
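
To illustrate the scenario, here is a toy moving-peaks function under stated assumptions: cone-shaped peaks, heights in [30, 70] and widths in [1, 12] as in the standard settings quoted later, and a shift step of length s. Branke's actual generator [11, 12] also perturbs heights and widths at each change; this sketch keeps only the peak movement.

```python
import numpy as np

class MovingPeaks:
    """Toy moving-peaks objective: p cone-shaped peaks in [0, X]^d."""
    def __init__(self, p=10, d=5, X=100.0, s=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.X, self.s = X, s
        self.pos = self.rng.uniform(0, X, (p, d))     # peak locations x*
        self.height = self.rng.uniform(30, 70, p)     # heights in [30, 70]
        self.width = self.rng.uniform(1, 12, p)       # widths in [1, 12]

    def __call__(self, x):
        # f(x) = max over peaks of (height - width * distance to peak)
        dist = np.linalg.norm(self.pos - x, axis=1)
        return float(np.max(self.height - self.width * dist))

    def change(self):
        # every K evaluations: shift each peak by s in a random direction
        step = self.rng.normal(size=self.pos.shape)
        step *= self.s / np.linalg.norm(step, axis=1, keepdims=True)
        self.pos = np.clip(self.pos + step, 0.0, self.X)
```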

3.2 The problem of outdated memory

Outdated memory happens at environment change, when the optima may shift in location and/or value. Particle memory (namely the best location visited in the past, and its corresponding fitness) may no longer be true at change, with potentially disastrous effects on the search.

The problem of outdated memory is typically solved by either assuming that the algorithm knows just when the environment change occurs, or that it can detect change. In either case, the algorithm must invoke an appropriate response. One method of detecting change is the re-evaluation of f at one or more of the personal bests pi [17, 23]. A simple and effective response is to re-set all particle memories to the current particle position and the f value at this position, ensuring that pg = argmax f(pi). One possible drawback is that the function may not have changed at the chosen pi, but has changed elsewhere. This can be remedied by re-evaluating f at all personal bests, at the expense of doubling the total number of function evaluations per iteration.
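
A sketch of this detect-and-respond step follows; the swarm object and its attribute names (x, p, f_p, pg, f_pg) are illustrative assumptions, not an API from the chapter.

```python
import numpy as np

def detect_and_respond(f, swarm):
    """Detect change by re-evaluating f at the global best attractor;
    on change, re-set every memory to the particle's current position."""
    if f(swarm.pg) != swarm.f_pg:                 # stored value now stale?
        for i in range(len(swarm.x)):
            swarm.p[i] = swarm.x[i].copy()        # forget outdated pbest
            swarm.f_p[i] = f(swarm.x[i])          # costs one evaluation each
        g = int(np.argmax(swarm.f_p))
        swarm.pg, swarm.f_pg = swarm.p[g].copy(), swarm.f_p[g]
```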


3.3 The problem of lost diversity

Equally troubling as outdated memory is insufficient diversity at change. The population takes time to re-diversify and re-converge, and is effectively unable to track a moving optimum.

It is helpful at this stage to introduce the swarm diameter |S|, defined as the largest distance, along any axis, between any two particles [2], as a measure of swarm diversity (Fig. 1). Loss of diversity arises when a swarm is converging on a peak. There are two possibilities: when change occurs, the new optimum location may be either within or outside the collapsing swarm. In the former case, there is a good chance that a particle will find itself close to the new optimum within a few iterations and the swarm will successfully track the moving target; the swarm as a whole has sufficient diversity. However, if the optimum shifts significantly far from the swarm, the low velocities of the particles (which are of order |S|) will inhibit re-diversification and tracking, and the swarm can even oscillate about a false attractor and along a line perpendicular to the true optimum, in a phenomenon known as linear collapse [5]. This effect is illustrated in Fig. 2.
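
As a small sketch, |S| under this definition is one line of NumPy; the (n, d) position array is an assumed representation:

```python
import numpy as np

def swarm_diameter(x):
    """|S|: the largest distance, along any axis, between any two
    particles; x is an (n, d) array of particle positions."""
    return float(np.max(x.max(axis=0) - x.min(axis=0)))
```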

Fig. 1. The swarm diameter

These considerations can be quantified with the help of a prediction for the rate of diversity loss [2, 3, 10]. In general, the swarm shrinks at a rate determined by the constriction factor and by the local environment at the optimum. For static functions with spherically symmetric basins of attraction, the theoretical and empirical analysis of the above references suggests that the rate of shrinkage (and hence diversity loss) is scale invariant and is given by a scaling law

|S(t)| = Cα^t (4)

for constants C and α < 1, where α ≈ 0.92 and C is the swarm diameter at iteration t = 0. The number of function evaluations between changes, K, can be converted into a period measured in iterations, L, by considering the total number of function evaluations per iteration including, where necessary, the extra test-for-change evaluations. We might expect that, for peak shift distance s and box size X = |S(0)|, if s ≫ SL = Xα^L, tracking will be very hard since the swarm has already converged to a very small ball at the first change. The experiments reported in [3] show that canonical PSO fails to track a single-peaked dynamic environment defined by s = 8.7, L = 100, X = 10. Since SL computes to 0.0024, this is hardly surprising.
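
A quick arithmetic check of that figure, straight from Eq. 4:

```python
X, alpha, L = 10.0, 0.92, 100
S_L = X * alpha ** L   # ≈ 0.0024: over 3000 times smaller than s = 8.7
```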

Fig. 2. Sequence of frames showing possible behavior when the optimum shift is greater than the swarm diversity. When the attractor T (square box, frame 1) shifts, particle a is at the global best, pg. a continues along trajectory v since it is not accelerated in this update (frame 2). Particle a continues to move along v, repositioning pg at each update, becoming in effect the swarm leader (frame 3). After a while, the swarm oscillates along v, about a point perpendicular to T (frame 4). Eventually random fluctuations will cause another particle to deviate from v and move closer towards the attractor. The swarm soon follows and converges on T.

4 Diversity lost and diversity regained

There are two solutions in the literature to the problem of insufficient diversity. Either a diversity-increasing mechanism can be invoked at change (or at pre-determined intervals), or some permanent means can be put in place to ensure there is sufficient diversity at all times [7]. These modifications are the subject of this section.

4.1 Re-diversification

Hu and Eberhart [23] study a number of re-diversification mechanisms. These all involve randomization of the entire swarm, or part of it. This happens when re-evaluation of the objective function at one or several of the attractors detects change, or at a pre-set interval. Clearly the problem with this approach is the arbitrariness of the extra parameters. Since randomization implies information loss, there is a danger of erasing too much information and effectively re-starting the swarm. On the other hand, too little randomization might not introduce enough diversity to cope with the change. And, of course, if tests for change happen at pre-determined intervals, there is a danger of missing a shift. The arbitrariness of the extra parameters can only be resolved if much prior knowledge about f's dynamism is available, or if some higher-level modification scheme is implemented. Such a scheme could infer details about f's dynamism during the run, making appropriate adjustments to the re-diversification parameters. So far, though, higher-level modifications such as these have not been studied.

4.2 Maintaining diversity by repulsion

A constant, and hopefully good enough, degree of swarm diversity can be maintained at all times either through some type of repulsive mechanism, or by adjustments to the information sharing neighborhood. Repulsion can either be between particles, or from an already detected optimum. For example, Krink et al. [34] study finite-size particles as a means of preventing premature convergence. The hard-sphere collisions produce a constant diversification pressure. Alternatively, Parsopoulos and Vrahatis [30] place a repeller at an already detected optimum, in an attempt to divert the swarm and find new optima. Neither technique, however, has been applied to the dynamic scenario.

An example of repulsion that has been tested in a dynamic context is the atom analogy [5, 4, 9, 6]. In this model, a swarm is comprised of a 'charged' and a 'neutral' sub-swarm. The model can be depicted as a cloud of charged particles orbiting a contracting, neutral, PSO nucleus (Fig. 3). The charged particles can be either classical or quantum particles; both types are discussed in some depth in references [6, 7] and in the following section. Charge enhances diversity in the vicinity of the converging PSO sub-swarm, so that optimum shifts within this cloud should be trackable. Good tracking (outperforming canonical PSO) has been demonstrated for unimodal dynamic environments of varying severities [3].


Fig. 3. The Atom Analogy. The situation depicted here shows a PSO sub-swarm of neutral particles (filled circles), converging at an optimum. The neutral swarm diameter, |S0|, is shrinking by a factor of 0.92 at each iteration. This sub-swarm is surrounded by a number of charged particles with constant diversity |S−|. Both sub-swarms share the same global attractor pg. Optimum moves to locations within the charged sub-swarm will be rapidly re-optimized by the swarm as a whole.

4.3 Maintaining diversity with dynamic network topology

Adjustments to the information sharing topology can be made with the intention of reducing, maybe temporarily, the desire to move towards the global best position, thereby enhancing population diversity. Li and Dam use a grid-like neighborhood structure, and Janson and Middendorf test a hierarchical structure, reporting improvements over unmodified PSO for unimodal dynamic environments [24, 36].

4.4 Maintaining diversity with multi-populations

A number of research groups have considered multi-populations as a means of enhancing diversity. The multi-population idea is particularly helpful in multi-modal environments such as many peaks. The aim here is to allow each population to converge on a promising peak. Then, if any secondary peak becomes the global optimum as a result of change, a population is close at hand. Multi-population techniques include niching, speciation and multi-swarms.

In the static context, the niching PSO of Brits et al. [16] can successfully optimize some static benchmark problems. In nichePSO, if a particle's fitness changes very little (the variance in fitness is less than a threshold) over a small number of iterations, a two-particle sub-swarm is created from this particle and its nearest spatial neighbor. This technique, as the authors point out, would fail in a dynamic environment because niching depends on a homogeneous distribution of particles in the search space, and on a training phase.

A speciation PSO variation, known as clearing [31], has been adopted by Li in the static context and generalized by Parrott and Li to dynamic functions [27, 29]. Under clearing, the number and size of swarms are adjusted dynamically by constructing an ordered list of particles, ranked according to their fitness, with spatially close particles joining a particular species. This method relies on a speciation radius and has no further diversity mechanism. Other related work includes using different swarms in cooperation to optimize different parts of a solution [35], a two-swarm min-max optimization algorithm [33] and iteration-by-iteration clustering of particles into sub-swarms [26]. Apart from Parrott and Li's speciation, none of these multi-population techniques has been generalized as a dynamic optimizer. The multi-swarm approach of Blackwell and Branke is described in detail in the next section.

5 Multi-swarms

A combined approach might be to incorporate the virtues of the multi-population approach and of swarm diversity enhancing mechanisms such as repulsion. Such an optimizer would be well suited to the many peaks environment. Multi-swarms, first proposed in a non-optimization context [8], would seem to do just this. The extension of multi-swarms to a dynamic optimizer was made by Blackwell and Branke [6], and is inspired by Branke's own self-organizing scouts (SOS) [13]. The scouts have been shown to give excellent results on the many peaks benchmark.

A multi-swarm is a colony of charged swarms interacting locally via exclusion and globally by anti-convergence. The motivation for these operators is that a mechanism must be found to prevent two or more swarms from trying to optimize the same peak (exclusion) and also to maintain multi-swarm diversity, that is to say the diversity amongst the population of swarms as a whole (anti-convergence). Multi-swarms have been compared very favorably to both hierarchical swarms and to self-organizing scouts.

We consider below the main ingredients of the multi-swarm algorithm in more depth. In particular we will assess the values of parameters in relation to the many peaks benchmark. A complete discussion of parameter choices is given in [7]. The multi-swarm algorithm is presented in Algorithm 2.

Algorithm 2 Multi-Swarm

// Initialization
FOR EACH particle ni
    Randomly initialize vni, xni; set pni = xni
    Evaluate f(pni)
FOR EACH swarm n
    png := argmax{f(pni)}
REPEAT
    // Anti-Convergence
    IF all swarms have converged THEN
        Re-initialize worst swarm
    FOR EACH swarm n
        // Test for Change
        Evaluate f(png)
        IF new value is different from last iteration THEN
            Re-evaluate each particle attractor
            Update swarm attractor
        FOR EACH particle i of swarm n
            // Update Particle
            Apply equations (3)-(9) depending on particle type
            // Update Attractor
            Evaluate f(xni)
            IF f(xni) > f(pni) THEN
                pni := xni
            IF f(xni) > f(png) THEN
                png := xni
        // Exclusion
        FOR EACH swarm m ≠ n
            IF swarm attractor png is within rexcl of pmg THEN
                IF f(png) ≤ f(pmg) THEN
                    Re-initialize swarm n
                ELSE
                    Re-initialize swarm m
                FOR EACH particle in the re-initialized swarm
                    Re-evaluate function value
                Update swarm attractor
UNTIL number of function evaluations performed > max

5.1 Atom Analogy

In the atom analogy, each swarm is pictured as an atom with a contracting nucleus of neutral PSO particles, and an enveloping cloud of charged particles. All particles are in fact members of the same information network, so that they all (in the star topology) have access to pg. The mutual repulsions between the charged particles may follow a deterministic, classical rule (Coulomb repulsion, parameterized by particle charge Q). Alternatively, in a quantum atom, the particles are positioned within a hypersphere of radius rcloud centered on pg according to a probability distribution. So far, two uniform distributions have been tested. References [6, 7] consider a uniform shell distribution, p(r, dr) = ρ(r)dr = const, where ρ is a probability density, r is a shell radius and p is a probability. Recent work has investigated a uniform volume distribution, p(x, dv) = ρ(x)dv = const, where dv is a volume element. In both cases, the distributions are normalized so that p(r > rcloud) = 0. Other, non-uniform and possibly dynamic distributions might be favorable for the quantum swarm in some cases, but such distributions remain unexplored.

The quantum atom has the advantages of lower complexity and an easily controllable distribution: the Coulomb repulsion has quadratic complexity and highly fluctuating electron orbits [2, 3]. An order-of-magnitude estimation for the parameter rcloud (for quantum swarms), or Q (for classically charged clouds), can be made by supposing that good tracking will occur if the mean charged-particle separation ⟨|x− − pg|⟩ is comparable to s. This separation is easy to compute for the quantum swarm; only empirical data is available for classical charged particles.
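
A sketch of sampling quantum particle positions under both tested distributions follows (function and argument names are illustrative; pg is assumed to be a NumPy vector):

```python
import numpy as np

def quantum_cloud(pg, r_cloud, n, rng, dist="volume"):
    """Place n 'quantum' particles in a d-ball of radius r_cloud around
    the attractor pg. dist='shell' draws the radius uniformly in
    [0, r_cloud] (uniform shell distribution); dist='volume' spreads
    points uniformly over the ball's volume."""
    d = len(pg)
    u = rng.normal(size=(n, d))                     # isotropic directions
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    w = rng.uniform(size=(n, 1))
    r = r_cloud * (w if dist == "shell" else w ** (1.0 / d))
    return pg + u * r                               # p(r > r_cloud) = 0
```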

5.2 Exclusion

In order to demonstrate the necessity for exclusion, first consider an assembly of non-interacting swarms, also known as a many-swarm. A many-swarm has M swarms, and each swarm, for symmetrical configurations, has N0 neutral and N− charged particles. Such a many-swarm is written M ∗ (N0 + N−). Since the swarms do not interact, either dynamically through the particle velocity and position updates, or by sharing information, the M swarms are completely independent, and any number of them may try to optimize the same peak. This is undesirable because it is clearly inefficient to have two or more swarms on the same peak, and in any case, we wish to distribute the swarms throughout the search space for peak watching. Hence the swarms must interact in some way. One possibility is to allow the swarms to interact topologically and share a single information network. The multi-swarm approach is to seek a spatial interaction between swarms. Such an interaction might repel entire swarms from already occupied peaks. However, Coulomb repulsion, or some similar physics-inspired repulsion, would not be satisfactory, because the attractive pull towards the peak might be balanced by the repulsive force away from other nearby swarms. In such an equilibrium, no swarm would be able to optimize the peak.

Exclusion is inspired by the exclusion principle in atomic and molecular physics. This principle states that no two electrons may occupy the same state. The exclusion principle provides an effective repulsive force between two gas molecules with overlapping electron clouds [22]. However, the effective force does not arise from any deterministic equation that governs the electron motion, but is a rule imposed on the probability distributions of the electron positions. A version of this principle for interacting swarms is a rule that forbids two swarms from moving to within rexcl of each other, where the distance between swarms is defined as the distance between their pg's. The exclusion operator simply randomizes, in the entire search space, the worse swarm in any collision, as judged by the current best value determined by the swarm, f(pg). The configuration of the interacting multi-swarm is written M(N0 + N−).
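
As a sketch (the swarm objects, their pg/f_pg attributes and the reinitialize method are assumptions for illustration), the exclusion test from Algorithm 2 amounts to:

```python
import itertools
import numpy as np

def apply_exclusion(swarms, r_excl, rng):
    """If two swarm attractors lie within r_excl of each other,
    randomize the swarm with the worse f(pg) over the whole space."""
    for a, b in itertools.combinations(swarms, 2):
        if np.linalg.norm(a.pg - b.pg) < r_excl:
            worse = a if a.f_pg <= b.f_pg else b
            worse.reinitialize(rng)       # re-randomize every particle
```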


An order-of-magnitude estimation for rexcl can be made by assuming that all p peaks are evenly distributed in X^d. The linear diameter of the basin of attraction of a peak is then, on average, dboa = X/p^(1/d). It is reasonable to assume that swarms that are closer than this distance should experience exclusion, since the overall strategy is to place one swarm on each peak.
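
With the standard MPB settings quoted in Section 5.5, this estimate works out as:

```python
X, p, d = 100.0, 10, 5           # standard MPB settings (Section 5.5)
d_boa = X / p ** (1.0 / d)       # ≈ 63.1: the scale on which to set r_excl
```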

5.3 Anti-Convergence

Anti-convergence is a simple operator that is designed to ensure there is at least one free swarm in the multi-swarm at all times. A free swarm is one that is patrolling the search space rather than converging on a peak. A swarm is assumed to be converging when the neutral swarm diameter is less than a convergence diameter, 2rconv. The idea is that if the number of swarms is less than the number of peaks, all swarms may converge, leaving some peaks unwatched. One of these unwatched peaks may later become optimal. The presence of free swarms maintains multi-swarm diversity and encourages response to peak promotion.

Estimations of rconv are difficult, but some progress can be made by considering the rate of convergence of the neutral swarm, as given by Equation 4. A lower bound on rconv can be estimated from the ideal case in which a swarm immediately tracks a shifted peak. This means that the swarm size at the shift is about s and the swarm has K function evaluations' worth of time to contract around the peak. On the other hand, rconv should certainly be less than rexcl because exclusion occurs before convergence.

Note that there are two levels of diversity. Diversity at the swarm level, as enforced by exclusion, enables a single swarm to track a single moving peak, and diversity at the multi-swarm level enables the multi-swarm as a whole to find new peaks.

5.4 Multi-swarm cardinality

The multi-swarm cardinality M can be estimated from p. We would expect that M > p is undesirable, since free swarms absorb valuable function evaluations and there is no need to have many more swarms than peaks. Anti-convergence is expected to be beneficial for M < p. Optimally, we suppose that M = p, and in this case anti-convergence can be switched off, since the multi-swarm has just the right number of swarms.

5.5 Results

An exhaustive series of multi-swarm experiments has been conducted for the many peaks benchmark. The standard settings for the MPB are: number of peaks, p = 10; change period in function evaluations, K = 5000; peak shift severity, s = 1.0; dimensionality, d = 5; and search space range X = 100. The peak heights and widths vary randomly but are constrained to [30, 70] and [1, 12] respectively. Each experiment actually consists of 50 runs for a given set of multi-swarm parameters. Each run uses a different random number generator seed for initialization of the swarms, the swarm update algorithm and the MPB generator. Other non-standard MPBs were also tested for comparisons. Multi-swarm performance is quantified by the offline error, which is the average, at any point in time, of the error of the best solution found since the last environment change. This measure is commonly used for scenarios such as the MPB and is zero for perfect tracking.
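
A sketch of how the offline error can be computed from a run (the per-evaluation error series and the set of change points are assumed inputs):

```python
def offline_error(errors, change_points):
    """errors[t] is the error of the solution evaluated at step t;
    change_points is the set of steps at which the environment changed.
    Averages, over all t, the best error seen since the last change."""
    total, best = 0.0, float("inf")
    for t, e in enumerate(errors):
        if t in change_points:        # environment changed: reset memory
            best = float("inf")
        best = min(best, e)
        total += best
    return total / len(errors)
```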

Various values of multi-swarm cardinality M were tested for a fixed total number of particles, as given by the expression Ntotal = M(N0 + N−). As expected, M = p was found to be optimal. The multi-swarm coped well with shift severities between 1.0 and 6.0. The multi-swarm offline errors for MPBs with different numbers of peaks were less than 3.0 for 5 ≤ p ≤ 200. Anti-convergence was found to bring a significant improvement when M < p. The robustness of the algorithm, and its generalizability into higher dimensions, was also tested by taking d = 10 and varying the predicted optimal parameter settings (i.e. rexcl, rconv, rcloud and Q) by 20%.

In all cases, the multi-swarms with charge outperform many-swarms, PSO and multi-swarms without charge. Furthermore, quantum swarms perform better than classical charged swarms. The offline error, as compared to hierarchical swarms and self-organizing scouts, is cut by half, and the improvement over a randomization scheme is about an order of magnitude. It seems, therefore, that the multi-swarm is a very promising approach for problems of the MPB class.

6 Self-adapting multi-swarms

The multi-swarm model of the previous section introduced a number of new parameters. Although recommendations can be made for these settings, the analysis depends on prior knowledge of the environment and on many test runs. A laudable goal for any optimization technique is the reduction of hand-tunable parameters. Although parameter adjustments might improve performance for any particular problem, a general-purpose method that performs reasonably well across a spectrum of problems is certainly attractive. We therefore wonder to what extent PSO and PSO variants can find their own best parameter settings during a single run. Such self-adapting algorithms might not return the best performance on any particular problem instance. However, to make fair comparisons, the total number of function evaluations (or iterations) involved in all the trials of the hand-tuned method must be taken into account. This point is emphasized by Clerc in his explorations of 'Tribes', a self-adapting PSO [18, 19].

Here we will describe self-adaptations at the level of the multi-swarm. Future work will seek to incorporate adaptations of individual swarms into this scheme. The following will assume (5^0 + 5^−) swarms (five neutral plus five charged particles), with canonical PSO neutral particles and quantum charged particles, and with the swarm diversity parameter, rcloud, determined by the peak shift severity. This recipe gave the best results for the environments studied in the previous section. The parameters at the multi-swarm level are the number of swarms M and the exclusion and convergence radii rexcl and rconv. It is assumed in the following that the multi-swarm has access to the dimensionality d and the extent of the search space X, but not to the MPB parameters p or K. (Previously, rexcl, rconv and M were determined with knowledge of p and K.)

6.1 Swarm birth and death

The basic idea is to allow the multi-swarm to regulate its size by bringing new swarms into existence, or by removing redundant swarms. The aim, as before, is to place a swarm on each peak, and to maintain multi-swarm diversity with (at least one) patrolling swarm. The multi-swarm therefore needs a new swarm if all swarms are currently converging. Alternatively, if there are too many free swarms (i.e. those that fail the convergence criterion), a free swarm should be removed. If there is more than one free swarm, the choice for removal is arbitrary and a simple rule might be to remove the worst of the free swarms, as judged by f(pg).

This simple birth/death mechanism removes the need for the anti-convergence operator, and for specifying the multi-swarm cardinality. The self-adapting version of Algorithm 2 is given below in Algorithm 3, where Mfree is the number of free swarms at iteration t, and identical steps in Algorithm 2 have been abbreviated.

Algorithm 3 Self-adapting multi-swarm

Begin with a single free swarm, randomized in X^d
At each iteration t:
    IF Mfree = 0, generate a new free swarm
    ELSE IF Mfree > nexcess, remove worst free swarm
    FOR EACH swarm n
        test for change
        IF swarm n is excluded, randomize
        ELSE update particle velocities and positions
        update attractors
        test for convergence
    apply exclusion
REPEAT

For generality, Algorithm 3 also specifies a redundancy parameter nexcess, which is set to the desired number of free swarms. A simple choice is to suppose that nexcess = 1, but this may not give sufficient diversity if there are many peaks. Alternatively, nexcess = ∞ means that no swarm can ever be removed. Intermediate values control the amount of multi-swarm diversity. Part of the purpose of the experiments reported below is to assess the algorithm for robustness in nexcess.

The multi-swarm size M(t) is dynamic and at any iteration t is given by

M(0) = 1
M(t) = M(t − 1) + 1 if Mfree = 0
M(t) = M(t − 1) − 1 if Mfree > nexcess (5)

Swarm convergence and exclusion are now determined by a dynamic convergence radius r(t) defined by

r(t) = X/(2M^(1/d)) (6)

which has been chosen to ensure a mean volume per swarm of (2r)^d = X^d/M, a condition which might be expected to hold if the peaks are, on average, uniformly distributed in X^d. The number of free swarms at any time is the difference between the multi-swarm size and the number of converging swarms. A swarm is defined as 'converging' if its diameter is less than 2r. The exclusion radius is replaced by r(t). Hence two parameters, M and rexcl, and one operator, anti-convergence, have been removed from the multi-swarm algorithm, at the expense of introducing a new parameter, nexcess.
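
A sketch of one regulation step combining Eqs. 5 and 6 (the swarm container, diameter method and f_pg attribute are illustrative assumptions):

```python
def regulate(swarms, X, d, n_excess, make_swarm):
    """One birth/death step (Eq. 5) with the dynamic radius of Eq. 6."""
    M = len(swarms)
    r = X / (2 * M ** (1.0 / d))              # Eq. 6: r(t)
    free = [s for s in swarms if s.diameter() >= 2 * r]
    if len(free) == 0:                        # all converging: birth
        swarms.append(make_swarm())
    elif len(free) > n_excess:                # too many free swarms: death
        swarms.remove(min(free, key=lambda s: s.f_pg))
    return r            # r(t) also replaces the exclusion radius r_excl
```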

6.2 Results

A number of experiments using the MPB of Section 5.5 with 10 and 200 peaks were conducted to test the efficacy of the self-adapting multi-swarm for various values of nexcess. The uniform volume distribution described in Section 5.1 was used. An empirical investigation revealed that rcloud = 0.5s for shift length s yields optimum tracking for these MPBs, and this was the value used here.

Table 1 shows the raw and rounded offline errors for 1 ≤ nexcess ≤ 7 and for nexcess = ∞. Only the rounded errors are significant, but the pre-rounded values have been reported in order to examine algorithm functionality. For comparison, the best performance of an unadapted 10(10^0 + 10^−) multi-swarm for the p = 10 and p = 200 environments is, respectively, 1.75(0.06) and 2.26(0.03) [7]. The best self-adapting multi-swarm errors are 1.77(0.05) and 2.37(0.03), only slightly higher than the hand-tuned values. The constancy of the raw offline error for nexcess ≥ 5 shows that the algorithm never tries to generate 6 or more swarms: nexcess = 5 is equivalent to setting nexcess to ∞.


Table 1. Variation of offline error with nexcess for 10 and 200 dynamic peaks. The raw data demonstrates identical algorithm behavior for nexcess ≥ 5

          Raw                                        Rounded (standard error)
nexcess   p = 10               p = 200               p = 10        p = 200
1         1.9729028458116666   2.5383187958098343    1.97(0.09)    2.54(0.04)
2         1.879641879674056    2.398361911062971     1.88(0.07)    2.40(0.02)
3         1.7699076299648027   2.386396596554201     1.77(0.05)    2.39(0.03)
4         1.8033988974332964   2.372590853208213     1.80(0.06)    2.37(0.03)
5         1.8013758537004643   2.365026825401844     1.80(0.06)    2.37(0.03)
6         1.8010120393728533   2.3651325663361167    1.80(0.06)    2.37(0.03)
7         1.8010120393728533   2.3651325663361167    1.80(0.06)    2.37(0.03)
∞         1.8010120393728533   2.3651325663361167    1.80(0.06)    2.37(0.03)

6.3 Discussion

Theoretically, nexcess = 1 would appear to be an ideal setting, allowing the multi-swarm to adapt to the number of peaks, whilst always maintaining a single free swarm. However, inspection of the numbers of free and converged swarms for a single function instance revealed that the self-adaptation mechanism at nexcess = 1 frequently adds a swarm, only to remove one at the subsequent iteration. The explanation is believed to be the following: suppose a swarm (swarm A) has started to converge on a peak. A new free swarm, swarm B, will be created. Since A is just at the edge of convergence, fluctuations in the swarm size may cause the swarm to appear to be free at the next iteration. Hence there will be two free swarms, and one must be removed: almost certainly swarm B, since this has had little chance to improve its pg. Swarm A will again start to converge (according to the criterion), causing the creation of a new free swarm at the next iteration. Such repeated creations and annihilations of the free swarm waste valuable function evaluations.

Another possible setting is nexcess = ∞. This effectively turns swarm removal off. Although there is no check on the number of swarms, it is not unreasonable to suppose that, given enough time, the multi-swarm would stabilize at Mconv = p and Mfree = 1. The convergence criterion is rather naive and may mark a free swarm as converging even though it is not associated with a peak. This will cause the generation of another free swarm, with no means of removal. This will only be a problem when Mconv > p, a situation that might not even happen within the time-scale of the run. For example, Figures 4 and 5 show convergence data for a single run at p = 10 and p = 200. For p = 10, the eleventh swarm is generated at function evaluation neval = 254054. There are 500000 evaluations in a run; in the remaining evaluations the multi-swarm size is at most 13, and remains fairly steady after the 400000th evaluation. The p = 200 trial shows a steadily growing multi-swarm, which never attains complete coverage of all peaks, ending with 47 converging swarms and 1 free swarm.


Fig. 4. Convergence of the self-adapting nexcess = ∞ multi-swarm for a single instance of the 10 peak MPB environment. Upper plot shows offline error, lower plot shows number of converged and free swarms

The flexibility of a swarm removal mechanism is desirable for a number of reasons. For example, two peaks might move within rexcl of each other, causing a previously converged swarm to vaporize through exclusion (the better swarm remains). Or maybe a peak i becomes invisible if its height is smaller than other peak heights at its optimizer, i.e. if f(x*_i) < f_j(x*_i) for some j ≠ i, where f_j is the contribution of peak j. In either case, superfluous free swarms will consume function evaluations.

The redundancy nexcess can be set at any value between the two extremes of nexcess = 1 and nexcess = ∞. (nexcess = 0 gives very bad performance: no swarms at all may be added, and the single swarm converges on, and remains on, the first peak it finds.) The results for p = 10 and p = 200 indicate that tuning of nexcess can improve performance for runs where the multi-swarm has found all the peaks. Tuning can prevent the multi-swarm from generating too many free swarms; for example, 3 free swarms are optimal for p = 10. However, setting nexcess at either extreme still produces good performance, and better than the comparison algorithms cited in Section 5.5. Perhaps the multi-swarm itself could tune nexcess during a run. A more sophisticated convergence criterion would also have to be devised. For example, the convergence criterion could take into account both the swarm diameter and the rate of improvement of f(pg).


Fig. 5. Convergence of the self-adapting nexcess = ∞ multi-swarm for a single instance of the 200 peak MPB environment. Upper plot shows offline error, lower plot shows number of converged and free swarms

7 Summary

This chapter has reviewed the application of particle swarms to dynamic optimization. The canonical PSO algorithm must be modified for good performance in environments such as many peaks. In particular, the problem of diversity loss must be addressed. The most promising PSO variant to date is the multi-swarm; a multi-swarm is a colony of swarms, where each swarm, drawing from an atomic analogy, consists of a canonical PSO surrounded by a cloud of charged particles. The underlying philosophy behind multi-swarms is to place a separate PSO on the best peaks, and to maintain a population of patrolling particles for the purposes of identifying new peaks. Movement of any peak that is being watched by a swarm is tracked by the charged particles. An exclusion operator ensures that only one swarm can watch any one peak, and anti-convergence seeks to maintain a free, patrolling swarm.

New work on self-adaptation has also been presented here. Self-adaptation aims at reducing the number of tunable parameters and operators. Some progress has been made at the multi-swarm level, where a mechanism for swarm birth and death has been suggested; this scheme eliminates one operator and allows the number of swarms and an exclusion parameter to adjust dynamically. One free parameter, the number of patrolling swarms, still exists, but results suggest that the algorithm is not overly sensitive to this number.


Self-adaptations at the level of each swarm, in particular allowing particles to be born and to die, and self-regulation of the charged cloud radius, remain unexplored.

References

1. A. Engelbrecht. Computational Intelligence. John Wiley and Sons, 2002.
2. T. M. Blackwell. Particle swarms and population diversity I: Analysis. In J. Branke, editor, GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization Problems, pages 9–13, 2003. http://www.ubka.uni-karlsruhe.de/cgi-bin/psview?document=2003%2Fwiwi%2F1.
3. T. M. Blackwell. Particle swarms and population diversity II: Experiments. In J. Branke, editor, GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization Problems, pages 14–18, 2003. http://www.ubka.uni-karlsruhe.de/cgi-bin/psview?document=2003%2Fwiwi%2F1.
4. T. M. Blackwell and P. Bentley. Don't push me! Collision avoiding swarms. In Congress on Evolutionary Computation, pages 1691–1696, 2002.
5. T. M. Blackwell and P. J. Bentley. Dynamic search with charged swarms. In W. B. Langdon et al., editors, Genetic and Evolutionary Computation Conference, pages 19–26. Morgan Kaufmann, 2002.
6. T. M. Blackwell and J. Branke. Multi-swarm optimization in dynamic environments. In G. R. Raidl, editor, Applications of Evolutionary Computing, volume 3005 of LNCS, pages 489–500. Springer, 2004.
7. T. M. Blackwell and J. Branke. Multi-swarms, exclusion and anti-convergence in dynamic environments. IEEE Transactions on Evolutionary Computation, to appear.
8. T. M. Blackwell. Swarm music: Improvised music with multi-swarms. In Proc. AISB '03 Symposium on Artificial Intelligence and Creativity in Arts and Science, pages 41–49, 2003.
9. T. M. Blackwell. Swarms in dynamic environments. In E. Cantu-Paz, editor, Genetic and Evolutionary Computation Conference, volume 2723 of LNCS, pages 1–12. Springer, 2003.
10. T. M. Blackwell. Particle swarms and population diversity. Soft Computing, 9:793–802, 2005.
11. J. Branke. The moving peaks benchmark website. http://www.aifb.uni-karlsruhe.de/jbr/movpeaks.
12. J. Branke. Memory enhanced evolutionary algorithms for changing optimization problems. In Congress on Evolutionary Computation CEC99, volume 3, pages 1875–1882. IEEE, 1999.
13. J. Branke. Evolutionary approaches to dynamic environments - updated survey. In GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization Problems, pages 27–30, 2001. http://www.aifb.uni-karlsruhe.de/~jbr/EvoDOP/Papers/gecco-dyn2001.pdf.
14. J. Branke. Evolutionary Optimization in Dynamic Environments. Kluwer, 2001. http://www.aifb.uni-karlsruhe.de/~jbr/book.html.
15. J. Branke and H. Schmeck. Designing evolutionary algorithms for dynamic optimization problems. In S. Tsutsui and A. Ghosh, editors, Theory and Application of Evolutionary Computation: Recent Trends, pages 239–262, 2002.
16. R. Brits, A. P. Engelbrecht, and F. van den Bergh. A niching particle swarm optimizer. In Fourth Asia-Pacific Conference on Simulated Evolution and Learning, pages 692–696, 2002.
17. A. Carlisle and G. Dozier. Adapting particle swarm optimisation to dynamic environments. In Proc. of Int. Conference on Artificial Intelligence, pages 429–434, 2000.
18. M. Clerc. Think locally act locally - a framework for adaptive particle swarm optimizers. Technical report, 2002. http://clerc.maurice.free.fr/pso/ (accessed June 29, 2006).
19. M. Clerc. Particle Swarm Optimization. ISTE Publishing Company, 2006.
20. M. Clerc and J. Kennedy. The particle swarm: explosion, stability and convergence in a multi-dimensional space. IEEE Transactions on Evolutionary Computation, 6(1):58–73, 2002.
21. C. Reynolds. Flocks, herds and schools: a distributed behavioral model. Computer Graphics, 21:25–34, 1987.
22. A. P. French and E. F. Taylor. An Introduction to Quantum Physics. W. W. Norton and Company, 1978.
23. X. Hu and R. C. Eberhart. Adaptive particle swarm optimisation: detection and response to dynamic systems. In Proc. Congress on Evolutionary Computation, pages 1666–1670, 2002.
24. S. Janson and M. Middendorf. A hierarchical particle swarm optimizer for dynamic optimization problems. In G. R. Raidl, editor, Applications of Evolutionary Computing, volume 3005 of LNCS, pages 513–524. Springer, 2004.
25. J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, pages 1942–1948, 1995.
26. J. Kennedy. Stereotyping: improving particle swarm performance with cluster analysis. In Congress on Evolutionary Computation, pages 1507–1512, 2000.
27. X. Li. Adaptively choosing neighborhood bests in a particle swarm optimizer for multimodal function optimization. In K. Deb et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference, GECCO-2004, volume 3102 of LNCS, pages 105–116. Springer, 2004.
28. B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman and Company, 1983.
29. D. Parrott and X. Li. A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In Congress on Evolutionary Computation, pages 98–103, 2004.
30. K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1:235–306, 2002.
31. A. Petrowski. A clearing procedure as a niching method for genetic algorithms. In J. Grefenstette, editor, Int'l Conference on Evolutionary Computation, pages 798–803. IEEE, 1996.
32. R. Eberhart and Y. Shi. Swarm Intelligence. Morgan Kaufmann, 2001.
33. Y. Shi and R. Krohling. Co-evolutionary particle swarm optimization to solve min-max problems. In Congress on Evolutionary Computation, pages 1682–1687, 2002.
34. T. Krink, J. Vesterstrom and J. Riget. Particle swarm optimisation with spatial particle extension. In Congress on Evolutionary Computation, pages 1474–1479, 2002.
35. F. van den Bergh and A. P. Engelbrecht. A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, pages 225–239, 2004.
36. X. Li and K. H. Dam. Comparing particle swarms for tracking extrema in dynamic environments. In Congress on Evolutionary Computation, pages 1772–1779, 2003.
