Multimodal Optimization

Multimodal functions have more than one optimum, but can have either a single global optimum or several. In the first case, the problem that an optimization algorithm has to deal with is avoiding premature convergence to a local optimum. In the second, the presence of several solutions with equally optimal fitness raises the issue of how an algorithm can locate all the global optima.

One of the first applications of PSO to multimodal problems was performed in [1]. In that paper, however, the problem is to locate the global optimum in a fitness landscape with multiple local optima but a single global optimum. Such problems may be challenging for a standard GA because crossover operations can be destructive when performed on individuals located near different optima. The results of several versions of GA were compared with the results of PSO, which performs well independently of the degree of modality (number of optima).

In [2], a hybrid PSO algorithm is tested on multimodal problems having a single global optimum. The authors include the nonuniform mutation operator proposed by Michalewicz in 1996; this modification improves the performance of PSO compared to classical PSO, which is already competitive with other techniques on the test functions analyzed.

When the fitness landscape has more than one global optimum, the swarm will probably fall into one of the following behaviors [3,4]:

- It will find only one of the optima.
- It will wander without being able to settle on any of the optima.

The general procedure proposed to avoid this inconvenience is to transform the fitness function once a (possibly local) optimum is found. That is:

1. The PSO is used to find the first optimum.
2. This result is stored.
3. The fitness function is modified so this minimum disappears, and the PSO continues the search.

Two different techniques are proposed to transform the fitness function. The first is function ``deflation'', where the new fitness function is calculated as

F(x) = \frac{f(x)}{\tanh(\lambda \, \lVert x - \bar{x} \rVert)}    (1)

where $\bar{x}$ is the optimum found and $\lambda$ is a positive constant.
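A minimal Python sketch of this store-and-transform loop, assuming the deflation form of Eq. (1), could look as follows. The routine `pso_run` and the constant `lam` are placeholders for an arbitrary PSO implementation and deflation parameter; they are illustrative and not part of the cited proposals.

import numpy as np

def deflate(f, found, lam=1.0):
    """Deflated objective F(x) = f(x) / prod_i tanh(lam * ||x - x_i||).

    Each stored minimizer x_i becomes a repelling point. Note that this
    assumes f is positive over the search space; otherwise dividing by
    tanh(.) in (0, 1) would deepen, not remove, the located minima.
    """
    def F(x):
        value = f(np.asarray(x))
        for x_found in found:
            value /= np.tanh(lam * np.linalg.norm(np.asarray(x) - x_found))
        return value
    return F

def locate_optima(f, pso_run, n_optima, lam=1.0):
    """Steps 1-3 above: find an optimum, store it, transform f, repeat.

    `pso_run(g)` stands for any PSO routine returning the best position
    found for objective g (a hypothetical callback, not a library API).
    """
    found = []
    g = f
    for _ in range(n_optima):
        x_best = pso_run(g)               # 1. PSO finds an optimum of g
        found.append(np.asarray(x_best))  # 2. the result is stored
        g = deflate(f, found, lam)        # 3. that minimum is removed
    return found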

The second is the mechanism of ``function stretching'' (introduced by Vrahatis in 1996), which is proposed both to deal with the multiple global optima of multimodal functions and with generic local-minima problems. This is a two-stage transformation that, for minimization problems, can be performed by the following two equations:

G(x) = f(x) + \gamma_1 \, \frac{\lVert x - \bar{x} \rVert \, (\operatorname{sign}(f(x) - f(\bar{x})) + 1)}{2}    (2)

H(x) = G(x) + \gamma_2 \, \frac{\operatorname{sign}(f(x) - f(\bar{x})) + 1}{2 \tanh(\mu \, (G(x) - G(\bar{x})))}    (3)

where $\gamma_1$, $\gamma_2$, and $\mu$ are arbitrarily chosen positive constants, and $\operatorname{sign}$ is the sign function. The first equation elevates the function so that local minima above $f(\bar{x})$ are eliminated; the second assigns higher values to the neighborhood of the point $\bar{x}$.
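As an illustration, Eqs. (2)-(3) translate directly into a few lines of Python. This is a sketch: the helper name `stretch` and the default constants ($\gamma_1 = 10^4$, $\gamma_2 = 1$, $\mu = 10^{-10}$) are assumptions, chosen as values often reported in the stretching literature, since the equations admit any positive constants.

import numpy as np

def stretch(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Two-stage stretching of f around a found minimizer x_bar (Eqs. 2-3).

    The default constants are an assumption (values commonly used in
    the literature); any positive choice is admissible.
    """
    x_bar = np.asarray(x_bar)
    f_bar = f(x_bar)

    def G(x):
        # Eq. (2): sign(.) + 1 vanishes where f(x) < f(x_bar), so only
        # points above the located minimum are elevated.
        s = np.sign(f(x) - f_bar) + 1.0
        return f(x) + gamma1 * np.linalg.norm(np.asarray(x) - x_bar) * s / 2.0

    def H(x):
        # Eq. (3): stretches the neighborhood of x_bar further upward.
        # Undefined at x_bar itself (tanh(0) = 0), which is harmless in
        # practice since the swarm is being repelled from that point.
        s = np.sign(f(x) - f_bar) + 1.0
        return G(x) + gamma2 * s / (2.0 * np.tanh(mu * (G(x) - G(x_bar))))

    return H

Note that points whose fitness lies below $f(\bar{x})$ have $\operatorname{sign}(\cdot) + 1 = 0$ in both stages, so $H(x) = f(x)$ there: any deeper, still undiscovered minima survive the transformation unchanged.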

In this proposal, the swarm is stopped when it has converged to a solution whose fitness is ``good enough'' (that is, numerically close to the optimum fitness for the problem). The best solution found by the swarm is used as a potential global optimum, but a new swarm is used to perform a local search around that point, while the original swarm continues finding new optima using the ``deflated'' or ``stretched'' fitness function.

In [5], a niching PSO (NichePSO) is proposed. In this algorithm, multiple subswarms are created from the initial swarm to locate each of the optima in the test functions. In a way similar to the one proposed in [3], a particle is selected to start a search in its neighborhood, but a measure of the particle's fitness variance over time is used as the criterion for this selection. Particles from a subswarm can be absorbed into a different subswarm, and whole subswarms can be merged when they are likely to converge to the same solution.

In [6], a species-based PSO is used to find all the optima simultaneously, regardless of whether they are local or global. The swarm is partitioned into species, each led by a given individual (called the ``seed''). Partitioning and species-seed selection are performed at each iteration of the algorithm: species are formed according to the Euclidean distance between particles, and the seed is selected as the best particle in the species. This particle serves as the lbest for the other particles in the same species. The results are better than those of NichePSO on the same test functions.

Bibliography

1. J. Kennedy and W. Spears. Matching algorithms to problems: an experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1998), pages 74-77, 1998.
2. S.C. Esquivel and C.A. Coello Coello. On the use of particle swarm optimization with multimodal functions. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2003), pages 1130-1136, 2003.
3. K.E. Parsopoulos and M.N. Vrahatis. Modification of the particle swarm optimizer for locating all the global minima. In Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA 2001), pages 324-327, 2001.
4. K.E. Parsopoulos and M.N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3):235-306, 2002.
5. R. Brits, A.P. Engelbrecht, and F. van den Bergh. A niching particle swarm optimizer. In Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAL 2002), pages 692-696, 2002.
6. X. Li. Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization. Lecture Notes in Computer Science, 3102:105-116, 2004.