The Best-so-far Selection in Artificial Bee Colony Algorithm

Anan Banharnsakun, Tiranee Achalakul, Booncharoen Sirinaovakul *
Department of Computer Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand

Applied Soft Computing 11 (2011) 2888–2901. doi:10.1016/j.asoc.2010.11.025

Article history: Received 15 February 2010; received in revised form 31 May 2010; accepted 28 November 2010; available online 4 December 2010.

Keywords: Artificial Bee Colony; Swarm intelligence; Optimization; Image registration; Mutual information

* Corresponding author. E-mail addresses: anan [email protected] (A. Banharnsakun), [email protected] (T. Achalakul), [email protected] (B. Sirinaovakul).

Abstract

The Artificial Bee Colony (ABC) algorithm is inspired by the behavior of honey bees. The algorithm is one of the Swarm Intelligence algorithms explored in recent literature. ABC is an optimization technique used to find the best solution from all feasible solutions. However, ABC can sometimes be slow to converge. In order to improve the algorithm's performance, we present in this paper a modified method for the solution update of the onlooker bees. In our method, the best feasible solutions found so far are shared globally among the entire population. Thus, the new candidate solutions are more likely to be close to the current best solution. In other words, we bias the solution direction toward the best-so-far position. Moreover, in each iteration, we adjust the radius of the search for new candidates, using a larger radius earlier in the search process and then reducing the radius as the process comes closer to converging. Finally, we use a more robust calculation to determine and compare the quality of alternative solutions. We empirically assess the performance of our proposed method on two sets of problems: numerical benchmark functions and image registration applications. The results demonstrate that the proposed method is able to produce higher quality solutions with faster convergence than either the original ABC or the current state-of-the-art ABC-based algorithm.

1. Introduction

Swarm Intelligence is a meta-heuristic method in the field of artificial intelligence that is used to solve optimization problems. It is based on the collective behavior of social insects, flocks of birds, or schools of fish. These animals can solve complex tasks without centralized control.

Researchers have analyzed such behaviors and designed algorithms that can be used to solve combinatorial and numerical optimization problems in many science and engineering domains. Previous research [1–4] has shown that algorithms based on Swarm Intelligence have great potential. The algorithms that have emerged in recent years include Ant Colony Optimization (ACO) [5], based on the foraging behavior of ants, and Particle Swarm Optimization (PSO) [6], based on the behaviors of bird flocks and fish schools.

Exploration and exploitation are the important mechanisms in a robust search process. While exploration is the independent search for an optimal solution, exploitation uses existing knowledge to bias the search. In recent years, a few algorithms based on bee foraging behavior have been developed to improve both exploration and exploitation for solving numerical optimization problems.

The Artificial Bee Colony (ABC) algorithm introduced by D. Karaboga [7] is one approach that has been used to find an optimal solution in numerical optimization problems. This algorithm is inspired by the behavior of honey bees when seeking a quality food source. The performance of the ABC algorithm has been compared with other optimization methods such as the Genetic Algorithm (GA), the Differential Evolution algorithm (DE), Evolution Strategies (ES), Particle Swarm Optimization, and the Particle Swarm Inspired Evolutionary Algorithm (PS-EA) [8–10]. The comparisons were made on various numerical benchmark functions, consisting of unimodal and multimodal distributions. The comparison results showed that ABC can produce a more optimal solution and thus is more effective than the other methods in several optimization problems [11–13].

Yang [14] introduced an algorithm called the Virtual Bee Algorithm (VBA) for solving engineering optimizations that have multi-peaked functions. In the VBA algorithm, the objectives or optimization functions are encoded as virtual foods. Virtual bees are used to search for virtual foods in the search space. The position of each virtual bee is updated via the virtual pheromone from the neighboring bees. The food with the largest number of virtual bees, or intensity of visiting bees, corresponds to the optimal solution. However, the VBA algorithm was only tested using two-dimensional functions.

An optimization algorithm inspired by honey bee foraging behavior based on the elite bee method was proposed by Sundareswaran [15]. The bee whose solution is the best possible solution in each simulation iteration is considered to be the elite bee.


A probabilistic approach is used to control the movement of the other bees, so the majority of bees will follow the elite bee's direction while a few bees may fly in other directions. This approach improves the capability of convergence to a global optimum.

To improve the exploration and exploitation of the foraging behavior of honey bees for numerical function optimization, Akbari et al. [16] presented an algorithm called Bee Swarm Optimization (BSO). In this method, the bees of the swarm are sorted according to the fitness values of the most recently visited food sources, and these sorted bees are divided into three types. The bees that have the worst fitness are classified as scout bees, while the rest of the bees are divided equally into experienced foragers and onlookers. Different flying patterns were introduced for each type of bee to balance exploration and exploitation in this algorithm.

The experimental results from these algorithms show that algorithms based on bee foraging behavior can successfully solve numerical optimization problems. However, in some cases the convergence speed can be an issue. This paper introduces a modified version of ABC in order to improve the algorithm's performance.

First, we test our modified algorithm using the same benchmarks as the previously cited research, including the original ABC and the BSO algorithm. Then, we apply our method to optimize the mutual information value in order to measure similarity in an image registration process. We compare the registration results between the original ABC and our best-so-far method.

The paper is organized as follows. Section 2 describes the original ABC algorithm. Section 3 presents our best-so-far method. Section 4 describes the automated image registration problem. Section 5 presents the experiments. Section 6 compares and discusses the performance results of the best-so-far method with the original method and the BSO algorithm. Finally, Section 7 offers our conclusions.

2. The original ABC algorithm

The ABC algorithm assumes the existence of a set of computational agents called honey bees. The honey bees in this algorithm are categorized into three groups: employed bees, onlooker bees, and scout bees. The colony is equally separated into employed bees and onlooker bees. Each solution in the search space consists of a set of optimization parameters which represent a food source position.


The number of employed bees is equal to the number of food sources. In other words, there is only one employed bee per food source. The quality of a food source is called its "fitness value" and is associated with its position. The process of bees seeking good food sources is the process used to find the optimal solution.

The employed bees are responsible for investigating their food sources and sharing the information about these food sources to recruit the onlooker bees. The onlooker bees make a decision to choose a food source based on this information. A food source of higher quality has a larger chance of being selected by the onlooker bees than one of lower quality. An employed bee whose food source is rejected as low quality by the employed and onlooker bees will change to a scout bee and search randomly for new food sources. Through this mechanism, exploitation is handled by the employed and onlooker bees, while exploration is maintained by the scout bees. The details of the algorithm are as follows.

First, randomly distributed initial food source positions are generated. The process can be represented by Eq. (2.1). After initialization, the population is subjected to repeated cycles of three major steps: updating feasible solutions, selecting feasible solutions, and avoiding suboptimal solutions.

F(x_i), x_i ∈ R^D, i ∈ {1, 2, 3, ..., SN} (2.1)

where x_i is the position of a food source as a D-dimensional vector, F(x_i) is the objective function which determines how good a solution is, and SN is the number of food sources.


Table 1. Numerical benchmark functions.

Function name | Function | Characteristic | Ranges | Global minimum
Sphere | f1(x) = Σ_{i=1}^{D} x_i^2 | Uni-modal, Separable | −100 ≤ x_i ≤ 100 | 0
Griewank | f2(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1 | Multi-modal, Non-Separable | −600 ≤ x_i ≤ 600 | 0
Rastrigin | f3(x) = Σ_{i=1}^{D} (x_i^2 − 10 cos(2πx_i) + 10) | Multi-modal, Separable | −5.12 ≤ x_i ≤ 5.12 | 0
Rosenbrock | f4(x) = Σ_{i=1}^{D−1} [100(x_i^2 − x_{i+1})^2 + (1 − x_i)^2] | Uni-modal, Non-Separable | −30 ≤ x_i ≤ 30 | 0
Ackley | f5(x) = 20 + e − 20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) | Multi-modal, Non-Separable | −30 ≤ x_i ≤ 30 | 0
Schaffer | f6(x) = 0.5 + ((sin √(x_1^2 + x_2^2))^2 − 0.5) / (1 + 0.001(x_1^2 + x_2^2))^2 | Multi-modal, Non-Separable | −100 ≤ x_i ≤ 100 | 0

In order to update feasible solutions, all employed bees select a new candidate food source position. The choice is based on the neighborhood of the previously selected food source. The position of the new food source is calculated from the equation below:

v_ij = x_ij + φ_ij(x_ij − x_kj) (2.2)

In Eq. (2.2), v_ij is a new feasible solution that is modified from its previous solution value (x_ij) based on a comparison with a randomly selected position from its neighboring solutions (x_kj). φ_ij is a random number between [−1, 1] which is used to randomly adjust the old solution to become a new solution in the next iteration. k ∈ {1, 2, 3, ..., SN} and j ∈ {1, 2, 3, ..., D} are randomly chosen indexes. The difference between x_ij and x_kj is a difference of position in a particular dimension. The algorithm changes each position in only one dimension in each iteration. Using this approach, the diversity of solutions in the search space will increase in each iteration.

The old food source position in the employed bee's memory will be replaced by the new candidate food source position if the new position has a better fitness value. Employed bees return to their hive and share the fitness values of their new food sources with the onlooker bees.

In the next step, each onlooker bee selects one of the proposed food sources depending on the fitness value obtained from the employed bees. The probability that a food source will be selected can be obtained from the equation below:

P_i = fit_i / Σ_{n=1}^{SN} fit_n (2.3)

where fit_i is the fitness value of food source i, which is related to the objective function value (F(x_i)) of food source i.

The probability of a food source being selected by the onlooker bees increases as the fitness value of the food source increases. After the food source is selected, onlooker bees go to the selected food source and select a new candidate food source position in the neighborhood of the selected food source. The new candidate food source can be expressed and calculated by Eq. (2.2).

In the third step, any food source position that does not improve the fitness value will be abandoned and replaced by a new position that is randomly determined by a scout bee. This helps avoid suboptimal solutions. The new random position chosen by the scout bee is calculated from the equation below:

x_ij = x_j^min + rand[0, 1] × (x_j^max − x_j^min) (2.4)

where x_j^min is the lower bound of the food source position in dimension j and x_j^max is the upper bound of the food source position in dimension j.

The maximum number of cycles (MCN) is used to control the number of iterations and is a termination criterion. The process is repeated until the output of the objective function reaches a defined threshold value or the number of iterations equals the MCN. The defined threshold value is set to be equal to the global minimum value or the global maximum value, depending on the type of the optimization problem.

Although the activities of exploitation and exploration are well balanced and help to mitigate both stagnation and premature convergence in the ABC algorithm, the convergence speed is still a major issue in some situations.

In our best-so-far ABC, the exploitation process focuses on the best-so-far food source. This food source is used in the comparison process for updating the new candidate food source in order to accelerate the convergence speed. The searching ability of the exploration process is also enhanced by the scout bee, which randomly generates a new food source to avoid local optima.

3. The best-so-far ABC algorithm

To enhance the exploitation and exploration processes, we propose three major changes: the best-so-far method, an adjustable search radius, and an objective-value-based comparison method. The computational complexity of the best-so-far ABC is also analyzed and compared with that of the original ABC and the BSO algorithms.

The pseudo-code of the best-so-far ABC algorithm is illustrated in Fig. 1. In this pseudo-code, the best-so-far method appears at line 37, where the onlooker bees register the best food source position selected so far within the new candidate generation function. The adjustable search radius is applied by a scout bee at line 50, and the objective-value-based comparison method is used at lines 14, 27, 39, and 52. The full details of each change are described in Sections 3.1–3.3.

Fig. 1. The pseudo-code of the best-so-far ABC algorithm.

Note that the pseudo-code in Fig. 1 is used to find the minimum objective value. To find the maximum objective value, the condition used to compare the old solution and the new solution at lines 14, 27, 39, and 52 must be changed from

f(x_new(*)) < f(x_old(*))

to

f(x_new(*)) > f(x_old(*))

where f(x(*)) is the objective function value.

Table 2. Results obtained in the ABC1 experiment. For each algorithm, the columns give the convergence iteration, runtime (s), mean of objective values, and SD.

Function | Dim. | Max iter. | ABC: Conv. iter. | Runtime (s) | Mean | SD | Best-so-far ABC: Conv. iter. | Runtime (s) | Mean | SD
Sphere | 10 | 500 | 500 | 0.234 | 1.426E−16 | 8.212E−17 | 500 | 0.281 | 3.303E−188 | 3.299E−187
Sphere | 20 | 750 | 750 | 0.593 | 2.353E−12 | 2.198E−12 | 750 | 0.704 | 1.118E−239 | 1.109E−238
Sphere | 30 | 1000 | 1000 | 1.109 | 2.649E−10 | 2.136E−10 | 1000 | 1.546 | 5.6413E−302 | 3.967E−301
Griewank | 10 | 500 | 500 | 0.453 | 1.040E−03 | 2.741E−03 | 500 | 0.500 | 2.157E−214 | 2.092E−213
Griewank | 20 | 750 | 750 | 1.203 | 4.734E−09 | 2.177E−08 | 750 | 1.344 | 2.920E−302 | 2.801E−300
Griewank | 30 | 1000 | 1000 | 2.344 | 5.369E−09 | 2.664E−08 | 845 | 2.672 | 0 | 0
Rastrigin | 10 | 500 | 500 | 0.344 | 4.911E−16 | 1.073E−15 | 500 | 0.391 | 1.414E−210 | 1.410E−209
Rastrigin | 20 | 750 | 750 | 0.922 | 2.933E−11 | 5.728E−11 | 750 | 0.985 | 3.567E−300 | 3.569E−299
Rastrigin | 30 | 1000 | 1000 | 1.782 | 3.593E−07 | 3.565E−06 | 863 | 1.953 | 0 | 0
Rosenbrock | 10 | 500 | 500 | 0.469 | 0.07022 | 0.09035 | 462 | 0.465 | 0 | 0
Rosenbrock | 20 | 750 | 750 | 1.375 | 0.427111 | 0.611375 | 750 | 1.484 | 7.198E−22 | 4.941E−21
Rosenbrock | 30 | 1000 | 1000 | 2.704 | 0.681802 | 0.774655 | 1000 | 2.969 | 3.604E−15 | 3.585E−14
Ackley | 10 | 500 | 500 | 0.390 | 1.400E−10 | 7.657E−11 | 63 | 0.047 | 0 | 0
Ackley | 20 | 750 | 750 | 0.968 | 3.681E−07 | 1.302E−07 | 85 | 0.094 | 0 | 0
Ackley | 30 | 1000 | 1000 | 1.782 | 5.246E−06 | 1.702E−06 | 109 | 0.234 | 0 | 0
Schaffer | 2 | 2000 | 2000 | 0.672 | 7.895E−11 | 4.014E−10 | 947 | 0.468 | 0 | 0

3.1. The best-so-far method

In order to improve the efficiency of the onlooker bees, we propose to modify the parameters that are used to calculate new candidate food sources. In the original algorithm, each onlooker bee selects a food source based on a probability that varies according to the fitness function explored by a single employed bee. The new candidate solutions are then generated by updating the onlooker solutions as shown in Eq. (2.2). In our best-so-far method (pseudo-code lines 20 to 32), all onlooker bees use the information from all employed bees to make a decision on a new candidate food source. Thus, the onlookers can compare information from all candidate sources and are able to select the best-so-far position.

On the assumption that the best-so-far position will lead to the optimal solution, we bias the solution direction (pseudo-code lines 33 to 46) by using the best-so-far solution to update the new candidate solutions of the onlooker bees. We then accelerate the convergence speed by using the fitness value of the best-so-far food source as the multiplier of the error correction in this solution update. The values in all dimensions of each food source are also updated in each iteration in order to increase the diversity of feasible solutions in the search space.

The new method used to calculate a candidate food source is shown in Eq. (3.1):

v_id = x_ij + φ f_b (x_ij − x_bj) (3.1)

where v_id is the new candidate food source for onlooker bee position i in dimension d, d = 1, 2, 3, ..., D; x_ij is the selected food source position i in a selected dimension j; φ is a random number between −1 and 1; f_b is the fitness value of the best-so-far food source; and x_bj is the best-so-far food source in the selected dimension j.

This change should make the best-so-far algorithm converge more quickly, because the solution is biased toward the best-so-far solution.
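As an illustration, one literal reading of Eq. (3.1) could be sketched in Python as below. This is our own minimal rendering under the description above (all names are ours, and the handling of the selected dimension j is our interpretation), not the authors' code: every dimension d of the candidate is regenerated from a randomly selected dimension j, pulled toward the best-so-far position and scaled by its fitness value.

import random

def best_so_far_candidate(x, best, f_best):
    """Eq. (3.1), one reading: each dimension d of the new candidate is
    derived from a randomly selected dimension j of the current solution x,
    biased toward the best-so-far food source `best` and scaled by f_best."""
    D = len(x)
    v = [0.0] * D
    for d in range(D):                    # all dimensions are updated
        j = random.randrange(D)           # randomly selected dimension j
        phi = random.uniform(-1.0, 1.0)
        v[d] = x[j] + phi * f_best * (x[j] - best[j])
    return v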

3.2. The adjustable search radius

Although our best-so-far method can increase the local search ability compared to the original ABC algorithm, the solution can easily become entrapped in a local optimum. In order to resolve this issue, we introduce a global search ability for the scout bee.

Whenever the solution stagnates in a local optimum (pseudo-code lines 47 to 59), the scout bee randomly generates a new food source by using Eq. (3.2):

v_ij = x_ij + φ_ij [ω_max − (iteration/MCN)(ω_max − ω_min)] x_ij (3.2)

where v_ij is a new feasible solution of a scout bee that is modified from the current position of an abandoned food source (x_ij), and φ_ij is a random number between [−1, 1]. ω_max and ω_min represent the maximum and minimum percentages of the position adjustment for the scout bee, and are fixed to 1 and 0.2, respectively. These parameters were chosen by the experimenter. With these selected values, the adjustment of the scout bee's position based on its current position will decrease linearly from 100 percent to 20 percent over each experiment round.

Based on the assumption that the solution of the scout bee will be far from the optimal solution in the first iteration and will converge closely to the optimal solution in later iterations, Eq. (3.2) dynamically adjusts the position of the scout bee by allowing a scout bee in the first iteration to wander with a wider step size in the search space. As the number of iterations increases, the step size for the wandering of a scout bee decreases.

Table 3. Results obtained in the ABC2 experiment. For ABC and the best-so-far ABC, the columns give the convergence iteration, runtime (s), mean of objective values, and SD; for BSO, the mean and SD are given.

Function | Dim. | Max iter. | ABC: Conv. iter. | Runtime (s) | Mean | SD | BSO: Mean | SD | Best-so-far ABC: Conv. iter. | Runtime (s) | Mean | SD
Sphere | 10 | 5000 | 5000 | 2.390 | 9.472E−17 | 1.615E−17 | 8.475E−123 | 3.953E−122 | 833 | 0.578 | 0 | 0
Sphere | 20 | 7500 | 7500 | 5.859 | 2.627E−16 | 8.796E−17 | 2.361E−115 | 7.348E−115 | 1003 | 1.234 | 0 | 0
Sphere | 30 | 10000 | 10000 | 10.922 | 3.217E−16 | 5.142E−17 | 4.728E−102 | 1.372E−101 | 1115 | 2.141 | 0 | 0
Griewank | 10 | 5000 | 5000 | 4.437 | 7.396E−05 | 7.396E−04 | 3.823E−46 | 6.679E−46 | 754 | 0.797 | 0 | 0
Griewank | 20 | 7500 | 7500 | 12.037 | 2.054E−18 | 4.227E−19 | 8.402E−47 | 3.928E−46 | 828 | 1.735 | 0 | 0
Griewank | 30 | 10000 | 10000 | 23.359 | 4.638E−18 | 7.744E−19 | 4.161E−47 | 7.523E−47 | 845 | 2.672 | 0 | 0
Rastrigin | 10 | 5000 | 5000 | 3.453 | 9.934E−18 | 2.992E−18 | 4.171E−64 | 7.834E−64 | 767 | 0.672 | 0 | 0
Rastrigin | 20 | 7500 | 7500 | 9.172 | 1.928E−17 | 3.579E−18 | 7.899E−61 | 2.215E−60 | 824 | 1.313 | 0 | 0
Rastrigin | 30 | 10000 | 10000 | 17.406 | 2.962E−17 | 4.618E−18 | 6.228E−59 | 8.496E−59 | 863 | 1.953 | 0 | 0
Rosenbrock | 10 | 5000 | 5000 | 4.672 | 0.00490144 | 0.0061382 | 3.617E−07 | 5.081E−05 | 462 | 0.465 | 0 | 0
Rosenbrock | 20 | 7500 | 7500 | 13.453 | 0.00541073 | 0.0090507 | 9.201E−07 | 1.102E−05 | 1191 | 2.469 | 0 | 0
Rosenbrock | 30 | 10000 | 10000 | 26.625 | 0.00771188 | 0.0151573 | 5.865E−06 | 8.519E−05 | 2169 | 6.469 | 0 | 0
Ackley | 10 | 5000 | 5000 | 3.906 | 4.763E−15 | 1.636E−15 | 7.105E−19 | 5.482E−18 | 63 | 0.047 | 0 | 0
Ackley | 20 | 7500 | 7500 | 9.657 | 1.546E−14 | 2.183E−15 | 2.131E−19 | 4.603E−18 | 85 | 0.094 | 0 | 0
Ackley | 30 | 10000 | 10000 | 17.860 | 2.789E−14 | 2.843E−15 | 1.460E−18 | 8.130E−17 | 109 | 0.234 | 0 | 0
Schaffer | 2 | 2000 | 2000 | 0.672 | 7.895E−11 | 4.014E−10 | 1.069E−49 | 3.170E−49 | 947 | 0.468 | 0 | 0

From pseudo-code lines 7 to 19, the employed bees thus still maintain diversity in producing new food sources (Eq. (2.2)). Using both the local and the global search methods, the convergence speed increases and solutions can be globally optimized more quickly.
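For illustration, the scout bee's adjustable-radius step of Eq. (3.2) might look like the following Python sketch (our own naming, assuming ω_max = 1 and ω_min = 0.2 as stated above).

import random

def scout_update(x, iteration, mcn, w_max=1.0, w_min=0.2):
    """Eq. (3.2): re-seed an abandoned solution x with a perturbation whose
    size shrinks linearly from 100% to 20% of the position over the run."""
    scale = w_max - (iteration / mcn) * (w_max - w_min)
    return [xj + random.uniform(-1.0, 1.0) * scale * xj for xj in x]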

3.3. Objective-value-based comparison method

In this work, we also focus on the method that is used to compare and select between the old solution and the new solution in each iteration. Basically, the comparison of the new solution and the old solution is done by the fitness value: if the fitness of the new solution is better than the fitness of the old solution, we select the new one and discard the old one. For finding the minimum objective value, the fitness value is obtained from the following equation:

Fitness(f(x)) = 1 / (1 + f(x)) if f(x) ≥ 0; Fitness(f(x)) = 1 + |f(x)| if f(x) < 0 (3.3)

Based on Eq. (3.3), we can see that when f(x) is larger than zero but has a very small value, e.g. 1E−20, the fitness value 1/(1 + 1E−20) is rounded up to 1 (the 1E−20 is lost to floating-point precision). This leads the fitness of all solutions to become equal to 1 in the later iterations. In other words, there is no difference between the fitness values 1/(1 + 1E−20) and 1/(1 + 1E−120). Thus, a new solution that gives a better fitness value than the old solution will be ignored and the search will stagnate at the old solution. In order to solve this issue, we directly use the objective value of the function for the comparison and selection of the better solution. The pseudo-code of the new comparison process is shown in Fig. 2.

Fig. 2. The comparison between the old and the new method for comparing the solutions.
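The precision problem is easy to reproduce. The following snippet (ours, for illustration) shows that the fitness of Eq. (3.3) cannot distinguish two solutions once their objective values drop below machine epsilon, while comparing the objective values directly still can.

def fitness(f):
    """Eq. (3.3), for minimization."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

f_old, f_new = 1e-20, 1e-120    # the new solution is vastly better
print(fitness(f_new) > fitness(f_old))  # False: both fitnesses round to exactly 1.0
print(f_new < f_old)                    # True: objective-value comparison still works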

3.4. Computational complexity

We can evaluate the computational complexity of the best-so-far ABC algorithm in comparison with the original ABC and the BSO algorithms. Let n be the total number of bees, d be the dimension of the solutions, and m be the maximum number of iterations. The best-so-far ABC algorithm must spend the time necessary to find the best-so-far solution in each iteration, so it uses computational time equal to (n/2)d + m(3nd). This is larger than the computational time of the original ABC algorithm, which is equal to (n/2)d + m((3/2)nd + d). The BSO algorithm must spend time sorting all bees based on their fitness in each iteration, so its computational time is equal to nd + m(n + nd log n + (5/2)nd). However, when we focus on the upper bound of the complexity, both the best-so-far ABC and the original ABC algorithms use computational time O(mnd), while the BSO algorithm uses O(mnd log n), which is the highest complexity.

Although the computational complexity of the best-so-far ABC is higher than that of the original ABC at the same maximum iteration, the best-so-far ABC should find the optimal solution before it reaches the maximum iteration. Thus, the actual computation time of the best-so-far ABC may be reduced relative to the original ABC.

4. Using best-so-far ABC in image registration

We applied our best-so-far method to applications in the image registration domain. The following subsections provide a brief background on image registration and the adaptation of our best-so-far method to this problem.

4.1. Background on image registration

Image registration is a process used to align two or more images of a scene. It plays an important role in many research fields such as computer vision, remote sensing, and medical imaging [17,18]. Usually a target image must be transformed to be geometrically coincident with a reference image. To do this, corresponding features must be identified in the two images, or else the global intensity distributions must be compared. A similarity or fitness measure [19] is also necessary. The registration process searches for the transformation from the target to the reference feature geometry that maximizes this similarity measure.

The global optimization of the similarity measurement is the key to automating the registration process. There are two common approaches to measuring similarity: geometric distance and statistical similarity. Hessian and Hausdorff geometric distances have been used to measure distance in computer vision research [20]. These methods measure the nearest neighbor of rank k. Their weakness is that the presence of outliers may affect the accuracy. Mismatch [21] was then introduced: a measure that uses a Gaussian distribution as a weight function in order to be tolerant of outliers.

Least Square Error is a method that statistically determines the smallest deviation or distance between two data sets, such as the reference and target images. The drawback of this measure is its sensitivity to the presence of outliers. Cross Correlation measures the similarity of two images by convolving them. Unfortunately, this measure is also not robust in the presence of noise or outliers. Currently, most researchers in this field work with Mutual Information (MI) as a measure of similarity. It is quite similar to correlation except that the concept of MI stems from conditional probability. It is a measure of the presence of the information from one set in another set, and it is now becoming a standard measure of similarity in image registration.

In this paper, we use a statistical similarity approach in which Mutual Information [22–24] is used to measure the similarity of the two images. An optimization solution in the mutual information approach can be expressed mathematically as follows:

α* = argmax_α MI(I_R, I_T) (4.1)

where α is the set of transformation parameters, MI is the objective function, I_R is the reference image, and I_T is the target image.

The mutual information of I_R and I_T can be expressed in terms of Shannon's entropy as follows:

MI(I_R, I_T) = H(I_R) + H(I_T) − H(I_R, I_T) (4.2)

where H(I_R) and H(I_T) are the Shannon entropies of the two images, and H(I_R, I_T) is their joint entropy. Shannon's entropy can be defined as:

H(I_R) = −Σ_a p_R(a) log(p_R(a)) (4.3)

H(I_T) = −Σ_b p_T(b) log(p_T(b)) (4.4)

H(I_R, I_T) = −Σ_a Σ_b p_R,T(a, b) log(p_R,T(a, b)) (4.5)

where p_R(a) and p_T(b) are the marginal probability mass functions of the reference and the target image, respectively, and p_R,T(a, b) is the joint probability mass function of the two images.

The joint histogram of the image pair can be used to calculate these probability mass functions:

p_R,T(a, b) = h(a, b) / Σ_a Σ_b h(a, b) (4.6)

p_R(a) = Σ_b p_R,T(a, b) (4.7)

p_T(b) = Σ_a p_R,T(a, b) (4.8)

where h(a, b) is the number of corresponding pixel pairs having intensity value a in the reference image and intensity value b in the target image.
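To illustrate Eqs. (4.2)–(4.8), the following NumPy sketch (our own illustrative code, not the authors' implementation) computes MI from the joint histogram of two equally sized grayscale images.

import numpy as np

def mutual_information(ref, tgt, bins=256):
    """MI(I_R, I_T) = H(I_R) + H(I_T) - H(I_R, I_T), per Eqs. (4.2)-(4.8)."""
    # Joint histogram h(a, b) over corresponding pixel intensities
    h, _, _ = np.histogram2d(ref.ravel(), tgt.ravel(), bins=bins)
    p_rt = h / h.sum()              # Eq. (4.6): joint pmf
    p_r = p_rt.sum(axis=1)          # Eq. (4.7): marginal pmf of the reference
    p_t = p_rt.sum(axis=0)          # Eq. (4.8): marginal pmf of the target

    def entropy(p):
        p = p[p > 0]                # skip empty bins (0 log 0 is taken as 0)
        return -np.sum(p * np.log(p))

    return entropy(p_r) + entropy(p_t) - entropy(p_rt.ravel())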

Various types of transformations can be applied in image registration. In our work, we used rigid transformations involving rotation and translation along the x-axis and y-axis. The transformation of images can be written as:

x′ = x_c + (x − x_c) cos θ + (y − y_c) sin θ + Δx (4.9)

y′ = y_c − (x − x_c) sin θ + (y − y_c) cos θ + Δy (4.10)

where x′ and y′ are the new coordinates, x and y are the current coordinates, x_c and y_c are the center coordinates of the target image, Δx and Δy are the displacements of the target image, and θ is the rotation angle of the target image around its center.
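Eqs. (4.9) and (4.10) map a single coordinate as in the sketch below (ours, for illustration, using the sign convention written above); a full registration pipeline would apply the inverse mapping with re-sampling, which is omitted here.

import math

def rigid_transform(x, y, xc, yc, theta, dx, dy):
    """Eqs. (4.9)-(4.10): rotate (x, y) about the image center (xc, yc)
    by angle theta and translate by (dx, dy)."""
    xr, yr = x - xc, y - yc
    x_new = xc + xr * math.cos(theta) + yr * math.sin(theta) + dx
    y_new = yc - xr * math.sin(theta) + yr * math.cos(theta) + dy
    return x_new, y_new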

4.2. Optimization solution with best-so-far ABC

We applied the best-so-far ABC method to the registration problem described above. The goal is to find a global optimum of the similarity measure. In other words, we try to find the transformation variables that maximize the mutual information value of the images. The adopted algorithm is illustrated in Fig. 3.

Fig. 3. Automated image registration based on the best-so-far ABC method.

In Fig. 3, initial solutions consisting of a rotation value and x-axis and y-axis translation values are generated. These parameter sets are treated as the food sources for the employed bees. Each solution is used to transform the target image by re-sampling. Then we calculate the mutual information score between the transformed target and the reference image. The onlooker bees then select the solutions that produce higher mutual information and update those solutions based on our best-so-far method. The process is repeated until the mutual information reaches a threshold value or the number of iterations equals the MCN. Solutions that cannot improve the mutual information within a certain period are abandoned, and new solutions are regenerated by the scout bee.

Eq. (2.2) is used by the employed bees in order to update variables and improve the solution quality in each iteration. However, due to the independence among the transformation parameters in the image registration application, Eq. (3.1), which uses a randomly chosen dimension, is slightly adjusted: a fixed dimension is used instead, as shown in Eq. (4.11).

v_id = x_id + φ f_b (x_id − x_bd) (4.11)

In our algorithm, Eq. (4.11) is introduced to improve the solutions in each iteration. When the solutions cannot be further improved, Eq. (2.2) is used to calculate the new candidate solutions. Moreover, Eq. (3.2) is used by the scout bees to randomly generate new solutions when the mutual information cannot be improved.

5. Experiments

We validated the performance of the proposed technique against the original technique in two sets of experiments: numerical benchmark and image registration applications. The following subsections describe the experimental methodology.

5.1. Numerical applications

The aim of this experiment is to compare the performance of the search process of the best-so-far method with the original algorithm and with Bee Swarm Optimization, the state-of-the-art bee-behavior-based algorithm for numerical optimization proposed by R. Akbari [16]. The unimodal and multimodal benchmark functions shown in Table 1 were used in this experiment. The objective of the search process is to find the solutions that produce the minimum output value of these benchmark functions; in other words, the aim is to minimize f_i(x) in Table 1. In order to make a performance comparison, we implemented the original ABC based on the algorithm given in [25] as well as our best-so-far method. Our original ABC code was also validated by comparing its benchmark results with the previous work reported in [10]. Since we could not obtain the source code of the BSO algorithm, we only use the results reported in [16] to compare with our algorithm. As a result, the BSO algorithm is not included in some comparison experiments.
The experiments were conducted in a similar fashion as described in [10,16]. The numbers of employed and onlooker bees were set to 100. The values of ω_max and ω_min were set to 1 and 0.2, respectively. Each of the experiments was repeated 100 times with different random seeds. All the experiments in this paper were run on the same hardware (Intel Core 2 Quad with a 2.4 GHz CPU and 4 GB memory). The number of iterations to convergence, the runtime (execution time), and the mean and standard deviation of the output values of the benchmark functions were recorded. Note that lower mean objective values indicate better solutions.

In order to investigate the effect of the iteration increment on the output quality, we divided the experiments into two sets, called ABC1 and ABC2. The maximum numbers of iterations proposed in [10,16] were also used in our work. In the ABC1 experiment, for all benchmark functions with the exception of the Schaffer function, the maximum numbers of iterations were 500, 750, and 1000 for dimensions of 10, 20, and 30, respectively. In ABC2, the maximum numbers of iterations were 5000, 7500, and 10,000 for dimensions of 10, 20, and 30, respectively. For the Schaffer function in both the ABC1 and ABC2 experiments, the maximum number of iterations was set to 2000. Note that one iteration of our method requires a longer execution time than the original method, because the values in all dimensions of each candidate food source of the onlooker bees are updated in every iteration and the algorithm spends time finding the best-so-far solution in each iteration. However, the number of iterations to convergence is smaller. Thus, we consider both the number of iterations and the runtime when evaluating convergence speed.

5.2. Image registration application

In our image registration experiments, we once again compare both the solution quality and the execution performance of our best-so-far and the original ABC implementation. The objective function of the image registration application is Mutual Information (MI). MI is used to measure the similarity of the two input images after registration; thus, the higher the MI value, the more accurate the registration process.

In our experiments we used four sample image pairs (Figs. 5–8). For each pair the experiments were repeated 30 times. The numbers of employed and onlooker bees used were 50, and the maximum number of iterations was 200. The search space of each variable for the rigid transformation process, in terms of rotation degree (θ) and translation along the x-axis (x) and y-axis (y), was defined as shown below. The ranges of these parameters were chosen based on a visual estimate of the maximum amount of change required.

Image Pair | (θ) | (x) | (y)
I | −25° ≤ θ ≤ 25° | −20 ≤ x ≤ 20 | −20 ≤ y ≤ 20
II | 0° ≤ θ ≤ 50° | −210 ≤ x ≤ 210 | −160 ≤ y ≤ 160
III | −1° ≤ θ ≤ 1° | −10 ≤ x ≤ 10 | −50 ≤ y ≤ 0
IV | −90° ≤ θ ≤ 90° | −256 ≤ x ≤ 256 | −256 ≤ y ≤ 256

Fig. 5. Image Pair I: Brain image for registration.
Fig. 6. Image Pair II: Dentistry image for registration.
Fig. 7. Image Pair III: Satellite image for registration.
Fig. 8. Image Pair IV: Cameraman image for registration.

We tested the algorithms with two cases. In the first case, we used a controlled initial solution with fixed initial values of θ, x, and y. We offset all the initial values from the lower bounds displayed in the table above. However, the initial values may accidentally be fixed close to the optimum solution. Thus, in the second test case, we used randomly generated initial values.

In both test cases we investigated the solution quality of the two algorithms when the runtime is approximately the same, as well as studying the algorithm performance when the solution quality is approximately the same. In other words, we would like to know which algorithm uses less computation time to create a solution of equivalent quality.

6. Results and discussion

6.1. Numerical results and discussion

The results obtained with the original and the best-so-far method on the numerical benchmark functions are shown in Table 2. Note that there is no result from the BSO algorithm to compare against in the ABC1 experiment.

The "Convergence Iteration" column shows the number of iterations needed for each algorithm to converge towards the optimal solution (not exceeding the maximum number of iterations indicated in Section 5.1). The "Runtime" column is the average runtime needed to reach the convergence state across one hundred runs. The "Mean" column is the average output value of the benchmark function, and the "SD" column shows the standard deviation of the results. Mean and SD values smaller than 1E−304 are reported as 0.

In Table 2, we compare the results at approximately the same runtime value. The best-so-far method gives better solutions than the original algorithm on all benchmark functions. In particular, the global minimum values of the Ackley and Schaffer functions were found by the best-so-far method in the ABC1 experiment.

The results from the ABC1 experiment showed that several benchmark functions did not converge within the specified number of iterations. The maximum number of iterations was therefore increased in the ABC2 experiment. The results are shown in Table 3.

In the ABC2 experiment (Table 3), when the number of iterations was increased, the solution quality of the original ABC also improved, as shown in the mean of objective values column. However, our best-so-far method was still able to generate a better solution on all the benchmark functions. The average improvement in runtime for the best-so-far method compared with the original ABC was 82%, and the maximum and minimum runtime improvements were 99% and 30%, for the Ackley and Schaffer functions, respectively.

For several benchmark functions, the original ABC never reached the mean value achieved by our best-so-far method even when the runtime was increased.

The results from the BSO method reported in [16] were also compared with the best-so-far method in the ABC2 experiment. The results show that the mean value of the best-so-far method is better than that of the BSO method at the same number of iterations on all benchmark functions.

Fig. 4 shows the comparison of the convergence speed, in terms of the number of iterations, between the original and the best-so-far method. Notice that the best-so-far method converges substantially faster, with a much smaller number of iterations needed.

Fig. 4. Iterations to convergence for original and best-so-far algorithms when dimension = 30 (dimension of Schaffer function = 2).

Note that there are no published data on the convergence speed of the BSO algorithm, so the comparison with the BSO algorithm is not included here.

The rate of convergence is used to indicate the speed at which a converging sequence approaches its limit. A smaller value of the rate of convergence implies that the solution converges to an optimal value more quickly. Suppose that the sequence {x_k} converges to the number L; the rate of convergence can then be calculated as follows:

rate of convergence = lim_{k→∞} |x_{k+1} − L| / |x_k − L| (6.1)

Table 4. Rate of convergence in the ABC2 experiment.

Function | Dimensions | ABC | Best-so-far ABC
f1 Sphere | 10 | 0.9265 | 0.5856
f1 Sphere | 20 | 0.9942 | 0.6691
f1 Sphere | 30 | 0.9954 | 0.7124
f2 Griewank | 10 | 0.9966 | 0.5358
f2 Griewank | 20 | 0.9890 | 0.6032
f2 Griewank | 30 | 0.9908 | 0.5829
f3 Rastrigin | 10 | 0.9270 | 0.5622
f3 Rastrigin | 20 | 0.9662 | 0.6073
f3 Rastrigin | 30 | 0.9787 | 0.6246
f4 Rosenbrock | 10 | 0.9965 | 0.8705
f4 Rosenbrock | 20 | 0.9972 | 0.9492
f4 Rosenbrock | 30 | 0.9978 | 0.9722
f5 Ackley | 10 | 0.9928 | 0.5683
f5 Ackley | 20 | 0.9948 | 0.6791
f5 Ackley | 30 | 0.9964 | 0.7445
f6 Schaffer | 2 | 0.9907 | 0.6849


The results in Table 4 show that both the ABC and the best-so-far ABC converge linearly to the limit (the global minimum). However, when we compare the rates of convergence, we find that the best-so-far ABC gives better results than the ABC on all benchmark functions, especially on the Griewank and Rastrigin functions.

In summary, the results of the numerical experiments indicate that the best-so-far ABC method converges to the optimal solution more quickly on almost all benchmark functions when compared with the original ABC method. The results also show that the best-so-far ABC method produces better solutions than the BSO algorithm, the state-of-the-art algorithm. In other words, fewer iterations were needed to converge and thus less computational time was required. Notice that the SD is relatively low, which implies consistency among experimental runs.

6.2. Image registration results and discussion

In this experiment, we used both the original and the best-so-far method to register four sets of images. The two methods produced results that appeared indistinguishable to human eyes (the registered images are shown in Figs. 5–8). However, an analysis of the MI values for the registered images suggests that the best-so-far method offers advantages: as discussed next, it produces better results faster.

First, we discuss the results of the experiments with fixed initial solutions. Table 5 and Fig. 9 show the results for the case where the runtime of both methods is approximately the same. In Table 6, the means of the MI values are fixed to be approximately the same.

Fig. 9. The mean of the mutual information across iterations in Image Pairs I–IV based on fixed initial solutions.

In Table 5, we compare the results at approximately the same runtime value. The best-so-far method gives slightly better solutions than the original algorithm for all sample image pairs, with somewhat faster convergence. Table 6 shows the convergence speed at approximately the same mean value of MI. The results indicate that the best-so-far method converged to an optimal solution more quickly than the original method for all image pairs. The maximum runtime improvement of 84% was found in the experiment with image pair II, and the minimum of 39% with image pair I. The average runtime improvement across all image pairs was 70%.

Table 5. The results obtained from the original ABC and best-so-far ABC method based on fixed initial solutions at approximately the same runtime value.

Image pair | Image size (H × W) | ABC: Conv. iter. | Runtime (s) | Mean (MI) | SD (MI) | Best-so-far ABC: Conv. iter. | Runtime (s) | Mean (MI) | SD (MI)
I | 253 × 255 | 186 | 687.699 | 1.25875 | 9.55E−05 | 179 | 684.106 | 1.25883 | 8.46E−05
II | 210 × 160 | 199 | 413.281 | 0.552284 | 3.31E−02 | 200 | 411.116 | 0.581292 | 1.76E−02
III | 299 × 176 | 200 | 636.353 | 0.957611 | 9.28E−04 | 196 | 635.344 | 0.959218 | 2.37E−04
IV | 256 × 256 | 200 | 723.533 | 1.3449 | 3.00E−02 | 188 | 719.947 | 1.36046 | 7.57E−05

Table 6. The results obtained from the original ABC and best-so-far ABC method based on fixed initial solutions at approximately the same mean MI value.

Image pair | Image size (H × W) | ABC: Conv. iter. | Runtime (s) | Mean (MI) | SD (MI) | Best-so-far ABC: Conv. iter. | Runtime (s) | Mean (MI) | SD (MI)
I | 253 × 255 | 186 | 687.699 | 1.25875 | 9.55E−05 | 108 | 412.756 | 1.25875 | 8.53E−05
II | 210 × 160 | 199 | 413.281 | 0.552284 | 3.31E−02 | 32 | 65.778 | 0.553137 | 1.81E−02
III | 299 × 176 | 200 | 636.353 | 0.957611 | 9.28E−04 | 44 | 142.628 | 0.957646 | 3.14E−04
IV | 256 × 256 | 200 | 723.533 | 1.3449 | 3.00E−02 | 37 | 141.691 | 1.34501 | 7.66E−05

For the random initial solution experiments, in some cases, if the initial solution is close to the optimal solution, the first few iterations of the original algorithm generate better results. However, as the number of iterations increases, the best-so-far method adjusts the solutions and converges to the optimal solutions more quickly. The results of these experiments are shown in Tables 7 and 8 and Fig. 10.

Fig. 10. The mean of the mutual information in Image Pairs I–IV based on random initial solutions.

In the random initial solutions experiment, although the MI values from the original method are equivalent to the MI values from the best-so-far method at approximately the same runtime value, as shown in Table 7, the results in Table 8 show that the best-so-far solutions still converge to an optimal solution faster than the original algorithm when the approximate mean value is the same. The maximum and minimum percentages of runtime improvement, found in the experiments with image pair III and image pair I, were equal to 81% and 58%, respectively. The average runtime improvement for the best-so-far method compared with the original method across all image pair experiments in this case was 68%.
Page 13: The Best-so-far Selection in Artificial Bee Colony Algorithm

2900 A. Banharnsakun et al. / Applied Soft Computing 11 (2011) 2888–2901

Table 7The results obtained from the original ABC and best-so-far ABC method based on random initial solutions by using approximately the same runtime value.

Image pair Image size (H × W) ABC Best-so-far ABC

Convergence Iteration Runtime (s) Mean MI SD MI Convergence Iteration Runtime (s) Mean MI SD MI

I 253 × 255 189 714.005 1.25867 9.53E-05 196 712.789 1.25878 1.13E−04II 210 × 160 199 400.781 0.580997 2.89E−03 187 399.604 0.58439 3.26E−04III 299 × 176 199 641.630 0.958047 8.09E−04 200 629.072 0.959175 6.32E−04IV 256 × 256 199 759.200 1.35800 5.38E−03 200 758.391 1.36049 4.54E−05

Table 8The results obtained from the original ABC and best-so-far ABC method based on random initial solutions by using approximately the same mean MI value.

Image pair Image size (H × W) ABC Best-so-far ABC

Convergence Iteration Runtime (s) Mean MI SD MI Convergence Iteration Runtime (s) Mean MI SD MI

86709978047800

fetm

7

Annbsbcbc

tstip

patoiarmioLfecpa

A

tP

[

[

[

[

[

[

[

[

[

[

I 253 × 255 189 714.005 1.25II 210 × 160 199 400.781 0.58III 299 × 176 199 641.630 0.95IV 256 × 256 199 759.200 1.35

ound on the experiment of image pair III and image pair I, werequal to 81% and 58%, respectively. The average improvement onhe runtime for the best-so-far method compared with the original

odel on all experiments of image pairs in this case was 68%.

. Conclusion

In this paper, a best-so-far method for solution updates in thertificial Bee Colony algorithm was proposed. We have replaced theeighboring solutions-based approach with the best-so-far tech-ique in order to increase the local search ability of the onlookerees. The searching method based on a dynamic adjustment ofearch range depending on the iteration was introduced for scoutees. The comparison and the selection of the new solution werehanged from a fitness-based comparison to an objective-value-ased comparison that can help to resolve round up issues in theomputation of the floating point “goodness” value.

In our best-so-far method, onlooker bees compare the informa-ion from all employed bees to select the best-so-far candidate foodource. This will bias the solution handled by onlooker bees towardshe optimal solution. Moreover, if the solution appears to stagnaten a local optimum, the scout bee can randomly generate a newosition in order to maintain the diversity of new food sources.

The performance of the best-so-far ABC method was then com-ared with the original ABC algorithm and the BSO algorithm usingset of benchmark functions. The computational complexity and

he rate of convergence on both best-so-far ABC method and theriginal ABC method were addressed. The results from the exper-ments provide evidence that the best-so-far ABC outperformsll the mentioned algorithms in both quality and convergenceate. We further applied the best-so-far method to optimize theutual information in an image registration application. Several

mage pairs were used in the experiments. The results showed thatur algorithm can arrive at the convergence state more quickly.astly, the mutual information value produced by the best-so-ar method is higher implying that the registration quality is alsonhanced. Thus, we can conclude that our best-so-far ABC is effi-ient from both the perspective of solution quality and algorithmerformance. The algorithm can serve as an alternative in severalpplication domains in the future.

cknowledgments

This work is supported by the Thailand Research Fundhrough the Royal Golden Jubilee Ph.D. Program under Grant No.HD/0038/2552.

[

[

9.53E−05 82 298.207 1.25867 1.17E−042.89E−03 65 138.899 0.581029 3.44E−048.09E−04 38 119.524 0.958108 6.56E−045.38E−03 59 223.725 1.35838 4.62E−05

References

[1] M.H. Aghdam, N. Ghasem-Aghaee, M.E. Basiri, Text feature selection using ant colony optimization, Expert Systems with Applications: An International Journal 36 (2009) 6843–6853.
[2] B. Yagmahan, M.M. Yenisey, Ant colony optimization for multi-objective flow shop scheduling problem, Computers & Industrial Engineering 54 (2008) 411–420.
[3] R.E. Perez, K. Behdinan, Particle swarm approach for structural design optimization, Computers and Structures 85 (2007) 1579–1588.
[4] P.Y. Yin, Particle swarm optimization for point pattern matching, Journal of Visual Communication and Image Representation 17 (2006) 143–162.
[5] M. Dorigo, T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, MA, USA, 2004.
[6] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[7] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, Turkey, 2005.
[8] D. Karaboga, B. Akay, A comparative study of Artificial Bee Colony algorithm, Applied Mathematics and Computation 214 (2009) 108–132.
[9] D. Karaboga, B. Basturk, On the performance of Artificial Bee Colony (ABC) algorithm, Applied Soft Computing 8 (2008) 687–697.
[10] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (2007) 459–471.
[11] A. Singh, An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Applied Soft Computing 9 (2009) 625–631.
[12] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Computers and Structures 87 (2009) 861–870.
[13] N. Karaboga, A new design method based on artificial bee colony algorithm for digital IIR filters, Journal of the Franklin Institute 346 (2009) 328–348.
[14] X.S. Yang, Engineering optimizations via nature-inspired virtual bee algorithms, in: Lecture Notes in Computer Science, Springer, 2005, pp. 317–323.
[15] K. Sundareswaran, V.T. Sreedevi, Development of novel optimization procedure based on honey bee foraging behavior, in: IEEE International Conference on Systems, Man and Cybernetics, 2008, pp. 1220–1225.
[16] R. Akbari, A. Mohammadi, K. Ziarati, A novel bee swarm optimization algorithm for numerical function optimization, Communications in Nonlinear Science and Numerical Simulation 15 (2010) 3142–3155.
[17] H. Chen, P. Varshney, M. Arora, Mutual information based image registration for remote sensing data, International Journal of Remote Sensing 24 (2003) 3701–3706.
[18] W. Jacquet, E. Nyssen, P. Bottenberg, B. Truyen, P. de Groen, 2D image registration using focused mutual information for application in dentistry, Computers in Biology and Medicine 39 (2009) 545–553.
[19] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, John Wiley & Sons, New Jersey, USA, 2005.
[20] D.M. Mount, N.S. Netanyahu, J. LeMoigne, Efficient algorithms for robust point pattern matching and applications to image registration, Pattern Recognition 32 (1999) 17–38.
[21] S. Ratanasanya, D.M. Mount, N.S. Netanyahu, T. Achalakul, Enhancement in robust feature matching, ECTI-CON, 2008, pp. 505–508.
[22] T.M. Cover, J.A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, USA, 2006.
[23] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, P. Suetens, Multimodality image registration by maximization of mutual information, IEEE Transactions on Medical Imaging 16 (1997) 187–198.
[24] P. Viola, W.M. Wells, Alignment by maximization of mutual information, International Journal of Computer Vision 24 (1997) 137–154.
[25] D. Karaboga, Artificial Bee Colony (ABC) Algorithm Homepage [Online]. Available: http://mf.erciyes.edu.tr/abc [2009].