
A. Gelbukh and A.F. Kuri Morales (Eds.): MICAI 2007, LNAI 4827, pp. 570–579, 2007. © Springer-Verlag Berlin Heidelberg 2007

Using Ant Colony Optimization and Self-organizing Map for Image Segmentation

Sara Saatchi and Chih-Cheng Hung

School of Computing and Software Engineering Southern Polytechnic State University

1100 South Marietta Parkway Marietta, GA 30060 USA

{ssaatchi, chung}@spsu.edu

Abstract. In this study, ant colony optimization (ACO) is integrated with the self-organizing map (SOM) for image segmentation. A comparative study with the combination of ACO and simple competitive learning (SCL) is provided. ACO follows a learning mechanism through pheromone updates. In addition, pheromone and heuristic information are normalized, and the effects on the results are investigated in this report. Preliminary experimental results indicate that normalizing these parameters can improve the image segmentation results.

1 Introduction

Image segmentation plays an essential role in the interpretation of various kinds of images. Image segmentation techniques can be grouped into several categories such as edge-based segmentation, region-oriented segmentation, histogram thresholding, and clustering algorithms [1]. The aim of clustering algorithms is to aggregate data into groups such that data in each group share similar features while the clusters remain distinct from each other. A problematic issue in image segmentation is detecting objects whose pixels do not share similar spectral features. Therefore, an image segmentation procedure that relies merely on the spectral features of an image is not always adequate. To overcome this problem, the spatial information of the data pixels should be considered in addition to the spectral information.

There are a number of optimization techniques inspired by the behavior of natural systems [2], as well as other techniques [3]. Swarm intelligence has been introduced in the literature as an optimization technique [4]. The ACO algorithm was first introduced and fully implemented in [4] on the traveling salesman problem (TSP), which can be stated as finding the shortest closed path through a given set of nodes that passes each node exactly once. The ACO algorithm we focus on is based on a sequence of local moves with a probabilistic decision guided by a parameter, called pheromone, toward the objective solution. Some algorithms follow this general ACO procedure without necessarily following all of its aspects; we informally refer to these as ant-based algorithms or simply ant algorithms. There are several ant-based approaches to clustering which are based on the stochastic behavior of ants in piling up objects [5, 6, 7, 8, 9].


The Self-Organizing Map (SOM) is a one-layer neural network model. It was first introduced by Kohonen [10]. The SOM is a prominent unsupervised neural network model providing a topology-preserving mapping from a high-dimensional input space to a two-dimensional feature map. Each neuron in the SOM is connected to its neighboring neurons, which distinguishes it from the simple competitive learning neural network model. The SOM has been used for dimension reduction and clustering in the literature [11].

Competitive learning, introduced in [12], is an interesting and powerful learning principle. It has been applied to many unsupervised learning problems. Simple competitive learning (SCL) is one of the several competitive learning algorithms proposed in the literature. It shows stability in data clustering applications over different run trials, but this stable result is not always the global optimum. In fact, in some cases SCL converges to local optima over all run trials, and the learning rate needs to be adjusted in the course of experimentation so that the global optimum can be reached. We integrated the simple competitive learning algorithm with the ACO algorithm in [13]. In this paper we study the integration of the self-organizing map algorithm with the ACO algorithm and compare the results with the ACO-SCL algorithm. The reason is that this unsupervised neural network model has been widely used for data clustering [11]. Although the SOM and SCL models are similar, it is well known that only a small number of neurons might be involved in the learning process for the SCL. Our purpose is to compare the impact of the ACO algorithm on these two models and their differences. Also, in this paper, the pheromone and heuristic information in both ACO-SCL and ACO-SOM were normalized, and the effect of this normalization on the SCL and SOM algorithms was investigated.

2 Ant Colony Optimization (ACO)

The ACO heuristic is inspired by the observation of real ant colonies' foraging behavior and the fact that ants can often find the shortest path when searching for food. This is achieved through a chemical substance called pheromone, deposited and accumulated by passing ants on their way to the food. In its search, an ant uses both its own knowledge of where the smell of the food comes from (which we call heuristic information) and the other ants' decisions about the path toward the food (pheromone information). After the ant decides its own path, it confirms the path by depositing its own pheromone, making the pheromone trail denser and more likely to be chosen by other ants. This is a learning mechanism ants possess in addition to their own recognition of the path. As a result of this consultation of the behavior already shown by other ants in searching for food and returning to the nest, the best path, which is the shortest, is marked from the nest toward the food.

In the literature [4], it was reported that experiments show that when ants have two or more fixed paths of the same length available from nest to food, they eventually concentrate on one of the paths, and when the available paths differ in length they often concentrate on the shortest one. This is shown in Figure 1: when an obstacle is placed on the established path of the ants, they first wander around the obstacle randomly. The ants taking a shorter path reach the food and return to the nest more quickly. As time passes, the shorter path is reinforced by pheromone and eventually becomes the preferred path of the ants.


Fig. 1. Ants find the shortest path around an obstacle as a result of pheromone concentration

ACO uses this learning mechanism for optimization. Furthermore, in the ACO algorithm, the pheromone level is updated based on the best solution obtained by a number of ants. The amount of pheromone deposited by a successful ant is defined to be proportional to the quality of the solution it produces. For real ants, the best solution is the shortest path, which is marked with a strong pheromone trail. In the shortest-path problem using the ACO algorithm, the amount of pheromone deposited is inversely proportional to the length of the path. For a given problem the pheromone can be set to be proportional to any criterion of the desired solution. In our clustering method we introduced criteria which include the similarity of data in each cluster and the distinction and compactness of the clusters.

3 Self-organizing Map

The SOM [10] has a topology similar to that of the simple competitive learning (SCL) model, except that it makes use of a neighborhood. The SOM consists of one layer of output nodes, each of which is connected to every input node. Each output node includes a weight vector with the same dimensionality as the input nodes. Each dimension of the weight vector is assumed to be a connection from the corresponding node to the corresponding dimension of the input node.

The algorithm starts by randomly initializing all the weights corresponding to the output nodes. A sample set of inputs is used for training. Each training input is compared with the weight vector of each node in the layer, and the node with the closest distance is selected as the best matching unit (BMU), or winner. The weight vectors of the BMU's neighboring nodes are then updated such that the nodes closer to the BMU change more; the weights of nodes at the boundary of the neighborhood window are barely changed. In general, the weight W(t+1) is updated by the learning formula given below:

W(t+1) = W(t) + θ(t) L(t) (V(t) − W(t))    (1)

where L is the learning rate which decays with time, V(t) is an input data vector and t refers to iteration time. The parameter L(t) can be defined as:


L(t) = L0 exp(−t / λ)    (2)

where L0 denotes the learning rate at time t0, λ denotes a time constant, t refers to the iteration, and θ defines the relation between the distance of the nodes from the BMU and the influence on their learning. The parameter θ is defined as:

θ(t) = exp(−dist^2 / (2 σ^2(t)))    (3)

where dist is the distance of a node from the BMU and σ is the radius of the neighborhood. The radius of the neighborhood shrinks over time, which causes θ to decay as well. The parameter σ can be shrunk according to the following formula:

σ(t) = σ0 exp(−t / λ)    (4)

where σ0 denotes the radius of the neighborhood at time t0, λ denotes a time constant and t refers to iteration.
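The update rules in Eqs. 1–4 can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the grid shape and the default constants `L0`, `sigma0`, and `lam` (λ) are illustrative assumptions:

```python
import numpy as np

def som_update(W, v, bmu, t, L0=0.5, sigma0=2.0, lam=100.0):
    """One SOM training step (Eqs. 1-4). W: (N, N, d) weight grid,
    v: d-dim input vector, bmu: (row, col) of the best matching unit."""
    L = L0 * np.exp(-t / lam)             # Eq. 2: decaying learning rate
    sigma = sigma0 * np.exp(-t / lam)     # Eq. 4: shrinking neighborhood radius
    rows, cols = np.indices(W.shape[:2])
    dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    theta = np.exp(-dist2 / (2.0 * sigma ** 2))   # Eq. 3: neighborhood influence
    # Eq. 1: nodes nearer the BMU move more toward the input v
    W += theta[..., None] * L * (v - W)
    return W

def bmu_index(W, v):
    """Index of the node whose weight vector is closest to v."""
    d = np.linalg.norm(W - v, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

Note that θ is computed for every node here; in practice updates outside the neighborhood window are negligible because θ decays exponentially with distance.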

4 The ACO and Simple Competitive Learning

Simple competitive learning is sometimes called a 0-neighbor Kohonen algorithm. It can be considered a special case of the SOM where the size of the neighborhood window is reduced to one. The topology of the simple competitive learning algorithm can be represented as a one-layered output neural net. Each input node is connected to each output node. The number of input nodes is determined by the dimension of the training patterns. Unlike the output nodes in Kohonen's feature map, there is no particular geometrical relationship between the output nodes in simple competitive learning. In the following development, a 2-D one-layered output neural net will be used.

The algorithm ACO-SCL is described as follows. Let L denote the dimension of the input vectors, which for us is the number of spectral images (bands). We assume that a 2-D (N × N) output layer is defined for the algorithm, where N is chosen so that the expected number of classes is less than or equal to N^2. Here the weights of the nodes contain cluster center values.

Step 1: Initialize the number of clusters to K and the number of ants to m. Initialize the pheromone level assigned to each pixel to 1 so that it has no effect on the probability calculation in the first iteration.

Step 2: Initialize m sets of K different random cluster centers to be used by the m ants.

Step 3: For each ant, assign each pixel Xn to one of the clusters i, randomly, with the probability distribution Pi(Xn) given in:


Pi(Xn) = [τi(Xn)]^α [ηi(Xn)]^β / Σ_{j=0}^{K} [τj(Xn)]^α [ηj(Xn)]^β    (5)

where Pi(Xn) is the probability of choosing pixel Xn for cluster i, τi(Xn) and ηi(Xn) are the pheromone and heuristic information assigned to pixel Xn in cluster i respectively, α and β are constant parameters that determine the relative influence of the pheromone and heuristic information, and K is the number of clusters. The heuristic information ηi(Xn) is obtained from:

ηi(Xn) = κ / (CDist(Xn, CCi) · PDist(Xn, PCi))    (6)

where Xn is the nth pixel, CCi is the ith spectral cluster center, and PCi is the ith spatial cluster center. CDist(Xn, CCi) is the spectral Euclidean distance between Xn and CCi, and PDist(Xn, PCi) is the spatial Euclidean distance between Xn and PCi. The constant κ is used to balance the value of η with τ.
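As a sketch, the heuristic of Eq. 6 and the assignment probability of Eq. 5 might be computed as below. This is an illustration, not the authors' code: the small epsilon guarding division by zero is our addition, and the default values of α, β, and κ follow those the paper reports choosing in Section 6:

```python
import numpy as np

def heuristic(x, pos, CC, PC, kappa=1000.0):
    """Eq. 6: eta_i(X_n) = kappa / (CDist * PDist) for every cluster i.
    x: spectral vector of the pixel, pos: its (row, col) coordinates,
    CC: (K, d) spectral centers, PC: (K, 2) spatial centers."""
    cdist = np.linalg.norm(CC - x, axis=1)     # spectral Euclidean distance
    pdist = np.linalg.norm(PC - pos, axis=1)   # spatial Euclidean distance
    return kappa / (cdist * pdist + 1e-12)     # epsilon: our guard, not in the paper

def assignment_probs(tau, eta, alpha=2.0, beta=5.0):
    """Eq. 5: probability of assigning the pixel to each of the K clusters."""
    w = (tau ** alpha) * (eta ** beta)
    return w / w.sum()
```

A pixel close to a cluster center both spectrally and spatially thus receives a large η and, all else equal, a high assignment probability for that cluster.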

Step 4: For each input pixel, the center of the cluster to which the pixel belongs is considered the BMU. Both spectral and spatial cluster centers are updated using:

Ci(t+1) ← Ci(t) + Δ(t) (x(t) − Ci(t)),   i = 1, …, L    (7)

where Δ(t) is a monotonically slowly decreasing function of t with values between 0 and 1.

Step 5: Save the best solution among the m solutions found. Our criteria for the best solution include the similarity of data in each cluster and the distinction and compactness of the clusters [14].

Step 6: Update the pheromone level on all pixels according to the best solution. The pheromone value is updated according to Eq. 8:

τi(Xn) ← (1 − ρ) τi(Xn) + Δτi(Xn)    (8)

where ρ is the evaporation factor (0 ≤ ρ < 1), which causes the earlier pheromones to vanish over the iterations.

Δτi(Xn) in (8) is the amount of pheromone added to the previous pheromone by the successful ant, which is obtained from:

Δτi(Xn) = Q · Min(k′) / (AvgCDist(k′, i) · AvgPDist(k′, i))   if Xn is a member of cluster i,
Δτi(Xn) = 0   otherwise.    (9)

In (9), Q is a positive constant related to the quantity of pheromone added by the ants, Min(k′) is the maximum of the minimum distances between every two cluster centers obtained by ant k′, AvgCDist(k′, i) is the average of the spectral Euclidean distances within cluster i, and AvgPDist(k′, i) is the average of the spatial Euclidean distances between all pixels in cluster i and their cluster center, obtained by ant


k′. Min(k′) causes the pheromone to become larger when the clusters are farther apart, and hence raises the probability.
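A hedged sketch of the pheromone update in Eqs. 8–9 follows. The `best` dictionary is an assumed container for the best ant's Min(k′), AvgCDist, and AvgPDist statistics, and the epsilon is our guard against division by zero; neither appears in the paper:

```python
import numpy as np

def pheromone_update(tau, labels, best, rho=0.8, Q=10.0):
    """Eqs. 8-9: evaporate, then let the best ant deposit pheromone.
    tau: (n_pixels, K) pheromone matrix, labels: best ant's cluster
    label per pixel, best: dict with the best solution's statistics."""
    tau *= (1.0 - rho)                       # Eq. 8: evaporation
    for i in range(tau.shape[1]):
        # Eq. 9: deposit is large for well-separated, compact clusters
        dtau = Q * best["min_center_dist"] / (
            best["avg_cdist"][i] * best["avg_pdist"][i] + 1e-12)
        tau[labels == i, i] += dtau          # only members of cluster i reinforced
    return tau
```

The defaults ρ = 0.8 and Q = 10 follow the values the paper reports choosing in Section 6.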

Step 7: Assign the cluster center values of the best clustering solution to the cluster centers of all ants.

Step 8: If the termination criterion is satisfied, go to the next step. Otherwise, go to Step 3.

Step 9: Output the optimal solution.

5 The ACO and Self-organizing Map

The integrated algorithm ACO-SOM is described as follows. Let L denote the dimension of the input vectors, which for us is the number of spectral images (bands). We assume that a 2-D (N × N) output layer is defined for the algorithm, where N is chosen so that the expected number of classes is less than or equal to N^2. Here the weights of the nodes contain cluster center values.

Step 1: Initialize the number of clusters to K and the number of ants to m. Initialize the pheromone level assigned to each pixel to 1 so that it has no effect on the probability calculation in the first iteration.

Step 2: Initialize m sets of K different random cluster centers to be used by the m ants.

Step 3: For each ant, assign each pixel Xn to one of the clusters i, randomly, with the probability distribution Pi(Xn) given in Eq. 5.

Step 4: For each input pixel, the center of the cluster to which the pixel belongs is considered the BMU. Both spectral and spatial cluster centers are updated using Eq. 7. All other cluster center nodes within the neighborhood window of the BMU are updated according to:

Ci(t+1) ← Ci(t) + θ(t) Δ(t) (x(t) − Ci(t)),   i = 1, …, L    (10)

where Δ(t) is a monotonically slowly decreasing function of t and its value is between 0 and 1. The parameter θ is obtained from Eq. 3 where dist is the distance between neighboring cluster centers and the best matching cluster center.
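Eq. 10 combines the neighborhood influence θ of Eq. 3 with the decreasing learning rate Δ(t). A minimal sketch follows, with illustrative constants; here Δ(t) is taken as the form suggested in Section 6 (Eq. 11), and the grid shape and defaults are our assumptions:

```python
import numpy as np

def aco_som_center_update(C, x, bmu, t, sigma0=2.0, lam=100.0, r=2.0):
    """Eq. 10: move every center toward the pixel x, weighted by its
    grid distance to the best matching center. C: (N, N, d) center grid."""
    delta = 0.2 / (r * t + 1.0)           # Eq. 11: decreasing learning rate
    sigma = sigma0 * np.exp(-t / lam)     # Eq. 4: shrinking radius
    rows, cols = np.indices(C.shape[:2])
    dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    theta = np.exp(-dist2 / (2.0 * sigma ** 2))   # Eq. 3
    C += theta[..., None] * delta * (x - C)
    return C
```

Compared with the ACO-SCL update of Eq. 7, the only difference is the θ factor, which spreads each update over the BMU's neighbors.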

Step 5: Save the best solution among the m solutions found. Our criteria for the best solution include the similarity of data in each cluster and the distinction and compactness of the clusters [14].

Step 6: Update the pheromone level for all pixels according to the best solution. The pheromone value is updated according to Eq. 8.

Step 7: Assign the cluster center values of the best clustering solution to the cluster centers of all the ants.


Step 8: If the termination criterion is satisfied, go to the next step. Otherwise, go to Step 3.

Step 9: Output the optimal solution.

5.1 Normalized Pheromone and Heuristic Information

The pheromone and heuristic parameters were normalized and the effect of this normalization was investigated. Normalization of the pheromone assigned to a pixel in a cluster, τi(Xn), is performed such that the sum of the pheromone over all clusters equals one, i.e., Σ_{i=0}^{c} τi(Xn) = 1, where c is the total number of clusters. The same is done for the heuristic information ηi(Xn).

Normalization of the parameters prevents the values from getting too large, so that only the relative influence of the parameters assigned to the clusters is considered.
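The per-pixel normalization described above amounts to dividing each row of the pheromone (or heuristic) matrix by its row sum. A one-line NumPy sketch, under the assumption that the values are stored as an (n_pixels, K) matrix:

```python
import numpy as np

def normalize_per_pixel(M):
    """Scale pheromone (or heuristic) values so that, for each pixel,
    the values over all clusters sum to one (Section 5.1).
    M: (n_pixels, K) matrix of tau or eta values."""
    return M / M.sum(axis=1, keepdims=True)
```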

6 Simulation Results

For classification, the pixel-based minimum-distance classification algorithm was used in our experiments. Since SCL is very dependent on the learning rate, i.e. Δ(t) in Eqs. 7 and 10, we performed some experiments on Δ(t). Considering that Δ(t) is a monotonically slowly decreasing function of t with values between 0 and 1, we suggest the following formula:

Δ(t) = 0.2 / (r t + 1)    (11)

where t denotes the iteration and r is a rate constant determined in the experiments. The experiments were performed over 20 run trials on two different images, for r from 10 to 50 in increments of 10. The experiments showed that better results were obtained for r = 10. The experiment was therefore repeated for r from 1 to 10 in increments of 1, which showed that better results were obtained for r between 1 and 5. In our experiments for the images shown in Figures 2 and 3, r was set to 2.
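As a small sketch of Eq. 11, the learning rate starts at 0.2 for t = 0 and decays at a pace controlled by the rate constant r:

```python
def learning_rate(t, r=2):
    """Eq. 11: Delta(t) = 0.2 / (r*t + 1), a monotonically decreasing
    function of the iteration t with values in (0, 0.2]."""
    return 0.2 / (r * t + 1)
```

With the paper's choice r = 2, the rate halves roughly every half iteration early on, then flattens out.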

The ACO-SCL algorithm showed that it is dependent on the set of parameters. The parameters used in ACO-SCL, other than r, include κ, Q, ρ, α, and β. Parameters α, β and κ are used to keep the values of τ and η in the same order. Parameter Q controls the amount of pheromone added, and ρ eliminates the influence of the earlier added pheromone. The evaporation factor was set to ρ = 0.8. We performed a series of experiments on the remaining parameters. Parameters κ and Q showed little influence on the results, while α and β were more influential. The values tested were as follows: κ = 1000 and 10000, Q = 10 and 100, α = 0.1 to 50 in increments of 10, and β = 0.1 to 50 in increments of 10. Each experiment was done over 20 run trials on different images. The results were not satisfactory with β = 0.1 for the images tested. The results were good with α = 0.1 for the images tested, but unstable. There were some sets of


parameters that still did well for some of the images but not for the others. Knowing that α should be small while β should not be, we set up another experiment: κ = 1000 and 10000, Q = 10 and 100, α = 0.1 to 2 in increments of 0.1, and β = 50 to 5 in decrements of 5. All the results were acceptable but not all were stable, so in this experiment the stability of the results was examined. The experimental results show that β should not be very large, otherwise the algorithm becomes unstable. When β was chosen to be 5 and α was between 0.1 and 2, the results were more stable. From these sets of experiments the parameters were chosen as follows: r = 2, α = 2, β = 5, κ = 1000, and Q = 10. The number of ants was chosen to be m = 10.

Some of the experimental results are shown in Figures 2 and 3. The results on the Aurora image clearly show that ACO can improve SCL in cases where the SCL algorithm is trapped in local optima. In order to further investigate these algorithms, the experiments were repeated 10 times on the Aurora image. The number of run trials that led to

Table 1. Experimental results showing the number of runs that led to global results out of 10 different run trials. Experiments were executed on the Aurora image.

            Parameters Not Normalized    Parameters Normalized
ACO-SCL                 4                          5
ACO-SOM                 4                          7


Fig. 2. Experimental results for the river image compared across four algorithms: (a) original, (b) SCL, (c & d) ACO-SCL, (e & f) ACO-SCL with normalized parameters, (g, h, & i) ACO-SOM, and (j & k) ACO-SOM with normalized parameters. Results are obtained over 20 runs of each algorithm. Possible results on each run trial are shown.



Fig. 3. Experimental results for the aurora image compared across four algorithms: (a) original, (b) SCL, (c & d) ACO-SCL, (e & f) ACO-SCL with normalized parameters, (g, h, & i) ACO-SOM, and (j, k, & l) ACO-SOM with normalized parameters. Results are obtained over 20 runs of each algorithm. Possible results on each run trial are shown.

the global optima was then counted. The results are shown in Table 1. As can be seen, the normalization of parameters improved the SOM in finding the global optima.

7 Conclusion

In general, ACO can bring an advantage to algorithms that are unstable as a result of their dependency on random initialization. Although the simple competitive learning algorithm shows high stability, ACO can still contribute to the improvement of segmentation. The stable results of SCL are not always the global optima, and ACO can help SCL find them. The ACO-SCL algorithm uses the same parameter set and learning rate as SCL and recognizes the clusters where SCL fails to do so. This is advantageous, since for SCL to find the global optima the learning rate must be adjusted in the course of experimentation. Our observations on the ACO-SOM algorithm showed a similar effect. When the parameters were normalized, the number of runs that led to global results improved. The ACO-SOM algorithm with normalized parameters proved more likely to find the global solution than ACO-SCL and ACO-SOM without normalization, or the normalized ACO-SCL algorithm.


References

1. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Addison-Wesley, Reading (1992)

2. Pham, D.T., Karaboga, D.: Intelligent Optimization Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks. Springer, Heidelberg (2000)

3. Hung, C.C., Scheunders, P.M., Pham, M.-C.S., Coleman, T.: Using Intelligent Optimization Techniques in the K-means Algorithm for Multispectral Image Classification. Int. J. Fuzzy Syst. 6(3), 105–115 (2004)

4. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cyb. Part B 26, 29–41 (1996)

5. Yuqing, P., Xiangdan, H., Shang, L.: The K-means clustering Algorithm Based on Density and Ant Colony. In: International Conference on Neural Networks and Signal Processing, Nanjing, China, vol. 1, pp. 457–460 (2003)

6. Monmarché, N.: On data clustering with artificial ants. In: Freitas, A.A. (ed.) AAAI-1999 and GECCO-1999 Workshop on Data Mining with Evolutionary Algorithms: Research Directions, Orlando, Florida, pp. 23–26 (1999)

7. Monmarché, N., Slimane, M., Venturini, G.: AntClass: discovery of clusters in numeric data by a hybridization of an ant colony with the k-means algorithm. Internal Report no. 213, Laboratoire d'Informatique, E3i, Université de Tours, Tours, France (1999)

8. Bin, W., Yi, Z., Shaohui, L., Zhongzhi, S.: CSIM: A Document Clustering Algorithm Based on Swarm Intelligence. In: Congress on Evolutionary Computation, Honolulu, HI, vol. 1, pp. 477–482 (2002)

9. Kanade, P.M., Hall, L.O.: Fuzzy Ants as a Clustering Concept. In: Proceedings of the 22nd International Conference of the North American Fuzzy Information Processing Society, Chicago, IL, pp. 227–232 (2003)

10. Kohonen, T.: The Self-Organizing Map. Proceedings of the IEEE 78(9), 1464–1480 (1990)

11. Vesanto, J., Alhoniemi, E.: Clustering of the Self-Organizing Map. IEEE Transactions on Neural Networks 11(3) (2000)

12. Rumelhart, D.E., Zipser, D.: Feature discovery by competitive learning. In: McClelland, J.L., Rumelhart, D.E. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, pp. 151–193. MIT Press, Cambridge (1986)

13. Saatchi, S., Hung, C.C., Kuo, B.C.: A comparison of the improvement of k-means and simple competitive learning algorithms using ant colony optimization. In: 7th International Conference on Intelligent Technology, Taipei, Taiwan (2006)

14. Saatchi, S., Hung, C.C.: Hybridization of the Ant Colony Optimization with the K-means Algorithm for Clustering. In: Kalviainen, H., Parkkinen, J., Kaarna, A. (eds.) SCIA 2005. LNCS, vol. 3540, pp. 511–520. Springer, Heidelberg (2005)