
Chapter 3

A Comparison Study Between Two Hyperspectral Clustering Methods: KFCM and PSO-FCM

Amin Alizadeh Naeini, Saeid Niazmardi, Shahin Rahmatollahi Namin, Farhad Samadzadegan, and Saeid Homayouni

Abstract Thanks to its high spectral resolution, hyperspectral imagery has recently received considerable attention in various remote sensing applications. A fundamental step in processing these data is image segmentation through clustering. One of the most widely used clustering algorithms is fuzzy C-means (FCM). However, the presence of spectrally overlapping classes in remote sensing data, together with the intrinsic sensitivity of FCM to initial values and to complex nonlinear patterns, degrades the clustering results; this problem is even more severe for hyperspectral data. To overcome these problems, two FCM-based approaches are presented in this paper: clustering based on the integration of particle swarm optimization (PSO) and FCM (PSO-FCM), and clustering based on kernel-based FCM (KFCM). The objective is an evaluation study of hyperspectral clustering methods. Experiments on AVIRIS images taken over the Indian Pines test site in northwest Indiana show that PSO-FCM yields better performance than KFCM.

3.1 Introduction

Clustering is an unsupervised learning task that partitions data objects into a certain number of clusters such that data in the same cluster are similar to each other, while data in different clusters are dissimilar [1]. By this definition, clustering can be very useful in remote sensing data analysis, because it can reveal useful information about the structure of the dataset. One of the most widely used clustering algorithms is the fuzzy clustering algorithm.

A.A. Naeini (*) • S. Niazmardi • S. Rahmatollahi Namin • F. Samadzadegan • S. Homayouni
Department of Surveying Engineering, College of Engineering, University of Tehran, Tehran, Iran
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

A. Madureira et al., Computational Intelligence and Decision Making: Trends and Applications, Intelligent Systems, Control and Automation: Science and Engineering 61, DOI 10.1007/978-94-007-4722-7_3, © Springer Science+Business Media Dordrecht 2013


Fuzzy clustering algorithms aim to model fuzzy unsupervised patterns efficiently. One of the widely used fuzzy clustering algorithms is the fuzzy C-means (FCM) algorithm [2]. The FCM algorithm is based on an iterative optimization of a fuzzy objective function. Its main drawback, however, is that the results are highly sensitive to the selection of initial cluster centers, and it may converge to local optima. One way to address this problem is to use swarm-based methods such as PSO [3]. Moreover, another problem of FCM stems from the nature of remote sensing data, as the information classes in images usually overlap in the spatial and spectral domains [4]. By changing the characteristics of the classifier, one can overcome these problems and achieve better results. One of the newest solutions is to use kernel-based methods in clustering algorithms. This clustering technique is based on the FCM and kernel concepts. A kernel function transforms the input data space into a new, higher (possibly infinite) dimensional space through some nonlinear mapping, where, according to the well-known Cover's theorem [5], complex nonlinear problems in the original space are more likely to be linearly treatable and solvable. In recent years, some clustering techniques for remotely sensed image data have been proposed based on these two methods. In [6], after preprocessing of the data, the FCM clustering algorithm was optimized by a particle swarm algorithm (PSO-FCM) and then used for wetland extraction. In [7, 8], kernel-based fuzzy C-means clustering was used for clustering and recognition of multispectral remote sensing images.

Nonetheless, due to the high dimensionality of hyperspectral data, the stated problems are intensified: with respect to both the optimization and the nonlinear complexity of the problem, the number of local optima and the degree of nonlinearity increase. If the classifier is modified to handle these issues, more accurate results can be expected. This can be done using the aforementioned algorithms. However, no study has yet investigated the efficiency of the kernel method and of PSO in unsupervised classification of hyperspectral data.

The objective of this study is to compare two new FCM-based methods, one based on PSO and one on kernels, for hyperspectral image clustering. In addition, these two methods are compared with FCM and K-means.

This paper is organized as follows. In Sect. 3.2, a brief overview of clustering is given. The methodology, including the two clustering methods based on the PSO and kernel approaches, is described in Sect. 3.3. The results and discussion are presented in Sect. 3.4. Finally, the conclusion is given in Sect. 3.5.

3.2 Basic Concepts of Data Clustering

Clustering is the process of identifying natural groupings within the data, based on some similarity measure. Hence, similarity measures are fundamental components in most clustering algorithms. The most popular way to evaluate similarity is to use distance measures, of which the Euclidean distance is the most widely used [9].


Another important concept is the cluster validity index, which is used to evaluate clustering methods. Cluster validity indices can be categorized by three different criteria: internal, relative, and external. Indices based on internal criteria assess the fit between the structure imposed by the clustering algorithm and the data. Indices based on relative criteria compare multiple structures (generated by different algorithms, for example) and decide which of them is better in some sense. External indices measure performance by matching the cluster structure to a priori information, namely the "true" class labels (often referred to as ground truth) [10]. Typically, clustering results are evaluated using the external criterion, especially for remote sensing data, where the goal is the extraction of specified classes [11, 12].

Clustering can be performed in two different modes: crisp and fuzzy. In crisp clustering, the clusters are disjoint and non-overlapping in nature [13]; any pattern belongs to one and only one class. In fuzzy clustering, a pattern may belong to all classes with a certain fuzzy membership grade [9]. The K-means (or hard C-means) algorithm starts with K cluster centroids (initially selected randomly or derived from some a priori information). Each pattern in the data set is then assigned to the closest cluster center, and the centroids are updated using the mean of the associated patterns. The process is repeated until some stopping criterion is met. The FCM algorithm [2] seems to be the most popular in the field of fuzzy clustering. In the classical FCM algorithm, a within-cluster sum function $J_m$ is minimized to evolve the proper cluster centers as follows:

$$J_m = \sum_{i=1}^{c} \sum_{j=1}^{N} u_{ij}^{m} \, \lVert v_i - x_j \rVert, \quad m \ge 1 \qquad (3.1)$$

where $\lVert v_i - x_j \rVert$ is a distance measure between the center $v_i$ of the $i$th cluster and the pattern $x_j$, $u_{ij}$ is the fuzzy membership function, and $m$ is a constant known as the index of fuzziness. Given $C$ clusters, their cluster centers $v_j$ for $j = 1$ to $C$ can be determined by the following expression:

$$v_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} x_i}{\sum_{i=1}^{n} u_{ij}^{m}} \qquad (3.2)$$

Now, differentiating the performance criterion with respect to $v_j$ (treating $u_{ij}^{m}$ as constants) and with respect to $u_{ij}$ (treating $v_j$ as constants), and setting the derivatives to zero, the following relation is obtained:

$$u_{ik} = \left[ \sum_{j=1}^{C} \left( \frac{d_{ik}}{d_{jk}} \right)^{\frac{2}{m-1}} \right]^{-1} = \left[ \sum_{j=1}^{C} \left( \frac{\lVert x_k - v_i \rVert}{\lVert x_k - v_j \rVert} \right)^{\frac{2}{m-1}} \right]^{-1} \qquad (3.3)$$
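The alternating updates of Eqs. 3.2 and 3.3 can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the `init` argument and the small distance floor are our own additions.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=50, init=None, seed=0):
    """Classical FCM: alternate memberships (Eq. 3.3) and centers (Eq. 3.2).
    X: (N, d) data, c: number of clusters, m: index of fuzziness."""
    rng = np.random.default_rng(seed)
    # start from given centers, or from c distinct random data points
    V = np.array(init, dtype=float) if init is not None else X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        # distances from every center to every pattern, floored to avoid division by zero
        d = np.fmax(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-12)  # (c, N)
        # Eq. 3.3: u_ik proportional to d_ik^{-2/(m-1)}, normalized over clusters
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)
        # Eq. 3.2: centers as membership-weighted means of the patterns
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return V, U
```

On two well-separated groups of points, the centers converge to the group means and the memberships approach crisp assignments.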


3.3 Clustering of Hyperspectral Data Based on PSO and Kernel Function

3.3.1 FCM Clustering Based on PSO (PSO-FCM)

3.3.1.1 Particle Swarm Optimization

PSO is a population-based stochastic optimization technique inspired by the social behavior of bird flocks (and fish schools, etc.), developed by Kennedy and Eberhart [14]. As a relatively new evolutionary paradigm, it has grown over the past decade, and many studies related to PSO have been published. In PSO, each particle is an individual, and the swarm is composed of particles. The problem's solution space is formulated as a search space, in which each position is a candidate solution of the problem. Particles cooperate to find the best position (best solution) in the search space (solution space). Each particle moves according to its velocity, which is computed as:

$$v_{id}(t+1) = w\, v_{id}(t) + c_1 r_1 \left( p_{id}(t) - x_{id}(t) \right) + c_2 r_2 \left( p_{gd}(t) - x_{id}(t) \right) \qquad (3.4)$$

$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) \qquad (3.5)$$

In (3.4) and (3.5), $x_{id}(t)$ is the position of particle $i$ at time $t$, $v_{id}(t)$ is the velocity of particle $i$ at time $t$, $p_{id}(t)$ is the best position found by particle $i$ itself so far, $p_{gd}(t)$ is the best position found by the whole swarm so far, $w$ is an inertia weight scaling the previous time-step velocity, $c_1$ and $c_2$ are two acceleration coefficients that scale the influence of the particle's best personal position ($p_{id}(t)$) and the best global position ($p_{gd}(t)$), and $r_1$ and $r_2$ are random variables between 0 and 1 [15].
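A single PSO step per Eqs. 3.4 and 3.5 might look like the sketch below; the default $w$, $c_1$, and $c_2$ follow Table 3.1, and the function and argument names are illustrative, not from the original.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.72, c1=0.49, c2=0.49, rng=None):
    """One velocity/position update for a single particle (Eqs. 3.4 and 3.5)."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # fresh random values in [0, 1) per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. 3.4
    x_new = x + v_new                                              # Eq. 3.5
    return x_new, v_new
```

Note that when a particle sits exactly at both its personal and the global best, only the inertia term remains, so the velocity simply decays by the factor $w$.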

3.3.1.2 PSO-FCM

The FCM algorithm tends to converge faster than the PSO algorithm because it requires fewer function evaluations, but it usually gets stuck in local optima. We integrate FCM with PSO to form a hybrid clustering algorithm called PSO-FCM, which maintains the merits of both. More specifically, PSO-FCM applies four FCM iterations to the particles in the swarm every eight generations, so that the fitness value of each particle is improved [16]. A particle is a vector of real numbers of dimension k × d, where k is the number of clusters and d is the dimension of the data to be clustered. The objective function of the FCM algorithm defined in Eq. 3.1 is the fitness function of the hybrid clustering algorithm. The hybrid PSO-FCM algorithm can be summarized as follows [17]:


1. Randomly generate the particles.
2. Calculate the cluster centers using Eq. 3.2.
3. Calculate the fitness function using Eq. 3.1.
4. Update Pbest and Gbest according to the FCM fitness function.
5. Update the velocities using Eq. 3.4.
6. Update the positions using Eq. 3.5.
7. Repeat steps 2–6 until the stopping criterion is met.
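The steps above can be sketched as a hybrid loop. This is an illustrative reading of the schedule described earlier (four FCM iterations applied every eight generations); the particle layout, parameter defaults from Table 3.1, and helper names are our own assumptions, not the authors' code.

```python
import numpy as np

def fitness(centers, X, m=2.0):
    """Eq. 3.1 evaluated at the given centers, with memberships from Eq. 3.3."""
    d = np.fmax(np.linalg.norm(X[None] - centers[:, None], axis=2), 1e-12)
    U = d ** (-2.0 / (m - 1.0))
    U /= U.sum(axis=0)
    return float(((U ** m) * d).sum())

def pso_fcm(X, k, n_particles=35, n_gen=100, w=0.72, c1=0.49, c2=0.49, seed=0):
    rng = np.random.default_rng(seed)
    # step 1: each particle is a set of k candidate centers, drawn from the data
    P = X[rng.integers(0, len(X), (n_particles, k))]   # (n_particles, k, d)
    V = np.zeros_like(P)
    pbest = P.copy()
    pbest_f = np.array([fitness(p, X) for p in P])
    for g in range(n_gen):
        for i in range(n_particles):
            gbest = pbest[pbest_f.argmin()]
            # steps 5-6: velocity and position updates (Eqs. 3.4 and 3.5)
            r1, r2 = rng.random(P[i].shape), rng.random(P[i].shape)
            V[i] = w * V[i] + c1 * r1 * (pbest[i] - P[i]) + c2 * r2 * (gbest - P[i])
            P[i] = P[i] + V[i]
            # every 8th generation, refine the particle with 4 FCM iterations
            if g % 8 == 7:
                for _ in range(4):
                    d = np.fmax(np.linalg.norm(X[None] - P[i][:, None], axis=2), 1e-12)
                    U = d ** -2.0                       # Eq. 3.3 with m = 2
                    U /= U.sum(axis=0)
                    P[i] = ((U ** 2) @ X) / (U ** 2).sum(axis=1, keepdims=True)  # Eq. 3.2
            # steps 3-4: evaluate fitness and update the personal best
            f = fitness(P[i], X)
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, P[i].copy()
    return pbest[pbest_f.argmin()]
```

On simple separable data, the best particle converges to centers near the natural groups even with a modest swarm and few generations.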

3.3.2 Kernel-Based Fuzzy C-Means Algorithm

The fuzzy C-means algorithm uses the Euclidean distance to calculate the similarities between pixels and cluster centers. However, this distance has some problems that affect the clustering results; for example, it is sensitive to cluster shapes and to outliers. To tackle these problems, a modification of FCM named kernel FCM (KFCM) was introduced [18]. The basic idea of KFCM is to compute the Euclidean distance in another, higher-dimensional space via a nonlinear mapping function $\varphi$; with this mapping, we expect simpler relations in the new space (the feature space), so the clusters can be better separated.

Nevertheless, explicitly mapping all data into the feature space can be computationally expensive. To handle data in the feature space, one can instead use their pairwise scalar products, which can be computed directly by a kernel function. A kernel function is thus a function $K : X \times X \to \mathbb{R}$ such that [19]:

$$\forall x, y \in X: \quad \langle \varphi(x), \varphi(y) \rangle = k(x, y) \qquad (3.6)$$

So the Euclidean distance between pixel $i$ and cluster $j$ can be rewritten in the feature space as follows:

$$d_{ij} = \lVert \varphi(x_i) - \varphi(v_j) \rVert^2 = k(x_i, x_i) + k(v_j, v_j) - 2\, k(x_i, v_j) \qquad (3.7)$$

There are many different kernels, but here we use the radial basis function (RBF) kernel due to its robustness [20]:

$$K(x, y) = \exp\left( -\frac{\lVert x - y \rVert^2}{\sigma^2} \right) \qquad (3.8)$$

Since $k(x, x) = 1$ for the RBF kernel, the feature-space distance can therefore be written as:

$$d_{ij} = \lVert \varphi(x_i) - \varphi(v_j) \rVert^2 = 2\left( 1 - k(x_i, v_j) \right) \qquad (3.9)$$


By using this distance in the FCM objective function, we can derive the objective function of KFCM:

$$J(X, U, C) = 2 \sum_{j=1}^{c} \sum_{i=1}^{n} u_{ji}^{m} \left( 1 - k(x_i, v_j) \right) \qquad (3.10)$$

where $U$ is the fuzzy partition matrix, $v_j$ is the centroid of the $j$th cluster, and $x_i$ is the feature vector of the $i$th pixel.

To optimize the KFCM objective function, an alternating optimization method is used, in which the cluster centers and the fuzzy partition matrix are calculated in each iteration by the following equations [21]:

$$u_{ji} = \frac{1}{\sum_{l=1}^{c} \left( \dfrac{1 - k(x_i, v_j)}{1 - k(x_i, v_l)} \right)^{\frac{1}{m-1}}} \qquad (3.11)$$

$$v_j = \frac{\sum_{i=1}^{n} u_{ji}^{m}\, k(x_i, v_j)\, x_i}{\sum_{i=1}^{n} u_{ji}^{m}\, k(x_i, v_j)} \qquad (3.12)$$
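One alternating-optimization pass of Eqs. 3.11 and 3.12, with the RBF kernel of Eq. 3.8, can be sketched as follows. This is illustrative only; the initialization from random data points, the `init` argument, and the small floor on kernel distances are our own assumptions.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Eq. 3.8: K(x, y) = exp(-||x - y||^2 / sigma^2), for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def kfcm(X, c, m=2.0, sigma=1.0, n_iter=50, init=None, seed=0):
    rng = np.random.default_rng(seed)
    V = np.array(init, dtype=float) if init is not None else X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        K = rbf(V, X, sigma)                 # (c, N) kernel values k(x_i, v_j)
        D = np.fmax(1.0 - K, 1e-12)          # kernel-space distance up to a factor of 2 (Eq. 3.9)
        # Eq. 3.11: memberships from ratios of kernel distances, normalized over clusters
        U = D ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0)
        # Eq. 3.12: kernel-weighted center update
        W = (U ** m) * K
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return V, U
```

Because the kernel weight $k(x_i, v_j)$ decays with distance, far-away pixels contribute almost nothing to a center's update, which is the source of KFCM's robustness to outliers.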

To use this method, the kernel parameter ($\sigma$ in Eq. 3.8) must be tuned. For tuning, we normalized the data and ran the algorithm with ten different sigmas from 0.1 to 1, in increments of 0.1, and compared the results using the kappa coefficient; the sigma with the best performance (here, 1) was then chosen for the comparison with the other algorithms.

3.4 Results and Discussion

3.4.1 Dataset

The performance of the two methods is evaluated using a sample hyperspectral image taken over the Indian Pines test site in northwest Indiana in June 1992 [22]. This data set was chosen because ground truth for evaluating the algorithms is available. The data consist of 145 × 145 pixels with 220 bands. The 20 water absorption bands were removed from the original image; in addition, 15 noisy bands were also removed, resulting in a total of 185 bands [23]. The original ground truth actually has 16 classes, but in this study five of them are used. A color composite of the image subset and the ground-truth map of the five classes are shown in Figs. 3.1 and 3.2, respectively. These classes were selected because they have a suitable spatial distribution.


3.4.2 Performance Measure

In this paper, the confusion matrix between the true labels and the labels returned by the clustering algorithms was used as the quality assessment measure [12]. In addition, the kappa coefficient of agreement is defined in Eq. 3.13, and the Khat index [24] for individual classes is calculated using Eq. 3.14.

$$K = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})} \qquad (3.13)$$

Fig. 3.1 Color composite of the image subset


$$k_i = \frac{N x_{ii} - x_{i+} \cdot x_{+i}}{N x_{i+} - x_{i+} \cdot x_{+i}} \qquad (3.14)$$

In Eq. 3.13, $K$ is the kappa coefficient, and in Eq. 3.14, $k_i$ is the Khat index for individual class $i$; $r$ is the number of columns (and rows) in the confusion matrix, $x_{ii}$ is entry $(i, i)$ of the confusion matrix, $x_{i+}$ and $x_{+i}$ are the marginal totals of row $i$ and column $i$, respectively, and $N$ is the total number of observations [24, 25].
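Both measures can be computed directly from a confusion matrix, as in the sketch below. The per-class formula assumes the standard conditional-kappa form for the Khat index; the function names are illustrative.

```python
import numpy as np

def kappa(cm):
    """Overall kappa coefficient (Eq. 3.13) from an r x r confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    N = cm.sum()
    rows, cols = cm.sum(axis=1), cm.sum(axis=0)   # marginal totals x_i+ and x_+i
    chance = (rows * cols).sum()                  # sum of x_i+ * x_+i over classes
    return (N * np.trace(cm) - chance) / (N ** 2 - chance)

def khat_per_class(cm):
    """Conditional kappa per class i: (N*x_ii - x_i+*x_+i) / (N*x_i+ - x_i+*x_+i)."""
    cm = np.asarray(cm, dtype=float)
    N = cm.sum()
    rows, cols = cm.sum(axis=1), cm.sum(axis=0)
    return (N * np.diag(cm) - rows * cols) / (N * rows - rows * cols)
```

For a perfect diagonal confusion matrix both measures equal 1; off-diagonal confusion lowers the affected classes' Khat values first.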

In this study, four clustering methods, i.e. K-means, FCM, PSO-FCM, and KFCM, are compared with each other. These methods were run with the parameters listed in Table 3.1.

According to Fig. 3.3, the PSO-FCM, KFCM, and FCM methods, with kappa values of 76.21, 67.28, and 66.51, respectively, clearly perform better (in accuracy) than the K-means method, with a kappa value of 58.16.

Because the information classes overlap, especially in the spectral domain of hyperspectral data, the FCM-based algorithms obtain better results. Among the three FCM-based methods, the two newly presented ones, i.e. PSO-FCM and KFCM, outperform FCM. It should be noted that PSO-FCM performs both global and local search, while KFCM is only capable of local search; it appears that transferring the data to a high-dimensional space can separate some clusters and enhance the FCM results. Therefore, it can be said that these two methods are efficient in hyperspectral clustering; in other words, they help FCM reach better performance.

Fig. 3.2 Ground truth of the area with five classes


Of the two presented methods, PSO-FCM has the better accuracy (by about 12%). The reason is that PSO-FCM has both the local search ability of FCM and the global search ability of PSO, whereas KFCM is sensitive to both the initial values and the sigma parameter; that is, with different initial values or a different sigma, KFCM converges to different solutions.

For a better comparison, the results obtained from the different clustering methods are presented in Table 3.2. The results, and their comparison with the thematic map, show that in FCM and K-means the clusters are somewhat mixed, e.g. the Hay-windrowed class also appears in other clusters, but in KFCM and PSO-FCM this issue is reduced. In addition, in all the methods some parts were not clustered properly. For example, in the no-till corn and soybean fields, the KFCM results show that not all clusters are necessarily better separated by using the kernel method; the Woods class is completely clustered in the FCM results but is mixed with other clusters in the KFCM results.

Table 3.1 Parameters used in clustering the hyperspectral data sets

Algorithm   Parameter        Assigned value
K-means     Iterations       50
FCM         Iterations       50
            m                2
PSO-FCM     Iterations       100
            FCM iterations   4
            PSO iterations   8
            Psize            35
            W                0.72
            C1               0.49
            C2               0.49
KFCM        Iterations       50
            m                2
            Sigma            1

'Assigned value' refers to the value of each parameter involved in the algorithms.

Fig. 3.3 Comparison of kappa coefficient in four clustering methods

3.5 Conclusion

In this article, two fuzzy clustering methods, based on the PSO and kernel approaches respectively, are evaluated. The results show that the presented methods achieve better accuracy than standard FCM. Moreover, PSO-FCM yields better results than KFCM due to the combined global and local search inherent in this method. KFCM nevertheless has an interesting data-transformation ability, which makes it possible to introduce PSO to solve its problems. Accordingly, our future investigations will be dedicated to combining PSO and KFCM, so as not only to find the optimal parameters of KFCM, but also to overcome its sensitivity to initial values.

References

1. Wunsch D, Xu R (2008) Kernel-based clustering. In: Clustering, 1st edn. IEEE, New Jersey, pp 163–178
2. Bezdek J (1981) Pattern recognition with fuzzy objective function algorithms. Kluwer Academic Publishers, Norwell
3. Izakian H, Abraham A, Snasel V (2009) Fuzzy clustering using hybrid fuzzy c-means and fuzzy particle swarm optimization. Paper presented at the World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December
4. Tso B, Mather PM (2009) Classification methods for remotely sensed data. CRC, Boca Raton
5. Cover T (1965) Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans Electron Comput 14:326–334
6. Liu H, Pei T, Zhou C, Zhu AX (2008) Multi-temporal MODIS-data-based PSO-FCM clustering applied to wetland extraction in the Sanjiang Plain. Paper presented at the International Conference on Earth Observation Data Processing and Analysis, Wuhan, China
7. Yun-song S, Yu-feng S (2010) Remote sensing image classification and recognition based on KFCM. In: Proceedings of the 5th international conference on Computer & Education, Hefei, China, pp 1062–1065

Table 3.2 Khat index obtained by the four investigated methods on the AVIRIS image

           Corn no-till   Grass/trees   Hay-windrowed   Soybean no-till   Woods
K-means    47.8           0             99.8            62.7              99.9
FCM        50.71          71.89         89.72           47.83             98.58
PSO-FCM    50.53          84.32         99.54           66.75             97.44
KFCM       68.76          67.07         74.49           40.87             93.65


8. Niazmardi S, Homayouni S, Safari A (2011) Remotely sensed image clustering based on kernel-based fuzzy C-means algorithm. SMPR, Tehran
9. Jain AK, Murty MN, Flynn PJ (1999) Data clustering: a review. ACM Comput Surv (CSUR) 31(3):264–323
10. Jain AK (2010) Data clustering: 50 years beyond K-means. Pattern Recognit Lett 31:651–666
11. Saeedi S, Samadzadegan F, El-Sheimy N (2009) Object extraction from LIDAR data using an artificial Swarm Bee colony clustering algorithm. In: Stilla U, Rottensteiner F, Paparoditis N (eds) CMRT09 IAPRS 38 (Part 3)
12. Zhong S, Ghosh J (2003) A comparative study of generative models for document clustering. Paper presented at the SIAM international conference on data mining workshop on clustering high dimensional data and its applications, San Francisco
13. Abraham A, Das S, Roy S (2008) Swarm intelligence algorithms for data clustering. In: Maimon O, Rokach L (eds) Soft computing for knowledge discovery and data mining. Springer, New York, pp 279–313. doi:10.1007/978-0-387-69935-6_12
14. Kennedy J, Eberhart R (1995) Particle swarm optimization. Paper presented at the IEEE International Conference on Neural Networks (ICNN), Perth, WA
15. Yang F, Zhang C, Sun T (2009) Particle swarm optimization and differential evolution in fuzzy clustering. In: Advances in neuro-information processing. Springer, Berlin, pp 501–508
16. Yang F, Sun T, Zhang C (2009) An efficient hybrid data clustering method based on K-harmonic means and particle swarm optimization. Expert Syst Appl 36:9847–9852
17. Li W, Yushu L, Xinxin Z, Yuanqing X (2006) Particle swarm optimization for fuzzy c-means clustering. Paper presented at the Sixth World Congress on Intelligent Control and Automation (WCICA 2006)
18. Zhang DQ, Chen SC (2002) Fuzzy clustering using kernel method. In: International conference on control and automation (ICCA'02), Xiamen, China, pp 123–127
19. de Oliveira JV, Pedrycz W (eds) (2007) Advances in fuzzy clustering and its applications. Wiley, Chichester
20. Wang L, Jin Y, Du W, Inoue K, Urahama K (2005) Robust kernel fuzzy clustering. In: Fuzzy systems and knowledge discovery, vol 3613. Springer, Berlin/Heidelberg, pp 454–461
21. Graves D, Pedrycz W (2010) Kernel-based fuzzy clustering and fuzzy clustering: a comparative experimental study. Fuzzy Set Syst 161:522–543
22. Graves D, Pedrycz W (2007) Performance of kernel-based fuzzy clustering. Electron Lett 43:1445–1446
23. Mojaradi B, Emami H, Varshosaz M, Jamali S (2008) A novel band selection method for hyperspectral data analysis. Int Arch Photogramm Remote Sens Spat Inf Sci 37:447–454
24. Kumar M (2004) Feature selection for classification of hyperspectral remotely sensed data using NSGA-II. Water Resources Seminar CE 597D
25. Carletta J (1996) Assessing agreement on classification tasks: the kappa statistic. Comput Linguist 22:249–254
