An active contour model based on fused texture features for image segmentation

Author's Accepted Manuscript

An active contour model based on fused texture features for image segmentation

Qinggang Wu, Yong Gan, Bin Lin, Qiuwen Zhang, Huawen Chang

PII: S0925-2312(14)01352-6
DOI: http://dx.doi.org/10.1016/j.neucom.2014.04.085
Reference: NEUCOM14796
To appear in: Neurocomputing
Received date: 3 December 2013
Revised date: 12 April 2014
Accepted date: 28 April 2014

Cite this article as: Qinggang Wu, Yong Gan, Bin Lin, Qiuwen Zhang, Huawen Chang, An active contour model based on fused texture features for image segmentation, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2014.04.085

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

www.elsevier.com/locate/neucom



An Active Contour Model based on Fused Texture Features for Image Segmentation

Qinggang Wu a,*, Yong Gan a, Bin Lin b, Qiuwen Zhang a, Huawen Chang a

a School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, 450002, China

b Information Science and Technology College, Dalian Maritime University, Dalian, 116026, China

[email protected]

Abstract—Texture image segmentation plays an important role in various computer vision tasks. In this paper, a convex texture image segmentation model is proposed. First, the texture features of Gabor and GLCM (Gray Level Co-occurrence Matrix) are extracted from the original image. Then, the two kinds of texture features are fused together by concatenation to construct a discriminative feature space. In the image segmentation step, a convex energy function is defined by taking the non-convex vector-valued model of Active Contour without Edges (ACWE) into a global minimization framework (GMAC). The proposed global minimization energy function with fused textures (GMFT) avoids local minima in the minimization of the vector-valued ACWE model. In addition, a fast dual formulation is adopted to achieve efficient contour evolution. Experimental results on synthetic and natural animal images demonstrate that the proposed GMFT model obtains more satisfactory segmentation results than two state-of-the-art methods in terms of segmentation accuracy and efficiency.

Keywords—Active contour model (ACM), Gray Level Co-occurrence Matrix (GLCM), Gabor filter, texture fusion, texture segmentation, dual formulation.

1 Introduction

Image segmentation is one of the most extensively studied problems in computer vision [1] and is crucial to image analysis, understanding, and interpretation. Texture image segmentation has long been an important topic in the image processing field [2]; it aims at segmenting a texture image into several regions with different texture features. No known method is able to consistently and accurately segment all kinds of texture images. Generally, the overall quality of texture segmentation is determined by both the performance of the texture features and the segmentation approach [3].

There are numerous methods focusing on image segmentation [1] [4-12], such as region growing, split-and-merge, Bayesian methods, neural networks (NN) and active contour models (ACM). Recently, the ACM has been one of the most successful methods for image segmentation [13]. Compared with other methods, ACMs have many advantages [14]. First, ACMs can achieve sub-pixel accuracy at object boundaries. Second, various kinds of prior knowledge, including shape, intensity distribution and texture features, can be easily incorporated into ACMs for robust image segmentation. Third, the resulting contours are closed and quite regular, which is convenient for further applications such as shape analysis, classification, and recognition. A review of major ACMs can be found in [15]. One of the most famous ACMs is the Active Contour without Edges (ACWE) [16], which can be seen as a simplified Mumford–Shah (MS) functional [17]. Due to intensity inhomogeneity, texture objects and low contrast in images, some difficulties remain in practical applications. Recently, many researchers have made great improvements to ACMs to overcome these difficulties, such as [18-27]. X. F. Wang [26] proposed an efficient ACM using local features for image segmentation. Another main problem of ACMs is that the contour tends to be trapped in local minima during evolution, which still occurs in both [13] and [18]. Chan et al. [28] proposed a standard global minimization framework (GMAC) based on a non-compactly supported smooth approximation of the Heaviside function. Bresson [29] unified three different models within GMAC and proposed a dual formulation for the minimization problem.

For texture feature extraction, there are numerous methods, which can be classified into four categories [30]: statistical, geometrical, model-based and signal processing methods. Most of the literature focuses on the analysis of individual texture feature methods, while few papers consider the fusion of texture features. Classical methods for extracting texture features include GLCM, Gabor and MRF (Markov Random Field) [31-33]. A proper fusion of different texture features is expected to produce improved texture features with better discriminative ability. The available examples often combine different texture features by concatenation. Solberg and Jain [34] adopted a variety of texture features to perform a supervised classification of four satellite-based synthetic aperture radar (SAR) images and noted that feature fusion improved the classification rate. In addition to direct fusion, the optimization of multiple features has also been proposed. Z. Q. Zhao et al. [35] recognize human faces using neural networks based on multiple features. L. Shang et al. [36] proposed a fast independent component analysis (ICA) optimized radial basis probabilistic NN to recognize palmprints. B. Li et al. [37] proposed a locally linear discriminant embedding method for recognizing human faces. D. A. Clausi et al. [38] state that GLCM captures the higher frequency texture information, while Gabor captures the lower and mid-frequency texture information; in their work, GLCM and Gabor features are fused together for unsupervised segmentation to deal with boundary confusion.

In this paper, a novel algorithm is proposed to solve the above problems. Firstly, the GLCM and Gabor features of the original texture images are extracted and fused together after principal component analysis (PCA) optimization. Secondly, the fused feature sets are incorporated into a modified convex vector-valued ACWE model. Compared with two state-of-the-art ACMs, the proposed global minimization energy function with fused textures (GMFT) has two main advantages. Firstly, it can successfully segment animal texture images by utilizing the fusion of GLCM and Gabor features. Secondly, it can avoid local minima in the process of contour evolution by defining a new convex energy function in the GMAC framework. In addition, a fast dual formulation is employed to make the computation more efficient.

The rest of the paper is organized as follows. Section 2 discusses the feature extraction techniques of GLCM and Gabor, and the feature fusion strategy. Section 3 presents the convex vector-valued ACWE model, a fast minimization method based on the dual formulation, and the flowchart of the proposed algorithm in detail. In Section 4, we validate our method by various experiments on synthetic and animal texture images. Conclusions are drawn in Section 5.

2 Texture Feature Extraction

This section discusses the texture feature extraction techniques of GLCM and Gabor, and the feature fusion strategy. GLCM and Gabor are two commonly used texture features, which belong to the statistical methods and the signal processing methods, respectively.

2.1 GLCM Feature

GLCM is a texture feature extraction method proposed by Haralick et al. [31]. The calculation of the GLCM contains two steps. The first is to compute a co-occurring probability matrix. The elements of the matrix are the conditional joint probabilities of all pairwise combinations of gray levels (i, j) in a given spatial window (of size N). In the process of computing the GLCM, two parameters need to be determined: the inter-pixel orientation θ and distance δ:

P(i, j) = Pr(i, j | δ, θ, G, N)   (1)

Usually, a variety of orientations and inter-pixel distances are selected. Besides, the quantization of the gray levels G and the window size N must also be determined. Coarser quantization of G can significantly accelerate the calculation and reduce noise, overcoming the high computational complexity of the GLCM; however, abundant texture information is also lost [38]. The window size N affects the ability of the GLCM to capture texture features: small windows lead to poor local estimates, while large windows increase the risk of misleading classification when multiple textures appear in the window.

On the basis of the co-occurring probability matrix, many texture statistics are defined. The second step is to apply the predefined statistics to extract the corresponding texture features. A texture statistic identifies some structural aspect of the co-occurring probabilities, which in turn reflects some qualitative characteristic of the local image texture, e.g., smoothness or roughness. Each window generates a feature vector which is associated with the center pixel of the window; as a result, every pixel in the image has an associated feature vector. 14 GLCM statistics were designed in [31]; however, only six of them are advocated in [38] and will be used in our paper: contrast (Con), dissimilarity (Dis), entropy (Ent), homogeneity (Hom), inverse difference (Inv), and uniformity (Uni).
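The two GLCM steps above can be sketched in a few lines of NumPy. The function and variable names here are illustrative (not from the authors' code), and only the horizontal orientation θ = 0 is shown for brevity:

```python
import numpy as np

def glcm_matrix(win, levels, d=1):
    """Step 1: co-occurrence probability matrix P(i, j) for a window
    `win` already quantized to integer gray levels in [0, levels),
    at inter-pixel distance d and orientation theta = 0."""
    a, b = win[:, :-d].ravel(), win[:, d:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    P += P.T                      # count unordered pixel pairs
    return P / P.sum()            # normalize to joint probabilities

def glcm_stats(P):
    """Step 2: the six advocated statistics computed from P."""
    i, j = np.indices(P.shape)
    eps = 1e-12                   # avoid log(0) in the entropy
    return {
        "Con": np.sum(P * (i - j) ** 2),           # contrast
        "Dis": np.sum(P * np.abs(i - j)),          # dissimilarity
        "Ent": -np.sum(P * np.log(P + eps)),       # entropy
        "Hom": np.sum(P / (1.0 + (i - j) ** 2)),   # homogeneity
        "Inv": np.sum(P / (1.0 + np.abs(i - j))),  # inverse difference
        "Uni": np.sum(P ** 2),                     # uniformity
    }

# Tiny quantized window (3 gray levels) as a usage example.
win = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
stats = glcm_stats(glcm_matrix(win, levels=3))
```

In the paper's setting the window slides over the image, so each pixel receives one such statistic vector per orientation and distance.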

2.2 Gabor Feature

Gabor is a frequency transform method that models the frequency and orientation sensitivity of the human visual system. It has been applied in various image processing tasks, such as texture feature extraction and face recognition. D. A. Clausi [38] describes the use of Gabor filters for texture segmentation. A Gabor function is a Gaussian-modulated complex sinusoid in the spatial domain. The two-dimensional Gaussian envelope has an aspect ratio of σ_x/σ_y, and the complex exponential has a spatial frequency F and an orientation θ. The mathematical tractability of the Gabor filter in the spatial-frequency domain is appealing since it can be simplified as a Gaussian function centered on the frequency of interest, e.g.

H(u, v) = exp(−2π²(σ_x²(u − F)² + σ_y²v²))   (2)

Typically, a filter configuration is created that allows complete coverage of the spatial-frequency plane. The filters are set up in a pseudo-wavelet format to match each filter's frequency with its spatial extent. Each pixel has a response to each filter, so each pixel is represented by a feature vector whose dimension equals the number of filters. Although many techniques exist to extract features from Gabor filter outputs, there is experimental evidence by Clausi and Jernigan [39] to support using the magnitude of the Gabor filter response.
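As an illustration, the magnitude-of-response feature maps can be computed with a small spatial-domain filter bank. The kernel size and σ below are illustrative choices, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(F, theta, sigma, size=15):
    """Complex Gabor kernel: an isotropic Gaussian envelope
    (sigma_x = sigma_y = sigma is assumed here) modulated by a
    complex sinusoid of frequency F cycles/pixel at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.exp(2j * np.pi * F * xr)

def gabor_magnitude(img, F=1.0 / 5,
                    thetas=(0, np.pi / 5, 2 * np.pi / 5,
                            3 * np.pi / 5, 4 * np.pi / 5)):
    """One magnitude feature map per orientation (5 ppc -> F = 1/5)."""
    maps = []
    for th in thetas:
        k = gabor_kernel(F, th, sigma=3.0)
        re = convolve(img, k.real, mode="reflect")
        im = convolve(img, k.imag, mode="reflect")
        maps.append(np.hypot(re, im))            # magnitude response
    return np.stack(maps)                        # shape (5, H, W)
```

Taking the magnitude rather than the raw complex response follows the evidence of Clausi and Jernigan [39] cited above.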

2.3 Feature Fusion Strategy

The aforementioned GLCM and Gabor are sensitive to different kinds of texture features: the GLCM produces more consistent measurements for high-frequency texture features, while Gabor filters capture the lower and mid-frequency texture information. Sometimes it is necessary to use GLCM and Gabor simultaneously, rather than only one of them, to achieve better segmentation results. Thus, the GLCM can be combined with Gabor by substituting or supplementing the high-frequency band of the Gabor features. The fused texture features are expected to possess the advantages of both GLCM and Gabor.

The fusion of GLCM and Gabor is composed of the following three steps. Firstly, the 24-dimensional (24-D) GLCM texture features of the original image are extracted. The 24-D GLCM feature maps are obtained by combining 4 different orientations (0°, 45°, 90°, and 135°) and 6 different texture statistics (Con, Dis, Ent, Hom, Inv, Uni); the other two parameters, window size and inter-pixel distance, are fixed according to the characteristics of the texture image. Secondly, the 5-dimensional (5-D) Gabor texture features of the original image are extracted. The 5-D Gabor feature maps are obtained by combining 5 different orientations (0, π/5, 2π/5, 3π/5, 4π/5) at a fixed frequency of 5 pixels per cycle (ppc). The GLCM is expected to amend the disadvantage of Gabor, but there are strong correlations within the layers of the 24-D GLCM features or the 5-D Gabor features. A simple concatenation of the two kinds of texture features may lead to erroneous segmentation results for texture features with low contrast. To overcome this disadvantage, it is necessary to optimize the GLCM and Gabor features to reduce the correlations prior to fusion. Here, the linear optimization technique of PCA is adopted. The optimized 24-D GLCM features and 5-D Gabor features are combined in the following manner:

T_Fused^29-D = PCA(T_GLCM^24-D) ∪ PCA(T_Gabor^5-D)   (3)

The optimized 5-D Gabor feature maps are concatenated with the optimized 24-D GLCM feature maps, constructing the fused 29-D texture feature maps. The fused texture features capture not only the high-frequency texture features at the boundary between two texture patches, but also the low and mid-frequency texture features within a texture patch. In other words, the fused texture features possess the advantages of both GLCM and Gabor, and are thus expected to significantly improve the segmentation accuracy. In the next section, the selected strong texture features will be incorporated into the convex energy function GMFT.
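A minimal NumPy sketch of this fusion step (Eq. (3)); `pca_reduce` and `fuse_features` are hypothetical helper names, and the 99.9% cumulative contribution rate anticipates the setting reported in Section 4.1:

```python
import numpy as np

def pca_reduce(X, ccr=0.999):
    """PCA on a (pixels x features) matrix, keeping the leading
    components whose cumulative explained-variance ratio reaches
    the cumulative contribution rate `ccr`."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = s ** 2 / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(ratio), ccr)) + 1
    return Xc @ Vt[:k].T          # decorrelated per-pixel scores

def fuse_features(glcm_maps, gabor_maps, ccr=0.999):
    """Optimize each feature stack separately, then concatenate
    (the union in Eq. (3)) into one fused per-pixel feature space."""
    h, w = glcm_maps.shape[1:]
    G = pca_reduce(glcm_maps.reshape(glcm_maps.shape[0], -1).T, ccr)
    B = pca_reduce(gabor_maps.reshape(gabor_maps.shape[0], -1).T, ccr)
    fused = np.concatenate([G, B], axis=1)   # pixels x (kG + kB)
    return fused.T.reshape(-1, h, w)         # back to feature maps
```

Reducing each stack separately before concatenating matches the paper's point that the correlations live *within* the GLCM layers and *within* the Gabor layers.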

3 Convex Energy Function GMFT

In order to successfully segment texture images, we define a convex energy function by taking the non-convex vector-valued ACWE into the GMAC framework. This section gives the definition of the convex energy function GMFT, the fast minimization algorithm and the flowchart of the proposed texture segmentation algorithm.

3.1 The Definition of the Convex Energy Function

The proposed convex energy function GMFT modifies the vector-valued ACWE model by incorporating the stopping term g(x, y), and then employs convexification techniques similar to those in [28].

Let Ω ⊂ R² be the image domain, I: Ω → R a given image, I_i (i = 1, 2, …, n, n < 30) a set of texture feature maps obtained by the fusion of GLCM and Gabor, and C a closed contour in the image domain Ω. Writing R^GMFT := R^GMFT({I_1, I_2, …, I_n}) for the region term below, we propose to minimize the following convex energy function:

GMFT_Fused^1 = min_{0 ≤ φ ≤ 1} { TV_g(φ) + μ ∫_Ω [ (1/n) Σ_{i=1}^n λ_i^in (I_i − m_i^in)² − (1/n) Σ_{i=1}^n λ_i^out (I_i − m_i^out)² ] φ dx dy }   (4)
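Concretely, the bracketed region term of Eq. (4) can be evaluated pointwise. This sketch treats φ > 0.5 as "inside" the contour, and all names are illustrative:

```python
import numpy as np

def region_term(feats, phi, lam_in, lam_out):
    """R^GMFT of Eq. (4): for each feature map I_i, the weighted
    squared residuals to the means inside and outside the current
    contour, averaged over the n maps. feats has shape (n, H, W)."""
    inside = phi > 0.5
    r = np.zeros(phi.shape)
    for Ii, li, lo in zip(feats, lam_in, lam_out):
        m_in = Ii[inside].mean() if inside.any() else 0.0
        m_out = Ii[~inside].mean() if (~inside).any() else 0.0
        r += li * (Ii - m_in) ** 2 - lo * (Ii - m_out) ** 2
    return r / len(feats)
```

The term is negative where a pixel resembles the inside statistics and positive where it resembles the outside, so minimizing Eq. (4) pushes φ toward 1 and 0 respectively.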


The first term is the weighted total variation energy TV_g(φ) = ∫_Ω g(x, y) |∇φ| dx dy.

The incorporated stopping term g(x, y) is defined as the average of the edge indicator functions g_i(x, y) on the maps I_i as follows:

g(x, y) = (1/n) Σ_{i=1}^n g_i(x, y)   (5)

where g_i(x, y) = 1 / (1 + |∇I_i(x, y)|²), with 0 < g_i(x, y) < 1. The edge stopping function g(x, y) attracts the contour towards object boundaries with high gradients in the feature maps; it is the average over the feature maps obtained by the fusion of GLCM and Gabor features. The second term is the region information energy, where {I_1, I_2, …, I_n} is the combination of GLCM and Gabor features, and m_i^in and m_i^out are the mean values of the feature map I_i inside and outside the contour. The two coefficients λ_i^in and λ_i^out are the weights attached to the i-th feature map and depend on the magnitude of the quantities |m_i^in − m_i^out|. The parameter μ controls the tradeoff between the data-driven energy and the weighted total variation energy. The proposed energy function (4) is convex with respect to the level set function φ, so the proposed energy function GMFT_Fused^1 will never get stuck in local minima in the process of contour evolution.
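Eq. (5) and the g_i above translate directly; np.gradient is one reasonable choice of discrete gradient here, not one prescribed by the paper:

```python
import numpy as np

def edge_stop(feats):
    """g(x, y) of Eq. (5): the average over the n feature maps of
    g_i = 1 / (1 + |grad I_i|^2). g is close to 1 in flat regions
    and drops toward 0 on strong feature edges."""
    g = np.zeros(feats.shape[1:])
    for Ii in feats:
        gy, gx = np.gradient(Ii)
        g += 1.0 / (1.0 + gx ** 2 + gy ** 2)
    return g / feats.shape[0]
```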

Another issue that needs to be addressed is feature selection. Feature selection can be achieved by the Maximum Difference Scheme (MDS); as in [18], λ_i^in and λ_i^out are determined on the fused feature maps I_i in this paper:

λ_i^in = λ_i^out = |m_i^in − m_i^out| / M   (i = 1, 2, …, n)   (6)

where M = max_{1 ≤ i ≤ n} |m_i^in − m_i^out|. The setting of the coefficients λ_i^in and λ_i^out directly affects the discriminatory capacity of each feature map in driving the curve evolution. Thus, we set λ_i^in and λ_i^out to zero for those indices i where |m_i^in − m_i^out| < αM (α = 0.5), so that the "bad" features are effectively removed from the process of curve evolution.

According to the theorem proposed by Chan [28], for any given fixed values m_i^in and m_i^out, a global minimizer can be found by carrying out the proposed convex minimization.
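A sketch of this MDS weighting (Eq. (6) plus the αM threshold); `mds_weights` is an illustrative name, and φ > 0.5 is again taken as "inside":

```python
import numpy as np

def mds_weights(feats, phi, alpha=0.5):
    """lambda_i = |m_i^in - m_i^out| / M, zeroed when the difference
    is below alpha * M so weakly discriminative maps are removed.
    The paper uses one weight for both lambda^in and lambda^out."""
    inside = phi > 0.5
    diffs = np.array([abs(Ii[inside].mean() - Ii[~inside].mean())
                      for Ii in feats])
    M = diffs.max()
    if M == 0.0:
        return np.zeros_like(diffs)   # no map discriminates at all
    lam = diffs / M
    lam[diffs < alpha * M] = 0.0      # drop the "bad" features
    return lam
```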


3.2 Fast Minimization Algorithm by Dual Formulation

In this subsection, a minimization algorithm based on the dual formulation of the weighted TV norm is presented. Firstly, the energy function (4) can be rewritten as the following convex unconstrained minimization problem:

GMFT_Fused^2 = min_φ { TV_g(φ) + ∫_Ω ( μ R^GMFT(I_1, I_2, …, I_n) φ + α ν(φ) ) dx dy }   (7)

where ν(ξ) := max{0, 2|ξ − 1/2| − 1} is an exact penalty function, provided that the constant α is chosen large enough compared to λ_i^in and λ_i^out, such that:

α > ||R^GMFT(I_1, I_2, …, I_n)||_{L∞(Ω)}   (8)

The minimization problem (7) is then restated via the dual formulation as follows:

GMFT_Fused^3(φ, u) = min_{φ, u} { TV_g(φ) + (1/2θ) ||φ − u||²_{L²} + ∫_Ω ( μ R^GMFT(I_1, I_2, …, I_n) u + α ν(u) ) dx dy }   (9)

Since the energy function GMFT_Fused^3 is convex, its minimum can be computed by minimizing GMFT_Fused^3 with respect to φ and u separately, iterating until convergence. Thus, the following minimization problems are considered:

(1) With u fixed, the energy function GMFT_Fused^3 is first minimized with respect to φ:

φ = u − θ div p   (10)

where the dual variable p = (p_1, p_2) is given by:

g(x, y) ∇(θ div p − u) − |∇(θ div p − u)| p = 0   (11)

Choosing 0 ≤ δt ≤ 1/8, the previous equation can be solved by a fixed point method: p⁰ = 0 and, for any n ≥ 0,

p^(n+1) = ( p^n + δt ∇(div(p^n) − u/θ) ) / ( 1 + (δt / g(x, y)) |∇(div(p^n) − u/θ)| )   (12)

(2) With φ fixed, the energy function GMFT_Fused^3 is minimized with respect to u:

u = min{ max{ φ − θμ R^GMFT(I_1, I_2, …, I_n), 0 }, 1 }   (13)
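A compact NumPy sketch of the whole alternating scheme of Eqs. (10)-(13); the forward-difference gradient/divergence pair and the initialization of u are standard but illustrative choices, not details taken from the paper:

```python
import numpy as np

def grad(f):
    """Forward differences with Neumann (zero) boundary."""
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]
    gy[:-1, :] = f[1:, :] - f[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]; d[:, 1:-1] += px[:, 1:-1] - px[:, :-2]
    d[:, -1] += -px[:, -2]
    d[0, :] += py[0, :]; d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] += -py[-2, :]
    return d

def minimize_gmft(r, g, mu=1.0, theta=1.0, dt=1.0 / 8,
                  n_iter=200, eps=1e-3):
    """r: region term R^GMFT (H, W); g: edge-stopping map (H, W), g > 0."""
    u = np.where(r < 0, 1.0, 0.0)          # illustrative initialization
    px = np.zeros_like(r); py = np.zeros_like(r)
    phi = u.copy()
    for _ in range(n_iter):
        phi_old = phi
        # Eq. (12): one fixed-point step on the dual variable p
        gx, gy = grad(div(px, py) - u / theta)
        denom = 1.0 + (dt / g) * np.hypot(gx, gy)
        px = (px + dt * gx) / denom
        py = (py + dt * gy) / denom
        # Eq. (10): recover phi from p with u fixed
        phi = u - theta * div(px, py)
        # Eq. (13): pointwise update of u with phi fixed
        u = np.clip(phi - theta * mu * r, 0.0, 1.0)
        if np.abs(phi - phi_old).max() < eps:
            break
    return phi > 0.5                        # segmentation mask
```

Thresholding φ at 0.5 at the end follows the usual practice for these convex relaxations.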


3.3 Flowchart of the Proposed Texture Segmentation Algorithm

The proposed algorithm segments texture images by minimizing the proposed energy function GMFT on the combination of the PCA-optimized 24-D GLCM texture features and 5-D Gabor texture features. The flowchart of the proposed algorithm is shown in Fig. 1. Firstly, the 24-D GLCM-based texture feature maps and the 5-D Gabor-based texture feature maps of the input texture image are extracted and optimized by PCA separately. Secondly, the PCA-optimized GLCM features and Gabor features are fused together. Finally, the texture features with stronger discrimination are selected from the fused texture features by MDS. Then the contour is initialized by the level set function φ⁰, and the level set function φⁿ of the convex energy function GMFT is updated by the fast dual formulation until the iteration error |φⁿ⁺¹ − φⁿ| is smaller than a given threshold ε.

4 Experimental Results and Discussion

This section presents two groups of experiments to compare the proposed algorithm with other methods. The first group compares the performance of different kinds of texture features in Subsection 4.2. Hereinafter, the proposed algorithm with the 29-D fused texture features is called GMFT_Fused^29-D, with the 5-D Gabor features GMFT_Gabor^5-D, and with the 24-D GLCM features GMFT_GLCM^24-D. The second group compares the segmentation results of the proposed GMFT with two state-of-the-art methods in Subsection 4.3: the Active Contour with Texture Descriptor (TDAC) [18] and the Global Minimization Active Contour (GMTD) [20]. In our experiments, the TDAC method whose texture features are substituted by the fused 29-D GLCM and Gabor texture features is called TDAC_Fused^29-D.

Fig. 1. Flowchart of the proposed algorithm.

The images used in our experiments can also be divided into two groups. The first group contains 20 synthetic images, each composed of two types of textures from the Brodatz database [40]. The second group comprises 30 natural animal images.

4.1 Parameter setting

The parameters in the proposed algorithm have been chosen empirically. In the process of GLCM feature extraction, the window size is set to d = 7 according to the resolution of the texture primitives. The inter-pixel distance δ = 1 and the inter-pixel orientations {0, π/4, π/2, 3π/4} are set for the calculation of the co-occurrence matrices, and the gray levels are quantized to q = 64. In the process of Gabor feature extraction, the frequency parameter determines the ability of Gabor to extract texture features; in our experiments, the frequency is set to 5 ppc and the orientations θ = {0, π/5, 2π/5, 3π/5, 4π/5} are used. In the feature optimization step, the cumulative contribution rate is set to ccr = 99.9%. In the feature selection step, λ_i^in and λ_i^out are determined by the threshold parameter α, which is set to α = 0.5 according to the discussion in [18]. In the curve evolution step, the weight θ = 1, the time step δt = 1/8 and the iteration termination threshold ε = 1e−3 are chosen to achieve accurate edge location, balanced against acceptable convergence speed.
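Gathered in one place, the empirical settings above look like this (a convenience dictionary for reference, not code from the paper):

```python
import numpy as np

# Empirical parameter settings from Section 4.1.
PARAMS = {
    "glcm": {
        "window_size": 7,                  # d
        "interpixel_distance": 1,          # delta
        "orientations": [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        "gray_levels": 64,                 # q
    },
    "gabor": {
        "frequency_ppc": 5,                # pixels per cycle
        "orientations": [0, np.pi / 5, 2 * np.pi / 5,
                         3 * np.pi / 5, 4 * np.pi / 5],
    },
    "pca": {"ccr": 0.999},                 # cumulative contribution rate
    "mds": {"alpha": 0.5},                 # feature selection threshold
    "evolution": {"theta": 1.0, "dt": 1.0 / 8, "eps": 1e-3},
}
```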

4.2 Performance of different texture features

Fig. 2 shows the 5-D Gabor features for a squirrel image, while Fig. 3 shows the corresponding 24-D GLCM features. The size of the squirrel image is 288*209. From these two figures, we find that the Gabor and GLCM texture information are different: the Gabor feature maps in Fig. 2 only capture the lower and mid-frequency texture information, while the GLCM feature maps in Fig. 3 capture only the higher-frequency texture information. This brings great challenges to the accurate segmentation of the squirrel. The segmentation results using only Gabor and only GLCM texture features are shown in Fig. 4 (b) and (c), respectively. The squirrel cannot be accurately segmented from such a complex background, even though there are inherent textural differences between the squirrel and the background. The segmentation result on the fused texture features is more satisfactory, as shown in Fig. 4 (d): the irrelevant noise is removed and the localization accuracy is improved. Thus, the fused texture features are superior to the individual Gabor and GLCM texture features.



Fig. 2. Extracted Gabor texture feature maps. (a) Original squirrel image. (b)–(f) From left to right: the Gabor texture features at five different orientations: 0, π/5, 2π/5, 3π/5, 4π/5.


Fig. 3. Originally extracted GLCM texture feature maps. Columns (a)–(f): Con, Dis, Ent, Hom, Inv, and Uni, respectively. Rows 1–4: four different orientations: 0, π/4, π/2 and 3π/4, respectively.

Fig. 4. Experiments of the proposed algorithm with different texture features. Column (a): initial contours. Columns (b)–(d): segmentation results by GLCM features, Gabor features and the combined features, respectively.

Fig. 5 plots the scatter diagrams of the GLCM and Gabor texture features for the squirrel and the background. The original squirrel image is shown in Fig. 5 (a); the blue rectangle contains the squirrel, while the red rectangle contains background. The width of both the blue and red rectangles is 40 pixels; randomly selecting one line, 40 pixels are obtained. Three GLCM-based texture features (Cor, Idm, and Uni) for the 40 pixels are plotted in Fig. 5 (b). We observe that the three texture features are mingled together; thus, it is difficult to distinguish the squirrel from the background using the three GLCM-based texture features. After PCA optimization and texture feature fusion, the first three principal components are shown in Fig. 5 (c). The discrepancy between the squirrel and the background is increased, so the squirrel can be easily segmented from the background.



Fig. 5. Comparison of texture features between squirrel and background. (a) Original squirrel image, blue rectangle contains squirrel, while red rectangle contains background. (b) Scatter diagram of three GLCM based texture features. (c) Scatter diagram of the first three principal components in the fusion of PCA optimized GLCM and Gabor texture features.

To further demonstrate the superiority of the fused texture features, comparisons are conducted on synthetic images between the proposed GMFT_Fused^29-D and the methods GMFT_Gabor^5-D and GMFT_GLCM^24-D. The synthetic images include two kinds of textures whose boundaries interact with each other in a random manner. The segmentation results are shown in Fig. 6. For an image of size N_x * N_y, the initial contour of the image in the first row is a large central rectangle centered at (N_x/2, N_y/2) with length (N_x − 20) and width (N_y − 20). For the images in the second and last rows, the initial contour is a central circle: φ⁰ = −√((x − N_x/2)² + (y − N_y/2)²) + r, with r = min(N_x/3, N_y/3). The initial contour of the image in the third row is uniformly distributed circles. For the image in the fourth row, the initial contour is a circle at the bottom-left of the image: φ⁰ = −√((x − 3N_x/4)² + (y − N_y/4)²) + r, with r = min(N_x/5, N_y/5). The third column indicates that satisfactory segmentation results cannot be obtained with GLCM features alone, and the fourth column shows that Gabor features alone also cannot successfully segment the texture images. The segmentation results of the proposed algorithm with the fused texture features, shown in the last column, are satisfactory. From Fig. 6, it can be concluded that the fused texture features are superior to the individual textures in terms of segmentation accuracy.



Fig. 6. Experiments of the proposed algorithm with different texture features. Column (a): original images. Column (b): initial contours. Columns (c) – (e): segmentation results by GLCM features, Gabor features and the combined features, respectively.

4.3 Comparisons with two state-of-the-art methods

Fig. 7 compares the segmentation results of the proposed algorithm GMFT_Fused^29-D with TDAC_Fused^29-D [18] and GMTD [20] on complex animal texture images. A conclusion similar to that of Fig. 6 can be drawn from these animal images. The contrast between the animals of interest and the backgrounds is relatively small. In this figure, the initial contour of the image in the first row is uniformly distributed circles, while the initial contour of the image in the second row is a large central rectangle. The initial contour of the image in the third row is a central circle, and for the image in the last row, the initial contour is a circle at the bottom-left of the image. The segmentation results of TDAC_Fused^29-D shown in the third column are erroneous, since this model tends to be trapped in local minima even though its texture features are the 29-D fusion of GLCM and Gabor. As shown in the fourth column, the segmentation results of GMTD are better than those of TDAC_Fused^29-D since it avoids the local minima.



Fig. 7. Experiments of different algorithms on animal texture images. Column (a): original images. Column (b): initial contours. Columns (c)–(e): segmentation results by TDAC_Fused^29-D [18], GMTD [20] and the proposed GMFT_Fused^29-D, respectively.

However, there is some noise in the lower-frequency regions of the image, because the texture features of GMTD only include GLCM. The segmentation results of the proposed algorithm GMFT_Fused^29-D, shown in the last column, are more satisfactory and obviously superior to those in the previous two columns. The reason is that the proposed GMFT_Fused^29-D model not only attains the global minimum, but also combines the advantages of both GLCM in capturing the higher-frequency information and Gabor in capturing the lower and mid-frequency information.

In order to further demonstrate the performance of the proposed algorithm, the false alarm rate of the three models is computed over 50 synthetic texture images and natural animal images, as shown in Fig. 8. The false alarm rate of the proposed algorithm GMFT_Fused^29-D is only 5%, far less than those of TDAC_Fused^29-D and GMTD. This clearly shows that the proposed algorithm obtains more satisfactory segmentation results than the other two models in terms of accuracy.
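The paper does not spell out its false-alarm formula; one common definition, used here as an assumption, is the fraction of background pixels wrongly labeled as object:

```python
import numpy as np

def false_alarm_rate(pred, truth):
    """pred, truth: boolean masks (True = object). Returns the share
    of true-background pixels that the segmentation marks as object."""
    background = ~truth
    n_bg = background.sum()
    if n_bg == 0:
        return 0.0                       # no background to misclassify
    return float(np.logical_and(pred, background).sum()) / n_bg
```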

Fig. 8. False alarm rate for comparison of different methods.
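The false alarm rate used for this comparison can be computed from a binary segmentation mask and a ground-truth mask. The paper does not state its exact definition, so the sketch below assumes a common one (the fraction of true background pixels wrongly labeled as object); the function name and toy masks are illustrative, not taken from the paper.

```python
import numpy as np

def false_alarm_rate(segmentation, ground_truth):
    """Fraction of true background pixels wrongly labeled as object.

    Both inputs are boolean arrays of the same shape, with True
    marking the segmented object region.
    """
    segmentation = np.asarray(segmentation, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    background = ~ground_truth
    false_positives = np.logical_and(segmentation, background)
    return false_positives.sum() / max(int(background.sum()), 1)

# Toy 4x4 example: one background pixel is wrongly labeled as object.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True              # 4 object pixels, 12 background pixels
seg = gt.copy()
seg[0, 0] = True                 # a single false alarm
print(false_alarm_rate(seg, gt))  # 1/12, i.e. about 0.083
```

Averaging this quantity over the 50 test images would yield the per-method rates plotted in Fig. 8.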


4.4 Computational performance of the proposed algorithm

All three methods (the proposed GMFT-Fused29D, TDAC-Fused29D [18], and GMTD [20]) were implemented in Matlab 7.0 on a 3.39 GHz workstation. For all three methods, the first step of feature extraction is excluded from the comparison because it is not the main factor affecting the minimization speed. Table 1 presents the sizes and computational times for the images in Fig. 7. Compared with TDAC-Fused29D and GMTD, the proposed algorithm is clearly more efficient on the same image by taking advantage of the dual formulation, although its number of fused texture features is the same as that of TDAC-Fused29D and larger than that of GMTD.

Table 1. Comparison of the CPU response time (s) between the proposed GMFT-Fused29D and the other two methods, TDAC-Fused29D and GMTD, for the images in Fig. 7 in the same order.

Image   Size      TDAC-Fused29D (s)   GMTD (s)    GMFT-Fused29D (s)
6       481*321   196.5994            120.5135    86.1805
7       481*321   376.2576            147.6583    98.2141
8       481*321   341.9059            137.3393    84.6897
9       481*321   413.6255            102.8946    74.0571
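The CPU response times above are wall-clock measurements of the energy-minimization step, with feature extraction kept outside the timed region. A minimal timing harness following that protocol might look as below (in Python rather than the paper's Matlab; `segment_fn` is a hypothetical stand-in for any of the three methods):

```python
import time

def time_segmentation(segment_fn, features, repeats=3):
    """Best-of-N wall-clock time of the minimization step only.

    `features` stands for the precomputed texture features, so the
    feature-extraction cost stays outside the timed region.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        segment_fn(features)           # the contour evolution under test
        best = min(best, time.perf_counter() - start)
    return best

# Usage sketch with a trivial stand-in segmentation routine.
elapsed = time_segmentation(lambda feats: sorted(feats), [3, 1, 2])
print(elapsed >= 0.0)  # True
```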

5 Conclusion

In this paper, a novel algorithm has been proposed to segment texture images. The proposed algorithm mainly consists of two parts: the fusion of different texture features and the establishment of a convex energy function. Firstly, the GLCM feature extracts the higher-frequency texture information, while the Gabor feature extracts the lower- and mid-frequency texture information. Since the two kinds of texture information are complementary, they can be fused together, after PCA optimization, to handle texture segmentation at all frequencies. Secondly, a convex energy function is defined by introducing the vector-valued ACWE model into a GMAC framework to avoid local minima in the process of energy minimization. In addition, the fast dual formulation has been adopted to overcome the drawbacks of the usual gradient descent flow methods. The proposed algorithm thus addresses both the local-minima problem of the energy function and the segmentation of complex textures. Compared with the individual texture features, the proposed algorithm using the fused texture features obtains more satisfactory segmentation results.
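The fusion step summarized above (concatenating per-pixel GLCM and Gabor responses and compressing them with PCA) can be sketched as follows; the channel counts, function name, and plain SVD-based PCA are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fuse_features_pca(glcm_feats, gabor_feats, n_components=29):
    """Stack per-pixel GLCM and Gabor feature maps, then project the
    stacked vectors onto the leading PCA directions.

    glcm_feats:  (H, W, d1) array of GLCM channels.
    gabor_feats: (H, W, d2) array of Gabor channels.
    Returns an (H, W, n_components) fused feature map.
    """
    h, w = glcm_feats.shape[:2]
    stacked = np.concatenate([glcm_feats, gabor_feats], axis=-1)
    x = stacked.reshape(-1, stacked.shape[-1]).astype(float)
    x -= x.mean(axis=0)                       # center before PCA
    # Principal directions come from the SVD of the centered data.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    fused = x @ vt[:n_components].T
    return fused.reshape(h, w, n_components)

# Toy run: an 8x8 image with 16 GLCM and 24 Gabor channels fused to 29.
rng = np.random.default_rng(0)
fused = fuse_features_pca(rng.normal(size=(8, 8, 16)),
                          rng.normal(size=(8, 8, 24)))
print(fused.shape)  # (8, 8, 29)
```

The fused per-pixel vectors would then drive the vector-valued ACWE energy inside the GMAC framework.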

Acknowledgement

This work is supported in part by the Doctorate Research Funding of Zhengzhou University of Light Industry under grant No. 2013BSJJ041, in part by the Scientific and Technological Project of Henan Province under grant No. 14A520034, in part by the Outstanding Innovative Talent Program Foundation of Henan Province under grant No. 134200510025, in part by the Project of the Distinguished Professorship in Henan Province, China, for Professor De-Shuang Huang, in part by the Scientific Research Foundation for the Returned Overseas Chinese Scholars from the Ministry of Human Resources and Social Security, in part by the Program for Liaoning Excellent Talents in University (LNET) under grant No. LJQ2013054, and in part by the National Natural Science Foundation of China under grant Nos. 61201454 and 61302118.

References

1. N. R. Pal, S. K. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277–1294, 1993.

2. T.R. Reed, J.M.H. Dubuf, “A review of recent texture segmentation and feature extraction techniques,” Computer Vision, Graphics, and Image Processing: Image Understanding, vol. 57, no. 3, pp. 359-372, 1993.

3. C. Sagiv, N. Sochen, and Y. Y. Zeevi, “Integrated active contours for texture segmentation,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1633–1646, 2006.

4. Y. Y. Li, H. Z. Shi, L. C. Jiao, and R. C. Liu, “Quantum evolutionary clustering algorithm based on watershed applied to SAR image segmentation,” Neurocomputing, vol. 87, pp. 90–98, 2012.

5. H. Y. Zhou, Y. Yuan, F. Q. Lin, and T. W. Liu, “Level set image segmentation with Bayesian analysis,” Neurocomputing, vol. 71, pp. 1994–2000, 2008.

6. D. S. Huang, “Radial basis probabilistic neural networks: model and application,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, no. 7, pp. 1083–1101, 1999.

7. D. S. Huang, and J. X. Du, “A constructive hybrid structure optimization methodology for radial basis probabilistic neural networks,” IEEE Transactions on Neural Networks, vol. 19, no. 12, pp. 2099-2115, 2008.

8. C. Gao, D. G. Zhou, and Y. C. Guo, “Automatic iterative algorithm for image segmentation using a modified pulse-coupled neural network,” Neurocomputing, vol. 119, pp. 332–338, 2013.

9. S. Bhattacharyya, U. Maulik, and P. Dutta, “A parallel bi-directional self-organizing neural network (PBDSONN) architecture for color image extraction and segmentation,” Neurocomputing, vol. 86, pp. 1–23, 2012.

10. H. Q. Liu, L. C. Jiao, and F. Zhao, “Non-local spatial spectral clustering for image segmentation,” Neurocomputing, vol. 74, pp. 461–471, 2010.

11. Q. H. Huang, X. Bai, Y. G. Li, L. W. Jin, and X. L. Li, “Optimized graph-based segmentation for ultrasound images,” Neurocomputing, vol. 129, pp. 216–224, 2014.

12. M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, “Two lattice computing approaches for the unsupervised segmentation of hyperspectral images,” Neurocomputing, vol. 72, pp. 2111–2120, 2009.

13. T. Chan, B. Sandberg, and L. Vese, “Active contours without edges for vector-valued images,” Journal of Visual Communication and Image Representation, vol. 11, no. 2, pp. 130–141, 2000.

14. D. Cremers, M. Rousson, and R. Deriche, “A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape,” International Journal of Computer Vision, vol. 72, no. 2, pp. 195–215, 2007.


15. L. He, Z. Peng, B. Everding, X. Wang, C. Han, K. Weiss, and W. Wee, “A comparative study of deformable contour methods on medical image segmentation,” Image and Vision Computing, vol. 26, no. 5, pp. 141-163, 2008.

16. T. Chan and L. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.

17. D. Mumford and J. Shah, “Optimal approximations by piecewise smooth functions and associated variational problems,” Communications on Pure and Applied Mathematics, vol. 42, pp. 577–685, 1989.

18. M. Lianantonakis and Y. R. Petillot, “Sidescan sonar segmentation using texture descriptors and active contours,” IEEE Journal of Oceanic Engineering, vol. 32, no. 3, pp. 744–752, 2007.

19. C. Li, C. Kao, and J. Gore, “Minimization of region-scalable fitting energy for image segmentation,” IEEE Transactions on Image Processing, vol. 17, no. 10, pp. 1940–1949, 2008.

20. Q. G. Wu, J. B. An, and B. Lin, “A texture segmentation algorithm based on PCA and global minimization active contour model for aerial insulator images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 5, pp. 1509–1518, 2012.

21. Y. J. Chen, J. W. Zhang, A. Mishra, and J. W. Yang, “Image segmentation and bias correction via an improved level set method,” Neurocomputing, vol. 74, pp. 3520–3530, 2011.

22. S. A. Mohand, and D. Ziou, “Object tracking in videos using adaptive mixture models and active contours,” Neurocomputing, vol. 71, pp. 2001–2011, 2008.

23. K. Tabb, N. Davey, R. Adams, and S. George, “The recognition and analysis of animate objects using neural networks and active contour models,” Neurocomputing, vol. 43, pp. 145–172, 2002.

24. X. F. Wang, and D. S. Huang, “A novel density-based clustering framework by using level set method,” IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 11, pp. 1515-1531, 2009.

25. K. H. Zhang, H. H. Song, and L. Zhang, “Active contours driven by local image fitting energy,” Pattern Recognition, vol. 43, no. 4, pp. 1199–1206, 2010.

26. X. F. Wang, D. S. Huang, and H. Xu, “An efficient local Chan-Vese model for image segmentation,” Pattern Recognition, vol. 43, no. 3, pp. 603-618, 2010.

27. A. Mishra, P. Fieguth, and D. A. Clausi, “Decoupled active contour (DAC) for boundary detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 310-324, 2011.

28. T. Chan, S. Esedoglu, and M. Nikolova, “Algorithms for finding global minimizers of image segmentation and denoising models,” SIAM Journal on Applied Mathematics, vol. 66, no. 5, pp. 1632–1648, 2006.

29. X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher, “Fast global minimization of the active contour/snake model,” Journal of Mathematical Imaging and Vision, vol. 28, no. 2, pp. 151–167, 2007.

30. M. Tuceryan and A. K. Jain, “Texture analysis,” The Handbook of Pattern Recognition and Computer Vision (2nd Edition), 1998.

31. R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.

32. R. S. Javier, “TEXSOM: texture segmentation using self-organizing maps,” Neurocomputing, vol. 21, pp. 7–18, 1998.


33. D. A. Clausi, “Comparison and fusion of co-occurrence, Gabor and MRF texture features for classification of SAR sea-ice imagery,” Atmosphere-Ocean, vol. 39, no. 3, pp. 183–194, 2000.

34. A. H. S. Solberg and A. K. Jain, “Texture fusion and feature selection applied to SAR imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 35, no. 2, pp. 475–479, Mar. 1997.

35. Z. Q. Zhao, D. S. Huang, and B. Y. Sun, “Human face recognition based on multiple features using neural networks committee,” Pattern Recognition Letters, vol. 25, no. 12, pp. 1351–1358, 2004.

36. L. Shang, D. S. Huang, J. X. Du, and C. H. Zheng, “Palmprint recognition using fast ICA algorithm and radial basis probabilistic neural network,” Neurocomputing, vol. 69, nos. 13-15, pp. 1782-1786, 2006.

37. B. Li, and D. S. Huang, “Locally linear discriminant embedding: an efficient method for face recognition,” Pattern Recognition, vol. 41, no. 12, pp. 3813-3821, 2008.

38. D. A. Clausi and H. Deng, “Design-based texture feature fusion using Gabor filters and co-occurrence probabilities,” IEEE Transactions on Image Processing, vol. 14, no. 7, pp. 925-936, 2005.

39. D. A. Clausi and M. E. Jernigan, “Designing Gabor filters for optimal texture separability,” Pattern Recognition, vol. 33, pp. 1835–1849, 2000.

40. P. Brodatz, “Texture - A Photographic Album for Artists and Designers,” Reinhold, New York, 1968.


Qinggang Wu received the M.S. and Ph.D. degrees in computer science, both from Dalian Maritime University, China, in 2008 and 2012, respectively.

He is currently an associate professor with the School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, China. His research interests include remote sensing image processing, image segmentation, pattern recognition, and computer vision.

Yong Gan received the B.Eng. degree in semiconductor physics and devices in 1986 from Xi'an Jiaotong University, the M.S. degree in computer devices and equipment in 1989 from the Xi'an Institute of Microelectronics, and the Ph.D. degree in computer science in 2006 from Xi'an Jiaotong University, Xi'an, Shaanxi, China.

He is currently the dean of the School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, China. His research interests include image processing, data communication and security, and emergency management.

Bin Lin received the B.Sc. degree in computer communications in 1999 and the M.Sc. degree in computer science in 2003, both from Dalian Maritime University, China. She received the Ph.D. degree in electrical and computer engineering from the University of Waterloo, Canada, in 2009.

She is currently an associate professor with the Information Science and Technology College, Dalian Maritime University. Her research interests include artificial intelligence, pattern recognition, neural networks, and marine remote sensing.

Qiuwen Zhang received his Ph.D. degree in communication and information systems from Shanghai University, Shanghai, China, in 2012, and the M.S. degree in computer science from Henan University of Technology, Zhengzhou, China, in 2009.

He is currently an associate professor in Zhengzhou University of Light Industry, China. His current research interests include image and video encoding.

Hua-wen Chang received his Ph.D. degree in computer science and technology from Sichuan University, Chengdu, China, in 2012, and the M.S. degree in computer science from Guilin University of Technology, Guilin, China, in 2007.

He is currently an associate professor at Zhengzhou University of Light Industry, China. His current research interests include image and video quality assessment, image fusion, and sparse representation.
