Research Article
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

Yong Yang,1 Song Tong,1 Shuying Huang,2 and Pan Lin3

1 School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330032, China
2 School of Software and Communication Engineering, Jiangxi University of Finance and Economics, Nanchang 330032, China
3 Institute of Biomedical Engineering, Xi'an Jiaotong University, Xi'an 710049, China

Correspondence should be addressed to Shuying Huang; shuyinghuang2010@126.com

Received 7 June 2014; Revised 5 August 2014; Accepted 6 August 2014; Published 24 August 2014

Academic Editor: Ezequiel López-Rubio

Copyright © 2014 Yong Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. The phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods, as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results of multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated with a clinical example of a woman affected with a recurrent tumor.

1. Introduction

Medical imaging has attracted increasing attention in recent years due to its vital role in medical diagnostics and treatment [1]. However, each imaging modality reports on a restricted domain and provides information that is partly common across modalities and partly unique to each [2]. For example, a computed tomography (CT) image can depict dense structures like bones and hard tissues with little distortion, whereas a magnetic resonance imaging (MRI) image better visualizes soft tissues [3]. Similarly, a T1-MRI image provides details of the anatomical structure of tissues, while a T2-MRI image provides information about normal and pathological tissues [4]. As a result, multimodal medical images, which carry relevant and complementary information, need to be combined into a compendious figure [5]. Multimodal medical image fusion is a possible way to integrate complementary information from multiple modality images [6]. Image fusion not only obtains a more accurate and complete description of the same target but also reduces randomness and redundancy, increasing the clinical applicability of image-guided diagnosis and assessment of medical problems [7].

Generally, image fusion techniques can be divided into spatial domain and frequency domain techniques [8]. Spatial domain techniques are carried out directly on the source images. The weighted average method is the simplest spatial domain approach. However, along with its simplicity, this method leads to several undesirable side effects like reduced contrast [9]. Other spatial domain techniques have been developed, such as intensity-hue-saturation (IHS), principal component analysis (PCA), and the Brovey transform [10]. Although the fused images obtained by these methods have high spatial quality, they usually overlook the high quality



of spectral information and suffer from spectral degradation [10]. Li et al. [11] introduced the artificial neural network (ANN) to perform image fusion. However, the performance of ANN depends on the sample images, and this is not an appealing characteristic. Yang and Blum [12] used a statistical approach to fuse the images. In their method, the distortion is modeled as a mixture of Gaussian probability density functions, which is a limiting assumption. Since actual objects usually contain structures at many different scales or resolutions, and multiscale techniques can provide the means to exploit this fact, the frequency domain techniques, especially the multiscale techniques, have attracted more and more interest in image fusion [13].

In frequency domain techniques, each source image is first decomposed into a sequence of multiscale coefficients. Various fusion rules are then employed in the selection of these coefficients, which are synthesized via inverse transforms to form the fused image. Recently, a series of frequency domain methods have been explored using multiscale transforms, including the Laplacian pyramid transform, gradient pyramid transform, filter-subtract-decimate pyramid transform, discrete wavelet transform (DWT), and complex wavelet transform (CWT) [14–20]. There is evidence that multiscale transform based signal decomposition is similar to the human visual system. As we know, wavelet analysis, with its excellent localization property in both the time and frequency domains, has become one of the most commonly used methods in the field of multiscale transform based image fusion [16]. However, wavelet analysis cannot effectively represent the line singularities and plane singularities of images and thus cannot accurately represent the directions of image edges. To overcome these shortcomings of the wavelet transform, Do and Vetterli [17] proposed the Contourlet transform, which gives an asymptotically optimal representation of contours and has been successfully used for image fusion. However, the up- and downsampling processes of Contourlet decomposition and reconstruction leave the Contourlet transform lacking shift-invariance, producing pseudo-Gibbs phenomena in the fused image [19]. Later, da Cunha et al. [20] proposed the nonsubsampled Contourlet transform (NSCT) based on the Contourlet transform. This method inherits the advantages of the Contourlet transform while possessing shift-invariance and effectively suppressing pseudo-Gibbs phenomena.

Although quite good results have been reported by NSCT based methods, there is still much room to improve the fusion performance in coefficient selection, as follows.

(a) The low-frequency coefficients of the fused image are often simply acquired by averaging the low-frequency coefficients of the input images. This rule decreases contrast in the fused images [21] and cannot give a fused subimage of high quality for medical images.

(b) The popularly used larger-absolute-value rule is implemented on the value of a single pixel of the current high-frequency subband. The disadvantage of this method is that the rule considers only the value of a single pixel and ignores the relationships between corresponding coefficients in the high-frequency subbands [22].

(c) Most fusion rules of the NSCT-based methods are designed for multifocus images [23], remote sensing images [24], and infrared and visible images [25]. Their results are not of the same quality when applied to multimodal medical images. For example, Chai et al. [22] proposed an NSCT method based on features contrast of multiscale products to fuse multifocus images. However, it has been shown that this algorithm cannot utilize prominent information present in the subbands efficiently, which results in poor quality when it is used to fuse multimodal medical images [26].

In this paper, a novel fusion framework based on NSCT is proposed for multimodal medical images. The main contribution of the method lies in the proposed fusion rule, which can capture the best membership of the source images' coefficients to the corresponding fused coefficient. The phase congruency and Log-Gabor energy are unified as the fusion rules for low- and high-frequency coefficients, respectively. The phase congruency provides a contrast- and brightness-invariant representation of the low-frequency coefficients, whereas the Log-Gabor energy efficiently determines the frequency coefficients from the clear and detailed parts in the high frequencies. The combination of these two techniques can preserve more details from the source images and thus improve the quality of the fused images. Experiments indicate that the proposed framework can provide a better fusion outcome than a series of traditional image fusion methods in terms of both subjective and objective evaluations.

The rest of the paper is organized as follows. NSCT and phase congruency are described in Section 2, followed by the proposed multimodal medical image fusion framework in Section 3. Experimental results and discussions are given in Section 4, and the concluding remarks are presented in Section 5.

2. Preliminaries

This section provides a description of the concepts on which the proposed framework is based. These concepts, including NSCT and phase congruency, are described as follows.

2.1. Nonsubsampled Contourlet Transform (NSCT). The Contourlet transform can be divided into two stages [19], the Laplacian pyramid (LP) and the directional filter bank (DFB), and offers an efficient directional multiresolution image representation. The LP is first utilized to capture point singularities, followed by the DFB to link the singular points into linear structures. The LP is used to decompose the original images into low-frequency and high-frequency subimages, and the DFB divides the high-frequency subbands into directional subbands. The Contourlet decomposition schematic diagram is shown in Figure 1.


Figure 1: Contourlet decomposed schematic diagram.

Figure 2: NSCT decomposed schematic diagram.

The NSCT is proposed based on the theory of the Contourlet transform. NSCT inherits the advantages of the Contourlet transform, enhances directional selectivity and shift-invariance, and effectively overcomes the pseudo-Gibbs phenomena. NSCT is built on the nonsubsampled pyramid filter bank (NSPFB) and the nonsubsampled directional filter bank (NSDFB) [21]. Figure 2 gives the NSCT decomposition framework with $k = 2$ levels.

The NSPFB ensures the multiscale performance by taking advantage of a two-channel nonsubsampled filter bank; one low-frequency subband and one high-frequency subband are produced at each decomposition level. The NSDFB is a two-channel nonsubsampled filter bank constructed by eliminating the downsamplers and upsamplers and combining the directional fan filter banks in the nonsubsampled directional filter [23]. The NSDFB allows direction decomposition with $l$ levels in each high-frequency subband from the NSPFB and then produces $2^l$ directional subbands with the same size as the source images. Thus, the NSDFB provides the NSCT with multidirection performance and offers more precise directional detail information to get more accurate results [23]. Therefore, NSCT leads to better frequency selectivity and an essential shift-invariance property on account of its nonsubsampled operation. The sizes of the different subimages decomposed by NSCT are identical. Additionally, NSCT-based image fusion can effectively mitigate the effects of misregistration on the results [27]. Therefore, NSCT is more suitable for image fusion.
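To make the nonsubsampled decomposition concrete, the following minimal Python sketch implements an undecimated (à trous) pyramid. It is only a simplified stand-in for the NSPFB stage: it omits the directional NSDFB stage and uses a generic B3-spline lowpass filter rather than the filters used in this paper, and all function names and parameters are illustrative. Its key property matches the text above: every subband has the same size as the input and no down- or upsampling is performed, which is what yields shift-invariance.

```python
import numpy as np
from scipy import ndimage

def atrous_pyramid(img, levels=3):
    """Undecimated (a trous) pyramid: a simplified stand-in for the NSPFB.

    Returns one low-frequency subimage and `levels` high-frequency
    subimages, all the same size as the input (no down/upsampling).
    """
    k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline lowpass
    kernel = np.outer(k1d, k1d)
    current, highs = img.astype(float), []
    for j in range(levels):
        # Dilate the kernel with 2**j - 1 zeros between taps instead of
        # decimating the image (the "holes" of the a trous scheme).
        step = 2 ** j
        dilated = np.zeros((4 * step + 1, 4 * step + 1))
        dilated[::step, ::step] = kernel
        low = ndimage.convolve(current, dilated, mode='reflect')
        highs.append(current - low)        # detail (high-frequency) subimage
        current = low
    return current, highs

img = np.random.rand(256, 256)
low, highs = atrous_pyramid(img)
assert np.allclose(img, low + sum(highs))  # trivial perfect reconstruction
```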

2.2. Phase Congruency. Phase congruency is a feature perception approach which provides information that is invariant to image illumination and contrast [28]. This model is built on the Local Energy Model [29], which postulates that important features can be found at points where the Fourier components are maximally in phase. Furthermore, the angle at which the phase congruency occurs signifies the feature type. Phase congruency can be used for feature detection [30]. The model provides useful feature localization and noise compensation. The phase congruency at a point $(i, j)$ can be defined as follows [31]:

$$\mathrm{PC}_{\theta}(i,j)=\frac{\sum_{n} W_{\theta}(i,j)\left[A_{n}^{\theta}(i,j)\left(\cos\left(\phi_{n}^{\theta}(i,j)-\bar{\phi}_{n}^{\theta}(i,j)\right)-\left|\sin\left(\phi_{n}^{\theta}(i,j)-\bar{\phi}_{n}^{\theta}(i,j)\right)\right|\right)-T\right]_{+}}{\sum_{n} A_{n}^{\theta}(i,j)+\varepsilon} \tag{1}$$

where $\theta$ is the orientation, $W_{\theta}(i,j)$ is the weighting factor based on frequency spread, $A_{n}^{\theta}(i,j)$ and $\phi_{n}^{\theta}(i,j)$ are the amplitude and phase for wavelet scale $n$, respectively, $\bar{\phi}_{n}^{\theta}(i,j)$ is the weighted mean phase, $T$ is a noise threshold constant, and $\varepsilon$ is a small constant value to avoid division by zero. The notation $[\cdot]_{+}$ denotes that the enclosed quantity is equal to itself when the value is positive and zero otherwise. For details of the phase congruency measure, see [29].
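As a rough illustration of how (1) can be evaluated, the sketch below computes the phase congruency map for one orientation from a list of complex-valued wavelet responses (one per scale $n$). For simplicity the weighting factor $W_{\theta}$ is taken as a constant rather than computed from the frequency spread, and the threshold and stabilizer values are placeholders; none of this is the authors' implementation.

```python
import numpy as np

def phase_congruency(responses, W=1.0, T=0.05, eps=1e-4):
    """Minimal sketch of (1) for a single orientation.

    `responses` is a list of complex filter responses, one per wavelet
    scale n; A_n = |response| and phi_n = angle(response).
    """
    A = [np.abs(r) for r in responses]
    # Weighted mean phase: angle of the sum of unit-length phase vectors.
    mean_phi = np.angle(sum(r / (np.abs(r) + eps) for r in responses))
    num = np.zeros_like(A[0])
    for r, a in zip(responses, A):
        d = np.angle(r) - mean_phi
        num += W * np.maximum(a * (np.cos(d) - np.abs(np.sin(d))) - T, 0.0)
    return num / (sum(A) + eps)
```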

As we know, multimodal medical images have the following characteristics:

(a) the images of different modalities have significantly different pixel mappings;

(b) the capturing environments of different modalities vary, resulting in changes of illumination and contrast;

(c) the edges and corners in the images are identified by collecting frequency components of the image that are in phase.

According to the literature [26, 32], it is easy to find that phase congruency is not only invariant to different pixel intensity mappings and to illumination and contrast changes but also detects the Fourier components that are maximally in phase. These properties lead to efficient fusion. That is why we use phase congruency for multimodal medical image fusion.

3. Proposed Multimodal Medical Image Fusion Framework

The framework of the proposed multimodal medical image fusion algorithm is depicted in Figure 3, but before describing it, the definition of local Log-Gabor energy in the NSCT domain is first given as follows.

3.1. Log-Gabor Energy in NSCT Domain. The high-frequency coefficients in the NSCT domain represent the detailed components of the source images, such as the edges, textures, and region boundaries [21]. In general, the coefficients with larger absolute values correspond to sharper brightness in the image.


Figure 3: The framework of the proposed fusion algorithm.

It is to be noted that noise is also related to high-frequency coefficients and may cause miscalculation of sharpness values and therefore affect the fusion performance [26]. Furthermore, the human visual system is generally more sensitive to texture detail features than to the value of a single pixel.

To overcome the defects mentioned above, a novel high-frequency fusion rule based on local Log-Gabor energy is designed in this paper. The Gabor wavelet is a popular technique that has been extensively used to extract texture features [33]. Log-Gabor filters are proposed based on Gabor filters. Compared with Gabor filters, Log-Gabor filters overcome the Gabor filter's deficiency in expressing high-frequency components and accord better with the human visual system [34]. Therefore, the Log-Gabor wavelet can achieve optimal spatial orientation and wider spectrum information at the same time and thus more truly reflects the frequency response of natural images, improving performance in terms of accuracy [35].

Under polar coordinates, the Log-Gabor wavelet is expressed as follows [36]:

$$g(f,\theta)=\exp\left\{-\frac{\left[\ln\left(f/f_{0}\right)\right]^{2}}{2\left[\ln\left(\sigma/f_{0}\right)\right]^{2}}\right\}\times\exp\left\{-\frac{\left(\theta-\theta_{0}\right)^{2}}{2\sigma_{\theta}^{2}}\right\} \tag{2}$$

in which $f_{0}$ is the center frequency of the Log-Gabor filter, $\theta_{0}$ is the direction of the filter, $\sigma$ determines the bandwidth of the radial filter, and $\sigma_{\theta}$ determines the bandwidth in orientation. If $g_{kl}^{uv}(i,j)$ denotes the Log-Gabor wavelet at scale $u$ and direction $v$, the signal response is expressed as follows:

$$G_{kl}^{uv}(i,j)=H_{kl}(i,j)\ast g_{kl}^{uv}(i,j) \tag{3}$$

where $H_{kl}(i,j)$ is the coefficient located at $(i,j)$ in the high-frequency subimages of the source image $A$ or $B$ at the $k$th scale and $l$th direction, and $\ast$ denotes the convolution operation. The Log-Gabor energy of the high-frequency subimages at the $k$th scale and $l$th direction is expressed as follows:

$$E_{kl}(i,j)=\sum_{u=1}^{U}\sum_{v=1}^{V}\sqrt{\mathrm{real}\left(G_{kl}^{uv}(i,j)\right)^{2}+\mathrm{imag}\left(G_{kl}^{uv}(i,j)\right)^{2}} \tag{4}$$

in which $\mathrm{real}(G_{kl}^{uv}(i,j))$ is the real part of $G_{kl}^{uv}(i,j)$ and $\mathrm{imag}(G_{kl}^{uv}(i,j))$ is the imaginary part of $G_{kl}^{uv}(i,j)$. The Log-Gabor energy in the NSCT domain in the local area around the pixel $(i,j)$ is given as

$$\mathrm{LE}_{kl}(i,j)=\frac{1}{(2M+1)(2N+1)}\sum_{m=-M}^{M}\sum_{n=-N}^{N}E_{kl}(i+m,j+n) \tag{5}$$


in which $(2M+1)(2N+1)$ is the window size. The proposed definition of the local Log-Gabor energy not only extracts more useful features from the high-frequency coefficients but also performs well in noisy environments.
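The following Python sketch shows one plausible frequency-domain realization of (2)-(5): a bank of Log-Gabor filters is built on the FFT grid, the convolution of (3) is carried out as multiplication in the frequency domain (circular boundaries), and the pointwise energy of (4) is box-filtered to obtain the local energy of (5). The scale/orientation counts and bandwidth values are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np
from scipy import ndimage

def log_gabor_bank(shape, n_scales=4, n_orients=6, f0=0.1, sigma_ratio=0.55):
    """Frequency-domain Log-Gabor filters g(f, theta) of (2)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1e-9                              # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    sigma_theta = np.pi / n_orients
    bank = []
    for u in range(n_scales):
        fu = f0 * 2.0 ** u                      # center frequency per scale
        radial = np.exp(-np.log(f / fu) ** 2 / (2 * np.log(sigma_ratio) ** 2))
        radial[0, 0] = 0.0                      # zero DC response
        for v in range(n_orients):
            dt = np.angle(np.exp(1j * (theta - v * np.pi / n_orients)))
            bank.append(radial * np.exp(-dt ** 2 / (2 * sigma_theta ** 2)))
    return bank

def local_log_gabor_energy(H_kl, bank, M=1, N=1):
    """Equations (3)-(5): responses G, pointwise energy E, local mean LE."""
    F = np.fft.fft2(H_kl)
    E = sum(np.abs(np.fft.ifft2(F * g)) for g in bank)  # |G| = sqrt(re^2+im^2)
    return ndimage.uniform_filter(E, size=(2 * M + 1, 2 * N + 1))
```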

3.2. Proposed Fusion Framework. The proposed NSCT-based image fusion framework is discussed in this subsection. The input multimodal medical images ($A$ and $B$) are assumed to be perfectly registered. The framework of the proposed fusion method is shown in Figure 3 and described in the following three steps.

Step 1. Perform $\theta$-level NSCT on $A$ and $B$ to obtain one low-frequency subimage and a series of high-frequency subimages at each level and direction $l$, that is, $A: \{L^{A}, H_{kl}^{A}\}$ and $B: \{L^{B}, H_{kl}^{B}\}$, where $L^{A}$ and $L^{B}$ are the low-frequency subimages and $H_{kl}^{A}$ and $H_{kl}^{B}$ represent the high-frequency subimages at level $k \in [1, \theta]$ in the orientation $l$.

Step 2. Fuse the low- and high-frequency subbands via the following novel fusion rules to obtain the composite low- and high-frequency subbands.

The low-frequency coefficients represent the approximation component of the source images. The popular, widely used approach is to apply averaging methods to produce the fused coefficients. However, this rule reduces contrast in the fused images and cannot give a fused subimage of high quality for medical images. Therefore, the criterion based on the phase congruency introduced in Section 2.2 is employed to fuse the low-frequency coefficients. The fusion rule for the low-frequency subbands is defined as

$$L^{F}(i,j)=\begin{cases}L^{A}(i,j), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j)>\mathrm{PC}_{\theta}^{B}(i,j)\\ L^{B}(i,j), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j)<\mathrm{PC}_{\theta}^{B}(i,j)\\ 0.5\left(L^{A}(i,j)+L^{B}(i,j)\right), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j)=\mathrm{PC}_{\theta}^{B}(i,j)\end{cases} \tag{6}$$

where $\mathrm{PC}_{\theta}^{A}(i,j)$ and $\mathrm{PC}_{\theta}^{B}(i,j)$ are the phase congruency extracted from the low-frequency subimages of the source images $A$ and $B$, respectively.
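A direct transcription of rule (6) in Python (assuming the PC maps have already been computed, for example with a sketch like the one in Section 2.2) might look as follows; the function name is ours.

```python
import numpy as np

def fuse_low(LA, LB, pcA, pcB):
    """Low-frequency rule (6): keep the coefficient with the larger phase
    congruency; average the two coefficients on exact ties."""
    fused = np.where(pcA > pcB, LA, LB)
    return np.where(pcA == pcB, 0.5 * (LA + LB), fused)
```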

For the high-frequency coefficients, the most common fusion rule is selecting the coefficient with the larger absolute value. This rule does not take the surrounding pixels into consideration and cannot give fused components of high quality for medical images. Especially when the source images contain noise, the noise could be mistaken for fused coefficients and cause miscalculation of the sharpness value. Therefore, the criterion based on Log-Gabor energy is introduced to fuse the high-frequency coefficients. The fusion rule for the high-frequency subbands is defined as

$$H_{kl}^{F}(i,j)=\begin{cases}H_{kl}^{A}(i,j), & \text{if } \mathrm{LE}_{kl}^{A}(i,j)\ge \mathrm{LE}_{kl}^{B}(i,j)\\ H_{kl}^{B}(i,j), & \text{otherwise}\end{cases} \tag{7}$$

where $\mathrm{LE}_{kl}^{A}(i,j)$ and $\mathrm{LE}_{kl}^{B}(i,j)$ are the local Log-Gabor energies extracted from the high-frequency subimages at the $k$th scale and $l$th direction of the source images $A$ and $B$, respectively.
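Rule (7) is equally direct to transcribe; here leA and leB are the local Log-Gabor energy maps of the two subbands, as produced for instance by the local_log_gabor_energy() sketch above.

```python
import numpy as np

def fuse_high(HA, HB, leA, leB):
    """High-frequency rule (7): per pixel, select the coefficient whose
    local Log-Gabor energy (5) is larger (ties go to image A)."""
    return np.where(leA >= leB, HA, HB)
```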

Step 3. Perform $\theta$-level inverse NSCT on the fused low- and high-frequency subimages. The fused image is ultimately obtained in this way.

4. The Experimental Results and Analysis

It is well known that different image quality metrics reflect the visual quality of images from different aspects, but none of them can directly measure the overall quality. In this paper, we consider both the visual representation and quantitative assessment of the fused images. For evaluation of the proposed fusion method, we have considered five separate fusion performance metrics, defined as below.

4.1. Evaluation Index System

4.1.1. Standard Deviation. The standard deviation (STD) of an image with size of $M_0 \times N_0$ is defined as [37]:

$$\mathrm{Std}=\left(\frac{1}{M_{0}\times N_{0}}\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left(F(i,j)-\mu\right)^{2}\right)^{1/2} \tag{8}$$

where $F(i,j)$ is the pixel value of the fused image at the position $(i,j)$ and $\mu$ is the mean value of the image. The STD can be used to estimate how widely the gray values spread in an image. The larger the STD, the better the result.
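For reference, (8) is just the population standard deviation, as the one-line sketch below makes explicit.

```python
import numpy as np

def std_metric(F):
    """STD of (8); identical to np.std(F) for a 2D image."""
    return np.sqrt(np.mean((F - F.mean()) ** 2))
```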

4.1.2. Edge Based Similarity Measure. The edge based similarity measure ($Q^{AB/F}$) was proposed by Xydeas and Petrovic [38]. The definition is given as

$$Q^{AB/F}=\frac{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[Q^{AF}(i,j)\,w^{A}(i,j)+Q^{BF}(i,j)\,w^{B}(i,j)\right]}{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[w^{A}(i,j)+w^{B}(i,j)\right]} \tag{9}$$

where $w^{A}(i,j)$ and $w^{B}(i,j)$ are the corresponding gradient strengths for images $A$ and $B$, respectively. The definitions of $Q^{AF}(i,j)$ and $Q^{BF}(i,j)$ are given as

$$Q^{AF}(i,j)=Q_{a}^{AF}(i,j)\,Q_{g}^{AF}(i,j),\qquad Q^{BF}(i,j)=Q_{a}^{BF}(i,j)\,Q_{g}^{BF}(i,j) \tag{10}$$

where $Q_{a}^{xF}(i,j)$ and $Q_{g}^{xF}(i,j)$ are the edge strength and orientation preservation values at each location for image $x$ ($A$ or $B$), respectively. The edge based similarity measure gives the similarity between the edges transferred from the input images to the fused image [26]. The larger the value, the better the fusion result.


4.1.3. Mutual Information. Mutual information (MI) [39] between the fused image $F$ and the source images $A$ and $B$ is defined as follows:

$$\mathrm{MI}=\mathrm{MI}^{AF}+\mathrm{MI}^{BF},$$
$$\mathrm{MI}^{AF}=\sum_{f=0}^{L}\sum_{a=0}^{L} p^{AF}(a,f)\log_{2}\frac{p^{AF}(a,f)}{p^{A}(a)\,p^{F}(f)},$$
$$\mathrm{MI}^{BF}=\sum_{f=0}^{L}\sum_{b=0}^{L} p^{BF}(b,f)\log_{2}\frac{p^{BF}(b,f)}{p^{B}(b)\,p^{F}(f)} \tag{11}$$

where $\mathrm{MI}^{xF}$ denotes the normalized mutual information between the fused image and the input image $A$ or $B$; $a, b, f \in [0, L]$, and $L$ is the number of bins; $p^{A}(a)$, $p^{B}(b)$, and $p^{F}(f)$ are the normalized gray level histograms of the source images and the fused image; and $p^{xF}(\cdot, f)$ is the joint gray level histogram between the fused image and each source image.

MI can indicate how much information the fused image $F$ conveys about the source images $A$ and $B$ [22]. Therefore, the greater the value of MI, the better the fusion effect.
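A minimal sketch of (11) for 8-bit images, estimating the marginal and joint distributions with 256-bin histograms (the binning choices are ours):

```python
import numpy as np

def mutual_information(A, B, F, bins=256):
    """MI of (11): MI^{AF} + MI^{BF} from joint/marginal histograms."""
    def mi_pair(X, F):
        joint, _, _ = np.histogram2d(X.ravel(), F.ravel(),
                                     bins=bins, range=[[0, 256], [0, 256]])
        p_xf = joint / joint.sum()
        p_x = p_xf.sum(axis=1, keepdims=True)       # marginal of X
        p_f = p_xf.sum(axis=0, keepdims=True)       # marginal of F
        nz = p_xf > 0                               # skip empty bins
        return np.sum(p_xf[nz] * np.log2(p_xf[nz] / (p_x @ p_f)[nz]))
    return mi_pair(A, F) + mi_pair(B, F)
```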

4.1.4. Cross Entropy. The cross entropy is defined as [8]:

$$\mathrm{CE}=\sum_{l=0}^{L-1}P_{l}\log_{2}\frac{P_{l}}{Q_{l}} \tag{12}$$

where $P_{l}$ and $Q_{l}$ denote the gray level histograms of the source image and the fused image, respectively. The cross entropy is used to evaluate the difference between the source images and the fused image. A lower value corresponds to a better fusion result.
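A small sketch of (12), again assuming 8-bit images and 256 histogram bins; empty bins are skipped to keep the logarithm finite:

```python
import numpy as np

def cross_entropy(src, F, bins=256):
    """Cross entropy of (12) between a source image and the fused image."""
    P, _ = np.histogram(src, bins=bins, range=(0, 256), density=True)
    Q, _ = np.histogram(F, bins=bins, range=(0, 256), density=True)
    nz = (P > 0) & (Q > 0)
    return np.sum(P[nz] * np.log2(P[nz] / Q[nz]))
```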

4.1.5. Spatial Frequency. The spatial frequency is defined as [40]:

$$\mathrm{SF}=\sqrt{\mathrm{RF}^{2}+\mathrm{CF}^{2}} \tag{13}$$

where RF and CF are the row frequency and column frequency, respectively, defined as

$$\mathrm{RF}=\sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j)-F(i,j-1)\right]^{2}},$$
$$\mathrm{CF}=\sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j)-F(i-1,j)\right]^{2}} \tag{14}$$

The spatial frequency reflects the edge information of the fused image. Larger spatial frequency values indicate better image quality.
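Equations (13)-(14) reduce to the root mean square of the horizontal and vertical first differences; the sketch below drops the first row/column boundary terms, a minor deviation from the $1/(M_0 N_0)$ normalization:

```python
import numpy as np

def spatial_frequency(F):
    """SF of (13)-(14) from first differences of the fused image."""
    F = F.astype(float)
    rf2 = np.mean(np.diff(F, axis=1) ** 2)   # row frequency, F(i,j)-F(i,j-1)
    cf2 = np.mean(np.diff(F, axis=0) ** 2)   # column frequency, F(i,j)-F(i-1,j)
    return np.sqrt(rf2 + cf2)
```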

4.2. Experiments on Multimodal Medical Image Fusion. To evaluate the performance of the proposed image fusion approach, experiments are performed on three groups of multimodal medical images. These images are characterized in two distinct pairs: (1) CT and MRI; (2) MR-T1 and MR-T2. The images in Figures 4(a1)-4(b1) and 4(a2)-4(b2) are CT and MRI images, whereas Figures 4(a3)-4(b3) are a T1-weighted MR image (MR-T1) and a T2-weighted MR image (MR-T2). All images have the same size of 256 × 256 pixels with 256-level gray scale. For all these image groups, the results of the proposed fusion framework are compared with those of the traditional discrete wavelet transform (DWT) [13, 16], the second generation curvelet transform (fast discrete curvelet transform, FDCT) [41, 42], the dual tree complex wavelet transform (DTCWT) [4], and the nonsubsampled Contourlet transform (NSCT-1 and NSCT-2) based methods. The high-frequency and low-frequency coefficients of the DWT, FDCT, DTCWT, and NSCT-1 based methods are merged by the popular, widely used fusion rule of selecting the coefficient with the larger absolute value and by the averaging rule, respectively (the average-maximum rule). The NSCT-2 based method uses the fusion rules proposed by Bhatnagar et al. in [26]. In order to perform a fair comparison, the source images are all decomposed into the same number of levels (3) for these methods, except the FDCT method. For the DWT method, the images are decomposed using the DBSS (2, 2) wavelet. For implementing NSCT, "9-7" filters and "pkva" filters (how to set the filters can be seen in [43]) are used as the pyramidal and directional filters, respectively.

4.2.1. Subjective Evaluation. The first pair of medical images comprises two groups of brain CT and MRI images of different aspects, shown in Figures 4(a1), 4(b1) and 4(a2), 4(b2), respectively. It can be easily seen that the CT image shows the dense structure while the MRI provides information about soft tissues. The fused images obtained from DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 are shown in Figures 4(c1)-4(g1) and 4(c2)-4(g2), respectively. The results of the proposed fusion method are shown in Figures 4(h1) and 4(h2). On comparing these results, it can be easily observed that the proposed method outperforms those fusion methods and gives a good visual representation of the fused image.

The second pair of medical images consists of the MR-T1 and MR-T2 images shown in Figures 4(a3) and 4(b3). The comparison of DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method, shown in Figures 4(c3)-4(h3), clearly implies that the fusion result of the proposed method has better quality and contrast than the other methods.

Similarly, on observing the noticeable improvement emphasized in Figure 4 by the red arrows and the analysis above, one can easily verify that, again, the proposed method is superior in terms of visual representation to the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

4.2.2. Objective Evaluation. For objective evaluation of the fusion results shown in Figure 4, we have used five fusion metrics: cross entropy, spatial frequency, STD, $Q^{AB/F}$, and MI. The quantitative comparisons of cross entropy and spatial frequency for these fused images are given in Figures 5 and 6, and the other metrics are given in Table 1.

On observing Figure 5, one can easily see that all three results of the proposed scheme have lower values of cross entropy than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.


Figure 4: Source multimodal medical images: (a1), (b1) image group 1 (CT and MRI); (a2), (b2) image group 2 (CT and MRI); (a3), (b3) image group 3 (MR-T1 and MR-T2). Fused images from: (c1), (c2), (c3) DWT based method; (d1), (d2), (d3) FDCT based method; (e1), (e2), (e3) DTCWT based method; (f1), (f2), (f3) NSCT-1 based method; (g1), (g2), (g3) NSCT-2 based method; (h1), (h2), (h3) our proposed method.


Table 1: Comparison on quantitative evaluation of different methods for the first set of medical images.

| Images | Evaluation | DWT | FDCT | DTCWT | NSCT-1 | NSCT-2 | Proposed method |
| Image group 1 (CT and MRI) | STD | 41.160 | 39.170 | 40.132 | 40.581 | 54.641 | 58.476 |
| | $Q^{AB/F}$ | 0.618 | 0.592 | 0.652 | 0.683 | 0.427 | 0.716 |
| | MI | 2.371 | 2.029 | 2.393 | 2.438 | 1.886 | 2.580 |
| Image group 2 (CT and MRI) | STD | 72.260 | 70.572 | 71.407 | 71.733 | 80.378 | 83.334 |
| | $Q^{AB/F}$ | 0.567 | 0.584 | 0.606 | 0.622 | 0.450 | 0.618 |
| | MI | 3.137 | 3.138 | 3.198 | 3.235 | 2.915 | 3.302 |
| Image group 3 (MR-T1 and MR-T2) | STD | 37.358 | 36.856 | 37.358 | 37.277 | 39.023 | 40.630 |
| | $Q^{AB/F}$ | 0.635 | 0.647 | 0.668 | 0.682 | 0.589 | 0.689 |
| | MI | 3.209 | 3.142 | 3.302 | 3.347 | 3.349 | 3.469 |

Figure 5: Comparison on cross entropy of different methods (DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method) for image groups 1-3.

The cross entropy is used to evaluate the difference between the source images and the fused image; therefore, a lower value corresponds to a better fusion result.

On observing Figure 6, two of the spatial frequency values of the fused images obtained by the proposed method are the highest, and the other one is 6.447, which is close to the highest value of 6.581. Observation of Table 1 shows that all three results of the proposed fusion scheme have higher values of STD, $Q^{AB/F}$, and MI than any of the other methods, except that one value of $Q^{AB/F}$ (image group 2) is the second best. However, an overall comparison shows the superiority of the proposed fusion scheme.

4.2.3. Combined Evaluation. Since the subjective and objective evaluations separately are not sufficient to examine the fusion results, we have combined them. From Figures 4-6 and Table 1, it is clear that the proposed method not only preserves most of the source images' characteristics and information but also improves the definition and the spatial quality better than the existing methods, which can be justified by the optimal values of the objective criteria, except one value of spatial frequency (image group 3) and one value of $Q^{AB/F}$ (image group 2). Consider the example of the first set of images: the five criteria values of the proposed method are 1.323 (cross entropy), 7.050 (spatial frequency), 58.476 (STD), 0.716 ($Q^{AB/F}$), and 2.580 (MI), each of which is the optimal one in the first set of experiments.

Figure 6: Comparison on spatial frequencies of different methods (DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method) for image groups 1-3.

Among these methods, the NSCT-2 based method gives poor results compared to the proposed NSCT-based method. This stems from the fact that the high-frequency fusion rule of the NSCT-2 based method is not able to extract the detail information in the high frequencies effectively. Also, by carefully looking at the outputs of the proposed NSCT-based method (Figures 4(h1), 4(h2), and 4(h3)), we can find that they have more contrast and more spatial resolution than the outputs of the NSCT-2 based method (highlighted by the red arrows) and the other methods. The main reason behind the better performance is that the proposed fusion rules for low- and high-frequency coefficients can effectively extract prominent and detail information from the source images. Therefore, it is possible to conclude that the proposed method is better than the existing methods.

4.3. Fusion of Multimodal Medical Noisy Images and a Clinical Example. To evaluate the performance of the proposed method in a noisy environment, input image group 1 has been additionally corrupted with Gaussian noise with a standard deviation of 5% (shown in Figures 7(a) and 7(b)). In addition, a clinical application on noninvasive diagnosis of neoplastic disease is given in the last subsection.

4.3.1. Fusion of Multimodal Medical Noisy Images. For comparison, apart from visual observation, objective criteria on STD, MI, and $Q^{AB/F}$ are used to evaluate how much clear or detail information of the source images is transferred to the fused images.


Figure 7: The multimodal medical images with noise: (a) CT image with 5% noise; (b) MRI image with 5% noise; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 2: Comparison on quantitative evaluation of different methods for the noisy medical images.

| Evaluation | DWT | FDCT | DTCWT | NSCT-1 | NSCT-2 | Proposed method |
| STD | 41.893 | 40.044 | 40.995 | 41.338 | 52.533 | 57.787 |
| $Q^{AB/F}$ | 0.592 | 0.579 | 0.619 | 0.637 | 0.269 | 0.640 |
| MI | 2.785 | 2.539 | 2.826 | 2.882 | 1.816 | 2.914 |

However, these criteria cannot effectively evaluate the performance of the fusion methods in terms of noise transmission. For further comparison, the Peak Signal to Noise Ratio (PSNR), a ratio between the maximum possible power of a signal and the power of the noise that affects its fidelity [44], is used. The larger the value of PSNR, the less the image distortion [45]. PSNR is formulated as

$$\mathrm{PSNR}=10\lg\left(\frac{255^{2}}{\mathrm{RMSE}^{2}}\right) \tag{15}$$

where RMSE denotes the Root Mean Square Error between the fused image and the reference image. The reference image in the following experiment is selected as Figure 4(h1), which was shown above to give the best performance compared to the other images.
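For completeness, (15) for 8-bit images is sketched below ($\lg$ denotes the base-10 logarithm):

```python
import numpy as np

def psnr(fused, reference):
    """PSNR of (15) via the RMSE between the fused and reference images."""
    rmse = np.sqrt(np.mean((fused.astype(float) - reference.astype(float)) ** 2))
    return 10.0 * np.log10(255.0 ** 2 / rmse ** 2)
```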

Figure 7 illustrates the fusion results obtained by the different methods. The comparison of the images fused by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method, shown in Figures 7(c)-7(h), clearly implies that the image fused by the proposed method has better quality and contrast than the other methods. Figure 8 shows the PSNR values of the different methods when fusing the noisy images. One can observe that the proposed method has higher PSNR values than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

Figure 8: Comparison on PSNR of different methods for the noisy medical images (the proposed method attains 23.704).

Table 2 gives the quantitative results of the fused images and shows that the values of STD, $Q^{AB/F}$, and MI are also the highest among all six methods. From the analysis above, we can observe that the proposed scheme provides the best performance and outperforms the other algorithms.


Figure 9: Brain images of the woman with recurrent tumor: (a) MR-T1 image; (b) MR-T2 image; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 3: Comparison on quantitative evaluation of different methods for the clinical medical images.

| Evaluation | DWT | FDCT | DTCWT | NSCT-1 | NSCT-2 | Proposed method |
| STD | 64.433 | 62.487 | 63.194 | 63.652 | 60.722 | 67.155 |
| $Q^{AB/F}$ | 0.541 | 0.565 | 0.578 | 0.597 | 0.411 | 0.624 |
| MI | 2.507 | 2.455 | 2.540 | 2.584 | 2.392 | 2.624 |

In addition, the comparison with the result of the NSCT-1 method using the average-maximum rule demonstrates the validity of the proposed fusion rules in a noisy environment.

4.3.2. A Clinical Example on Noninvasive Diagnosis. In order to demonstrate the practical value of the proposed scheme in medical imaging, one clinical case on neoplastic diagnosis is considered, where the MR-T1/MR-T2 medical modalities are used. The images have been downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). Figures 9(a)-9(b) show the recurrent tumor case of a 51-year-old woman who sought medical attention because of gradually increasing right hemiparesis (weakness) and hemianopia (visual loss). At craniotomy, left parietal anaplastic astrocytoma was found. A right frontal lesion was biopsied. A large region of mixed signal on the MR-T1 and MR-T2 images gives signs of the possibility of active tumor (highlighted by the red arrows).

Figures 9(c)-9(h) show the fused images by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method. It is obvious that the image fused by the proposed method has better contrast and sharpness of the active tumor (highlighted by the red arrows) than the other methods. Table 3 shows the quantitative evaluation of the different methods for the clinical medical images. The values of the proposed method are optimal in terms of STD, MI, and $Q^{AB/F}$. From Figure 9 and Table 3, we reach the same conclusion that the proposed scheme provides the best performance and outperforms the other algorithms.

5. Conclusions

Multimodal medical image fusion plays an important role in clinical applications, but the real challenge is to obtain a visually enhanced image through the fusion process. In this paper, a novel and effective image fusion framework based on NSCT and Log-Gabor energy is proposed. The potential advantages include: (1) NSCT is more suitable for image fusion because of its advantages such as multiresolution, multidirection, and shift-invariance; (2) a new pair of fusion rules based on phase congruency and Log-Gabor energy is used to preserve more useful information in the fused image, improve the quality of the fused images, and overcome the limitations of the traditional fusion rules; and (3) the proposed method can provide better performance than the current fusion methods whether the source images are clean or noisy. In the experiments, five groups of multimodal medical images, including one group with noise and one clinical example of a woman affected with a recurrent tumor, are


fused by using traditional fusion methods and the proposed framework. The subjective and objective comparisons clearly demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with less information distortion than other fusion methods. In the future, we plan to design a pure C++ platform to reduce the time cost and to extend our method to 3D or 4D medical image fusion.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61262034, 61462031, and 61473221), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Doctoral Fund of the Ministry of Education of China (no. 20120201120071), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), by the Science and Technology Application Project of Jiangxi Province (no. KJLD14031), by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ14334), and by the Fundamental Research Funds for the Central Universities of China.

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.
[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.
[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.
[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.
[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.
[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.
[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.
[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.
[9] Y. Yang, S. Y. Huang, F. J. Gao et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.
[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.
[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.
[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.
[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.
[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.
[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.
[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.
[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.
[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.
[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.
[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.
[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.
[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.
[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.
[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.
[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.
[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.
[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.
[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.
[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.
[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.
[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.
[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.
[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.
[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.
[41] E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.
[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.
[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.
[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 2: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

2 Computational and Mathematical Methods in Medicine

of spectral information and suffer from spectral degradation[10] Li et al [11] introduced the artificial neural network(ANN) to make image fusion However the performanceof ANN depends on the sample images and this is not anappealing characteristic Yang and Blum [12] used a statisticalapproach to fuse the images In their method the distortionis modeled as a mixture of Gaussian probability densityfunctions which is a limiting assumption Since the actualobjects usually contain structures at many different scalesor resolutions and multiscale techniques can provide themeans to exploit this fact the frequency domain techniquesespecially the multiscale techniques have attracted more andmore interest in image fusion [13]

In frequency domain techniques each source image isfirst decomposed into a sequence of multiscale coefficientsVarious fusion rules are then employed in the selection ofthese coefficients which are synthesized via inverse trans-forms to form the fused image Recently series of frequencydomain methods have been explored by using multiscaletransform including Laplacian pyramid transform gra-dient pyramid transform filter-subtract-decimate pyramidtransform discrete wavelet transform (DWT) and complexwavelet transform (CWT) [14ndash20] There is evidence thatmultiscale transform based signal decomposition is similarto the human visual system As we know the wavelet analysiswith its upstanding localize peculiarity in both time andfrequency domain has become one of the most commonlyused methods in the fields of multiscale transform basedimage fusion [16] However wavelet analysis cannot effec-tively represent the line singularities and plane singularitiesof the images and thus cannot represent the directions of theedges of images accurately To overcome these shortcomingsof the wavelet transform Do and Vetterli [17] proposedContourlet transform which can give the asymptotic optimalrepresentation of contours and has been successfully used forimage fusion However the up- and downsampling processof Contourlet decomposition and reconstruction results inthe Contourlet transform lacking shift-invariance and havingpseudo-Gibbs phenomena in the fused image [19] Later daCunha et al [20] proposed the nonsubsampled Contourlettransform (NSCT) based on Contourlet transform Thismethod inherits the advantages of Contourlet transformwhile possessing shift-invariance and effectively suppressingpseudo-Gibbs phenomena

Although quite good results have been reported by NSCT-based methods, there is still much room to improve the fusion performance in the coefficient selection, as follows.

(a) The low-frequency coefficients of the fused image can be simply acquired by averaging the low-frequency coefficients of the input images. This rule decreases contrast in the fused images [21] and cannot give a fused subimage of high quality for medical images.

(b) The popularly used larger-absolute-value rule is implemented on the value of a single pixel of the current high-frequency subband. The disadvantage of this method is that it considers only the value of a single pixel, not the relationship between the corresponding coefficients in the high-frequency subbands [22].

(c) Most fusion rules of the NSCT-based methods are designed for multifocus images [23], remote sensing images [24], and infrared and visible images [25]. The results are not of the same quality as those for multimodal medical images. For example, Chai et al. [22] proposed an NSCT method based on feature contrast of multiscale products to fuse multifocus images. However, it has been proven that this algorithm is not able to utilize the prominent information present in the subbands efficiently and gives poor results when used to fuse multimodal medical images [26].

In this paper, a novel fusion framework based on NSCT is proposed for multimodal medical images. The main contribution of the method lies in the proposed fusion rule, which can capture the best membership of the source images' coefficients to the corresponding fused coefficient. The phase congruency and the Log-Gabor energy are unified as the fusion rules for low- and high-frequency coefficients, respectively. The phase congruency provides a contrast- and brightness-invariant representation of the low-frequency coefficients, whereas the Log-Gabor energy efficiently determines the frequency coefficients from the clear and detailed parts in the high frequencies. The combination of these two techniques can preserve more details from the source images and thus improve the quality of the fused images. Experiments indicate that the proposed framework can provide a better fusion outcome than a series of traditional image fusion methods in terms of both subjective and objective evaluations.

The rest of the paper is organized as follows. NSCT and phase congruency are described in Section 2, followed by the proposed multimodal medical image fusion framework in Section 3. Experimental results and discussions are given in Section 4, and concluding remarks are presented in Section 5.

2. Preliminaries

This section provides the description of the concepts on which the proposed framework is based. These concepts, including NSCT and phase congruency, are described as follows.

2.1. Nonsubsampled Contourlet Transform (NSCT). The Contourlet transform can be divided into two stages [19], the Laplacian pyramid (LP) and the directional filter bank (DFB), and offers an efficient directional multiresolution image representation. The LP is first utilized to capture point singularities, followed by the DFB to link singular points into linear structures. The LP is used to decompose the original images into low-frequency and high-frequency subimages, and the DFB divides the high-frequency subbands into directional subbands. The Contourlet decomposition schematic diagram is shown in Figure 1.

Figure 1: Contourlet decomposition schematic diagram (LP stage with ↓(2, 2) downsampling, followed by the DFB applied to the high-frequency subbands).

Figure 2: NSCT decomposition schematic diagram (the NSPFB produces low- and high-frequency subbands; the NSDFB decomposes each high-frequency subband directionally).

The NSCT is proposed based on the theory of the Contourlet transform. NSCT inherits the advantages of the Contourlet transform, enhances directional selectivity and shift-invariance, and effectively overcomes the pseudo-Gibbs phenomena. NSCT is built on the nonsubsampled pyramid filter bank (NSPFB) and the nonsubsampled directional filter bank (NSDFB) [21]. Figure 2 gives the NSCT decomposition framework with $k = 2$ levels.

The NSPFB ensures the multiscale performance by taking advantage of a two-channel nonsubsampled filter bank; one low-frequency subband and one high-frequency subband are produced at each decomposition level. The NSDFB is a two-channel nonsubsampled filter bank constructed by eliminating the downsamplers and upsamplers and combining the directional fan filter banks in the nonsubsampled directional filter [23]. The NSDFB allows direction decomposition with $l$ levels in each high-frequency subband from the NSPFB and then produces $2^{l}$ directional subbands with the same size as the source images. Thus, the NSDFB provides the NSCT with multidirection performance and offers more precise directional detail information to get more accurate results [23]. Therefore, NSCT leads to better frequency selectivity and an essential shift-invariance property on account of its nonsubsampled operation. The sizes of the different subimages decomposed by NSCT are identical. Additionally, NSCT-based image fusion can effectively mitigate the effects of misregistration on the results [27]. Therefore, NSCT is more suitable for image fusion.

2.2. Phase Congruency. Phase congruency is a feature perception approach which provides information that is invariant to image illumination and contrast [28]. This model is built on the Local Energy Model [29], which postulates that important features can be found at points where the Fourier components are maximally in phase. Furthermore, the angle at which the phase congruency occurs signifies the feature type. Phase congruency can be used for feature detection [30]; the model provides useful feature localization and noise compensation. The phase congruency at a point $(i, j)$ can be defined as follows [31]:

$$\mathrm{PC}_{\theta}(i,j) = \frac{\sum_{n} W_{\theta}(i,j)\left[A_{n}^{\theta}(i,j)\left(\cos\left(\phi_{n}^{\theta}(i,j)-\bar{\phi}_{n}^{\theta}(i,j)\right)\right) - \left|\sin\left(\phi_{n}^{\theta}(i,j)-\bar{\phi}_{n}^{\theta}(i,j)\right)\right| - T\right]_{+}}{\sum_{n} A_{n}^{\theta}(i,j) + \varepsilon} \quad (1)$$

where $\theta$ is the orientation, $W_{\theta}(i,j)$ is the weighting factor based on frequency spread, $A_{n}^{\theta}(i,j)$ and $\phi_{n}^{\theta}(i,j)$ are the amplitude and phase for wavelet scale $n$, respectively, $\bar{\phi}_{n}^{\theta}(i,j)$ is the weighted mean phase, $T$ is a noise threshold constant, and $\varepsilon$ is a small constant value to avoid division by zero. The notation $[\cdot]_{+}$ denotes that the enclosed quantity is equal to itself when its value is positive and zero otherwise. For details of the phase congruency measure, see [29].

As we know, multimodal medical images have the following characteristics:

(a) the images of different modalities have significantly different pixel mappings;

(b) the capturing environments of different modalities vary, resulting in changes of illumination and contrast;

(c) the edges and corners in the images are identified by collecting frequency components of the image that are in phase.

According to the literature [26, 32], it is easy to find that phase congruency is not only invariant to different pixel intensity mappings and to illumination and contrast changes but also identifies the Fourier components that are maximally in phase. All of these properties lead to efficient fusion, which is why we use phase congruency for multimodal medical image fusion.

3. Proposed Multimodal Medical Image Fusion Framework

The framework of the proposed multimodal medical image fusion algorithm is depicted in Figure 3, but before describing it, the definition of the local Log-Gabor energy in the NSCT domain is first given as follows.

3.1. Log-Gabor Energy in NSCT Domain. The high-frequency coefficients in the NSCT domain represent the detailed components of the source images, such as the edges, textures, and region boundaries [21]. In general, the coefficients with larger absolute values correspond to sharper brightness

Figure 3: The framework of the proposed fusion algorithm (source images → NSCT → low-frequency coefficients fused by the maximum phase congruency rule and high-frequency coefficients fused by the maximum Log-Gabor energy rule → inverse NSCT → fused image).

in the image. It is to be noted that noise is also related to the high-frequency coefficients and may cause miscalculation of sharpness values and therefore affect the fusion performance [26]. Furthermore, the human visual system is generally more sensitive to texture detail features than to the value of a single pixel.

To overcome the defects mentioned above, a novel high-frequency fusion rule based on local Log-Gabor energy is designed in this paper. The Gabor wavelet is a popular technique that has been extensively used to extract texture features [33]. Log-Gabor filters are proposed based on Gabor filters. Compared with Gabor filters, Log-Gabor filters overcome the limited coverage of high frequencies in the Gabor filter's component expression and accord better with the human visual system [34]. Therefore, the Log-Gabor wavelet can achieve optimal spatial orientation and wider spectral information at the same time and thus more truly reflects the frequency response of natural images and improves the performance in terms of accuracy [35].

In polar coordinates, the Log-Gabor wavelet is expressed as follows [36]:

$$g(f,\theta) = \exp\left\{-\frac{\left[\ln\left(f/f_{0}\right)\right]^{2}}{2\left[\ln\left(\sigma/f_{0}\right)\right]^{2}}\right\} \times \exp\left\{-\frac{\left(\theta-\theta_{0}\right)^{2}}{2\sigma_{\theta}^{2}}\right\} \quad (2)$$

in which $f_{0}$ is the center frequency of the Log-Gabor filter, $\theta_{0}$ is the direction of the filter, $\sigma$ determines the bandwidth of the radial filter, and $\sigma_{\theta}$ determines the bandwidth of the orientation. If $g_{kl}^{uv}(i,j)$ corresponds to the Log-Gabor wavelet at scale $u$ and direction $v$, the signal response is expressed as follows:

$$G_{kl}^{uv}(i,j) = H_{kl}(i,j) * g_{kl}^{uv}(i,j) \quad (3)$$

where $H_{kl}(i,j)$ is the coefficient located at $(i,j)$ in the high-frequency subimages of the source image $A$ or $B$ at the $k$th scale and $l$th direction, and $*$ denotes the convolution operation. The Log-Gabor energy of the high-frequency subimages at the $k$th scale and $l$th direction is expressed as follows:

$$E_{kl}(i,j) = \sum_{u=1}^{U}\sum_{v=1}^{V}\sqrt{\mathrm{real}\left(G_{kl}^{uv}(i,j)\right)^{2} + \mathrm{imag}\left(G_{kl}^{uv}(i,j)\right)^{2}} \quad (4)$$

in which $\mathrm{real}(G_{kl}^{uv}(i,j))$ is the real part of $G_{kl}^{uv}(i,j)$ and $\mathrm{imag}(G_{kl}^{uv}(i,j))$ is the imaginary part of $G_{kl}^{uv}(i,j)$. The Log-Gabor energy in the NSCT domain in the local area around the pixel $(i,j)$ is given as

$$\mathrm{LE}_{kl}(i,j) = \frac{1}{(2M+1)(2N+1)}\sum_{m=-M}^{M}\sum_{n=-N}^{N}E_{kl}(i+m,\,j+n) \quad (5)$$


in which $(2M+1)(2N+1)$ is the window size. The proposed definition of the local Log-Gabor energy not only extracts more useful features from the high-frequency coefficients but also maintains good performance in noisy environments.
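To make (2)-(5) concrete, the sketch below builds a Log-Gabor filter bank in the frequency domain and computes the local Log-Gabor energy map of one high-frequency subband in Python/NumPy. The bank size, the dyadic center frequencies, and the bandwidth parameters are illustrative assumptions, not values prescribed by this paper.

```python
import numpy as np

def log_gabor_filter(shape, f0, theta0, sigma_ratio=0.55, sigma_theta=0.6):
    """Polar-domain Log-Gabor transfer function, Eq. (2).
    sigma_ratio = sigma/f0 sets the radial bandwidth and sigma_theta
    the angular bandwidth (values here are illustrative only)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid log(0) at DC
    theta = np.arctan2(fy, fx)

    radial = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                  # Log-Gabor has no DC response
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    return radial * angular

def local_log_gabor_energy(H_kl, U=4, V=6, M=1, N=1):
    """Eqs. (3)-(5): filter one high-frequency subband H_kl with a
    U x V Log-Gabor bank, sum the complex magnitudes (Eq. (4)), then
    average over a (2M+1) x (2N+1) window (Eq. (5))."""
    F = np.fft.fft2(H_kl)
    E = np.zeros(H_kl.shape)
    for u in range(U):
        f0 = 0.25 / 2.0 ** u            # dyadic center frequencies (assumed)
        for v in range(V):
            g = log_gabor_filter(H_kl.shape, f0, v * np.pi / V)
            G = np.fft.ifft2(F * g)     # Eq. (3): convolution via FFT
            E += np.abs(G)              # Eq. (4): sqrt(real^2 + imag^2)
    P = np.pad(E, ((M, M), (N, N)), mode='edge')
    LE = np.zeros_like(E)
    for m in range(2 * M + 1):          # Eq. (5): local window mean
        for n in range(2 * N + 1):
            LE += P[m:m + E.shape[0], n:n + E.shape[1]]
    return LE / ((2 * M + 1) * (2 * N + 1))
```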

3.2. Proposed Fusion Framework. The proposed NSCT-based image fusion framework is discussed in this subsection. The input multimodal medical images ($A$ and $B$) are assumed to be perfectly registered. The framework of the proposed fusion method is shown in Figure 3 and described in the following three steps.

Step 1. Perform $\theta$-level NSCT on $A$ and $B$ to obtain one low-frequency subimage and a series of high-frequency subimages at each level and direction $l$; that is, $A: \{L_{A}, H_{kl}^{A}\}$ and $B: \{L_{B}, H_{kl}^{B}\}$, where $L_{A}$, $L_{B}$ are the low-frequency subimages and $H_{kl}^{A}$, $H_{kl}^{B}$ represent the high-frequency subimages at level $k \in [1, \theta]$ in orientation $l$.

Step 2. Fuse the low- and high-frequency subbands via the following novel fusion rules to obtain the composite low- and high-frequency subbands.

The low-frequency coefficients represent the approximation component of the source images. The most widely used approach is to apply averaging to produce the fused coefficients. However, this rule reduces contrast in the fused images and cannot give a fused subimage of high quality for medical images. Therefore, the criterion based on the phase congruency introduced in Section 2.2 is employed to fuse the low-frequency coefficients. The fusion rule for the low-frequency subbands is defined as

$$L_{F}(i,j) = \begin{cases} L_{A}(i,j), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j) > \mathrm{PC}_{\theta}^{B}(i,j) \\ L_{B}(i,j), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j) < \mathrm{PC}_{\theta}^{B}(i,j) \\ 0.5\left(L_{A}(i,j) + L_{B}(i,j)\right), & \text{if } \mathrm{PC}_{\theta}^{A}(i,j) = \mathrm{PC}_{\theta}^{B}(i,j) \end{cases} \quad (6)$$

where $\mathrm{PC}_{\theta}^{A}(i,j)$ and $\mathrm{PC}_{\theta}^{B}(i,j)$ are the phase congruency values extracted from the low-frequency subimages of the source images $A$ and $B$, respectively.

For the high-frequency coefficients, the most common fusion rule is selecting the coefficient with the larger absolute value. This rule does not take any of the surrounding pixels into consideration and cannot give fused components of high quality for medical images. Especially when the source images contain noise, the noise could be mistaken for fused coefficients and cause miscalculation of the sharpness values. Therefore, the criterion based on Log-Gabor energy is introduced to fuse the high-frequency coefficients. The fusion rule for the high-frequency subbands is defined as

$$H_{kl}^{F}(i,j) = \begin{cases} H_{kl}^{A}(i,j), & \text{if } \mathrm{LE}_{kl}^{A}(i,j) \geq \mathrm{LE}_{kl}^{B}(i,j) \\ H_{kl}^{B}(i,j), & \text{otherwise} \end{cases} \quad (7)$$

where $\mathrm{LE}_{kl}^{A}(i,j)$ and $\mathrm{LE}_{kl}^{B}(i,j)$ are the local Log-Gabor energies extracted from the high-frequency subimages at the $k$th scale and $l$th direction of the source images $A$ and $B$, respectively.
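Once the phase congruency and local Log-Gabor energy maps are available, the selection rules (6) and (7) are simple pixelwise maximum operations. A minimal Python/NumPy sketch of the two rules follows (function names are ours; the inputs are assumed to be same-sized arrays):

```python
import numpy as np

def fuse_low(LA, LB, PCA, PCB):
    """Low-frequency rule, Eq. (6): keep the coefficient with the larger
    phase congruency; average where the two are equal."""
    LF = 0.5 * (LA + LB)
    LF = np.where(PCA > PCB, LA, LF)
    LF = np.where(PCA < PCB, LB, LF)
    return LF

def fuse_high(HA, HB, LEA, LEB):
    """High-frequency rule, Eq. (7): keep the coefficient with the larger
    local Log-Gabor energy."""
    return np.where(LEA >= LEB, HA, HB)
```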

Step 3. Perform the $\theta$-level inverse NSCT on the fused low- and high-frequency subimages. The fused image is ultimately obtained in this way.

4. Experimental Results and Analysis

It is well known that different image quality metrics reflect the visual quality of images from different aspects, but none of them can directly measure the overall quality. In this paper, we consider both the visual representation and the quantitative assessment of the fused images. For the evaluation of the proposed fusion method, we have considered five separate fusion performance metrics, defined as below.

4.1. Evaluation Index System

4.1.1. Standard Deviation. The standard deviation (STD) of an image with size of $M_{0} \times N_{0}$ is defined as [37]

$$\mathrm{Std} = \left(\frac{1}{M_{0} \times N_{0}}\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left(F(i,j) - \mu\right)^{2}\right)^{1/2} \quad (8)$$

where $F(i,j)$ is the pixel value of the fused image at the position $(i,j)$ and $\mu$ is the mean value of the image. The STD can be used to estimate how widely the gray values spread in an image. The larger the STD, the better the result.
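As an illustration, (8) amounts to a one-line helper in Python/NumPy (our own, assuming a grayscale image array):

```python
import numpy as np

def std_metric(F):
    """Standard deviation of the fused image F, Eq. (8)."""
    F = F.astype(np.float64)
    return np.sqrt(np.mean((F - F.mean()) ** 2))
```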

4.1.2. Edge Based Similarity Measure. The edge based similarity measure ($Q^{AB/F}$) was proposed by Xydeas and Petrovic [38]. The definition is given as

$$Q^{AB/F} = \frac{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[Q^{AF}(i,j)\,w_{A}(i,j) + Q^{BF}(i,j)\,w_{B}(i,j)\right]}{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[w_{A}(i,j) + w_{B}(i,j)\right]} \quad (9)$$

where $w_{A}(i,j)$ and $w_{B}(i,j)$ are the corresponding gradient strengths for images $A$ and $B$, respectively. The definitions of $Q^{AF}(i,j)$ and $Q^{BF}(i,j)$ are given as

$$Q^{AF}(i,j) = Q_{a}^{AF}(i,j)\,Q_{g}^{AF}(i,j), \qquad Q^{BF}(i,j) = Q_{a}^{BF}(i,j)\,Q_{g}^{BF}(i,j) \quad (10)$$

where $Q_{a}^{xF}(i,j)$ and $Q_{g}^{xF}(i,j)$ are the edge strength and orientation preservation values at location $(i,j)$ for image $x$ ($A$ or $B$), respectively. The edge based similarity measure gives the similarity between the edges transferred from the input images to the fused image [26]. The larger the value, the better the fusion result.


4.1.3. Mutual Information. The mutual information (MI) [39] between the fused image $F$ and the source images $A$ and $B$ is defined as follows:

$$\mathrm{MI} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF}$$
$$\mathrm{MI}_{AF} = \sum_{f=0}^{L}\sum_{a=0}^{L} p_{AF}(a,f)\log_{2}\left(\frac{p_{AF}(a,f)}{p_{A}(a)\,p_{F}(f)}\right)$$
$$\mathrm{MI}_{BF} = \sum_{f=0}^{L}\sum_{b=0}^{L} p_{BF}(b,f)\log_{2}\left(\frac{p_{BF}(b,f)}{p_{B}(b)\,p_{F}(f)}\right) \quad (11)$$

where $\mathrm{MI}_{xF}$ denotes the normalized mutual information between the fused image and the input image $A$ or $B$; $a, b, f \in [0, L]$, and $L$ is the number of bins. $p_{A}(a)$, $p_{B}(b)$, and $p_{F}(f)$ are the normalized gray level histograms of the source images and the fused image, and $p_{xF}(a,f)$ is the joint gray level histogram between the fused image and each source image.

MI can indicate how much information the fused image $F$ conveys about the source images $A$ and $B$ [22]. Therefore, the greater the value of MI, the better the fusion effect.
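A compact way to evaluate (11) is through 2-D gray-level histograms. The sketch below (Python/NumPy, a helper of our own, assuming 8-bit grayscale inputs) returns the sum of the two mutual information terms:

```python
import numpy as np

def mutual_information(A, B, F, bins=256):
    """Fusion mutual information MI = MI_AF + MI_BF, Eq. (11),
    assuming 8-bit images (gray levels 0..255)."""
    def mi(X, F):
        joint, _, _ = np.histogram2d(X.ravel(), F.ravel(), bins=bins,
                                     range=[[0, 256], [0, 256]])
        pxy = joint / joint.sum()            # joint gray-level histogram
        px = pxy.sum(axis=1, keepdims=True)  # marginal of X
        pf = pxy.sum(axis=0, keepdims=True)  # marginal of F
        nz = pxy > 0                         # skip empty cells (log(0))
        return (pxy[nz] * np.log2(pxy[nz] / (px @ pf)[nz])).sum()
    return mi(A, F) + mi(B, F)
```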

4.1.4. Cross Entropy. The cross entropy is defined as [8]

$$\mathrm{CE} = \sum_{l=0}^{L-1} P_{l}\log_{2}\frac{P_{l}}{Q_{l}} \quad (12)$$

where $P_{l}$ and $Q_{l}$ denote the gray level histograms of the source image and the fused image, respectively. The cross entropy is used to evaluate the difference between the source images and the fused image. A lower value corresponds to a better fusion result.
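A corresponding Python/NumPy sketch for (12), under the same 8-bit assumption (empty bins are skipped to keep the logarithm finite):

```python
import numpy as np

def cross_entropy(S, F, bins=256):
    """Cross entropy between source image S and fused image F, Eq. (12)."""
    P, _ = np.histogram(S.ravel(), bins=bins, range=(0, 256), density=True)
    Q, _ = np.histogram(F.ravel(), bins=bins, range=(0, 256), density=True)
    nz = (P > 0) & (Q > 0)              # skip empty bins to avoid log(0)
    return float((P[nz] * np.log2(P[nz] / Q[nz])).sum())
```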

4.1.5. Spatial Frequency. The spatial frequency is defined as [40]

$$\mathrm{SF} = \sqrt{\mathrm{RF}^{2} + \mathrm{CF}^{2}} \quad (13)$$

where RF and CF are the row frequency and column frequency, respectively, defined as

$$\mathrm{RF} = \sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j) - F(i,j-1)\right]^{2}}$$
$$\mathrm{CF} = \sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j) - F(i-1,j)\right]^{2}} \quad (14)$$

The spatial frequency reflects the edge information of the fused image. Larger spatial frequency values indicate better image quality.
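Equations (13)-(14) reduce to mean squared first differences along rows and columns. A short sketch of our own follows; note that np.diff averages over the available differences, a marginally different normalization from the $1/(M_{0}N_{0})$ factor in (14):

```python
import numpy as np

def spatial_frequency(F):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), Eqs. (13)-(14)."""
    F = F.astype(np.float64)
    RF2 = np.mean(np.diff(F, axis=1) ** 2)  # row frequency: horizontal diffs
    CF2 = np.mean(np.diff(F, axis=0) ** 2)  # column frequency: vertical diffs
    return np.sqrt(RF2 + CF2)
```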

4.2. Experiments on Multimodal Medical Image Fusion. To evaluate the performance of the proposed image fusion approach, experiments are performed on three groups of multimodal medical images. These images are characterized in two distinct pairs: (1) CT and MRI; (2) MR-T1 and MR-T2. The images in Figures 4(a1)-4(b1) and 4(a2)-4(b2) are CT and MRI images, whereas Figures 4(a3)-4(b3) are a T1-weighted MR image (MR-T1) and a T2-weighted MR image (MR-T2). All images have the same size of 256 × 256 pixels with 256-level gray scale. For all these image groups, the results of the proposed fusion framework are compared with those of the traditional discrete wavelet transform (DWT) [13, 16], the second generation curvelet transform (fast discrete curvelet transform, FDCT) [41, 42], the dual tree complex wavelet transform (DTCWT) [4], and the nonsubsampled Contourlet transform (NSCT-1 and NSCT-2) based methods. The high-frequency and low-frequency coefficients of the DWT, FDCT, DTCWT, and NSCT-1 based methods are merged by the popular fusion rule of selecting the coefficient with the larger absolute value and by the averaging rule, respectively (the average-maximum rule). The NSCT-2 based method uses the fusion rules proposed by Bhatnagar et al. in [26]. In order to perform a fair comparison, the source images are all decomposed into the same number of levels (3) for these methods, except for the FDCT method. For the DWT method, the images are decomposed using the DBSS (2, 2) wavelet. For implementing NSCT, "9-7" filters and "pkva" filters (how to set the filters can be seen in [43]) are used as the pyramidal and directional filters, respectively.

4.2.1. Subjective Evaluation. The first pair of medical images comprises two groups of brain CT and MRI images of different aspects, shown in Figures 4(a1), 4(b1) and 4(a2), 4(b2), respectively. It can easily be seen that the CT image shows the dense structures while the MRI provides information about soft tissues. The fused images obtained from DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 are shown in Figures 4(c1)-4(g1) and 4(c2)-4(g2), respectively. The results of the proposed fusion method are shown in Figures 4(h1) and 4(h2). On comparing these results, it can easily be observed that the proposed method outperforms those fusion methods and gives a good visual representation of the fused image.

The second pair of medical images are MR-T1 and MR-T2 images, shown in Figures 4(a3) and 4(b3). The comparison of DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method, shown in Figures 4(c3)-4(h3), clearly implies that the fusion result of the proposed method has better quality and contrast than the other methods.

Similarly, on observing the noticeable improvements emphasized in Figure 4 by the red arrows and the analysis above, one can easily verify that, again, the proposed method has been found superior in terms of visual representation over the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

4.2.2. Objective Evaluation. For the objective evaluation of the fusion results shown in Figure 4, we have used five fusion metrics: cross entropy, spatial frequency, STD, $Q^{AB/F}$, and MI. The quantitative comparisons of cross entropy and spatial frequency for these fused images are visually given by Figures 5 and 6, and the other metrics are given in Table 1.

On observing Figure 5, one can see that all three results of the proposed scheme have lower values of cross entropy than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.


Figure 4: Source multimodal medical images: (a1), (b1) image group 1 (CT and MRI); (a2), (b2) image group 2 (CT and MRI); (a3), (b3) image group 3 (MR-T1 and MR-T2); fused images from (c1), (c2), (c3) DWT based method; (d1), (d2), (d3) FDCT based method; (e1), (e2), (e3) DTCWT based method; (f1), (f2), (f3) NSCT-1 based method; (g1), (g2), (g3) NSCT-2 based method; (h1), (h2), (h3) our proposed method.


Table 1: Comparison on quantitative evaluation of different methods for the first set of medical images.

Images                            Evaluation   DWT      FDCT     DTCWT    NSCT-1   NSCT-2   Proposed method
Image group 1 (CT and MRI)        STD          41.160   39.170   40.132   40.581   54.641   58.476
                                  Q^{AB/F}     0.618    0.592    0.652    0.683    0.427    0.716
                                  MI           2.371    2.029    2.393    2.438    1.886    2.580
Image group 2 (CT and MRI)        STD          72.260   70.572   71.407   71.733   80.378   83.334
                                  Q^{AB/F}     0.567    0.584    0.606    0.622    0.450    0.618
                                  MI           3.137    3.138    3.198    3.235    2.915    3.302
Image group 3 (MR-T1 and MR-T2)   STD          37.358   36.856   37.358   37.277   39.023   40.630
                                  Q^{AB/F}     0.635    0.647    0.668    0.682    0.589    0.689
                                  MI           3.209    3.142    3.302    3.347    3.349    3.469

Figure 5: Comparison on cross entropy of different methods and images (groups 1-3; the proposed method attains the lowest values, 1.323, 1.384, and 0.046, respectively).

The cross entropy is used to evaluate the difference between the source images and the fused image; therefore, the lower value corresponds to the better fusion result.

On observing Figure 6, two of the spatial frequency values of the fused images obtained by the proposed method are the highest, and the other one is 6.447, which is close to the highest value 6.581. Observation of Table 1 shows that all three results of the proposed fusion scheme have higher values of STD, $Q^{AB/F}$, and MI than any of the other methods, except one value of $Q^{AB/F}$ (image group 2), which is the second best. However, an overall comparison shows the superiority of the proposed fusion scheme.

4.2.3. Combined Evaluation. Since the subjective and objective evaluations separately are not able to fully examine the fusion results, we have combined them. From these figures (Figures 4-6) and table (Table 1), it is clear that the proposed method not only preserves most of the source images' characteristics and information but also improves the definition and the spatial quality better than the existing methods, which can be justified by the optimum values of the objective criteria, except one value of spatial frequency (image group 3) and one value of $Q^{AB/F}$ (image group 2). Consider the example of the first set of images: the five criteria values of the proposed method are 1.323 (cross entropy), 7.050 (spatial frequency), 58.476 (STD), 0.716 ($Q^{AB/F}$), and 2.580 (MI), respectively. Each of them is the optimal one in the first set of experiments.

Figure 6: Comparison on spatial frequencies of different methods and images (groups 1-3; the proposed method yields 7.050, 7.976, and 6.447, respectively).

Among these methods, the NSCT-2 based method also gives poor results when compared to the proposed NSCT-based method. This stems from the fact that the high-frequency fusion rule of the NSCT-2 based method is not able to extract the detail information in the high frequencies effectively. Also, by carefully looking at the outputs of the proposed NSCT-based method (Figures 4(h1), 4(h2), and 4(h3)), we can find that they have more contrast and more spatial resolution than the outputs of the NSCT-2 based method (highlighted by the red arrows) and the other methods. The main reason behind the better performance is that the proposed fusion rules for low- and high-frequency coefficients can effectively extract prominent and detail information from the source images. Therefore, it is possible to conclude that the proposed method is better than the existing methods.

4.3. Fusion of Multimodal Medical Noisy Images and a Clinical Example. To evaluate the performance of the proposed method in a noisy environment, the input image group 1 has been additionally corrupted with Gaussian noise with a standard deviation of 5% (shown in Figures 7(a) and 7(b)). In addition, a clinical application on the noninvasive diagnosis of neoplastic disease is given in the last subsection.

4.3.1. Fusion of Multimodal Medical Noisy Images. For comparison, apart from visual observation, the objective criteria STD, MI, and $Q^{AB/F}$ are used to evaluate how much clear or detailed information of the source images is transferred to the fused images.


Figure 7: The multimodal medical images with noise: (a) CT image with 5% noise; (b) MRI image with 5% noise; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 2: Comparison on quantitative evaluation of different methods for the noisy medical images.

Evaluation   DWT      FDCT     DTCWT    NSCT-1   NSCT-2   Proposed method
STD          41.893   40.044   40.995   41.338   52.533   57.787
Q^{AB/F}     0.592    0.579    0.619    0.637    0.269    0.640
MI           2.785    2.539    2.826    2.882    1.816    2.914

However, these criteria may not effectively evaluate the performance of the fusion methods in terms of noise transmission. For further comparison, the Peak Signal to Noise Ratio (PSNR), a ratio between the maximum possible power of a signal and the power of the noise that affects its fidelity [44], is used. The larger the value of PSNR, the less the image distortion [45]. PSNR is formulated as

$$\mathrm{PSNR} = 10\lg\left(\frac{255^{2}}{\mathrm{RMSE}^{2}}\right) \quad (15)$$

where RMSE denotes the root mean square error between the fused image and the reference image. The reference image in the following experiment is selected as Figure 4(h1), which was shown above to give the best performance compared to the other images.
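For completeness, a small helper of our own for (15), under the 8-bit assumption:

```python
import numpy as np

def psnr(F, R):
    """PSNR between fused image F and reference image R, Eq. (15)."""
    rmse2 = np.mean((F.astype(np.float64) - R.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / rmse2)
```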

Figure 7 illustrates the fusion results obtained by the different methods. The comparison of the images fused by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method, shown in Figures 7(c)-7(h), clearly implies that the image fused by the proposed method has better quality and contrast than the other methods. Figure 8 shows the PSNR values of the different methods in fusing the noisy images.

Figure 8: Comparison on PSNR of different methods for the noisy medical images (the proposed method reaches 23.704).

One can observe that the proposed method has higher PSNR values than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods. Table 2 gives the quantitative results of the fused images and shows that the values of STD, $Q^{AB/F}$, and MI are also the highest of all six methods. From the analysis above, we can observe that the proposed scheme provides the best performance and outperforms the other algorithms.


Figure 9: Brain images of the woman with recurrent tumor: (a) MR-T1 image; (b) MR-T2 image; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 3: Comparison on quantitative evaluation of different methods for the clinical medical images.

Evaluation   DWT      FDCT     DTCWT    NSCT-1   NSCT-2   Proposed method
STD          64.433   62.487   63.194   63.652   60.722   67.155
Q^{AB/F}     0.541    0.565    0.578    0.597    0.411    0.624
MI           2.507    2.455    2.540    2.584    2.392    2.624

In addition, the comparison with the result of the NSCT-1 method, which uses the average-maximum rule, demonstrates the validity of the proposed fusion rules in a noisy environment.

4.3.2. A Clinical Example on Noninvasive Diagnosis. In order to demonstrate the practical value of the proposed scheme in medical imaging, one clinical case on neoplastic diagnosis is considered, where the MR-T1/MR-T2 medical modalities are used. The images have been downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). Figures 9(a)-9(b) show the recurrent tumor case of a 51-year-old woman who sought medical attention because of gradually increasing right hemiparesis (weakness) and hemianopia (visual loss). At craniotomy, a left parietal anaplastic astrocytoma was found; a right frontal lesion was biopsied. A large region of mixed signal on the MR-T1 and MR-T2 images gives signs of the possibility of active tumor (highlighted by the red arrows).

Figures 9(c)-9(h) show the images fused by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method. It is obvious that the image fused by the proposed method has better contrast and sharpness of the active tumor (highlighted by the red arrows) than the other methods. Table 3 shows the quantitative evaluation of the different methods for the clinical medical images. The values of the proposed method are optimum in terms of STD, MI, and $Q^{AB/F}$. From Figure 9 and Table 3, we can draw the same conclusion that the proposed scheme provides the best performance and outperforms the other algorithms.

5. Conclusions

Multimodal medical image fusion plays an important role in clinical applications, but the real challenge is to obtain a visually enhanced image through the fusion process. In this paper, a novel and effective image fusion framework based on NSCT and Log-Gabor energy is proposed. The potential advantages include: (1) NSCT is more suitable for image fusion because of its advantages such as multiresolution, multidirection, and shift-invariance; (2) a new pair of fusion rules based on phase congruency and Log-Gabor energy is used to preserve more useful information in the fused image, to improve the quality of the fused images, and to overcome the limitations of the traditional fusion rules; and (3) the proposed method can provide better performance than the current fusion methods whether the source images are clean or noisy. In the experiments, five groups of multimodal medical images, including one group with noise and one clinical example of a woman affected with recurrent tumor, are


fused by using traditional fusion methods and the proposed framework. The subjective and objective comparisons clearly demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with less information distortion than other fusion methods. In the future, we plan to design a pure C++ platform to reduce the time cost and to extend our method to 3D or 4D medical image fusion.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61262034, 61462031, and 61473221), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Doctoral Fund of the Ministry of Education of China (no. 20120201120071), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), by the Science and Technology Application Project of Jiangxi Province (no. KJLD14031), by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ14334), and by the Fundamental Research Funds for the Central Universities of China.

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.

[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.

[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.

[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.

[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.

[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.

[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.

[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.

[9] Y. Yang, S. Y. Huang, F. J. Gao et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.

[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.

[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.

[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.

[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.

[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.

[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.

[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.

[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform – a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.

[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.

[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.

[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.

[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.

[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.

[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.

[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.

[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.

[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.

[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.

[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.

[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.

[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.

[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.

[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.

[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.

[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.

[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.

[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.

[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.

[41] E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.

[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.

[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.

[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.


Page 3: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

Computational and Mathematical Methods in Medicine 3

DFBLPImage

Highfrequency

Highfrequency

darr (2 2)

Figure 1 Contourlet decomposed schematic diagram

ImageNSPFB

Lowfrequency

Highfrequency

Highfrequency

NSDFB

NSDFB

Figure 2 NSCT decomposed schematic diagram

The NSCT is proposed based on the theory of Contourlettransform NSCT inherits the advantage of Contourlettransform enhances directional selectivity and shift-invari-ance and effectively overcomes the pseudo-Gibbs phenom-ena NSCT is built on nonsubsampled pyramid filter bank(NSPFB) and nonsubsampled directional filter bank(NSDFB) [21] Figure 2 gives the NSCT decompositionframework with 119896 = 2 levels

TheNSPFB ensures themultiscale performance by takingadvantage of a two-channel nonsubsampled filter bank andone low-frequency subband andone high-frequency subbandthat can be produced at each decomposition level TheNSDFB is a two-channel nonsubsampled filter bank con-structed by eliminating the downsamplers and upsamplersand combining the directional fan filter banks in the nonsub-sampled directional filter [23] NSDFB allows the directiondecomposition with 119897 levels in each high-frequency subbandsfrom NSPFB and then produces 2119897 directional subbands withthe same size as the source imagesThus theNSDFB providesthe NSCT the multidirection performance and offers moreprecise directional detail information to get more accurateresults [23] Therefore NSCT leads to better frequencyselectivity and an essential property of the shift-invarianceon account of nonsubsampled operationThe size of differentsubimages decomposed by NSCT is identical AdditionallyNSCT-based image fusion can effectively mitigate the effectsof misregistration on the results [27] Therefore NSCT ismore suitable for image fusion

22 Phase Congruency Phase congruency is a feature percep-tion approach which provides information that is invariant toimage illumination and contrast [28] This model is built on

the Local EnergyModel [29] which postulates that importantfeatures can be found at pointswhere the Fourier componentsare maximally in phase Furthermore the angle at which thephase congruency occurs signifies the feature typeThe phasecongruency can be used for feature detection [30]Themodelprovides useful feature localization and noise compensationThe phase congruency at a point (119894 119895) can be defined asfollows [31]

PC120579 (119894 119895)

= sum

119899

119882120579(119894 119895) [119860

120579

119899(119894 119895) (cos (120601120579

119899(119894 119895) minus 120601

120579

119899(119894 119895)))

minus

1003816100381610038161003816100381610038161003816sin (120601120579

119899(119894 119895) minus 120601

120579

119899(119894 119895))

1003816100381610038161003816100381610038161003816minus 119879]+

times (sum

119899

119860120579

119899(119894 119895) + 120576)

minus1

(1)

where 120579 is the orientation 119882120579(119894 119895) is the weighting factorbased on frequency spread 119860120579

119899(119894 119895) and 120601

120579

119899(119894 119895) are the

amplitude and phase for wavelet scale 119899 respectively 120601120579119899(119894 119895)

is the weighted mean phase 119879 is a noise threshold constantand 120576 is a small constant value to avoid division by zeroThe notation []+ denotes that the enclosed quantity is equalto itself when the value is positive and zero otherwise Fordetails of phase congruency measure see [29]

As we know the multimodal medical images have thefollowing characteristics

(a) the images of different modal have significantly dif-ferent pixel mappings

(b) the capturing environment of different modalitiesvaries and resulted in the change of illumination andcontrast

(c) the edges and corners in the images are identified bycollecting frequency components of the image that isin phase

According to the literature [26 32] it is easy to findthat phase congruency is not only invariant to different pixelintensity mappings illumination and contrast changes butalso gives the Fourier components that are maximally inphase These all will lead to efficient fusion That is why weuse phase congruency for multimodal medical fusion

3 Proposed Multimodal Medical ImageFusion Framework

The framework of the proposed multimodal medical imagefusion algorithm is depicted in Figure 3 but before describingit the definition of local Log-Gabor energy in NSCT domainis first described as follows

31 Log-Gabor Energy in NSCTDomain Thehigh-frequencycoefficients in NSCT domain represent the detailed com-ponents of the source images such as the edges texturesand region boundaries [21] In general the coefficients withlarger absolute values correspond to the sharper brightness

4 Computational and Mathematical Methods in Medicine

Fusion rules

NSCT

Phase congruency

Fused high coefficients

Low coefficientHigh

coefficientsHigh

coefficients

Fused low coefficient

MaxLog-Gabor energy

Inverse NSCT

Fused image

Max phase congruency

NSCT

Low coefficient

Source images

Figure 3 The framework of the proposed fusion algorithm

in the image It is to be noted that the noise is also related tohigh-frequency coefficients and may cause miscalculation ofsharpness values and therefore affect the fusion performance[26] Furthermore human visual system is generally moresensitive to texture detail features than the value of a singlepixel

To overcome the defect mentioned above a novel high-frequency fusion rule based on local Log-Gabor energy isdesigned in this paper Gabor wavelet is a popular techniquethat has been extensively used to extract texture features[33] Log-Gabor filters are proposed based on Gabor filtersCompared with Gabor filters Log-Gabor filters cover theshortage of the high frequency of Gabor filter componentexpression and more in accord with human visual system[34] Therefore Log-Gabor wavelet can achieve optimalspatial orientation and wider spectrum information at thesame time and thus more truly reflect the frequency responseof the natural images and improve the performance in termsof the accuracy [35]

Under polar coordinates the Log-Gabor wavelet isexpressed as follows [36]

119892 (119891 120579) = expminus[ln (1198911198910)]

2

2[ln (1205901198910)]2 times expminus

(120579 minus 1205790)2

21205902120579

(2)

in which 1198910 is the center frequency of the Log-Gabor filter1205790 is the direction of the filter 120590 is used to determine thebandwidth of the radial filter and 120590120579 is used to determine thebandwidth of the orientation If 119892119906V

119896119897(119894 119895) correspond to Log-

Gabor wavelets in scale 119906 and direction V the signal responseis expressed as follows

119866119906V119896119897(119894 119895) = 119867119896119897 (119894 119895) lowast 119892

119906V119896119897(119894 119895) (3)

where 119867119896119897(119894 119895) is the coefficient located at (119894 119895) in high-frequency subimages of the source image 119860 or 119861 at the 119896thscale 119897th direction and lowast denotes convolution operationTheLog-Gabor energy of high-frequency subimages at the 119896thscale 119897th direction is expressed as follows

119864119896119897 (119894 119895) =

119880

sum

119906=1

119881

sum

V=1

radicreal(119866119906V119896119897(119894 119895))

2+ imag(119866119906V

119896119897(119894 119895))

2 (4)

in which real(119866119906V119896119897(119894 119895)) is the real part of 119866119906V

119896119897(119894 119895) and

imag(119866119906V119896119897(119894 119895)) is the imaginary part of 119866119906V

119896119897(119894 119895) The Log-

Gabor energy in NSCT domain at the local area around thepixel (119894 119895) is given as

LE119896119897 (119894 119895) =1

(2119872 + 1) (2119873 + 1)

119872

sum

119898=minus119872

119873

sum

119899=minus119873

119864119896119897 (119894 + 119898 119895 + 119899)

(5)

Computational and Mathematical Methods in Medicine 5

in which (2119872+ 1)(2119873 + 1) is the window size The proposeddefinition of the local Log-Gabor energy not only extractsmore useful features from high-frequency coefficients butalso keeps a well performance in noisy environment

32 Proposed Fusion Framework The proposed NSCT-basedimage fusion framework is discussed in this subsectionConsidering the input multimodal medical images (119860 and119861) are perfectly registered The framework of the proposedfusion method is shown Figure 3 and described as thefollowing three steps

Step 1 Perform 120579-level NSCT on 119860 and 119861 to obtain one low-frequency subimage and series of high-frequency subimagesat each level and direction 119897 that is 119860 119871119860 119867119860

119896119897 and 119861

119871119861 119867119861

119896119897 where 119871119860 119871119861 are the low-frequency subimages and

119867119860

119896119897119867119860119896119897represent the high-frequency subimages at level 119896 isin

[1 120579] in the orientation 119897

Step 2 Fuse low- and high-frequency subbands via thefollowing novel fusion rule to obtain composite low- andhigh-frequency subbands

The low-frequency coefficients represent the approxima-tion component of the source images The popular widelyused approach is to apply the averaging methods to producethe fused coefficients However this rule reduced contrastin the fused images and cannot give the fused subimage ofhigh quality for medical imageTherefore the criterion basedon the phase congruency that is introduced in Section 22 isemployed to fuse the low-frequency coefficients The fusionrule for the low-frequency subbands is defined as

119871119865(119894 119895)

=

119871119860(119894 119895) if PC119860120579 (119894 119895) gt PC119861120579 (119894 119895)

119871119861(119894 119895) if PC119860120579 (119894 119895) lt PC119861120579 (119894 119895)

05 (119871119860(119894 119895) + 119871

119861(119894 119895)) if PC119860120579 (119894 119895) = PC119861120579 (119894 119895)

(6)

where PC119860120579(119894 119895) PC119861120579(119894 119895) is the phase congruency extractedfrom low-frequency subimages of the source images119860 and 119861respectively

For the high-frequency coefficients the most commonfusion rule is selecting the coefficient with larger absolutevalues This rule does not take any consideration of thesurrounding pixels and cannot give the fused componentsof high quality for medical image Especially when thesource images contain noise the noise could be mistaken forfused coefficients and cause miscalculation of the sharpnessvalue Therefore the criterion based on Log-Gabor energyis introduced to fuse high-frequency coefficients The fusionrule for the high-frequency subbands is defined as

119867119865

119896119897(119894 119895) =

119867119860

119896119897(119894 119895) if LE119860

119896119897(119894 119895) ge LE119861

119896119897(119894 119895)

119867119861

119896119897(119894 119895) otherwise

(7)

where LE119860119896119897(119894 119895) and LE119861

119896119897(119894 119895) are the local Log-Gabor energy

extracted fromhigh-frequency subimages at the 119896th scale and119897th direction of source images 119860 and 119861 respectively

Step 3 Perform 120579-level by the inverseNSCTon the fused low-and high-frequency subimages The fused image is obtainedultimately in this way

4 The Experimental Results and Analysis

It is well known that different image quality metrics imply thevisual quality of images from different aspects but none ofthem can directly imply the quality In this paper we considerboth the visual representation and quantitative assessmentof the fused images For evaluation of the proposed fusionmethod we have considered five separate fusion performancemetrics defined as below

41 Evaluation Index System

411 Standard Deviation The standard deviation (STD) ofan image with size of1198720 times 1198730 is defined as [37]

Std = ( 1

1198720 times 1198730

1198720

sum

119894=1

1198730

sum

119895=1

(119865 (119894 119895) minus 120583)2)

12

(8)

where 119865(119894 119895) is the pixel value of the fused image at theposition (119894 119895) and 120583 is the mean value of the image The STDcan be used to estimate how widely the gray values spread inan image The larger the STD the better the result

412 Edge Based SimilarityMeasure The edge based similar-ity measure (119876119860119861119865) is proposed by Xydeas and Petrovic [38]The definition is given as

$$Q^{AB/F}=\frac{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[Q^{AF}(i,j)\,w^{A}(i,j)+Q^{BF}(i,j)\,w^{B}(i,j)\right]}{\sum_{i=1}^{M_{0}}\sum_{j=1}^{N_{0}}\left[w^{A}(i,j)+w^{B}(i,j)\right]}\tag{9}$$

where $w^{A}(i,j)$ and $w^{B}(i,j)$ are the corresponding gradient strengths for images $A$ and $B$, respectively. The definitions of $Q^{AF}(i,j)$ and $Q^{BF}(i,j)$ are given as

$$Q^{AF}(i,j)=Q_{a}^{AF}(i,j)\,Q_{g}^{AF}(i,j),\qquad Q^{BF}(i,j)=Q_{a}^{BF}(i,j)\,Q_{g}^{BF}(i,j)\tag{10}$$

where $Q_{a}^{xF}(i,j)$ and $Q_{g}^{xF}(i,j)$ are the edge strength and orientation preservation values at location $(i,j)$ for image $x$ ($A$ or $B$), respectively. The edge based similarity measure gives the similarity between the edges transferred from the input images to the fused image [26]. The larger the value, the better the fusion result.


4.1.3. Mutual Information. Mutual information (MI) [39] between the fused image $F$ and the source images $A$ and $B$ is defined as follows:

$$\mathrm{MI}=\mathrm{MI}_{AF}+\mathrm{MI}_{BF}$$

$$\mathrm{MI}_{AF}=\sum_{f=0}^{L}\sum_{a=0}^{L}p_{AF}(a,f)\log_{2}\!\left(\frac{p_{AF}(a,f)}{p_{A}(a)\,p_{F}(f)}\right)$$

$$\mathrm{MI}_{BF}=\sum_{f=0}^{L}\sum_{b=0}^{L}p_{BF}(b,f)\log_{2}\!\left(\frac{p_{BF}(b,f)}{p_{B}(b)\,p_{F}(f)}\right)\tag{11}$$

where $\mathrm{MI}_{xF}$ denotes the normalized mutual information between the fused image and the input image $A$ or $B$; $a$, $b$, $f \in [0, L]$, and $L$ is the number of bins; $p_{A}(a)$, $p_{B}(b)$, and $p_{F}(f)$ are the normalized gray level histograms of the source images and the fused image; and $p_{xF}(\cdot,f)$ is the joint gray level histogram between the fused image and each source image.

MI can indicate how much information the fused image $F$ conveys about the source images $A$ and $B$ [22]. Therefore, the greater the value of MI, the better the fusion effect.
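As an illustration, MI can be estimated from gray-level histograms. The sketch below follows Eq. (11) literally; the choice of $L=256$ bins for 8-bit images is an assumption about implementation details, not the authors' code:

```python
import numpy as np

def mutual_information(F, A, B, L=256):
    """MI between fused image F and source images A and B, per Eq. (11)."""
    def mi_pair(X, Y):
        # joint gray-level histogram, normalized to a joint distribution
        joint, _, _ = np.histogram2d(X.ravel(), Y.ravel(),
                                     bins=L, range=[[0, L], [0, L]])
        pXY = joint / joint.sum()
        pX = pXY.sum(axis=1, keepdims=True)   # marginal of the source image
        pY = pXY.sum(axis=0, keepdims=True)   # marginal of the fused image
        nz = pXY > 0                          # skip empty cells to avoid log(0)
        return np.sum(pXY[nz] * np.log2(pXY[nz] / (pX @ pY)[nz]))
    return mi_pair(A, F) + mi_pair(B, F)
```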

4.1.4. Cross Entropy. The cross entropy is defined as [8]

$$\mathrm{CE}=\sum_{l=0}^{L-1}P_{l}\log_{2}\frac{P_{l}}{Q_{l}}\tag{12}$$

where $P_{l}$ and $Q_{l}$ denote the gray level histograms of the source image and the fused image, respectively. The cross entropy is used to evaluate the difference between the source images and the fused image. The lower the value, the better the fusion result.
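A corresponding sketch for Eq. (12), assuming 8-bit images and normalized $L$-bin histograms; how the two per-source values are combined into a single reported figure is not restated here, and averaging them is a common choice (an assumption, not something the paper specifies at this point):

```python
import numpy as np

def cross_entropy(S, F, L=256):
    """Cross entropy between a source image S and the fused image F, Eq. (12)."""
    P, _ = np.histogram(S.ravel(), bins=L, range=(0, L))
    Q, _ = np.histogram(F.ravel(), bins=L, range=(0, L))
    P = P / P.sum()
    Q = Q / Q.sum()
    nz = (P > 0) & (Q > 0)                 # skip empty bins to avoid log(0)
    return np.sum(P[nz] * np.log2(P[nz] / Q[nz]))
```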

4.1.5. Spatial Frequency. Spatial frequency is defined as [40]

$$\mathrm{SF}=\sqrt{\mathrm{RF}^{2}+\mathrm{CF}^{2}}\tag{13}$$

where RF and CF are the row frequency and column frequency, respectively, defined as

$$\mathrm{RF}=\sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j)-F(i,j-1)\right]^{2}}$$

$$\mathrm{CF}=\sqrt{\frac{1}{M_{0}N_{0}}\sum_{i=0}^{M_{0}-1}\sum_{j=0}^{N_{0}-1}\left[F(i,j)-F(i-1,j)\right]^{2}}\tag{14}$$

The spatial frequency reflects the edge information of the fused image. Larger spatial frequency values indicate better image quality.
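Eqs. (13)-(14) reduce to root-mean-square first differences along rows and columns. A minimal sketch follows; it normalizes by the number of difference terms rather than exactly $M_0 N_0$, a negligible discrepancy at 256 × 256:

```python
import numpy as np

def spatial_frequency(F):
    """Spatial frequency of image F, per Eqs. (13)-(14)."""
    F = np.asarray(F, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(F, axis=1) ** 2))  # row frequency (horizontal diffs)
    cf = np.sqrt(np.mean(np.diff(F, axis=0) ** 2))  # column frequency (vertical diffs)
    return np.sqrt(rf ** 2 + cf ** 2)
```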

4.2. Experiments on Multimodal Medical Image Fusion. To evaluate the performance of the proposed image fusion approach, the experiments are performed on three groups of multimodal medical images. These images are characterized in two distinct pairs: (1) CT and MRI; (2) MR-T1 and MR-T2. The images in Figures 4(a1)-4(b1) and 4(a2)-4(b2) are CT and MRI images, whereas Figures 4(a3)-4(b3) are a T1-weighted MR image (MR-T1) and a T2-weighted MR image (MR-T2). All images have the same size of 256 × 256 pixels with 256-level gray scale. For all these image groups, the results of the proposed fusion framework are compared with those of the traditional discrete wavelet transform (DWT) [13, 16], the second generation curvelet transform (fast discrete curvelet transform, FDCT) [41, 42], the dual tree complex wavelet transform (DTCWT) [4], and the nonsubsampled Contourlet transform (NSCT-1 and NSCT-2) based methods. The high-frequency and low-frequency coefficients of the DWT, FDCT, DTCWT, and NSCT-1 based methods are merged by the popular, widely used rule of selecting the coefficient with the larger absolute value and by the averaging rule, respectively (the average-maximum rule). The NSCT-2 based method uses the fusion rules proposed by Bhatnagar et al. in [26]. In order to perform a fair comparison, the source images are all decomposed into the same number of levels (3) for these methods, except for the FDCT method. For the DWT method, the images are decomposed using the DBSS (2, 2) wavelet. For implementing NSCT, "9-7" filters and "pkva" filters (how to set the filters can be seen in [43]) are used as the pyramidal and directional filters, respectively.

4.2.1. Subjective Evaluation. The first pair of medical images consists of two groups of brain CT and MRI images on different aspects, shown in Figures 4(a1), 4(b1) and 4(a2), 4(b2), respectively. It can easily be seen that the CT image shows the dense structure while the MRI provides information about soft tissues. The fused images obtained from DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 are shown in Figures 4(c1)–4(g1) and 4(c2)–4(g2), respectively. The results of the proposed fusion method are shown in Figures 4(h1) and 4(h2). On comparing these results, it can easily be observed that the proposed method outperforms those fusion methods and gives a good visual representation of the fused image.

The second pair of medical images consists of MR-T1 and MR-T2 images, shown in Figures 4(a3) and 4(b3). The comparison of the DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and proposed methods, shown in Figures 4(c3)–4(h3), clearly implies that the fusion result of the proposed method has better quality and contrast than the other methods.

Similarly, on observing the noticeable improvement emphasized in Figure 4 by the red arrows and the analysis above, one can easily verify that, again, the proposed method is superior in terms of visual representation to the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

4.2.2. Objective Evaluation. For objective evaluation of the fusion results shown in Figure 4, we have used five fusion metrics: cross entropy, spatial frequency, STD, $Q^{AB/F}$, and MI. The quantitative comparison of cross entropy and spatial frequency for these fused images is given visually in Figures 5 and 6, and the other metrics are given in Table 1.

On observing Figure 5, one can easily see that all three results of the proposed scheme have lower values of cross entropy than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.


Figure 4: Source multimodal medical images: (a1), (b1) image group 1 (CT and MRI); (a2), (b2) image group 2 (CT and MRI); (a3), (b3) image group 3 (MR-T1 and MR-T2). Fused images from: (c1), (c2), (c3) the DWT based method; (d1), (d2), (d3) the FDCT based method; (e1), (e2), (e3) the DTCWT based method; (f1), (f2), (f3) the NSCT-1 based method; (g1), (g2), (g3) the NSCT-2 based method; (h1), (h2), (h3) our proposed method.


Table 1: Comparison on quantitative evaluation of different methods for the first set of medical images.

Images                            Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed method
Image group 1 (CT and MRI)        STD         41.160  39.170  40.132  40.581  54.641  58.476
                                  Q^{AB/F}    0.618   0.592   0.652   0.683   0.427   0.716
                                  MI          2.371   2.029   2.393   2.438   1.886   2.580
Image group 2 (CT and MRI)        STD         72.260  70.572  71.407  71.733  80.378  83.334
                                  Q^{AB/F}    0.567   0.584   0.606   0.622   0.450   0.618
                                  MI          3.137   3.138   3.198   3.235   2.915   3.302
Image group 3 (MR-T1 and MR-T2)   STD         37.358  36.856  37.358  37.277  39.023  40.630
                                  Q^{AB/F}    0.635   0.647   0.668   0.682   0.589   0.689
                                  MI          3.209   3.142   3.302   3.347   3.349   3.469

Figure 5: Comparison on cross entropy of different methods and images (Group 1, Group 2, Group 3).

The cross entropy is used to evaluate the difference between the source images and the fused image; therefore, the lower value corresponds to the better fusion result.

On observing Figure 6, two of the spatial frequency values of the fused images obtained by the proposed method are the highest, and the remaining one is 6.447, which is close to the highest value of 6.581. Observation of Table 1 yields that all three results of the proposed fusion scheme have higher values of STD, $Q^{AB/F}$, and MI than any of the other methods, except one value of $Q^{AB/F}$ (image group 2), which is the second best. However, an overall comparison shows the superiority of the proposed fusion scheme.

4.2.3. Combined Evaluation. Since the subjective and objective evaluations separately are not able to fully examine fusion results, we have combined them. From these figures (Figures 4–6) and table (Table 1), it is clear that the proposed method not only preserves most of the source images' characteristics and information, but also improves the definition and the spatial quality better than the existing methods, which can be justified by the optimum values of the objective criteria, except one value of spatial frequency (image group 3) and one value of $Q^{AB/F}$ (image group 2). Consider the example of the first set of images: the five criteria values of the proposed method are 1.323 (cross entropy), 7.050 (spatial frequency), 58.476 (STD), 0.716 ($Q^{AB/F}$), and 2.580 (MI), respectively. Each of them is the optimal one in the first set of experiments.

Figure 6: Comparison on spatial frequencies of different methods and images (Group 1, Group 2, Group 3).

Among these methods, the NSCT-2 based method gives poor results when compared to the proposed NSCT-based method. This stems from the fact that the high-frequency fusion rule of the NSCT-2 based method is not able to extract the detail information in the high frequencies effectively. Also, by carefully looking at the outputs of the proposed NSCT-based method (Figures 4(h1), 4(h2), and 4(h3)), we can find that they have more contrast and more spatial resolution than the outputs of the NSCT-2 based method (highlighted by the red arrows) and the other methods. The main reason behind the better performance is that the proposed fusion rules for the low- and high-frequency coefficients can effectively extract prominent and detail information from the source images. Therefore, it can be concluded that the proposed method is better than the existing methods.

4.3. Fusion of Multimodal Medical Noisy Images and a Clinical Example. To evaluate the performance of the proposed method in a noisy environment, the input image group 1 has been additionally corrupted with Gaussian noise with a standard deviation of 5% (shown in Figures 7(a) and 7(b)). In addition, a clinical application on noninvasive diagnosis of neoplastic disease is given in the last subsection.

4.3.1. Fusion of Multimodal Medical Noisy Images. For comparison, apart from visual observation, objective criteria on STD, MI, and $Q^{AB/F}$ are used to evaluate how much clear or detailed information of the source images is transferred to the fused images.


Figure 7: The multimodal medical images with noise: (a) CT image with 5% noise; (b) MRI image with 5% noise; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 2: Comparison on quantitative evaluation of different methods for the noisy medical images.

Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed method
STD         41.893  40.044  40.995  41.338  52.533  57.787
Q^{AB/F}    0.592   0.579   0.619   0.637   0.269   0.640
MI          2.785   2.539   2.826   2.882   1.816   2.914

However, these criteria may not effectively evaluate the performance of the fusion methods in terms of noise transmission. For further comparison, the Peak Signal to Noise Ratio (PSNR), a ratio between the maximum possible power of a signal and the power of the noise that affects its fidelity [44], is used. The larger the value of PSNR, the less the image distortion [45]. PSNR is formulated as

$$\mathrm{PSNR}=10\,\lg\!\left(\frac{255^{2}}{\mathrm{RMSE}^{2}}\right)\tag{15}$$

where RMSE denotes the Root Mean Square Error between the fused image and the reference image. The reference image in the following experiment is selected as Figure 4(h1), which was shown above to give the best performance compared to the other images.
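For completeness, a sketch of Eq. (15) for 8-bit images, with the reference image R standing in for the noise-free fusion result:

```python
import numpy as np

def psnr(F, R):
    """PSNR between fused image F and reference image R, per Eq. (15)."""
    F = np.asarray(F, dtype=np.float64)
    R = np.asarray(R, dtype=np.float64)
    mse = np.mean((F - R) ** 2)            # squared RMSE
    return 10.0 * np.log10(255.0 ** 2 / mse)
```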

Figure 7 illustrates the fusion results obtained by the different methods. The comparison of the images fused by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method, shown in Figures 7(c)–7(h), clearly implies that the fused image by the proposed method has better quality and contrast than the other methods. Figure 8 shows the values of PSNR of the different methods in fusing the noisy images.

Figure 8: Comparison on PSNR of different methods for the noisy medical images (the proposed method reaches 23.704).

One can observe that the proposed method has higher values of PSNR than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods. Table 2 gives the quantitative results of the fused images and shows that the values of STD, $Q^{AB/F}$, and MI are also the highest of all six methods. From the analysis above, we can observe that the proposed scheme provides the best performance and outperforms the other algorithms.


Figure 9: Brain images of the woman with recurrent tumor: (a) MR-T1 image; (b) MR-T2 image; fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 3: Comparison on quantitative evaluation of different methods for the clinical medical images.

Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed method
STD         64.433  62.487  63.194  63.652  60.722  67.155
Q^{AB/F}    0.541   0.565   0.578   0.597   0.411   0.624
MI          2.507   2.455   2.540   2.584   2.392   2.624

In addition, the comparison with the result of the NSCT-1 method, which uses the average-maximum rule, demonstrates the validity of the proposed fusion rules in a noisy environment.

4.3.2. A Clinical Example on Noninvasive Diagnosis. In order to demonstrate the practical value of the proposed scheme in medical imaging, one clinical case on neoplastic diagnosis is considered, where the MR-T1 and MR-T2 modalities are used. The images have been downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). Figures 9(a)-9(b) show the recurrent tumor case of a 51-year-old woman who sought medical attention because of gradually increasing right hemiparesis (weakness) and hemianopia (visual loss). At craniotomy, left parietal anaplastic astrocytoma was found. A right frontal lesion was biopsied. A large region of mixed signal on the MR-T1 and MR-T2 images gives signs of the possibility of active tumor (highlighted by the red arrows).

Figures 9(c)–9(h) show the fused images by DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method. It is obvious that the fused image by the proposed method has better contrast and sharpness of the active tumor (highlighted by the red arrows) than the other methods. Table 3 shows the quantitative evaluation of the different methods for the clinical medical images. The values of the proposed method are optimum in terms of STD, MI, and $Q^{AB/F}$. From Figure 9 and Table 3, we can draw the same conclusion: the proposed scheme provides the best performance and outperforms the other algorithms.

5. Conclusions

Multimodal medical image fusion plays an important role in clinical applications, but the real challenge is to obtain a visually enhanced image through the fusion process. In this paper, a novel and effective image fusion framework based on NSCT and Log-Gabor energy is proposed. The potential advantages include: (1) NSCT is more suitable for image fusion because of its advantages such as multiresolution, multidirection, and shift-invariance; (2) a new couple of fusion rules based on phase congruency and Log-Gabor energy are used to preserve more useful information in the fused image, improve the quality of the fused images, and overcome the limitations of the traditional fusion rules; and (3) the proposed method can provide better performance than the current fusion methods whether the source images are clean or noisy. In the experiments, five groups of multimodal medical images, including one group with noise and one clinical example of a woman affected with recurrent tumor, are fused by using traditional fusion methods and the proposed framework. The subjective and objective comparisons clearly demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with less information distortion than other fusion methods. In the future, we plan to design a pure C++ platform to reduce the time cost and to extend our method to 3D or 4D medical image fusion.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61262034, 61462031, and 61473221), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Doctoral Fund of the Ministry of Education of China (no. 20120201120071), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), by the Science and Technology Application Project of Jiangxi Province (no. KJLD14031), by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ14334), and by the Fundamental Research Funds for the Central Universities of China.

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.

[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.

[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.

[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.

[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.

[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.

[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.

[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.

[9] Y. Yang, S. Y. Huang, F. J. Gao, et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.

[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.

[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.

[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.

[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.

[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.

[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.

[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.

[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.

[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.

[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.

[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.

[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.

[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.

[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.

[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.

[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.

[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.

[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.

[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.

[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.

[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.

[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.

[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.

[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.

[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.

[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.

[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.

[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.

[41] E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.

[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.

[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.

[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.




[37] X Zhang X Li and Y Feng ldquoA new image fusion performancemeasure using Riesz transformsrdquoOptik vol 125 no 3 pp 1427ndash1433 2014

[38] C S Xydeas and V Petrovic ldquoObjective image fusion perfor-mance measurerdquo Electronics Letters vol 36 no 4 pp 308ndash3092000

[39] B Yang and S Li ldquoPixel-level image fusion with simultaneousorthogonal matching pursuitrdquo Information Fusion vol 13 no 1pp 10ndash19 2012

[40] QMiao C Shi P XuM Yang and Y Shi ldquoA novel algorithm ofimage fusion using shearletsrdquoOptics Communications vol 284no 6 pp 1540ndash1547 2011

[41] E Candes L Demanet D Donoho and L Ying ldquoFast discretecurvelet transformsrdquo Multiscale Modeling and Simulation vol5 no 3 pp 861ndash899 2006

[42] M Fu and C Zhao ldquoFusion of infrared and visible imagesbased on the second generation curvelet transformrdquo Journal ofInfrared andMillimeterWaves vol 28 no 4 pp 254ndash258 2009

[43] S Li B Yang and J Hu ldquoPerformance comparison of differentmulti-resolution transforms for image fusionrdquo InformationFusion vol 12 no 2 pp 74ndash84 2011

[44] Y Phamila and R Amutha ldquoDiscrete Cosine Transform basedfusion of multi-focus images for visual sensor networksrdquo SignalProcessing vol 95 pp 161ndash170 2014

[45] L Wang B Li and L Tian ldquoMulti-modal medical volumetricdata fusion using 3D discrete shearlet transform and global-to-local rulerdquo IEEETransactions on Biomedical Engineering vol 61no 1 pp 197ndash206 2014

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 5: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

Computational and Mathematical Methods in Medicine 5

in which $(2M+1) \times (2N+1)$ is the window size. The proposed definition of the local Log-Gabor energy not only extracts more useful features from the high-frequency coefficients but also performs well in noisy environments.

3.2. Proposed Fusion Framework. The proposed NSCT-based image fusion framework is discussed in this subsection, assuming that the input multimodal medical images ($A$ and $B$) are perfectly registered. The framework of the proposed fusion method is shown in Figure 3 and consists of the following three steps.

Step 1. Perform $\theta$-level NSCT on $A$ and $B$ to obtain one low-frequency subimage and a series of high-frequency subimages at each level and direction $l$, that is, $A: \{L_A, H^A_{k,l}\}$ and $B: \{L_B, H^B_{k,l}\}$, where $L_A$, $L_B$ are the low-frequency subimages and $H^A_{k,l}$, $H^B_{k,l}$ represent the high-frequency subimages at level $k \in [1, \theta]$ in the orientation $l$.

Step 2. Fuse the low- and high-frequency subbands via the following novel fusion rules to obtain the composite low- and high-frequency subbands.

The low-frequency coefficients represent the approximation component of the source images. The most widely used approach is to average them to produce the fused coefficients. However, this rule reduces contrast in the fused images and cannot yield fused subimages of high quality for medical images. Therefore, the criterion based on phase congruency, introduced in Section 2.2, is employed to fuse the low-frequency coefficients. The fusion rule for the low-frequency subbands is defined as

$$
L_F(i,j) =
\begin{cases}
L_A(i,j), & \text{if } PC^A_\theta(i,j) > PC^B_\theta(i,j) \\
L_B(i,j), & \text{if } PC^A_\theta(i,j) < PC^B_\theta(i,j) \\
0.5\,\bigl(L_A(i,j) + L_B(i,j)\bigr), & \text{if } PC^A_\theta(i,j) = PC^B_\theta(i,j)
\end{cases}
\tag{6}
$$

where $PC^A_\theta(i,j)$ and $PC^B_\theta(i,j)$ are the phase congruency values extracted from the low-frequency subimages of the source images $A$ and $B$, respectively.

For the high-frequency coefficients, the most common fusion rule is to select the coefficient with the larger absolute value. This rule does not take the surrounding pixels into consideration and cannot yield fused components of high quality for medical images. Especially when the source images contain noise, the noise can be mistaken for fused coefficients and cause a miscalculation of the sharpness value. Therefore, the criterion based on Log-Gabor energy is introduced to fuse the high-frequency coefficients. The fusion rule for the high-frequency subbands is defined as

$$
H^F_{k,l}(i,j) =
\begin{cases}
H^A_{k,l}(i,j), & \text{if } LE^A_{k,l}(i,j) \ge LE^B_{k,l}(i,j) \\
H^B_{k,l}(i,j), & \text{otherwise}
\end{cases}
\tag{7}
$$

where $LE^A_{k,l}(i,j)$ and $LE^B_{k,l}(i,j)$ are the local Log-Gabor energies extracted from the high-frequency subimages at the $k$th scale and $l$th direction of the source images $A$ and $B$, respectively.

Step 3. Perform the $\theta$-level inverse NSCT on the fused low- and high-frequency subimages. The fused image is ultimately obtained in this way.
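To make the three steps concrete, the following is a minimal sketch of the framework in Python/NumPy. The helpers nsct_decompose, nsct_reconstruct, phase_congruency, and log_gabor_energy are hypothetical names for the operations the paper describes (no code is given in the paper); the sketch only shows how the rules in (6) and (7) act on the subbands, not the authors' implementation.

```python
import numpy as np

def fuse(A, B, levels=3):
    # Step 1: theta-level NSCT decomposition of both registered sources.
    # nsct_decompose is assumed to return (lowpass, {(k, l): highpass}).
    L_A, H_A = nsct_decompose(A, levels)
    L_B, H_B = nsct_decompose(B, levels)

    # Step 2a: low-frequency rule of Eq. (6), driven by phase congruency maps.
    pc_A, pc_B = phase_congruency(L_A), phase_congruency(L_B)
    L_F = np.where(pc_A > pc_B, L_A,
          np.where(pc_A < pc_B, L_B, 0.5 * (L_A + L_B)))

    # Step 2b: high-frequency rule of Eq. (7), driven by local Log-Gabor energy.
    H_F = {}
    for key in H_A:
        le_A, le_B = log_gabor_energy(H_A[key]), log_gabor_energy(H_B[key])
        H_F[key] = np.where(le_A >= le_B, H_A[key], H_B[key])

    # Step 3: inverse NSCT on the fused subbands yields the fused image.
    return nsct_reconstruct(L_F, H_F, levels)
```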

4. Experimental Results and Analysis

It is well known that different image quality metrics reflect the visual quality of images from different aspects, but none of them alone can directly measure the overall quality. In this paper, we consider both the visual representation and the quantitative assessment of the fused images. For the evaluation of the proposed fusion method, we have considered five separate fusion performance metrics, defined below.

4.1. Evaluation Index System

4.1.1. Standard Deviation. The standard deviation (STD) of an image of size $M_0 \times N_0$ is defined as [37]

$$
\mathrm{Std} = \left( \frac{1}{M_0 \times N_0} \sum_{i=1}^{M_0} \sum_{j=1}^{N_0} \bigl( F(i,j) - \mu \bigr)^2 \right)^{1/2}
\tag{8}
$$

where $F(i,j)$ is the pixel value of the fused image at position $(i,j)$ and $\mu$ is the mean value of the image. The STD estimates how widely the gray values spread in an image: the larger the STD, the better the result.
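For illustration, Eq. (8) transcribes to a few lines of NumPy. This sketch is ours (the paper provides no code), and the function name std_metric is hypothetical.

```python
import numpy as np

def std_metric(F):
    # Eq. (8): root mean squared deviation of gray values from the image mean.
    F = F.astype(np.float64)
    return np.sqrt(np.mean((F - F.mean()) ** 2))
```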

4.1.2. Edge Based Similarity Measure. The edge based similarity measure ($Q^{AB/F}$) was proposed by Xydeas and Petrovic [38]. Its definition is

$$
Q^{AB/F} = \frac{\sum_{i=1}^{M_0} \sum_{j=1}^{N_0} \left[ Q^{AF}(i,j)\, w^A(i,j) + Q^{BF}(i,j)\, w^B(i,j) \right]}{\sum_{i=1}^{M_0} \sum_{j=1}^{N_0} \left[ w^A(i,j) + w^B(i,j) \right]}
\tag{9}
$$

where $w^A(i,j)$ and $w^B(i,j)$ are the corresponding gradient strengths for images $A$ and $B$, respectively. $Q^{AF}(i,j)$ and $Q^{BF}(i,j)$ are defined as

$$
Q^{AF}(i,j) = Q^{AF}_a(i,j)\, Q^{AF}_g(i,j), \qquad
Q^{BF}(i,j) = Q^{BF}_a(i,j)\, Q^{BF}_g(i,j)
\tag{10}
$$

where $Q^{xF}_a(i,j)$ and $Q^{xF}_g(i,j)$ are the edge strength and orientation preservation values at location $(i,j)$ for image $x$ ($A$ or $B$), respectively. The edge based similarity measure gives the similarity between the edges transferred from the input images to the fused image [26]: the larger the value, the better the fusion result.


4.1.3. Mutual Information. Mutual information (MI) [39] between the fused image $F$ and the source images $A$ and $B$ is defined as

$$
\mathrm{MI} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF},
$$
$$
\mathrm{MI}_{AF} = \sum_{f=0}^{L} \sum_{a=0}^{L} p_{AF}(a,f) \log_2 \left( \frac{p_{AF}(a,f)}{p_A(a)\, p_F(f)} \right),
$$
$$
\mathrm{MI}_{BF} = \sum_{f=0}^{L} \sum_{b=0}^{L} p_{BF}(b,f) \log_2 \left( \frac{p_{BF}(b,f)}{p_B(b)\, p_F(f)} \right)
\tag{11}
$$

where $\mathrm{MI}_{xF}$ denotes the normalized mutual information between the fused image and the input image $x$ ($A$ or $B$); $a, b, f \in [0, L]$, and $L$ is the number of bins. $p_A(a)$, $p_B(b)$, and $p_F(f)$ are the normalized gray level histograms of the source images and the fused image, and $p_{xF}(\cdot, f)$ is the joint gray level histogram between the fused image and each source image.

MI indicates how much information the fused image $F$ conveys about the source images $A$ and $B$ [22]. Therefore, the greater the value of MI, the better the fusion effect.
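Eq. (11) can be evaluated directly from a joint gray-level histogram. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def mutual_information(X, F, bins=256):
    # MI between one source image X and the fused image F, per Eq. (11).
    joint, _, _ = np.histogram2d(X.ravel(), F.ravel(), bins=bins)
    p_xf = joint / joint.sum()             # normalized joint histogram
    p_x = p_xf.sum(axis=1, keepdims=True)  # marginal of X
    p_f = p_xf.sum(axis=0, keepdims=True)  # marginal of F
    nz = p_xf > 0                          # skip empty bins to avoid log(0)
    return np.sum(p_xf[nz] * np.log2(p_xf[nz] / (p_x @ p_f)[nz]))

# Total metric: MI = mutual_information(A, F) + mutual_information(B, F)
```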

4.1.4. Cross Entropy. The cross entropy is defined as [8]

$$
\mathrm{CE} = \sum_{l=0}^{L-1} P_l \log_2 \frac{P_l}{Q_l}
\tag{12}
$$

where $P_l$ and $Q_l$ denote the gray level histograms of the source image and the fused image, respectively. The cross entropy evaluates the difference between the source images and the fused image; a lower value corresponds to a better fusion result.
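A corresponding sketch for Eq. (12); skipping empty histogram bins to avoid division by zero is a common practical choice and our assumption, not something specified in the paper.

```python
import numpy as np

def cross_entropy(source, fused, L=256):
    # Eq. (12): CE between normalized gray-level histograms P (source) and Q (fused).
    P, _ = np.histogram(source, bins=L, range=(0, L))
    Q, _ = np.histogram(fused, bins=L, range=(0, L))
    P = P / P.sum()
    Q = Q / Q.sum()
    nz = (P > 0) & (Q > 0)  # restrict to bins where both histograms are nonzero
    return np.sum(P[nz] * np.log2(P[nz] / Q[nz]))
```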

4.1.5. Spatial Frequency. Spatial frequency is defined as [40]

$$
\mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2}
\tag{13}
$$

where RF and CF are the row frequency and column frequency, respectively, defined as

$$
\mathrm{RF} = \sqrt{ \frac{1}{M_0 N_0} \sum_{i=0}^{M_0-1} \sum_{j=1}^{N_0-1} \bigl[ F(i,j) - F(i,j-1) \bigr]^2 },
$$
$$
\mathrm{CF} = \sqrt{ \frac{1}{M_0 N_0} \sum_{i=1}^{M_0-1} \sum_{j=0}^{N_0-1} \bigl[ F(i,j) - F(i-1,j) \bigr]^2 }
\tag{14}
$$

The spatial frequency reflects the edge information of the fused image; larger spatial frequency values indicate better image quality.
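Eqs. (13)-(14) transcribe directly. The sketch below keeps the $1/(M_0 N_0)$ normalization as printed in Eq. (14):

```python
import numpy as np

def spatial_frequency(F):
    F = F.astype(np.float64)
    m, n = F.shape
    rf = np.sqrt(np.sum(np.diff(F, axis=1) ** 2) / (m * n))  # row frequency, Eq. (14)
    cf = np.sqrt(np.sum(np.diff(F, axis=0) ** 2) / (m * n))  # column frequency, Eq. (14)
    return np.sqrt(rf ** 2 + cf ** 2)                        # Eq. (13)
```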

4.2. Experiments on Multimodal Medical Image Fusion. To evaluate the performance of the proposed image fusion approach, experiments are performed on three groups of multimodal medical images. These images form two distinct kinds of pairs: (1) CT and MRI; (2) MR-T1 and MR-T2. The images in Figures 4(a1)-4(b1) and 4(a2)-4(b2) are CT and MRI images, whereas Figures 4(a3)-4(b3) are a T1-weighted MR image (MR-T1) and a T2-weighted MR image (MR-T2). All images have the same size of 256 × 256 pixels with 256-level gray scale. For all these image groups, the results of the proposed fusion framework are compared with those of the traditional discrete wavelet transform (DWT) [13, 16], the second generation curvelet transform (fast discrete curvelet transform, FDCT) [41, 42], the dual tree complex wavelet transform (DTCWT) [4], and the nonsubsampled Contourlet transform (NSCT-1 and NSCT-2) based methods. The high-frequency and low-frequency coefficients of the DWT, FDCT, DTCWT, and NSCT-1 based methods are merged by the widely used rule of selecting the coefficient with the larger absolute value and by the averaging rule, respectively (the average-maximum rule). The NSCT-2 based method uses the fusion rules proposed by Bhatnagar et al. in [26]. In order to perform a fair comparison, the source images are all decomposed into the same number of levels, 3, for all methods except FDCT. For the DWT method the images are decomposed using the DBSS (2, 2) wavelet. For implementing NSCT, "9-7" filters and "pkva" filters (how to set the filters is described in [43]) are used as the pyramidal and directional filters, respectively.

4.2.1. Subjective Evaluation. The first pair of medical images comprises two groups of brain CT and MRI images of different aspects, shown in Figures 4(a1), 4(b1) and 4(a2), 4(b2), respectively. It can easily be seen that the CT image shows the dense structures while the MRI provides information about soft tissues. The fused images obtained from DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 are shown in Figures 4(c1)-4(g1) and 4(c2)-4(g2), respectively. The results of the proposed fusion method are shown in Figures 4(h1) and 4(h2). On comparing these results, it can easily be observed that the proposed method outperforms those fusion methods and gives a good visual representation of the fused image.

The second pair of medical images are MR-T1 and MR-T2 images, shown in Figures 4(a3) and 4(b3). The comparison of the DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and proposed methods, shown in Figures 4(c3)-4(h3), clearly implies that the fusion result of the proposed method has better quality and contrast than the other methods.

Similarly, on observing the noticeable improvements emphasized in Figure 4 by the red arrows together with the analysis above, one can easily verify that the proposed method is again superior in terms of visual representation to the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

4.2.2. Objective Evaluation. For the objective evaluation of the fusion results shown in Figure 4, we have used five fusion metrics: cross entropy, spatial frequency, STD, $Q^{AB/F}$, and MI. The quantitative comparison of cross entropy and spatial frequency for these fused images is given visually by Figures 5-6, and the other metrics are given in Table 1.

On observing Figure 5, one can easily see that all three results of the proposed scheme have lower values of cross entropy than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.


Figure 4: Source multimodal medical images: (a1), (b1) image group 1 (CT and MRI); (a2), (b2) image group 2 (CT and MRI); (a3), (b3) image group 3 (MR-T1 and MR-T2). Fused images from: (c1), (c2), (c3) the DWT based method; (d1), (d2), (d3) the FDCT based method; (e1), (e2), (e3) the DTCWT based method; (f1), (f2), (f3) the NSCT-1 based method; (g1), (g2), (g3) the NSCT-2 based method; (h1), (h2), (h3) our proposed method.


Table 1: Comparison on quantitative evaluation of different methods for the first set of medical images.

Images                           Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed
Image group 1 (CT and MRI)       STD         41.160  39.170  40.132  40.581  54.641  58.476
                                 Q^{AB/F}    0.618   0.592   0.652   0.683   0.427   0.716
                                 MI          2.371   2.029   2.393   2.438   1.886   2.580
Image group 2 (CT and MRI)       STD         72.260  70.572  71.407  71.733  80.378  83.334
                                 Q^{AB/F}    0.567   0.584   0.606   0.622   0.450   0.618
                                 MI          3.137   3.138   3.198   3.235   2.915   3.302
Image group 3 (MR-T1 and MR-T2)  STD         37.358  36.856  37.358  37.277  39.023  40.630
                                 Q^{AB/F}    0.635   0.647   0.668   0.682   0.589   0.689
                                 MI          3.209   3.142   3.302   3.347   3.349   3.469

Figure 5: Comparison on cross entropy of different methods and images (bar chart; Groups 1-3; methods: DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method).

The cross entropy evaluates the difference between the source images and the fused image; therefore, the lower value corresponds to the better fusion result.

On observing Figure 6, two of the spatial frequency values of the fused images obtained by the proposed method are the highest, and the other one is 6.447, which is close to the highest value, 6.581. Observation of Table 1 shows that all three results of the proposed fusion scheme have higher values of STD, $Q^{AB/F}$, and MI than any of the other methods, except that one value of $Q^{AB/F}$ (image group 2) is the second best. However, an overall comparison shows the superiority of the proposed fusion scheme.

4.2.3. Combined Evaluation. Since the subjective and objective evaluations separately are not sufficient to examine the fusion results, we have combined them. From these figures (Figures 4-6) and Table 1, it is clear that the proposed method not only preserves most of the characteristics and information of the source images but also improves the definition and spatial quality better than the existing methods, which is justified by the optimum values of the objective criteria except one value of spatial frequency (image group 3) and one value of $Q^{AB/F}$ (image group 2). Consider the example of the first set of images: the five criteria values of the proposed method are 1.323 (cross entropy), 7.050 (spatial frequency), 58.476 (STD), 0.716 ($Q^{AB/F}$), and 2.580 (MI), respectively; each of them is the optimal one in the first set of experiments.

Figure 6: Comparison on spatial frequencies of different methods and images (bar chart; Groups 1-3; methods: DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and the proposed method).

Among these methods, the NSCT-2 based method also gives poor results compared with the proposed NSCT-based method. This stems from the fact that the high-frequency fusion rule of the NSCT-2 based method is not able to extract the detail information in the high frequencies effectively. Also, by carefully examining the outputs of the proposed NSCT-based method (Figures 4(h1), 4(h2), and 4(h3)), we can see that they show more contrast and higher spatial resolution than the outputs of the NSCT-2 based method (highlighted by the red arrows) and the other methods. The main reason behind the better performance is that the proposed fusion rules for the low- and high-frequency coefficients can effectively extract prominent and detail information from the source images. Therefore, it can be concluded that the proposed method is better than the existing methods.

4.3. Fusion of Multimodal Medical Noisy Images and a Clinical Example. To evaluate the performance of the proposed method in a noisy environment, the input image group 1 has additionally been corrupted with Gaussian noise with a standard deviation of 5% (shown in Figures 7(a) and 7(b)). In addition, a clinical application to the noninvasive diagnosis of neoplastic disease is given in the last subsection.

4.3.1. Fusion of Multimodal Medical Noisy Images. For comparison, apart from visual observation, the objective criteria STD, MI, and $Q^{AB/F}$ are used to evaluate how much clear or detailed information of the source images is transferred to the fused images. However, these criteria cannot effectively evaluate the performance of the fusion methods in terms of noise transmission.


Figure 7: The multimodal medical images with noise: (a) CT image with 5% noise; (b) MRI image with 5% noise. Fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 2: Comparison on quantitative evaluation of different methods for the noisy medical images.

Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed
STD         41.893  40.044  40.995  41.338  52.533  57.787
Q^{AB/F}    0.592   0.579   0.619   0.637   0.269   0.640
MI          2.785   2.539   2.826   2.882   1.816   2.914

For further comparison, the Peak Signal to Noise Ratio (PSNR), a ratio between the maximum possible power of a signal and the power of the noise that affects its fidelity [44], is used. The larger the value of PSNR, the less the image distortion [45]. PSNR is formulated as

$$
\mathrm{PSNR} = 10 \lg \frac{255^2}{\mathrm{RMSE}^2}
\tag{15}
$$

where RMSE denotes the root mean square error between the fused image and the reference image. The reference image in the following experiment is selected as Figure 4(h1), which was shown above to give the best performance compared with the other images.

Figure 7 illustrates the fusion results obtained by the different methods. The comparison of the images fused by the DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and proposed methods, shown in Figures 7(c)-7(h), clearly implies that the image fused by the proposed method has better quality and contrast than those of the other methods. Figure 8 shows the PSNR values of the different methods when fusing the noisy images. One can observe that the proposed method has a higher PSNR than any of the DWT, FDCT, DTCWT, NSCT-1, and NSCT-2 fusion methods.

Figure 8: Comparison on PSNR of different methods for the noisy medical images (the labeled value for the proposed method is 23.704).

Table 2 gives the quantitative results for the fused images and shows that the values of STD, $Q^{AB/F}$, and MI of the proposed method are also the highest of all six methods. From the analysis above, we can again observe that the proposed scheme provides the best performance and outperforms the other algorithms. In addition, the comparison with the result of the NSCT-1 method using the average-maximum rule demonstrates the validity of the proposed fusion rules in a noisy environment.


Figure 9: Brain images of the woman with a recurrent tumor: (a) MR-T1 image; (b) MR-T2 image. Fused images by (c) DWT, (d) FDCT, (e) DTCWT, (f) NSCT-1, (g) NSCT-2, and (h) the proposed method.

Table 3: Comparison on quantitative evaluation of different methods for the clinical medical images.

Evaluation  DWT     FDCT    DTCWT   NSCT-1  NSCT-2  Proposed
STD         64.433  62.487  63.194  63.652  60.722  67.155
Q^{AB/F}    0.541   0.565   0.578   0.597   0.411   0.624
MI          2.507   2.455   2.540   2.584   2.392   2.624


4.3.2. A Clinical Example on Noninvasive Diagnosis. In order to demonstrate the practical value of the proposed scheme in medical imaging, one clinical case of neoplastic diagnosis is considered, in which the MR-T1 and MR-T2 modalities are used. The images were downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). Figures 9(a)-9(b) show the recurrent tumor case of a 51-year-old woman who sought medical attention because of gradually increasing right hemiparesis (weakness) and hemianopia (visual loss). At craniotomy, a left parietal anaplastic astrocytoma was found; a right frontal lesion was biopsied. A large region of mixed signal on the MR-T1 and MR-T2 images gives signs of the possibility of an active tumor (highlighted by the red arrows).

Figures 9(c)-9(h) show the images fused by the DWT, FDCT, DTCWT, NSCT-1, NSCT-2, and proposed methods. It is obvious that the image fused by the proposed method has better contrast and sharpness of the active tumor (highlighted by the red arrows) than the other methods. Table 3 shows the quantitative evaluation of the different methods for the clinical medical images; the values of the proposed method are optimal in terms of STD, MI, and $Q^{AB/F}$. From Figure 9 and Table 3, we can draw the same conclusion: the proposed scheme provides the best performance and outperforms the other algorithms.

5. Conclusions

Multimodal medical image fusion plays an important role in clinical applications, but the real challenge is to obtain a visually enhanced image through the fusion process. In this paper, a novel and effective image fusion framework based on NSCT and Log-Gabor energy is proposed. The potential advantages include the following: (1) NSCT is more suitable for image fusion because of its multiresolution, multidirection, and shift-invariance properties; (2) a new pair of fusion rules based on phase congruency and Log-Gabor energy is used to preserve more useful information in the fused image, improve its quality, and overcome the limitations of the traditional fusion rules; and (3) the proposed method provides better performance than the current fusion methods whether the source images are clean or noisy. In the experiments, five groups of multimodal medical images, including one group with noise and one clinical example of a woman affected with a recurrent tumor, are fused using traditional fusion methods and the proposed framework. The subjective and objective comparisons clearly demonstrate that the proposed algorithm can enhance the details of the fused image and improve the visual effect with less information distortion than other fusion methods. In the future, we plan to develop a pure C++ platform to reduce the time cost and to extend our method to 3D or 4D medical image fusion.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61262034, 61462031, and 61473221), by the Key Project of the Chinese Ministry of Education (no. 211087), by the Doctoral Fund of the Ministry of Education of China (no. 20120201120071), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), by the Science and Technology Application Project of Jiangxi Province (no. KJLD14031), by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ14334), and by the Fundamental Research Funds for the Central Universities of China.

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.

[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.

[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.

[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.

[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.

[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.

[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.

[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.

[9] Y. Yang, S. Y. Huang, F. J. Gao, et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.

[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.

[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.

[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.

[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.

[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.

[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.

[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.

[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.

[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.

[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.

[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.

[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.

[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.

[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.

[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.

[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.

[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.

[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.

[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.

[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.

[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.

[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.

[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.

[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.

[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.

[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.

[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.

[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.

[41] E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.

[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.

[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.

[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 6: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

6 Computational and Mathematical Methods in Medicine

413 Mutual Information Mutual information (MI) [39]between the fusion image 119865 and the source images119860 and 119861 isdefined as follows

MI = MI119860119865 +MI119861119865

MI119860119865 =119871

sum

119891=0

119871

sum

119886=0

119901119860119865(119886 119891) log

2(119901119860119865(119886 119891)

119901119860 (119886) 119901119865 (119891))

MI119861119865 =119871

sum

119891=0

119871

sum

119887=0

119901119861119865(119887 119891) log

2(119901119861119865(119887 119891)

119901119861 (119887) 119901119865 (119891))

(11)

where MI119909119865 denotes the normalized mutual informationbetween the fused image and the input image 119860 119861 119886 119887 and119891 isin [0 119871] and 119871 is the number of bins 119901119860(119886) 119901119861(119887) and119901119865(119891) are the normalized gray level histograms of source

images and fused image 119901119909119865(119886 119891) is the joint gray levelhistograms between fused image and each source image

MI can indicate how much information the fused image119865 conveys about the source images 119860 and 119861 [22] Thereforethe greater the value of MI the better the fusion effect

414 Cross Entropy The cross entropy is defined as [8]

CE =119871minus1

sum

119897=0

119875119897log2119875119897

119876119897

(12)

where 119875119897 and119876119897 denote the gray level histogram of the sourceimage and the fused image respectively The cross entropy isused to evaluate the difference between the source images andthe fused image The lower value corresponds to the betterfusion result

415 Spatial Frequency Spatial frequency is defined as [40]

SF = radicRF2 + CF2 (13)

where RF and CF are the row frequency and column fre-quency respectively and are defined as

RF = radic 1

11987201198730

1198720minus1

sum

119894=0

1198730minus1

sum

119895=0

[119865 (119894 119895) minus 119865 (119894 119895 minus 1)]2

CF = radic 1

11987201198730

1198720minus1

sum

119894=0

1198730minus1

sum

119895=0

[119865 (119894 119895) minus 119865 (119894 minus 1 119895)]2

(14)

The spatial frequency reflects the edge information of thefused image Larger spatial frequency values indicate betterimage quality

42 Experiments on Multimodal Medical Image Fusion Toevaluate the performance of the proposed image fusionapproach the experiments are performed on three groups ofmultimodal medical images These images are characterizedin two distinct pairs (1) CT and MRI (2) MR-T1 and MR-T2 The images in Figures 4(a1)-4(b1) and 4(a2)-4(b2) are

CT and MRI images whereas Figures 4(a3)-4(b3) are T1-weighted MR image (MR-T1) and T2-weighted MR image(MR-T2) All images have the same size of 256 times 256 pixelwith 256-level gray scale For all these image groups theresults of the proposed fusion framework are compared withthose of the traditional discretewavelet transform (DWT) [1316] the second generation curvelet transform (fast discretecurvelet transform FDCT) [41 42] the dual tree complexwavelet transform (DTCWT) [4] and the nonsubsampledContourlet transform (NSCT-1 andNSCT-2) basedmethodsThe high-frequency coefficients and low-frequency coeffi-cients of DWT FDCT DTCWT and NSCT-1 based methodsaremerged by the popular widely used fusion rule of selectingthe coefficient with larger absolute values and the averagingrule (average-maximum rule) respectively NSCT-2 basedmethod is merged by the fusion rules proposed by Bhatnagaret al in [26] In order to perform a fair comparison the sourceimages are all decomposed into the same levels with 3 forthose methods except FDCTmethod For DWTmethod theimages are decomposed using the DBSS (2 2) wavelet Forimplementing NSCT ldquo9-7rdquo filters and ldquopkvardquo filters (how toset the filters can be seen in [43]) are used as the pyramidaland directional filters respectively

421 Subjective Evaluation The first pair of medical imagesare two groups of brain CT and MRI images on differentaspects shown in Figures 4(a1) 4(b1) and 4(a2) 4(b2)respectively It can be easily seen that the CT image showsthe dense structure while MRI provides information aboutsoft tissues The obtained fused images from DWT FDCTDTCWT NSCT-1 and NSCT-2 are shown in Figures 4(c1)ndash4(g1) and 4(c2)ndash4(g2) respectively The results for the pro-posed fusion method have been shown in Figures 4(h1) and4(h2) On comparing these results it can be easily observedthat the proposedmethod outperforms those fusionmethodsand has good visual representation of fused image

The second pair of medical images areMR-T1 andMR-T2images shown in Figures 4(a3) and 4(b3) The comparisonof DWT FDCT DTCWT NSCT-1 NSCT-2 and proposedmethod shown in Figures 4(c3)ndash4(h3) clearly implies thatthe fusion result of the proposed method has better qualityand contrast in comparison to other methods

Similarly on observing the noticeable improvement hasbeen emphasized in Figure 4 by the red arrows and theanalysis above one can easily verify the fact that again theproposed method has been found superior in terms of visualrepresentation over DWT FDCT DTCWT NSCT-1 andNSCT-2 fusion methods

422 Objective Evaluation For objective evaluation of thefusion results shown in Figure 4 we have used five fusionmetrics cross entropy spatial frequency STD 119876119860119861119865 andMIThe quantitative comparison of cross entropy and spatialfrequency for these fused images is visually given by Figures5-6 and other metrics are given by Table 1

On observing Figure 5 one can easily observe all thethree results of the proposed scheme have lower values ofcross entropy than any of the DWT FDCT DTCWT NSCT-1 and NSCT-2 fusion methods The cross entropy is used to

Computational and Mathematical Methods in Medicine 7

(a1) (b1) (d1)(c1)

(e1) (h1)(g1)(f1)

(a2) (d2)(c2)(b2)

(e2) (f2) (h2)(g2)

(a3) (c3)(b3) (d3)

(e3) (f3) (g3) (h3)

Figure 4 Sourcemultimodalmedical images (a1) (b1) image group 1 (CT andMRI) (a2) (b2) image group 2 (CT andMRI) (a3) (b3) imagegroup 3 (MR-T1 and MR-T2) fused images from (c1) (c2) (c3) DWT based method (d1) (d2) (d3) FDCT based method (e1) (e2) (e3)DTCWT based method (f1) (f2) (f3) NSCT-1 based method (g1) (g2) (g3) NSCT-2 based method (h1) (h2) (h3) our proposed method

8 Computational and Mathematical Methods in Medicine

Table 1 Comparison on quantitative evaluation of different methods for the first set medical images

Images Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed method

Image group 1(CT and MRI)

STD 41160 39170 40132 40581 54641 58476QABF 0618 0592 0652 0683 0427 0716MI 2371 2029 2393 2438 1886 2580

Image group 2(CT and MRI)

STD 72260 70572 71407 71733 80378 83334QABF 0567 0584 0606 0622 0450 0618MI 3137 3138 3198 3235 2915 3302

Image group 3(MR-T1 and MR-T2)

STD 37358 36856 37358 37277 39023 40630QABF 0635 0647 0668 0682 0589 0689MI 3209 3142 3302 3347 3349 3469

005

115

225

DWT FDCT DTCWT NSCT-1 NSCT-2

Cross entropy

Group 1Group 2Group 3

Proposedmethod

1323 1384

0046

Figure 5 Comparison on cross entropy of different methods andimages

evaluate the difference between the source images and thefused image Therefore the lower value corresponds to thebetter fusion result

On observing Figure 6 two values of the spatial frequencyof the fused image obtained by the proposed method arethe highest and the other one is 6447 which is close tothe highest value 6581 Observation of Table 1 yields that allthe three results of the proposed fusion scheme have highervalues of STD 119876119860119861119865 and MI than any of other methodsexcept one value of119876119860119861119865 (image group 2) is the second bestHowever an overall comparison shows the superiority of theproposed fusion scheme

423 Combined Evaluation Since the subjective and objec-tive evaluations separately are not able to examine fusionresults we have combined them From these figures (Figures4ndash6) and table (Table 1) it is clearly to find that the proposedmethod not only preserves most of the source images char-acteristics and information but also improves the definitionand the spatial quality better than the existing methodswhich can be justified by the optimum values of objectivecriteria except one value of spatial frequency (image group3) and one value of 119876119860119861119865 (image group 2) Consider theexample of the first set of images the five criteria values ofthe proposed method are 1323 (cross entropy) 7050 (spatialfrequency) 58476 (STD) 0716 (119876119860119861119865) and 2580 (MI)respectively Each of them is the optimal one in the first setof experiments

7050

7976

64476

657

758

6581

85

DWT FDCT DTCWT NSCT-1 NSCT-2

Spatial frequencies

Group 1Group 2Group 3

Proposedmethod

Figure 6 Comparison on spatial frequencies of different methodsand images

Among these methods the result of NSCT-2 basedmethod also gives poor results when comparing to theproposed NSCT-based methodThis stems from the fact thathigh-frequency fusion rule of NSCT-2 based method is notable to extract the detail information in the high frequencyeffectively Also by carefully looking at the outputs of theproposed NSCT-based method (Figures 4(h1) 4(h2) and4(h3)) we can find that they get more contrast and morespatial resolution than the outputs of NSCT-2 based method(highlighted by the red arrows) and other methodsThemainreason behind the better performance is that the proposedfusion rules for low- and high-frequency coefficients caneffectively extract prominent and detail information from thesource images Therefore it can be possible to conclude thatthe proposed method is better than the existing methods

43 Fusion ofMultimodalMedical Noisy Images and a ClinicalExample To evaluate the performance of the proposedmethod in noisy environment the input image group 1 hasbeen additionally corrupted with Gaussian noise with astandard deviation of 5 (shown in Figures 7(a) and 7(b))In addition a clinical applicability on noninvasive diagnosisof neoplastic disease is given in the last subsection

431 Fusion of Multimodal Medical Noisy Images For com-parison apart from visual observation objective criteria onSTD MI and 119876119860119861119865 are used to evaluate how much clearor detail information of the source images is transferredto the fused images However maybe these criteria cannot

Computational and Mathematical Methods in Medicine 9

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 7 The multimodal medical images with noise (a) CT image with 5 noise (b) MRI image with 5 noise fused images by (c) DWT(d) FDCT (e) DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 2 Comparison on quantitative evaluation of different methods for the noise medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 41893 40044 40995 41338 52533 57787QABF 0592 0579 0619 0637 0269 0640MI 2785 2539 2826 2882 1816 2914

effectively evaluate the performance of the fusion methodsin terms of the noise transmission For further comparisonPeak Signal to Noise Ratio (PSNR) a ratio between themaximum possible power of a signal and the power of noisethat affects the fidelity [44] is used The larger the value ofPSNR the less the image distortion [45] PSNR is formulatedas

PSNR = 10lg100381610038161003816100381610038161003816100381610038161003816

2552

RMSE2100381610038161003816100381610038161003816100381610038161003816

(15)

where RMSE denotes the Root Mean Square Error betweenthe fused image and the reference imageThe reference imagein the following experiment is selecting from Figure 4(h1)which is proven to be the best performance compared to otherimages

Figure 7 illustrates the fusion results obtained by thedifferent methods The comparison of the images fused byDWT FDCT DTCWT NSCT-1 NSCT-2 and proposedmethod shown in Figures 7(c)ndash7(h) clearly implies thatthe fused image by proposed method has better quality andcontrast than other methods Figure 8 shows the values ofPSNR of different methods in fusing noisy images Onecan observe that the proposed method has higher values of

1516171819202122232425

DWT FDCT DTCWT NSCT-1 NSCT-2 Proposedmethod

PSNR23704

Figure 8 Comparison on PSNR of different methods for the noisemedical images

PSNR than any of the DWT FDCT DTCWT NSCT-1 andNSCT-2 fusionmethods Table 2 gives the quantitative resultsof fused images and shows that the values of STD 119876119860119861119865and MI are also the highest of all the six methods Fromthe analysis above we can also observe that the proposedscheme provides the best performance and outperforms theother algorithms In addition compared with the result ofthe NSCT-1 method using the average-maximum rule it

10 Computational and Mathematical Methods in Medicine

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 9 Brain images of the man with recurrent tumor (a) MR-T1 image (b) MR-T2 image fused images by (c) DWT (d) FDCT (e)DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 3 Comparison on quantitative evaluation of different methods for the clinical medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 64433 62487 63194 63652 60722 67155QABF 0541 0565 0578 0597 0411 0624MI 2507 2455 2540 2584 2392 2624

demonstrated the validities of the proposed fusion rule innoisy environment

432 A Clinical Example on Noninvasive Diagnosis In orderto demonstrate the practical value of the proposed schemein medical imaging one clinical case on neoplastic diagnosisis considered where MR-T1MR-T2 medical modalities areused The images have been downloaded from the HarvardUniversity site (httpwwwmedharvardeduAANLIBhomehtml) Figures 9(a)-9(b) show the recurrent tumor case of a51-year-old woman who sought medical attention because ofgradually increasing right hemiparesis (weakness) and hemi-anopia (visual loss) At craniotomy left parietal anaplasticastrocytoma was found A right frontal lesion was biopsiedA large region of mixed signal on MR-T1 and MR-T2 imagesgives the signs of the possibility of active tumor (highlightedby the red arrows)

Figures 9(c)ndash9(h) show the fused images by DWT FDCTDTCWT NSCT-1 NSCT-2 and proposed method It isobvious that the fused image by proposed method has bettercontrast and sharpness of active tumor (highlighted by the redarrows) than other methods Table 3 shows the quantitativeevaluation of different methods for the clinical medicalimages The values of the proposed method are optimum in

terms of STD MI and 119876119860119861119865 From Figure 9 and Table 3we can obtain the same conclusion that the proposed schemeprovides the best performance and outperforms the otheralgorithms

5 Conclusions

Multimodal medical image fusion plays an important role inclinical applications But the real challenge is to obtain a visu-ally enhanced image through fusion process In this paper anovel and effective image fusion framework based on NSCTand Log-Gabor energy is proposedThe potential advantagesinclude (1) NSCT is more suitable for image fusion becauseof its advantages such as multiresolution multidirection andshift-invariance (2) a new couple of fusion rules based onphase congruency andLog-Gabor energy are used to preservemore useful information in the fused image to improve thequality of the fused images and overcome the limitations ofthe traditional fusion rules and (3) the proposed methodcan provide a better performance than the current fusionmethods whatever the source images are clean or noisy Inthe experiments five groups of multimodal medical imagesincluding one group with noise and one group clinicalexample of a woman affected with recurrent tumor are

Computational and Mathematical Methods in Medicine 11

fused by using traditional fusion methods and the proposedframeworkThe subjective and objective comparisons clearlydemonstrate that the proposed algorithm can enhance thedetails of the fused image and can improve the visual effectwith less information distortion than other fusion methodsIn the future we plan to design a pureC++ platform to reducethe time cost and extend our method for 3D or 4D medicalimage fusion

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work is supported by the National Natural ScienceFoundation of China (no 61262034 no 61462031 and no61473221) by the Key Project of Chinese Ministry of Edu-cation (no 211087) by the Doctoral Fund of Ministry ofEducation of China (no 20120201120071) by the NaturalScience Foundation of Jiangxi Province (no 20114BAB211020and no 20132BAB201025) by the Young Scientist Foundationof Jiangxi Province (no 20122BCB23017) by the Scienceand Technology Application Project of Jiangxi Province(no KJLD14031) by the Science and Technology ResearchProject of the Education Department of Jiangxi Province (noGJJ14334) and by the Fundamental Research Funds for theCentral Universities of China

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.

[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.

[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.

[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.

[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.

[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.

[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.

[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.

[9] Y. Yang, S. Y. Huang, F. J. Gao et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.

[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.

[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.

[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.

[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.

[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.

[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.

[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.

[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.

[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform – a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.

[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.

[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.

[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.

[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.

[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.

[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.

[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.

[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.

[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.

[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.

[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.

[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.

[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.

[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.

[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.

[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.

[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.

[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.

[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.

[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.

[41] E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.

[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.

[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.

[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.

[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.


Page 7: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

Computational and Mathematical Methods in Medicine 7

(a1) (b1) (d1)(c1)

(e1) (h1)(g1)(f1)

(a2) (d2)(c2)(b2)

(e2) (f2) (h2)(g2)

(a3) (c3)(b3) (d3)

(e3) (f3) (g3) (h3)

Figure 4 Sourcemultimodalmedical images (a1) (b1) image group 1 (CT andMRI) (a2) (b2) image group 2 (CT andMRI) (a3) (b3) imagegroup 3 (MR-T1 and MR-T2) fused images from (c1) (c2) (c3) DWT based method (d1) (d2) (d3) FDCT based method (e1) (e2) (e3)DTCWT based method (f1) (f2) (f3) NSCT-1 based method (g1) (g2) (g3) NSCT-2 based method (h1) (h2) (h3) our proposed method

8 Computational and Mathematical Methods in Medicine

Table 1 Comparison on quantitative evaluation of different methods for the first set medical images

Images Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed method

Image group 1(CT and MRI)

STD 41160 39170 40132 40581 54641 58476QABF 0618 0592 0652 0683 0427 0716MI 2371 2029 2393 2438 1886 2580

Image group 2(CT and MRI)

STD 72260 70572 71407 71733 80378 83334QABF 0567 0584 0606 0622 0450 0618MI 3137 3138 3198 3235 2915 3302

Image group 3(MR-T1 and MR-T2)

STD 37358 36856 37358 37277 39023 40630QABF 0635 0647 0668 0682 0589 0689MI 3209 3142 3302 3347 3349 3469

005

115

225

DWT FDCT DTCWT NSCT-1 NSCT-2

Cross entropy

Group 1Group 2Group 3

Proposedmethod

1323 1384

0046

Figure 5 Comparison on cross entropy of different methods andimages

evaluate the difference between the source images and thefused image Therefore the lower value corresponds to thebetter fusion result

On observing Figure 6 two values of the spatial frequencyof the fused image obtained by the proposed method arethe highest and the other one is 6447 which is close tothe highest value 6581 Observation of Table 1 yields that allthe three results of the proposed fusion scheme have highervalues of STD 119876119860119861119865 and MI than any of other methodsexcept one value of119876119860119861119865 (image group 2) is the second bestHowever an overall comparison shows the superiority of theproposed fusion scheme

423 Combined Evaluation Since the subjective and objec-tive evaluations separately are not able to examine fusionresults we have combined them From these figures (Figures4ndash6) and table (Table 1) it is clearly to find that the proposedmethod not only preserves most of the source images char-acteristics and information but also improves the definitionand the spatial quality better than the existing methodswhich can be justified by the optimum values of objectivecriteria except one value of spatial frequency (image group3) and one value of 119876119860119861119865 (image group 2) Consider theexample of the first set of images the five criteria values ofthe proposed method are 1323 (cross entropy) 7050 (spatialfrequency) 58476 (STD) 0716 (119876119860119861119865) and 2580 (MI)respectively Each of them is the optimal one in the first setof experiments

7050

7976

64476

657

758

6581

85

DWT FDCT DTCWT NSCT-1 NSCT-2

Spatial frequencies

Group 1Group 2Group 3

Proposedmethod

Figure 6 Comparison on spatial frequencies of different methodsand images

Among these methods the result of NSCT-2 basedmethod also gives poor results when comparing to theproposed NSCT-based methodThis stems from the fact thathigh-frequency fusion rule of NSCT-2 based method is notable to extract the detail information in the high frequencyeffectively Also by carefully looking at the outputs of theproposed NSCT-based method (Figures 4(h1) 4(h2) and4(h3)) we can find that they get more contrast and morespatial resolution than the outputs of NSCT-2 based method(highlighted by the red arrows) and other methodsThemainreason behind the better performance is that the proposedfusion rules for low- and high-frequency coefficients caneffectively extract prominent and detail information from thesource images Therefore it can be possible to conclude thatthe proposed method is better than the existing methods

43 Fusion ofMultimodalMedical Noisy Images and a ClinicalExample To evaluate the performance of the proposedmethod in noisy environment the input image group 1 hasbeen additionally corrupted with Gaussian noise with astandard deviation of 5 (shown in Figures 7(a) and 7(b))In addition a clinical applicability on noninvasive diagnosisof neoplastic disease is given in the last subsection

431 Fusion of Multimodal Medical Noisy Images For com-parison apart from visual observation objective criteria onSTD MI and 119876119860119861119865 are used to evaluate how much clearor detail information of the source images is transferredto the fused images However maybe these criteria cannot

Computational and Mathematical Methods in Medicine 9

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 7 The multimodal medical images with noise (a) CT image with 5 noise (b) MRI image with 5 noise fused images by (c) DWT(d) FDCT (e) DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 2 Comparison on quantitative evaluation of different methods for the noise medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 41893 40044 40995 41338 52533 57787QABF 0592 0579 0619 0637 0269 0640MI 2785 2539 2826 2882 1816 2914

effectively evaluate the performance of the fusion methodsin terms of the noise transmission For further comparisonPeak Signal to Noise Ratio (PSNR) a ratio between themaximum possible power of a signal and the power of noisethat affects the fidelity [44] is used The larger the value ofPSNR the less the image distortion [45] PSNR is formulatedas

PSNR = 10lg100381610038161003816100381610038161003816100381610038161003816

2552

RMSE2100381610038161003816100381610038161003816100381610038161003816

(15)

where RMSE denotes the Root Mean Square Error betweenthe fused image and the reference imageThe reference imagein the following experiment is selecting from Figure 4(h1)which is proven to be the best performance compared to otherimages

Figure 7 illustrates the fusion results obtained by thedifferent methods The comparison of the images fused byDWT FDCT DTCWT NSCT-1 NSCT-2 and proposedmethod shown in Figures 7(c)ndash7(h) clearly implies thatthe fused image by proposed method has better quality andcontrast than other methods Figure 8 shows the values ofPSNR of different methods in fusing noisy images Onecan observe that the proposed method has higher values of

1516171819202122232425

DWT FDCT DTCWT NSCT-1 NSCT-2 Proposedmethod

PSNR23704

Figure 8 Comparison on PSNR of different methods for the noisemedical images

PSNR than any of the DWT FDCT DTCWT NSCT-1 andNSCT-2 fusionmethods Table 2 gives the quantitative resultsof fused images and shows that the values of STD 119876119860119861119865and MI are also the highest of all the six methods Fromthe analysis above we can also observe that the proposedscheme provides the best performance and outperforms theother algorithms In addition compared with the result ofthe NSCT-1 method using the average-maximum rule it

10 Computational and Mathematical Methods in Medicine

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 9 Brain images of the man with recurrent tumor (a) MR-T1 image (b) MR-T2 image fused images by (c) DWT (d) FDCT (e)DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 3 Comparison on quantitative evaluation of different methods for the clinical medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 64433 62487 63194 63652 60722 67155QABF 0541 0565 0578 0597 0411 0624MI 2507 2455 2540 2584 2392 2624

demonstrated the validities of the proposed fusion rule innoisy environment

432 A Clinical Example on Noninvasive Diagnosis In orderto demonstrate the practical value of the proposed schemein medical imaging one clinical case on neoplastic diagnosisis considered where MR-T1MR-T2 medical modalities areused The images have been downloaded from the HarvardUniversity site (httpwwwmedharvardeduAANLIBhomehtml) Figures 9(a)-9(b) show the recurrent tumor case of a51-year-old woman who sought medical attention because ofgradually increasing right hemiparesis (weakness) and hemi-anopia (visual loss) At craniotomy left parietal anaplasticastrocytoma was found A right frontal lesion was biopsiedA large region of mixed signal on MR-T1 and MR-T2 imagesgives the signs of the possibility of active tumor (highlightedby the red arrows)

Figures 9(c)ndash9(h) show the fused images by DWT FDCTDTCWT NSCT-1 NSCT-2 and proposed method It isobvious that the fused image by proposed method has bettercontrast and sharpness of active tumor (highlighted by the redarrows) than other methods Table 3 shows the quantitativeevaluation of different methods for the clinical medicalimages The values of the proposed method are optimum in

terms of STD MI and 119876119860119861119865 From Figure 9 and Table 3we can obtain the same conclusion that the proposed schemeprovides the best performance and outperforms the otheralgorithms

5 Conclusions

Multimodal medical image fusion plays an important role inclinical applications But the real challenge is to obtain a visu-ally enhanced image through fusion process In this paper anovel and effective image fusion framework based on NSCTand Log-Gabor energy is proposedThe potential advantagesinclude (1) NSCT is more suitable for image fusion becauseof its advantages such as multiresolution multidirection andshift-invariance (2) a new couple of fusion rules based onphase congruency andLog-Gabor energy are used to preservemore useful information in the fused image to improve thequality of the fused images and overcome the limitations ofthe traditional fusion rules and (3) the proposed methodcan provide a better performance than the current fusionmethods whatever the source images are clean or noisy Inthe experiments five groups of multimodal medical imagesincluding one group with noise and one group clinicalexample of a woman affected with recurrent tumor are

Computational and Mathematical Methods in Medicine 11

fused by using traditional fusion methods and the proposedframeworkThe subjective and objective comparisons clearlydemonstrate that the proposed algorithm can enhance thedetails of the fused image and can improve the visual effectwith less information distortion than other fusion methodsIn the future we plan to design a pureC++ platform to reducethe time cost and extend our method for 3D or 4D medicalimage fusion

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work is supported by the National Natural ScienceFoundation of China (no 61262034 no 61462031 and no61473221) by the Key Project of Chinese Ministry of Edu-cation (no 211087) by the Doctoral Fund of Ministry ofEducation of China (no 20120201120071) by the NaturalScience Foundation of Jiangxi Province (no 20114BAB211020and no 20132BAB201025) by the Young Scientist Foundationof Jiangxi Province (no 20122BCB23017) by the Scienceand Technology Application Project of Jiangxi Province(no KJLD14031) by the Science and Technology ResearchProject of the Education Department of Jiangxi Province (noGJJ14334) and by the Fundamental Research Funds for theCentral Universities of China

References

[1] R Shen I Cheng and A Basu ldquoCross-scale coefficient selec-tion for volumetric medical image fusionrdquo IEEE Transactionson Biomedical Engineering vol 60 no 4 pp 1069ndash1079 2013

[2] V D Calhoun and T Adali ldquoFeature-based fusion of medicalimaging datardquo IEEE Transactions on Information Technology inBiomedicine vol 13 no 5 pp 711ndash720 2009

[3] Y Yang D S Park S Huang and J Yang ldquoFusion of CT andMR images using an improved wavelet based methodrdquo Journalof X-Ray Science and Technology vol 18 no 2 pp 157ndash170 2010

[4] R Singh R Srivastava O Prakash and A Khare ldquoMultimodalmedical image fusion in dual tree complex wavelet transformdomain using maximum and average fusion rulesrdquo Journal ofMedical Imaging and Health Informatics vol 2 no 2 pp 168ndash173 2012

[5] Z Wang and Y Ma ldquoMedical image fusion using m-PCNNrdquoInformation Fusion vol 9 no 2 pp 176ndash185 2008

[6] V Barra and J Boire ldquoA general framework for the fusion ofanatomical and functionalmedical imagesrdquoNeuroImage vol 13no 3 pp 410ndash424 2001

[7] A P James and B V Dasarathy ldquoMedical image fusion a surveyof the state of the artrdquo Information Fusion vol 19 pp 4ndash19 2014

[8] P Balasubramaniam and V P Ananthi ldquoImage fusion usingintuitionistic fuzzy setsrdquo Information Fusion vol 20 pp 21ndash302014

[9] Y Yang S Y Huang F J Gao et al ldquoMulti-focus image fusionusing an effective discrete wavelet transform based algorithmrdquoMeasurement Science Review vol 14 no 2 pp 102ndash108 2014

[10] P S Pradhan R L King N H Younan and D W HolcombldquoEstimation of the number of decomposition levels for awavelet-based multiresolution multisensor image fusionrdquo IEEETransactions on Geoscience and Remote Sensing vol 44 no 12pp 3674ndash3686 2006

[11] S Li J T Kwok and Y Wang ldquoMultifocus image fusion usingartificial neural networksrdquo Pattern Recognition Letters vol 23no 8 pp 985ndash997 2002

[12] J Yang and R S Blum ldquoA statistical signal processing approachto image fusion for conceled weapon detectionrdquo in Proceedingsof the International Conference on Image Processing (ICIP rsquo02)vol 1 pp 513ndash516 September 2002

[13] R Singh and A Khare ldquoMultiscale medical image fusion inwavelet domainrdquoThe Scientific World Journal vol 2013 ArticleID 521034 10 pages 2013

[14] B Aiazzi L Alparone A Barducci S Baronti and I PippildquoMultispectral fusion of multisensor image data by the general-ized Laplacian pyramidrdquo in Proceedings of the 1999 IEEE Inter-national Geoscience and Remote Sensing Symposium (IGARSSrsquo99) vol 2 pp 1183ndash1185 Hamburg Germany July 1999

[15] V S Petrovic andC S Xydeas ldquoGradient-basedmultiresolutionimage fusionrdquo IEEE Transactions on Image Processing vol 13no 2 pp 228ndash237 2004

[16] Y Yang D S Park S Huang and N Rao ldquoMedical imagefusion via an effective wavelet-based approachrdquo Eurasip Journalon Advances in Signal Processing vol 2010 Article ID 5793412010

[17] M N Do and M Vetterli ldquoThe contourlet transform an effi-cient directional multiresolution image representationrdquo IEEETransactions on Image Processing vol 14 no 12 pp 2091ndash21062005

[18] R Singh and A Khare ldquoFusion of multimodal medical imagesusing Daubechies complex wavelet transformmdasha multiresolu-tion approachrdquo Information Fusion vol 19 pp 49ndash60 2014

[19] X B Qu J W Yan and G D Yang ldquoMultifocus image fusionmethod of sharp frequency localized Contourlet transformdomain based on sum-modified-Laplacianrdquo Optics and Preci-sion Engineering vol 17 no 5 pp 1203ndash1212 2009

[20] A L da Cunha J Zhou and M N Do ldquoThe nonsubsampledcontourlet transform theory design and applicationsrdquo IEEETransactions on Image Processing vol 15 no 10 pp 3089ndash31012006

[21] Q Zhang and B Guo ldquoMultifocus image fusion using thenonsubsampled contourlet transformrdquo Signal Processing vol89 no 7 pp 1334ndash1346 2009

[22] Y Chai H Li and X Zhang ldquoMultifocus image fusion basedon features contrast of multiscale products in nonsubsampledcontourlet transform domainrdquo Optik vol 123 no 7 pp 569ndash581 2012

[23] X B Qu J W Yan H Z Xiao and Z Zhu ldquoImage fusionalgorithm based on spatial frequency-motivated pulse cou-pled neural networks in nonsubsampled contourlet transformdomainrdquo Acta Automatica Sinica vol 34 no 12 pp 1508ndash15142008

[24] Y Wu C Wu and S Wu ldquoFusion of remote sensing imagesbased on nonsubsampled contourlet transform and regionsegmentationrdquo Journal of Shanghai JiaotongUniversity (Science)vol 16 no 6 pp 722ndash727 2011

[25] J Adu J Gan andYWang ldquoImage fusion based onnonsubsam-pled contourlet transform for infrared and visible light imagerdquoInfrared Physics amp Technology vol 61 pp 94ndash100 2013

12 Computational and Mathematical Methods in Medicine

[26] G Bhatnagar Q M J Wu and Z Liu ldquoDirective contrastbased multimodal medical image fusion in NSCT domainrdquoIEEE Transactions on Multimedia vol 15 no 5 pp 1014ndash10242013

[27] S Das and M K Kundu ldquoNSCT-based multimodal medicalimage fusion using pulse-coupled neural network andmodifiedspatial frequencyrdquo Medical and Biological Engineering andComputing vol 50 no 10 pp 1105ndash1114 2012

[28] P Kovesi ldquoImage features from phase congruencyrdquo Videre vol1 no 3 pp 1ndash26 1999

[29] M C Morrone and R A Owens ldquoFeature detection from localenergyrdquo Pattern Recognition Letters vol 6 no 5 pp 303ndash3131987

[30] P Kovesi ldquoPhase congruency a low-level image invariantrdquoPsychological Research vol 64 no 2 pp 136ndash148 2000

[31] L Zhang X Mou and D Zhang ldquoFSIM a feature similarityindex for image quality assessmentrdquo IEEE Transactions onImage Processing vol 20 no 8 pp 2378ndash2386 2011

[32] A Wong and W Bishop ldquoEfficient least squares fusion of MRIand CT images using a phase congruency modelrdquo PatternRecognition Letters vol 29 no 3 pp 173ndash180 2008

[33] L Yu Z He and Q Cao ldquoGabor texture representationmethodfor face recognition using theGamma and generalizedGaussianmodelsrdquo Image and Vision Computing vol 28 no 1 pp 177ndash1872010

[34] P Yao J Li X Ye Z Zhuang and B Li ldquoIris recognitionalgorithm using modified Log-Gabor filtersrdquo in Proceedings ofthe 18th International Conference on Pattern Recognition (ICPRrsquo06) pp 461ndash464 August 2006

[35] R Raghavendra and C Busch ldquoNovel image fusion schemebased on dependency measure for robust multispectral palm-print recognitionrdquo Pattern Recognition vol 47 no 6 pp 2205ndash2221 2014

[36] R Redondo F Sroubek S Fischer and G Cristobal ldquoMultifo-cus image fusion using the log-Gabor transform and aMultisizeWindows techniquerdquo Information Fusion vol 10 no 2 pp 163ndash171 2009

[37] X Zhang X Li and Y Feng ldquoA new image fusion performancemeasure using Riesz transformsrdquoOptik vol 125 no 3 pp 1427ndash1433 2014

[38] C S Xydeas and V Petrovic ldquoObjective image fusion perfor-mance measurerdquo Electronics Letters vol 36 no 4 pp 308ndash3092000

[39] B Yang and S Li ldquoPixel-level image fusion with simultaneousorthogonal matching pursuitrdquo Information Fusion vol 13 no 1pp 10ndash19 2012

[40] QMiao C Shi P XuM Yang and Y Shi ldquoA novel algorithm ofimage fusion using shearletsrdquoOptics Communications vol 284no 6 pp 1540ndash1547 2011

[41] E Candes L Demanet D Donoho and L Ying ldquoFast discretecurvelet transformsrdquo Multiscale Modeling and Simulation vol5 no 3 pp 861ndash899 2006

[42] M Fu and C Zhao ldquoFusion of infrared and visible imagesbased on the second generation curvelet transformrdquo Journal ofInfrared andMillimeterWaves vol 28 no 4 pp 254ndash258 2009

[43] S Li B Yang and J Hu ldquoPerformance comparison of differentmulti-resolution transforms for image fusionrdquo InformationFusion vol 12 no 2 pp 74ndash84 2011

[44] Y Phamila and R Amutha ldquoDiscrete Cosine Transform basedfusion of multi-focus images for visual sensor networksrdquo SignalProcessing vol 95 pp 161ndash170 2014

[45] L Wang B Li and L Tian ldquoMulti-modal medical volumetricdata fusion using 3D discrete shearlet transform and global-to-local rulerdquo IEEETransactions on Biomedical Engineering vol 61no 1 pp 197ndash206 2014

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 8: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

8 Computational and Mathematical Methods in Medicine

Table 1 Comparison on quantitative evaluation of different methods for the first set medical images

Images Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed method

Image group 1(CT and MRI)

STD 41160 39170 40132 40581 54641 58476QABF 0618 0592 0652 0683 0427 0716MI 2371 2029 2393 2438 1886 2580

Image group 2(CT and MRI)

STD 72260 70572 71407 71733 80378 83334QABF 0567 0584 0606 0622 0450 0618MI 3137 3138 3198 3235 2915 3302

Image group 3(MR-T1 and MR-T2)

STD 37358 36856 37358 37277 39023 40630QABF 0635 0647 0668 0682 0589 0689MI 3209 3142 3302 3347 3349 3469

005

115

225

DWT FDCT DTCWT NSCT-1 NSCT-2

Cross entropy

Group 1Group 2Group 3

Proposedmethod

1323 1384

0046

Figure 5 Comparison on cross entropy of different methods andimages

evaluate the difference between the source images and thefused image Therefore the lower value corresponds to thebetter fusion result

On observing Figure 6 two values of the spatial frequencyof the fused image obtained by the proposed method arethe highest and the other one is 6447 which is close tothe highest value 6581 Observation of Table 1 yields that allthe three results of the proposed fusion scheme have highervalues of STD 119876119860119861119865 and MI than any of other methodsexcept one value of119876119860119861119865 (image group 2) is the second bestHowever an overall comparison shows the superiority of theproposed fusion scheme

423 Combined Evaluation Since the subjective and objec-tive evaluations separately are not able to examine fusionresults we have combined them From these figures (Figures4ndash6) and table (Table 1) it is clearly to find that the proposedmethod not only preserves most of the source images char-acteristics and information but also improves the definitionand the spatial quality better than the existing methodswhich can be justified by the optimum values of objectivecriteria except one value of spatial frequency (image group3) and one value of 119876119860119861119865 (image group 2) Consider theexample of the first set of images the five criteria values ofthe proposed method are 1323 (cross entropy) 7050 (spatialfrequency) 58476 (STD) 0716 (119876119860119861119865) and 2580 (MI)respectively Each of them is the optimal one in the first setof experiments

7050

7976

64476

657

758

6581

85

DWT FDCT DTCWT NSCT-1 NSCT-2

Spatial frequencies

Group 1Group 2Group 3

Proposedmethod

Figure 6 Comparison on spatial frequencies of different methodsand images

Among these methods the result of NSCT-2 basedmethod also gives poor results when comparing to theproposed NSCT-based methodThis stems from the fact thathigh-frequency fusion rule of NSCT-2 based method is notable to extract the detail information in the high frequencyeffectively Also by carefully looking at the outputs of theproposed NSCT-based method (Figures 4(h1) 4(h2) and4(h3)) we can find that they get more contrast and morespatial resolution than the outputs of NSCT-2 based method(highlighted by the red arrows) and other methodsThemainreason behind the better performance is that the proposedfusion rules for low- and high-frequency coefficients caneffectively extract prominent and detail information from thesource images Therefore it can be possible to conclude thatthe proposed method is better than the existing methods

43 Fusion ofMultimodalMedical Noisy Images and a ClinicalExample To evaluate the performance of the proposedmethod in noisy environment the input image group 1 hasbeen additionally corrupted with Gaussian noise with astandard deviation of 5 (shown in Figures 7(a) and 7(b))In addition a clinical applicability on noninvasive diagnosisof neoplastic disease is given in the last subsection

431 Fusion of Multimodal Medical Noisy Images For com-parison apart from visual observation objective criteria onSTD MI and 119876119860119861119865 are used to evaluate how much clearor detail information of the source images is transferredto the fused images However maybe these criteria cannot

Computational and Mathematical Methods in Medicine 9

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 7 The multimodal medical images with noise (a) CT image with 5 noise (b) MRI image with 5 noise fused images by (c) DWT(d) FDCT (e) DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 2 Comparison on quantitative evaluation of different methods for the noise medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 41893 40044 40995 41338 52533 57787QABF 0592 0579 0619 0637 0269 0640MI 2785 2539 2826 2882 1816 2914

effectively evaluate the performance of the fusion methodsin terms of the noise transmission For further comparisonPeak Signal to Noise Ratio (PSNR) a ratio between themaximum possible power of a signal and the power of noisethat affects the fidelity [44] is used The larger the value ofPSNR the less the image distortion [45] PSNR is formulatedas

PSNR = 10lg100381610038161003816100381610038161003816100381610038161003816

2552

RMSE2100381610038161003816100381610038161003816100381610038161003816

(15)

where RMSE denotes the Root Mean Square Error betweenthe fused image and the reference imageThe reference imagein the following experiment is selecting from Figure 4(h1)which is proven to be the best performance compared to otherimages

Figure 7 illustrates the fusion results obtained by thedifferent methods The comparison of the images fused byDWT FDCT DTCWT NSCT-1 NSCT-2 and proposedmethod shown in Figures 7(c)ndash7(h) clearly implies thatthe fused image by proposed method has better quality andcontrast than other methods Figure 8 shows the values ofPSNR of different methods in fusing noisy images Onecan observe that the proposed method has higher values of

1516171819202122232425

DWT FDCT DTCWT NSCT-1 NSCT-2 Proposedmethod

PSNR23704

Figure 8 Comparison on PSNR of different methods for the noisemedical images

PSNR than any of the DWT FDCT DTCWT NSCT-1 andNSCT-2 fusionmethods Table 2 gives the quantitative resultsof fused images and shows that the values of STD 119876119860119861119865and MI are also the highest of all the six methods Fromthe analysis above we can also observe that the proposedscheme provides the best performance and outperforms theother algorithms In addition compared with the result ofthe NSCT-1 method using the average-maximum rule it

10 Computational and Mathematical Methods in Medicine

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 9 Brain images of the man with recurrent tumor (a) MR-T1 image (b) MR-T2 image fused images by (c) DWT (d) FDCT (e)DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 3 Comparison on quantitative evaluation of different methods for the clinical medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 64433 62487 63194 63652 60722 67155QABF 0541 0565 0578 0597 0411 0624MI 2507 2455 2540 2584 2392 2624

demonstrated the validities of the proposed fusion rule innoisy environment

432 A Clinical Example on Noninvasive Diagnosis In orderto demonstrate the practical value of the proposed schemein medical imaging one clinical case on neoplastic diagnosisis considered where MR-T1MR-T2 medical modalities areused The images have been downloaded from the HarvardUniversity site (httpwwwmedharvardeduAANLIBhomehtml) Figures 9(a)-9(b) show the recurrent tumor case of a51-year-old woman who sought medical attention because ofgradually increasing right hemiparesis (weakness) and hemi-anopia (visual loss) At craniotomy left parietal anaplasticastrocytoma was found A right frontal lesion was biopsiedA large region of mixed signal on MR-T1 and MR-T2 imagesgives the signs of the possibility of active tumor (highlightedby the red arrows)

Figures 9(c)ndash9(h) show the fused images by DWT FDCTDTCWT NSCT-1 NSCT-2 and proposed method It isobvious that the fused image by proposed method has bettercontrast and sharpness of active tumor (highlighted by the redarrows) than other methods Table 3 shows the quantitativeevaluation of different methods for the clinical medicalimages The values of the proposed method are optimum in

terms of STD MI and 119876119860119861119865 From Figure 9 and Table 3we can obtain the same conclusion that the proposed schemeprovides the best performance and outperforms the otheralgorithms

5 Conclusions

Multimodal medical image fusion plays an important role inclinical applications But the real challenge is to obtain a visu-ally enhanced image through fusion process In this paper anovel and effective image fusion framework based on NSCTand Log-Gabor energy is proposedThe potential advantagesinclude (1) NSCT is more suitable for image fusion becauseof its advantages such as multiresolution multidirection andshift-invariance (2) a new couple of fusion rules based onphase congruency andLog-Gabor energy are used to preservemore useful information in the fused image to improve thequality of the fused images and overcome the limitations ofthe traditional fusion rules and (3) the proposed methodcan provide a better performance than the current fusionmethods whatever the source images are clean or noisy Inthe experiments five groups of multimodal medical imagesincluding one group with noise and one group clinicalexample of a woman affected with recurrent tumor are

Computational and Mathematical Methods in Medicine 11

fused by using traditional fusion methods and the proposedframeworkThe subjective and objective comparisons clearlydemonstrate that the proposed algorithm can enhance thedetails of the fused image and can improve the visual effectwith less information distortion than other fusion methodsIn the future we plan to design a pureC++ platform to reducethe time cost and extend our method for 3D or 4D medicalimage fusion

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work is supported by the National Natural ScienceFoundation of China (no 61262034 no 61462031 and no61473221) by the Key Project of Chinese Ministry of Edu-cation (no 211087) by the Doctoral Fund of Ministry ofEducation of China (no 20120201120071) by the NaturalScience Foundation of Jiangxi Province (no 20114BAB211020and no 20132BAB201025) by the Young Scientist Foundationof Jiangxi Province (no 20122BCB23017) by the Scienceand Technology Application Project of Jiangxi Province(no KJLD14031) by the Science and Technology ResearchProject of the Education Department of Jiangxi Province (noGJJ14334) and by the Fundamental Research Funds for theCentral Universities of China

References

[1] R Shen I Cheng and A Basu ldquoCross-scale coefficient selec-tion for volumetric medical image fusionrdquo IEEE Transactionson Biomedical Engineering vol 60 no 4 pp 1069ndash1079 2013

[2] V D Calhoun and T Adali ldquoFeature-based fusion of medicalimaging datardquo IEEE Transactions on Information Technology inBiomedicine vol 13 no 5 pp 711ndash720 2009

[3] Y Yang D S Park S Huang and J Yang ldquoFusion of CT andMR images using an improved wavelet based methodrdquo Journalof X-Ray Science and Technology vol 18 no 2 pp 157ndash170 2010

[4] R Singh R Srivastava O Prakash and A Khare ldquoMultimodalmedical image fusion in dual tree complex wavelet transformdomain using maximum and average fusion rulesrdquo Journal ofMedical Imaging and Health Informatics vol 2 no 2 pp 168ndash173 2012

[5] Z Wang and Y Ma ldquoMedical image fusion using m-PCNNrdquoInformation Fusion vol 9 no 2 pp 176ndash185 2008

[6] V Barra and J Boire ldquoA general framework for the fusion ofanatomical and functionalmedical imagesrdquoNeuroImage vol 13no 3 pp 410ndash424 2001

[7] A P James and B V Dasarathy ldquoMedical image fusion a surveyof the state of the artrdquo Information Fusion vol 19 pp 4ndash19 2014

[8] P Balasubramaniam and V P Ananthi ldquoImage fusion usingintuitionistic fuzzy setsrdquo Information Fusion vol 20 pp 21ndash302014

[9] Y Yang S Y Huang F J Gao et al ldquoMulti-focus image fusionusing an effective discrete wavelet transform based algorithmrdquoMeasurement Science Review vol 14 no 2 pp 102ndash108 2014

[10] P S Pradhan R L King N H Younan and D W HolcombldquoEstimation of the number of decomposition levels for awavelet-based multiresolution multisensor image fusionrdquo IEEETransactions on Geoscience and Remote Sensing vol 44 no 12pp 3674ndash3686 2006

[11] S Li J T Kwok and Y Wang ldquoMultifocus image fusion usingartificial neural networksrdquo Pattern Recognition Letters vol 23no 8 pp 985ndash997 2002

[12] J Yang and R S Blum ldquoA statistical signal processing approachto image fusion for conceled weapon detectionrdquo in Proceedingsof the International Conference on Image Processing (ICIP rsquo02)vol 1 pp 513ndash516 September 2002

[13] R Singh and A Khare ldquoMultiscale medical image fusion inwavelet domainrdquoThe Scientific World Journal vol 2013 ArticleID 521034 10 pages 2013

[14] B Aiazzi L Alparone A Barducci S Baronti and I PippildquoMultispectral fusion of multisensor image data by the general-ized Laplacian pyramidrdquo in Proceedings of the 1999 IEEE Inter-national Geoscience and Remote Sensing Symposium (IGARSSrsquo99) vol 2 pp 1183ndash1185 Hamburg Germany July 1999

[15] V S Petrovic andC S Xydeas ldquoGradient-basedmultiresolutionimage fusionrdquo IEEE Transactions on Image Processing vol 13no 2 pp 228ndash237 2004

[16] Y Yang D S Park S Huang and N Rao ldquoMedical imagefusion via an effective wavelet-based approachrdquo Eurasip Journalon Advances in Signal Processing vol 2010 Article ID 5793412010

[17] M N Do and M Vetterli ldquoThe contourlet transform an effi-cient directional multiresolution image representationrdquo IEEETransactions on Image Processing vol 14 no 12 pp 2091ndash21062005

[18] R Singh and A Khare ldquoFusion of multimodal medical imagesusing Daubechies complex wavelet transformmdasha multiresolu-tion approachrdquo Information Fusion vol 19 pp 49ndash60 2014

[19] X B Qu J W Yan and G D Yang ldquoMultifocus image fusionmethod of sharp frequency localized Contourlet transformdomain based on sum-modified-Laplacianrdquo Optics and Preci-sion Engineering vol 17 no 5 pp 1203ndash1212 2009

[20] A L da Cunha J Zhou and M N Do ldquoThe nonsubsampledcontourlet transform theory design and applicationsrdquo IEEETransactions on Image Processing vol 15 no 10 pp 3089ndash31012006

[21] Q Zhang and B Guo ldquoMultifocus image fusion using thenonsubsampled contourlet transformrdquo Signal Processing vol89 no 7 pp 1334ndash1346 2009

[22] Y Chai H Li and X Zhang ldquoMultifocus image fusion basedon features contrast of multiscale products in nonsubsampledcontourlet transform domainrdquo Optik vol 123 no 7 pp 569ndash581 2012

[23] X B Qu J W Yan H Z Xiao and Z Zhu ldquoImage fusionalgorithm based on spatial frequency-motivated pulse cou-pled neural networks in nonsubsampled contourlet transformdomainrdquo Acta Automatica Sinica vol 34 no 12 pp 1508ndash15142008

[24] Y Wu C Wu and S Wu ldquoFusion of remote sensing imagesbased on nonsubsampled contourlet transform and regionsegmentationrdquo Journal of Shanghai JiaotongUniversity (Science)vol 16 no 6 pp 722ndash727 2011

[25] J Adu J Gan andYWang ldquoImage fusion based onnonsubsam-pled contourlet transform for infrared and visible light imagerdquoInfrared Physics amp Technology vol 61 pp 94ndash100 2013

12 Computational and Mathematical Methods in Medicine

[26] G Bhatnagar Q M J Wu and Z Liu ldquoDirective contrastbased multimodal medical image fusion in NSCT domainrdquoIEEE Transactions on Multimedia vol 15 no 5 pp 1014ndash10242013

[27] S Das and M K Kundu ldquoNSCT-based multimodal medicalimage fusion using pulse-coupled neural network andmodifiedspatial frequencyrdquo Medical and Biological Engineering andComputing vol 50 no 10 pp 1105ndash1114 2012

[28] P Kovesi ldquoImage features from phase congruencyrdquo Videre vol1 no 3 pp 1ndash26 1999

[29] M C Morrone and R A Owens ldquoFeature detection from localenergyrdquo Pattern Recognition Letters vol 6 no 5 pp 303ndash3131987

[30] P Kovesi ldquoPhase congruency a low-level image invariantrdquoPsychological Research vol 64 no 2 pp 136ndash148 2000

[31] L Zhang X Mou and D Zhang ldquoFSIM a feature similarityindex for image quality assessmentrdquo IEEE Transactions onImage Processing vol 20 no 8 pp 2378ndash2386 2011

[32] A Wong and W Bishop ldquoEfficient least squares fusion of MRIand CT images using a phase congruency modelrdquo PatternRecognition Letters vol 29 no 3 pp 173ndash180 2008

[33] L Yu Z He and Q Cao ldquoGabor texture representationmethodfor face recognition using theGamma and generalizedGaussianmodelsrdquo Image and Vision Computing vol 28 no 1 pp 177ndash1872010

[34] P Yao J Li X Ye Z Zhuang and B Li ldquoIris recognitionalgorithm using modified Log-Gabor filtersrdquo in Proceedings ofthe 18th International Conference on Pattern Recognition (ICPRrsquo06) pp 461ndash464 August 2006

[35] R Raghavendra and C Busch ldquoNovel image fusion schemebased on dependency measure for robust multispectral palm-print recognitionrdquo Pattern Recognition vol 47 no 6 pp 2205ndash2221 2014

[36] R Redondo F Sroubek S Fischer and G Cristobal ldquoMultifo-cus image fusion using the log-Gabor transform and aMultisizeWindows techniquerdquo Information Fusion vol 10 no 2 pp 163ndash171 2009

[37] X Zhang X Li and Y Feng ldquoA new image fusion performancemeasure using Riesz transformsrdquoOptik vol 125 no 3 pp 1427ndash1433 2014

[38] C S Xydeas and V Petrovic ldquoObjective image fusion perfor-mance measurerdquo Electronics Letters vol 36 no 4 pp 308ndash3092000

[39] B Yang and S Li ldquoPixel-level image fusion with simultaneousorthogonal matching pursuitrdquo Information Fusion vol 13 no 1pp 10ndash19 2012

[40] QMiao C Shi P XuM Yang and Y Shi ldquoA novel algorithm ofimage fusion using shearletsrdquoOptics Communications vol 284no 6 pp 1540ndash1547 2011

[41] E Candes L Demanet D Donoho and L Ying ldquoFast discretecurvelet transformsrdquo Multiscale Modeling and Simulation vol5 no 3 pp 861ndash899 2006

[42] M Fu and C Zhao ldquoFusion of infrared and visible imagesbased on the second generation curvelet transformrdquo Journal ofInfrared andMillimeterWaves vol 28 no 4 pp 254ndash258 2009

[43] S Li B Yang and J Hu ldquoPerformance comparison of differentmulti-resolution transforms for image fusionrdquo InformationFusion vol 12 no 2 pp 74ndash84 2011

[44] Y Phamila and R Amutha ldquoDiscrete Cosine Transform basedfusion of multi-focus images for visual sensor networksrdquo SignalProcessing vol 95 pp 161ndash170 2014

[45] L Wang B Li and L Tian ldquoMulti-modal medical volumetricdata fusion using 3D discrete shearlet transform and global-to-local rulerdquo IEEETransactions on Biomedical Engineering vol 61no 1 pp 197ndash206 2014

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 9: Research Article Log-Gabor Energy Based Multimodal Medical ...downloads.hindawi.com/journals/cmmm/2014/835481.pdf · elds of multiscale transform based image fusion [ ]. However,

Computational and Mathematical Methods in Medicine 9

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 7 The multimodal medical images with noise (a) CT image with 5 noise (b) MRI image with 5 noise fused images by (c) DWT(d) FDCT (e) DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 2 Comparison on quantitative evaluation of different methods for the noise medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 41893 40044 40995 41338 52533 57787QABF 0592 0579 0619 0637 0269 0640MI 2785 2539 2826 2882 1816 2914

effectively evaluate the performance of the fusion methodsin terms of the noise transmission For further comparisonPeak Signal to Noise Ratio (PSNR) a ratio between themaximum possible power of a signal and the power of noisethat affects the fidelity [44] is used The larger the value ofPSNR the less the image distortion [45] PSNR is formulatedas

PSNR = 10lg100381610038161003816100381610038161003816100381610038161003816

2552

RMSE2100381610038161003816100381610038161003816100381610038161003816

(15)

where RMSE denotes the Root Mean Square Error betweenthe fused image and the reference imageThe reference imagein the following experiment is selecting from Figure 4(h1)which is proven to be the best performance compared to otherimages

Figure 7 illustrates the fusion results obtained by thedifferent methods The comparison of the images fused byDWT FDCT DTCWT NSCT-1 NSCT-2 and proposedmethod shown in Figures 7(c)ndash7(h) clearly implies thatthe fused image by proposed method has better quality andcontrast than other methods Figure 8 shows the values ofPSNR of different methods in fusing noisy images Onecan observe that the proposed method has higher values of

1516171819202122232425

DWT FDCT DTCWT NSCT-1 NSCT-2 Proposedmethod

PSNR23704

Figure 8 Comparison on PSNR of different methods for the noisemedical images

PSNR than any of the DWT FDCT DTCWT NSCT-1 andNSCT-2 fusionmethods Table 2 gives the quantitative resultsof fused images and shows that the values of STD 119876119860119861119865and MI are also the highest of all the six methods Fromthe analysis above we can also observe that the proposedscheme provides the best performance and outperforms theother algorithms In addition compared with the result ofthe NSCT-1 method using the average-maximum rule it

10 Computational and Mathematical Methods in Medicine

(a) (b) (c) (d)

(e) (f) (g) (h)

Figure 9 Brain images of the man with recurrent tumor (a) MR-T1 image (b) MR-T2 image fused images by (c) DWT (d) FDCT (e)DTCWT (f) NSCT-1 (g) NSCT-2 and (h) proposed method

Table 3 Comparison on quantitative evaluation of different methods for the clinical medical images

Evaluation DWT FDCT DTCWT NSCT-1 NSCT-2 Proposed methodSTD 64433 62487 63194 63652 60722 67155QABF 0541 0565 0578 0597 0411 0624MI 2507 2455 2540 2584 2392 2624

demonstrated the validities of the proposed fusion rule innoisy environment

432 A Clinical Example on Noninvasive Diagnosis In orderto demonstrate the practical value of the proposed schemein medical imaging one clinical case on neoplastic diagnosisis considered where MR-T1MR-T2 medical modalities areused The images have been downloaded from the HarvardUniversity site (httpwwwmedharvardeduAANLIBhomehtml) Figures 9(a)-9(b) show the recurrent tumor case of a51-year-old woman who sought medical attention because ofgradually increasing right hemiparesis (weakness) and hemi-anopia (visual loss) At craniotomy left parietal anaplasticastrocytoma was found A right frontal lesion was biopsiedA large region of mixed signal on MR-T1 and MR-T2 imagesgives the signs of the possibility of active tumor (highlightedby the red arrows)

Figures 9(c)ndash9(h) show the fused images by DWT FDCTDTCWT NSCT-1 NSCT-2 and proposed method It isobvious that the fused image by proposed method has bettercontrast and sharpness of active tumor (highlighted by the redarrows) than other methods Table 3 shows the quantitativeevaluation of different methods for the clinical medicalimages The values of the proposed method are optimum in

terms of STD MI and 119876119860119861119865 From Figure 9 and Table 3we can obtain the same conclusion that the proposed schemeprovides the best performance and outperforms the otheralgorithms

5 Conclusions

Multimodal medical image fusion plays an important role in clinical applications, but the real challenge is to obtain a visually enhanced image through the fusion process. In this paper, a novel and effective image fusion framework based on NSCT and Log-Gabor energy is proposed. Its potential advantages are: (1) NSCT is more suitable for image fusion because of its multiresolution, multidirection, and shift-invariance properties; (2) a new pair of fusion rules based on phase congruency and Log-Gabor energy is used to preserve more useful information in the fused image, which improves the quality of the fused images and overcomes the limitations of the traditional fusion rules; and (3) the proposed method provides better performance than the current fusion methods whether the source images are clean or noisy. In the experiments, five groups of multimodal medical images, including one group with noise and one clinical example of a woman affected with a recurrent tumor, are fused using traditional fusion methods and the proposed framework. The subjective and objective comparisons clearly demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with less information distortion than other fusion methods. In the future, we plan to develop a pure C++ platform to reduce the time cost and to extend our method to 3D or 4D medical image fusion.
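To make the framework's flow concrete, the sketch below outlines the NSCT-domain fusion pipeline in Python. It is an illustrative simplification under stated assumptions, not the authors' implementation: nsct_decompose and nsct_reconstruct are hypothetical placeholders for an NSCT library (no standard Python binding exists), the low-frequency phase-congruency rule is reduced to a simple maximum-magnitude selection, and the Log-Gabor energy uses a single radial log-Gabor filter with illustrative parameters rather than a full multiscale, multiorientation bank.

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function
    G(f) = exp(-(log(f/f0))^2 / (2 (log(sigma/f0))^2)).

    f0 (center frequency) and sigma_ratio = sigma/f0 (bandwidth) are
    illustrative defaults, not the paper's settings.
    """
    rows, cols = shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC bin
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0       # a log-Gabor filter has no DC component
    return g

def log_gabor_energy(band):
    """Local energy of a subband: squared magnitude of its log-Gabor response."""
    response = np.fft.ifft2(np.fft.fft2(band) * log_gabor_radial(band.shape))
    return np.abs(response) ** 2

def fuse_nsct(img_a, img_b):
    """Sketch of the fusion flow: low-frequency coefficients are selected by a
    feature measure (phase congruency in the paper; here a placeholder
    maximum-magnitude comparison), high-frequency coefficients by the larger
    Log-Gabor energy."""
    low_a, highs_a = nsct_decompose(img_a)   # hypothetical NSCT API
    low_b, highs_b = nsct_decompose(img_b)

    low_f = np.where(np.abs(low_a) >= np.abs(low_b), low_a, low_b)

    highs_f = []
    for ha, hb in zip(highs_a, highs_b):
        mask = log_gabor_energy(ha) >= log_gabor_energy(hb)
        highs_f.append(np.where(mask, ha, hb))

    return nsct_reconstruct(low_f, highs_f)  # hypothetical inverse NSCT
```

In practice the energy comparison is usually taken over a local window rather than per coefficient; the per-coefficient np.where above is simply the most compact variant of the selection rule.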

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61262034, 61462031, and 61473221), by the Key Project of Chinese Ministry of Education (no. 211087), by the Doctoral Fund of Ministry of Education of China (no. 20120201120071), by the Natural Science Foundation of Jiangxi Province (nos. 20114BAB211020 and 20132BAB201025), by the Young Scientist Foundation of Jiangxi Province (no. 20122BCB23017), by the Science and Technology Application Project of Jiangxi Province (no. KJLD14031), by the Science and Technology Research Project of the Education Department of Jiangxi Province (no. GJJ14334), and by the Fundamental Research Funds for the Central Universities of China.

References

[1] R. Shen, I. Cheng, and A. Basu, "Cross-scale coefficient selection for volumetric medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.
[2] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.
[3] Y. Yang, D. S. Park, S. Huang, and J. Yang, "Fusion of CT and MR images using an improved wavelet based method," Journal of X-Ray Science and Technology, vol. 18, no. 2, pp. 157–170, 2010.
[4] R. Singh, R. Srivastava, O. Prakash, and A. Khare, "Multimodal medical image fusion in dual tree complex wavelet transform domain using maximum and average fusion rules," Journal of Medical Imaging and Health Informatics, vol. 2, no. 2, pp. 168–173, 2012.
[5] Z. Wang and Y. Ma, "Medical image fusion using m-PCNN," Information Fusion, vol. 9, no. 2, pp. 176–185, 2008.
[6] V. Barra and J. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.
[7] A. P. James and B. V. Dasarathy, "Medical image fusion: a survey of the state of the art," Information Fusion, vol. 19, pp. 4–19, 2014.
[8] P. Balasubramaniam and V. P. Ananthi, "Image fusion using intuitionistic fuzzy sets," Information Fusion, vol. 20, pp. 21–30, 2014.
[9] Y. Yang, S. Y. Huang, F. J. Gao, et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Measurement Science Review, vol. 14, no. 2, pp. 102–108, 2014.
[10] P. S. Pradhan, R. L. King, N. H. Younan, and D. W. Holcomb, "Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 12, pp. 3674–3686, 2006.
[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985–997, 2002.
[12] J. Yang and R. S. Blum, "A statistical signal processing approach to image fusion for concealed weapon detection," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 513–516, September 2002.
[13] R. Singh and A. Khare, "Multiscale medical image fusion in wavelet domain," The Scientific World Journal, vol. 2013, Article ID 521034, 10 pages, 2013.
[14] B. Aiazzi, L. Alparone, A. Barducci, S. Baronti, and I. Pippi, "Multispectral fusion of multisensor image data by the generalized Laplacian pyramid," in Proceedings of the 1999 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 2, pp. 1183–1185, Hamburg, Germany, July 1999.
[15] V. S. Petrovic and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.
[16] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 2010.
[17] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
[18] R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.
[19] X. B. Qu, J. W. Yan, and G. D. Yang, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian," Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.
[20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[21] Q. Zhang and B. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
[22] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, no. 7, pp. 569–581, 2012.
[23] X. B. Qu, J. W. Yan, H. Z. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.
[24] Y. Wu, C. Wu, and S. Wu, "Fusion of remote sensing images based on nonsubsampled contourlet transform and region segmentation," Journal of Shanghai Jiaotong University (Science), vol. 16, no. 6, pp. 722–727, 2011.
[25] J. Adu, J. Gan, and Y. Wang, "Image fusion based on nonsubsampled contourlet transform for infrared and visible light image," Infrared Physics & Technology, vol. 61, pp. 94–100, 2013.
[26] G. Bhatnagar, Q. M. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.
[27] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Medical and Biological Engineering and Computing, vol. 50, no. 10, pp. 1105–1114, 2012.
[28] P. Kovesi, "Image features from phase congruency," Videre, vol. 1, no. 3, pp. 1–26, 1999.
[29] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, no. 5, pp. 303–313, 1987.
[30] P. Kovesi, "Phase congruency: a low-level image invariant," Psychological Research, vol. 64, no. 2, pp. 136–148, 2000.
[31] L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[32] A. Wong and W. Bishop, "Efficient least squares fusion of MRI and CT images using a phase congruency model," Pattern Recognition Letters, vol. 29, no. 3, pp. 173–180, 2008.
[33] L. Yu, Z. He, and Q. Cao, "Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models," Image and Vision Computing, vol. 28, no. 1, pp. 177–187, 2010.
[34] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 461–464, August 2006.
[35] R. Raghavendra and C. Busch, "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition," Pattern Recognition, vol. 47, no. 6, pp. 2205–2221, 2014.
[36] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique," Information Fusion, vol. 10, no. 2, pp. 163–171, 2009.
[37] X. Zhang, X. Li, and Y. Feng, "A new image fusion performance measure using Riesz transforms," Optik, vol. 125, no. 3, pp. 1427–1433, 2014.
[38] C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[39] B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
[40] Q. Miao, C. Shi, P. Xu, M. Yang, and Y. Shi, "A novel algorithm of image fusion using shearlets," Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.
[41] E. Candès, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.
[42] M. Fu and C. Zhao, "Fusion of infrared and visible images based on the second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254–258, 2009.
[43] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
[44] Y. Phamila and R. Amutha, "Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks," Signal Processing, vol. 95, pp. 161–170, 2014.
[45] L. Wang, B. Li, and L. Tian, "Multi-modal medical volumetric data fusion using 3D discrete shearlet transform and global-to-local rule," IEEE Transactions on Biomedical Engineering, vol. 61, no. 1, pp. 197–206, 2014.
