Palmprint recognition using Gabor-based local invariant features





Neurocomputing 72 (2009) 2040–2045




Letters


Xin Pan a,b,*, Qiu-Qi Ruan a

a Institute of Information Science, Beijing Jiaotong University, Beijing 100044, PR China
b College of Computer and Information Engineering, Inner Mongolia Agricultural University, Huhhot 010018, PR China

Article info

Article history:

Received 16 December 2007

Received in revised form 25 June 2008

Accepted 15 November 2008

Communicated by T. Heskes

Available online 6 December 2008

Keywords:

Feature extraction

Invariant

Palmprint recognition

Gabor function

0925-2312/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2008.11.019

* Corresponding author at: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, PR China.
E-mail addresses: [email protected], [email protected] (X. Pan).

Abstract

Variations occurring on palmprint images degrade recognition performance. In this paper, we propose a novel approach to extracting local invariant features using the Gabor function, to handle the variations of rotation, translation and illumination raised by the capturing device and the palm structure. The local invariant features are obtained by dividing a Gabor-filtered image into two-layered partitions and then calculating the difference of variance between each lower-layer sub-block and the upper-layer block in which it resides (called the local relative variance). The extracted features reflect only the relations between local sub-blocks and their enclosing upper-layer blocks, so that global disturbances occurring on palmprint images are counteracted. The effectiveness of the proposed method is demonstrated by the experimental results.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Recently, there has been extensive research on palmprint recognition owing to its distinguished characteristics, including stable structures, low cost and low intrusiveness [1]. Early studies focused on structural features of off-line palmprint images of high resolution (up to 500 dpi) [2,3]. As for the online palmprint images (less than 100 dpi) employed in most cases today, texture analysis has been introduced to palmprint recognition [4–7], because extracting structural features becomes much more difficult and the principal lines alone do not contribute adequately to high accuracy [6]. Li et al. [4] used four masks to highlight the distribution of line segments along the horizontal, vertical and two diagonal directions, and then computed the global and local energies to represent a palmprint image. Wu et al. [5] applied derivative-of-Gaussian (DoG) filters to extract palmprint texture and encoded it into DoGCodes for recognition. Connie et al. [7] combined three wavelet bases and linear projection methods for better performance than that obtained by directly using the original images.

Among the approaches for texture analysis, the Gabor function has been regarded as an effective tool due to its optimal localization properties in both the spatial and frequency domains [8]. By using a Gabor function of multiple scales and orientations, we can decompose the images into distinctive components. Kong et al. [6] successfully applied 2D Gabor filters to palmprint recognition. In their method, Gabor features, derived from the convolution of a Gabor filter and palmprint images, were encoded into Hamming codes pixel by pixel. But these techniques cannot deal with the variations effectively.

In fact, the variations occurring on palmprint images are inevitable. When capturing a palmprint image and cropping the region of interest (ROI), it is very hard to align the palmprint images in the same precise position, which brings forth rotation and translation. Moreover, the illumination of captured images varies greatly with the stretching and pressure of palms (see Fig. 1). To address the problem, Kong et al. [6] supposed the image shifts in two directions. They calculated the Hamming distance for each possibility separately and took the minimum as the final distance. However, it is difficult to confine the shift to a supposed limit, and moreover, the method requires extra cumbersome calculation for each possibility. Therefore, how to extract features invariant to variations of position and illumination is of great importance for palmprint recognition.

Arivazhagan et al. [8] attempted to obtain rotation-invariant features using Gabor functions for texture classification. In their method, texture features were found by calculating the mean and variance of the Gabor-filtered image; rotation invariance was achieved by a rotation-normalized, circular shift of feature elements to ensure all the images had the same dominant direction [8]. The approach seems to be effective for regular textures containing obvious rotation variations, such as barks, bricks, etc. However, such holistic features are not suited to palmprint images, because the texture is non-periodic and contains only minor variations after image alignment.



Fig. 1. Palmprint samples containing variations (images in the same column are from the same palm).


Therefore, a novel method to extract Gabor-based local invariant features for palmprint recognition is proposed in this paper. Inspired by fractal coding [9], a Gabor-filtered image is partitioned into 4^p blocks, each of which is divided into 4 sub-blocks; the local relative variance (LRV) is defined as the difference of variance between each lower-layer sub-block and the upper-layer block in which it resides. Eventually, the LRVs of all the 4^(p+1) sub-blocks, over the Gabor-filtered images of all scales and orientations, compose the Gabor-based local invariant feature vector representing a palmprint image. Due to the image localization and variance subtraction, the features are locally invariant to global noises caused by variations of position and illumination, resulting in better recognition performance. In addition to the higher accuracy, the proposed method is more efficient because only first-order statistical calculation over blocks, rather than pixels, is required, resulting in less computational effort.

This paper is organized as follows. Section 2 introduces the local Gabor invariant features for palmprint recognition. The experimental results are discussed in Section 3. Finally, Section 4 highlights the conclusion.

2. Gabor-based local invariant features for recognition

Generally speaking, a palmprint recognition system mainly contains three stages: preprocessing, feature extraction and feature matching. In the preprocessing stage, the captured palm image is aligned and the center part is cropped as the ROI for recognition. The feature matching stage identifies the test image as belonging to the class which shows the highest similarity. This paper focuses on the second stage and proposes a novel method to extract Gabor-based local invariant features, comprising three major steps: Gabor convolution, two-layer partition and computation of the LRVs (see Fig. 2).

2.1. 2D Gabor function

The 2D Gabor function has the following general form [6]:

$$G(x, y; \theta, u, \sigma) = \frac{1}{2\pi\sigma^2}\exp\left\{-\frac{x^2+y^2}{2\sigma^2}\right\}\exp\{2\pi i(ux\cos\theta + uy\sin\theta)\} \quad (1)$$

where i = √−1, u is the frequency of the sinusoidal wave, θ controls the orientation of the function, and σ is the standard deviation of the Gaussian envelope. Modified from the experimental results of Kong et al. [6], we design a Gabor function of five scales and four orientations for more robust features, whose parameters are set as follows (when v = 3, u = 0.0916):

$$u = \frac{0.2592}{(\sqrt{2})^{v}},\quad v = 0, 1, \ldots, 4;\qquad \theta_k = \frac{\pi k}{4},\quad k = 0, \ldots, 3. \quad (2)$$

For a given image I(x,y), the convolution of the Gabor function and the palmprint image yields the Gabor-filtered images

$$O_{vk}(x, y) = I(x, y) * G(v, k),\quad v = 0, 1, \ldots, 4;\ k = 0, \ldots, 3. \quad (3)$$
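As a concrete illustration of Eqs. (1)–(3), the filter bank can be sketched in NumPy. The kernel size and the Gaussian width σ are assumptions not stated in this section (σ = 5.6179 follows a common palmprint-Gabor setting), and the FFT product below computes a circular rather than exact linear convolution:

```python
import numpy as np

def gabor_kernel(v, k, size=17):
    """Gabor kernel per Eqs. (1)-(2). The kernel size and sigma are
    assumed values for illustration, not taken from the paper."""
    u = 0.2592 / (np.sqrt(2) ** v)       # frequency of scale v, Eq. (2)
    theta = np.pi * k / 4                # orientation k, Eq. (2)
    sigma = 5.6179                       # assumed Gaussian envelope width
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    carrier = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

def gabor_filter(img, v, k):
    """Eq. (3): O_vk = I * G(v,k), approximated via FFT products
    (circular convolution), returning a same-size complex response."""
    g = gabor_kernel(v, k)
    H, W = img.shape
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g, s=(H, W)))

# Filter bank: five scales (v = 0..4) x four orientations (k = 0..3)
bank = [[gabor_kernel(v, k) for k in range(4)] for v in range(5)]

# Sanity check of the stated parameter: u = 0.0916 when v = 3
print(round(float(0.2592 / np.sqrt(2) ** 3), 4))
```

Applying all 20 filters to one image yields the 20 filtered images O_vk used in the next steps.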

2.2. Two-layer partition

Local matching is a common approach in biometrics, including face recognition [10]; its general idea is to first locate several facial components and then classify the faces by comparing and combining the corresponding local statistics. Meanwhile, Nanni et al. [11] found that partitioning the ROI around the reference point into smaller sub-images can bound the effects of image variations, to better preserve and represent the local information. Hence, we divide the extracted ROI of the palmprint image into uniform grids, since it contains no obvious components such as the eyes, nose or chin in a face image. Prompted by fractal codes [9], which are obtained by searching for the larger domain blocks self-similar to smaller range blocks through geometric and affine transformations, we divide the image into two-layer partitions and attempt to represent the image by the similarity relationship between the blocks.

Given a Gabor-filtered image O_vk(x,y) of size M×N, the first partition divides the palmprint image into 4^p non-overlapping upper-layer blocks (where p is a nonnegative integer and p ≤ log₂(min(M,N)) − 1), marked as A_1, A_2, A_3, ..., A_{4^p}. 4^p is the preferable block number, obtained by simply dividing the two-dimensional image into 2^p parts along each coordinate direction, similar to quadtree segmentation. The second partition divides each upper-layer block into 4 parts, giving altogether 4^(p+1) lower-layer



Fig. 2. Block diagram of the proposed palmprint identification system.

Fig. 3. Two-layer partition (p = 1): (a) upper-layer partition and (b) lower-layer partition.


sub-blocks A_11, A_12, A_13, A_14, A_21, A_22, ..., A_{4^p,4}. Fig. 3 illustrates the partition procedure when p = 1.
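The two-layer partition described above can be sketched as follows; a minimal NumPy illustration assuming the image sides are divisible by 2^(p+1):

```python
import numpy as np

def two_layer_partition(img, p):
    """Split an image into 4**p upper-layer blocks (a 2**p x 2**p grid),
    then split each block 2x2, giving 4**(p+1) lower-layer sub-blocks.
    Assumes both image sides are divisible by 2**(p+1)."""
    M, N = img.shape
    bh, bw = M // 2**p, N // 2**p            # upper-layer block size
    blocks, subblocks = [], []
    for bi in range(2**p):
        for bj in range(2**p):
            block = img[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw]
            blocks.append(block)
            for si in range(2):              # 2 x 2 lower-layer split
                for sj in range(2):
                    subblocks.append(block[si*(bh//2):(si+1)*(bh//2),
                                           sj*(bw//2):(sj+1)*(bw//2)])
    return blocks, subblocks
```

For a 128×128 ROI with p = 2 this yields 16 upper-layer blocks of 32×32 pixels and 64 sub-blocks of 16×16 pixels.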

2.3. Local relative variance (LRV)

Based on the formulas for the average energy and standard deviation of the holistic image [8], the average energies and standard deviations of local blocks and sub-blocks can be computed to preserve more local information. For an arbitrary upper-layer block A_i (i = 1, ..., 4^p), the average energy and standard deviation are calculated as follows:

$$\mu_{vk}(A_i) = \frac{\sum_{x,y\in A_i} |O_{vk}(x,y)|}{4^{-p}MN}, \quad (4)$$

$$\sigma_{vk}(A_i) = \frac{\sqrt{\sum_{x,y\in A_i} \left(|O_{vk}(x,y)| - \mu_{vk}(A_i)\right)^2}}{4^{-p}MN}, \quad (5)$$

where |O_vk(x,y)| is the magnitude value. Similarly, the average energy and standard deviation of a lower-layer sub-block A_ij (i = 1, ..., 4^p; j = 1, ..., 4), derived from the second partition of A_i, are

$$\mu_{vk}(A_{ij}) = \frac{\sum_{x,y\in A_{ij}} |O_{vk}(x,y)|}{4^{-(p+1)}MN}, \quad (6)$$

$$\sigma_{vk}(A_{ij}) = \frac{\sqrt{\sum_{x,y\in A_{ij}} \left(|O_{vk}(x,y)| - \mu_{vk}(A_{ij})\right)^2}}{4^{-(p+1)}MN}. \quad (7)$$

Aiming to represent the image by the self-similarity relationship between lower-layer sub-blocks and upper-layer blocks, we define the LRV of all the local sub-blocks, rather than using the direct average energy and standard deviation [11], to form the feature vector. The LRV of a sub-block A_ij is defined as the difference of standard deviation between the lower-layer sub-block A_ij and the upper-layer block A_i in which it resides:

$$\Delta\sigma_{vk}(A_{ij}) = |\sigma_{vk}(A_{ij}) - \sigma_{vk}(A_i)|. \quad (8)$$

Taking the upper-layer deviation as the benchmark, the LRV reflects the deviation extent of each local sub-block relative to its enclosing upper-layer block. Therefore, the global error caused by variations of translation, rotation and shadow is eliminated by the subtraction. Then, we can create a feature vector F_vk of one Gabor-filtered image by arranging the LRVs of all 4^(p+1) lower-layer sub-blocks in sequence:

$$F_{vk} = (\Delta\sigma_{vk}(A_{11}), \Delta\sigma_{vk}(A_{12}), \ldots, \Delta\sigma_{vk}(A_{4^p 4})). \quad (9)$$

Considering that five scales and four orientations are applied in the Gabor function for palmprint textures, the feature vector of a palmprint image can be represented as

$$F = (F_{00}, F_{01}, F_{02}, \ldots, F_{43}). \quad (10)$$
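A sketch of the LRV computation (Eqs. (4)–(9)) for one Gabor-filtered image. For simplicity it uses NumPy's population standard deviation, which normalizes inside the square root by block size, whereas Eqs. (5) and (7) place the normalization outside the root; this changes only constant per-layer factors, not the idea of subtracting the enclosing block's deviation. Note that an additive brightness offset to the magnitudes leaves every deviation, hence every LRV, unchanged:

```python
import numpy as np

def lrv_features(filtered, p):
    """LRV feature vector of one Gabor-filtered image: for each of the
    4**(p+1) sub-blocks, the absolute difference between its standard
    deviation and that of its enclosing upper-layer block (Eq. (8))."""
    mag = np.abs(filtered)                   # |O_vk(x, y)|
    M, N = mag.shape
    bh, bw = M // 2**p, N // 2**p            # upper-layer block size
    feats = []
    for bi in range(2**p):
        for bj in range(2**p):
            block = mag[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw]
            sigma_block = block.std()        # upper-layer deviation
            for si in range(2):
                for sj in range(2):
                    sub = block[si*(bh//2):(si+1)*(bh//2),
                                sj*(bw//2):(sj+1)*(bw//2)]
                    feats.append(abs(sub.std() - sigma_block))
    return np.array(feats)                   # length 4**(p+1), Eq. (9)
```

Concatenating the vectors of all 20 filtered images gives the full feature F of Eq. (10).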

Having obtained the feature vectors, the Manhattan (or city-block) distance is used to measure the similarity between the test image (U) and the target image (T) for efficient matching. As the feature length increases with p, the Manhattan distance used as the similarity measurement requires less computational effort than the commonly used Euclidean distance. The concrete equation of the Manhattan distance is

$$d(U, T) = \sum_{v=0}^{4}\sum_{k=0}^{3} |F_{vk}(U) - F_{vk}(T)| = \sum_{v=0}^{4}\sum_{k=0}^{3}\sum_{i=1}^{4^p}\sum_{j=1}^{4} |\Delta\sigma^{U}_{vk}(A_{ij}) - \Delta\sigma^{T}_{vk}(A_{ij})|. \quad (11)$$

The nearest-neighbor classifier is used for identification. The test image U is classified as belonging to class P if their distance is the minimum among all the classes in the database. The above statement can be expressed as follows: if

$$d(U, P) = \min_{T} d(U, T),\quad T = 1, 2, \ldots, c, \quad (12)$$

then decide U ∈ P.
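The matching stage of Eqs. (11)–(12) reduces to a few lines; the gallery structure below (a mapping from class label to feature vector) is illustrative, not from the paper:

```python
import numpy as np

def manhattan(FU, FT):
    """Eq. (11): city-block distance between two full feature vectors
    (all 20 Gabor channels of Eq. (10) concatenated)."""
    return float(np.sum(np.abs(FU - FT)))

def classify(FU, gallery):
    """Eq. (12): nearest-neighbour rule over a gallery mapping
    class label -> stored feature vector."""
    return min(gallery, key=lambda label: manhattan(FU, gallery[label]))
```

Compared with the Euclidean distance, the Manhattan distance avoids the squaring and square-root operations, which matters as the feature length grows with p.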

3. Experiments and results

3.1. Palmprint database

We have collected 1460 palmprint images of 292×413 pixels at 72 dpi from 146 palms using a small-scale image scanner, with 10 samples per palm. The volunteers are required to spread their hands on the surface of the scanner, where a reference position is provided for the thumb. The other four fingers can stretch freely, which leads to rotation and translation of the palm within a small extent. In addition, owing to the diversities of palm pressure, stretching extent and structure, the illumination of the images varies obviously from one to another. The central part of 128×128 pixels, cropped from the original palms by the preprocessing method [12], constitutes the palmprint database (see Fig. 4). The first five random images per class are chosen for training and the last five for



Fig. 4. Experimental palmprint images (a) original images and (b) extracted ROI.

Table 1. Comparison of correct recognition rates (%) using different p.

        Distance    Training samples per class
                    5       4       3       2       1
p = 0   Manhattan   79.59   74.66   67.81   51.10   35.62
        Euclidean   79.14   74.25   67.40   50.14   34.66
p = 1   Manhattan   97.53   95.21   91.23   85.89   76.85
        Euclidean   97.12   94.38   89.59   84.25   71.64
p = 2   Manhattan   98.36   97.12   94.52   88.22   78.90
        Euclidean   96.44   95.34   92.88   85.89   73.15
p = 3   Manhattan   96.30   95.07   91.64   85.62   70.14
        Euclidean   94.79   92.33   88.90   81.64   64.79
p = 4   Manhattan   86.58   82.33   73.56   63.29   43.29
        Euclidean   81.10   74.79   66.85   53.97   35.48


testing, i.e., the training set and the testing set each contain 730 images. All the experiments are executed on a computer with an AMD Dual Core Processor 4000+ (2.10 GHz) and 1 GB RAM, using Matlab 7.3.

3.2. Experimental results

The first group of experiments aims to find the appropriate parameter p for the palmprint database. Table 1 lists the correct recognition rates when p ranges between 0 and 4, using the Manhattan distance and the Euclidean distance, respectively. The number of lower-layer sub-blocks is 4^(p+1), and the feature dimension is 20 × 4^(p+1). As can be seen, the recognition performance increases with p when p varies from 0 to 2. The improvement is mainly because appropriate partitions can alleviate the effects of image variations, the same conclusion as in [11]. The top recognition rates at p = 2 are 98.36% and 96.44% using the Manhattan and Euclidean distances, respectively, with 5 training samples per class. However, superfluous partitions cannot further enhance the recognition rates, as witnessed by the decrease of recognition performance when p varies from 3 to 4. Another conclusion we can draw from Table 1 is that the Manhattan distance is more accurate than the Euclidean distance in measuring the similarity of the extracted features, with higher recognition rates. In addition, the Manhattan distance is more efficient than the Euclidean distance, as stated in Section 2.3. Therefore, the Manhattan distance is preferable in the proposed method.
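As a quick check of the dimension trade-off, the feature length 20 × 4^(p+1) grows rapidly with p:

```python
# Feature dimension for the 20 Gabor channels (5 scales x 4 orientations):
# each filtered image contributes 4**(p + 1) LRVs.
for p in range(5):
    print(f"p={p}: {20 * 4 ** (p + 1)} features")
```

So p = 2 (1280 features) balances accuracy against feature length, while p = 4 already requires 20,480 features.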

The second group of experiments compares the recognition rates of our proposed algorithm (p = 2) with some other Gabor-based methods and subspace projection methods. The log-Gabor method convolves log-Gabor filters with the original images; the feature extraction and matching procedures are the same as those of our method for easy comparison. The log-Gabor filters are based on Gaussian transfer functions symmetrical on the log frequency scale [13]. As can be seen, the recognition rates do not exhibit any superiority of log-Gabor filters over Gabor filters in palmprint recognition. A similar result was obtained in a study on facial expression recognition [14]. The traditional Gabor method outperforms the rotation invariant method [8] by a large margin, validating that the localized idea fits palmprint recognition better than the integrated one. However, affected by variations of translation, rotation and illumination, its performance is less satisfying than that of the proposed method, with a gap of 5.8%. The rotation invariant method [8] performs worst among all the methods,




mainly because the rotation scopes of palmprint images are confined to a small extent after image alignment, differing from common textures (e.g., barks, bricks) which rotate at random angles. Therefore, the dominant direction corresponding to the maximum mean energy cannot guarantee rotation invariance for palmprint textures.

Moreover, the proposed method outperforms the subspace projection methods Eigenpalm and Fisherpalm, as witnessed by the comparison in Table 2. The reason is that the subspace methods are more liable to be affected by the variations and distortions occurring on images. Another disadvantage of the subspace projection methods is that their recognition accuracy decreases dramatically as training samples are reduced. As can be seen, the differences for Eigenpalm and Fisherpalm between using 5 samples and 2 samples per class are 20.68% and 21.88%, respectively, compared with 10.14% for our proposed method.

In addition to the accuracy improvements of our algorithm, efficiency is an obvious advantage compared with the other Gabor-based methods. Table 3 lists the recognition time of three Gabor-based algorithms: the proposed method, the traditional Gabor method [6] and the rotation invariant method [8]. The log-Gabor method is not listed in the table because its time, mainly consumed in feature extraction and feature matching, is the same as that of our proposed method. The time is divided into three parts, in accordance with the recognition procedure: Gabor convolution, feature extraction and feature matching. From the table, we can see that the convolution time of our method and of Arivazhagan et al. [8] is more than that of Kong et al. [6], because the former two approaches use 20 filters, while the latter uses only one filter. However, the time for feature extraction and matching of our algorithm is minor compared to that of [6]. The reason mainly lies in that the local invariant features are first-order statistical quantities over 16×16-pixel blocks in our algorithm, while Hamming encoding and the Hamming distance are based on per-pixel computation in [6]. The time consumption differs significantly

Table 3. Comparative time (s) of different algorithms.

Recognition procedure            Proposed method   Traditional Gabor [6]   Rotation invariant [8]
Gabor convolution, per image     0.1816            0.0091                  0.1816
Gabor convolution, total         265.1360          13.2860                 265.1360
Feature extraction, per image    0.04518           0.3275                  0.0248
Feature extraction, total        63.51             478.15                  36.2680
Feature matching, per matching   4.1963e−05        0.0154                  1.4807e−05
Feature matching, total          22.3621           8.2067e+03              7.8907

Table 2. Comparison of correct recognition rates (%) of different methods.

Method                           Training samples per class
                                 5       4       3       2
The proposed method              98.36   97.12   94.52   88.22
Log-Gabor method                 96.30   93.55   90.14   80.55
Traditional Gabor method [6]     92.88   91.23   86.99   80.96
Rotation invariant method [8]    53.29   44.79   35.62   23.29
Eigenpalm                        92.05   89.32   83.42   71.37
Fisherpalm                       94.25   91.10   84.52   72.47

with the size of the database, as witnessed by the total-time comparison (the numbers of images and matchings are 1460 and 730×730 = 532,900, respectively). The time of our method is slightly longer than that of the rotation invariant method [8] in feature extraction and matching, because we use local features by dividing the image into blocks, while they use integrated features. Nevertheless, the calculation time is still of the same order of magnitude, and the time difference can be neglected compared to the accuracy improvements.

4. Conclusion

This paper reports a novel method to extract Gabor-based local invariant features for palmprint recognition. The novelty of this study comes from using the relationship between the local lower-layer sub-blocks and the upper-layer blocks based on Gabor features, defined as the LRV, to represent a palmprint image. By counteracting the global disturbances and variations, the proposed method achieves obvious improvements in terms of the correct recognition rate. At the same time, the proposed method is highly efficient, because the local invariant features are simple statistical quantities without sophisticated calculation. We expect that the method is also helpful to other applications disturbed by variations of translation, rotation, illumination, etc.

Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments and advice. This work is supported partly by the National Natural Science Foundation of China under Grants No. 60472033 and No. 60672062, and the National Grand Fundamental Research 973 Program of China under Grant No. 2004CB318005.

References

[1] D. Zhang, Palmprint Authentication, Kluwer Academic Publishers, Dordrecht, 2004.

[2] N. Duta, A.K. Jain, K.V. Mardia, Matching of palmprints, Pattern Recognition Letters 23 (4) (2002) 477–485.

[3] D. Zhang, W. Shu, Two novel characteristics in palmprint verification: datum point invariance and line feature matching, Pattern Recognition 32 (4) (1999) 691–702.

[4] W. Li, J. You, D. Zhang, Texture-based palmprint retrieval using a layered search scheme for personal identification, IEEE Transactions on Multimedia 7 (5) (2005) 891–898.

[5] X. Wu, K. Wang, D. Zhang, Palmprint texture analysis using derivative of Gaussian filters, in: Proceedings of the 2006 International Conference on Computational Intelligence and Security, 2006, pp. 751–754.

[6] W.K. Kong, D. Zhang, W. Li, Palmprint feature extraction using 2-D Gabor filters, Pattern Recognition 36 (10) (2003) 2339–2347.

[7] T. Connie, A.T.B. Jin, M.G.K. Ong, D.N.C. Ling, An automated palmprint recognition system, Image and Vision Computing 23 (5) (2005) 501–515.

[8] S. Arivazhagan, L. Ganesan, S.P. Priyal, Texture classification using Gabor wavelets based rotation invariant features, Pattern Recognition Letters 27 (16) (2006) 1976–1982.

[9] D. Putra, I.K. Gede, A. Susanto, A. Harjoko, T.S. Widodo, Palmprint verification based on fractal codes and fractal dimensions, in: Proceedings of the 8th IASTED International Conference on Signal and Image Processing, Honolulu, Hawaii, USA, 2006, pp. 323–328.

[10] J. Zou, Q. Ji, G. Nagy, A comparative study of local matching approach for face recognition, IEEE Transactions on Image Processing 16 (10) (2007) 2617–2628.

[11] L. Nanni, A. Lumini, A hybrid wavelet-based fingerprint matcher, Pattern Recognition 40 (11) (2007) 3146–3151.

[12] Y. Wang, Q. Ruan, A new preprocessing method of palmprint, Journal of Image and Graphics 13 (6) (2008) 1115–1122 (in Chinese).

[13] D.J. Field, Relations between the statistics of natural images and the response properties of cortical cells, Journal of the Optical Society of America A 4 (12) (1987) 2379–2394.

[14] N. Rose, Facial expression classification using Gabor and log-Gabor filters, in: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, 2006, pp. 346–350.




Xin Pan received her B.S. and M.S. degrees from Xi'an Institute of Technology and Inner Mongolia Agricultural University in 1997 and 2000, respectively. She has worked in the College of Computer and Information Engineering, Inner Mongolia Agricultural University since then. She is now pursuing her Ph.D. degree at the Institute of Information Science, Beijing Jiaotong University. Her research interests include image processing, pattern recognition, etc.

Qiuqi Ruan was born in 1944. He received the B.S. and M.S. degrees from Northern Jiaotong University, China, in 1969 and 1981, respectively. From January 1987 to May 1990, he was a visiting scholar at the University of Pittsburgh and the University of Cincinnati. Subsequently, he has been a visiting professor in the USA several times. He has published two books and more than 100 papers, and holds a national patent. He is now a professor and doctoral supervisor at the Institute of Information Science, Beijing Jiaotong University. He is a senior member of the IEEE. His main research interests include digital signal processing, computer vision, pattern recognition, and virtual reality, etc.