
Multimodal Palmprint Biometric System using SPIHT and Radial Basis Function

Djamel Samai 1, Abdallah Meraoumia 1, Salim Chitroub 2 and Noureddine Doghmane 3

1 Université Kasdi Merbah Ouargla, Laboratoire de Génie Électrique, Faculté des Nouvelles Technologies de l'Information et de la Communication, Ouargla, 30000, Algérie

2 Signal and Image Processing Laboratory, Electronics and Computer Science Faculty, USTHB, P.O. Box 32, El Alia, Bab Ezzouar, 16111, Algiers, Algeria

3 LASA Laboratory, Department of Electronics, Faculty of Engineering, Badji Mokhtar University, B.P. 12, Annaba, 23000, Algeria.

Email: [email protected], [email protected], s [email protected], [email protected]

Abstract—Palmprints have been widely studied for personal authentication because they are highly accurate. To minimize the amount of data to be transferred over a low-bandwidth network link to the respective location, as well as the storage of reference data in template databases, we propose an efficient multimodal palmprint biometric system based on the well-known Set Partitioning In Hierarchical Trees (SPIHT) coder and using a Radial Basis Function (RBF) network. The fusion is performed at the matching-score level. The proposed method is tested and evaluated on the PolyU palmprint database of 400 users. The experimental results show the effectiveness and reliability of the proposed approach, which achieves both a high identification rate and high accuracy.

Index Terms—Biometrics, SPIHT, Multimodal Palmprint, RBF, Data fusion.

I. INTRODUCTION

PERSONAL identification and verification both play an important role in our life. Today, biometric systems are used more and more in different activities and domains. They have replaced traditional knowledge-based or token-based personal identification or verification systems, which became tedious, time-consuming, inefficient and expensive. These shortcomings have led to biometric identification or verification systems becoming the focus of the research community in recent years [1][2]. Biometrics involves the automatic identification of an individual based on his or her physiological or behavioral characteristics. In the literature, a number of biometric technologies have been proposed; one family of them is hand-based biometrics, including fingerprint [3], palmprint [4][5], hand geometry or hand shape [6], hand vein [7], and Finger-Knuckle-Print (FKP) [8]. Their usage provides a reliable, low-cost and user-friendly solution for a range of access control applications. Palmprint identification is one kind of hand-biometric technology. The rich texture information of the palmprint offers one of the most powerful means of personal identification [4][9]. A palmprint contains many line features, for example, principal lines, wrinkles, and ridges. Because of the large surface and the rich line features that can be captured even at a lower resolution, we expect palmprints to be robust to noise and to have high individuality. For this reason, and due to the large amounts of data involved, we propose to store compressed palmprint templates at low bitrate and to investigate the effects of compression on the recognition operation. Image compression is performed using the well-known Set Partitioning In Hierarchical Trees (SPIHT) coder [10], and all experiments are done in the pixel domain. In other words, the images are compressed to a certain bitrate and then decompressed prior to use in the recognition experiments. To improve our coding scheme for palmprint identification, we fuse the color palmprint image composed of three spectral bands (RGB) with the Near-InfraRed (NIR) band palmprint to generate a multimodal biometric system.

The remainder of the paper is organized as follows. The proposed scheme for multimodal palmprint identification is presented in Section II. Section III describes the proposed feature extraction method based on the SPIHT algorithm. The fusion technique used for combining the information carried by the extracted features is detailed in Section IV. The experimental results before and after fusion are given and discussed in Section V. Finally, Section VI is devoted to the conclusion and future work.

II. PROPOSED SYSTEM

Fig. 1 illustrates the schematic diagram of the proposed system using multispectral palmprint images (RGB and NIR spectral bands). It is composed of two phases: an enrollment phase and an identification/verification phase. After a preprocessing operation to detect the key points of the palm, a feature extraction operation is necessary to obtain effective features by using the SPIHT coder algorithm at low bitrate. We obtain a coarse version of the palmprint images which preserves all their characteristics. The extracted feature vectors are stored in a set of observation vectors as reference templates after a training operation in which we used a Radial Basis Function (RBF) network [11]. Each RBF output corresponds to one class. For identification, an observation vector is extracted from the test multispectral palmprint images and classified by the trained classifier. In the matching module, which compares the input features with the templates from the database, evaluation and maximum selection are used to measure the similarity between the vectors. Matching scores from the two uni-biometric identification systems, RGB and NIR, are combined into a unique matching score using fusion at the matching-score level. Based on this unique matching score, a final decision of accepting or rejecting the user is then made.
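As a rough, illustrative sketch of this decision stage (our own illustration, not code from the paper), the following Python fragment assumes that each sub-system already returns one matching score per enrolled class; it fuses the two score vectors, applies maximum selection, and makes the open-set accept/reject decision against a threshold.

```python
import numpy as np

def decide(scores_rgb, scores_nir, t0):
    """Fuse the RGB and NIR per-class matching scores (simple sum here),
    pick the best-matching class by maximum selection, and accept it only
    if the fused score reaches the open-set threshold t0."""
    fused = np.asarray(scores_rgb) + np.asarray(scores_nir)  # matching-score fusion
    best = int(np.argmax(fused))                              # maximum selection
    return (best if fused[best] >= t0 else None), float(fused[best])

# toy example: 5 enrolled classes, the probe resembles class 2
print(decide([0.10, 0.20, 0.80, 0.10, 0.30],
             [0.20, 0.10, 0.90, 0.20, 0.20], t0=1.4))   # -> (2, 1.7)
```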


Fig. 1. Multimodal palmprint identification/verification system based on SPIHT and Radial Basis Function.

III. FEATURE EXTRACTION

A. SPIHT algorithm

The SPIHT algorithm [10] is an efficient algorithm for still image compression. Images decomposed by the wavelet transform exhibit self-similarity across levels. This similarity is exploited by the SPIHT algorithm to build Spatial Orientation Trees (SOT). Fig. 2 gives an example of SOT for a 16 × 16 image with 2 decomposition levels [12]. The SPIHT algorithm orders the wavelet coefficients by magnitude and transmits them from the most significant bit (MSB) to the least significant bit (LSB). It partitions the coefficients, or sets of coefficients, into significant and insignificant ones. Individual significant coefficients are added to the List of Significant Pixels (LSP), insignificant coefficients to the List of Insignificant Pixels (LIP), and sets of descendant coefficients to the List of Insignificant Sets (LIS). When a LIS entry contains one or more coefficients that are significant for a certain threshold value, it is partitioned into significant pixels, insignificant pixels, and insignificant sets. Whenever the algorithm determines the significance of a coefficient, it produces one bit of information. The number of bits from significance tests equals the number of entries in the LIP and the LIS, and the number of sign bits produced corresponds to the number of entries that are added to the LSP. Once a pixel enters the LIP, it generates one bit for every bit plane to indicate whether it is significant or not.
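To make the bit-plane mechanics concrete, here is a deliberately simplified Python sketch (our own illustration, not the authors' code). It keeps only the LIP/LSP bookkeeping and codes every coefficient individually; the set-partitioning step over the spatial orientation trees (the LIS), which is what gives SPIHT its efficiency, is intentionally omitted.

```python
import numpy as np

def bitplane_sketch(coeffs, num_planes=6):
    """Simplified SPIHT-style bit-plane coding of a block of wavelet
    coefficients: a significance pass over the LIP followed by a refinement
    pass over the LSP for each bit plane, from the MSB downwards.
    The LIS / spatial-orientation-tree partitioning is left out."""
    c = np.asarray(coeffs, dtype=float).ravel()
    n = int(np.floor(np.log2(max(np.abs(c).max(), 1.0))))  # most significant bit plane
    lip = list(range(c.size))                               # List of Insignificant Pixels
    lsp = []                                                 # List of Significant Pixels
    bits = []
    for _ in range(num_planes):
        if n < 0:
            break
        threshold = 2 ** n
        already_significant = len(lsp)
        # sorting pass: one significance bit per LIP entry, plus a sign bit
        # for every coefficient that becomes significant at this threshold
        remaining = []
        for i in lip:
            if abs(c[i]) >= threshold:
                bits += [1, 0 if c[i] >= 0 else 1]
                lsp.append(i)
            else:
                bits.append(0)
                remaining.append(i)
        lip = remaining
        # refinement pass: the n-th magnitude bit of previously significant pixels
        for i in lsp[:already_significant]:
            bits.append((int(abs(c[i])) >> n) & 1)
        n -= 1
    return bits

# toy usage on a handful of "wavelet coefficients"
print(bitplane_sketch([63.0, -34.0, 49.0, 10.0, 7.0, -1.0]))
```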

Fig. 2. Spatial Orientation Trees in SPIHT.

B. Palmprint feature representation

Palmprint feature representation aims to describe the features in a concise and easy-to-compare way. Therefore, we use the palmprint image after decompression with the SPIHT algorithm at low bitrate. Fig. 3 shows an example of a palmprint image at 0.1 bits per pixel and at 0.25 bits per pixel. If we work with the image at 0.25 bits per pixel, our proposed system is able to obtain features from a palm including principal lines, wrinkles and ridge texture. The decompressed palmprint image is reordered to produce a one-dimensional vector for each image. The observation vectors of all persons are then modeled with RBF networks. A typical RBF network, which is a special neural network architecture, contains three layers: an input layer, a hidden layer and an output layer [11][13]. The number of neurons in the input layer is determined by the feature vector dimension of the samples, while the number of neurons in the hidden layer is adjustable. The output layer has as many neurons as there are pattern classes. The values of the input variables (input vector) are forwarded from the input layer to the hidden layer. The nodes within each layer are fully connected to the nodes of the previous layer. The hidden nodes are characterized by their center locations and the nonlinear radial basis function they employ. Each hidden node receives the input vector, calculates the Euclidean distance between its center location and the input vector, and finally performs a nonlinear transformation of the distance using the radial basis function. The output of each hidden node is then multiplied by a particular weight, while the final output of the network is a simple summation of all the weighted hidden node activations. Conventionally, the K-means clustering algorithm can be applied to find the RBF centers, which are the most important parameters [14].
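The following Python sketch illustrates such a three-layer RBF classifier under our own assumptions (Gaussian basis functions, K-means centers, output weights fitted by least squares); the number of centers and the width sigma are illustrative values, not parameters reported in the paper.

```python
import numpy as np

class RBFNetwork:
    """Minimal three-layer RBF classifier: K-means centers, Gaussian hidden
    units, and one linear output neuron per class, fitted by least squares."""

    def __init__(self, n_centers=20, sigma=1.0, n_iter=20, seed=0):
        self.n_centers, self.sigma, self.n_iter = n_centers, sigma, n_iter
        self.rng = np.random.default_rng(seed)

    def _kmeans(self, X):
        centers = X[self.rng.choice(len(X), self.n_centers, replace=False)].copy()
        for _ in range(self.n_iter):
            labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
            for k in range(self.n_centers):
                if np.any(labels == k):
                    centers[k] = X[labels == k].mean(axis=0)
        return centers

    def _hidden(self, X):
        d = np.linalg.norm(X[:, None] - self.centers[None], axis=2)  # Euclidean distances
        return np.exp(-(d ** 2) / (2 * self.sigma ** 2))             # Gaussian radial basis

    def fit(self, X, y, n_classes):
        X = np.asarray(X, dtype=float)
        self.centers = self._kmeans(X)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]                      # one output neuron per class
        self.W, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict_scores(self, X):
        """Return one matching score per enrolled class for each observation vector."""
        return self._hidden(np.atleast_2d(np.asarray(X, dtype=float))) @ self.W

# toy usage: 3 classes of 64-dimensional "palmprint" observation vectors
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 64)) + np.repeat(np.arange(3), 10)[:, None]
net = RBFNetwork(n_centers=9, sigma=8.0).fit(X, np.repeat(np.arange(3), 10), n_classes=3)
print(net.predict_scores(X[:2]).shape)   # (2, 3): a score per enrolled class
```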

IV. MATCHING, FUSION SCHEME AND DECISION

The match score is a measure of similarity between the input and template biometric feature vectors. The top match is determined by examining the matching scores of all comparisons and reporting the identity of the template corresponding to the largest similarity score. Fusion is based on the combination of matching scores after separate feature extraction and comparison between reference data and test data for each subsystem. Several matching-score fusion rules integrate the normalized matching scores of a user to produce the final matching score [15].

Fig. 3. (a) Original palmprint image and its reconstructions after SPIHT encoding at (b) 0.1 bits per pixel and (c) 0.25 bits per pixel.

A. Simple Sum rule

The Simple Sum rule takes the sum of the R matching scores of the kth user as the final matching score S_k of this user. S_k is calculated as follows:

S_k = \sum_{i=1}^{R} S_{ki}          (1)

B. Product rule

The Product rule takes the product of the R matching scores of the kth user as the final matching score:

S_k = \prod_{i=1}^{R} S_{ki}          (2)

C. Min Score rule

The Min Score rule selects the minimum of the R matching scores of the kth user as the final matching score of this user. This rule is expressed as follows:

S_k = \min(S_{k1}, S_{k2}, \ldots, S_{kR})          (3)

D. Max Score rule

The Max Score rule selects the maximum of the R matching scores of the kth user as the final matching score of this user. This rule is expressed as follows:

S_k = \max(S_{k1}, S_{k2}, \ldots, S_{kR})          (4)

E. Weighted Sum rule

The Weighted Sum rule assumes that the R biometric traits have different significance in personal authentication and assigns different weights to the matching scores of the different traits. The weighted sum of the R matching scores is considered as the final matching score of the kth user. This rule is expressed as follows:

S_k = \sum_{i=1}^{R} w_i S_{ki}          (5)

The final result of the fusion is a new matching score, which is the basis for the classification decision of the entire system.
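As a small illustration (our own sketch, not code supplied by the authors), rules (1)-(5) can be written directly; the scores are assumed to be already normalized, and the weights used for the Weighted Sum rule below are arbitrary example values.

```python
import numpy as np

def fuse_scores(scores, rule="sum", weights=None):
    """Combine the R matching scores of one user into a single score,
    following rules (1)-(5). 'scores' holds R already-normalized matching
    scores; 'weights' is used only by the weighted-sum rule."""
    s = np.asarray(scores, dtype=float)
    if rule == "sum":       return float(s.sum())              # rule (1)
    if rule == "product":   return float(s.prod())             # rule (2)
    if rule == "min":       return float(s.min())              # rule (3)
    if rule == "max":       return float(s.max())              # rule (4)
    if rule == "weighted":  return float(np.dot(weights, s))   # rule (5)
    raise ValueError(f"unknown fusion rule: {rule}")

# example with R = 2 (RGB and NIR scores of one user); the weights are illustrative
print(fuse_scores([0.82, 0.95], "weighted", weights=[0.4, 0.6]))   # 0.898
```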

V. EXPERIMENTAL RESULTS AND DISCUSSION

A. Experimental database

Our experiments are designed to test the accuracy and efficiency of the proposed method. They are performed on the multispectral palmprint database from the Hong Kong Polytechnic University (PolyU) [16]. The database contains images captured with visible and infrared light. We chose 400 different persons among the 500 available. Each person has 12 palmprint images, giving a total of 4800 palmprints.

B. Evaluation Criterion

The accuracy of any biometric recognition system can be measured by two values [14].

1) The False Accept Rate (FAR): The FAR of an impostor n is defined as the number of accepted verification attempts for impostor n divided by the number of all verification attempts for the same impostor n. The overall FAR for N impostors is defined as follows:

FAR = \frac{1}{N} \sum_{n=1}^{N} FAR(n)          (6)

2) The False Reject Rate (FRR): The FRR of a genuine user n is defined as the number of rejected verification attempts for genuine user n divided by the number of all verification attempts for the same genuine user n. The overall FRR for N genuine users is defined as follows:

FRR = \frac{1}{N} \sum_{n=1}^{N} FRR(n)          (7)

FAR and FRR trade off against one another. The system threshold value is chosen based on the equal error rate (EER) criterion, where FAR = FRR. In a biometric system, we try to keep both rates as low as possible. Another performance measure derived from FAR and FRR is the Genuine Acceptance Rate (GAR); it represents the identification rate of the system. To visually depict the performance of a biometric system, Receiver Operating Characteristic (ROC) curves are usually used. ROC curves display how the FAR changes with respect to the GAR and vice versa, or the FRR against the FAR [15]. Biometric systems generate matching scores that represent how similar (or dissimilar) the input is to the stored template.
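As a hedged illustration of how these quantities can be estimated from raw scores (our own sketch; here FAR and FRR are pooled over all attempts rather than averaged per user as in (6) and (7)), consider the following.

```python
import numpy as np

def far_frr_eer(genuine, impostor, n_thresholds=1000):
    """Sweep the decision threshold over the score range and compute FAR and
    FRR from genuine and impostor matching scores (higher = more similar).
    Returns the FAR/FRR curves plus the threshold and value at the EER point.
    Note: rates are pooled over all attempts, not averaged per user."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    lo = min(genuine.min(), impostor.min())
    hi = max(genuine.max(), impostor.max())
    thresholds = np.linspace(lo, hi, n_thresholds)
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuine users rejected
    i = int(np.argmin(np.abs(far - frr)))                         # FAR = FRR crossing
    return far, frr, float(thresholds[i]), float((far[i] + frr[i]) / 2)

# toy usage with synthetic score distributions; GAR = 1 - FRR
rng = np.random.default_rng(0)
far, frr, t0, eer = far_frr_eer(rng.normal(0.8, 0.05, 3600), rng.normal(0.5, 0.10, 7000))
print(f"EER = {100 * eer:.3f} % at threshold T0 = {t0:.3f}")
```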


TABLE I
EQUAL ERROR RATE AND ITS EQUIVALENT BITRATES

Bitrate (bpp)   EER RGB (%)   EER NIR (%)
0.05            2.1390        0.0833
0.1             1.4029        0.0395
0.25            1.0708        0.0295
0.5             1.9857        0.0833
0.75            1.3092        0.1008
1.0             2.2428        0.1389
2.0             2.6389        0.1667

Fig. 4. Equal Error Rate against bitrate.

C. Unimodal System Identification Test Results

We begin our experiments by evaluating the system performance for each modality (RGB and NIR palmprints). We used 1200 training images (3 images per person) and 3600 test images (9 images per person) for each modality. We obtained 3600 genuine comparisons and 718200 impostor comparisons. Two identification modes exist: open-set identification, where the person is not guaranteed to exist in the database, and closed-set identification, where the person is assumed to exist in the database. In our work, the proposed method was tested in the first mode (open-set). First of all, since our work is based on compressed images, the bitrate is crucial; therefore, we must identify the best bitrate for our application. The data are compressed with the SPIHT algorithm. We carried out several tests with different bitrates on the RGB and the NIR palmprints. Table I and Fig. 4 report the Equal Error Rate (EER) against the bitrate. According to the preceding table and figure, it is interesting to notice that the best results were achieved at a low bitrate (0.25 bpp). Thereafter, we work with this bitrate. In order to illustrate the efficiency of using the compressed palmprint images, the results obtained at this bitrate are compared to the results using the original uncompressed images. Fig. 5a depicts the ROC curves which represent the performance of the open-set unimodal palmprint identification system for both experiments. Our identification system achieves a best EER of 0.0278% and 1.0679% for thresholds T0 = 0.821 and T0 = 0.6972 in the case of NIR and RGB compressed palmprints respectively. For NIR and RGB uncompressed palmprints, we obtained EERs of 0.2751% and 2.1944% for thresholds T0 = 0.7239 and T0 = 0.7012.

TABLE II
THE PERFORMANCE OF THE SYSTEM IDENTIFICATION UNDER DIFFERENT VALUES OF THRESHOLD

Palmprint      Band   T0    FAR (%)    FRR (%)   GAR (%)
Compressed     RGB    0.4    6.9728    0.3056    99.6944
                      0.7    1.0451    1.1389    98.8611
                      1.0    0.0053    3.5278    96.4722
               NIR    0.3    7.0276    0         100
                      0.6    0.5404    0         100
                      0.9    0.0035    0.1111    99.8889
Uncompressed   RGB    0.4   13.5394    0.5000    99.5000
                      0.7    2.2185    2.1944    97.8056
                      1.0    0.0239    5.7222    94.2778
               NIR    0.3    8.4767    0         100
                      0.6    0.9187    0.1667    99.8333
                      0.9    0.0032    0.6111    99.3889

Fig. 5b shows the ROC curves of GAR against FAR for the NIR and RGB palmprints for various thresholds and both cases (with/without compression). The results give a maximum GAR of 99.9722% and 98.9325% for the NIR and RGB compressed palmprints, against a maximum GAR of 99.0556% and 94.2778% for the uncompressed palmprints. The performance of the identification system under different values of T0, which control the FAR, the FRR and the GAR (in percent), is shown in Table II. Looking at the results of Fig. 5, we can immediately conclude that compression at a low bitrate (0.25 bpp) does not harm the identification results; on the contrary, using compressed images exhibits some statistically significant improvements. SPIHT images seem to be visually less distorted at low bitrates and thus more appropriate for such uses. According to Table II, we note that when the similarity threshold is increased, the FAR decreases and the FRR increases. This agrees well with what is found in the literature.

D. Multimodal System Identification Test Results

The NIR palmprint gave good results; its inclusion in the multimodal system, by fusion with the RGB palmprint, can produce a robust identification system with high accuracy and improve the recognition rate. The multimodal system promises to perform better than any of its individual components (RGB or NIR). We performed fusion at the matching-score level, which gives good results. In our system, RGB and NIR images are fused with different fusion rules, which are tested to find the combination that optimizes the system accuracy. To find the best of all the fusion rules, i.e. the one with the lowest EER, Table III reports the EER and the corresponding thresholds for compressed and uncompressed palmprints. For example, if the Sum rule is used, we have EER = 0.0032% and 0.0556% for compressed and uncompressed palmprints respectively. In the case of the Product rule, the EER was 0.0717% and 0.0833%. Using the Min and Max rules, the EER was 0.0833% and 0.0284% for compressed palmprints and 0.1143% and 0.1847% for uncompressed palmprints. The Weighted rule improves the result and gives the best result (EER = 0.0002% and 0.0556% for the two cases) for a database size equal to 400. Therefore, the system achieves higher accuracy with the fusion of the two matching scores than with a single matching score.


Fig. 5. Unimodal identification ROC curves (a) FRR against FAR (b) GAR against FAR.


Fig. 6. Multimodal identification ROC curves for weighted rule (a) FRR against FAR (b) GAR against FAR.

TABLE III
THE EER AGAINST THRESHOLDS FOR DIFFERENT FUSION RULES

               Compressed palmprint      Uncompressed palmprint
Fusion rule    EER (%)     T0            EER (%)     T0
Sum            0.0032      0.7581        0.0556      0.7117
Product        0.0717      0.5346        0.0833      0.6313
Min            0.0833      0.7037        0.1143      0.7371
Max            0.0284      0.8990        0.1847      0.8616
Weighted       0.0002      0.6689        0.0556      0.7114

Fig. 6 gives the ROC curves for various thresholds for the Weighted rule. The obtained results show that using the compressed palmprints offers better results in terms of EER and GAR, and that the Weighted rule offers the best results in terms of the genuine acceptance rate. For example, if the Sum rule is used, we have EER = 0.0032%; with the Product rule, the EER was 0.0717%; with the Min and Max rules, the EER was 0.0833% and 0.0284% respectively. The Weighted rule improves the result (0.0002%) for a database size equal to 400. Therefore, the system can achieve higher accuracy with the fusion of the two matching scores than with a single matching score.

VI. CONCLUSION AND FURTHER WORK

The aim of this paper was to contribute to multimodal identification through the use of data fusion rules. Two unimodal sub-systems, derived from the RGB and NIR spectra, were used in this study. Fusion of the two proposed unimodal sub-systems is performed at the matching-score level to generate a fused matching score which is used to recognize a palmprint image. The feature extraction process uses the SPIHT algorithm for image compression; it generates a template at low bitrate that preserves the principal lines, wrinkles and ridge texture. The experimental results, obtained on a database of 400 persons, show a very high open-set identification accuracy. In addition, our tests show that the multimodal system provides better open-set identification accuracy than the best unimodal system. For further improvement, our future work will consider other biometric modalities (face and iris) as well as other fusion levels such as the feature and decision levels. We will also focus on the performance evaluation of both modes (verification and identification) using a larger database.

REFERENCES

[1] David D. Zhang, "Automated Biometrics: Technologies and Systems", Kluwer Academic Publishers, New York, 2000.
[2] Anil K. Jain, Arun A. Ross, Karthik Nandakumar, "Introduction to Biometrics", Springer Science+Business Media, LLC, 2011.
[3] D. Maltoni, D. Maio, A. K. Jain, S. Prabhakar, "Handbook of Fingerprint Recognition", Springer, New York, June 2003.
[4] David D. Zhang, "Palmprint Authentication", Kluwer Academic Publishers, Boston, USA, 2004.
[5] A. Meraoumia, S. Chitroub, A. Bouridane, "Do multispectral palmprint images be reliable for person identification?", Multimed Tools Appl, vol. 74, pp. 955-978, 2015.
[6] R. Zunkel, "Hand geometry based verifications", in A. Jain et al. (eds), Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, 1999.
[7] C. Wilson, "Vein Pattern Recognition - A Privacy-Enhancing Biometric", Taylor and Francis Group, LLC, 2010.
[8] Rui Zhao, Kunlun Li, Ming Liu, Xue Sun, "A Novel Approach of Personal Identification Based on Single Knuckleprint Image", Asia-Pacific Conference on Information Processing (APCIP), 2009.
[9] Ajay Kumar, David Zhang, "Improving Biometric Authentication Performance From the User Quality", IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 3, March 2010.
[10] A. Said, W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, 1996.
[11] R. O. Duda, P. E. Hart, D. G. Stork, "Pattern Classification", 2nd edition, Wiley, New York, 2001.
[12] David Salomon, "Data Compression: The Complete Reference", Fourth Edition, Springer-Verlag London Limited, 2007.
[13] X. Jing, Y. Yao, D. Zhang, J. Yang, M. Li, "Face and palmprint pixel level fusion and kernel DCV-RBF classifier for small sample biometric recognition", Pattern Recognition, vol. 40, no. 11, pp. 3209-3224, 2007.
[14] Min Han, Jianhui Xi, "Efficient clustering of radial basis perceptron neural network for pattern recognition", Pattern Recognition, vol. 37, pp. 2059-2067, 2004.
[15] David Zhang, Fengxi Song, Yong Xu, Zhizhen Liang, "Advanced Pattern Recognition Technologies with Applications to Biometrics", Medical Information Science Reference, New York, 2009.
[16] The Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database. Available at: http://www.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm
