
Finger Knuckle Recognition

Technical University of Denmark
02238 Biometric Systems
Prof. Dr. Christoph Busch

Teaching assistant: PhD student Daniel Hartung

Ioannis Chionidis
Student Number: 101542
email: [email protected]

ABSTRACT

This project implements and evaluates a 2D Finger-Knuckle-Print (FKP) recognition system, developed in C++ using the OpenCV API. Every acquired sample is tested for a match against a database consisting of several images, following a specific procedure. The X-axis is determined through Canny edge detection: the image is cropped according to the finger boundaries and its edge representation is extracted. We then apply convex direction coding and determine the Y-axis by locating the minimum convex magnitude. The original image is cropped to a fixed size (220×110) placed symmetrically about the X-axis and Y-axis. A Gabor filter is applied to the image to generate our template, which is compared against the Gabor-filtered image of a finger using the OpenCV template matching function. Finally, we extract the FAR and FRR of our system from the samples of 10 individuals, considering for each of them the images of the 4 fingers provided by the database.

1. INTRODUCTION

Biometrics is an ongoing scientific research topic with many applications regarding both safety and convenience. Evidence dates the use of biometrics back to the 8th century, when fingerprints were used in contracts as a verification measure. During the last three centuries they were mostly used in forensics and, later on, implemented in high-tech applications aiming at both the security and the convenience of individuals [Bus11]. Biometrics is considered one of the most promising technologies for access control systems, such as border control (e-Pass).

FKP recognition is a new type of personal authentication, alongside others such as the face, fingerprints, hand veins and the iris of the eye. In FKP we examine the morphological structure of the outer surface of the phalangeal joint. We implement algorithms that extract this feature based on the approach introduced by Lin Zhang et al. [LZZ09]. The algorithms were implemented with OpenCV, a very powerful and well-documented API, in C++, considering that speed is a crucial factor for heavy processing tasks such as image processing.

2. ROI EXTRACTION

The Region of Interest (ROI) extraction is based on the process introduced by Zhang et al. [LZZ09]. In order to extract the ROI from an image we need to construct an appropriate local coordinate system (x0, y0) and crop the FKP image to a fixed size with respect to it. In the rest of this section we describe the exact procedure followed for the extraction of the coordinate system and, finally, of the ROI image. A sample outcome of the procedure is presented in Figure 1, showing the original image of the finger and the resulting ROI1. The implementation of the required image processing techniques was guided by Bradski's and Kaehler's excellent reference for OpenCV [BK08].

Determining the X-Axis and Cropping to IS.

In order to specify the X-axis we begin by applying a Canny filter to the modified initial image and then extract the outlines of the finger, which lets us derive the cropped image IS. The database used contains pictures of fingers placed in a specific way, which introduces lines at the top of the picture due to parts of the image acquisition equipment in that area. Through experiments with this dataset we decided that it is safe to ignore the top 50 pixels of the picture. After determining the upper and lower limits according to the highest and lowest line found by the Canny edge detector, we proceed to crop the image.

1The corresponding code implementing this procedure is located in the files ROI extraction.cpp and ROI extraction.h.


Figure 1: (a) Original image; (b) cropped ROI image

Figure 2: Convex direction coding

The left and right boundaries of the crop are fixed values, defined after experimenting with the algorithm on the specific dataset.
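As a rough illustration of this step, consider the following sketch. It uses the modern OpenCV C++ interface rather than the C API used in the project, and the Canny thresholds, the function name and the leftBound/rightBound parameters are illustrative assumptions, not the project's actual values:

#include <opencv2/opencv.hpp>

// Sketch: locate the finger boundaries in the Canny edge map and cut out I_S.
cv::Mat cropToIs(const cv::Mat& gray, int leftBound, int rightBound)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                  // edge map of the input image

    // Find the highest and lowest rows containing edge pixels,
    // skipping the top 50 rows occupied by the acquisition equipment.
    int top = -1, bottom = -1;
    for (int i = 50; i < edges.rows; ++i) {
        if (cv::countNonZero(edges.row(i)) > 0) {
            if (top < 0) top = i;
            bottom = i;
        }
    }

    // I_S: the region between the finger boundaries, cut at fixed left/right bounds.
    cv::Rect roi(leftBound, top, rightBound - leftBound, bottom - top + 1);
    return gray(roi).clone();
}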

Extract IE from IS.

By applying a Canny edge detector to IS we derive the edge representation of it, stored in IE.

Convex direction coding.

We apply curve direction modeling to the FKP curves, marking leftward-convex pixels with 1, rightward-convex pixels with -1 and the rest with 0. Figure 2 presents an example of convex direction coding, while Listing 1 displays the C++ implementation, where ymid stands for image.height/2.

Listing 1: Convex Coding

// Icd holds the convex direction code of every pixel of the edge image Ie:
// 1 for leftward convex, -1 for rightward convex, 0 otherwise, depending on
// which lower neighbour continues the edge and on whether the pixel lies
// above or below the horizontal midline ymid = Ie->height / 2.
// Boundary rows/columns are skipped so the neighbour accesses stay in range.
for ( int i = 0; i < Ie->height - 1; i++ ) {
  for ( int j = 1; j < Ie->width - 1; j++ ) {
    if ( (int)data_Ie[i*Ie->widthStep + j] == 0 ) {
      Icd[i][j] = 0;                       // not an edge pixel
    } else if ( ((int)data_Ie[(i+1)*Ie->widthStep + (j-1)] > 0) &&
                ((int)data_Ie[(i+1)*Ie->widthStep + (j+1)] > 0) ) {
      Icd[i][j] = 0;                       // edge continues to both lower neighbours
    } else if ( (((int)data_Ie[(i+1)*Ie->widthStep + (j-1)] > 0) && i <= ymid) ||
                (((int)data_Ie[(i+1)*Ie->widthStep + (j+1)] > 0) && i >  ymid) ) {
      Icd[i][j] = 1;                       // leftward convex
    } else if ( (((int)data_Ie[(i+1)*Ie->widthStep + (j+1)] > 0) && i <= ymid) ||
                (((int)data_Ie[(i+1)*Ie->widthStep + (j-1)] > 0) && i >  ymid) ) {
      Icd[i][j] = -1;                      // rightward convex
    } else {
      Icd[i][j] = 0;
    }
  }
}

Determine Y-Axis.

Lin Zhang et al. [LZZ09] observe that, close to the phalangeal joint, most of the curves to its left are leftward convex and most of the curves to its right are rightward convex in the convex direction coding. Defining the convex magnitude as in equation 1 and considering this observation, the Y-axis can be obtained by locating the pixel with the minimum convex magnitude value, as shown in equation 2.

conMag(x) = \left| \sum_{(i,j) \in W} I_{CD}(i,j) \right| \qquad (1)

W can be considered as a window of size d×h, where h represents the height of ICD and d is the width of the window we choose. From this metric, computed for every pixel, we derive the minimum magnitude value of the image and take its x = x0 coordinate as the Y-axis.

x_0 = \arg\min_x \; conMag(x) \qquad (2)
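A minimal sketch of this search, assuming the codes are stored in a single-channel integer cv::Mat named Icd and d is the chosen window width (the function name and calling convention are ours):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <limits>

// Sketch: return x0, the column whose d x h window has minimum convex magnitude.
int findYAxis(const cv::Mat& Icd, int d)
{
    int x0 = d / 2;
    double bestMag = std::numeric_limits<double>::max();
    for (int x = d / 2; x + d / 2 < Icd.cols; ++x) {
        cv::Rect win(x - d / 2, 0, d, Icd.rows);        // window W centred at column x
        double mag = std::abs(cv::sum(Icd(win))[0]);    // conMag(x) = |sum of codes in W|
        if (mag < bestMag) { bestMag = mag; x0 = x; }
    }
    return x0;
}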

Crop ROI.

Our final step is the cropping of the ROI, IROI, from the initial image, with (x0, y0) as its center. The image is cropped to the fixed size 220×110.
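Assuming Is is the cropped finger image and (x0, y0) the coordinate system determined above, the final cut can be sketched in a few lines; the clamping to the image border is our own precaution rather than part of the described procedure:

// I_ROI: fixed 220x110 window centred on the local coordinate system (x0, y0).
cv::Rect roiRect(x0 - 110, y0 - 55, 220, 110);
roiRect &= cv::Rect(0, 0, Is.cols, Is.rows);   // stay inside the image
cv::Mat Iroi = Is(roiRect).clone();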

3. FEATURE EXTRACTION

For the feature extraction we use the established method of convolving our IROI image with a Gabor filter2. According to Lin Zhang et al. [LZZ09], studies have shown that the Gabor filter in combination with a competitive coding scheme allows the extraction of orientation information from the image, and that, of the three characteristics provided by Gabor filters (magnitude, phase and orientation), orientation is the most efficient. We directly calculate the real part GR of the Gabor filter in order to generate the filter's kernel.

G_R(x, y, \vartheta) = \exp\left( -0.5 \left( \frac{x'^2}{\sigma_X^2} + \frac{y'^2}{\sigma_Y^2} \right) \right) \cos\left( \frac{2\pi x'}{\lambda} + \psi \right) \qquad (3)

2The corresponding code implementing this procedure is located in the files Feature Extraction.cpp and Feature Extraction.h.


Figure 3: Gabor Filter on ROI image

where x' and y' are defined as:

x' = x \cos\left( \frac{\vartheta \pi}{\vartheta_{max}} \right) + y \sin\left( \frac{\vartheta \pi}{\vartheta_{max}} \right)
y' = -x \sin\left( \frac{\vartheta \pi}{\vartheta_{max}} \right) + y \cos\left( \frac{\vartheta \pi}{\vartheta_{max}} \right) \qquad (4)

For our implementation we choose ϑmax = 6, which represents the number of different orientations. The competitive coding is defined as:

compCode(x, y) = \arg\max_j \left\{ \left| I_{ROI}(x, y) * G_R(x, y, \vartheta_j) \right| \right\} \qquad (5)

where j = 0, ..., 5. The result of the Gabor filter for Figure 1 can be viewed in Figure 3.
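The sketch below illustrates equations (3)-(5) using OpenCV's built-in cv::getGaborKernel. Note that this helper takes a single sigma and an aspect ratio gamma instead of the separate sigmaX and sigmaY above, the parameter values are placeholders rather than the project's tuned ones, and in our system it is the Gabor-filtered image itself (not the competitive code) that is later fed to template matching:

#include <opencv2/opencv.hpp>

// Sketch: filter I_ROI with 6 Gabor orientations and keep, per pixel, the index
// of the orientation with the strongest response magnitude (equation 5).
cv::Mat competitiveCode(const cv::Mat& Iroi)
{
    const int thetaMax = 6;                            // number of orientations
    const double sigma = 5.0, lambda = 8.0, gamma = 0.5, psi = 0.0;  // placeholder values

    cv::Mat code(Iroi.size(), CV_8U, cv::Scalar(0));
    cv::Mat bestResp(Iroi.size(), CV_32F, cv::Scalar(-1));

    for (int j = 0; j < thetaMax; ++j) {
        double theta = j * CV_PI / thetaMax;           // theta_j = j*pi/theta_max
        cv::Mat kernel = cv::getGaborKernel(cv::Size(35, 35), sigma, theta,
                                            lambda, gamma, psi, CV_32F);
        cv::Mat resp;
        cv::filter2D(Iroi, resp, CV_32F, kernel);      // I_ROI * G_R(., ., theta_j)
        resp = cv::abs(resp);

        cv::Mat mask = resp > bestResp;                // where orientation j wins
        resp.copyTo(bestResp, mask);
        code.setTo(j, mask);
    }
    return code;                                       // compCode(x, y) in [0, 5]
}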

4. PATTERN MATCHING

To extract a level of similarity between two FKP images, our algorithm follows the procedure described in the next paragraphs3.

In both verification and identification systems we need at least one 1:1 image comparison. The source image, the one that is compared in order to be authenticated, is used in the form of a template; by template we mean that it has been modified according to the previously described steps, while the images that the template is compared against are of the original size and are only subjected to Gabor filtering. These two images are then fed to the template matching function of the OpenCV API [BK08, p. 215], which simply slides the template image across the actual image and provides a level of similarity ranging from -1 (perfect mismatch) to 1 (perfect match).

So if we were to build the pipeline of an FKP authentication system, we would store the outcome of the Gabor filtering of the fingers in our database. OpenCV provides 6 types of template matching, divided into two subcategories of 3 matching functions each; the second subcategory contains the same template matching functions as the first, with the only difference that they are normalized. For our case, after testing all 6 of them, we decided to use the "normalized correlation coefficient matching method" (CV_TM_CCOEFF_NORMED). It is considered the most promising in terms of comparison correctness, but it is also the most computationally expensive of the 6. The mathematics for computing the result for every pixel are presented in the following equations:

R_{ccoeff\_normed}(x, y) = \frac{R_{coeff}(x, y)}{Z(x, y)} \qquad (6)

3The corresponding code implementing this procedure is located in the files comparison.cpp and comparison.h.

where:

R_{coeff}(x, y) = \sum_{x', y'} \left[ T'(x', y') \cdot I'(x + x', y + y') \right]^2 \qquad (7)

T'(x', y') = T(x', y') - \frac{1}{w \cdot h} \sum_{x'', y''} T(x'', y'') \qquad (8)
I'(x + x', y + y') = I(x + x', y + y') - \frac{1}{w \cdot h} \sum_{x'', y''} I(x + x'', y + y'')

Z(x, y) = \sqrt{ \sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2 } \qquad (9)

5. EXPERIMENTS

For the evaluation of the developed system, we focused on determining the FAR and FRR with several tests, described in detail in the next subsections.

5.1 Data Source

In order to conduct the tests, we needed a sample of FKP images. This resource was provided by Prof. Lei Zhang [Zha11] and consists of 7920 fixed-size (384×288) images taken from 165 volunteers. In every session, 6 photos were taken of each of the 4 fingers of every volunteer, and sessions were on average 25 days apart. More specifically, every folder in the provided database is named "nnn fingertype", where "nnn" stands for the individual and "fingertype" for one of the 4 sampled fingers (left index, right index, left middle, right middle). Furthermore, the first six images in a folder constitute the first photo session and the other six the second photo session. Within the scope of this course only a part of this vast database was used; more specifically, the data samples of 10 people were fed into the algorithm.

5.2 Evaluation

The tests were divided into 2 categories, to derive FRR and FAR respectively. For both experiments we classified the resulting similarity degrees according to different acceptance thresholds, ranging from 0.5 to 0.95 with a step of 0.05.

FRR was computed from the photos of the 10 individuals themselves: for every individual's finger, we compared 2 out of the 12 images against all 12, giving 960 mated comparisons in total.

To determine the FAR for the different acceptance thresholds we again used the samples of the 10 individuals. For every individual's finger, a comparison with the same finger type of the other 9 individuals took place; for every finger we ran the algorithm twice, using 2 different images of it, resulting in 8640 non-mated comparisons. The 2 images used for creating the template for every individual were:

nnn fingertype\01.jpg and nnn fingertype\02.jpg.
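The following sketch shows how FAR and FRR could be tallied from the collected scores; matedScores and nonMatedScores are assumed to hold the similarity degrees of the 960 mated and 8640 non-mated comparisons, and the thresholds follow the range above:

#include <cstdio>
#include <vector>

// Sketch: count false rejects among mated scores and false accepts among
// non-mated scores for every acceptance threshold from 0.5 to 0.95.
void printFarFrr(const std::vector<double>& matedScores,
                 const std::vector<double>& nonMatedScores)
{
    for (double t = 0.5; t <= 0.951; t += 0.05) {
        int falseRejects = 0, falseAccepts = 0;
        for (double s : matedScores)    if (s <  t) ++falseRejects;
        for (double s : nonMatedScores) if (s >= t) ++falseAccepts;
        std::printf("threshold %.2f  FRR %.3f  FAR %.3f\n", t,
                    falseRejects / (double)matedScores.size(),
                    falseAccepts / (double)nonMatedScores.size());
    }
}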


Figure 4: FAR-FRR diagram

In Figure 4 we can observe the relation between FAR, FRR and the acceptance threshold. As expected, FAR and FRR are inversely related. If we were to use this software for authentication applications, we would have to choose the acceptance threshold according to the application's specifications. Considering a safety-critical application such as nuclear power plant access control, we would have to choose a similarity acceptance degree of 0.8 or 0.85; the disadvantage in that case is the increase of the FRR.

6. CONCLUSIONS

In this project we implemented 2D verification of FKP images. The results, in terms of FAR and FRR, were limited by the time constraints of the project. The experiments used a portion of a vast database of FKP images providing samples of 165 volunteers; in our tests we considered 10 of them. In general, FKP authentication is a cost-efficient [LZZ09] biometric characteristic with a user-friendly sample acquisition procedure. Finally, the system is able to provide the similarity degree of two FKP images in approximately 3 seconds.

7. ACKNOWLEDGMENTS

This project was conducted during the 3-week course period at DTU, under the course "Biometric Systems", taught by Prof. Christoph Busch and assisted by PhD student Daniel Hartung.

8. REFERENCES

[BK08] Gary Bradski and Adrian Kaehler. Learning OpenCV. O'Reilly Media, 2008.

[Bus11] Christoph Busch. History of biometrics. 2011.

[LZZ09] Lin Zhang, Lei Zhang and David Zhang. Finger-knuckle-print: A new biometric identification. 2009.

[Zha11] Prof. Lei Zhang. http://www4.comp.polyu.edu.hk/~cslzhang/, June 24, 2011.