Analysis & Performance Evolution of IRIS Recognition SVD, KLV and EBP Algorithms using Neural Network Classifier

    IPASJ International Journal of Electronics & Communication (IIJEC) Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm 

     A Publisher for Research Motivation........  Email: [email protected] 

    Volume 3, Issue 5, May 2015 ISSN 2321-5984 


ABSTRACT
Iris recognition, a relatively new biometric technology, has great advantages, such as variability, stability and security, making it the most promising for high-security environments. A number of iris recognition algorithms are available; the techniques proposed here, Singular Value Decomposition (SVD) and Characterizing Key Local Variations (KLV), are used to extract the features of the iris. Non-useful information such as the sclera, pupil, eyelashes and eyelids is removed and the Region of Interest (ROI) is extracted. An iris template is then generated to reduce the information, thereby concentrating only on the ROI. The combined system is named Iris Pattern Recognition using a Neural Network Approach. All iris recognition algorithms are evaluated on the basis of the classification rate obtained after changing different parameters, such as the number of classes and the training and testing patterns. SVD: to reduce the complexity of the layered neural network, the dimension of the input vectors is optimized using Singular Value Decomposition. KLV: an efficient algorithm for iris recognition by characterizing key local variations. Local details of the iris generally spread along the radial direction in the original image, corresponding to the vertical direction in the normalized image. Therefore the information density in the angular direction, corresponding to the horizontal direction in the normalized image, is much higher than in other directions; that is, it may suffice to capture local sharp variations only along the horizontal direction in the normalized image to characterize an iris. A local extremum is either a local minimum or a local maximum. The optimum classification values are obtained with SVD dimension 20 and a maximum of 9 classes using state-of-the-art computational resources. This combined system is named the SVD-EBP system for iris pattern recognition.

Keywords: singular value decomposition, key local variation, UBIRIS database

1. INTRODUCTION

Singular Value Decomposition (SVD) is a powerful matrix technique with many useful applications, such as image compression, pattern recognition and the least-squares method. The main concept behind SVD is to expose the hidden geometry of the matrix, and it is widely used as a dimension-reduction tool. If we are operating on an M×N matrix (M ≥ N), then applying SVD factorizes the matrix into three other matrices of the form given by equation (1):

A = U D V^T    (1)

where the superscript T denotes the transpose of the V matrix, and:
A = the original matrix (or original image);
U = an M×M orthogonal matrix;
V = an N×N orthogonal matrix;
D = an M×N diagonal matrix in which the diagonal elements are non-negative and arranged in non-increasing order, all other values being zero: D_ij = 0 if i ≠ j, and D_ii ≥ D_{i+1,i+1}. The two important aspects to be noted here are that:

• D is zero everywhere except on the main diagonal. This reduces the dimension of the input pattern from an M×N matrix to a vector of only N elements.

• Only the first k elements contain substantial information, and the vector tail without significant information content can be cropped out [9].
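The dimension-reduction idea above can be sketched with NumPy (an illustration, not the paper's MATLAB code; the matrix values are arbitrary stand-ins):

```python
import numpy as np

# Arbitrary 6x4 "pattern" matrix (M >= N), standing in for an image block.
A = np.arange(24, dtype=float).reshape(6, 4)

# Full decomposition: A = U @ diag(s) @ Vt, with s sorted in descending order.
U, s, Vt = np.linalg.svd(A)

# Keep only the first k singular values: the M*N matrix is summarized
# by a short vector, as described above.
k = 2
pattern = s[:k]

# A rank-k reconstruction shows how little is lost when the tail is cropped
# (this particular A happens to have rank 2, so nothing is lost).
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(pattern.shape)
print(np.allclose(A, A_k))
```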

Analysis & Performance Evolution of IRIS Recognition SVD, KLV and EBP Algorithms using Neural Network Classifier

1 Prachi P. Jeurkar, 2 Vijaykumar S. Kolkure

1 Prachi P. Jeurkar, student of M.E. in Electronics, B.I.G.C.E., Solapur.
2 Vijaykumar S. Kolkure, Asst. Professor, B.I.G.C.E., Solapur.


    2. UBIRIS DATABASE 

The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods intended to operate in these types of environments. The UBIRIS.v2 database is a multi-session iris image database which singularly contains data captured in the visible wavelength, at a distance (between four and eight meters) and on the move. This database is freely available to researchers concerned with visible-wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition [3][4].

Figure 1 Comparison between a good-quality image and several types of non-ideal images from the UBIRIS.v2 database. These images are the result of less constrained imaging conditions: varying lighting, at-a-distance and on-the-move capture. (a) Good-quality iris image. (b) Off-angle iris image. (c) Poorly focused iris image. (d) Rotated iris image. (e) Motion-blurred iris image. (f) Iris obstructions due to eyelids. (g) Iris obstructions due to eyelashes. (h) Iris obstructions due to glasses. (i) Iris obstructions due to contact lenses. (j) Iris obstructions due to hair. (k) Iris imaging in poor lighting conditions. (l) Iris with specular reflections. (m) Iris with lighting reflections. (n) Partially captured iris. (o) Out-of-iris image [11].

3. SVD IMPLEMENTATION IN MATLAB

The command [U,S,V] = svd(X) produces a diagonal matrix S of the same dimension as X, with non-negative diagonal elements in decreasing order, and unitary matrices U and V so that X = U*S*V'. S = svd(X) returns a vector containing the singular values, which are simply the diagonal elements in descending order. This concept is very important in many applications: it can be exploited in image compression, because only the first few diagonal elements contain substantial information, and the tail end of the diagonal elements can be cropped out without significant loss of information. [U,S,V] = svd(X,0) produces the "economy size" decomposition: if X is M-by-N with M > N, then only the first N columns of U are computed and S is N-by-N; for M ≤ N, svd(X,0) is equivalent to svd(X); for M < N, only the first M columns of V are computed and S is M-by-M. The concept of Singular Value Decomposition can be well understood by looking at the example below, which shows the decomposition of a matrix using the SVD technique. Here A is a 5×5 matrix:

  • 8/9/2019 Analysis & Performance Evolution of IRIS Recognition SVD, KLV and EBP Algorithms using Neural Network Classifier

    3/14

    IPASJ International Journal of Electronics & Communication (IIJEC) Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm 

     A Publisher for Research Motivation........  Email: [email protected] 

    Volume 3, Issue 5, May 2015 ISSN 2321-5984 

    Volume 3, Issue 5, May 2015  Page 3 

A =
 5  2  0  1  4
 7  9  3  2  6
 8  7  4  2  7
 2  1  6  3  0
 4  6  7  2  1

Now, executing the command Xpattern = svd(A) returns a vector containing the singular values, which are simply the diagonal elements of S in descending order. After execution, this command returns the value of Xpattern shown below:

Xpattern = (σ1, σ2, σ3, σ4, σ5)^T, with σ1 ≥ σ2 ≥ σ3 ≥ σ4 ≥ σ5 ≥ 0

The Xpattern obtained from the above transformation simply contains the singular values that occupy the diagonal elements of the matrix S; all the other elements of that matrix are zero. As we can see, the tail end of the Xpattern vector contains less information, so in some applications it can be cropped out without substantial loss of information. SVD thus reduces the dimensionality of the problem. This concept can be well understood from the plot of Xpattern against the amount of information in figure 2.
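The worked example can be replayed numerically. A NumPy sketch follows (the matrix entries are taken from the reconstruction of the garbled text above, so the printed values should be checked against the original paper rather than taken as authoritative):

```python
import numpy as np

# The 5x5 example matrix A, as reconstructed from the text above.
A = np.array([[5, 2, 0, 1, 4],
              [7, 9, 3, 2, 6],
              [8, 7, 4, 2, 7],
              [2, 1, 6, 3, 0],
              [4, 6, 7, 2, 1]], dtype=float)

# Singular values in descending order, as with MATLAB's Xpattern = svd(A).
Xpattern = np.linalg.svd(A, compute_uv=False)

# Two sanity checks: the values decrease, and the sum of their squares
# equals the squared Frobenius norm of A.
assert np.all(np.diff(Xpattern) <= 0)
assert np.isclose(np.sum(Xpattern**2), np.sum(A**2))
print(Xpattern)
```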

Figure 2 Implementation of SVD for Matrix A

Singular Value Decomposition has been implemented in our algorithm to reduce the size of the iris template from 40×40 dimensions to 40 dimensions; the 40 values are the diagonal information in descending order. Figure 3 shows the image of the iris template obtained after the segmentation step used to extract the region of interest.

Figure 3 Iris Template

The SVD of the image is then applied to reduce the dimension of the iris template from 1600 to 40, thereby discarding a huge amount of information and reducing the complexity of the neural network used later. The SVD plot for the iris template is shown in figure 4; the amount of information reduces as we move towards the tail end.

Figure 4 Plotting of the SVD pattern for the Iris Template

  • 8/9/2019 Analysis & Performance Evolution of IRIS Recognition SVD, KLV and EBP Algorithms using Neural Network Classifier

    4/14

    IPASJ International Journal of Electronics & Communication (IIJEC) Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm 

     A Publisher for Research Motivation........  Email: [email protected] 

    Volume 3, Issue 5, May 2015 ISSN 2321-5984 

    Volume 3, Issue 5, May 2015  Page 4 

4. MATHEMATICAL CALCULATION FOR SVD

Singular value decomposition takes a rectangular matrix of gene-expression data (defined as A, an n×p matrix) in which the n rows represent the genes and the p columns represent the experimental conditions. The SVD theorem states:

A_{n×p} = U_{n×n} S_{n×p} V^T_{p×p}

where

U^T U = I_{n×n}
V^T V = I_{p×p}  (i.e. U and V are orthogonal)

The columns of U are the left singular vectors (gene coefficient vectors); S (the same dimensions as A) is diagonal and holds the singular values (mode amplitudes); and V^T has rows that are the right singular vectors (expression-level vectors). The SVD represents an expansion of the original data in a coordinate system where the covariance matrix is diagonal. Calculating the SVD consists of finding the eigenvalues and eigenvectors of AA^T and A^TA. The eigenvectors of A^TA make up the columns of V; the eigenvectors of AA^T make up the columns of U. The singular values in S are the square roots of the eigenvalues of AA^T or A^TA. They are the diagonal entries of S, arranged in descending order, and are always real numbers. If the matrix A is real, then U and V are also real [10][13].
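The relationship between the SVD and the eigen-decomposition of A^TA described above can be verified numerically. A NumPy sketch (the matrix is an arbitrary example, not data from the paper):

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 3.],
              [0., 2.]])  # an arbitrary n x p matrix

# Singular values from the SVD...
s = np.linalg.svd(A, compute_uv=False)

# ...equal the square roots of the eigenvalues of A^T A, in descending order.
eigvals = np.linalg.eigvalsh(A.T @ A)   # returned in ascending order
s_from_eig = np.sqrt(eigvals[::-1])

print(np.allclose(s, s_from_eig))
```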

5. KEY LOCAL VARIATION

The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris; local intensity variation points act as features. The whole feature-extraction method includes two steps:
1) Generation of 1-D intensity signals.
2) Using a particular class of wavelet, the position sequence of local sharp variation points is recorded as a feature.

    5.1 Generation of 1-D intensity signals:

We construct a set of 1-D intensity signals capable of retaining most sharp variations in the original iris image. Local details of the iris generally spread along the radial direction in the original image, corresponding to the vertical direction in the normalized image. Therefore the information density in the angular direction, corresponding to the horizontal direction in the normalized image, is much higher than in other directions; that is, it may suffice to capture local sharp variations only along the horizontal direction in the normalized image to characterize an iris. In addition, since our basic idea is to represent the randomly distributed blocks of the iris by characterizing its local sharp variations, it is unnecessary to capture local sharp variation points in every line of the iris image for recognition. Bearing these two aspects in mind, we decompose the 2-D normalized image into a set of 1-D intensity signals S according to equation (2):

S_i = (1/M) Σ_{j=1}^{M} I_{(i-1)·M + j},   i = 1, 2, ..., N    (2)

I = (I_1^T, ..., I_x^T, ..., I_K^T)^T

where I is the normalized image, I_x denotes the gray values of the xth row of image I, M is the total number of rows used to form one signal S_i, and N is the total number of 1-D signals. In essence, each intensity signal is a combination of successive horizontal scan lines, which reflect local variations of an object along the horizontal direction. A set of such signals contains the majority of the local sharp variations of the iris.
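Equation (2) amounts to averaging blocks of M successive rows of the normalized image. A minimal NumPy sketch (the image size and M are arbitrary here, not the paper's settings):

```python
import numpy as np

# Toy "normalized iris image": K rows, W columns of gray values.
K, W = 64, 512
I = np.random.rand(K, W)

M = 8               # rows averaged to form one signal
N = K // M          # total number of 1-D signals

# S[i] = (1/M) * sum of rows i*M .. i*M + M-1 (0-based), as in Eq. (2).
S = I.reshape(N, M, W).mean(axis=1)

print(S.shape)      # N one-dimensional intensity signals of length W
```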

Because of the radial nature of the iris, the structures within the iris can be taken row-wise in the horizontal direction of the 2-D image, and the local sharp variations along this direction in the normalized image can characterize the iris structures in one dimension. This is a very fast technique for feature extraction [1]. Figure 5 shows a 1-D intensity signal.

Figure 5 1-D Intensity signal

5.2 Feature vector

As a multiresolution-analysis approach, the dyadic wavelet transform has been widely used in various applications such as texture analysis, edge detection, image enhancement and data compression. It can decompose a signal into detail components appearing at different scales; the scale parameter of the dyadic wavelets varies only along the dyadic sequence 2^j. Here, our purpose is to precisely locate the positions of local sharp variations, which generally indicate the appearing or vanishing of an important image structure. The dyadic wavelets satisfy these requirements while incurring a lower computational cost, and are thus adopted in our experiments. The dyadic wavelet transform of a signal S(x) at scale 2^j is defined as follows:

WT_{2^j} S(x) = (1/2^j) ∫ S(t) · ψ((t − x)/2^j) dt    (3)
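A discrete counterpart of the transform in equation (3), using a sampled Haar wavelet at scale 2^j, can be sketched as follows (an illustration under stated assumptions; the paper does not give its exact discretization):

```python
import numpy as np

def dyadic_haar_wt(S, j):
    """Dyadic wavelet transform of 1-D signal S at scale 2**j, using a
    sampled Haar wavelet (+1 on the first half of its support, -1 on
    the second half), normalized by the scale."""
    scale = 2 ** j
    psi = np.concatenate([np.ones(scale), -np.ones(scale)]) / (2.0 * scale)
    # Discrete analogue of (1/2^j) * integral S(t) psi((t - x)/2^j) dt
    return np.convolve(S, psi, mode="same")

# A step edge produces the strongest response at the discontinuity.
S = np.concatenate([np.zeros(32), np.ones(32)])
w = dyadic_haar_wt(S, j=2)
print(int(np.argmax(np.abs(w))))  # 32: the position of the step
```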

where ψ(x/2^j) is the wavelet function at scale 2^j, here a Haar wavelet. The local extremum points of the wavelet transform correspond to sharp variation points of the original signal; therefore, using such a transform, we can easily locate the iris's sharp variation points by local extremum detection. As (3) shows, the wavelet transform of a signal comprises a family of signals providing detail components at different scales. There is an underlying relationship between the information at consecutive scales, and the signals at finer scales are easily contaminated by noise. As we know, a local extremum is either a local minimum or a local maximum. The irregular blocks of the iris are slightly darker than their surroundings, so it is reasonable to consider that a local minimum of the wavelet transform described above denotes the appearance of an irregular block and a local maximum denotes its vanishing. A pair of adjacent local extremum points (a minimum point and a maximum point) indicates that a small block may exist between them. However, there are a few adjacent local extremum points between which the amplitude difference is very small. Such local extremum points may correspond to relatively faint characteristics in the iris image (i.e., local slow variations in the 1-D intensity signals) and are less stable and reliable for recognition [6]. A threshold-based scheme is used to suppress them: if the amplitude difference between a pair of adjacent local extrema is less than a predetermined threshold, the two local extremum points are considered to come from faint iris characteristics and are not used as discriminating features. That is, we utilize only distinct iris characteristics (hence local sharp variations) for accurate recognition. For each intensity signal S_i, the position sequences at two scales are concatenated to form the corresponding features:

where
d_i = the position of a local minimum or maximum, and
p_i = 1 if d_i is a maximum, 0 if d_i is a minimum.

Figure 6 shows the local sharp points in a 1-D signal.
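The extremum-detection and thresholding step described above can be sketched as follows (a simplified illustration; the threshold value and the test signal are arbitrary):

```python
import numpy as np

def sharp_variation_points(w, threshold):
    """Find interior local extrema of a wavelet-transformed signal w and
    keep only adjacent extremum pairs whose amplitude difference exceeds
    the threshold; smaller pairs correspond to faint, unreliable iris
    characteristics and are suppressed."""
    ext = [i for i in range(1, len(w) - 1)
           if (w[i] > w[i-1] and w[i] > w[i+1]) or
              (w[i] < w[i-1] and w[i] < w[i+1])]
    kept = []
    for a, b in zip(ext, ext[1:]):
        if abs(w[a] - w[b]) >= threshold:
            kept.extend([a, b])
    return sorted(set(kept))

# One sharp swing (indices 3-4) and two faint ripples (indices 1 and 6):
w = np.array([0, 0.05, 0, 1.0, -1.0, 0, 0.04, 0], dtype=float)
print(sharp_variation_points(w, threshold=0.5))  # [2, 3, 4, 6]
```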


    Figure 6 Local Sharp Points in 1-D Signal

     

Features from different 1-D intensity signals are concatenated to constitute an ordered feature vector, where f_i denotes the features from the ith intensity signal and N is the total number of 1-D intensity signals. Figures 7 to 14 show the generation of the template and give the values for the gradient [16].

    Figure 7 Gradient amplitude image of eyelids

    Figure 8 Contrast enhancements of eyelids 


    Figure 9 Non maxima suppressed eyelids 

Figure 10 Threshold eyelids

Figure 11 Gradient amplitude image of eyelids

Figure 12 Contrast enhancement of eyelids


    Figure 13 Non maxima suppressed eyelids 

Figure 14 Threshold eyelids

6. NEURAL NETWORK

Research on Artificial Neural Networks (ANNs) started in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, developed a theory of neural computing and modeled a simple neural network called the McCulloch-Pitts model (1947). In 1949, Donald Hebb put forward a physiological principle, the Hebbian rule, which is still used in the training of neural networks. With the introduction of computers in the 1950s, it became possible to model the human thinking process. Currently, most neural network development is in software form and simply proves that the principle works; because of processing limitations, these neural networks take days or weeks to learn. A neural network is a processing device, either an algorithm or hardware, whose design is motivated by the design and functioning of the human brain and its components. With the emergence of neural networks, work has been carried out in the field of pattern recognition using neural network classifiers. Information is distributed in connections throughout the network. Neural networks also exhibit fault tolerance and robustness: even if a few neurons are not working properly, the performance of the system is not greatly affected, as the network automatically adjusts itself to new information. Neural networks can perform massively parallel operations, whereas a computer operates sequentially. A neural network with a single node is insufficient for many applications, so networks with a large number of nodes, connected via different architectures, are used. These neural networks have to be trained for pattern-classification tasks, so the neural network is stimulated by an environment; it undergoes changes in its free parameters as a result of the stimulation and responds in a new way to the environment because of those changes [6].

6.1 Feed Forward Networks

A Feed Forward Neural Network is a subclass of the layered architecture: connections are allowed only from layer j to layer j+1, meaning that no backward connections and no connections within the same layer are possible. It is the most general architecture and is also called an Error Back-Propagation (EBP) Neural Network. This architecture is widely used in applications ranging from pattern classification to control systems. In a Feed Forward Neural Network, every node in a layer is connected to every node in the next layer. The number of input nodes is equal to the number of elements of the pattern vector, whereas the number of output nodes is equal to the number of classes. The number of hidden-layer nodes is determined by trial and error, and is approximately double the number of input nodes. Training is an extension of the gradient-descent rule. The architecture of the Feed Forward Neural Network is shown in figure 15. The signal propagates forward from the input to the output layer, whereas the error propagates backward from the output to the input layer to update the weights. Random values are chosen for the hidden-layer and output-layer connection weights.

Figure 15 Feed Forward Neural Network

The outputs of the hidden-layer nodes are determined from the input patterns and the assumed connection weights of the hidden layer. The outputs of the output-layer nodes are determined from the hidden-layer outputs and the assumed connection weights of the output layer. The output-layer weights are updated first, based on the gradient-descent rule applied to the output nodes, followed by the hidden-layer weights, based on gradient descent applied to the hidden nodes. This process is repeated until convergence is reached (error within limits).
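The forward/backward procedure described above can be sketched as a minimal NumPy network (one hidden layer, squared error, plain gradient descent; the dimensions and data are toy stand-ins, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 input patterns, 2 classes with one-hot targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights for the hidden and output layers.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 2)); b2 = np.zeros(2)

lr = 0.5
mse_history = []
for epoch in range(5000):
    # Forward pass: input -> hidden -> output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    err = Y - T
    mse_history.append(np.mean(err ** 2))
    # Backward pass: output-layer gradients first, then hidden layer.
    dY = err * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(mse_history[0] > mse_history[-1])  # error decreases with training
```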

7. IMPLEMENTATION OF THE ERROR BACK PROPAGATION NEURAL NETWORK FOR CLASSIFICATION OF IRIS PATTERNS

For the classification of the extracted iris-pattern features, a Feed Forward Neural Network is used. The Feed Forward network in our algorithm implements the classical 3-layer architecture:
1. Input layer
2. Hidden layer
3. Output layer

The input layer contains as many neurons as there are elements in the input pattern. Normally, the number of neurons in the hidden layer is approximately double that of the input layer for good classification results. The output layer contains as many neurons as there are classes to recognize, with N neurons, where N = 2 to 108; with the CASIA iris database, 108 classes were utilized, hence 108 output neurons in our network. The following parameters were set for network training using the neural network toolbox:

• Training function: traingda (adaptive learning rate)
• Initial learning rate: 0.2
• Learning-rate increment: 1.05
• Maximum number of epochs: 50,000
• Error goal: 5×10^-7
• Minimum gradient: 1×10^-9

Figure 16 Custom Neural Network


When the network is trained in supervised mode, a target vector is also presented to the network. This target vector T has every element set to zero, except at the position of the target class, which is set to 1. The idea behind this design decision is that for each input pattern X presented to the network, an output vector Y is produced, with the same number of elements as there are output neurons. Each output neuron implements a squashing function that produces a real number in the range [0, 1]. To determine which class is being indicated by the network, we select the maximum value in Y and set it to 1, while setting all other elements to zero; the element set to one indicates the classification of that input pattern. Although the theory behind neural networks is well understood and mostly proved by mathematical concepts, designing a network may still involve proceeding to a solution by trial and error when determining parameters. The Feed Forward Neural Networks are used for classification of the patterns extracted from the iris segmentation phase (the iris template). The network uses patterns whose dimensions are reduced using SVD. The classification rate is analyzed by varying the number of dimensions of the input pattern for the SVD-generated data; the number of classes is also varied to check the variation in classification rate [9]. Since the SVD algorithm outputs a feature vector in decreasing order of values, we can choose how many elements the input pattern will have and observe how this affects the final classification result. Network classifications are based on iris-template images of 40×40 pixels, quantized from the original iris image with an averaging mask of 3×3 pixels. The number of patterns in the dataset is also varied in order to find out whether it affects the final classification rate. As the number of classes and cases increases, the network has more difficulty learning the proper discriminatory weights.
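The target-vector construction and the winner-take-all decision described above can be sketched as follows (the class count and output values are illustrative only):

```python
import numpy as np

n_classes = 5

def target_vector(cls):
    """Target T: all zeros except a 1 at the position of the target class."""
    T = np.zeros(n_classes)
    T[cls] = 1.0
    return T

def decide(Y):
    """Select the maximum output, set it to 1 and all other elements to 0."""
    out = np.zeros_like(Y)
    out[np.argmax(Y)] = 1.0
    return out

Y = np.array([0.12, 0.80, 0.33, 0.05, 0.41])  # squashed network outputs
print(target_vector(2))   # [0. 0. 1. 0. 0.]
print(decide(Y))          # [0. 1. 0. 0. 0.]
```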

    Figure 17 Neural Network Training

In the tests we performed, when the number of classes was kept low (up to 6) the network was able to reach the minimum squared error (MSE) goal within the specified number of epochs, i.e. 50,000. As the number of classes was increased beyond 6, the MSE goal was no longer attained, but the MSE kept decreasing until the maximum number of epochs was reached; in this case, the classification rate also decreased. More interesting behaviour appeared when the number of classes was higher than 20. In this case, the network did not achieve the MSE goal and very quickly reached the minimum-gradient parameter, which means that learning was not productive and the network showed no improvement in decreasing the minimum squared error. Network performance was found to be very poor: looking at the classification rate for 50 classes, almost all input patterns were classified as the same class. The network became biased towards only one class, and the classification rate reduced considerably.


    Figure 18 Network achieving the Goal within specified number of epochs

Figure 18 shows the simulation results when the network achieves the MSE goal within the specified number of epochs, as well as the case when the MSE goal was not achieved within the specified number of epochs. It was also seen while simulating that when the number of classes is large, the network reaches the minimum-gradient parameter set during the training phase, as a result of which the simulation stops before the specified number of epochs. In this case the network becomes biased towards one class and we get a very low classification rate.

8. CONCLUSION

The study results indicate that optimum classification values for the SVD algorithm are obtained with 140 images, i.e. for 20 classes. In this case the number of training samples per class was kept at 5 and the number of test patterns at 2. As the number of classes was increased above 20, the performance of the network dropped abruptly to around 2% and became independent of the SVD dimension. In such cases the network gets biased towards one class and is not recommended for classification.
The study results indicate that optimum classification values for the characterizing-key-local-variations algorithm are obtained with 70 images, i.e. for 10 classes. In this case the number of training samples per class was kept at 5 and the number of test patterns at 2. As the number of classes was increased above 10, the performance of the network dropped abruptly to around 2%. In such cases the network gets biased towards one class and is not recommended for classification.

TABLE 1: Conclusions

                   SVD        KLV
Database (images)  140        70
Training time      Large      Large
Classification     Very good  Poor

9. RESULTS

For simulation, the eye images were collected from the UBIRIS iris database designed by NLPR (National Laboratory of Pattern Recognition, Chinese Academy of Sciences). The database consists of 7 eye images per person, of which three were captured in a first session and the remaining four in a second session taken after an interval of one month. This information was exploited: since the eye images belonging to the same class are of a similar nature, the templates generated from them are also alike. The first few patterns of each class were used for network training and the remaining ones for network testing. The following sections show the classification rate obtained for the error back-propagation neural networks by varying different parameters, such as the number of training and testing patterns and the number of classes.


9.1 Iris Classification

The iris classification was done using Error Back-Propagation Neural Networks, in which the neural network was trained using the gradient-descent rule. The classification rate was verified by changing the following parameters:

      Number of classes.

     

    Changing number of training patterns from 2 to 5.9.2 False Acceptance Rate (FAR) and False Rejection Rate (FRR):  The Iris recognition performance is evaluated using the False Acceptance Rate (FAR) and False Rejection Rate (FRR).The false acceptance rate, or FAR, is the measure of the likelihood that the biometric security system will incorrectlyaccept an access attempt by an unauthorized user. A system’s FAR typically is stated as the ratio of the number of falseacceptances divided by the number of identification attempts. FAR is defined as

    The false rejection rate, or FRR, is the measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user. A system's FRR is typically stated as the ratio of the number of false rejections to the number of identification attempts:

        FRR = (number of false rejections) / (number of identification attempts)
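Both rates reduce to simple ratios over the attempt counts; a minimal sketch (the function names are illustrative):

```python
def far(false_accepts, attempts):
    """False Acceptance Rate as a percentage of identification attempts."""
    return 100.0 * false_accepts / attempts

def frr(false_rejects, attempts):
    """False Rejection Rate as a percentage of identification attempts."""
    return 100.0 * false_rejects / attempts

print(far(3, 100))  # 3.0 -> 3 impostors accepted out of 100 attempts
print(frr(2, 50))   # 4.0 -> 2 genuine users rejected out of 50 attempts
```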

    In other words, the false accept rate is the probability that the system incorrectly matches the input pattern to a non-matching template in the database; it measures the percentage of invalid inputs that are incorrectly accepted. The false reject rate is the probability that the system fails to detect a match between the input pattern and a matching template in the database; it measures the percentage of valid inputs that are incorrectly rejected.

    9.3 True Accept Rate (TAR)

    The True Accept Rate (TAR) describes the probability that the system correctly matches a genuine user to the corresponding template stored within the system. It is a statistic used to measure biometric performance in the verification task: the percentage of times a system correctly verifies a true claim of identity. If the input matches a stored template, the true acceptance rate increases; otherwise the false rejection rate increases. The true reject rate, by contrast, is the percentage of times a system correctly rejects a false claim of identity.

    9.4 True Reject Rate (TRR)

    The True Reject Rate (TRR) describes the probability that the system correctly denies an imposter, not matching the imposter's data to any template within the system.

    In traditional biometric systems, data presentation curves for the false match rate (FMR), false non-match rate (FNMR), true accept rate (TAR) and true reject rate (TRR) are often employed to define and display a system's ability to correctly verify or identify genuine users and imposters through matching processes. Such metrics provide a clear YES/NO matching result and lead to acceptance or rejection decisions.

    Table 2 gives, in percent, the true and false acceptance and rejection rates of the singular-value-decomposition and key-local-variation templates after classification by the error-back-propagation neural network.
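Because every genuine attempt is either accepted or rejected, and likewise every imposter attempt, TAR and TRR are simply the complements of FRR and FAR. A minimal sketch (function names illustrative), checked against one FAR entry from the SVD results below:

```python
def tar(frr_percent):
    """True Accept Rate: genuine attempts correctly accepted, in percent."""
    return 100.0 - frr_percent

def trr(far_percent):
    """True Reject Rate: imposter attempts correctly rejected, in percent."""
    return 100.0 - far_percent

print(tar(0.0))                # 100.0 (FRR = 0% for SVD gives TAR = 100%)
print(round(trr(63.0952), 4))  # 36.9048, matching the SVD 2/2 table entry
```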

    TABLE 2: Results

    Result for SVD:
    TAR for all training and testing patterns = 100%
    FRR for all training and testing patterns = 0%

    TRR (in %), rows = training patterns, columns = testing patterns:

    Train\Test       2          3          4          5
    2            36.9048    60.7143    28.5714    67.8571
    3            36.9048    53.5714    52.381     67.8571
    4            39.2857    47.619     52.381     67.8571
    5            64.5857    47.619     52.381     67.8571


    FAR (in %), rows = training patterns, columns = testing patterns:

    Train\Test       2          3          4          5
    2            63.0952    39.2857    71.4286    32.1429
    3            63.0952    46.4286    47.619     32.1429
    4            60.7143    52.381     47.619     32.1429
    5            35.7143    52.381     47.619     32.1429

    Result for KLV:
    TAR for all training and testing patterns = 100%
    FRR for all training and testing patterns = 0%

    TRR (in %), rows = training patterns, columns = testing patterns:

    Train\Test       2          3          4          5
    2            40.4762    26.1905    38.0952    50
    3            58.3333    16.6667    50         26.1905
    4            35.7143    21.4286    50         26.1905
    5            32.1429    21.4286    50         26.1905

    FAR (in %), rows = training patterns, columns = testing patterns:

    Train\Test       2          3          4          5
    2            59.5238    73.8095    61.9048    50
    3            41.6667    83.3327    50         73.8095
    4            64.2857    78.7614    50         73.8095
    5            67.8571    67.8571    50         73.8095

    REFERENCES 

    [1]. B. G. Patil, A. M. U. Wagdarikar, and S. A. More, "Performance Evaluation of IRIS Recognition by Key Local Variation Algorithm using Neural Network Classifier", Proceedings of the World Congress on Engineering and Computer Science 2010 (WCECS 2010), Vol. I, October 20-22, 2010, San Francisco, USA.

    [2]. John Daugman, "How Iris Recognition Works", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004 (Invited Paper).

    [3]. Hugo Proença, Sílvio Filipe, Ricardo Santos, João Oliveira, and Luís A. Alexandre, "The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-the-Move and At-a-Distance", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, 2010.

    [4]. Rishabh Parashar and Sandeep Joshi (Rajasthan Technical University, Kota, India), "Comparative Study of Iris Databases and UBIRIS Database for Iris Recognition Methods for Non-Cooperative Environment", International Journal of Engineering Research & Technology (IJERT), Vol. 1, Issue 5, July 2012, ISSN: 2278-0181.

    [5]. Sudha Gupta, Viral Doshi, Abhinav Jain, and Sreeram Iyer (K.J.S.C.E., Mumbai, India), "Iris Recognition System using Biometric Template Matching Technology", International Journal of Computer Applications (0975-8887), Vol. 1, No. 2, 2010.


    [6]. Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang, "Efficient Iris Recognition by Characterizing Key Local Variations", IEEE Transactions on Image Processing, Vol. 13, No. 6, June 2004.

    [7]. Mayank Vatsa, Richa Singh, and Afzel Noore, "Improving Iris Recognition Performance Using Segmentation, Quality Enhancement, Match Score Fusion, and Indexing", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.

    [8]. Yong Wang and Jiu-Qiang Han, "Iris Recognition Using Independent Component Analysis", Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005; School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China.

    [9]. Mrunal M. Khedkar and S. A. Ladhake, "Neural Network Based Human Iris Pattern Recognition System Using SVD Transform Features", ITSI Transactions on Electrical and Electronics Engineering (ITSI-TEEE); Sipna College of Engineering & Technology, Amravati, Maharashtra, India, 444605.

    [10]. Babasaheb G. Patil and Shaila Subbaraman (Department of Electronics Engineering, Walchand College of Engineering, Sangli, Maharashtra, India), "SVD-EBP Algorithm for Iris Pattern Recognition", (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 12, 2011.

    [11]. Hugo Proença, Sílvio Filipe, Ricardo Santos, João Oliveira, and Luís A. Alexandre, "UBIRIS: A Noisy Iris Database", Lecture Notes in Computer Science, ICIAP 2005: 13th International Conference on Image Analysis and Processing, Cagliari, Italy, September 6-8, 2005, Vol. 1, pp. 970-977, ISBN: 3-540-28869-4.

    [12]. Libor Masek, "Recognition of Human Iris Patterns for Biometric Identification".

    [13]. E. Garcia, "Singular Value Decomposition (SVD) - A Fast Track Tutorial".

    [14]. Paulo Eduardo Merlotti, "Experiments on Human Iris Recognition using Error Back Propagation Artificial Neural Network", April 2004.

    [15]. C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, New York, 1995.

    [16]. "Efficient Iris Recognition by Characterizing Key Local Variations", IEEE Transactions on Image Processing, Vol. 13, No. 6, June 2004.

    [17]. Yong Wang and Jiu-Qiang Han, "Iris Recognition Using Independent Component Analysis", Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005.

    AUTHORS

    Prachi P. Jeurkar received B.E. and M.E. degrees in Electronics with specialization in Telecommunication from Bharat Ratna Indira Gandhi College of Engineering, Solapur.

    Vijaykumar S. Kolkure received B.E. and M.E. degrees in Electronics with specialization in Computer Science from Walchand College of Engineering, Sangli. He is working as an Assistant Professor at Bharat Ratna Indira Gandhi College of Engineering, Solapur.