
V. Karthikeyan et al. / (IJITR) International Journal of Innovative Technology and Research, Volume No. 1, Issue No. 2, February-March 2013, 207-210

ISSN 2320-5547 © 2013 http://www.ijitr.com. All rights reserved.

Reflectance Perception Model Based Face Recognition in Dissimilar Illumination Conditions

V. KARTHIKEYAN, Department of ECE, SVS College of Engineering, Coimbatore-109, Tamilnadu, India

V. J. VIJAYALAKSHMI, Department of EEE, SKCET, Coimbatore-46, Tamilnadu, India

P. JEYAKUMAR, Student, Department of ECE, Karpagam University, Coimbatore-105

Abstract— Reflectance perception based face recognition under different illumination conditions is presented. Face recognition algorithms have to deal with significant illumination variation between gallery and probe images, and many state-of-the-art commercial face recognition algorithms still struggle with this problem. In the proposed work, a new preprocessing algorithm is presented that compensates for illumination variation in images, together with a robust Principal Component Analysis (PCA) based facial feature extraction that reduces the dimension of the image by discarding unwanted vectors through weighted eigenfaces. The proposed work demonstrates large performance improvements with several standard face recognition algorithms across multiple, publicly available face databases.

Keywords- Face Recognition, Principal Component Analysis, Reflectance Perception, Illumination Variations

I. INTRODUCTION

Humans often use faces to recognize individuals, and computing capability has advanced considerably over the past few decades. Early face recognition algorithms used simple geometric models, but the recognition process has since matured into a science of sophisticated mathematical representation and matching. Major advancements and initiatives in the past ten to fifteen years have propelled face recognition technology into the spotlight; it can be used for both identification and verification. In building a robust and efficient face recognition system, lighting variation is one of the main technical challenges faced by designers. This paper focuses mainly on robustness to lighting variations. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image. Hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by Principal Component Analysis. In addition, multiple face models are generated from several normalized face images that have different eye distances. Finally, a weighted score fusion scheme is applied to the scores from the multiple complementary classifiers.

II. RELATED WORKS

Automated face recognition is a relatively new concept. Developed in the 1960s, the first semi-automated system for face recognition required the administrator to locate features such as the eyes, ears, nose and mouth on photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. Along with pose variation, illumination is the most significant factor affecting the appearance of faces. Lighting can change greatly within and between days and between indoor and outdoor environments. Even greater variation is seen between two images of the same person taken under different conditions, say one in a studio and the other outdoors. Due to the 3D shape of the face, a direct lighting source can cast strong shadows that highlight or diminish certain facial features. Evaluations of face recognition algorithms consistently show that state-of-the-art systems cannot deal with large differences in illumination conditions between gallery and probe images [1-3]. In recent years many appearance-based algorithms have been proposed to deal with this problem [4-7]. In [5] it was shown that the set of images of an object in fixed pose but under varying illumination forms a convex cone in the space of images, and the illumination cones of human faces can be approximated well by low-dimensional linear subspaces [8]. The linear subspaces are typically estimated from training data, requiring multiple images of the object under different illumination conditions. Alternatively, model-based approaches have been proposed to address the problem; [9] fitted a previously constructed morphable 3D model to single images. Although the algorithm works well across pose and illumination, its computational expense is very high. To eliminate the tarnished halo effect, [10] introduced the low curvature image simplifier (LCIS) hierarchical


decomposition of an image. Each component in this hierarchy is computed by solving a partial differential equation inspired by anisotropic diffusion [11]. At each hierarchical level the method segments the image into smooth (low-curvature) regions while stopping at sharp discontinuities. The algorithm is computationally intensive and requires manual selection of no fewer than eight parameters.

III. PREPROCESSING ALGORITHM

3.1. The Reflectance Perception Model

This algorithm makes use of two widely accepted assumptions about human vision: 1) human vision is mostly sensitive to scene reflectance and mostly insensitive to the illumination conditions, and 2) human vision responds to local changes in contrast rather than to global brightness levels. These assumptions are closely related, since local contrast is a function of reflectance. Our main motive is to find an estimate of L(x, y) such that, when it divides I(x, y), it produces R(x, y) in which the local contrast is appropriately and accurately enhanced. In this view R(x, y) takes the place of the perceived sensation, while I(x, y) takes the place of the input stimulus. L(x, y) is then called the perception gain, which maps the input stimulus into the perceived sensation, given as

R(x, y) = I(x, y) / L(x, y).    (1)

Here R is essentially the reflectance of the scene and L is essentially the illumination field. After all, humans perceive reflectance details both in shadows and in bright regions, yet they remain aware of the presence of the shadows. To derive our model we look to evidence from experimental psychology. Weber's law states that the sensitivity threshold to a small intensity change increases proportionally with the signal level [12]. This law follows from brightness-perception experiments in which an observer views a uniform field of intensity I containing a disk whose brightness is gradually increased by a quantity ΔI. The value of ΔI at which the observer first perceives the disk against the background is called the brightness discrimination threshold. Weber concluded that the ratio ΔI/I is constant over a wide range of intensity values.
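To make this constant-ratio behaviour concrete, the following minimal sketch (the Weber fraction k = 0.02 is an assumed, purely illustrative value) shows that increments proportional to the signal level translate into equal steps of a logarithmic sensation, which is the link to the mapping discussed next.

```python
import numpy as np

# Weber's law illustration with an assumed Weber fraction k = 0.02:
# the just-noticeable increment delta_I grows in proportion to the signal
# level I, while a logarithmic sensation s = log(I) changes by the same
# amount at every level.
k = 0.02
for I in [10.0, 100.0, 1000.0]:
    delta_I = k * I                              # threshold increment, Weber's law
    delta_s = np.log(I + delta_I) - np.log(I)    # change in log-mapped sensation
    print(f"I={I:7.1f}  delta_I={delta_I:6.2f}  delta_s={delta_s:.5f}")
# delta_s is ~log(1 + k) ~ 0.0198 at every intensity, matching the constant ratio.
```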

Fig. 1: Compressive logarithmic mapping emphasizes changes at low stimulus levels and attenuates changes at high stimulus levels.

Weber's law gives a theoretical justification for assuming a logarithmic mapping from input stimulus to perceived sensation, as illustrated in Fig. 1. Due to the logarithmic mapping, when the stimulus is weak, for example in deep shadows, small changes in the input stimulus produce large changes in perceived sensation; when the stimulus is strong, small changes in the input stimulus are mapped to even smaller changes in perceived sensation. In effect, local variations in the input stimulus are mapped to variations in the perceived sensation with the gain

δR(x, y) = δI(x, y) / I_n(x, y),    (2)

where I_n(x, y) is the stimulus level in a small neighborhood of the input image. Comparing equations (1) and (2), we finally obtain the model for the perception gain:

L(x, y) = I_n(x, y),    (3)

where the neighborhood stimulus level I_n(x, y) is by definition taken to be the stimulus at the point (x, y). The solution for L can be found by minimizing the functional

J(L) = ∫∫_Ω ρ(x, y) (L − I)² dx dy + λ ∫∫_Ω (L_x² + L_y²) dx dy,    (4)

where the first term drives the solution L toward the perception gain model, while the second term imposes a smoothness constraint; Ω refers to the image domain. The parameter λ controls the relative importance of the two terms, and the space-varying permeability weight ρ(x, y) controls the anisotropic nature of the smoothing constraint. The Euler-Lagrange equation for this calculus-of-variations problem yields

L(x, y) − (λ / ρ(x, y)) (L_xx + L_yy) = I(x, y).    (5)
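For completeness, the step from (4) to (5) is the standard Euler-Lagrange computation; the short LaTeX sketch below spells it out for the integrand of (4).

```latex
% Euler-Lagrange condition for J(L) in Eq. (4), with integrand
%   F(L, L_x, L_y) = \rho\,(L - I)^2 + \lambda\,(L_x^2 + L_y^2):
\frac{\partial F}{\partial L}
  - \frac{\partial}{\partial x}\frac{\partial F}{\partial L_x}
  - \frac{\partial}{\partial y}\frac{\partial F}{\partial L_y}
  = 2\rho\,(L - I) - 2\lambda\,(L_{xx} + L_{yy}) = 0
\;\Longrightarrow\;
L - \frac{\lambda}{\rho}\,(L_{xx} + L_{yy}) = I .
```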

On a rectangular lattice, the linear differentialequation becomes


L_{i,j} + (λ/h²) Σ_{(k,l) ∈ N(i,j)} (L_{i,j} − L_{k,l}) / ρ_{(i,j),(k,l)} = I_{i,j},    (6)

where h denotes the pixel grid size, N(i, j) is the set of four neighbors of pixel (i, j), and each value of ρ is taken in the middle of the edge between the center pixel and the corresponding neighbor. In this formulation ρ controls the anisotropic nature of the smoothing by modulating the permeability between pixel neighbors. Multigrid algorithms for solving such systems are fairly efficient, having linear O(N) complexity. To obtain a relative measure of local contrast that equally respects boundaries in shadowed and brightly lit regions (i.e., different illumination conditions), the weight is given as

ρ_{ab} = |I_a − I_b| / min(I_a, I_b),    (7)

where ρ_{ab} is the weight between two neighboring pixels whose intensities are I_a and I_b.
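For concreteness, here is a minimal sketch (not the authors' implementation) of this preprocessing stage. It assumes a unit grid spacing (h = 1), a 4-neighbourhood, the contrast weight of Eq. (7) for ρ, and a direct SciPy sparse solve in place of the multigrid scheme mentioned above; the function and parameter names (reflectance_preprocess, lam, eps) are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def reflectance_preprocess(img, lam=1.0, eps=1e-6):
    """Estimate the illumination field L from the system of Eq. (6)
    (unit grid spacing, 4-neighbourhood) using the contrast weight of
    Eq. (7), then return the illumination-normalized output R = I / L.
    A direct sparse solve stands in for a multigrid solver."""
    I = img.astype(np.float64) + eps
    rows, cols = I.shape
    n = rows * cols
    index = np.arange(n).reshape(rows, cols)

    A = lil_matrix((n, n))
    A.setdiag(np.ones(n))                    # the L_{i,j} term of Eq. (6)
    b = I.ravel().copy()                     # right-hand side I_{i,j}

    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # 4-neighbourhood
        for i in range(rows):
            for j in range(cols):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    # Relative contrast across the edge, Eq. (7)
                    rho = abs(I[i, j] - I[ni, nj]) / min(I[i, j], I[ni, nj])
                    w = lam / (rho + eps)    # smoothing coupling across this edge
                    p, q = index[i, j], index[ni, nj]
                    A[p, p] += w
                    A[p, q] -= w

    L = spsolve(A.tocsr(), b).reshape(rows, cols)
    R = I / (L + eps)                        # reflectance estimate, Eq. (1)
    return R, L
```

The Python loops keep the sketch readable; for anything beyond small images one would vectorize the assembly or use a multigrid solver as the text suggests.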

IV. PRINCIPAL COMPONENT ANALYSIS BASED FEATURE EXTRACTION

PCA, commonly referred to as the use of eigenfaces, is the technique pioneered by Kirby and Sirovich. With PCA, the probe and gallery images must be the same size and must first be normalized to line up the eyes and mouth of the subjects within the image. The PCA approach is then used to reduce the dimension of the data by means of data compression, revealing the most effective low-dimensional structure of facial patterns. This reduction in dimensions removes information that is not useful and precisely decomposes the face structure into orthogonal components known as eigenfaces. Each face image may be represented as a weighted sum of the eigenfaces, and these weights are stored in a 1D array. A probe image is compared against a gallery image by measuring the distance between their respective feature vectors. The PCA approach typically requires the full frontal face to be presented each time; otherwise it results in poor performance. The primary advantage of this technique is that it can reduce the data needed to identify an individual to roughly 1/1000th of the data presented.
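As a rough illustration of this eigenface pipeline (not the authors' code), the sketch below computes eigenfaces by SVD of mean-centred gallery images, projects images onto them, and matches a probe by nearest feature-vector distance; array shapes, names, and the fixed component count are assumptions.

```python
import numpy as np

def train_eigenfaces(gallery, num_components=50):
    """Compute eigenfaces from a (num_images, height*width) array of
    aligned, same-size gallery faces; return the mean face, the
    eigenfaces, and the low-dimensional gallery feature vectors."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD of the centred data: rows of Vt are the orthonormal eigenfaces.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:num_components]
    weights = centered @ eigenfaces.T            # weighted-eigenface features
    return mean_face, eigenfaces, weights

def match(probe, mean_face, eigenfaces, gallery_weights):
    """Project a flattened probe image and return the index of the
    closest gallery face by Euclidean distance in eigenface space."""
    w = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(distances))
```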

Applying the PCA algorithm to the Yale database illustrates the improvement in accuracy, as shown in Fig. 2.

Fig. 2. (a) Image before the PCA algorithm; (b) image after the PCA algorithm.

SCORE FUSION BASED WEIGHTED SUM METHOD

There are many ways to combine the scores of the extracted features of a human face; among these, a simple, accurate and robust method is the weighted sum technique. The combined score is computed as a weighted sum:

S = Σ_i w_i s_i,    (8)

where the weight w_i is the amount of confidence we have in the i-th classifier and s_i is its score. In this work, we use 1/EER (the reciprocal of the equal error rate) as the measure of such confidence. Thus, we have a new score

S = Σ_i (1/EER_i) s_i.    (9)

The weighted sum method based on the EER is heuristic but intuitively appealing and easy to implement. In addition, it is robust to differences in statistics between training data and test data: even when the training and test data have different statistics, the relative strength of the component classifiers is unlikely to change significantly, so the weighting still makes sense.
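A minimal sketch of this fusion step, assuming normalized per-classifier scores and known training EERs (the function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def fuse_scores(scores, eers):
    """Combine per-classifier scores with weights w_i = 1/EER_i as in Eq. (9).
    `scores` holds one comparable (e.g. normalized) score per component
    classifier; `eers` holds the equal error rate each classifier achieved
    on training data."""
    scores = np.asarray(scores, dtype=float)
    weights = 1.0 / np.asarray(eers, dtype=float)   # higher confidence for lower EER
    return float(np.dot(weights, scores))

# Example: three classifiers with training EERs of 5%, 10% and 20%.
fused = fuse_scores([0.82, 0.60, 0.75], [0.05, 0.10, 0.20])
print(fused)
```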

V. CONCLUSION

Thus, a robust face recognition system that is insensitive to illumination variations has been proposed. The RPM preprocessing algorithm builds on brightness-perception experiments in which an observer views a uniform field of intensity. The PCA-based feature extraction reduces the dimension of the image by removing unwanted information, and the extracted features are finally combined by the weighted sum score fusion method. We introduced a simple and automatic image-processing algorithm for compensation of illumination-induced variations in images. At a high level, the algorithm mimics some aspects of human visual perception. If desired, the user may adjust a single parameter whose meaning is intuitive and simple to understand. The algorithm delivers large performance improvements for


standard face recognition algorithms across multipleface databases.

VI. EXPERIMENTAL RESULTS

Fig. 3. Test image.

Fig. 4. Preprocessed image.

Fig. 5. Preprocessed image 2.

Fig. 6. Preprocessed image 3.

Fig. 7. Database image.

VII. REFERENCES

[1] Phillips, P., Moon, H., Rizvi, S., Rauss, P.: The FERET evaluation methodology for face-recognition algorithms. IEEE PAMI 22 (2000) 1090-1104

[2] Blackburn, D., Bone, M., Phillips, P.: Facial Recognition Vendor Test 2000: evaluation report (2000)

[3] Gross, R., Shi, J., Cohn, J.: Quo vadis face recognition? In: Third Workshop on Empirical Evaluation Methods in Computer Vision (2001)

[4] Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE PAMI 19 (1997)

[5] Belhumeur, P., Kriegman, D.: What is the set of images of an object under all possible lighting conditions? Int. J. of Computer Vision 28 (1998)

[6] Georghiades, A., Kriegman, D., Belhumeur, P.: From few to many: Generative models for recognition under variable pose and illumination. IEEE PAMI (2001)

[7] Riklin-Raviv, T., Shashua, A.: The quotient image: class-based re-rendering and recognition with varying illumination conditions. IEEE PAMI (2001)

[8] Georghiades, A., Kriegman, D., Belhumeur, P.: Illumination cones for recognition under variable lighting: Faces. In: Proc. IEEE Conf. on CVPR (1998)

[9] Blanz, V., Romdhani, S., Vetter, T.: Face identification across different poses and illumination with a 3D morphable model. In: IEEE Conf. on Automatic Face and Gesture Recognition (2002)

[10] Tumblin, J., Turk, G.: LCIS: A boundary hierarchy for detail-preserving contrast reduction. In: ACM SIGGRAPH (1999)