Face Recognition Committee Machine

Ho-Man Tang, Michael R. Lyu and Irwin King
Department of Computer Science and Engineering
The Chinese University of Hong Kong, Shatin, N.T. Hong Kong SAR.
{hmtang,lyu,king}@cse.cuhk.edu.hk

Introduction

In recent years, the committee machine, an ensemble of estimators, has proven to give more accurate results than a single predictor.

Dynamic vs. Static Structures

There exist two types of structure:

Static Structure: This is generally known as an ensemble method. The input is not involved in combining the experts.

Dynamic Structure: The input is directly involved in the combining mechanism. An integrating unit adjusts the weight of each expert according to the input.

This poster describes the design of the Face Recognition Committee Machine (FRCM). It is composed of five state-of-the-art face recognition algorithms: (1) Eigenface, (2) Fisherface, (3) Elastic Graph Matching (EGM), (4) Support Vector Machine (SVM) and (5) Neural Networks. We propose a Static (SFRCM) and a Dynamic (DFRCM) structure for the FRCM, and compare their performance against the five individual algorithms on the ORL and Yale face databases to show the improvement.

Result & Confidence

We introduce the use of Confidence c as a weighted vote in the voting machine, to prevent the low-confidence Result r of an individual expert from unduly affecting the final result. We adopt a different approach to find the result and confidence for each expert:

• Eigenface, Fisherface and EGM: We employ a K-nearest-neighbor classifier with K = 5: the five training images nearest to the test image are chosen. The final result for expert i is defined as the class j with the highest number of votes v among the J classes over these five neighbors (a small worked sketch follows this list):

$r_i = \arg\max_j v(j),$

and its confidence is defined as the number of votes for the result class divided by K:

$c_i = \frac{v(r_i)}{K}.$

• SVM: To recognize an image among J different classes, $\binom{J}{2}$ pairwise SVMs are constructed. The image is tested against each SVM, and the class j with the highest number of votes over all SVMs is selected as the recognition result $r_i$. The confidence is defined as the number of votes for the result class divided by $J - 1$ (the maximum number of votes a single class can receive, since each class appears in J - 1 pairwise SVMs).

• Neural Networks: We choose a binary target vector of size J: the target class is set to 1 and all others to 0. The class j whose output value is closest to 1 is chosen as the result, and that output value is taken as the confidence.
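As an illustration of the definitions above, here is a minimal sketch (Python with NumPy, not the authors' implementation) of the result and confidence for the K-nearest-neighbor experts, with K = 5:

```python
import numpy as np
from collections import Counter

def knn_result_confidence(test_vec, train_vecs, train_labels, k=5):
    """Result r and confidence c for one KNN-based expert
    (Eigenface / Fisherface / EGM) as defined above:
    r = class with the most votes among the k nearest neighbors,
    c = votes for that class divided by k."""
    # Euclidean distance from the test image to every training image
    dists = np.linalg.norm(train_vecs - test_vec, axis=1)
    nearest = np.argsort(dists)[:k]               # indices of the k nearest
    votes = Counter(train_labels[i] for i in nearest)
    r, v = votes.most_common(1)[0]                # winning class and its votes
    return r, v / k                               # confidence in [1/k, 1]

# Toy usage: 6 training images (2 features each), 3 classes
train = np.array([[0., 0.], [0.1, 0.], [1., 1.], [1.1, 1.], [2., 2.], [0.2, 0.1]])
labels = np.array([0, 0, 1, 1, 2, 0])
print(knn_result_confidence(np.array([0.05, 0.05]), train, labels))
# -> class 0 with confidence 0.6 (3 of the 5 nearest neighbors vote for it)
```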


[Figure 1: SFRCM Overview. An input image is fed to five experts (Eigenface, Fisherface, EGM, SVM, Neural Network); each expert passes its Result (r) and Confidence (c), weighted by its Weight (w), to the Voting Machine, which outputs the recognized class.]

Static Structure

SFRCM adopts the static structure of the committee machine. Each expert gives its Result r and Confidence c to the voting machine. Together with the Weight w of each expert, the recognized class is chosen as the class with the highest Score s among the J classes, defined as:

$s_j = \sum_{i=1,\, r_i = j}^{5} c_i \cdot w_i.$

A minimal sketch of this weighted vote appears below.
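This sketch shows the score computation with made-up expert outputs (the numbers are illustrative, not from the poster):

```python
import numpy as np

def sfrcm_vote(results, confidences, weights, num_classes):
    """Recognized class = argmax_j s_j, where s_j sums c_i * w_i
    over the experts i whose result r_i is class j."""
    scores = np.zeros(num_classes)
    for r_i, c_i, w_i in zip(results, confidences, weights):
        scores[r_i] += c_i * w_i
    return int(np.argmax(scores)), scores

# Five experts: Eigenface, Fisherface, EGM, SVM, NN (toy numbers)
r = [2, 2, 0, 2, 1]                 # each expert's result class
c = [0.6, 0.8, 0.4, 0.9, 0.7]       # each expert's confidence
w = [0.18, 0.25, 0.15, 0.24, 0.18]  # each expert's weight
print(sfrcm_vote(r, c, w, num_classes=3))  # class 2 wins the weighted vote
```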

Weight

The weight w is derived from the average performance of the algorithms in the ORL and Yale testing. The performance of each expert is normalized by an exponential mapping function, which ensures that every weight is positive and within the range [0, 1]:

$w_i = \frac{\exp(p_i)}{\sum_{i=1}^{5} \exp(p_i)},$

where $p_i$ is the average performance of expert i. The weight further prevents a high-confidence result from a poorly performing expert from affecting the ensemble result significantly. A short sketch follows.
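The exponential mapping is a softmax over the average performances; a quick sketch with toy accuracy numbers (assumed for illustration):

```python
import numpy as np

def expert_weights(perf):
    """w_i = exp(p_i) / sum_k exp(p_k): positive weights in (0, 1)
    that sum to 1, so no single expert dominates outright."""
    e = np.exp(np.asarray(perf, dtype=float))
    return e / e.sum()

# Toy average performances (accuracy in [0, 1]) for the five experts
p = [0.88, 0.98, 0.78, 0.98, 0.93]
print(expert_weights(p).round(3))  # approx. [0.194 0.214 0.175 0.214 0.204]
```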






SFRCM Drawbacks

In SFRCM, the input is not involved in determining the weights. This has two major drawbacks:

• Fixed weights under all situations: Experts may perform differently under different conditions, so fixed weights for faces under all conditions are undesirable.
• No update mechanism for the weights: The experts' weights cannot be updated once the system is trained.

Dynamic Structure

To overcome the first problem, we develop a gating network in DFRCM: a neural network that accepts the input image and assigns a specific weight to each individual expert.

[Figure 2: DFRCM Overview. The input image goes to the five experts (Eigenface, Fisherface, EGM, SVM, Neural Network), which produce the result/confidence pairs (r1, c1) ... (r5, c5), and to the Gating Network, which assigns the weights w1 ... w5; the Voting Machine combines these into the recognized class.]

In DFRCM, each expert is trained independently on a different face database. Each expert's performance is then determined in the testing phase, defined as:

$p_{i,j} = \frac{n_{i,j}}{t_{i,j}},$

where $n_{i,j}$ is the total number of correct recognitions and $t_{i,j}$ is the total number of trials for expert i on face database j. To solve the second problem, we propose a feedback mechanism, which updates the weights of the experts continuously:

1. Initialize n_{i,j} and t_{i,j} to 0
2. Train each expert i on a different database j
3. While TESTING:
   a) Determine j for each test image
   b) Recognize the image with each expert i
   c) If t_{i,j} != 0, calculate p_{i,j}
   d) Else set p_{i,j} = 0
   e) Calculate w_{i,j}
   f) Determine the ensemble result
   g) If FEEDBACK, update n_{i,j} and t_{i,j}
4. End while
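A runnable sketch of one iteration of this loop (Python; `detect_db`, the expert recognizers and the ensemble `vote` are hypothetical callables standing in for parts the poster does not specify):

```python
import numpy as np

NUM_EXPERTS, NUM_DBS = 5, 2           # five experts; e.g. ORL and Yale

n = np.zeros((NUM_EXPERTS, NUM_DBS))  # n_ij: correct recognitions
t = np.zeros((NUM_EXPERTS, NUM_DBS))  # t_ij: trials

def dfrcm_step(image, label, detect_db, experts, vote, feedback=True):
    """One testing-phase iteration of the feedback loop above."""
    j = detect_db(image)                             # (a) determine database j
    outputs = [expert(image) for expert in experts]  # (b) (r_i, c_i) per expert
    # (c)/(d): p_ij = n_ij / t_ij, or 0 if the expert has no trials yet
    p = np.where(t[:, j] > 0, n[:, j] / np.maximum(t[:, j], 1), 0.0)
    w = np.exp(p) / np.exp(p).sum()                  # (e) exponential mapping
    result = vote(outputs, w)                        # (f) ensemble result
    if feedback:                                     # (g) update the counters
        for i, (r_i, _) in enumerate(outputs):
            t[i, j] += 1
            n[i, j] += (r_i == label)
    return result
```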

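The gating network introduced above can be sketched similarly; a single softmax layer is shown purely for illustration, since the poster only states that a neural network assigns the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

class GatingNetwork:
    """Maps a flattened input image to one weight per expert
    (softmax output, so weights are positive and sum to 1).
    The single-layer architecture is an assumption."""
    def __init__(self, input_dim, num_experts=5):
        self.W = rng.normal(0.0, 0.01, size=(num_experts, input_dim))
        self.b = np.zeros(num_experts)

    def weights(self, x):
        z = self.W @ x + self.b
        z = z - z.max()            # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

gate = GatingNetwork(input_dim=92 * 112)   # e.g. a flattened 92x112 ORL image
w = gate.weights(rng.random(92 * 112))     # one weight per expert
```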

Experimental Results

We evaluate the performance of SFRCM, DFRCM and the individual experts on the ORL and Yale face databases. We use leave-one-out validation for SFRCM and a cross-validation partition for DFRCM. The results are shown in Tables 1-4 below.
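For reference, the leave-one-out protocol can be sketched like this (plain Python; `recognize` is a hypothetical stand-in for the committee machine):

```python
def leave_one_out_accuracy(images, labels, recognize):
    """Hold out each image in turn, recognize it against all
    remaining images, and report the fraction correct."""
    correct = 0
    for i in range(len(images)):
        rest_imgs = images[:i] + images[i + 1:]
        rest_lbls = labels[:i] + labels[i + 1:]
        correct += recognize(images[i], rest_imgs, rest_lbls) == labels[i]
    return correct / len(images)
```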

Table 1: DFRCM ORL Result

S    Eigen   Fisher   EGM     SVM      NN      DFRCM
1    82.5%   90.0%    90.0%   92.5%    97.5%   92.5%
2    85.0%   100.0%   72.5%   100.0%   97.5%   100.0%
3    87.5%   100.0%   57.5%   100.0%   92.5%   100.0%
4    75.0%   92.5%    67.5%   95.0%    87.5%   100.0%
5    72.5%   97.5%    72.5%   90.0%    87.5%   95.0%
6    82.5%   90.0%    70.0%   97.5%    87.5%   95.0%
7    80.0%   92.5%    57.5%   92.5%    90.0%   97.5%
8    77.5%   87.5%    67.5%   95.0%    87.5%   95.0%
9    75.0%   90.0%    62.5%   97.5%    90.0%   100.0%
10   85.0%   97.5%    72.5%   95.0%    92.5%   95.0%
Pi   80.3%   93.8%    69.0%   95.5%    91.0%   97.0%



Table 2: SFRCM ORL Result

S    Eigen   Fisher   EGM     SVM      NN      SFRCM
1    92.5%   100.0%   90.0%   95.0%    92.5%   95.0%
2    85.0%   100.0%   72.5%   100.0%   95.0%   100.0%
3    87.5%   100.0%   85.0%   100.0%   95.0%   100.0%
4    90.0%   97.5%    70.0%   100.0%   92.5%   100.0%
5    85.0%   100.0%   82.5%   100.0%   95.0%   100.0%
6    87.5%   97.5%    70.0%   97.5%    92.5%   97.5%
7    82.5%   95.0%    75.0%   95.0%    95.0%   100.0%
8    92.5%   95.0%    80.0%   97.5%    90.0%   97.5%
9    90.0%   100.0%   72.5%   97.5%    90.0%   100.0%
10   85.0%   97.5%    80.0%   95.0%    92.5%   97.5%
Pi   87.8%   98.3%    77.8%   97.8%    93.0%   98.8%

Table 3: DFRCM Yale Result

S            Eigen   Fisher   EGM     SVM      NN      DFRCM
centerlight  40.0%   73.3%    46.7%   93.3%    60.0%   100.0%
glasses      73.3%   93.3%    66.7%   86.7%    86.7%   86.7%
happy        73.3%   86.7%    86.7%   86.7%    93.3%   86.7%
leftlight    26.7%   40.0%    13.3%   26.7%    40.0%   40.0%
noglasses    93.3%   100.0%   86.7%   100.0%   93.3%   100.0%
normal       86.7%   93.3%    86.7%   86.7%    93.3%   86.7%
rightlight   26.7%   40.0%    66.7%   20.0%    26.7%   40.0%
sad          66.7%   93.3%    80.0%   93.3%    86.7%   93.3%
sleepy       80.0%   93.3%    60.0%   100.0%   93.3%   93.3%
surprised    73.3%   53.3%    46.7%   66.7%    46.7%   73.3%
wink         93.3%   86.7%    46.7%   100.0%   100.0%  100.0%
Pi           66.7%   77.6%    62.4%   78.2%    74.5%   81.8%

Table 4: SFRCM Yale Result

S            Eigen    Fisher   EGM     SVM      NN       SFRCM
centerlight  53.3%    93.3%    66.7%   86.7%    73.3%    93.3%
glasses      80.0%    100.0%   53.3%   86.7%    86.7%    100.0%
happy        93.3%    100.0%   80.0%   100.0%   93.3%    100.0%
leftlight    26.7%    26.7%    33.3%   26.7%    26.7%    33.3%
noglasses    100.0%   100.0%   80.0%   100.0%   100.0%   100.0%
normal       86.7%    100.0%   86.7%   100.0%   93.3%    100.0%
rightlight   26.7%    40.0%    40.0%   13.3%    26.7%    33.3%
sad          86.7%    93.3%    93.3%   100.0%   93.3%    100.0%
sleepy       86.7%    100.0%   73.3%   100.0%   100.0%   100.0%
surprised    86.7%    66.7%    33.3%   73.3%    66.7%    86.7%
wink         100.0%   100.0%   66.7%   93.3%    93.3%    100.0%
Pi           75.2%    83.6%    64.2%   80.0%    77.6%    86.1%
