Page 1:

Cos 429: Face Detection (Part 2)

Viola-Jones and AdaBoost

Guest Instructor: Andras Ferencz (Your Regular Instructor: Fei-Fei Li)

Thanks to Fei-Fei Li, Antonio Torralba, Paul Viola, David Lowe, Gabor Melli (by way of the Internet) for slides

Page 2:

Face Detection

Page 3:

Face Detection

Page 4:

Sliding Windows

1. hypothesize: try all possible rectangle locations, sizes

2. test: classify if rectangle contains a face (and only the face)

Note: there are thousands more false windows than true ones.

Page 5:

Classification (Discriminative)

Faces

Background

In some feature space

Page 6:

Image Features

4 types of “Rectangle filters” (similar to Haar wavelets; Papageorgiou et al.)

Based on a 24x24 grid: 160,000 features to choose from.

g(x) = sum(WhiteArea) - sum(BlackArea)

Page 7:

Image Features

F(x) = α1 f1(x) + α2 f2(x) + ...

fi(x) = 1 if gi(x) > θi; -1 otherwise

Need to: (1) select features i = 1..n, (2) learn thresholds θi, (3) learn weights αi
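The thresholded weak classifier fi above is a one-line decision stump. A minimal sketch (my own function name), vectorized over an array of filter responses:

```python
import numpy as np

def weak_classifier(g_vals, theta):
    """fi(x) = 1 if gi(x) > theta, -1 otherwise, applied elementwise
    to an array of rectangle-filter responses g_vals."""
    return np.where(g_vals > theta, 1, -1)
```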

Page 8:

A Peek Ahead: the learned features

Page 9:

Why rectangle features? (1) The Integral Image

• The integral image computes a value at each pixel (x,y) that is the sum of the pixel values above and to the left of (x,y), inclusive.

• This can quickly be computed in one pass through the image


Page 10:

Why rectangle features? (2) Computing the Sum within a Rectangle

• Let A, B, C, D be the values of the integral image at the corners of a rectangle, laid out as

  D B
  C A

  (D top-left, B top-right, C bottom-left, A bottom-right)

• Then the sum of the original image values within the rectangle can be computed: sum = A - B - C + D

• Only 3 additions are required for any size of rectangle! This is now used in many areas of computer vision.
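The two slides above can be sketched in a few lines of NumPy (a sketch with my own function names, not the original implementation):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y+1, 0:x+1], inclusive -- one pass
    via cumulative sums down the rows and across the columns."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] from 4 corner lookups: A - B - C + D.
    A is the bottom-right corner inside the rectangle; B, C, D sit just
    outside its top and left edges (0 when the rectangle touches the border)."""
    A = ii[y1, x1]
    B = ii[y0 - 1, x1] if y0 > 0 else 0
    C = ii[y1, x0 - 1] if x0 > 0 else 0
    D = ii[y0 - 1, x0 - 1] if (y0 > 0 and x0 > 0) else 0
    return A - B - C + D
```

However large the rectangle, the cost stays at three additions/subtractions per sum.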

Page 11:

Boosting

How to select the best features?

How to learn the classification function?

F(x) = α1 f1(x) + α2 f2(x) + ...

Page 12:

Boosting

• Defines a classifier using an additive model:

F(x) = α1 f1(x) + α2 f2(x) + ...

F(x) is the strong classifier, each fi is a weak classifier, each αi is a weight, and x is the features vector.

Page 13:

Boosting

• It is a sequential procedure.

Each data point xt has a class label yt = +1 or -1, and a weight wt = 1.

Page 14:

Toy example: weak learners from the family of lines.

h => p(error) = 0.5: it is at chance.

Each data point has a class label yt = +1 or -1 and a weight wt = 1.

Page 15:

Toy example

This one seems to be the best.

This is a ‘weak classifier’: it performs slightly better than chance.

Each data point has a class label yt = +1 or -1 and a weight wt = 1.

Page 16:

Toy example

We set a new problem for which the previous weak classifier performs at chance again.

We update the weights: wt ← wt exp{-yt Ht}

Each data point has a class label yt = +1 or -1 and an updated weight wt.

Pages 17-19: (The previous slide is repeated for three more rounds: each time, a new problem is set for which the previous weak classifiers perform at chance, and the weights are updated as wt ← wt exp{-yt Ht}.)

Page 20:

Toy example

The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers:

F(x) = α1 f1(x) + α2 f2(x) + α3 f3(x) + α4 f4(x)

Page 21:

AdaBoost Algorithm

Given: m examples (x1, y1), ..., (xm, ym) where xi ∈ X, yi ∈ Y = {-1, +1}

Initialize D1(i) = 1/m

For t = 1 to T:

1. Train learner ht with minimum error εt = Pr_{i ~ Dt}[ ht(xi) ≠ yi ]
   (The goodness of ht is calculated over Dt and the bad guesses.)

2. Compute the hypothesis weight αt = (1/2) ln( (1 - εt) / εt )
   (The weight adapts: the bigger εt becomes, the smaller αt becomes.)

3. For each example i = 1 to m:
   Dt+1(i) = Dt(i) e^(-αt) / Zt   if ht(xi) = yi
   Dt+1(i) = Dt(i) e^(+αt) / Zt   if ht(xi) ≠ yi
   (Zt is a normalization factor; an example is boosted if incorrectly predicted.)

Output the linear combination of models: H(x) = sign( Σ_{t=1..T} αt ht(x) )
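The algorithm above can be sketched directly for a fixed pool of weak learners (a minimal sketch with assumed names, not Viola-Jones' actual code):

```python
import numpy as np

def adaboost(X, y, weak_learners, T):
    """AdaBoost as on the slide: D1(i) = 1/m; each round pick the weak
    learner with the lowest weighted error eps_t, weight it by
    alpha_t = 0.5 * ln((1 - eps_t) / eps_t), and boost the weights of
    incorrectly predicted examples (dividing by Z_t to renormalize)."""
    m = len(y)
    D = np.full(m, 1.0 / m)
    ensemble = []
    for _ in range(T):
        errs = [np.sum(D[h(X) != y]) for h in weak_learners]
        best = int(np.argmin(errs))
        eps = max(errs[best], 1e-12)          # guard against log(0)
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        h = weak_learners[best]
        D = D * np.exp(-alpha * y * h(X))     # boost if incorrectly predicted
        D = D / D.sum()                       # Z_t normalization
        ensemble.append((alpha, h))
    return lambda X: np.sign(sum(a * h(X) for a, h in ensemble))
```

With threshold stumps as the weak learners, a few rounds suffice on separable toy data like the slides' example.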

Page 22:

Boosting with Rectangle Features

• For each round of boosting:
  – Evaluate each rectangle filter on each example (compute g(x))
  – Sort examples by filter values
  – Select the best threshold (θ) for each filter (the one with lowest error)
  – Select the best filter/threshold combination from all candidate features (= feature f(x))
  – Compute the weight (α) and incorporate the feature into the strong classifier: F(x) ← F(x) + α f(x)
  – Reweight examples
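The "sort examples by filter values, then select the best threshold" step can be done with a single running-sum pass over the sorted responses. A sketch (my own helper, assuming the stump convention f(x) = +1 iff g(x) > θ and ignoring ties in g):

```python
import numpy as np

def best_threshold(g_vals, y, D):
    """Weighted-error-minimizing theta for the stump f(x) = +1 iff g(x) > theta.
    After sorting by g, sliding theta past each example flips exactly one
    prediction to -1, so every candidate threshold is scored in one pass."""
    order = np.argsort(g_vals)
    g, y, D = g_vals[order], y[order], D[order]
    err = D[y == -1].sum()        # theta below all g: everything predicted +1
    best_err, best_theta = err, g[0] - 1.0
    for i in range(len(g)):
        err += D[i] if y[i] == 1 else -D[i]   # example i now predicted -1
        if err < best_err:
            best_err, best_theta = err, g[i]
    return best_theta, best_err
```

Doing this for all 160,000 filters each round is what makes training expensive; evaluation stays cheap thanks to the integral image.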

Page 23:

Boosting

Boosting fits the additive model F(x) = Σt αt ft(x) by minimizing the exponential loss J = Σi exp(-yi F(xi)) over the training samples (xi, yi).

The exponential loss is a differentiable upper bound to the misclassification error.
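A quick numeric check of the upper-bound claim, using the standard loss definitions as functions of the margin yF(x):

```python
import math

def zero_one(margin):
    """Misclassification (0/1) error as a function of the margin y*F(x)."""
    return 1.0 if margin <= 0 else 0.0

def exp_loss(margin):
    """Exponential loss exp(-y*F(x)): smooth, and always >= the 0/1 loss."""
    return math.exp(-margin)
```

For margin <= 0, exp(-margin) >= 1 = the 0/1 loss; for margin > 0 it is positive while the 0/1 loss is 0, so the bound holds everywhere.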

Page 24:

Exponential loss

[Plot: loss vs. the margin yF(x), comparing the misclassification error, squared error, and exponential loss curves.]

Page 25:

Boosting

Sequential procedure: at each step we add a weak classifier (fit to the inputs and desired outputs, choosing its parameters) to minimize the residual loss.

For more details: Friedman, Hastie, Tibshirani. “Additive Logistic Regression: a Statistical View of Boosting” (1998)

Page 26:

Example Classifier for Face Detection

A classifier with 200 rectangle features was learned using AdaBoost.

95% correct detection on the test set with 1 in 14,084 false positives.

Not quite competitive...

[ROC curve for the 200-feature classifier]

Page 27:

Building Fast Classifiers

• Given a nested set of classifier hypothesis classes

• Computational Risk Minimization

• The trade-off of false positives vs. false negatives at each stage is determined by its threshold, read off the stage's ROC curve (% detection vs. % false positives).

[Diagram: a cascade of classifiers. Each IMAGE SUB-WINDOW is passed to Classifier 1; a T (pass) result forwards it to Classifier 2, then Classifier 3, while an F (fail) result at any stage rejects it immediately as NON-FACE.]

Page 28:

Cascaded Classifier

[Diagram: IMAGE SUB-WINDOW → 1-feature classifier → 5-feature classifier → 20-feature classifier → FACE; an F at any stage rejects the window as NON-FACE. The stages pass 50%, 20% (cumulative), and 2% (cumulative) of windows.]

• A 1-feature classifier achieves a 100% detection rate and about a 50% false positive rate.

• A 5-feature classifier achieves a 100% detection rate and a 40% false positive rate (20% cumulative), using data from the previous stage.

• A 20-feature classifier achieves a 100% detection rate with a 10% false positive rate (2% cumulative).
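The cascade evaluation itself is just an early-exit loop. A sketch, assuming each stage is a (strong classifier, threshold) pair of my own naming:

```python
def cascade_classify(window, stages):
    """Run a window through cascade stages in order; each stage is a
    (strong_classifier, threshold) pair. The first failed stage rejects
    the window as NON-FACE, so the cheap early stages discard most of
    the thousands of false windows before the expensive stages run."""
    for F, threshold in stages:
        if F(window) < threshold:
            return False          # NON-FACE: reject immediately
    return True                   # FACE: passed every stage
```

Average cost per window is dominated by the first stage, which is why a 1-feature first stage makes the whole detector fast.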

Page 29:

Output of Face Detector on Test Images

Page 30:

Solving other “Face” Tasks

Facial Feature Localization

Demographic Analysis

Profile Detection