Classification
Tamara Berg, CSE 595 Words & Pictures
HW2
• Online after class – Due Oct 10, 11:59pm
• Use web text descriptions as proxy for class labels.
• Train color attribute classifiers on web shopping images.
• Classify test images as to whether they display attributes.
Topic Presentations
• First group starts on Tuesday
• Audience – please read papers!
Example: Image classification
[Figure: input images mapped to desired output class labels – apple, pear, tomato, cow, dog, horse]
Slide credit: Svetlana Lazebnik
Slide from Dan Klein, http://yann.lecun.com/exdb/mnist/index.html
Example: Seismic data
[Figure: scatter plot of surface wave magnitude vs. body wave magnitude, with nuclear explosions separable from earthquakes]
Slide credit: Svetlana Lazebnik
Slide from Dan Klein
The basic classification framework
y = f(x)
(x: input, f: classification function, y: output)
• Learning: given a training set of labeled examples {(x1, y1), …, (xN, yN)}, estimate the parameters of the prediction function f
• Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x)
Slide credit: Svetlana Lazebnik
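As a concrete illustration of this learning/inference split, here is a minimal Python sketch (not from the slides): learning estimates the parameters of f from labeled pairs, and inference applies f to new inputs. The nearest-class-mean classifier used here is just an arbitrary example of f.

```python
# Minimal sketch of the y = f(x) framework: "learning" estimates the
# parameters of f from labeled examples, "inference" applies f to new x.
import numpy as np

class NearestMeanClassifier:
    def fit(self, X, y):                       # learning
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):                      # inference
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y_train = np.array(["apple", "apple", "pear", "pear"])
f = NearestMeanClassifier().fit(X_train, y_train)
print(f.predict(np.array([[0.1, 0.0], [1.1, 0.9]])))   # -> ['apple' 'pear']
```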
Some classification methods
10^6 examples
• Nearest neighbor: Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …
• Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …
• Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; …
• Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …
Slide credit: Antonio Torralba
Example: Training and testing
• Key challenge: generalization to unseen examples
Training set (labels known) Test set (labels unknown)
Slide credit: Svetlana Lazebnik
Slide credit: Dan Klein
Slide from Min-Yen Kan
Classification by Nearest Neighbor
Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?
Slide from Min-Yen Kan
Classification by Nearest Neighbor
Classification by Nearest Neighbor
Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc)
Slide from Min-Yen Kan
Classification by kNN
Classify the test document as the majority class of the k documents “nearest” to the query document.
Slide from Min-Yen Kan
Classification by kNN
What are the features? What’s the training data? Testing data? Parameters?
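A hedged sketch of the kNN rule described above, assuming bag-of-words document vectors and cosine similarity as the “vector similarity”; the tiny vocabulary, example documents, and k value are illustrative only.

```python
# kNN document classification: classify the query as the majority class of
# the k training documents most similar to it (cosine similarity here).
import numpy as np
from collections import Counter

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def knn_classify(query_vec, doc_vecs, doc_labels, k=3):
    sims = [cosine(query_vec, d) for d in doc_vecs]
    top_k = np.argsort(sims)[-k:]                    # indices of the k nearest docs
    votes = Counter(doc_labels[i] for i in top_k)
    return votes.most_common(1)[0][0]                # majority class

docs = np.array([[2, 0, 1], [1, 0, 0], [0, 3, 1], [0, 2, 2]], dtype=float)
labels = ["sports", "sports", "politics", "politics"]
query = np.array([1.0, 0.0, 1.0])
print(knn_classify(query, docs, labels, k=3))        # -> 'sports'
```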
Decision tree classifier
Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Slide credit: Svetlana Lazebnik
Decision tree classifier
Slide credit: Svetlana Lazebnik
Decision tree classifier
Slide credit: Svetlana Lazebnik
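To make the decision-tree idea concrete, here is a hypothetical hand-written tree over the restaurant attributes, expressed as nested if/else tests. The particular splits are illustrative only and are not necessarily the tree induced on the slide's data.

```python
# A learned decision tree is just a sequence of attribute tests; one plausible
# (assumed) tree for the restaurant example might look like this.
def will_wait(x):
    if x["Patrons"] == "None":
        return False
    if x["Patrons"] == "Some":
        return True
    # Patrons == "Full": fall back to the estimated wait and other attributes
    if x["WaitEstimate"] == ">60":
        return False
    if x["WaitEstimate"] == "30-60":
        return x["Alternate"] == "No" or x["Raining"] == "Yes"
    if x["WaitEstimate"] == "10-30":
        return x["Hungry"] == "No" or x["Alternate"] == "No"
    return True    # WaitEstimate == "0-10"

print(will_wait({"Patrons": "Full", "WaitEstimate": "10-30",
                 "Hungry": "No", "Alternate": "Yes", "Raining": "No"}))  # True
```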
Linear classifier
• Find a linear function to separate the classes
f(x) = sgn(w1x1 + w2x2 + … + wDxD) = sgn(w · x)
Slide credit: Svetlana Lazebnik
Discriminant Function
• It can be an arbitrary function of x, such as: Nearest Neighbor, Decision Tree, Linear Functions
g(x) = wT x + b
Slide credit: Jinwei Gu
Linear Discriminant Function
• g(x) is a linear function:
g(x) = wT x + b
In the (x1, x2) feature space, the decision boundary wT x + b = 0 is a hyperplane: points with wT x + b > 0 lie on one side, points with wT x + b < 0 on the other.
Slide credit: Jinwei Gu
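A minimal sketch of classifying by the sign of the linear discriminant g(x) = wT x + b; the weight vector and bias values below are arbitrary examples, not values from the slides.

```python
# Linear discriminant: g(x) = w^T x + b; the sign of g(x) says which side of
# the hyperplane w^T x + b = 0 a point falls on, and hence its predicted class.
import numpy as np

w = np.array([2.0, -1.0])   # weight vector (example values)
b = -0.5                    # bias (example value)

def g(x):
    return w @ x + b

def classify(x):
    return 1 if g(x) > 0 else -1

print(classify(np.array([1.0, 0.0])), classify(np.array([0.0, 1.0])))  # 1 -1
```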
Linear Discriminant Function
• Points are labeled +1 or -1 and plotted in the (x1, x2) plane. How would you classify these points using a linear discriminant function in order to minimize the error rate?
• Infinite number of answers!
• Which one is the best?
Slide credit: Jinwei Gu
Large Margin Linear Classifier
• The linear discriminant function (classifier) with the maximum margin (the widest “safe zone”) is the best
• Margin is defined as the width that the boundary could be increased by before hitting a data point
• Why is it the best? Strong generalization ability
Linear SVM
Slide credit: Jinwei Gu
Large Margin Linear Classifier
[Figure in the (x1, x2) plane: decision boundary wT x + b = 0 with margin boundaries wT x + b = 1 and wT x + b = -1; the points x+, x+, x- lying on the margin boundaries are the Support Vectors]
Slide credit: Jinwei Gu
Solving the Optimization Problem
The linear discriminant function is:
g(x) = wT x + b = Σ_{i ∈ SV} αi yi xiT x + b
Notice it relies on a dot product between the test point x and the support vectors xi
Slide credit: Jinwei Gu
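A small sketch of evaluating this discriminant from the support vectors alone, using only dot products with the test point. The support vectors, multipliers αi, and bias b below are placeholder values standing in for an actual QP solution.

```python
# SVM discriminant from support vectors: g(x) = sum_i alpha_i * y_i * (x_i . x) + b
import numpy as np

support_vectors = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, -1.0]])
alphas = np.array([0.5, 0.3, 0.8])      # Lagrange multipliers (placeholders)
labels = np.array([+1, +1, -1])
b = -0.2                                # bias from the QP solution (placeholder)

def g(x):
    # only dot products between x and the support vectors are needed
    return np.sum(alphas * labels * (support_vectors @ x)) + b

x_test = np.array([0.5, 0.5])
print(g(x_test), np.sign(g(x_test)))
```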
Linear separability
Slide credit: Svetlana Lazebnik
Non-linear SVMs: Feature Space
General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)
This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
Nonlinear SVMs: The Kernel Trick
With this mapping, our discriminant function is now:
g(x) = wT φ(x) + b = Σ_{i ∈ SV} αi yi φ(xi)T φ(x) + b
No need to know this mapping explicitly, because we only use the dot product of feature vectors in both training and testing.
A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:
K(xi, xj) = φ(xi)T φ(xj)
Slide credit: Jinwei Gu
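A short sketch verifying this correspondence for the degree-2 polynomial kernel on 2-D inputs: the kernel value matches the dot product of an explicitly expanded feature map φ. The particular map used is one standard construction, shown here only as an illustration.

```python
# Kernel trick check: K(a, b) = (1 + a.b)^2 equals phi(a).phi(b) for an
# explicit degree-2 feature map, so we never need to compute phi explicitly.
import numpy as np

def phi(x):
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

def K(a, b):
    return (1.0 + a @ b) ** 2      # kernel computed without mapping explicitly

a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(K(a, b), phi(a) @ phi(b))    # both are (up to floating point) 4.0
```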
Nonlinear SVMs: The Kernel Trick
Examples of commonly-used kernel functions:
• Linear kernel: K(xi, xj) = xiT xj
• Polynomial kernel: K(xi, xj) = (1 + xiT xj)^p
• Gaussian (Radial-Basis Function (RBF)) kernel: K(xi, xj) = exp(-||xi - xj||² / (2σ²))
• Sigmoid: K(xi, xj) = tanh(β0 xiT xj + β1)
Slide credit: Jinwei Gu
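The same kernels written out as small Python functions (a sketch; the parameter names p, sigma, beta0, beta1 mirror the slide's notation).

```python
# Commonly-used kernel functions, each a similarity between two vectors.
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=2):
    return (1.0 + xi @ xj) ** p

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)
```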
Support Vector Machine: Algorithm
• 1. Choose a kernel function
• 2. Choose a value for C
• 3. Solve the quadratic programming problem (many software packages available)
• 4. Construct the discriminant function from the support vectors
Slide credit: Jinwei Gu
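One possible concrete realization of these four steps uses scikit-learn's SVC, which solves the quadratic program internally; the toy data, kernel choice, and C value below are arbitrary examples.

```python
# Steps 1-4 of the SVM recipe with an off-the-shelf QP solver (scikit-learn).
from sklearn.svm import SVC
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [2, 3], [3, 2]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="rbf", C=1.0, gamma=0.5)   # steps 1-2: choose kernel and C
clf.fit(X, y)                               # step 3: solve the QP
print(clf.support_vectors_)                 # step 4: discriminant built from the SVs
print(clf.predict([[0.5, 0.5], [2.5, 2.5]]))
```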
Some Issues
• Choice of kernel
  - Gaussian or polynomial kernel is default
  - if ineffective, more elaborate kernels are needed
  - domain experts can give assistance in formulating appropriate similarity measures
• Choice of kernel parameters
  - e.g. σ in Gaussian kernel
  - σ is the distance between closest points with different classifications
  - In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters.
This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt Slide credit: Jinwei Gu
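A hedged sketch of setting such parameters by cross-validation with scikit-learn's GridSearchCV. Note that scikit-learn parameterizes the Gaussian kernel by gamma = 1/(2σ²); the grid values and synthetic data here are arbitrary examples.

```python
# Pick C and the Gaussian-kernel width by 5-fold cross-validation over a grid.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(3.0, 1.0, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```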
Summary: Support Vector Machine
• 1. Large Margin Classifier
  – Better generalization ability & less over-fitting
• 2. The Kernel Trick
  – Map data points to higher dimensional space in order to make them linearly separable.
  – Since only the dot product is used, we do not need to represent the mapping explicitly.
Slide credit: Jinwei Gu
Boosting
• A simple algorithm for learning robust classifiers
  – Freund & Schapire, 1995
  – Friedman, Hastie, Tibshirani, 1998
• Provides efficient algorithm for sparse visual feature selection
  – Tieu & Viola, 2000
  – Viola & Jones, 2003
• Easy to implement, doesn’t require external optimization tools.
Slide credit: Antonio Torralba
Boosting
• Defines a classifier using an additive model:
F(x) = α1 f1(x) + α2 f2(x) + α3 f3(x) + …
(F: strong classifier; ft: weak classifiers; αt: weights; x: features vector)
Slide credit: Antonio Torralba
Boosting
• Defines a classifier using an additive model:
F(x) = α1 f1(x) + α2 f2(x) + α3 f3(x) + …
(F: strong classifier; each ft is a weak classifier drawn from a family of weak classifiers; αt: weights; x: features vector)
• We need to define a family of weak classifiers
Slide credit: Antonio Torralba
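A toy sketch of this additive model: the strong classifier is the sign of a weighted sum of weak classifiers. The decision stumps and weights below are made-up values, used only to show the structure.

```python
# Strong classifier F(x) = sign( sum_t alpha_t * f_t(x) ) built from weak classifiers.
import numpy as np

def make_stump(feature, threshold, sign=1):
    # a weak classifier: threshold one feature, predict +1 or -1
    return lambda x: sign * (1 if x[feature] > threshold else -1)

weak_classifiers = [make_stump(0, 0.5), make_stump(1, 0.0, sign=-1), make_stump(0, 1.5)]
alphas = [0.7, 0.4, 0.2]               # weights (toy values)

def strong_classifier(x):
    return np.sign(sum(a * f(x) for a, f in zip(alphas, weak_classifiers)))

print(strong_classifier(np.array([1.0, -0.3])))   # -> 1.0
```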
Adaboost
Slide credit: Antonio Torralba
Boosting
• It is a sequential procedure.
• Each data point xt has a class label yt ∈ {+1, -1} and a weight wt = 1.
Slide credit: Antonio Torralba
Toy example
• Weak learners from the family of lines
• h => p(error) = 0.5: it is at chance
• Each data point has a class label yt ∈ {+1, -1} and a weight wt = 1.
Slide credit: Antonio Torralba
Toy example
This one seems to be the best
This is a ‘weak classifier’: it performs slightly better than chance.
Slide credit: Antonio Torralba
Toy example
• We set a new problem for which the previous weak classifier performs at chance again
• We update the weights: wt ← wt exp{-yt Ht}
(this reweighting step is repeated at each round)
Slide credit: Antonio Torralba
Toy example
The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f1, f2, f3, f4.
Slide credit: Antonio Torralba
Adaboost
Slide credit: Antonio Torralba
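Putting the pieces together, here is a compact, hedged sketch of the sequential procedure illustrated by the toy example: at each round pick the best weak classifier under the current weights, give it a weight αt, and re-weight the data so that it performs at chance on the new problem. Decision stumps are assumed as the family of weak classifiers, and the update wt ← wt·exp(-αt yt ht(xt)) is the explicit form of the reweighting shown on the slides.

```python
# AdaBoost sketch with decision stumps as weak classifiers.
import numpy as np

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.ones(n) / n                                   # initial weights wt = 1 (normalized)
    ensemble = []                                        # list of (alpha, feature, thresh, sign)
    for _ in range(rounds):
        best = None
        # search stumps h(x) = sign * (+1 if x[f] > thresh else -1) for lowest weighted error
        for f in range(X.shape[1]):
            for thresh in np.unique(X[:, f]):
                for sign in (+1, -1):
                    pred = sign * np.where(X[:, f] > thresh, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thresh, sign, pred)
        err, f, thresh, sign, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)            # weight of this weak classifier
        w = w * np.exp(-alpha * y * pred)                # wt <- wt * exp(-alpha_t yt ht(xt))
        w /= w.sum()                                     # renormalize
        ensemble.append((alpha, f, thresh, sign))
    return ensemble

def predict(ensemble, x):
    # strong classifier: sign of the weighted vote of the weak classifiers
    score = sum(a * s * (1 if x[f] > t else -1) for a, f, t, s in ensemble)
    return np.sign(score)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost(X, y, rounds=5)
print([int(predict(model, x)) for x in X])               # -> [-1, -1, 1, 1]
```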