Lecture 27: Recognition Basics
CS4670/5670: Computer Vision
Kavita Bala
Slides from Andrej Karpathy and Fei-Fei Li
http://vision.stanford.edu/teaching/cs231n/



Announcements
- PA 3 artifact voting: vote by Tuesday night

Today
- Image classification pipeline
- Training, validation, testing
- Score function and loss function

- Building up to CNNs for learning: 5-6 lectures on deep learning

Image Classification

Image Classification: Problem

Data-driven approach
- Collect a database of images with labels
- Use ML to train an image classifier
- Evaluate the classifier on test images

Train and Test
- Split the dataset between training images and test images

- Be careful about inflation of results

Classifiers
- Nearest neighbor
- kNN

- SVM

Nearest Neighbor Classifier
- Train: remember all training images and their labels

- Predict: find the closest (most similar) training image and report its label as the prediction
- How do we find the most similar training image? What is the distance metric?
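As a sketch, nearest-neighbor prediction with an L1 distance might look like this (function and variable names are ours, not from the slides; L2 is the other common choice):

```python
import numpy as np

def nn_predict(X_train, y_train, x_test):
    """Predict the label of x_test as the label of the closest training image.

    Images are assumed to be flattened into 1-D float vectors.
    Uses L1 (sum of absolute differences) as the distance metric.
    """
    # L1 distance from x_test to every training image (broadcasting over rows)
    dists = np.sum(np.abs(X_train - x_test), axis=1)
    # The prediction is the label of the single closest training image
    return y_train[np.argmin(dists)]
```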

Choice of distance metric: a hyperparameter

k-Nearest Neighbor

- Find the k closest points from the training data
- Labels of the k points vote to classify

How to pick hyperparameters?

Methodology
- Train and test
- Train, validate, test
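The kNN voting step above can be sketched as follows (names are ours, not from the slides; L2 distance assumed):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    """Classify x_test by majority vote among its k nearest training points."""
    # L2 (Euclidean) distance to every training point
    dists = np.sqrt(np.sum((X_train - x_test) ** 2, axis=1))
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Their labels vote; the most common label wins
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```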

- Train the original model
- Validate to find hyperparameters
- Test to understand generalizability

Validation

Cross-validation
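Cross-validation for picking k can be sketched like this; the `predict(X_tr, y_tr, x, k)` callback signature is hypothetical, not from the lecture. The point is that hyperparameters are chosen on validation folds only, never on the test set:

```python
import numpy as np

def cross_validate_k(X, y, candidate_ks, num_folds, predict):
    """Return the k with the best mean validation accuracy across folds."""
    X_folds = np.array_split(X, num_folds)
    y_folds = np.array_split(y, num_folds)
    mean_acc = {}
    for k in candidate_ks:
        fold_accs = []
        for i in range(num_folds):
            # Fold i is the validation set; the remaining folds form the training set
            X_tr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
            y_tr = np.concatenate(y_folds[:i] + y_folds[i + 1:])
            preds = np.array([predict(X_tr, y_tr, x, k) for x in X_folds[i]])
            fold_accs.append(np.mean(preds == y_folds[i]))
        mean_acc[k] = np.mean(fold_accs)
    # Pick the candidate with the highest mean validation accuracy
    return max(mean_acc, key=mean_acc.get)
```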

CIFAR-10 and NN results


Visualization: L2 distance

- Background plays an unrealistically large role

Complexity and Storage
- N training images, M testing images

- Training: O(1)
- Testing: O(MN)

- Hmm... normally we need the opposite: slow training (OK), fast testing (necessary)

Summary
- Data-driven: train, validate, test
- Need labeled data

- Classifier: nearest neighbor, kNN (approximate NN, ANN)
- Score function

Linear Classifier


Computing scores
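The linear score function f(x) = Wx + b can be illustrated with toy numbers (all values below are made up for illustration; for CIFAR-10 the shapes would be W: 10x3072, b: 10, x: 3072):

```python
import numpy as np

def scores(W, b, x):
    """Linear classifier score function f(x) = W x + b: one score per class."""
    return W.dot(x) + b

# Toy example: 3 classes, 2-D inputs (numbers are ours, not from the slides)
W = np.array([[0.2, -0.5],
              [1.5,  1.3],
              [0.0,  0.25]])
b = np.array([0.0, 2.1, -1.2])
x = np.array([1.0, 2.0])

s = scores(W, b, x)       # array of 3 class scores
pred = int(np.argmax(s))  # predicted class = index of the highest score
```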

- This is a bad cat score

Geometric Interpretation

Interpretation: Template matching

- Each row of W is a template for that class

Linear classifiers
- Find a linear function (hyperplane) to separate positive and negative examples

- Which hyperplane is best?

Support vector machines
- Find the hyperplane that maximizes the margin between the positive and negative examples
  (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)

- Margin and support vectors
- Distance between a point x_i and the hyperplane: |w . x_i + b| / ||w||
- For support vectors, y_i (w . x_i + b) = 1, so this distance is 1 / ||w||
- Therefore, the margin is 2 / ||w||

- This is a generalization of binary SVMs to multiple classes

Bias Trick
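A minimal sketch of the bias trick, assuming the usual formulation of folding b into an extra column of W so that Wx + b becomes a single matrix product (toy numbers are ours, not from the slides):

```python
import numpy as np

# Toy linear classifier: 2 classes, 2-D inputs
W = np.array([[0.2, -0.5],
              [1.5,  1.3]])
b = np.array([0.1, -0.3])
x = np.array([2.0, 1.0])

# Bias trick: b becomes the last column of W, and x gets a trailing constant 1
W_ext = np.hstack([W, b[:, None]])
x_ext = np.append(x, 1.0)

# Both formulations give identical scores
assert np.allclose(W.dot(x) + b, W_ext.dot(x_ext))
```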