Transcript of Advanced Machine Learning & Perception (notes1.pdf)

Page 1:

Tony Jebara, Columbia University

Advanced Machine Learning & Perception

Instructor: Tony Jebara

Page 2:

Topic 1
• Introduction, research-oriented course, latest papers

• Going beyond simple machine learning

• Perception, strange spaces, images, time, behavior

• Info, policies, texts, web page

• Syllabus, Overview, Review

• Gaussian Distributions

• Representation & Appearance Based Methods

• Least Squares, Correlation, Gaussians, Bases

• Principal Components Analysis and its Shortcomings

Page 3:

About me • Tony Jebara, Associate Professor in Computer Science

• Started at Columbia in 2002

• PhD from MIT in Machine Learning • Thesis: Discriminative, Generative and Imitative Learning (2001)

• Research Areas: (Columbia Machine Learning Lab, CEPSR 6LE5) • www.cs.columbia.edu/learning

• Machine Learning

• Some Computer Vision

Page 4:

Course Web Page http://www.cs.columbia.edu/~jebara/4772

http://www.cs.columbia.edu/~jebara/6772

Some material & announcements will be online

But, many things will be handed out in class such as photocopies of papers for readings, etc.

Check NEWS link to see deadlines, homework, etc.

Available online, see TA info, etc.

Follow the policies, homework, deadlines, readings closely please!

Page 5:

Syllabus

Week 1: Introduction, Review of Basic Concepts, Representation Issues, Vector and Appearance-Based Models, Correlation and Least Squared Error Methods, Bases, Eigenspace Recognition, Principal Components Analysis

Week 2: Nonlinear Dimensionality Reduction, Manifolds, Kernel PCA, Locally Linear Embedding, Maximum Variance Unfolding, Minimum Volume Embedding

Week 3: Maximum Entropy, Exponential Families, Maximum Entropy Discrimination, Large Margin Probability Models

Week 4: Conditional Random Fields and Linear Models, Iterative Scaling and Majorization

Week 5: Graphical Models, Multi-Class Support Vector Machines, Structured Support Vector Machines, Cutting Plane Algorithms

Week 6: Kernels and Probabilistic Kernels

Page 6:

Syllabus

Week 7: Feature Selection and Kernel Selection, Support Vector Machine Extensions

Week 8: Meta-Learning and Multi-Task Support Vector Machines

Week 9: Semi-Supervised Learning and Graph-Based Semi-Supervised Learning

Week 10: High-Tree Width Graphical Models, Approximate Inference, Graph Structure Learning

Week 11: Clustering, Spectral Clustering, Normalized Cuts.

Week 12: Boosting, Mixtures of Experts, AdaBoost, Online Learning

Week 13: Project Presentations

Week 14: Project Presentations

Page 7:

Beyond Canned Learning
• Latest research methods: high dimensions, nonlinearities, dynamics, manifolds, invariance, unlabeled data, feature selection
• Modeling Images / People / Activity / Time Series / MoCAP

Page 8:

Representation & Vectorization
• How to represent our data? Images, time series, genes…
• Vectorization: the poor man’s representation
• Almost anything can be written as a long vector
• E.g. an image is read lexicographically and the RGB values of each pixel are appended to a vector

• Or, a gene sequence can be written as a binary vector

GATACAC = [0100 0001 1000 0001 0010 0001 0010]

• For images, this is called an “Appearance-Based” representation
• But we lose many important properties this way
• We will fix this in later lectures
• For now, it is an easy way to proceed (see the code sketch below)

[Figure: an example image unrolled into a long column vector of pixel values]
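To make the vectorization idea concrete, here is a minimal NumPy sketch; the 64×64 image size is an arbitrary stand-in, and the 4-bit letter codes are chosen to match the GATACAC example above.

```python
import numpy as np

# Hypothetical 64x64 RGB image as a stand-in for real data.
image = np.random.rand(64, 64, 3)
x_image = image.reshape(-1)            # lexicographic scan: one long vector, 64*64*3 = 12288 entries

# One-hot (binary) encoding of a gene sequence, one 4-bit code per letter,
# chosen to match the GATACAC example above.
CODES = {'A': [0, 0, 0, 1], 'C': [0, 0, 1, 0],
         'G': [0, 1, 0, 0], 'T': [1, 0, 0, 0]}
x_gene = np.array([bit for ch in "GATACAC" for bit in CODES[ch]])

print(x_image.shape)                   # (12288,)
print(x_gene)                          # [0 1 0 0  0 0 0 1  1 0 0 0 ...]
```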

Page 9:

Least Squares Detection
• How to find a face in an image given a template?
• Template Matching and Sum-of-Squared-Differences (SSD)
• Naïve: try all positions/scales, find the least-squares distance

• Correlation and Normalized Correlation: could normalize the length of all vectors (fixes lighting)

$$i^* = \arg\min_{i \in [1,T]} \tfrac{1}{2}\,\|\mu - x_i\|^2 = \arg\min_{i \in [1,T]} \tfrac{1}{2}\,(\mu - x_i)^T (\mu - x_i)$$

$$i^* = \arg\min_{i \in [1,T]} \tfrac{1}{2}\left(\mu^T\mu - 2\mu^T x_i + x_i^T x_i\right) = \arg\max_{i \in [1,T]} \mu^T x_i$$

For normalized correlation, use unit-length vectors, e.g. $\hat{\mu} = \mu / \|\mu\|$.
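A minimal sketch of the naïve search described above, assuming the candidate patches have already been cut out of the search image; the function names and the toy 8×8 patches are illustrative, not from the notes.

```python
import numpy as np

def ssd_match(template, candidates):
    """Index of the candidate patch closest to the template in sum-of-squared-differences."""
    mu = template.reshape(-1)
    X = candidates.reshape(len(candidates), -1)      # T candidate patches as rows
    ssd = 0.5 * np.sum((X - mu) ** 2, axis=1)        # (1/2)||mu - x_i||^2
    return int(np.argmin(ssd))

def normalized_correlation_match(template, candidates):
    """Same search with unit-length vectors (normalized correlation)."""
    mu = template.reshape(-1)
    mu = mu / np.linalg.norm(mu)
    X = candidates.reshape(len(candidates), -1)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return int(np.argmax(X @ mu))                    # argmax over i of mu^T x_i

# Toy usage: five random 8x8 "patches", one of which equals the template.
rng = np.random.default_rng(0)
patches = rng.random((5, 8, 8))
template = patches[3].copy()
print(ssd_match(template, patches), normalized_correlation_match(template, patches))  # 3 3
```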

Page 10:

Least Squares as Gaussian Model • Minimum squared error is equivalent to maximum likelihood under a Gaussian

• Can now treat it as a probabilistic problem
• Trying to find the most likely position (or sub-image) in the search image given the Gaussian model of the template
• Define the objective as the log of the likelihood, $\log p(x \mid \theta)$
• For a Gaussian, the probability (likelihood) is:

$$p(x_i \mid \theta) = \frac{1}{(2\pi)^{D/2}} \exp\!\left( -\tfrac{1}{2}\|\mu - x_i\|^2 \right)$$

so that

$$i^* = \arg\min_{i \in [1,T]} \tfrac{1}{2}\|\mu - x_i\|^2 = \arg\max_{i \in [1,T]} \log\!\left( \frac{1}{(2\pi)^{D/2}} \exp\!\left( -\tfrac{1}{2}\|\mu - x_i\|^2 \right) \right)$$
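A quick numerical check of this equivalence on toy vectors (identity covariance assumed, as above):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.random(16)                     # template vector
X = rng.random((10, 16))                # 10 candidate vectors
D = X.shape[1]

ssd = 0.5 * np.sum((X - mu) ** 2, axis=1)
loglik = -0.5 * D * np.log(2 * np.pi) - 0.5 * np.sum((X - mu) ** 2, axis=1)

# Minimizing SSD and maximizing the Gaussian log-likelihood pick the same candidate.
assert np.argmin(ssd) == np.argmax(loglik)
```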

Page 11:

Multivariate Gaussian
• Gaussians extend to D dimensions and have 2 parameters:
• Mean vector µ (translates)
• Covariance matrix Σ (stretches and rotates)
• Maximum (mode) and expectation are both µ
• Mean is any real vector; the variance is now a matrix Σ
• Covariance matrix is positive semi-definite
• Covariance matrix is symmetric
• Need matrix inverse (inv)
• Need matrix determinant (det)
• Need matrix trace operator (trace)

$$p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{D/2}\,|\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu) \right), \qquad x, \mu \in \mathbb{R}^D,\ \Sigma \in \mathbb{R}^{D \times D}$$
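A minimal sketch for evaluating the log of this density (generic NumPy, using a linear solve rather than an explicit inverse; the 3-D toy numbers are arbitrary):

```python
import numpy as np

def gaussian_logpdf(x, mu, Sigma):
    """Log of the multivariate Gaussian density defined above."""
    D = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)          # log|Sigma|, numerically stable
    quad = diff @ np.linalg.solve(Sigma, diff)    # (x - mu)^T Sigma^{-1} (x - mu), no explicit inverse
    return -0.5 * (D * np.log(2 * np.pi) + logdet + quad)

# Toy 3-dimensional example.
mu = np.zeros(3)
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])
print(gaussian_logpdf(np.array([0.5, -0.2, 0.1]), mu, Sigma))
```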

Page 12:

Max Likelihood Gaussian
• How do we make the face detector work on all faces?
• Why use a single template? How can we use many templates?
• Have IID samples of template vectors, i = 1..N:

• Represent the IID samples with parameters as a network:

• Let us get a good Gaussian from these many templates.
• Standard approach: Maximum Likelihood

(Graphical model with a plate over $i = 1 \ldots N$; more efficiently drawn using a replicator plate.)

$$x_1, x_2, x_3, \ldots, x_N$$

$$\sum_{i=1}^N \log p(x_i \mid \mu, \Sigma) = \sum_{i=1}^N \log \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu) \right)$$

Page 13:

Max Likelihood Gaussian

• Max over µ (see Jordan Ch. 12): get the sample mean…

$$\frac{\partial}{\partial \mu} \sum_{i=1}^N \log \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu) \right) = 0$$

$$\frac{\partial}{\partial \mu} \sum_{i=1}^N \left[ -\tfrac{D}{2}\log 2\pi - \tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu) \right] = 0$$

$$\sum_{i=1}^N (x_i - \mu)^T \Sigma^{-1} = 0 \quad\Rightarrow\quad \mu = \frac{1}{N}\sum_{i=1}^N x_i$$

using $\dfrac{\partial\, x^T x}{\partial x} = 2x^T$.

• For Σ we need the trace operator and several of its properties:

$$\mathrm{tr}(A) = \mathrm{tr}(A^T) = \sum_{d=1}^D A_{dd}, \qquad \mathrm{tr}(AB) = \mathrm{tr}(BA), \qquad \mathrm{tr}(BAB^{-1}) = \mathrm{tr}(A), \qquad \mathrm{tr}(xx^T A) = \mathrm{tr}(x^T A x) = x^T A x$$
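A quick numerical sanity check of these trace identities (random 4×4 matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((4, 4))
B = rng.random((4, 4))
x = rng.random(4)

# tr(A) = tr(A^T)
assert np.isclose(np.trace(A), np.trace(A.T))
# tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# tr(B A B^{-1}) = tr(A)
assert np.isclose(np.trace(B @ A @ np.linalg.inv(B)), np.trace(A))
# tr(x x^T A) = x^T A x
assert np.isclose(np.trace(np.outer(x, x) @ A), x @ A @ x)
```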

Page 14:

Max Likelihood Gaussian • Likelihood rewritten in trace notation:

• Max over $A = \Sigma^{-1}$ using the properties:

• Get sample covariance:

$$\begin{aligned}
l &= \sum_{i=1}^N \left[ -\tfrac{D}{2}\log 2\pi - \tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu) \right]\\
  &= -\tfrac{ND}{2}\log 2\pi + \tfrac{N}{2}\log|\Sigma^{-1}| - \tfrac{1}{2}\sum_{i=1}^N \mathrm{tr}\!\left[ (x_i - \mu)^T \Sigma^{-1} (x_i - \mu) \right]\\
  &= -\tfrac{ND}{2}\log 2\pi + \tfrac{N}{2}\log|\Sigma^{-1}| - \tfrac{1}{2}\sum_{i=1}^N \mathrm{tr}\!\left[ (x_i - \mu)(x_i - \mu)^T \Sigma^{-1} \right]\\
  &= -\tfrac{ND}{2}\log 2\pi + \tfrac{N}{2}\log|A| - \tfrac{1}{2}\sum_{i=1}^N \mathrm{tr}\!\left[ (x_i - \mu)(x_i - \mu)^T A \right]
\end{aligned}$$

Using $\dfrac{\partial \log|A|}{\partial A} = (A^{-1})^T$ and $\dfrac{\partial\, \mathrm{tr}[BA]}{\partial A} = B^T$:

$$\frac{\partial l}{\partial A} = \frac{N}{2}(A^{-1})^T - \frac{1}{2}\sum_{i=1}^N \left[ (x_i - \mu)(x_i - \mu)^T \right]^T = \frac{N}{2}\Sigma - \frac{1}{2}\sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T$$

$$\frac{\partial l}{\partial A} = 0 \quad\Rightarrow\quad \Sigma = \frac{1}{N}\sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T$$
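A minimal sketch of these maximum-likelihood estimates in NumPy (the comparison against np.cov with bias=True is just a sanity check):

```python
import numpy as np

def fit_gaussian_ml(X):
    """Maximum-likelihood mean and covariance for the rows of X (N x D)."""
    mu = X.mean(axis=0)                        # sample mean
    diff = X - mu
    Sigma = (diff.T @ diff) / len(X)           # (1/N) sum_i (x_i - mu)(x_i - mu)^T
    return mu, Sigma

# Sanity check against NumPy's biased (1/N) covariance estimator.
X = np.random.default_rng(3).normal(size=(500, 4))
mu, Sigma = fit_gaussian_ml(X)
assert np.allclose(Sigma, np.cov(X, rowvar=False, bias=True))
```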

Page 15:

Principal Components Analysis
• Problem: for high-dimensional data, D is large
• Storing $\Sigma$, computing $\Sigma^{-1}$ and determining $|\Sigma|$ are expensive!
• Idea: limit the Gaussian model to directions of high variance
• Use Principal Components Analysis to mimic Σ

Page 16:

Principal Components Analysis
• PCA approximates each datapoint as a mean vector plus steps along eigenvector directions, e.g. $c_i$ steps along $v$:

$$\begin{bmatrix} x_i(1) \\ x_i(2) \end{bmatrix} \approx \begin{bmatrix} \mu(1) \\ \mu(2) \end{bmatrix} + c_i \begin{bmatrix} v(1) \\ v(2) \end{bmatrix}$$

• More generally, PCA uses a set of M eigenvectors (where M ≪ D):

$$x_i \approx \hat{x}_i = \mu + \sum_{j=1}^M c_{ij} v_j$$

• PCA selects $\{\mu, c_{ij}, v_j\}$ to minimize $\sum_{i=1}^N \|x_i - \hat{x}_i\|^2$
• The optimal directions are eigenvectors of the covariance
• Which directions to keep: those with the highest eigenvalues (variances)

Page 17:

Principal Components Analysis
• Use eigenvectors, mean & coefficients to approximate the data:

$$x_i \approx \mu + \sum_{j=1}^M c_{ij} v_j$$

• PCA finds eigenvectors by decomposing the covariance matrix:

$$\Sigma = V \Lambda V^T = \sum_{i=1}^D \lambda_i v_i v_i^T$$

e.g. for D = 3:

$$\begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T$$

• Eigenvectors are orthonormal: $v_i^T v_j = \delta_{ij}$
• Eigenvalues are non-negative and non-increasing: $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_D \ge 0$
• PCA selects the M eigenvectors with the largest eigenvalues
• Truncating gives an approximate covariance: $\tilde{\Sigma} = \sum_{i=1}^M \lambda_i v_i v_i^T$
• PCA finds coefficients by: $c_{ij} = (x_i - \mu)^T v_j$
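A compact sketch of this recipe in NumPy (eigendecomposition of the sample covariance, keep the top M, project, reconstruct); the toy data and function names are illustrative.

```python
import numpy as np

def pca_fit(X, M):
    """PCA via eigendecomposition of the sample covariance (rows of X are datapoints)."""
    mu = X.mean(axis=0)
    diff = X - mu
    Sigma = (diff.T @ diff) / len(X)
    lam, V = np.linalg.eigh(Sigma)             # eigenvalues in ascending order
    order = np.argsort(lam)[::-1][:M]          # keep the M largest
    return mu, lam[order], V[:, order]

def pca_project(X, mu, V):
    """Coefficients c_ij = (x_i - mu)^T v_j."""
    return (X - mu) @ V

def pca_reconstruct(C, mu, V):
    """Reconstruction x_hat_i = mu + sum_j c_ij v_j."""
    return mu + C @ V.T

# Toy usage: 200 points in D = 10 that mostly live in a 2-D subspace.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(200, 10))
mu, lam, V = pca_fit(X, M=2)
X_hat = pca_reconstruct(pca_project(X, mu, V), mu, V)
print(np.mean((X - X_hat) ** 2))               # small reconstruction error
```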

Page 18:

PCA via the Snapshot Method
• Careful… how big is the covariance matrix?
• Assume 1000 images, each containing D = 20,000 pixels
• Σ is D×D, that’s too big to store!
• Also, finding the eigenvectors or inverting a D×D matrix requires O(D³)!

• First compute mean of all data (easy) and subtract it from each point

• Instead of computing

$$\Sigma = \frac{1}{N}\sum_{i=1}^N x_i x_i^T$$

compute the N×N Gram matrix $\Phi$ where $\Phi_{i,j} = x_i^T x_j$.

• Then find the eigendecomposition of the Gram matrix: $\Phi = \tilde{V} \tilde{\Lambda} \tilde{V}^T$

• The eigenvectors of $\Sigma$ are then: $v_i \propto \sum_{j=1}^N x_j\, \tilde{v}_i(j)$
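A minimal sketch of the snapshot trick (mean-subtracted rows, N ≪ D); the toy sizes and the consistency check against the direct covariance eigendecomposition are just for illustration.

```python
import numpy as np

def snapshot_pca(X, M):
    """PCA via the N x N Gram matrix when D >> N; X holds mean-subtracted rows."""
    N = len(X)
    Phi = X @ X.T                               # Gram matrix, Phi_ij = x_i^T x_j
    lam, V_tilde = np.linalg.eigh(Phi)          # ascending order
    order = np.argsort(lam)[::-1][:M]
    lam, V_tilde = lam[order], V_tilde[:, order]
    V = X.T @ V_tilde                           # v_i proportional to sum_j x_j * v_tilde_i(j)
    V /= np.linalg.norm(V, axis=0)              # rescale to unit length
    return lam / N, V                           # eigenvalues of (1/N) X^T X

# Consistency check against the direct (D x D) eigendecomposition on a small toy problem.
rng = np.random.default_rng(5)
X = rng.normal(size=(50, 500))                  # N = 50 "images", D = 500 "pixels"
X -= X.mean(axis=0)
lam_snap, V_snap = snapshot_pca(X, M=3)
lam_full = np.linalg.eigh(X.T @ X / len(X))[0]
assert np.allclose(lam_snap, lam_full[::-1][:3])
```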

Page 19:

PCA via the Snapshot Method

[Figure: the mean image $\mu$, the training images $\{x_1, \ldots, x_N\}$, and their reconstructions $\{\mu + \sum_j c_{1j} v_j, \ldots\}$]

Page 20:

Truncated Gaussian Detection

• Approximate Σ with PCA plus a spherical term: keep the M specific eigenvalues & eigenvectors, and replace the remaining eigenvalues with a single spherical value ρ:

$$\Sigma = \sum_{k=1}^D \lambda_k v_k v_k^T \approx \hat{\Sigma} = \sum_{k=1}^M \lambda_k v_k v_k^T + \sum_{k=M+1}^D \rho\, v_k v_k^T$$

$$\hat{\Sigma}^{-1} = \sum_{k=1}^M \frac{1}{\lambda_k} v_k v_k^T + \sum_{k=M+1}^D \frac{1}{\rho} v_k v_k^T = \sum_{k=1}^M \left( \frac{1}{\lambda_k} - \frac{1}{\rho} \right) v_k v_k^T + \frac{1}{\rho} I$$

$$|\hat{\Sigma}| = \prod_{k=1}^M \lambda_k \prod_{k=M+1}^D \rho$$

The spherical level ρ is the average of the discarded eigenvalues:

$$\rho = \frac{1}{D-M} \sum_{k=M+1}^D \lambda_k = \frac{1}{D-M} \left( \sum_{k=1}^D \lambda_k - \sum_{k=1}^M \lambda_k \right) = \frac{1}{D-M} \left( \sum_{k=1}^D \Sigma_{kk} - \sum_{k=1}^M \lambda_k \right)$$

$$\Sigma_{kk} = \frac{1}{N} \sum_{i=1}^N \left( x_i(k) - \mu(k) \right)^2$$

These make it cheap to evaluate the likelihood $p(x \mid \mu, \Sigma)$.
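A minimal sketch of evaluating the log-likelihood under this PCA-plus-spherical approximation; the function name, toy dimensions, and the choice M = 10 are assumptions for illustration.

```python
import numpy as np

def truncated_gaussian_logpdf(x, mu, lam, V, total_variance):
    """Log density under the PCA-plus-spherical covariance approximation above.

    lam, V: top-M eigenvalues (descending) and eigenvectors (D x M columns);
    total_variance: sum of all D eigenvalues, i.e. sum_k Sigma_kk.
    """
    D, M = len(mu), len(lam)
    rho = (total_variance - lam.sum()) / (D - M)      # average of the discarded eigenvalues
    diff = x - mu
    c = V.T @ diff                                    # projections onto the kept eigenvectors
    # (x-mu)^T Sigma_hat^{-1} (x-mu) = sum_k c_k^2/lam_k + (||x-mu||^2 - sum_k c_k^2)/rho
    maha = np.sum(c ** 2 / lam) + (diff @ diff - np.sum(c ** 2)) / rho
    # log|Sigma_hat| = sum_k log(lam_k) + (D - M) log(rho)
    logdet = np.sum(np.log(lam)) + (D - M) * np.log(rho)
    return -0.5 * (D * np.log(2 * np.pi) + logdet + maha)

# Toy usage: fit on random 50-dimensional data, keep M = 10 components.
rng = np.random.default_rng(6)
X = rng.normal(size=(300, 50))
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False, bias=True)
w, U = np.linalg.eigh(Sigma)
lam, V = w[::-1][:10], U[:, ::-1][:, :10]
print(truncated_gaussian_logpdf(X[0], mu, lam, V, total_variance=np.trace(Sigma)))
```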

Page 21:

Truncated Gaussian Detection • Instead of minimizing squared error, use Gaussian model

• Use Snapshot PCA to efficiently store the big covariance
• This maximum-likelihood Gaussian model achieved state-of-the-art face finding as evaluated by NIST/DARPA (Moghaddam et al., 2000)

• Top performer after 2000 is Viola-Jones (boosted cascade)

$$p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{D/2}\,|\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu) \right)$$

Page 22:

Gaussian Face Recognition
• Instead of modeling face images with a Gaussian, model the difference of two face images with a Gaussian
• Each difference between a pair of images in our data is represented as a D-dimensional vector x
• Also have a binary label y: y = 1 if same person, y = 0 if not

• One Gaussian for same-person deltas, another for different-person deltas

$$y_i = 1,\quad x_i = (\text{image of person A}) - (\text{another image of person A})$$

$$y_j = 0,\quad x_j = (\text{image of person A}) - (\text{image of person B})$$

$$x \in \mathbb{R}^D, \qquad y \in \{0, 1\}$$

Page 23:

Gaussian Classification • Have two classes, each with their own Gaussian:

• Generation: 1) flip a coin, get y 2) pick Gaussian y, sample x from it

• Maximum Likelihood:

Model (drawn as a graphical model with a plate over $i = 1 \ldots N$, with priors $p(\alpha)\, p(\mu)\, p(\Sigma)$ and factors $p(y \mid \alpha)\, p(x \mid y, \mu, \Sigma)$):

$$p(x \mid \mu, \Sigma, y) = N(x \mid \mu_y, \Sigma_y), \qquad p(y \mid \alpha) = \alpha^y (1 - \alpha)^{1-y}$$

Data: $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ with $x \in \mathbb{R}^D$, $y \in \{0, 1\}$.

$$\begin{aligned}
l &= \sum_{i=1}^N \log p(x_i, y_i \mid \alpha, \mu, \Sigma)
   = \sum_{i=1}^N \log p(y_i \mid \alpha) + \sum_{i=1}^N \log p(x_i \mid y_i, \mu, \Sigma) \\
  &= \sum_{i=1}^N \log p(y_i \mid \alpha) + \sum_{y_i \in 0} \log p(x_i \mid \mu_0, \Sigma_0) + \sum_{y_i \in 1} \log p(x_i \mid \mu_1, \Sigma_1)
\end{aligned}$$

Page 24:

Gaussian Classification • Max Likelihood can be done separately for the 3 terms

$$l = \sum_{i=1}^N \log p(y_i \mid \alpha) + \sum_{y_i \in 0} \log p(x_i \mid \mu_0, \Sigma_0) + \sum_{y_i \in 1} \log p(x_i \mid \mu_1, \Sigma_1)$$

• Count the # of positive & negative examples (class prior):

$$\alpha = \frac{N_1}{N_0 + N_1}$$

• Get the mean & covariance of the negatives and the mean & covariance of the positives:

$$\mu_0 = \frac{1}{N_0} \sum_{y_i \in 0} x_i, \qquad \Sigma_0 = \frac{1}{N_0} \sum_{y_i \in 0} (x_i - \mu_0)(x_i - \mu_0)^T$$

$$\mu_1 = \frac{1}{N_1} \sum_{y_i \in 1} x_i, \qquad \Sigma_1 = \frac{1}{N_1} \sum_{y_i \in 1} (x_i - \mu_1)(x_i - \mu_1)^T$$

• Given an (x, y) pair, we can now compute the likelihood $p(x, y)$
• To make a classification, we need a bit of decision theory
• Without x, we can compute a prior guess for y: $p(y)$
• Given x, to predict y we need the posterior $p(y \mid x)$
• Bayes Optimal Decision: $\hat{y} = \arg\max_{y \in \{0,1\}} p(y \mid x)$
• This is optimal iff we have the true probability
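A minimal sketch of these maximum-likelihood estimates (the function name and toy data are illustrative):

```python
import numpy as np

def fit_gaussian_classifier(X, y):
    """ML estimates of the class prior and the two class-conditional Gaussians."""
    X0, X1 = X[y == 0], X[y == 1]
    alpha = len(X1) / len(X)                                 # alpha = N1 / (N0 + N1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma0 = (X0 - mu0).T @ (X0 - mu0) / len(X0)
    Sigma1 = (X1 - mu1).T @ (X1 - mu1) / len(X1)
    return alpha, (mu0, Sigma0), (mu1, Sigma1)

# Toy usage: two 2-D clusters with labels.
X = np.vstack([np.random.randn(50, 2) - 2, np.random.randn(50, 2) + 2])
y = np.repeat([0, 1], 50)
alpha, (mu0, S0), (mu1, S1) = fit_gaussian_classifier(X, y)
print(alpha, mu0.round(2), mu1.round(2))
```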

Page 25:

Gaussian Classification

$$p(y = 1 \mid x) = \frac{p(x, y=1)}{p(x, y=0) + p(x, y=1)} = \frac{\alpha\, N(x \mid \mu_1, \Sigma_1)}{(1-\alpha)\, N(x \mid \mu_0, \Sigma_0) + \alpha\, N(x \mid \mu_1, \Sigma_1)}$$

• Example cases, plotting the decision boundary where $p(y = 1 \mid x) = 0.5$
• If covariances are equal: linear decision boundary
• If covariances are different: quadratic decision boundary
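A small sketch of this posterior using SciPy's multivariate normal density (the toy two-class data and the test point are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior_same(x, alpha, mu0, Sigma0, mu1, Sigma1):
    """p(y=1|x) = alpha N(x|mu1,Sigma1) / ((1-alpha) N(x|mu0,Sigma0) + alpha N(x|mu1,Sigma1))."""
    p1 = alpha * multivariate_normal.pdf(x, mean=mu1, cov=Sigma1)
    p0 = (1 - alpha) * multivariate_normal.pdf(x, mean=mu0, cov=Sigma0)
    return p1 / (p0 + p1)

# Toy two-class data; equal covariances give a linear boundary, unequal ones a quadratic boundary.
rng = np.random.default_rng(7)
X0 = rng.multivariate_normal([-1.0, -1.0], np.eye(2), size=200)
X1 = rng.multivariate_normal([+1.0, +1.0], np.eye(2), size=200)
alpha = len(X1) / (len(X0) + len(X1))
mu0, Sigma0 = X0.mean(axis=0), np.cov(X0, rowvar=False, bias=True)
mu1, Sigma1 = X1.mean(axis=0), np.cov(X1, rowvar=False, bias=True)
print(posterior_same(np.array([0.5, 0.5]), alpha, mu0, Sigma0, mu1, Sigma1))   # near 1: predict y = 1
```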

Page 26:

Intra-Extra Personal Gaussians

• Intrapersonal Gaussian model $N(x \mid \mu_1, \Sigma_1)$
• Its covariance is approximated by the leading intrapersonal eigenvectors
• Extrapersonal Gaussian model $N(x \mid \mu_0, \Sigma_0)$
• Its covariance is approximated by the leading extrapersonal eigenvectors
• Question: what are the Gaussian means?
• Probability a pair is the same person:

$$p(y = 1 \mid x) = \frac{\alpha\, N(x \mid \mu_1, \Sigma_1)}{(1-\alpha)\, N(x \mid \mu_0, \Sigma_0) + \alpha\, N(x \mid \mu_1, \Sigma_1)}$$

Page 27:

Other Standard Bases

$$x_i \approx \mu + \sum_{j=1}^C c_{ij} v_j$$

• There are other choices for the basis vectors, not just PCA eigenvectors
• Could pick basis vectors without looking at the data
• Just for their interesting properties

• Fourier basis: denoises, only keeps smooth parts of image

• Wavelet basis: localized or windowed Fourier

• PCA: optimal least squares linear dataset reconstruction

Page 28:

Basis for Natural Images • What happens if we do PCA on all natural images instead of just faces?

• Get difference-of-Gaussian bases
• Like Gabor or wavelet bases
• Not specific to faces
• Multi-scale & multi-orientation
• Also called steerable filters
• Similar to the visual cortex

Page 29:

Problems with Linear Bases
• The coefficient representation changes wildly if the image rotates, and so does p(x)

• The eigenspace is sensitive to rotations, translations and transformations of the image

• Simple linear/Gaussian/PCA models are not enough
• What worked for aligned faces breaks for general image datasets
• Most of the PCA eigenvectors and spectrum energy is wasted due to NONLINEAR EFFECTS…

$$c_{ij} = (x_i - \mu)^T v_j$$