Linear Subspace Transforms: PCA, Karhunen-Loeve, Hotelling. C306, 2000.


Page 1

Linear Subspace Transforms

PCA, Karhunen-Loeve, Hotelling

C306, 2000

Page 2

New idea: Image dependent basis

Previous transforms (Fourier and Cosine) use a fixed basis.

Better compression can be achieved by tailoring the basis to the image (pixel) statistics.

Page 3

Sample data

Let x = [x1, x2, …, xW] be a sequence of random variables (vectors).

Example 1: x = column flattening of an image, so x is sampled over several images.

Example 2: x = column flattening of an image block; x could then be sampled from only one image.

Page 4

Covariance Matrix

Note 1: Can more compactly write C = X'*X and m = sum(x)/W.

Note 2: The covariance matrix is both symmetric and real; therefore the matrix can be diagonalized and its eigenvalues will be real.

C = (1/W) * sum_{j=1}^{W} (x_j - m)(x_j - m)'    (eqn 1)

m = (1/W) * sum_{i=1}^{W} x_i    (eqn 2)
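Eqns (1) and (2) can be sketched in NumPy. This is a minimal illustration, not from the slides; the data matrix `X` (samples as columns), `m`, and `C` are assumed names.

```python
import numpy as np

# Hedged sketch: W samples of an N-dimensional random vector,
# stored as the columns of an N x W data matrix X.
rng = np.random.default_rng(0)
N, W = 4, 1000
X = rng.normal(size=(N, W))

m = X.sum(axis=1) / W        # eqn (2): m = (1/W) * sum_i x_i
Xc = X - m[:, None]          # centred samples x_j - m
C = (Xc @ Xc.T) / W          # eqn (1): C = (1/W) * sum_j (x_j - m)(x_j - m)'

# C is real and symmetric, as Note 2 states.
print(np.allclose(C, C.T))   # True
```

The same matrix is what `np.cov(X, bias=True)` computes, which is a convenient cross-check.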

Page 5

Coordinate Transform

Diagonalize C, i.e. find B such that

B'CB = D = diag(λ_j)

Note:

1. C symmetric => B orthogonal

2. B columns = C eigenvectors

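The diagonalization step can be sketched with NumPy's symmetric eigensolver; all variable names here are assumptions, not from the slides.

```python
import numpy as np

# Hedged sketch: diagonalize a sample covariance matrix C.
rng = np.random.default_rng(1)
Y = rng.normal(size=(3, 500))
Yc = Y - Y.mean(axis=1, keepdims=True)
C = (Yc @ Yc.T) / Y.shape[1]

lam, B = np.linalg.eigh(C)   # real eigenvalues; columns of B = eigenvectors of C
D = B.T @ C @ B              # B'CB = D = diag(lambda_j)

print(np.allclose(B.T @ B, np.eye(3)))   # True: C symmetric => B orthogonal
print(np.allclose(D, np.diag(lam)))      # True: D diagonal with real entries
```

`np.linalg.eigh` is the right solver here because it exploits the symmetry of C and guarantees real eigenvalues and orthonormal eigenvectors.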

Page 6

Definition: KL, PCA, Hotelling transform

The forward coordinate transform:

y = B'*(x - mx)

Inverse transform:

x = B*y + mx

Note: my = E[y] = 0 (zero mean)

Cy = E[yy'] = D (diagonal covariance matrix)
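The forward/inverse pair and the two notes can be checked numerically. A minimal sketch, assuming sample data and the names `mx`, `B`, `lam` (none from the slides):

```python
import numpy as np

# Hedged sketch: build B from a sample covariance, then apply the transform.
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 2000))
mx = X.mean(axis=1, keepdims=True)
C = (X - mx) @ (X - mx).T / X.shape[1]
lam, B = np.linalg.eigh(C)

Y = B.T @ (X - mx)           # forward:  y = B'*(x - mx), applied column-wise
Xback = B @ Y + mx           # inverse:  x = B*y + mx

Cy = (Y @ Y.T) / Y.shape[1]  # sample covariance of y
print(np.allclose(Xback, X))                      # True: transform is invertible
print(np.allclose(Y.mean(axis=1), 0))             # True: y has zero mean
print(np.allclose(Cy, np.diag(lam), atol=1e-8))   # True: Cy = D is diagonal
```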

Page 7

Dimensionality reduction

Can write: x = y1B1 + y2B2 + … + yWBW

But can drop some terms and use only a subset, e.g. x ≈ y1B1 + y2B2

Result: Fewer y values encode almost all the data in x
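The truncation step can be sketched as follows; the example data and every name here (`Bk`, `Yk`, `Xhat`, …) are assumptions chosen so that most of the variance lies in two directions.

```python
import numpy as np

# Hedged sketch: keep only the K eigenvectors with the largest eigenvalues.
rng = np.random.default_rng(3)
# Scale the rows so the last two coordinates carry most of the variance.
X = rng.normal(size=(4, 1000)) * np.array([[0.05], [0.05], [1.0], [2.0]])
mx = X.mean(axis=1, keepdims=True)
C = (X - mx) @ (X - mx).T / X.shape[1]
lam, B = np.linalg.eigh(C)   # eigh returns lam in ascending order

K = 2
Bk = B[:, -K:]               # the K most significant eigenvectors
Yk = Bk.T @ (X - mx)         # only K coefficients per sample...
Xhat = Bk @ Yk + mx          # ...yet Xhat stays close to X

err = np.mean(np.sum((X - Xhat) ** 2, axis=0))
tot = np.mean(np.sum((X - mx) ** 2, axis=0))
print(err / tot < 0.01)      # True: here 2 of 4 coefficients keep >99% of the energy
```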

Page 8

Remaining error

Mean square error of the approximation is

MSE = sum_{j=K+1}^{N} λ_j

Proof: (on blackboard)
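The blackboard proof is not reproduced here, but the claim can be checked numerically. A hedged sketch with assumed sample data (when the eigenvalues come from the sample covariance of the same data, the equality is exact up to floating point):

```python
import numpy as np

# Check: MSE of the K-term approximation = sum of the dropped eigenvalues.
rng = np.random.default_rng(4)
N, W, K = 5, 2000, 2
X = rng.normal(size=(N, W)) * np.arange(1, N + 1)[:, None]
mx = X.mean(axis=1, keepdims=True)
C = (X - mx) @ (X - mx).T / W
lam, B = np.linalg.eigh(C)   # ascending order, so lam[:N-K] are the dropped ones

Bk = B[:, -K:]               # keep the K largest-eigenvalue directions
Xhat = Bk @ (Bk.T @ (X - mx)) + mx
mse = np.mean(np.sum((X - Xhat) ** 2, axis=0))

print(np.allclose(mse, lam[:N - K].sum()))   # True: MSE = sum_{j=K+1..N} lambda_j
```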

Page 9

Summary: Hotelling, PCA, KLT

Advantages:

1. Optimal coding in the least-squares sense

2. Theoretically well founded

Disadvantages:

1. Computationally intensive

2. Image dependent basis