LECTURE 09: Discriminant Analysis

Transcript of LECTURE 09: Discriminant Analysis

Page 1: LECTURE  09:  Discriminant Analysis

ECE 8443 – Pattern Recognition / ECE 8527 – Introduction to Machine Learning and Pattern Recognition

LECTURE 09: Discriminant Analysis

• Objectives: Principal Components Analysis; Fisher Linear Discriminant Analysis; Multiple Discriminant Analysis; HLDA and ICA; Examples

• Resources: Java PR Applet; W.P.: Fisher; DTREG: LDA; S.S.: DFA

Page 2: LECTURE  09:  Discriminant Analysis

Principal Component Analysis

• Consider representing a set of n d-dimensional samples x1,…,xn by a single vector, x0.

• Define a squared-error criterion:

J_0(x_0) = \sum_{k=1}^{n} \| x_0 - x_k \|^2

• It is easy to show that the solution to this problem is given by:

x_0 = m = \frac{1}{n} \sum_{k=1}^{n} x_k

• The sample mean is a zero-dimensional representation of the data set.

• Consider a one-dimensional solution in which we project the data onto a line running through the sample mean:

x = m + a e

where e is a unit vector in the direction of this line, and a is a scalar representing the distance of any point from the mean.

• We can write the squared-error criterion as:

J_1(a_1, \ldots, a_n, e) = \sum_{k=1}^{n} \| (m + a_k e) - x_k \|^2

Page 3: LECTURE  09:  Discriminant Analysis

Minimization Using Lagrange Multipliers

• The vector, e, that minimizes J_1 also maximizes e^t S e, where S = \sum_{k=1}^{n} (x_k - m)(x_k - m)^t is the scatter matrix.

• Use Lagrange multipliers to maximize e^t S e subject to the constraint \| e \| = 1.

• Let \lambda be the undetermined multiplier, and differentiate:

u = e^t S e - \lambda (e^t e - 1)

with respect to e, to obtain:

\frac{\partial u}{\partial e} = 2 S e - 2 \lambda e

• Set the gradient to zero and solve:

S e = \lambda e \quad \Rightarrow \quad e^t S e = \lambda e^t e = \lambda

• It follows that to maximize e^t S e we want to select the eigenvector corresponding to the largest eigenvalue of the scatter matrix.

• In other words, the best one-dimensional projection of the data (in the least mean-squared error sense) is the projection of the data onto a line through the sample mean in the direction of the eigenvector of the scatter matrix having the largest eigenvalue (hence the name Principal Component).

• For the Gaussian case, the eigenvectors are the principal axes of the hyperellipsoidally shaped support region!

• Let's work some examples (class-independent and class-dependent PCA); a minimal numerical sketch follows below.
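As a rough illustration of the eigenvector solution above, the following numpy sketch computes the sample mean, the scatter matrix, and the one-dimensional projection onto the principal eigenvector. The data matrix X is an illustrative assumption, not part of the lecture.

```python
import numpy as np

# Illustrative data: n = 100 samples in d = 3 dimensions (values are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

m = X.mean(axis=0)                    # sample mean (zero-dimensional representation)
S = (X - m).T @ (X - m)               # scatter matrix S = sum (x_k - m)(x_k - m)^t
eigvals, eigvecs = np.linalg.eigh(S)  # S is symmetric, so eigh applies
e = eigvecs[:, np.argmax(eigvals)]    # eigenvector with the largest eigenvalue

a = (X - m) @ e                       # coefficients a_k = e^t (x_k - m)
X_1d = m + np.outer(a, e)             # one-dimensional representation m + a_k e
```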

Page 4: LECTURE  09:  Discriminant Analysis

Discriminant Analysis

• Discriminant analysis seeks directions that are efficient for discrimination.

• Consider the problem of projecting data from d dimensions onto a line with the hope that we can optimize the orientation of the line to minimize error.

• Consider a set of n d-dimensional samples x_1, \ldots, x_n: n_1 in the subset D_1 labeled \omega_1 and n_2 in the subset D_2 labeled \omega_2.

• Define a linear combination of x:

y = w^t x

and a corresponding set of n samples y_1, \ldots, y_n divided into Y_1 and Y_2.

• Our challenge is to find w that maximizes separation.

• This can be done by considering the ratio of the between-class scatter to the within-class scatter.

Page 5: LECTURE  09:  Discriminant Analysis

Separation of the Means and Scatter

• Define a sample mean for class i:

m_i = \frac{1}{n_i} \sum_{x \in D_i} x

• The sample mean for the projected points is:

\tilde{m}_i = \frac{1}{n_i} \sum_{y \in Y_i} y = \frac{1}{n_i} \sum_{x \in D_i} w^t x = w^t m_i

The sample mean for the projected points is just the projection of the mean (which is expected since this is a linear transformation).

• It follows that the distance between the projected means is:

|\tilde{m}_1 - \tilde{m}_2| = |w^t (m_1 - m_2)|

• Define a scatter for the projected samples:

\tilde{s}_i^2 = \sum_{y \in Y_i} (y - \tilde{m}_i)^2

• An estimate of the variance of the pooled data is:

\frac{1}{n} (\tilde{s}_1^2 + \tilde{s}_2^2)

and \tilde{s}_1^2 + \tilde{s}_2^2 is called the within-class scatter.
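The identities above are easy to check numerically. In this sketch the two classes and the projection direction w are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0], size=(50, 2))   # class 1 samples (illustrative)
X2 = rng.normal(loc=[3.0, 1.0], size=(60, 2))   # class 2 samples (illustrative)
w = np.array([1.0, 0.5])                        # an arbitrary projection direction

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)       # sample means
y1, y2 = X1 @ w, X2 @ w                         # projected samples y = w^t x
mt1, mt2 = y1.mean(), y2.mean()                 # projected means (= w^t m_i)
s1 = ((y1 - mt1) ** 2).sum()                    # projected scatter, class 1
s2 = ((y2 - mt2) ** 2).sum()                    # projected scatter, class 2

# Distance between projected means equals |w^t (m1 - m2)|.
print(abs(mt1 - mt2), abs(w @ (m1 - m2)))
```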

Page 6: LECTURE  09:  Discriminant Analysis

Fisher Linear Discriminant and Scatter

• The Fisher linear discriminant maximizes the criterion:

J(w) = \frac{|\tilde{m}_1 - \tilde{m}_2|^2}{\tilde{s}_1^2 + \tilde{s}_2^2}

• Define a scatter for class i, S_i:

S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^t

• The total within-class scatter, S_W, is:

S_W = S_1 + S_2

• We can write the scatter for the projected samples as:

\tilde{s}_i^2 = \sum_{x \in D_i} (w^t x - w^t m_i)^2 = \sum_{x \in D_i} w^t (x - m_i)(x - m_i)^t w = w^t S_i w

• Therefore, the sum of the scatters can be written as:

\tilde{s}_1^2 + \tilde{s}_2^2 = w^t S_W w
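A small numpy check of the identity \tilde{s}_1^2 + \tilde{s}_2^2 = w^t S_W w; the data and the direction w are illustrative assumptions, as in the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0], size=(50, 2))   # class 1 samples (illustrative)
X2 = rng.normal(loc=[3.0, 1.0], size=(60, 2))   # class 2 samples (illustrative)
w = np.array([1.0, 0.5])                        # arbitrary projection direction

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S1 = (X1 - m1).T @ (X1 - m1)                    # scatter for class 1
S2 = (X2 - m2).T @ (X2 - m2)                    # scatter for class 2
SW = S1 + S2                                    # within-class scatter S_W

y1, y2 = X1 @ w, X2 @ w
s1 = ((y1 - y1.mean()) ** 2).sum()              # projected scatter, class 1
s2 = ((y2 - y2.mean()) ** 2).sum()              # projected scatter, class 2
print(np.isclose(s1 + s2, w @ SW @ w))          # True: s1^2 + s2^2 = w^t S_W w
```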

Page 7: LECTURE  09:  Discriminant Analysis

Separation of the Projected Means

• The separation of the projected means obeys:

(\tilde{m}_1 - \tilde{m}_2)^2 = (w^t m_1 - w^t m_2)^2 = w^t (m_1 - m_2)(m_1 - m_2)^t w = w^t S_B w

where the between-class scatter, S_B, is given by:

S_B = (m_1 - m_2)(m_1 - m_2)^t

• S_W is the within-class scatter and is proportional to the covariance of the pooled data.

• S_B, the between-class scatter, is symmetric and positive semidefinite, but because it is the outer product of two vectors, its rank is at most one.

• This implies that for any w, S_B w is in the direction of m_1 - m_2.

• The criterion function, J(w), can be written as:

J(w) = \frac{w^t S_B w}{w^t S_W w}

Page 8: LECTURE  09:  Discriminant Analysis

Linear Discriminant Analysis

• This ratio is well known as the generalized Rayleigh quotient and has the well-known property that the vector, w, that maximizes J(w) must satisfy:

S_B w = \lambda S_W w

• The solution is:

w = S_W^{-1} (m_1 - m_2)

• This is Fisher's linear discriminant, also known as the canonical variate.

• This solution maps the d-dimensional problem to a one-dimensional problem (in this case).

• From Chapter 2, when the conditional densities, p(x|\omega_i), are multivariate normal with equal covariances, the optimal decision boundary is given by:

w^t x + w_0 = 0

where w = \Sigma^{-1}(\mu_1 - \mu_2), and w_0 is related to the prior probabilities.

• The computational complexity is dominated by the calculation of the within-class scatter and its inverse, an O(d^2 n) calculation. But this is done offline!

• Let's work some examples (class-independent PCA and LDA); a minimal sketch follows below.
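A minimal two-class Fisher LDA sketch based on the solution above; the Gaussian data and the equal-priors midpoint threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
cov = [[1.0, 0.3], [0.3, 1.0]]                            # shared covariance (illustrative)
X1 = rng.multivariate_normal([0.0, 0.0], cov, size=200)  # class 1
X2 = rng.multivariate_normal([2.0, 1.0], cov, size=200)  # class 2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
SW = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter

w = np.linalg.solve(SW, m1 - m2)     # Fisher's linear discriminant w = S_W^{-1}(m1 - m2)

# Simple decision rule: project onto w and threshold midway between the projected
# means (this corresponds to equal priors; unequal priors would shift w0).
w0 = -0.5 * (w @ m1 + w @ m2)

def classify(x):
    return 1 if w @ x + w0 > 0 else 2

print(classify(np.array([0.0, 0.0])), classify(np.array([2.0, 1.0])))   # expect: 1 2
```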

Page 9: LECTURE  09:  Discriminant Analysis

Multiple Discriminant Analysis

• For the c-class problem in a d-dimensional space, the natural generalization involves c-1 discriminant functions.

• The within-class scatter is defined as:

S_W = \sum_{i=1}^{c} S_i, \quad S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^t

• Define a total mean vector, m:

m = \frac{1}{n} \sum_{x} x = \frac{1}{n} \sum_{i=1}^{c} n_i m_i

and a total scatter matrix, S_T, by:

S_T = \sum_{x} (x - m)(x - m)^t

• The total scatter is related to the within-class scatter (derivation omitted):

S_T = S_W + S_B, \quad S_B = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^t

• We have c-1 discriminant functions of the form:

y_i = w_i^t x, \quad i = 1, 2, \ldots, c-1, \quad \text{or, in matrix form,} \quad y = W^t x

Page 10: LECTURE  09:  Discriminant Analysis

Multiple Discriminant Analysis (Cont.)

• The criterion function is:

J(W) = \frac{|W^t S_B W|}{|W^t S_W W|}

• The solution to maximizing J(W) is once again found via an eigenvalue decomposition:

|S_B - \lambda_i S_W| = 0 \quad \text{and} \quad (S_B - \lambda_i S_W) w_i = 0

• Because S_B is the sum of c matrices of rank one or less, and because only c-1 of these are independent, S_B is of rank c-1 or less.

• An excellent presentation on applications of LDA can be found at PCA Fails!
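A sketch of the multi-class solution as a generalized eigenvalue problem; the three-class synthetic data and the use of scipy.linalg.eigh are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative data: c = 3 classes in d = 3 dimensions, 50 samples each.
rng = np.random.default_rng(3)
class_means = [np.array([0.0, 0.0, 0.0]),
               np.array([3.0, 0.0, 1.0]),
               np.array([0.0, 3.0, -1.0])]
X = np.vstack([rng.normal(loc=mu, size=(50, 3)) for mu in class_means])
labels = np.repeat([0, 1, 2], 50)

m = X.mean(axis=0)                                # total mean vector
SW = np.zeros((3, 3))                             # within-class scatter
SB = np.zeros((3, 3))                             # between-class scatter
for c in np.unique(labels):
    Xc = X[labels == c]
    mc = Xc.mean(axis=0)
    SW += (Xc - mc).T @ (Xc - mc)
    SB += len(Xc) * np.outer(mc - m, mc - m)

# Generalized eigenvalue problem S_B w_i = lambda_i S_W w_i; keep the c-1 = 2
# eigenvectors with the largest eigenvalues.
eigvals, eigvecs = eigh(SB, SW)
W = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
Y = X @ W                                         # projected (c-1)-dimensional features
```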

Page 11: LECTURE  09:  Discriminant Analysis

Heteroscedastic Linear Discriminant Analysis (HLDA)

• Heteroscedastic: when random variables have different variances.

• When might we observe heteroscedasticity? Suppose 100 students enroll in a typing class, some of whom have typing experience and some of whom do not. After the first class there would be a great deal of dispersion in the number of typing mistakes. After the final class the dispersion would be smaller. The error variance is non-constant: it decreases as time increases.

• An example is shown to the right. The two classes have nearly the same mean, but different variances, and the variances differ in one direction.

• LDA would project these classes onto a line that does not achieve maximal separation.

• HLDA seeks a transform that will account for the unequal variances.

• HLDA is typically useful when classes have significant overlap.

Page 12: LECTURE  09:  Discriminant Analysis

Partitioning Our Parameter Vector

• Let W be partitioned into the first p columns corresponding to the dimensions we retain, and the remaining d-p columns corresponding to the dimensions we discard.

• Then the dimensionality reduction problem can be viewed in two steps: a non-singular transform is applied to x to transform the features, and a dimensionality reduction is performed in which we reduce the output of this linear transformation, y, to a reduced-dimension vector, y_p.

• Let us partition the mean and variances as follows:

\mu_j = \begin{bmatrix} \mu_{j,1} & \cdots & \mu_{j,p} & \mu_{0,p+1} & \cdots & \mu_{0,d} \end{bmatrix}^t

\Sigma_j = \begin{bmatrix} \Sigma_j^{(p \times p)} & 0 \\ 0 & \Sigma_0^{((d-p) \times (d-p))} \end{bmatrix}

where \mu_{0,p+1}, \ldots, \mu_{0,d} and \Sigma_0 are common to all classes, while \mu_{j,1}, \ldots, \mu_{j,p} and \Sigma_j^{(p \times p)} are different for each class.

Page 13: LECTURE  09:  Discriminant Analysis

Density and Likelihood Functions

• The density function of a data point under the model, assuming a Gaussian model (as we did with PCA and LDA), is given by:

P(x_i) = \frac{|\theta|}{(2\pi)^{d/2} \, |\Sigma_{g(i)}|^{1/2}} \exp\!\left( -\frac{1}{2} (\theta^t x_i - \mu_{g(i)})^t \, \Sigma_{g(i)}^{-1} \, (\theta^t x_i - \mu_{g(i)}) \right)

where g(i) is an indicator function for the class assignment of each data point, and \theta is the full (d x d) transform (the matrix W above). (This simply represents the density function for the transformed data.)

• The log likelihood function is given by:

\log L_F(\mu_j, \Sigma_j, \theta) = -\frac{1}{2} \sum_{i=1}^{N} \left\{ (\theta^t x_i - \mu_{g(i)})^t \, \Sigma_{g(i)}^{-1} \, (\theta^t x_i - \mu_{g(i)}) + \log\!\left( (2\pi)^d |\Sigma_{g(i)}| \right) \right\} + N \log|\theta|

• Differentiating the likelihood with respect to the unknown means and variances gives:

\hat{\mu}_{j,p} = \theta_p^t \bar{x}_j, \qquad \hat{\mu}_{0,d-p} = \theta_{d-p}^t \bar{x}

\hat{\Sigma}_j^{(p \times p)} = \theta_p^t \hat{C}_j \theta_p, \qquad \hat{\Sigma}_0^{((d-p) \times (d-p))} = \theta_{d-p}^t \hat{C} \theta_{d-p}

where \bar{x}_j and \hat{C}_j are the sample mean and covariance of class j, and \bar{x} and \hat{C} are the sample mean and covariance of all of the data.

Page 14: LECTURE  09:  Discriminant Analysis

Optimal Solution

• Substituting the optimal values into the likelihood equation, and then maximizing with respect to \theta, gives:

\hat{\theta} = \arg\max_{\theta} \left\{ -\frac{N}{2} \log\left| \theta_{d-p}^t \hat{C} \theta_{d-p} \right| - \sum_{j=1}^{c} \frac{N_j}{2} \log\left| \theta_p^t \hat{C}_j \theta_p \right| + N \log|\theta| \right\}

• These equations do not have a closed-form solution. For the general case, we must solve them iteratively using a gradient descent algorithm and a two-step process in which we estimate means and variances from θ and then estimate the optimal value of θ from the means and variances.

• Simplifications exist for diagonal and equal covariances, but the benefits of the algorithm seem to diminish in these cases.

• To classify data, one must compute the log-likelihood distance from each class and then assign the class based on the maximum likelihood.

• Let's work some examples (class-dependent PCA, LDA and HLDA); a minimal optimization sketch follows below.

• HLDA training is significantly more expensive than PCA or LDA, but classification is of the same complexity as PCA and LDA because this is still essentially a linear transformation plus a Mahalanobis distance computation.
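The objective above can be optimized numerically. The sketch below evaluates it with per-class sample covariances and hands it to a generic BFGS optimizer; the synthetic two-class data, the choice p = 1, and the use of scipy.optimize (rather than the two-step gradient procedure described above) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative heteroscedastic data: two classes in d = 3 dimensions, keep p = 1.
rng = np.random.default_rng(4)
d, p = 3, 1
X1 = rng.normal([0.0, 0.0, 0.0], [0.3, 1.0, 1.0], size=(200, d))
X2 = rng.normal([0.5, 0.0, 0.0], [2.0, 1.0, 1.0], size=(200, d))
X = np.vstack([X1, X2])
Nj = np.array([len(X1), len(X2)])
N = Nj.sum()
C = np.cov(X.T, bias=True)                               # global sample covariance
Cj = [np.cov(X1.T, bias=True), np.cov(X2.T, bias=True)]  # per-class sample covariances

def neg_objective(theta_flat):
    """Negative HLDA objective (scipy minimizes, so the objective is maximized)."""
    theta = theta_flat.reshape(d, d)
    tp, tn = theta[:, :p], theta[:, p:]                  # retained / discarded columns
    obj = N * np.linalg.slogdet(theta)[1]                # N log|theta|
    obj -= 0.5 * N * np.linalg.slogdet(tn.T @ C @ tn)[1]
    for nj, Cjk in zip(Nj, Cj):
        obj -= 0.5 * nj * np.linalg.slogdet(tp.T @ Cjk @ tp)[1]
    return -obj

res = minimize(neg_objective, np.eye(d).ravel(), method="BFGS")
theta_hat = res.x.reshape(d, d)
y_p = X @ theta_hat[:, :p]                               # reduced-dimension features
```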

Page 15: LECTURE  09:  Discriminant Analysis

Independent Component Analysis (ICA)

• The goal is to discover underlying structure in a signal.

• Originally gained popularity for applications in blind source separation (BSS), the process of extracting one or more unknown signals from noise (e.g., the cocktail party effect).

• Most often applied to time series analysis, though it can also be used for traditional pattern recognition problems.

• Define a signal as a sum of statistically independent signals:

x = A s, \quad \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{d1} & a_{d2} & \cdots & a_{dn} \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix}

• If we can estimate A, then we can compute s by inverting A:

y = W x \approx s, \quad \text{where } W = A^{-1}

• This is the basic principle of blind deconvolution or BSS.

• The unique aspect of ICA is that it attempts to model x as a sum of statistically independent non-Gaussian signals. Why?
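As an illustration of blind source separation, the sketch below mixes two synthetic non-Gaussian sources and recovers them; the sources, the mixing matrix A, and the use of scikit-learn's FastICA implementation are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two illustrative non-Gaussian sources: a sine wave and a square wave.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))]

A = np.array([[1.0, 0.5],       # illustrative mixing matrix
              [0.4, 1.0]])
X = S @ A.T                     # observed mixtures x = A s

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)    # recovered sources (up to permutation and scaling)
W_hat = ica.components_         # estimated unmixing transform, y = W x
```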

Page 16: LECTURE  09:  Discriminant Analysis

Objective Function

• Unlike mean-square error approaches, ICA attempts to optimize the parameters of the model based on a variety of information-theoretic measures:

Mutual information: I(y_1, y_2, \ldots, y_n) = \sum_{i=1}^{n} H(y_i) - H(y)

Negentropy: J(y) = H(y_{gauss}) - H(y)

Maximum likelihood: \log L = \sum_{t=1}^{T} \sum_{i=1}^{n} \log f_i(w_i^t x(t)) + T \log|W|

• Mutual information and negentropy are related by:

I(y_1, y_2, \ldots, y_n) = C - \sum_{i=1}^{n} J(y_i)

where C is a constant that does not depend on W.

• It is common in ICA to zero-mean and prewhiten the data (using PCA) so that the technique can focus on the non-Gaussian aspects of the data. Since these are linear operations, they do not impact the non-Gaussian aspects of the model.

• There are no closed-form solutions for the problem described above, and a gradient descent approach must be used to find the model parameters. We will need to develop more powerful mathematics to do this (e.g., the Expectation Maximization algorithm).
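A quick Monte Carlo check of the negentropy idea: using the common contrast function G(u) = log cosh(u), the approximation (E{G(y)} - E{G(ν)})^2 is near zero for a Gaussian variable and clearly larger for a non-Gaussian one. The choice of G and the uniform test variable are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def G(u):                                   # a common contrast function
    return np.log(np.cosh(u))

nu = rng.standard_normal(200_000)           # Gaussian reference variable
y_gauss = rng.standard_normal(200_000)      # a Gaussian test variable
y_unif = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), 200_000)  # unit-variance uniform

def J_approx(y):                            # negentropy approximation (up to a constant)
    return (G(y).mean() - G(nu).mean()) ** 2

print(J_approx(y_gauss))   # ~0: a Gaussian variable has (approximately) zero negentropy
print(J_approx(y_unif))    # larger: the uniform variable is non-Gaussian
```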

Page 17: LECTURE  09:  Discriminant Analysis

FastICA

• One very popular algorithm for ICA is based on finding a projection of x that maximizes non-Gaussianity.

• Define an approximation to negentropy:

J(y) \propto \left( E\{G(y)\} - E\{G(\nu)\} \right)^2

where \nu is a zero-mean, unit-variance Gaussian variable, G is a nonquadratic function, and g is its derivative.

• Use an iterative equation solver to find the weight vector, w:

Choose an initial random guess for w.

Compute: w^{+} = E\{ x \, g(w^t x) \} - E\{ g'(w^t x) \} \, w

Let: w = w^{+} / \| w^{+} \|

If the direction of w changes, iterate.

• Later in the course we will see many iterative algorithms of this form, and formally derive their properties.

• FastICA is very similar to a gradient descent solution of the maximum likelihood equations.

• ICA has been successfully applied to a wide variety of BSS problems including audio, EEG, and financial data.
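A minimal numpy sketch of the single-unit fixed-point iteration above, using g = tanh and data that are first centered and whitened with PCA as the previous slide recommends; the synthetic sources and mixing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two illustrative non-Gaussian sources, mixed by an arbitrary matrix A.
S = np.c_[np.sign(rng.standard_normal(5000)), rng.uniform(-1, 1, 5000)]
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]]).T        # mixtures x = A s

X = X - X.mean(axis=0)                              # zero mean
d, E = np.linalg.eigh(np.cov(X.T))
Z = (X @ E) / np.sqrt(d)                            # whitened data (unit covariance)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    g = np.tanh(Z @ w)                              # g(w^t z)
    w_new = (Z * g[:, None]).mean(axis=0) - (1.0 - g ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    if np.abs(np.abs(w_new @ w) - 1.0) < 1e-8:      # direction unchanged: converged
        w = w_new
        break
    w = w_new

y = Z @ w                                           # one recovered source (up to sign/scale)
```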

Page 18: LECTURE  09:  Discriminant Analysis

Summary

• Principal Component Analysis (PCA): represents the data by minimizing the squared error (representing data in directions of greatest variance).

• PCA is commonly applied in a class-dependent manner where a whitening transformation is computed for each class, and the distance from the mean of that class is measured using this class-specific transformation.

• PCA gives insight into the important dimensions of your problem by examining the direction of the eigenvectors.

• Linear Discriminant Analysis (LDA): attempts to project the data onto a line that represents the direction of maximum discrimination.

• LDA can be generalized to a multi-class problem through the use of multiple discriminant functions (c classes require c-1 discriminant functions).

• Alternatives: Other forms of discriminant analysis exist (e.g., HLDA and ICA) that relax the assumptions about the covariance structure of the data.