
Transcript of Non-Parametric Techniques

Page 1: Non-Parametric Techniques

Non-Parametric Techniques

Jacob Hays
Amit Pillay
James DeFelice

4.1, 4.2, 4.3

Page 2: Non-Parametric Techniques

Parametric vs. Non-Parametric

• Parametric
▫ Based on functions (e.g., the normal distribution)
▫ Unimodal: only one peak
▫ Real data is unlikely to conform to such a function

• Non-Parametric
▫ Based on the data
▫ As many peaks as the data has
▫ Methods for estimating both p(x | ωj) and P(ωj | x)

Page 3: Non-Parametric Techniques

Density Estimation

• The probability that a vector x falls in a region \mathcal{R} is

P = \int_{\mathcal{R}} p(x')\, dx' \quad (1)

• Assume n samples, drawn independently from the same distribution. By the binomial law, the probability that exactly k of the samples fall in region \mathcal{R} is

P_k = \binom{n}{k} P^k (1 - P)^{n-k} \quad (2)

• The expected value of k is nP, so P ≈ k/n.
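As a quick numerical check of eq. (2), here is a minimal sketch (the function name prob_k_in_region is our own illustration, not from the slides):

from math import comb

# P_k: probability that exactly k of n i.i.d. samples fall in a region R
# whose total probability mass is P (the binomial law of eq. 2).
def prob_k_in_region(n: int, k: int, P: float) -> float:
    return comb(n, k) * (P ** k) * ((1 - P) ** (n - k))

# The distribution peaks at the expected value k = nP; with P = 0.7, n = 100:
print(prob_k_in_region(100, 70, 0.7))   # ~0.087, the modal value
print(prob_k_in_region(100, 50, 0.7))   # orders of magnitude smaller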

Page 4: Non-Parametric Techniques

Density Estimation

• For large n, k/n is a good estimate of P.
• If p(x) is continuous and does not vary appreciably within \mathcal{R}, then

P = \int_{\mathcal{R}} p(x')\, dx' \cong p(x)\, V \quad (4)

where V is the volume of \mathcal{R}.
• Combining this with P ≈ k/n gives the estimate

p(x) \cong \frac{k/n}{V}
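A minimal numerical sketch of this estimate (the function and variable names are our own, not from the slides):

import numpy as np

def density_estimate(samples: np.ndarray, x: np.ndarray, half_width: float) -> float:
    """Estimate p(x) as (k/n)/V by counting the samples that fall in a
    d-dimensional hypercube of edge 2*half_width centered at x."""
    n, d = samples.shape
    inside = np.all(np.abs(samples - x) <= half_width, axis=1)
    k = int(np.count_nonzero(inside))
    V = (2.0 * half_width) ** d        # volume of the region R
    return (k / n) / V

# Sanity check against a uniform density on [0, 1], where the true p(x) = 1:
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(10_000, 1))
print(density_estimate(samples, np.array([0.5]), half_width=0.05))  # close to 1.0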

Page 5: Non-Parametric Techniques

If the volume V is fixed and n is increased toward ∞, k/n converges to P, and the estimate converges to the average of p(x) over that volume. The distribution of k/n peaks at the true probability, here 0.7, and with infinite n it converges to 0.7.

Page 6: Non-Parametric Techniques

Density Estimation

• If n is fixed and V approaches zero, the region becomes so small that it contains zero samples, or it sits directly on a sample point, making p(x) ≈ 0 or ∞.
• In practice, the volume cannot be allowed to become too small, since data is limited.
▫ With a non-zero V, the estimate k/n will have some variance about the actual value.
• In theory, with unlimited data, we can get around these limitations.

Page 7: Non-Parametric Techniques

Density Est. with Infinite Data

• To get the density at x, assume a sequence of regions (R_1, R_2, …, R_n) that all contain x; the estimate for R_i uses i samples.
• Let V_n be the volume of R_n and k_n the number of samples in R_n; p_n(x) is the nth estimate:
▫ p_n(x) = (k_n / n) / V_n
▫ The goal is to get p_n(x) to converge to p(x).

Page 8: Non-Parametric Techniques

Convergence of p_n(x) to p(x)

• p_n(x) converges to p(x) if the following conditions hold.
• The region R_n covers negligible space:

\lim_{n \to \infty} V_n = 0

• The estimate averages an unbounded number of samples (wherever p(x) ≠ 0):

\lim_{n \to \infty} k_n = \infty

• The k_n samples are a negligible fraction of the whole set; n grows faster than k_n does:

\lim_{n \to \infty} \frac{k_n}{n} = 0

Page 9: Non-Parametric Techniques

Satisfying conditions

• Two common methods satisfy these conditions, and both converge (the limits are checked below).
• Base the volume of region R_n on n:
▫ Parzen windows
▫ V_n = 1/\sqrt{n}
• Base the number of points k_n in the region on n:
▫ k_n nearest neighbors
▫ k_n = \sqrt{n}
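The limits that each choice satisfies by construction can be checked directly (a short verification; the remaining conditions follow from the convergence analysis on the later slides):

% Parzen windows: V_n = 1/sqrt(n) shrinks to zero
\lim_{n\to\infty} V_n = \lim_{n\to\infty} \frac{1}{\sqrt{n}} = 0
% k_n nearest neighbors: k_n = sqrt(n) grows without bound,
% yet becomes a negligible fraction of n
\lim_{n\to\infty} k_n = \lim_{n\to\infty} \sqrt{n} = \infty,
\qquad
\lim_{n\to\infty} \frac{k_n}{n} = \lim_{n\to\infty} \frac{1}{\sqrt{n}} = 0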

Page 10: Non-Parametric Techniques

Example

[Figure: two sequences of regions containing x — one whose volume shrinks as a function of n (Parzen windows), one grown until it contains k_n samples (nearest neighbors)]

Page 11: Non-Parametric Techniques

Parzen Windows

• Assume R_n is a d-dimensional hypercube.
• Let h_n be the length of an edge of that cube.
• The volume of the cube is V_n = h_n^d.
• We need to determine k_n, the number of samples that fall within R_n.

Page 12: Non-Parametric Techniques

Parzen Windows

• Define a "window" function that tells us whether a sample is in R_n. Let ϕ(u) be the following window function:

\varphi(u) = \begin{cases} 1 & |u_j| \le 1/2, \quad j = 1, \dots, d \\ 0 & \text{otherwise} \end{cases}

(V_n = h_n^d; h_n: length of an edge of \mathcal{R}_n)

Page 13: Non-Parametric Techniques

Example

• Assume d = 2, h_n = 1.
• ϕ((x − x_i)/h_n) = 1 if x_i falls within R_n.

[Figure: the unit square in the (x_1, x_2) plane, centered at the origin with edges running from −1/2 to 1/2 on each axis]

\varphi(u) = \begin{cases} 1 & |u_j| \le 1/2, \quad j = 1, \dots, d \\ 0 & \text{otherwise} \end{cases}

Page 14: Non-Parametric Techniques

Parzen Windows

• With the window function ϕ(u) defined above, the number of samples in R_n is computed as

k_n = \sum_{i=1}^{n} \varphi\!\left(\frac{x - x_i}{h_n}\right)

• Derive the new p_n(x):
▫ Earlier, p_n(x) = (k_n/n)/V_n; substituting k_n, it is now redefined as

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n}\, \varphi\!\left(\frac{x - x_i}{h_n}\right)
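A minimal sketch of this estimator with the hypercube window (the names are our own; assumes NumPy):

import numpy as np

def phi_hypercube(u: np.ndarray) -> np.ndarray:
    """Window function: 1 if every coordinate satisfies |u_j| <= 1/2."""
    return np.all(np.abs(u) <= 0.5, axis=-1).astype(float)

def parzen_estimate(x: np.ndarray, samples: np.ndarray, h: float) -> float:
    """p_n(x) = (1/n) * sum_i (1/V_n) * phi((x - x_i)/h), with V_n = h^d."""
    n, d = samples.shape
    V = h ** d
    return float(phi_hypercube((x - samples) / h).sum() / (n * V))

# Example: estimate the standard normal density at x = 0.
rng = np.random.default_rng(1)
samples = rng.standard_normal((5_000, 1))
print(parzen_estimate(np.array([0.0]), samples, h=0.5))  # roughly 0.4, the N(0,1) peak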

Page 15: Non-Parametric Techniques

Generalize ϕ(x)

• p_n(x) is an average of functions of x and the samples x_i.
• The window function ϕ(x) is being used for interpolation:
▫ Each x_i contributes to p_n(x) according to its distance from x.
• We'd like ϕ(x) to be a legitimate density function:

\varphi(v) \ge 0, \qquad \int \varphi(v)\, dv = 1

Page 16: Non-Parametric Techniques

Window Width

• Remember that V_n = h_n^d (h_n: length of an edge of \mathcal{R}_n).
• New definition:

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n}\, \varphi\!\left(\frac{x - x_i}{h_n}\right)

• Defining the kernel

\delta_n(x) = \frac{1}{V_n}\, \varphi\!\left(\frac{x}{h_n}\right)

the estimate becomes

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \delta_n(x - x_i)

• h_n clearly affects both the amplitude and the width of the delta function.

Page 17: Non-Parametric Techniques

Window Width

• Very large h_n
▫ Small amplitude of the delta function.
▫ x_i must be far from x before δ_n(x − x_i) differs much from δ_n(0).
▫ p_n(x) is a superposition of broad, slowly changing functions (out of focus).
▫ Too little resolution.
• Very small h_n
▫ Large amplitude of the delta function; the peak of δ_n(x − x_i) is large and occurs near x = x_i.
▫ p_n(x) is a superposition of sharp pulses (erratic, noisy).
▫ Too much statistical variability.

The sketch below illustrates both extremes.
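A small experiment (our own sketch, reusing the hypercube estimate shown earlier) makes the trade-off concrete:

import numpy as np

rng = np.random.default_rng(2)
samples = rng.standard_normal((200, 1))    # few samples, so h_n matters a lot
xs = np.linspace(-2.0, 2.0, 5)

def parzen(x: np.ndarray, samples: np.ndarray, h: float) -> float:
    n, d = samples.shape
    k = np.all(np.abs((x - samples) / h) <= 0.5, axis=1).sum()
    return float(k / (n * h ** d))

for h in (0.05, 0.5, 5.0):
    est = [parzen(np.array([x]), samples, h) for x in xs]
    print(f"h = {h}: {np.round(est, 3)}")
# h = 0.05 -> erratic: zero at some points, spiky elsewhere (too much variability)
# h = 5.0  -> broad and slowly varying (out of focus, too little resolution)
# h = 0.5  -> a reasonable compromise for n = 200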

Page 18: Non-Parametric Techniques

Window Width

• For any h_n the distribution is normalized:

\int \delta_n(x - x_i)\, dx = \frac{1}{V_n} \int \varphi\!\left(\frac{x - x_i}{h_n}\right) dx = \int \varphi(u)\, du = 1

Page 19: Non-Parametric Techniques

Convergence

• With limited samples, the best we can do for h_n is a compromise.
• With unlimited samples, we can let V_n slowly approach zero as n increases, and p_n(x) → p(x).
• For any fixed x, p_n(x) depends on the random samples (x_1, x_2, …, x_n), so

p_n(x) \text{ has some mean } \bar{p}_n(x) \text{ and variance } \sigma_n^2(x)

Page 20: Non-Parametric Techniques

Convergence of mean

• The expected value of the estimate is an averaged value of the unknown, true density p(x):

\bar{p}_n(x) = \int \delta_n(x - v)\, p(v)\, dv

▫ A "blurred" or "smoothed" version of p(x), as seen through the averaging window.
• Limits, as n → ∞:
▫ V_n → 0
▫ nV_n → ∞
▫ δ_n(x − v) → a delta function centered at x
▫ expected value of the estimate → true density

Page 21: Non-Parametric Techniques

Convergence of variance

• For any n
▫ The expected value of the estimate → the true density, if we let V_n → 0.
▫ But for some particular set of n samples the estimate is useless ("spiky").
▫ So we also need to consider the variance: we should let V_n → 0 slower than n → ∞.
• The variance is bounded by

\sigma_n^2(x) \le \frac{\sup \varphi(\cdot)\; \bar{p}_n(x)}{n V_n}

Page 22: Non-Parametric Techniques

Example

Page 23: Non-Parametric Techniques

Illustration

• The behavior of the Parzen-window method, for the case where p(x) ~ N(0, 1).
• Let ϕ(u) = (1/\sqrt{2\pi}) \exp(-u^2/2) and h_n = h_1/\sqrt{n} (n > 1), where h_1 is a known parameter.
• Thus

p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_n}\, \varphi\!\left(\frac{x - x_i}{h_n}\right)

is an average of normal densities centered at the samples x_i.
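A minimal sketch of this Gaussian-window estimator (the names are our own; assumes NumPy):

import numpy as np

def gaussian_parzen(x: float, samples: np.ndarray, h1: float) -> float:
    """p_n(x) with phi(u) = exp(-u^2/2)/sqrt(2*pi) and h_n = h1/sqrt(n):
    an average of normal densities centered at the samples x_i."""
    n = samples.size
    hn = h1 / np.sqrt(n)
    u = (x - samples) / hn
    phi = np.exp(-u ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    return float(phi.mean() / hn)

# p(x) ~ N(0, 1): the estimate at x = 0 should approach the peak value ~0.399.
rng = np.random.default_rng(3)
samples = rng.standard_normal(1_000)
print(gaussian_parzen(0.0, samples, h1=1.0))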

Page 24: Non-Parametric Techniques
Page 25: Non-Parametric Techniques
Page 26: Non-Parametric Techniques

Probabilistic Neural Network

• The PNN for this case has:
• 1. d input units comprising the input layer,
• 2. n pattern units comprising the pattern layer,
• 3. c category units.
• Each input unit is connected to each pattern unit.
• Each pattern unit is connected to one and only one category unit, corresponding to the category of its training sample.
• The connections from the input units to the pattern units have modifiable weights w, which will be learned during training.

Page 27: Non-Parametric Techniques

[Figure: input layer x_1, …, x_d fully connected to the pattern layer p_1, …, p_n through weights w_11, …, w_dn]

Page 28: Non -Parametric Techniques

p1

p2...

Patternslayer

p

.

.

.ω1

ω2..Category

.

.

pn

pk... ..

ωc.Category units.

.

Activations (Emission of nonlinear functions)

Page 29: Non-Parametric Techniques

Page 30: Non-Parametric Techniques

PNN training

• The training procedure consists of three simple steps (sketched in code below):
• 1) Normalize the training feature vectors so that ||x_i|| = 1 for all i = 1…n.
• 2) Set the weight vector w_i = x_i for all i = 1…n; w_i consists of the weights connecting the input units to the ith pattern unit.
• 3) Connect pattern unit i to the category unit corresponding to the category of x_i, for all i = 1…n.
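A minimal sketch of these three steps (our own illustration; X holds one training vector per row, and labels holds integer category indices):

import numpy as np

def pnn_train(X: np.ndarray, labels: np.ndarray):
    """PNN training: one pattern unit per training sample.
    Step 1: normalize each feature vector to unit length.
    Step 2: use the normalized vector itself as the unit's weights (w_i = x_i).
    Step 3: remember which category unit each pattern unit links to."""
    W = X / np.linalg.norm(X, axis=1, keepdims=True)
    return W, labels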

Page 31: Non-Parametric Techniques

PNN Classification

1. Normalize the test pattern x and place it at the input units.
2. Each pattern unit computes the inner product, yielding the net activation

net_k = w_k^t\, x

and emits the nonlinear function

f(net_k) = \exp\!\left(\frac{net_k - 1}{\sigma^2}\right)

3. Each output unit sums the contributions from all pattern units connected to it:

P_n(x \mid \omega_j) = \sum_{i=1}^{n} \varphi_i \;\propto\; P(\omega_j \mid x)

4. Classify by selecting the maximum value of P_n(x | ω_j), j = 1, …, c (see the sketch below).

Page 32: Non-Parametric Techniques

Advantages of PNN

• Speed of learning
▫ Since w_k = x_k, it requires only a single pass through the training data.
• Time complexity
▫ For a parallel implementation it is O(1), since the inner products can be computed in parallel.
• New training patterns can be incorporated quite easily.

Page 33: Non-Parametric Techniques

Summary

• Non-parametric estimation can be applied to any distribution of data.
• The Parzen window method provides a better estimate of the pdf than forcing a parametric form.
• The estimate depends on the number of samples and on the Parzen window size.
• The PNN gives an efficient implementation of the Parzen window method.

Page 34: Non-Parametric Techniques

References

• R.J. Schalkoff (1992). Pattern Recognition: Statistical, Structural, and Neural Approaches. Wiley.
• R.O. Duda, P.E. Hart and D.G. Stork (2001). Pattern Classification, 2nd Edition. Wiley.