Linear Regression, Regularization, Bias-Variance Tradeoff
rgreiner/C-466/SLIDES/3b-Regression.pdf (58 slides)

Transcript of "Linear Regression, Regularization, Bias-Variance Tradeoff"

Page 1:

Linear Regression,

Regularization

Bias-Variance Tradeoff

HTF: Ch3, 7

B: Ch3

Thanks to C Guestrin, T Dietterich, R Parr, N Ray

Page 2:

Outline

• Linear Regression
  - MLE = Least Squares!
  - Basis functions
• Evaluating Predictors
  - Training set error vs Test set error
  - Cross Validation
• Model Selection
  - Bias-Variance analysis
  - Regularization, Bayesian Model

Page 3:

What is the best choice of Polynomial?

Noisy Source Data

Page 4:

Fits using polynomials of degree 0, 1, 3, 9
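To make the comparison concrete, here is a minimal Python sketch (assuming NumPy; the sample size, noise level, and seed are illustrative choices, not from the slides) that fits polynomials of degree 0, 1, 3, and 9 to noisy samples of the source function used later on slide 9:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 15))                        # noisy training samples
    t = x + 2 * np.sin(1.5 * x) + rng.normal(0, 0.2, x.size)   # source fn from slide 9

    for degree in (0, 1, 3, 9):
        coeffs = np.polyfit(x, t, degree)                      # least-squares polynomial fit
        train_mse = np.mean((np.polyval(coeffs, x) - t) ** 2)
        print(f"degree {degree}: training MSE = {train_mse:.3f}")

Training error falls monotonically with degree (the polynomial spaces are nested), but the degree-9 fit chases the noise, which is the over-fitting discussed on the next slide.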

Page 5:

Comparison

• Degree 9 is the best match to the samples (over-fitting)
• Degree 3 is the best match to the source
• Performance on test data: [plot omitted]

Page 6:

What went wrong?

• A bad choice of polynomial?
• Not enough data?
  - Yes

Page 7:

Terms

• x – input variable
• x* – new input variable
• h(x) – "truth": the underlying response function
• t = h(x) + ε – actual observed response
• y(x; D) – predicted response, based on the model learned from dataset D
• ŷ(x) = E_D[ y(x; D) ] – expected response, averaged over (models based on) all datasets
• E_err = E_{D,(x*,t*)}[ (t* – y(x*))² ] – expected L2 error on a new instance x*

Page 8:

Bias-Variance Analysis in Regression

• Observed value is t(x) = h(x) + ε, with ε ~ N(0, σ²)
  - normally distributed: mean 0, variance σ²
  - Note: h(x) = E[ t(x) | x ]
• Given training examples D = {(xᵢ, tᵢ)}, let y(·) = y(·; D) be the predicted function, based on the model learned using D
  - Eg, linear model y_w(x) = w ⋅ x + w₀, using w = MLE(D)

Page 9:

Example: 20 points

t = x + 2 sin(1.5x) + N(0, 0.2)

Page 10:

Bias-Variance Analysis

• Given a new data point x*:
  - return predicted response: y(x*)
  - observed response: t* = h(x*) + ε
• The expected prediction error is

    E_err = E_{D,(x*,t*)}[ (t* – y(x*))² ]

Page 11:

Expected Loss

• [y(x) – t]² = [y(x) – h(x) + h(x) – t]²
             = [y(x) – h(x)]² + 2 [y(x) – h(x)] [h(x) – t] + [h(x) – t]²
  - [y(x) – h(x)]²: mismatch between OUR hypothesis y(·) and the target h(·) … we can influence this
  - 2 [y(x) – h(x)] [h(x) – t]: expected value is 0, since h(x) = E[t|x]
  - [h(x) – t]²: noise in the distribution of the target … nothing we can do
• E_err = ∫ [y(x) – t]² p(x,t) dx dt
        = ∫ {y(x) − h(x)}² p(x) dx + ∫ {h(x) − t}² p(x,t) dx dt

Page 12:

Relevant Part of Loss

• Really y(x) = y(x; D), fit to data D … so consider the expectation over datasets D
  - Let ŷ(x) = E_D[ y(x; D) ]
• E_D[ {h(x) – y(x; D)}² ]
    = E_D[ {h(x) – ŷ(x) + ŷ(x) – y(x; D)}² ]
    = E_D[ {h(x) – ŷ(x)}² ] + 2 E_D[ {h(x) – ŷ(x)} {ŷ(x) – y(x; D)} ] + E_D[ {y(x; D) – ŷ(x)}² ]
    = {h(x) – ŷ(x)}²  +  E_D[ {y(x; D) – ŷ(x)}² ]
         Bias²               Variance
  (the cross term is 0, since E_D[ y(x; D) ] = ŷ(x))
• So in E_err = ∫ {y(x) − h(x)}² p(x) dx + ∫ {h(x) − t}² p(x,t) dx dt, the first integrand decomposes into Bias² + Variance.

Page 13:

50 fits (20 examples each)
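This figure can be reproduced numerically. A sketch (assuming NumPy; degree 3 is an illustrative choice, the source function is from slide 9, and the 0.2 there is read as the noise standard deviation) that performs 50 fits of 20 examples each and decomposes the error at a single test point:

    import numpy as np

    rng = np.random.default_rng(1)
    h = lambda x: x + 2 * np.sin(1.5 * x)       # true response h(x), from slide 9
    sigma = 0.2                                 # noise std (assumed interpretation)

    x_star, degree = 5.0, 3
    preds = np.empty(50)
    for i in range(50):                         # one fit per simulated dataset D
        x = rng.uniform(0, 10, 20)
        t = h(x) + rng.normal(0, sigma, 20)
        preds[i] = np.polyval(np.polyfit(x, t, degree), x_star)

    y_bar = preds.mean()                        # ŷ(x*) = E_D[ y(x*; D) ]
    bias2, variance, noise = (y_bar - h(x_star)) ** 2, preds.var(), sigma ** 2
    print(f"bias^2={bias2:.4f} variance={variance:.4f} noise={noise:.4f}")
    print(f"sum = {bias2 + variance + noise:.4f}")   # ≈ expected squared error at x*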

Page 14:

Bias, Variance, Noise

[Figure: decomposition of the 50 fits (20 examples each) into Bias, Variance, and Noise panels]

Page 15:

Understanding Bias:  { ŷ(x) – h(x) }²

• Measures how well our approximation architecture can fit the data
• Weak approximators (e.g., low-degree polynomials) will have high bias
• Strong approximators (e.g., high-degree polynomials) will have lower bias

Page 16:

Understanding Variance:  E_D[ { y(x; D) – ŷ(x) }² ]

• No direct dependence on target values
• For a fixed-size D:
  - strong approximators tend to have more variance … different datasets lead to DIFFERENT predictors
  - weak approximators tend to have less variance … slightly different datasets lead to SIMILAR predictors
• Variance will typically disappear as |D| → ∞

Page 17:

Summary of Bias,Variance,Noise

• E_err = E[ (t* – y(x*))² ]
        = E[ (y(x*) – ŷ(x*))² ]     … Variance
          + (ŷ(x*) – h(x*))²        … Bias²
          + E[ (t* – h(x*))² ]      … Noise
        = Var( y(x*) ) + Bias( y(x*) )² + Noise

  Expected prediction error = Variance + Bias² + Noise

Page 18:

Bias, Variance, and Noise

• Bias: ŷ(x*) – h(x*)
  - the systematic error of the average model ŷ(x*) [averaged over datasets]
• Variance: E_D[ ( y_D(x*) – ŷ(x*) )² ]
  - how much y_D(x*) varies from one training set D to another
• Noise: E[ (t* – h(x*))² ] = E[ε²] = σ²
  - how much t* varies from h(x*), since t* = h(x*) + ε
  - error that remains even given a PERFECT model and ∞ data

Page 19:

50 fits (20 examples each)

Page 20:

Predictions at x=2.0

Page 21:

50 fits (20 examples each)

Page 22:

Predictions at x = 5.0

[Figure: the 50 predictions at x = 5.0, with the Bias, the Variance, and the true value marked]

Page 23:

Observed Responses at x = 5.0

[Figure: noise plot showing the spread of observed responses t* around h(5.0), i.e., the Noise]

Page 24:

Model Selection: Bias-Variance

• C1 is “more expressive than” C2
  iff
  everything representable in C2 is also representable in C1 (“C2 ⊂ C1”)
• Eg, LinearFns ⊂ QuadraticFns
      0-HiddenLayerNNs ⊂ 1-HiddenLayerNNs
• ⇒ can ALWAYS get a fit at least as good using C1 as using C2
• But … sometimes better to look for y ∊ C2

[Figure: Venn diagram with C2 nested inside C1]

Page 25:

Standard Plots…

Page 26:

Why?

• C2 ⊂ C1 ⇒ for every y ∊ C2,
  there is some y′ ∊ C1 that is at-least-as-good-as y
• But given a limited sample, we might not find this best y′
• Approach: consider Bias² + Variance!!

Page 27:

Bias-Variance tradeoff – Intuition

• Model too “simple” ⇒ does not fit the data well
  - a biased solution
• Model too complex ⇒ small changes to the data change the predictor a lot
  - a high-variance solution

Page 28:

Bias-Variance Tradeoff

• Choice of hypothesis class introduces learning bias
  - more complex class ⇒ less bias
  - more complex class ⇒ more variance

Page 29:

[Figure: squared-error curves vs model complexity, with components labeled ~Bias² and ~Variance]

Page 30:

• Behavior of test-sample and training-sample error as a function of model complexity:
  - light blue curves show the training error err
  - light red curves show the conditional test error Err_T, for 100 training sets of size 50 each
  - solid curves show the expected test error Err and the expected training error E[err]

Page 31:

Empirical Study…

• Based on different regularizers

Page 32:

Effect of Algorithm Parameters on Bias and Variance

• k-nearest neighbor:
  - increasing k typically increases bias and reduces variance
• decision trees of depth D:
  - increasing D typically increases variance and reduces bias
• RBF SVM with parameter σ:
  - increasing σ typically increases bias and reduces variance

Page 33:

Least Squares Estimator

• Truth: f(x) = xᵀβ.  Observed: y = f(x) + ε, with E[ε] = 0
• Least squares estimator: f̂(x₀) = x₀ᵀβ̂, where β̂ = (XᵀX)⁻¹Xᵀy
  - Unbiased: f(x₀) = E[ f̂(x₀) ], since

    f(x₀) − E[ f̂(x₀) ]
      = x₀ᵀβ − E[ x₀ᵀ(XᵀX)⁻¹Xᵀy ]
      = x₀ᵀβ − E[ x₀ᵀ(XᵀX)⁻¹Xᵀ(Xβ + ε) ]
      = x₀ᵀβ − E[ x₀ᵀβ + x₀ᵀ(XᵀX)⁻¹Xᵀε ]
      = x₀ᵀβ − x₀ᵀβ − x₀ᵀ(XᵀX)⁻¹Xᵀ E[ε]  =  0

  (Here X is the N × K design matrix: N data points, each row a datapoint with K component values; its columns are x₁, …, x_k.)
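A quick numerical check of the unbiasedness claim, as a sketch (assuming NumPy; the dimensions, true β, and noise level are made-up illustration values):

    import numpy as np

    rng = np.random.default_rng(2)
    N, K = 50, 3
    X = rng.normal(size=(N, K))              # fixed N x K design matrix
    beta = np.array([1.0, -2.0, 0.5])        # true coefficients (illustrative)
    x0 = rng.normal(size=K)                  # query point

    estimates = []
    for _ in range(10_000):                  # many independent draws of the noise
        y = X @ beta + rng.normal(0, 1.0, N)
        beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X^T X)^{-1} X^T y
        estimates.append(x0 @ beta_hat)

    print(np.mean(estimates), x0 @ beta)     # average f̂(x0) ≈ f(x0) = x0ᵀβ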

Page 34:

Gauss-Markov Theorem

• Least squares estimator: f̂(x₀) = x₀ᵀ(XᵀX)⁻¹Xᵀy
  - … is unbiased: f(x₀) = E[ f̂(x₀) ]
  - … is linear in y: f̂(x₀) = c₀ᵀy, where c₀ᵀ = x₀ᵀ(XᵀX)⁻¹Xᵀ
• Gauss-Markov Theorem:
  The least squares estimate has the minimum variance among all linear unbiased estimators.
  - BLUE: Best Linear Unbiased Estimator
• Interpretation: let g(x₀) be any other …
  - unbiased estimator of f(x₀) … ie, E[ g(x₀) ] = f(x₀)
  - that is linear in y … ie, g(x₀) = cᵀy
  then Var[ f̂(x₀) ] ≤ Var[ g(x₀) ]

Page 35:

Variance of Least Squares Estimator

• Least squares estimator: f̂(x₀) = x₀ᵀβ̂, with β̂ = (XᵀX)⁻¹Xᵀy
  (model: y = f(x) + ε, E[ε] = 0, var(ε) = σ_ε²)
• Variance:

    E[ ( f̂(x₀) – E[f̂(x₀)] )² ] = E[ ( f̂(x₀) – f(x₀) )² ]
      = E[ ( x₀ᵀ(XᵀX)⁻¹Xᵀy − x₀ᵀβ )² ]
      = E[ ( x₀ᵀ(XᵀX)⁻¹Xᵀ(Xβ + ε) − x₀ᵀβ )² ]
      = E[ ( x₀ᵀβ + x₀ᵀ(XᵀX)⁻¹Xᵀε − x₀ᵀβ )² ]
      = E[ ( x₀ᵀ(XᵀX)⁻¹Xᵀε )² ]
      = σ_ε² p/N on average … in the “in-sample error” model …
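The final step is the standard averaging argument; in LaTeX (notation as above, with p features and N training points):

    \mathrm{E}\big[(x_0^{\top}(X^{\top}X)^{-1}X^{\top}\varepsilon)^2\big]
      = \sigma_\varepsilon^2 \, x_0^{\top}(X^{\top}X)^{-1}x_0

    \frac{1}{N}\sum_{i=1}^{N} x_i^{\top}(X^{\top}X)^{-1}x_i
      = \frac{1}{N}\operatorname{tr}\big((X^{\top}X)^{-1}X^{\top}X\big)
      = \frac{p}{N}

so, averaged over the N training inputs x_i, the variance is σ_ε² p/N.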

Page 36:

Trading off Bias for Variance

• What is the best estimator for the given linear additive model?
• The least squares estimator f̂(x₀) = x₀ᵀβ̂, β̂ = (XᵀX)⁻¹Xᵀy,
  is BLUE: the Best Linear Unbiased Estimator
  - optimal variance, wrt unbiased estimators
  - but its variance is O( p / N ) …
• So with FEWER features, smaller variance … albeit with some bias??

Page 37:

Feature Selection

• The LS solution can have large variance
  - variance ∝ p (#features)
• Decrease p ⇒ decrease variance … but increase bias
• If it decreases test error, do it! ⇒ Feature selection
• A small #features also means:
  - easier to interpret

Page 38:

Statistical Significance Test

• Y = β₀ + Σⱼ βⱼ Xⱼ
• Q: Which Xⱼ are relevant?
  A: Use statistical hypothesis testing!
• Use the simple model: Y = β₀ + Σⱼ βⱼ Xⱼ + ε, with ε ~ N(0, σₑ²)
• Here: β̂ ~ N( β, (XᵀX)⁻¹ σₑ² )
• Use the z-score

    zⱼ = β̂ⱼ / ( σ̂ √vⱼ ),    σ̂² = ( 1/(N − p − 1) ) Σᵢ ( yᵢ − ŷᵢ )²

  where vⱼ is the jth diagonal element of (XᵀX)⁻¹
• Keep variable Xⱼ if zⱼ is large … (see the sketch below)
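As a sketch in code (assuming NumPy; the simulated data, with one deliberately irrelevant feature, are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    N, p = 100, 3
    X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])  # intercept + p features
    beta_true = np.array([0.5, 2.0, 0.0, -1.0])   # the X2 coefficient is truly 0
    y = X @ beta_true + rng.normal(0, 1.0, N)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    sigma_hat = np.sqrt(resid @ resid / (N - p - 1))         # σ̂ as defined above
    z = beta_hat / (sigma_hat * np.sqrt(np.diag(XtX_inv)))   # z_j = β̂_j / (σ̂ √v_j)
    print(z)    # a small |z_j| suggests X_j is irrelevant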

Page 39:

Measuring Bias and Variance

• In practice (unlike in theory), we have only ONE training set D
• Simulate multiple training sets by bootstrap replicates
  - D′ = { x | x is drawn at random, with replacement, from D }
  - |D′| = |D|

Page 40:

Estimating Bias / Variance

[Diagram: original data S → bootstrap replicate S₁ → LearningAlg → hypothesis h₁ → h₁'s predictions { h₁(x) | x ∈ T₁ }, where T₁ = S \ S₁]

Page 41:

Estimating Bias / Variance

• Each Sᵢ is a bootstrap replicate
• Tᵢ = S \ Sᵢ
• hᵢ = hypothesis, based on Sᵢ

[Diagram: original data S → bootstrap replicates S₁, …, S_b → LearningAlg → hypotheses h₁, …, h_b → each hᵢ's predictions { hᵢ(x) | x ∈ Tᵢ } on its held-out set Tᵢ]

Page 42:

Average Response for each xⱼ

            x₁                       …    x_r
    ∈? T₁   h₁(x₁)                   …    …
    ∈? T₂   --                       …    h₂(x_r)
    ⋮       ⋮                              ⋮
    ∈? T_b  h_b(x₁)                  …    h_b(x_r)
    avg     h(x₁) = 1/k₁ Σᵢ hᵢ(x₁)   …    h(x_r) = 1/k_r Σᵢ hᵢ(x_r)

(An entry hᵢ(xⱼ) appears only when xⱼ ∈ Tᵢ; “--” means xⱼ was not out-of-bag for that replicate, and kⱼ counts the entries in column j.)

    h(xⱼ) = Σ_{i: xⱼ ∈ Tᵢ} hᵢ(xⱼ) / |{ i: xⱼ ∈ Tᵢ }|

Page 43:

Procedure for Measuring Bias and Variance

• Construct B bootstrap replicates of S: S₁, …, S_B
• Apply the learning alg to each replicate S_b to obtain hypothesis h_b
• Let T_b = S \ S_b = the data points not in S_b (the out-of-bag points)
• Compute the predicted value h_b(x) for each x ∈ T_b

Page 44:

Estimating Bias and Variance

• For each x ∈ S, collect
  - the observed response y
  - the predictions y₁, …, y_k from the hypotheses whose out-of-bag set contains x
• Compute the average prediction: h(x) = avgᵢ { yᵢ }
• Estimate bias: h(x) – y
• Estimate variance: Σ_{i: x ∈ Tᵢ} ( hᵢ(x) – h(x) )² / (k − 1)
• Assume noise is 0
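Putting the whole procedure together, a sketch (assuming NumPy; the degree-3 polynomial learner, the dataset size, and B = 200 replicates are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, 40)                       # one dataset S
    y = x + 2 * np.sin(1.5 * x) + rng.normal(0, 0.2, x.size)
    B, degree = 200, 3

    preds = [[] for _ in range(x.size)]              # out-of-bag predictions per point
    for _ in range(B):
        idx = rng.integers(0, x.size, x.size)        # bootstrap replicate S_b
        oob = np.setdiff1d(np.arange(x.size), idx)   # T_b = S \ S_b
        coeffs = np.polyfit(x[idx], y[idx], degree)  # hypothesis h_b
        for j in oob:
            preds[j].append(np.polyval(coeffs, x[j]))

    bias2, var = [], []
    for j, pj in enumerate(preds):
        if len(pj) < 2:
            continue                                 # need k >= 2 predictions
        pj = np.array(pj)
        bias2.append((pj.mean() - y[j]) ** 2)        # squared bias estimate (noise assumed 0)
        var.append(pj.var(ddof=1))                   # Σ (h_i(x) - h(x))² / (k-1)
    print(f"avg bias^2 = {np.mean(bias2):.4f}, avg variance = {np.mean(var):.4f}")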

Page 45:

Outline

• Linear Regression
  - MLE = Least Squares!
  - Basis functions
• Evaluating Predictors
  - Training set error vs Test set error
  - Cross Validation
• Model Selection
  - Bias-Variance analysis
  - Regularization, Bayesian Model

Page 46:

Regularization

• Idea: penalize overly-complicated answers
• Regular regression minimizes:

    Σᵢ ( tᵢ − y(xᵢ; w) )²

• Regularized regression minimizes:

    Σᵢ ( tᵢ − y(xᵢ; w) )² + λ wᵀw

  Note: may exclude constants from the norm

Page 47:

Regularization: Why?

• For polynomials, extreme curves typically require extreme coefficient values
• In general, regularization encourages the use of few features
  - only features that lead to a substantial increase in performance
• Problem: how to choose λ?

Page 48:

Solving Regularized Form

Solving

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]²

gives

    w* = (XᵀX)⁻¹ Xᵀ t

Solving

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]² + λ Σⱼ wⱼ²

gives

    w* = (XᵀX + λI)⁻¹ Xᵀ t
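Both closed forms translate directly into code; a minimal sketch (assuming NumPy; the data and λ = 1.0 are illustrative):

    import numpy as np

    def ols(X, t):
        return np.linalg.solve(X.T @ X, X.T @ t)     # w* = (XᵀX)⁻¹ Xᵀ t

    def ridge(X, t, lam):
        p = X.shape[1]                               # w* = (XᵀX + λI)⁻¹ Xᵀ t
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ t)

    rng = np.random.default_rng(5)
    X = rng.normal(size=(30, 5))
    t = X @ np.array([1.0, 0.5, 0.0, 0.0, -1.0]) + rng.normal(0, 0.3, 30)
    print(ols(X, t))
    print(ridge(X, t, lam=1.0))    # weights shrink toward 0 as lambda grows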

Page 49:

Regularization: Empirical Approach

• Problem: the magic constant λ trades off complexity vs fit
• Solution 1:
  - generate multiple models
  - use lots of test data to discover and discard the bad models
• Solution 2: k-fold cross validation (sketched in code below):
  - divide the data S into k subsets { S₁, …, S_k }
  - create training set S₋ᵢ = S \ Sᵢ
    (produces k groups, each of size |S| (k − 1)/k)
  - for i = 1..k: train on S₋ᵢ, test on Sᵢ
  - combine results … mean? median? …
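A sketch of Solution 2 used to pick λ (assuming NumPy and the ridge closed form from the previous example; the candidate λ grid is illustrative):

    import numpy as np

    def ridge(X, t, lam):
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ t)

    def cv_mse(X, t, lam, k=5, seed=0):
        # k-fold CV: train on S_{-i} = S \ S_i, test on S_i, combine by the mean
        folds = np.array_split(np.random.default_rng(seed).permutation(len(t)), k)
        errs = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate(folds[:i] + folds[i + 1:])
            w = ridge(X[train], t[train], lam)
            errs.append(np.mean((X[test] @ w - t[test]) ** 2))
        return np.mean(errs)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 8))
    t = X @ rng.normal(size=8) + rng.normal(0, 0.5, 60)
    best = min([0.01, 0.1, 1.0, 10.0], key=lambda lam: cv_mse(X, t, lam))
    print("chosen lambda:", best)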

Page 50:

A Bayesian Perspective

• Given a space of possible hypotheses H = { hⱼ }
• Which hypothesis has the highest posterior?

    P(h|D) = P(D|h) P(h) / P(D)

• As P(D) does not depend on h:

    argmax_h P(h|D) = argmax_h P(D|h) P(h)

• “Uniform P(h)” ⇒ Maximum Likelihood Estimate
  - (the model under which the data has the highest probability)
• … can use P(h) for regularization …

Page 51:

Bayesian Regression

• Assume that, given x, the noise is iid Gaussian
  - homoscedastic noise model (same σ for each position)

Page 52:

Maximum Likelihood Solution

The likelihood is

    P(D|h) = P( t₁, …, t_m | x, w, σ )
           = ∏ᵢ ( 1/√(2πσ²) ) exp( −( tᵢ − y(xᵢ; w) )² / (2σ²) )

The MLE fit for the mean
• is just the linear regression fit
• does not depend upon σ²

Page 53:

Bayesian Learning of Gaussian Parameters

• Conjugate priors
  - mean: Gaussian prior
  - variance: Wishart distribution
• Prior for the mean: P(µ | η, λ)  [a Gaussian centered at η]

Remember this??

Page 54:

Bayesian Solution

• Introduce a prior distribution over weights:

    p(h) = p(w | λ) = N( w | 0, λ² I )

• The posterior now becomes:

    P(D|h) P(h) = P( t₁, …, t_m | x, w, σ ) P(w)
      = [ ∏ᵢ ( 1/√(2πσ²) ) e^( −( tᵢ − y(xᵢ; w) )² / (2σ²) ) ] · ( 1/(2πλ²) )^(k/2) e^( −wᵀw / (2λ²) )

Page 55:

Regularized Regression vs Bayesian Regression

• Regularized Regression minimizes:

    Σᵢ ( tᵢ − y(xᵢ; w) )² + κ wᵀw

• Bayesian Regression maximizes:

    [ ∏ᵢ ( 1/√(2πσ²) ) e^( −( tᵢ − y(xᵢ; w) )² / (2σ²) ) ] · ( 1/(2πλ²) )^(k/2) e^( −wᵀw / (2λ²) )

• These are identical (up to constants) … take the log of the Bayesian regression criterion:

    −( 1/(2σ²) ) Σᵢ ( tᵢ − y(xᵢ; w) )² − ( 1/(2λ²) ) wᵀw − const
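Taking that log explicitly, in LaTeX (a short sketch in the notation above; the constant κ absorbs the ratio of the two variances):

    -\log\big[ P(D \mid w)\, P(w) \big]
      = \frac{1}{2\sigma^2} \sum_i \big( t_i - y(x_i; w) \big)^2
      + \frac{1}{2\lambda^2}\, w^{\top} w + \text{const}

Multiplying by 2σ² (which does not change the minimizer) yields Σᵢ ( tᵢ − y(xᵢ; w) )² + κ wᵀw, with κ = σ²/λ².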

Page 56:

Viewing L2 Regularization

• Using a Lagrange multiplier …

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]² + λ Σⱼ wⱼ²

  ⇒

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]²   s.t.  Σⱼ wⱼ² ≤ ω

Page 57:

Use L2 vs L1 Regularization

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]² + λ Σⱼ |wⱼ|^q

  ⇒

    w* = argmin_w Σᵢ [ tᵢ − Σⱼ wⱼ xᵢⱼ ]²   s.t.  Σⱼ |wⱼ|^q ≤ ω

For q = 1 the constraint region has corners on the axes, so the loss contours often first touch it on an axis … so some wⱼ = 0 !!
LASSO!  (see the sketch below)
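The zeroing effect is easy to observe numerically; a sketch (assuming scikit-learn's Lasso and Ridge estimators; the data and the penalty strengths alpha are illustrative):

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(7)
    X = rng.normal(size=(100, 10))
    w_true = np.zeros(10)
    w_true[:3] = [2.0, -1.0, 0.5]                # only 3 of 10 features matter
    y = X @ w_true + rng.normal(0, 0.5, 100)

    ridge = Ridge(alpha=1.0).fit(X, y)           # L2 penalty: shrinks all weights
    lasso = Lasso(alpha=0.1).fit(X, y)           # L1 penalty: zeroes many weights exactly
    print("ridge nonzero:", np.sum(ridge.coef_ != 0))   # typically all 10
    print("lasso nonzero:", np.sum(lasso.coef_ != 0))   # typically about 3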

Page 58:

What you need to know

• Regression
  - optimizing sum squared error == MLE!
  - basis functions = features
  - relationship between regression and Gaussians
• Evaluating Predictors
  - TestSetError ≠ Prediction Error
  - Cross Validation
• Bias-Variance trade-off
  - model complexity …
• Regularization ≈ Bayesian modeling
  - L1 regularization prefers 0 weights!

Play with Applet