CSC321 Lecture 2
A Simple Learning Algorithm: Linear Regression

Roger Grosse and Nitish Srivastava

January 7, 2015


Outline

In this lecture we will:

- See a simple example of a machine learning model: linear regression.
- Learn how to formulate a supervised learning problem.
- Learn how to train the model.

It's not a neural net algorithm, but it will provide a lot of useful intuition for algorithms we will cover in this course.



A Machine Learning Problem

Suppose we are given some data about basketball players:

Height in feet | Avg points scored per game
6.8            | 9.2
6.3            | 11.7
6.4            | 15.8
6.2            | 8.6
...            | ...

[Scatter plot: height in feet (5.8 to 7.6) on the x-axis vs. avg points scored per game (5 to 30) on the y-axis.]

What is the predicted number of points scored by a new player who is 6.5 feet tall?



Formulate as a Supervised Learning Problem

We are given labelled examples (the training set):
Inputs: {x^(1), x^(2), ..., x^(N)} - height in feet.
Targets: {t^(1), t^(2), ..., t^(N)} - avg points scored per game.

Choose a model ≡ make an assumption about the data's behaviour. Let's say we choose:

y = wx + b

We call w the weight and b the bias. These are the trainable parameters.

Learning: extract knowledge from the data to learn the model.

{x^(1), ..., x^(N)}, {t^(1), ..., t^(N)}  →  w, b

Inference: given a new x and the learned model, make a prediction y.

x, (w, b)  →  y
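To make the setup concrete, here is a minimal NumPy sketch (the array names and the predict helper are ours, not from the slides): the training set is just two arrays, and the model is a one-line function.

    import numpy as np

    # The training set from the table above: inputs x and targets t.
    x = np.array([6.8, 6.3, 6.4, 6.2])    # height in feet
    t = np.array([9.2, 11.7, 15.8, 8.6])  # avg points scored per game

    def predict(x, w, b):
        """The model: y = wx + b."""
        return w * x + b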


Learning

Design an objective function (or loss function) that is:

- Minimized when the model does what you want it to do.
- Easy to minimize (smooth, well-behaved).

Here we want y = wx + b to be close to t, for every training case. Therefore one choice could be

L(w, b) = 1/2 ∑_{i=1}^{N} (wx^(i) + b − t^(i))^2

This is called squared loss. We need to find w, b such that L(w, b) is minimized.
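Written as code, the loss is only a few lines; this sketch reuses the x and t arrays defined earlier (the function name squared_loss is ours).

    def squared_loss(w, b, x, t):
        """L(w, b) = 1/2 * sum_i (w x^(i) + b - t^(i))^2."""
        residuals = w * x + b - t  # prediction error on each training case
        return 0.5 * np.sum(residuals ** 2)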


Learning

L(w, b) = 1/2 ∑_i (wx^(i) + b − t^(i))^2

∂L/∂w = ∑_i (wx^(i) + b − t^(i)) x^(i)

∂L/∂b = ∑_i (wx^(i) + b − t^(i))

Since L is a nonnegative quadratic function in w and b, any critical point is a minimum. Therefore, we minimize L by setting

∂L/∂w = 0,    ∂L/∂b = 0.


Learning

w (∑_i x^(i) · x^(i)) + b (∑_i x^(i)) − (∑_i t^(i) x^(i)) = 0

w (∑_i x^(i)) + bN − (∑_i t^(i)) = 0

Now we have 2 linear equations and 2 unknowns w and b. Solve!
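The "Solve!" step can be done numerically; a sketch using the x and t arrays from earlier (not the lecture's own code), writing the two equations above as a 2×2 system:

    # Coefficients of the two linear equations above: A @ [w, b] = c.
    N = len(x)
    A = np.array([[np.sum(x * x), np.sum(x)],
                  [np.sum(x),     N]])
    c = np.array([np.sum(t * x), np.sum(t)])

    w, b = np.linalg.solve(A, c)  # the unique minimizer of L(w, b)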


Inference

[Scatter plot with the fitted line: height in feet (5.8 to 7.6) vs. avg points scored per game (5 to 30).]

To make a prediction about a new player, just use y = wx + b.
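Continuing the earlier sketch, answering the opening question is one call to the model (w and b come from the solve step above):

    x_new = 6.5                   # the new 6.5-foot player
    y_new = predict(x_new, w, b)  # y = w * x_new + b
    print(f"Predicted avg points per game: {y_new:.1f}")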


Multi-variable Linear Regression

Multi-variable: instead of x ∈ R, we have x = (x_1, x_2, ..., x_M) ∈ R^M.

For example:

Height in feet (x_1) | Weight in pounds (x_2) | Avg points scored per game (t)
6.8                  | 225                    | 9.2
6.3                  | 180                    | 11.7
6.4                  | 190                    | 15.8
6.2                  | 180                    | 8.6
...                  | ...                    | ...

Choose a model:

y = w_1 x_1 + w_2 x_2 + ... + w_M x_M + b = w^T x + b

Parameters to be learned: w, b.

Objective function:

L(w, b) = ∑_{i=1}^{N} (w^T x^(i) + b − t^(i))^2
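In code the only change is that each input becomes a row of a matrix; a minimal sketch (the names X and predict_multi are ours):

    # One row per player; columns are the features (height, weight).
    X = np.array([[6.8, 225],
                  [6.3, 180],
                  [6.4, 190],
                  [6.2, 180]])

    def predict_multi(X, w, b):
        """The model y = w^T x + b, applied to every row of X at once."""
        return X @ w + b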


Multi-variable Linear Regression

We can use more general basis functions (also called “features”).

y = w_1 φ_1(x) + w_2 φ_2(x) + ... + w_M φ_M(x) = w^T Φ(x)

Parameters to be learned: w.

For example, 1-D polynomial fitting:

φ_0(x) = 1
φ_1(x) = x
φ_2(x) = x^2
φ_3(x) = x^3
...
φ_M(x) = x^M

y = w_0 φ_0(x) + w_1 φ_1(x) + w_2 φ_2(x) + ... + w_M φ_M(x) = w^T Φ(x)

(the first term, w_0 φ_0(x) = w_0, plays the role of the bias)

Note: linear regression means linear in the parameters w, not linear in x.
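A sketch of the polynomial feature map using NumPy's Vandermonde helper (the name poly_features is ours); note the bias is handled by the constant column φ_0 = 1:

    def poly_features(x, M):
        """Phi(x) = (1, x, x^2, ..., x^M) for each entry of x; shape (N, M+1)."""
        return np.vander(x, M + 1, increasing=True)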


Learning Multi-variable Linear Regression

Feature matrix (one row per training case):

Φ = [ Φ(x^(1))^T
      Φ(x^(2))^T
      ...
      Φ(x^(N))^T ]

Vector of predictions:

Φw = [ w^T Φ(x^(1))
       w^T Φ(x^(2))
       ...
       w^T Φ(x^(N)) ]

Objective function:

L(w) = 1/2 ∑_i (w^T Φ(x^(i)) − t^(i))^2 = 1/2 ||Φw − t||^2


Learning Multi-variable Linear Regression

Optimum occurs where

∇_w L(w) = Φ^T (Φw − t) = 0

Therefore,

Φ^T Φ w − Φ^T t = 0

w = (Φ^T Φ)^{-1} Φ^T t

Question: when will Φ^T Φ be invertible?
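A sketch of the closed-form solution in NumPy, reusing poly_features and the x, t arrays from earlier (degree 3 is an arbitrary choice here):

    Phi = poly_features(x, M=3)                  # N x (M+1) feature matrix
    w = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)  # w = (Phi^T Phi)^{-1} Phi^T t

    # In practice np.linalg.lstsq(Phi, t, rcond=None) is numerically safer,
    # and it still returns a sensible answer when Phi^T Phi is singular.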


Fitting polynomials

[Figure: training data; x on the horizontal axis (0 to 1), t on the vertical axis (−1 to 1).]

- Pattern Recognition and Machine Learning, Christopher Bishop.


Fitting polynomials

y = w_0

[Figure: M = 0 fit to the same data; x from 0 to 1, t from −1 to 1.]

- Pattern Recognition and Machine Learning, Christopher Bishop.


Fitting polynomials

y = w_0 + w_1 x

[Figure: M = 1 fit; x from 0 to 1, t from −1 to 1.]

- Pattern Recognition and Machine Learning, Christopher Bishop.


Fitting polynomials

y = w_0 + w_1 x + w_2 x^2 + w_3 x^3

[Figure: M = 3 fit; x from 0 to 1, t from −1 to 1.]

- Pattern Recognition and Machine Learning, Christopher Bishop.


Fitting polynomials

y = w_0 + w_1 x + w_2 x^2 + w_3 x^3 + ... + w_9 x^9

[Figure: M = 9 fit; x from 0 to 1, t from −1 to 1.]

- Pattern Recognition and Machine Learning, Christopher Bishop.


Model selection

Underfitting: the model is too simple and does not fit the data.

[Figure: M = 0 fit; x from 0 to 1, t from −1 to 1.]

Overfitting: the model is too complex; it fits the training data perfectly but does not generalize.

[Figure: M = 9 fit; x from 0 to 1, t from −1 to 1.]


Model selection

Need to select a model which is neither too simple nor too complex.

[Figure: M = 3 fit; x from 0 to 1, t from −1 to 1.]

Later in this course, we will talk more about controlling model complexity.
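One common recipe for choosing M is to hold out some data and compare validation error across degrees. A rough sketch on synthetic data of the same flavour as Bishop's figures (a noisy sinusoid; the specific numbers are made up purely for illustration):

    rng = np.random.default_rng(0)
    x_all = rng.uniform(0.0, 1.0, 20)
    t_all = np.sin(2 * np.pi * x_all) + 0.3 * rng.standard_normal(20)
    x_tr, t_tr = x_all[:10], t_all[:10]  # training half
    x_va, t_va = x_all[10:], t_all[10:]  # held-out validation half

    for M in [0, 1, 3, 9]:
        # Fit by least squares on the training half only.
        w = np.linalg.lstsq(poly_features(x_tr, M), t_tr, rcond=None)[0]
        val_err = np.mean((poly_features(x_va, M) @ w - t_va) ** 2)
        print(f"M = {M}: validation error = {val_err:.3f}")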


Next class

Another machine learning model, an early neural net: the Perceptron.

[Photo: Frank Rosenblatt with the image sensor (left) of the Mark I Perceptron.]

Reminder: do the quizzes for video lectures A and B by 11:59 pm next Monday.
