Page 1

CSC321 Tutorial 5 part B: Assignment 2:

neural networks for classifying handwritten digits (part 1 & 2)

and Bayesian neural net (part 3)

Yue Li
Email: [email protected]

Wed 11-12, Feb 12
Fri 10-11, Feb 14

Page 2

• In this assignment you will be using neural networks for classifying handwritten digits.

• First copy and unzip the archive below into your directory:

http://www.cs.toronto.edu/~bonner/courses/2014s/csc321/assignments/hw2_matlab.zip

• An easy way to do this is as follows:

$ wget http://www.cs.toronto.edu/~bonner/courses/2014s/csc321/assignments/hw2_matlab.zip
$ unzip hw2_matlab.zip
$ cd hw2_matlab
$ matlab (or matlab -nojvm)

Page 3

>> load assign2data2012.mat;

• This will load the training and test data into your workspace.

• The training data consists of 150 images of handwritten digits (15 of each class). Each image is of size 16 × 16 pixels and each pixel is represented by a real number between 0 (black) and 1 (white).

• The test data contains 3000 images (300 of each class).

In order to get a feel for what we are dealing with, you can see the training data by typing:

>> showimages(data);

Page 4

• The network we use consists of a single hidden layer with sigmoidal activation units and a 10-way softmax output unit.

• The network tries to minimize cross-entropy error (the error on a case is the negative log probability that the network gives to the correct answer).

• It uses momentum to speed the learning and weightcost to keep the weights small.

Page 5

Neural Network Schematics (with the corresponding variables in classbp2.m):

• 16 × 16 = 256 input units x_i (variable: data)
• 256 × 100 input-to-hidden weights w_ij, fully connected between input and hidden units (variable: inhid)
• 100 hidden units, with total inputs y_j (variable: hidsum) and 100 hidden activities h_j (variable: hidacts)
• 100 × 10 hidden-to-output weights w_jk, fully connected between hidden and output units (variable: hidout)
• 10 output units, with total inputs z_k (variable: outsum) and softmax outputs a_k (variable: output)

Page 6

Forward Pass

Math:   y_j = \sum_{i=1}^{256} w_{ij} x_i + b_j
Matlab: hidsum = data*inhid + repmat(hidbiases, numcases, 1);

Math:   h_j = 1/(1 + e^{-y_j})
Matlab: hidacts = 1./(1+exp(-hidsum));

Math:   z_k = \sum_{j=1}^{100} w_{jk} h_j + b_k
Matlab: outsum = hidacts*hidout + repmat(outbiases, numcases, 1);

Math:   a_k = e^{z_k} / \sum_{k'=1}^{10} e^{z_{k'}}
Matlab: unnormalisedoutputs = exp(outsum);
        rowsums = sum(unnormalisedoutputs, 2);
        outputs = unnormalisedoutputs ./ repmat(rowsums, 1, numout);
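Put together, the forward pass can be run end to end. Below is a minimal standalone sketch; the random data and weight initialisations are stand-ins for the real variables that classbp2.m sets up, not the assignment's actual initialisation:

% Minimal forward-pass sketch. Shapes follow the schematic on page 5;
% the random values are stand-ins so the snippet runs on its own.
numcases = 150; numin = 256; numhid = 100; numout = 10;
data      = rand(numcases, numin);        % stand-in for the 16x16 images
inhid     = 0.1*randn(numin, numhid);     % input-to-hidden weights w_ij
hidbiases = zeros(1, numhid);             % hidden biases b_j
hidout    = 0.1*randn(numhid, numout);    % hidden-to-output weights w_jk
outbiases = zeros(1, numout);             % output biases b_k

hidsum  = data*inhid + repmat(hidbiases, numcases, 1);      % y_j
hidacts = 1./(1 + exp(-hidsum));                            % h_j (sigmoid)
outsum  = hidacts*hidout + repmat(outbiases, numcases, 1);  % z_k
unnormalisedoutputs = exp(outsum);
rowsums = sum(unnormalisedoutputs, 2);
outputs = unnormalisedoutputs ./ repmat(rowsums, 1, numout); % softmax a_k

After the softmax normalisation, each row of outputs sums to 1 across the 10 classes.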

Page 7

Cross-entropy and misclassification error

Math:   E = -\sum_{n=1}^{N^*} \sum_{k=1}^{10} t_k \log a_k
Matlab: E = - sum(sum(targets .* log(tiny+outputs)));

Math:   \hat{t} = \arg\max_k a_k
Matlab: testguesses = (toutputs - repmat((max(toutputs'))', 1, numout)) >= 0;

Math:   E_{misclass} = \sum_{n=1}^{N^*} (t_n - \hat{t}_n)
Matlab: terrors = sum(sum(testtargets)) - sum(sum(testtargets .* testguesses));

* N = 150 for training cases; N = 3000 for test cases.
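The same two measures in running form; a sketch that continues from the forward-pass snippet above. The one-hot targets are made up here, and the value of tiny is an assumption (classbp2.m defines its own small constant to guard against log(0)):

% Continues the forward-pass sketch. targets/testtargets are hypothetical
% one-hot labels; tiny's exact value is an assumption.
tiny = exp(-30);
targets = zeros(numcases, numout);
targets(sub2ind(size(targets), 1:numcases, randi(numout, 1, numcases))) = 1;

E = -sum(sum(targets .* log(tiny + outputs)));  % cross-entropy error

% Misclassification: the guess is the (row-wise) maximum output.
% max(toutputs,[],2) is equivalent to the slide's (max(toutputs'))'.
toutputs = outputs; testtargets = targets;      % pretend this is the test set
testguesses = (toutputs - repmat(max(toutputs, [], 2), 1, numout)) >= 0;
terrors = sum(sum(testtargets)) - sum(sum(testtargets .* testguesses));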

Page 8

Backward Pass

(Here dEbydoutsum = outputs - targets, since \partial E/\partial z_k = a_k - t_k for a softmax output with cross-entropy error.)

Math:   \partial E/\partial h_j = \sum_{k=1}^{10} (\partial E/\partial a_k)(\partial a_k/\partial z_k)(\partial z_k/\partial h_j) = \sum_{k=1}^{10} (a_k - t_k) w_{jk}
Matlab: dEbydhidacts = dEbydoutsum*(hidout');

Math:   \partial E/\partial y_j = (\partial E/\partial h_j)(\partial h_j/\partial y_j) = (\partial E/\partial h_j) h_j (1 - h_j)
Matlab: dEbydhidsum = dEbydhidacts .* hidacts.*(1-hidacts);

Math:   \partial E/\partial b_k = \sum_{n=1}^{N} \partial E/\partial z_k
Matlab: dEbydoutbiases = sum(dEbydoutsum);

Page 9

Backward Pass cont'd

Math:   \partial E/\partial w_{jk} = (\partial E/\partial a_k)(\partial a_k/\partial z_k)(\partial z_k/\partial w_{jk}) = (a_k - t_k) h_j
Matlab: dEbydhidout = hidacts'*dEbydoutsum;

Math:   \partial E/\partial b_j = \sum_{n=1}^{N^*} \partial E/\partial y_j
Matlab: dEbydhidbiases = sum(dEbydhidsum);

Math:   \partial E/\partial w_{ij} = (\partial E/\partial y_j)(\partial y_j/\partial w_{ij}) = (\partial E/\partial y_j) x_i
Matlab: dEbydinhid = data' * dEbydhidsum;
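All of the gradients in one place; a sketch continuing from the forward-pass snippet, starting from dEbydoutsum = outputs - targets as noted above:

% Backward-pass sketch, continuing from the forward-pass snippet.
dEbydoutsum    = outputs - targets;           % dE/dz_k = a_k - t_k
dEbydhidout    = hidacts' * dEbydoutsum;      % dE/dw_jk
dEbydoutbiases = sum(dEbydoutsum);            % dE/db_k
dEbydhidacts   = dEbydoutsum * hidout';       % dE/dh_j
dEbydhidsum    = dEbydhidacts .* hidacts .* (1 - hidacts);  % dE/dy_j
dEbydinhid     = data' * dEbydhidsum;         % dE/dw_ij
dEbydhidbiases = sum(dEbydhidsum);            % dE/db_j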

Page 10

Weight Update (related to parts 1 & 2)

(η = momentum, ε = epsilon (the learning rate), λ = weightcost.)

Math:   \Delta w_{ij}^{(t)} = \eta \Delta w_{ij}^{(t-1)} - \epsilon (\partial E/\partial w_{ij} + \lambda w_{ij}^{(t-1)})
Matlab: inhidinc = momentum*inhidinc - epsilon*(dEbydinhid + weightcost*inhid);

Math:   \Delta w_{jk}^{(t)} = \eta \Delta w_{jk}^{(t-1)} - \epsilon (\partial E/\partial w_{jk} + \lambda w_{jk}^{(t-1)})
Matlab: hidoutinc = momentum*hidoutinc - epsilon*(dEbydhidout + weightcost*hidout);

Page 11

Weight Update cont'd

Math:   \Delta b_j^{(t)} = \eta \Delta b_j^{(t-1)} - \epsilon \, \partial E/\partial b_j
Matlab: hidbiasesinc = momentum*hidbiasesinc - epsilon*dEbydhidbiases;

Math:   \Delta b_k^{(t)} = \eta \Delta b_k^{(t-1)} - \epsilon \, \partial E/\partial b_k
Matlab: outbiasesinc = momentum*outbiasesinc - epsilon*dEbydoutbiases;

Page 12

Weight Update cont'd

Math:   w_{ij}^{(t)} = w_{ij}^{(t-1)} + \Delta w_{ij}^{(t)}
Matlab: inhid = inhid + inhidinc;

Math:   w_{jk}^{(t)} = w_{jk}^{(t-1)} + \Delta w_{jk}^{(t)}
Matlab: hidout = hidout + hidoutinc;

Math:   b_j^{(t)} = b_j^{(t-1)} + \Delta b_j^{(t)}
Matlab: hidbiases = hidbiases + hidbiasesinc;

Math:   b_k^{(t)} = b_k^{(t-1)} + \Delta b_k^{(t)}
Matlab: outbiases = outbiases + outbiasesinc;
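One full update step, combining the last three slides; a sketch continuing from the gradient snippet above (the hyperparameter values are just examples):

% One parameter update, continuing from the backward-pass sketch.
momentum = 0.8; epsilon = 0.01; weightcost = 0;   % example settings
inhidinc = zeros(size(inhid));  hidoutinc = zeros(size(hidout));
hidbiasesinc = zeros(size(hidbiases)); outbiasesinc = zeros(size(outbiases));

% Momentum plus weight-decay gradient descent:
inhidinc  = momentum*inhidinc  - epsilon*(dEbydinhid  + weightcost*inhid);
hidoutinc = momentum*hidoutinc - epsilon*(dEbydhidout + weightcost*hidout);
hidbiasesinc = momentum*hidbiasesinc - epsilon*dEbydhidbiases;
outbiasesinc = momentum*outbiasesinc - epsilon*dEbydoutbiases;

inhid = inhid + inhidinc;               hidout = hidout + hidoutinc;
hidbiases = hidbiases + hidbiasesinc;   outbiases = outbiases + outbiasesinc;

In training, the forward pass, backward pass and this update are repeated once per epoch for maxepoch epochs.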

Page 13

The main Matlab file "classbp2.m" expects you to set several global variables that control these behaviours by hand (see the code file). This makes it easy to set up experiments in which you try many different settings:

>> restart = 1;

>> maxepoch = 2000;

>> numhid = 100;

>> epsilon = .01;

>> finalmomentum = 0.8;

>> weightcost = 0;

>> classbp2;

Page 14

• classbp2 trains the network.

• It prints out the cross-entropy (E) on the training set and on the test set.

• It also prints out the number of errors (it chooses the maximum output as the right answer).

• When it has finished, it plots graphs of the number of test errors and of the cross-entropy cost on the test set.

• You can set the variable errorprintfreq in classbp2 to make it measure the error as frequently as you want.

• You must always reset the restart variable to 1 whenever you want to redo the training from random initial weights. If you don't, the network will start learning from where it left off.

• If you call showweights(inhid) you can see the input-to-hidden weights as gray-level images.

Page 15

PART 1 (3 points)

• Using numhid=100, maxepoch=2000 and weightcost=0, play around with epsilon and finalmomentum to find settings that make tE low after 2000 epochs.

• Briefly report what you discover. Include the values of epsilon and finalmomentum that work best, and say what values they produce for the test errors and the cross-entropy error.

• If you were instead asked to find the epsilon that produced the best minimum value (not the best final value) for test set cross-entropy, would you expect to find a larger or smaller epsilon (assuming the minimum value occurs before the last epoch)? In a sentence, justify your answer.

• Suggestion: experiment with epsilon and finalmomentum within a reasonable range by fixing one and experimenting with the other variable: try a large range first (e.g. ε = 10^x, where x ∈ [−8, −1]) in order to find a smaller range which achieves the best tE. Then refine the search within that smaller range to find the best epsilon and finalmomentum. (A sketch of such a sweep follows below.)
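One way to run the coarse sweep is a small driver script. This is a sketch only: it assumes classbp2 (a script) reads the workspace variables below and leaves the final test cross-entropy in a variable named tE; check classbp2.m for the names it actually uses:

% Coarse sweep over epsilon = 10^x for x in [-8, -1].
% ASSUMED interface: classbp2 reads the variables set below and leaves
% the final test cross-entropy in tE (verify the name in classbp2.m).
exponents = -8:-1;
finalTE = zeros(size(exponents));
for i = 1:length(exponents)
    restart = 1;                   % always retrain from random weights
    maxepoch = 2000; numhid = 100;
    finalmomentum = 0.8; weightcost = 0;
    epsilon = 10^exponents(i);
    classbp2;
    finalTE(i) = tE;               % record the final test cross-entropy
end
semilogx(10.^exponents, finalTE, '-o');
xlabel('epsilon'); ylabel('final tE');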

Page 16

[Figure: Test Error vs Learning Rate. Two panels against ε (log10, x from −8 to −1): test cross-entropy (tE) and misclassification (terrors). finalmomentum fixed to 0.8. DEMO ONLY; try to do better than this.]

Page 17

[Figure: Final and Minimum Cross-entropy vs Learning Rate. Two panels against ε (log10, x from −8 to −1): final tE and min tE. finalmomentum fixed to 0.8. DEMO ONLY; try to do better than this.]

Page 18

[Figure: Final and Minimum Misclassification vs Learning Rate. Two panels against ε (log10, x from −8 to −1): final terrors and min terrors. finalmomentum fixed to 0.8. DEMO ONLY; try to do better than this.]

Page 19

PART 2 (2 points)

• Using numhid=100, maxepoch=2000 and finalmomentum=0.7, set epsilon to a sensible value based on your experiments in part 1 and then try various values for weightcost to see how it affects the final value of tE.

• You may find the file experiment.m useful, but you will have to edit it.

• Briefly report what you discovered and include a plot of the final value of tE against the weightcost.

Your report on Parts 1 and 2 combined should be NOT MORE THAN ONE PAGE long, but graphs and printouts of runs can be attached.

Page 20

[Figure: Final Cross-Entropy vs Weightcost. Final tE against weightcost ∈ [0, 0.5]; epsilon fixed to 10^−4, finalmomentum fixed to 0.8. DEMO ONLY; try to do better.]

Page 21

PART 3: (5 points)

• In this part we will try to find a Bayesian solution to a small toy problem. The net is shown below.

• Each data point will consist of 2 real-valued numbers x1 and x2.

• h1 and h2 represent sigmoidal units.

• We are interested in learning W = (w1, w2, w3, w4).

[Diagram: x1 and x2 feed h1 through weights w1 and w2; x1 and h1 feed h2 through weights w3 and w4; h2 predicts the target.]

Page 22

PART 3: (5 points)

\hat{t} = h_2

h_2 = 1/(1 + \exp(-z_2)), where z_2 = w_3 x_1 + w_4 h_1

h_1 = 1/(1 + \exp(-z_1)), where z_1 = w_1 x_1 + w_2 x_2

[Diagram: the same toy net as on the previous slide.]
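These equations can be coded directly. Below is a minimal sketch of what applyweights(W, [x1 x2]) presumably computes for a single case; the standalone version and the example numbers are assumptions based only on the slide's equations:

% Toy-net forward pass for one case, W = [w1 w2 w3 w4] (sketch).
W  = [0.5 -0.25 0.75 -1.0];   % example weight vector
x1 = 1.2; x2 = -0.4;          % example inputs
z1 = W(1)*x1 + W(2)*x2;
h1 = 1/(1 + exp(-z1));
z2 = W(3)*x1 + W(4)*h1;
h2 = 1/(1 + exp(-z2));        % t_hat = h2 = predicted P(target = 1)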

Page 23

• Recall that in a Bayesian setting, we have a prior probability distribution over parameters W. Each setting of the parameters to values constitutes a hypothesis.

• We have some prior belief about which hypotheses are more likely to be true. One such belief could be that hypotheses which have small values of the w's are more likely (also called a Gaussian Prior; bottom-left graph).

• Another belief could be that all hypotheses are equally likely (a Uniform Prior; bottom-right graph). In this assignment we will be working with a uniform prior over W.

[Figure: left, a Gaussian prior over each w_i; right, a uniform prior over each w_i; both plotted for w_i ∈ [−1, 1].]

Page 24

We then observe the data and modify our belief by weighting each hypothesis by its likelihood. In other words, the modified belief about hypothesis H is the prior about H multiplied by the likelihood that the data could be explained by H:

p(W|D) = p(W) p(D|W) / p(D)   (Bayes Rule)

where

• the weight vector is W = (w_1, w_2, w_3, w_4);

• the input data matrix D contains N training cases of (x_1, x_2) (i.e. an N × 2 matrix);

• p(W) is the prior (belief) for the weights W;

• p(D|W) = exp(−cost) is the likelihood under the weights W, where cost = −\sum_{i=1}^{N} [t_i \log \hat{t}_i + (1 − t_i) \log(1 − \hat{t}_i)];

• p(D) = \sum_W p(W) p(D|W) is the probability of the data, considered as a normalization factor;

• p(W|D) is the posterior probability of the weights.
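In code, the posterior is just the prior times the likelihood of each candidate weight vector, renormalised. A sketch, assuming allvecs (9^4 × 4, built by makeallvecs later on), data (numcases × 2), targets (1 × numcases) and an applyweights(W, data) helper that returns the net's output per case; the helper's exact behaviour is an assumption:

% Posterior over all candidate weight vectors (sketch; assumes allvecs,
% data, targets and applyweights exist as in the assignment code).
numvecs = size(allvecs, 1);
prior = ones(numvecs, 1) / numvecs;           % uniform p(W)
loglik = zeros(numvecs, 1);
for v = 1:numvecs
    that = applyweights(allvecs(v,:), data);  % predicted P(t=1) per case
    loglik(v) = sum(targets.*log(that) + (1-targets).*log(1-that));
end
posterior = prior .* exp(loglik - max(loglik)); % p(W)p(D|W), shifted by
posterior = posterior / sum(posterior);         % max for stability; / p(D)

Subtracting max(loglik) before exponentiating avoids underflow and cancels in the normalisation.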

Page 25

• We then use the weighted average of the hypotheses to make decisions. The model so obtained is called a Bayesian estimate.

• Note that another way of making decisions could be to choose the hypothesis with the highest modified belief (instead of taking a weighted average of all of them). This is called the Maximum a-posteriori (MAP) estimate.

• Still another way could be to choose the hypothesis with the largest likelihood (and ignore our prior beliefs). This is called the Maximum Likelihood (ML) estimate.

• In this assignment we will be comparing Bayesian and ML/MAP estimators. Since we are using a uniform prior, the ML and MAP estimates are the same.

Page 26

Prediction on test data X_j:

Bayesian estimate:

\hat{t}_j^{(Bayes)} = p(t_j | X_j) = \sum_W p(W|D) \, p(t_j | X_j, W)

MAP:

W_{MAP} = \arg\max_W p(W|D) = \arg\max_W p(W) p(D|W) / p(D) = \arg\max_W p(W) p(D|W)

\hat{t}_j^{(MAP)} = p(t_j | X_j) = p(t_j | X_j, W_{MAP})

ML:

W_{ML} = \arg\max_W p(D|W)

\hat{t}_j^{(ML)} = p(t_j | X_j) = p(t_j | X_j, W_{ML})
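Given the posterior from the earlier sketch, the estimators differ only in how they combine the candidate nets; a sketch with the same assumed helpers (Xtest is a hypothetical testnumcases × 2 matrix of test inputs):

% Bayesian vs MAP/ML predictions on test inputs Xtest (sketch).
bayespred = zeros(1, size(Xtest, 1));
for v = 1:numvecs                   % posterior-weighted average of all nets
    bayespred = bayespred + posterior(v)*applyweights(allvecs(v,:), Xtest);
end
[~, best] = max(posterior);         % MAP: the single most probable net
mappred = applyweights(allvecs(best,:), Xtest);
% With a uniform prior, the ML estimate picks the same net as MAP.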

Page 27

For p(W) ∼ Uniform(W):

W_{MAP} = \arg\max_W p(W) p(D|W) = \arg\max_W p(D|W) = W_{ML}

Note: the above is true for part 3 despite changing the true W distribution (i.e., the teacher net) in the second part of part 3.

Page 28

Evaluation (Contrastive Divergence):

ErrorSum = -\sum_{i=1}^{N'} [t_i \log \hat{t}_i + (1 - t_i) \log(1 - \hat{t}_i)] + \sum_{i=1}^{N'} [t_i \log t_i + (1 - t_i) \log(1 - t_i)]

ErrorPerBit = ErrorSum / (N' \log 2)

where N' is the number of test cases (i.e., testnumcases).

Note: use ErrorPerBit, rather than ErrorSum, to evaluate the Bayesian and MAP estimates.
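A sketch of this evaluation, assuming row vectors t (the teacher's output probability for each test case) and that (the model's prediction), both of length N' = testnumcases:

% Error per bit (sketch); t and that are 1 x testnumcases row vectors
% of probabilities strictly between 0 and 1.
crossent = -sum(t.*log(that) + (1-t).*log(1-that));  % first sum
negent   =  sum(t.*log(t)    + (1-t).*log(1-t));     % second sum
ErrorSum = crossent + negent;
ErrorPerBit = ErrorSum / (length(t)*log(2));         % divide by N' log(2)

Dividing by N' log(2) expresses the error in bits per test case, which makes runs with different numbers of test cases comparable.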

Page 29

• We will generate our training data randomly. Each dimension of each data point will be a sample drawn from a normal distribution with mean 0 and standard deviation inputstandev (currently set to 4):

data = inputstandev*randn(numcases,2);

• The weights w will be set to uniform random values in [-1, 1] and we assert that this is the 'true' net (also called the teacher net). The prior is uniform over the 9^4 candidate weight vectors:

prior = ones(9^4,1)/(9^4);

• The targets are binary valued and will be sampled from a Bernoulli distribution defined by the predictions made by the teacher net (i.e., given input x, the target will be 1 with probability p, where p is the output of the teacher network on input x):

targets = .001 + .998*(rand(1,numcases) < applyweights(wteacher, data));

Page 30

>> makeallvecs;

• This makes a matrix in which each row is a possible weight vector. That is, we have all possible hypotheses with us in the matrix allvecs. Each w_i can take on 9 possible values (-1 to 1 in intervals of 0.25). The prior is uniform over these 9^4 possible weight vectors.

>> maketeacher;

• This makes a teacher network by sampling the four weights from a uniform distribution on [-1, 1]:
  wteacher = 2*rand(1,4) - 1;

• Suggestion: try different prior distributions, such as a standard normal N(0, 1), with the Matlab built-in function randn.

Page 31

>> numcases=10;

• numcases is the number of training cases that the model will use. The number of test cases is fixed at 100.

• Suggestion: try larger numcases for the number of training cases and compare the results from the Bayesian and MAP/ML estimations. Try the same number of training cases multiple times and take the average to obtain a more reliable error estimate (see the sketch below).

>> bayeswithbest;

• The code will print out the Bayesian estimate's error on the training and test sets, as well as the error of the MAP/ML estimate.
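The averaging suggestion can be scripted. This is a sketch only: it assumes bayeswithbest leaves its test errors per bit in workspace variables, and the names bayeserr and maperr below are made up; check the script for the real names:

% Average test error over several repetitions per numcases (sketch).
% ASSUMED interface: bayeswithbest leaves the Bayesian and MAP test
% errors per bit in bayeserr and maperr (hypothetical names).
numcasesList = [10 20 50 100];
reps = 10;
avgBayes = zeros(size(numcasesList));
avgMAP   = zeros(size(numcasesList));
for i = 1:length(numcasesList)
    numcases = numcasesList(i);
    for r = 1:reps
        maketeacher;                 % fresh teacher net each repetition
        bayeswithbest;               % generates data, fits and evaluates
        avgBayes(i) = avgBayes(i) + bayeserr/reps;
        avgMAP(i)   = avgMAP(i)   + maperr/reps;
    end
end
plot(numcasesList, avgBayes, '-o', numcasesList, avgMAP, '-x');
legend('Bayesian', 'MAP'); xlabel('numcases'); ylabel('test error per bit');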

Page 32

• Figure 1 will show you how well the outputs of the teacher on the training data can be predicted by Bayes-averaging the outputs of all possible nets.

• "Bayes-averaging" means weighting the prediction of each net by the posterior probability of that net given the training data and the prior (which is flat in this example).

• Each column corresponds to a data point. The first row gives the output of the teacher net. The second row gives the output of the Bayesian estimate. The third row gives the output of the MAP/ML estimate. The size of the square is proportional to the output value.

• Figure 2 will show the same on the test data.

• Figure 3 shows a histogram of the posterior probability distribution across all 9^4 weight vectors. Notice that the posterior can be very spread out, so that even the best net gets a very small posterior probability.

Page 33

Bayesian estimate (bayespredictions):

p(t|D) = \sum_{\forall W} p(W|D) \, p(t|D, W)

[Figure: histogram of posterior probabilities p(W|D); even the biggest values are small.]

Page 34

MAP (bestpredictions):

W_{MAP} = \arg\max_W p(W|D)

p(t|D) \equiv p(t|D, W_{MAP})

[Figure: the same histogram of posterior probabilities p(W|D); even the biggest values are small.]

Page 35

Bayesian vs MAP using a small number of training cases:

[Figure: Bayesian and MAP Test Error Per Bit vs Training Cases Number. Error in bits per case against numcases, for numcases from 0 to 20; legend: Bayesian, MAP.]

Page 36

Bayesian vs MAP using a large number of training cases:

[Figure: Bayesian and MAP Test Error Per Bit vs Training Cases Number. Error in bits per case against numcases, for numcases from 0 to 1000; legend: Bayesian, MAP.]

Page 37

• Your report should be AT MOST HALF A PAGE and should describe the effects of changing the number of training cases.

  Suggestion: try larger numcases for the number of training cases and compare the results from the Bayesian and MAP/ML estimations. Try the same number of training cases multiple times and take the average to obtain a more reliable error estimate.

• You should also try modifying maketeacher to set different kinds of teacher nets and see how the results depend on the particular teacher net. (Changing maketeacher amounts to changing the distribution that the teacher weights are chosen from, e.g., by changing the parameters of the uniform distribution or by using a different kind of distribution altogether.)

  Suggestion: what performance would you expect when using a "wrong" prior, e.g., when the true weight is sampled from a standard normal distribution while we are using a uniform prior? How do Bayesian and MAP/ML compare now?

• A2 due on Feb 25 at 3pm.