Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Transcript of cse802/S17/slides/Lec_09_Feb08.pdf

Page 1: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• Introduction

• Feedforward Operation and Classification

• Backpropagation Algorithm

Page 2: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Pattern Recognition

Jain CSE 802, Spring 2013

Two main challenges

• Representation

• Matching

Page 3: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Representation and Matching

Page 4: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Driver License Information

2009 driver license photo. Gallery: 34 million (30M DMV photos, 4M mugshots)

Courtesy: Pete Langenfeld, MSP

How Good is Face Representation?

Page 5: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Top-10 retrievals (ranked 1-10)

Gallery: 34 million (30M DMV photos, 4M mugshots)

Smile makes a difference!

Courtesy: Pete Langenfeld, MSP

How Good is Face Representation?

Page 6: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

State of the Art in FR: Verification
Datasets: FRGC v2.0 (2006), LFW (2007), MBGC (2010), IJB-A (2015)

D. Wang, C. Otto and A. K. Jain, "Face Search at Scale: 80 Million Gallery", arXiv, July 28, 2015

LFW Standard Protocol: 99.77% (accuracy)
3,000 genuine & 3,000 imposter pairs; 10-fold CV

LFW BLUFR Protocol: 88% [email protected]% FAR
156,915 genuine, ~46M imposter pairs; 10-fold CV

Page 7: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Neural Networks

• Massive parallelism is essential for complex recognition tasks (speech & image recognition)

• Humans take only ~200 ms for most cognitive tasks; this suggests parallel computation in the human brain

• Biological networks achieve excellent recognition performance via dense interconnection of simple computational elements (neurons)
  • Number of neurons ≈ 10^10 – 10^12
  • Number of interconnections per neuron ≈ 10^3 – 10^4
  • Total number of interconnections ≈ 10^14

• Damage to a few neurons or synapses (links) does not appear to impair performance (robustness)

Page 8: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Neuron

• Nodes are nonlinear, typically analog:
  $Y = f\left(\sum_{i=1}^{d} w_i x_i - \theta\right)$
  where $\theta$ is the internal threshold or offset

[Figure: a single neuron with inputs x_1, ..., x_d, weights w_1, ..., w_d, and output Y]

Page 9: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Neural Networks

• Feed-forward networks with one or more (hidden) layers between the input & output nodes

• How many nodes & hidden layers?

• Network training?

[Figure: a feedforward network with d inputs, a first hidden layer of N_H1 units, a second hidden layer of N_H2 units, and c outputs]

Page 10: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Form of the Discriminant Function

• Linear: Hyperplane decision boundaries

• Non-Linear: Arbitrary decision boundaries

• Adopt a model and then use the resulting decision boundary

• Specify the desired decision boundary

Page 11: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Linear Discriminant Function

• For a 2-class problem, a discriminant function that is a linear combination of the input features can be written as
  $g(\mathbf{x}) = \mathbf{w}^t \mathbf{x} + w_0$
  where $\mathbf{w}$ is the weight vector and $w_0$ is the bias or threshold weight

• The sign of the function value gives the class label

Page 12: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Quadratic Discriminant Function

• Obtained by adding pair-wise products of features to the linear discriminant:
  $g(\mathbf{x}) = w_0 + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^{d}\sum_{j=i}^{d} w_{ij} x_i x_j$
  (linear part: d+1 parameters; quadratic part: d(d+1)/2 additional parameters)

• g(x) positive implies class 1; g(x) negative implies class 2

• g(x) = 0 represents a hyperquadric, as opposed to a hyperplane in the linear discriminant case

• Adding more terms such as $w_{ijk} x_i x_j x_k$ results in polynomial discriminant functions

Page 13: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Generalized Discriminant Function

• A generalized linear discriminant function, where y = f(x), can be written as
  $g(\mathbf{x}) = \sum_{i=1}^{\hat{d}} a_i y_i(\mathbf{x})$

• Equivalently, $g(\mathbf{x}) = \mathbf{a}^t \hat{\mathbf{y}}$, where $\hat{\mathbf{y}} = [y_1(\mathbf{x}), y_2(\mathbf{x}), \ldots, y_{\hat{d}}(\mathbf{x})]^t$ is also called the augmented feature vector, $\mathbf{a} = [a_1, a_2, \ldots, a_{\hat{d}}]^t$ contains the weights in the augmented feature space, and $\hat{d}$ is the dimensionality of the augmented feature space. Note that the function is linear in $\mathbf{a}$.

• Setting $y_i(\mathbf{x})$ to be monomials results in polynomial discriminant functions

Page 14: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Perceptron

• The perceptron is a linear classifier; it makes predictions based on a linear predictor function that combines a set of weights with the feature vector

• The perceptron algorithm was invented by Rosenblatt in the late 1950s; its first implementation, in custom hardware, was one of the first artificial neural networks to be produced

• The algorithm allows for online learning; it processes training samples one at a time

Page 15: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Two-category Linearly Separable Case

• Let y_1, y_2, …, y_n be n training samples in the augmented feature space, which are linearly separable

• We need to find a weight vector a such that
  • a^t y > 0 for examples from the positive class
  • a^t y < 0 for examples from the negative class

• "Normalizing" the input examples by multiplying them with their class label (i.e., replacing all samples from class 2 by their negatives), we instead find a weight vector a such that
  • a^t y > 0 for all the examples (here y is multiplied by its class label)

• The resulting weight vector is called a separating vector or a solution vector

Page 16: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

The Perceptron Criterion Function

• Goal: find a weight vector a such that $\mathbf{a}^t \mathbf{y} > 0$ for all the samples (assuming it exists)

• Mathematically, this could be expressed as finding a weight vector a that minimizes the number of misclassified samples; that function is piecewise constant (discontinuous, and hence non-differentiable) and is difficult to optimize

• Perceptron Criterion Function:
  $J_p(\mathbf{a}) = \sum_{\mathbf{y} \in \mathcal{Y}} (-\mathbf{a}^t \mathbf{y})$, where $\mathcal{Y}$ is the set of samples misclassified by $\mathbf{a}$

• Now the minimization is mathematically tractable, and hence it is a better criterion function than the number of misclassifications

• The criterion is proportional to the sum of distances from the misclassified samples to the decision boundary

• Find the a that minimizes this criterion

Page 17: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Fixed-increment Single Sample Perceptron

• Also called perceptron learning in an online setting

• For large datasets, this is more efficient compared to batch mode

• Notation: n = no. of training samples; a = weight vector; k = iteration number (see Chapter 5, page 230)
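A minimal sketch of the fixed-increment single-sample rule described on this slide; the function name and the toy data are invented for illustration. Samples from class 2 are "normalized" by negation, so a correct classification always means a^t y > 0, and the fixed increment corresponds to a learning rate of 1.

```python
import numpy as np

def perceptron_train(X, labels, max_epochs=1000):
    """Fixed-increment single-sample perceptron.

    X: (n, d) feature matrix; labels: +1 / -1 class labels.
    Returns an augmented weight vector a = [w0, w1, ..., wd].
    """
    # Augment with a constant 1 for the bias, then "normalize":
    # negate the samples from class 2, so a.T @ y > 0 means "correct".
    Y = np.hstack([np.ones((X.shape[0], 1)), X]) * labels[:, None]
    a = np.zeros(Y.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for y in Y:                  # single sample, fixed increment (eta = 1)
            if a @ y <= 0:           # misclassified
                a = a + y
                errors += 1
        if errors == 0:              # converged: all a.T @ y > 0
            return a
    return a                         # non-separable data: no convergence

# Toy linearly separable data
X = np.array([[2.0, 2.0], [1.5, 3.0], [-1.0, -2.0], [-2.0, -1.5]])
labels = np.array([1, 1, -1, -1])
print("weights [w0, w1, w2]:", perceptron_train(X, labels))
```

By the convergence theorem on the next slide, the inner loop stops after a finite number of corrections whenever the data are linearly separable.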

Page 18: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Perceptron Convergence Theorem

If training samples are linearly separable, then the sequence of weight vectors given by Fixed-increment single-sample Perceptron will terminate at a solution vector

What happens if the patterns are non-linearly separable?

Page 19: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Multilayer Perceptron

Can we learn the nonlinearity at the same time as the linear discriminant? This is the goal of multilayer neural networks or multilayer Perceptrons


Page 20: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Page 21: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Feedforward Operation and Classification

• A three-layer neural network consists of an input layer, a hidden layer and an output layer interconnected by modifiable (learned) weights represented by links between layers

• A multilayer neural network implements linear discriminants, but in a space where the inputs have been mapped nonlinearly

• Figure 6.1 shows a simple three-layer network

Page 22: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

No training involved here, since we are implementing a known input/output mapping

Page 23: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Page 24: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


• A single “bias unit” is connected to each unit in addition to the input units

• Net activation:
  $net_j = \sum_{i=1}^{d} x_i w_{ji} + w_{j0} = \sum_{i=0}^{d} x_i w_{ji} \equiv \mathbf{w}_j^t \mathbf{x}$
  where the subscript i indexes units in the input layer and j indexes units in the hidden layer; w_ji denotes the input-to-hidden layer weights at hidden unit j. (In neurobiology, such weights or connections are called "synapses")

• Each hidden unit emits an output that is a nonlinear function of its activation, that is: y_j = f(net_j)

Page 25: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Figure 6.1 shows a simple threshold function:
  $f(net) \equiv \mathrm{sgn}(net) \equiv \begin{cases} \;\;\,1 & \text{if } net \ge 0 \\ -1 & \text{if } net < 0 \end{cases}$

• The function f(.) is also called the activation function or "nonlinearity" of a unit. There are more general activation functions with desirable properties

• Each output unit similarly computes its net activation based on the hidden unit signals:
  $net_k = \sum_{j=1}^{n_H} y_j w_{kj} + w_{k0} = \sum_{j=0}^{n_H} y_j w_{kj} = \mathbf{w}_k^t \mathbf{y}$
  where the subscript k indexes units in the output layer and $n_H$ denotes the number of hidden units

Page 26: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


• The output units are referred to as z_k. An output unit computes the nonlinear function of its net input, emitting z_k = f(net_k)

• In the case of c outputs (classes), we can view the network as computing c discriminant functions z_k = g_k(x); the input x is classified according to the largest discriminant function g_k(x), k = 1, …, c

• The three-layer network with the weights listed in fig. 6.1 solves the XOR problem

Page 27: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• The hidden unit y_1 computes the boundary $x_1 + x_2 + 0.5 = 0$:
  y_1 = +1 if $x_1 + x_2 + 0.5 \ge 0$, and y_1 = -1 otherwise

• The hidden unit y_2 computes the boundary $x_1 + x_2 - 1.5 = 0$:
  y_2 = +1 if $x_1 + x_2 - 1.5 \ge 0$, and y_2 = -1 otherwise

• The output unit emits z_1 = +1 if and only if y_1 = +1 and y_2 = -1. Using the terminology of computer logic, the units are behaving like gates: the first hidden unit is an OR gate, the second hidden unit is an AND gate, and the output unit implements
  z_k = y_1 AND NOT y_2 = (x_1 OR x_2) AND NOT (x_1 AND x_2) = x_1 XOR x_2

which provides the nonlinear decision of fig. 6.1
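The fig. 6.1 network can be checked directly in code. The hidden-unit weights and biases below come from the two boundaries quoted above; the output-layer weights (0.7, -0.4, bias -1.0) are simply one choice that realizes y_1 AND NOT y_2, and are not claimed to be the exact values printed in fig. 6.1.

```python
import numpy as np

sgn = lambda v: np.where(v >= 0, 1, -1)        # threshold activation f(net) = sgn(net)

def xor_net(x1, x2):
    x = np.array([x1, x2])
    y1 = sgn(np.array([1.0, 1.0]) @ x + 0.5)   # "OR"-like unit:  x1 + x2 + 0.5 >= 0
    y2 = sgn(np.array([1.0, 1.0]) @ x - 1.5)   # "AND"-like unit: x1 + x2 - 1.5 >= 0
    # Output fires only when y1 = +1 and y2 = -1, i.e. y1 AND NOT y2.
    z = sgn(0.7 * y1 - 0.4 * y2 - 1.0)         # illustrative output weights
    return int(z)

for a, b in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print((a, b), "->", xor_net(a, b))         # prints -1, +1, +1, -1: the XOR pattern
```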

Page 28: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• General Feedforward Operation – case of c output units:
  $g_k(\mathbf{x}) \equiv z_k = f\left(\sum_{j=1}^{n_H} w_{kj}\, f\left(\sum_{i=1}^{d} w_{ji} x_i + w_{j0}\right) + w_{k0}\right), \quad k = 1, \ldots, c \qquad (1)$

• Hidden units enable us to express more complicated nonlinear functions and extend classification capability

• The activation function does not have to be a sign function; it is often required to be continuous and differentiable

• We can allow the activation function in the output layer to differ from the activation function in the hidden layer, or have a different activation for each individual unit

• Assume for now that all activation functions are identical
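Equation (1) is just two affine maps with the activation f applied after each. The sketch below is a minimal illustration of that composition, assuming a tanh activation and randomly generated weights; the names W_hidden, W_output and the layer sizes are made up for the example.

```python
import numpy as np

def feedforward(x, W_hidden, W_output, f=np.tanh):
    """Evaluate g_k(x) of Eq. (1) for all k.

    x:        (d,)      input pattern
    W_hidden: (nH, d+1) rows are [w_j0, w_j1, ..., w_jd]
    W_output: (c, nH+1) rows are [w_k0, w_k1, ..., w_k nH]
    """
    x_aug = np.concatenate(([1.0], x))   # prepend the bias unit
    y = f(W_hidden @ x_aug)              # hidden outputs y_j = f(net_j)
    y_aug = np.concatenate(([1.0], y))
    return f(W_output @ y_aug)           # z_k = f(net_k) = g_k(x)

rng = np.random.default_rng(0)
d, nH, c = 4, 3, 2
W_hidden = rng.normal(size=(nH, d + 1))
W_output = rng.normal(size=(c, nH + 1))
z = feedforward(rng.normal(size=d), W_hidden, W_output)
print("discriminants g_k(x):", z, "-> predicted class:", int(np.argmax(z)))
```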

Page 29: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• Expressive Power of Multilayer Networks

Question: Can every decision boundary be implemented by a three-layer network?

Answer: Yes (due to A. Kolmogorov). "Any continuous function from input to output can be implemented in a three-layer net, given a sufficient number of hidden units n_H, proper nonlinearities, and weights." Any continuous function g(x) defined on the unit cube can be represented in the following form
  $g(\mathbf{x}) = \sum_{j=1}^{2n+1} \delta_j\left(\sum_{i=1}^{n} \beta_{ij}(x_i)\right), \qquad \mathbf{x} \in I^n \;\; (I = [0,1];\; n \ge 2)$
for properly chosen functions $\delta_j$ and $\beta_{ij}$

Page 30: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Page 31: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


• The network has two modes of operation:

• Learning
  Supervised learning consists of presenting an input pattern and modifying the network parameters (weights) to bring the actual outputs closer to the desired target values

• Feedforward
  The feedforward operation consists of presenting a pattern to the input units and passing (or feeding) the signals through the network in order to yield a decision from the output units

Page 32: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Network Learning: Backpropagation Algorithm

• The goal is to learn the interconnection weights based on the training patterns and the desired outputs

• In a three-layer network, it is a straightforward matter to understand how the output, and thus the error, depends on the hidden-to-output layer weights

• The power of backpropagation is that it enables us to compute an effective error for each hidden unit, and thus derive a learning rule for the input-to-hidden weights. This is known as the credit assignment problem

Page 33: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Page 34: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Network Learning

• Start with an untrained network, present a training pattern to the input layer, pass the signal through the network and determine the output

• Let t_k be the k-th target (or desired) output and z_k the k-th computed output, k = 1, …, c. Let w represent all the weights of the network

• The training error:
  $J(\mathbf{w}) = \frac{1}{2}\|\mathbf{t} - \mathbf{z}\|^2 = \frac{1}{2}\sum_{k=1}^{c}(t_k - z_k)^2$

• The backpropagation learning rule is based on gradient descent

• The weights are initialized with random values and are changed in a direction that will reduce the error:
  $\Delta \mathbf{w} = -\eta \frac{\partial J}{\partial \mathbf{w}}$

Page 35: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

where η is the learning rate, which indicates the relative size of the change in weights:
  w(m + 1) = w(m) + Δw(m)
where m indexes the m-th training pattern presented

• Error on the hidden-to-output weights:
  $\frac{\partial J}{\partial w_{kj}} = \frac{\partial J}{\partial net_k} \cdot \frac{\partial net_k}{\partial w_{kj}} = -\delta_k \frac{\partial net_k}{\partial w_{kj}}$

where the sensitivity of unit k is defined as
  $\delta_k \equiv -\frac{\partial J}{\partial net_k}$
and describes how the overall error changes with the unit's net activation:
  $\delta_k = -\frac{\partial J}{\partial net_k} = -\frac{\partial J}{\partial z_k} \cdot \frac{\partial z_k}{\partial net_k} = (t_k - z_k)\, f'(net_k)$

Page 36: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Since $net_k = \mathbf{w}_k^t \mathbf{y}$, we have $\frac{\partial net_k}{\partial w_{kj}} = y_j$

Conclusion: the weight update (or learning rule) for the hidden-to-output weights is:
  $\Delta w_{kj} = \eta \delta_k y_j = \eta (t_k - z_k)\, f'(net_k)\, y_j$

• The learning rule for the input-to-hidden units is more subtle and is the crux of the credit assignment problem

• Error on the input-to-hidden weights: using the chain rule,
  $\frac{\partial J}{\partial w_{ji}} = \frac{\partial J}{\partial y_j} \cdot \frac{\partial y_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial w_{ji}}$

Page 37: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

However,
  $\frac{\partial J}{\partial y_j} = \frac{\partial}{\partial y_j}\left[\frac{1}{2}\sum_{k=1}^{c}(t_k - z_k)^2\right] = -\sum_{k=1}^{c}(t_k - z_k)\frac{\partial z_k}{\partial y_j} = -\sum_{k=1}^{c}(t_k - z_k)\frac{\partial z_k}{\partial net_k}\cdot\frac{\partial net_k}{\partial y_j} = -\sum_{k=1}^{c}(t_k - z_k)\, f'(net_k)\, w_{kj}$

Similarly to the preceding case, we define the sensitivity of a hidden unit:
  $\delta_j \equiv f'(net_j)\sum_{k=1}^{c} w_{kj}\,\delta_k$

The above equation is the core of the "credit assignment" problem: the sensitivity at a hidden unit is simply the sum of the individual sensitivities at the output units weighted by the hidden-to-output weights w_kj, all multiplied by f'(net_j); see fig. 6.5

Conclusion: the learning rule for the input-to-hidden weights is:
  $\Delta w_{ji} = \eta \delta_j x_i = \eta\left[\sum_{k=1}^{c} w_{kj}\delta_k\right] f'(net_j)\, x_i$

Page 38: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Sensitivity at Hidden Node

Page 39: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Backpropagation Algorithm

• More specifically, the "backpropagation of errors" algorithm

• During training, an error must be propagated from the output layer back to the hidden layer to learn the input-to-hidden weights

• It is gradient descent in a layered network

• The exact behavior of the learning algorithm depends on the starting point

• Start the process with random values of the weights; in practice, you learn many networks with different initializations

Page 40: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• Training protocols:
  • Stochastic: patterns are chosen randomly from the training set; network weights are updated for each pattern
  • Batch: present all patterns before updating the weights
  • On-line: present each pattern once & only once (no memory for storing patterns)

• Stochastic backpropagation algorithm:

  begin initialize n_H, w, criterion θ, η, m ← 0
    do m ← m + 1
       x^m ← randomly chosen pattern
       w_ji ← w_ji + η δ_j x_i;  w_kj ← w_kj + η δ_k y_j
    until ||∇J(w)|| < θ
    return w
  end
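The pseudocode above maps almost line for line onto Python. Below is a minimal sketch, assuming a single hidden layer, tanh activations for f(.), and the squared-error criterion J(w) from the earlier slides; the function name, the toy XOR data, and the single-pattern stand-in for the ||∇J(w)|| < θ test are illustrative choices, not from the slides.

```python
import numpy as np

def train_backprop(X, T, nH=3, eta=0.1, theta=1e-4, max_iter=50000, seed=0):
    """Stochastic backpropagation for a d-nH-c network with tanh units."""
    rng = np.random.default_rng(seed)
    d, c = X.shape[1], T.shape[1]
    # initialize weights uniformly within +/- 1/sqrt(fan-in)
    Wj = rng.uniform(-1/np.sqrt(d),  1/np.sqrt(d),  size=(nH, d + 1))   # input-to-hidden
    Wk = rng.uniform(-1/np.sqrt(nH), 1/np.sqrt(nH), size=(c,  nH + 1))  # hidden-to-output
    for m in range(max_iter):
        i = rng.integers(len(X))                          # x^m <- randomly chosen pattern
        x = np.concatenate(([1.0], X[i]))                 # augmented input (bias unit = 1)
        y = np.tanh(Wj @ x)                               # hidden outputs y_j = f(net_j)
        y_aug = np.concatenate(([1.0], y))
        z = np.tanh(Wk @ y_aug)                           # network outputs z_k = f(net_k)
        delta_k = (T[i] - z) * (1.0 - z**2)               # (t_k - z_k) f'(net_k)
        delta_j = (1.0 - y**2) * (Wk[:, 1:].T @ delta_k)  # f'(net_j) sum_k w_kj delta_k
        grad_k = np.outer(delta_k, y_aug)
        Wk += eta * grad_k                                # w_kj <- w_kj + eta delta_k y_j
        Wj += eta * np.outer(delta_j, x)                  # w_ji <- w_ji + eta delta_j x_i
        if np.linalg.norm(grad_k) < theta:                # crude single-pattern stand-in for ||grad J|| < theta
            break
    return Wj, Wk

# Toy run on an XOR-style problem with targets in {-1, +1}
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)
Wj, Wk = train_backprop(X, T, nH=3, eta=0.2)
```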

Page 41: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• Stopping criterion
  • The algorithm terminates when the change in the criterion function J(w) is smaller than some preset value θ; there are other stopping criteria that lead to better performance than this one
  • A weight update may reduce the error on the single pattern being presented but can increase the error on the full training set
  • In stochastic backpropagation and batch backpropagation, we must make several passes through the training data

Page 42: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

• Learning Curves
  • Before training starts, the error on the training set is high; as learning proceeds, the error becomes smaller
  • Error per pattern depends on the amount of training data and the expressive power (such as the number of weights) of the network
  • Average error on an independent test set is always higher than on the training set, and it can decrease as well as increase
  • A validation set is used in order to decide when to stop training; we do not want to overfit the network and decrease the power of the classifier's generalization: "Stop training when the error on the validation set is minimum"

Page 43: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)


Page 44: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Representation at the Hidden Layer

• What do the learned weights mean?

• The weights connecting the hidden layer to the output layer form a linear discriminant

• The weights connecting the input layer to the hidden layer represent a mapping from the input feature space to a latent feature space

• For each hidden unit, the weights from the input layer describe the input pattern that leads to the maximum activation of that node

Page 45: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Backpropagation as Feature Mapping

• 64-2-3 sigmoidal network for classifying three characters (E, F, L)

• Input-to-hidden layer weights for the character recognition task; the weights at the two hidden nodes are shown as 8x8 patterns

• The left hidden node gets activated for F, the right one for L, and both get activated for E

• Non-linear interactions between the features may cause the features of a pattern to not manifest in a single hidden node (contrary to the example shown above)

• It may be difficult to draw similar interpretations in large networks, and caution must be exercised while analyzing weights

Page 46: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Techniques for Improving Backpropagation

• A naïve application of backpropagation procedures may lead to slow convergence and poor performance

• Some practical suggestions follow (no theoretical results)

• Activation Function f(.)
  • Must be non-linear (otherwise a 3-layer network is just a linear discriminant) and saturate (have max and min values) to keep the weights and activations bounded
  • The activation function and its derivative must be continuous and smooth; optionally monotonic
  • The choice may depend on the problem, e.g., Gaussian activations if the data come from a mixture of Gaussians
  • Examples: sigmoid (most popular), polynomial, tanh, sign function

• Parameters of the activation function (e.g., sigmoid)
  • Centered at 0, odd function f(-net) = -f(net) (anti-symmetric); leads to faster learning
  • Choice depends on the range of the input values

Page 47: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Activation Function

The anti-symmetric sigmoid function, f(-x) = -f(x), with a = 1.716 and b = 2/3; its first & second derivatives are also plotted.
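With the constants quoted above (a = 1.716, b = 2/3), a common choice of anti-symmetric sigmoid is f(x) = a·tanh(bx); taking that form as an assumption, the function and its first two derivatives can be written directly.

```python
import numpy as np

a, b = 1.716, 2.0 / 3.0

def f(x):                      # anti-symmetric sigmoid: f(-x) = -f(x)
    return a * np.tanh(b * x)

def f_prime(x):                # first derivative: a*b*(1 - tanh^2(b*x))
    return a * b * (1.0 - np.tanh(b * x) ** 2)

def f_double_prime(x):         # second derivative: -2*a*b^2*tanh(b*x)*(1 - tanh^2(b*x))
    t = np.tanh(b * x)
    return -2.0 * a * b**2 * t * (1.0 - t**2)

x = np.linspace(-5, 5, 5)
print(np.round(f(x), 3), np.round(f_prime(x), 3))
print("saturation values: +/-", a)   # f(x) -> +/- 1.716 as x -> +/- infinity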

Page 48: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Considerations

• Scaling inputs (important not just for neural networks)
  • Large differences in the scale of different features, due to the choice of units, are compensated by normalizing them to the same range, [0, 1] or [-1, 1]; without normalization, the error will hardly depend on features with very small values
  • Standardization: shift the inputs to have zero mean and unit variance (a small sketch follows after this list)

• Target Values
  • One-of-C representation for the target vector (C is the no. of classes). It is better to use +1 and -1, which lie well within the range of the sigmoid's saturation values (+1.716, -1.716)
  • Higher values (e.g., 1.716, the saturation point of the sigmoid) may require the weights to go to infinity to minimize the error

• Training with Noise
  • For small training sets, it is better to add noise to the input patterns and generate new "virtual" training patterns
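A small sketch of the preprocessing described in this list: column-wise standardization of the inputs and one-of-C target coding with ±1 entries. The function names and toy arrays are illustrative.

```python
import numpy as np

def standardize(X):
    """Shift each feature (column) to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0                   # guard against constant features
    return (X - mean) / std, mean, std    # keep mean/std to apply to test data later

def one_of_c_targets(labels, c):
    """One-of-C coding with -1/+1 entries (kept inside the sigmoid's linear range)."""
    T = -np.ones((len(labels), c))
    T[np.arange(len(labels)), labels] = 1.0
    return T

X = np.array([[180.0, 0.002], [160.0, 0.004], [175.0, 0.001]])   # wildly different scales
Xs, mu, sigma = standardize(X)
print(Xs.round(2))
print(one_of_c_targets(np.array([0, 2, 1]), c=3))
```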

Page 49: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Considerations

• Number of hidden units (n_H)
  • Governs the expressive power of the network
  • The easier the task, the fewer nodes needed
  • Rule of thumb: the total no. of weights should be less than the number of training examples (preferably ~10 times less); the no. of hidden units determines the total no. of weights
  • A more principled method is to adjust the network complexity in response to the training data, e.g., start with a "large" no. of hidden units and "decay", prune, or eliminate weights

• Initializing weights (a sketch follows below)
  • We cannot initialize the weights to zero, otherwise learning cannot take place
  • Choose initial weights w such that |w| < w'
  • w' too small: slow learning; too large: early saturation and no learning
  • w' is chosen to be 1/√d for the input layer and 1/√n_H for the hidden layer

Page 50: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Total No. of Weights

Error per pattern as the number of hidden nodes is increased.

• A 2-n_H-1 network (with bias) trained on 90 two-dimensional Gaussian patterns (n = 180) from each class (sampled from a mixture of 3 Gaussians)

• The minimum test error occurs at 17-21 weights in total (4-5 hidden nodes). This illustrates the rule of thumb that n/10 weights often gives the lowest error

Page 51: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Considerations

• Learning Rate
  • Small learning rate: slow convergence
  • Large learning rate: high oscillation and slow convergence

Page 52: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Considerations

• Momentum
  • Prevents the algorithm from getting stuck at plateaus and local minima (see the sketch after this list)

• Weight decay
  • Avoids overfitting by imposing the condition that the weights must be small
  • After each update, the weights are decayed by some factor
  • Related to regularization (also used in SVMs)

• Hints
  • Additional output nodes added to the network that are only used during training; they help learn a better feature representation
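A sketch of what the momentum and weight-decay bullets mean as update rules; the coefficient names alpha (momentum) and eps (decay factor) are conventional choices, not taken from the slide.

```python
import numpy as np

def update_with_momentum_and_decay(w, grad, prev_delta, eta=0.1, alpha=0.9, eps=1e-4):
    """One gradient step with momentum and weight decay.

    w:          current weight vector
    grad:       dJ/dw evaluated at w
    prev_delta: the previous update Delta w (for the momentum term)
    """
    delta = -eta * grad + alpha * prev_delta   # momentum: reuse part of the last step
    w_new = (w + delta) * (1.0 - eps)          # weight decay: shrink weights after each update
    return w_new, delta

w = np.array([0.5, -1.2])
prev = np.zeros_like(w)
grad = np.array([0.2, -0.1])
w, prev = update_with_momentum_and_decay(w, grad, prev)
print(w, prev)
```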

Page 53: Chapter 6: Multilayer Neural Networks (Sections 6.1-6.3)

Practical Considerations

• Training setup
  • On-line, stochastic, or batch mode

• Stopping training
  • Halt when the validation error reaches its (first) minimum

• Number of hidden layers
  • More layers -> more complex network
  • Networks with more hidden layers are more prone to getting caught in local minima
  • Smaller is better (KISS)

• Criterion function
  • We discussed squared error, but there are others
