
Foundation of Machine Learning

Amir H. Payberah ([email protected])

2020-09-28


The Course Web Page

https://fid3024.github.io


Linear Regression


Linear Regression (1/2)

I Given a dataset of m houses.

  Living area   No. of bedrooms   Price
  2104          3                 400
  1600          3                 330
  2400          3                 369
  ...           ...               ...

I Predict the prices of other houses as a function of the living area and the number of bedrooms.


Linear Regression (2/2)

I Building a model that takes input x ∈ Rⁿ and predicts output ŷ ∈ R.

I In linear regression, the output ŷ is a linear function of the input x.

ŷ = f_w(x) = w₁x₁ + w₂x₂ + · · · + wₙxₙ

ŷ = wᵀx

• ŷ: the predicted value
• n: the number of features
• xᵢ: the ith feature value
• wⱼ: the jth model parameter (w ∈ Rⁿ)
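A minimal NumPy sketch of this prediction (not from the slides); the weights and feature values are made up for illustration.

import numpy as np

w = np.array([0.15, 20.0])    # assumed parameters w1 (per unit of area), w2 (per bedroom)
x = np.array([2104.0, 3.0])   # one house: living area, number of bedrooms

y_hat = w @ x                 # ŷ = w1*x1 + w2*x2 = wᵀx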


Loss Function

I For each value of w, how close is the prediction ŷ(i) to the corresponding target y(i)?

I E.g., Mean Squared Error (MSE)

J(w) = (1/m) ∑_{i=1}^{m} cost_w(ŷ(i), y(i)) = (1/m) ∑_{i=1}^{m} (ŷ(i) − y(i))²
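A small sketch of this loss in NumPy; the numbers in the check are made up.

import numpy as np

def mse(y_hat, y):
    # J(w) = (1/m) * sum_i (ŷ(i) − y(i))²
    return np.mean((y_hat - y) ** 2)

print(mse(np.array([400.0, 330.0]), np.array([410.0, 320.0])))   # 100.0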


Objective

I Minimizing the loss function J(w).

I Gradient descent


Gradient Descent

I Tweaking parameters w iteratively in order to minimize a loss function J(w).

I Start at a random point, and repeat the following steps until the stopping criterion is satisfied:

1. Determine a descent direction ∇J(w)
2. Choose a step size η
3. Update the parameters: w ← w − η∇J(w)
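A minimal sketch of these three steps for linear regression with the MSE loss, assuming the inputs are a NumPy matrix X (one row per example) and a target vector y; the step size and iteration count are arbitrary.

import numpy as np

def gradient_descent(X, y, eta=0.01, n_steps=1000):
    m, n = X.shape
    w = np.random.randn(n)                    # start at a random point
    for _ in range(n_steps):
        grad = (2.0 / m) * X.T @ (X @ w - y)  # 1. descent direction: gradient of the MSE
        w = w - eta * grad                    # 2.-3. step size eta, update w ← w − η∇J(w)
    return w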


Batch Gradient Descent vs. Mini-Batch Stochastic Gradient Descent

I Gradient descent
• X is the total dataset.
• J(w) = (1/|X|) ∑_{x∈X} cost_w(ŷ(i), y(i)) = (1/|X|) ∑_{x∈X} l(x, w)
• w ← w − η (1/|X|) ∑_{x∈X} ∇l(x, w)

I Mini-batch stochastic gradient descent
• β is a mini-batch, i.e., a random subset of X.
• J(w) = (1/|β|) ∑_{x∈β} cost_w(ŷ(i), y(i)) = (1/|β|) ∑_{x∈β} l(x, w)
• w ← w − η (1/|β|) ∑_{x∈β} ∇l(x, w)
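A sketch of the mini-batch update, assuming a hypothetical helper grad_l(x, y, w) that returns the per-example gradient ∇l(x, w).

import numpy as np

def minibatch_sgd(X, y, grad_l, eta=0.01, batch_size=32, n_epochs=10):
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(n_epochs):
        idx = np.random.permutation(m)
        for start in range(0, m, batch_size):
            batch = idx[start:start + batch_size]              # β: a random subset of X
            grads = np.array([grad_l(X[i], y[i], w) for i in batch])
            w = w - eta * grads.mean(axis=0)                   # w ← w − η (1/|β|) ∑ ∇l(x, w)
    return w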


Binomial Logistic Regression


Binomial Logistic Regression (1/2)

I Given a dataset of m cancer tests.

  Tumor size   Cancer
  330          1
  120          0
  400          1
  ...          ...

I Predict the risk of cancer as a function of the tumor size.


Binomial Logistic Regression (2/2)

I Linear regression: the model computes a weighted sum of the input features (plus a bias term).

ŷ = w₀x₀ + w₁x₁ + w₂x₂ + · · · + wₙxₙ = wᵀx

I Binomial logistic regression: the model computes a weighted sum of the input features (plus a bias term), but it outputs the logistic of this result.

z = w₀x₀ + w₁x₁ + w₂x₂ + · · · + wₙxₙ = wᵀx

ŷ = σ(z) = 1 / (1 + e^(−z)) = 1 / (1 + e^(−wᵀx))
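A small sketch of this model in NumPy; the parameter values are made up.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([-4.0, 0.02])     # assumed parameters: bias weight w0, tumor-size weight w1
x = np.array([1.0, 330.0])     # x0 = 1 (bias input), tumor size

y_hat = sigmoid(w @ x)         # ŷ = σ(wᵀx), interpreted as the probability of cancer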


Loss Function (1/3)

I Naive idea: minimizing the Mean Squared Error (MSE)

cost(ŷ(i), y(i)) = (ŷ(i) − y(i))²

J(w) = (1/m) ∑_{i=1}^{m} cost(ŷ(i), y(i)) = (1/m) ∑_{i=1}^{m} (ŷ(i) − y(i))²

J(w) = MSE(w) = (1/m) ∑_{i=1}^{m} (1 / (1 + e^(−wᵀx(i))) − y(i))²

I This cost function is non-convex in the parameters w, so gradient descent can get stuck in local minima.


Loss Function (2/3)

cost(ŷ(i), y(i)) = −log(ŷ(i))        if y(i) = 1
cost(ŷ(i), y(i)) = −log(1 − ŷ(i))    if y(i) = 0

(plots of the cost as a function of ŷ, for the cases y = 1 and y = 0)


Loss Function (3/3)

I We can define J(w) as below:

cost(ŷ(i), y(i)) = −log(ŷ(i))        if y(i) = 1
cost(ŷ(i), y(i)) = −log(1 − ŷ(i))    if y(i) = 0

J(w) = (1/m) ∑_{i=1}^{m} cost(ŷ(i), y(i)) = −(1/m) ∑_{i=1}^{m} (y(i) log(ŷ(i)) + (1 − y(i)) log(1 − ŷ(i)))
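A sketch of this loss in NumPy; the clipping by eps is a common numerical safeguard and not part of the slide.

import numpy as np

def binary_cross_entropy(y_hat, y, eps=1e-12):
    # J(w) = −(1/m) ∑ (y log ŷ + (1 − y) log(1 − ŷ))
    y_hat = np.clip(y_hat, eps, 1.0 - eps)   # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))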


Binomial vs. Multinomial Logistic Regression (1/2)

I In a binomial classifier, y ∈ {0, 1}, the estimator is ŷ = p(y = 1 | x; w).
• We find one set of parameters w.

wᵀ = [w₀, w₁, · · · , wₙ]

I In a multinomial classifier, y ∈ {1, 2, · · · , k}, we need to estimate the result for each individual label, i.e., ŷⱼ = p(y = j | x; w).
• We find k sets of parameters, collected in a matrix W.

W = [w₁ᵀ; w₂ᵀ; · · · ; wₖᵀ], where wⱼᵀ = [w₀,ⱼ, w₁,ⱼ, · · · , wₙ,ⱼ]


Binomial vs. Multinomial Logistic Regression (2/2)

I In the binary case, y ∈ {0, 1}, we use the sigmoid function.

wᵀx = w₀x₀ + w₁x₁ + · · · + wₙxₙ

ŷ = p(y = 1 | x; w) = σ(wᵀx) = 1 / (1 + e^(−wᵀx))

I In the multiclass case, y ∈ {1, 2, · · · , k}, we use the softmax function.

wⱼᵀx = w₀,ⱼx₀ + w₁,ⱼx₁ + · · · + wₙ,ⱼxₙ,   1 ≤ j ≤ k

ŷⱼ = p(y = j | x; wⱼ) = σ(wⱼᵀx) = e^(wⱼᵀx) / ∑_{i=1}^{k} e^(wᵢᵀx)


Sigmoid vs. Softmax

I Sigmoid function: σ(wᵀx) = 1 / (1 + e^(−wᵀx))

I Softmax function: σ(wⱼᵀx) = e^(wⱼᵀx) / ∑_{i=1}^{k} e^(wᵢᵀx)
• Calculates the probability of each target class over all possible target classes.
• The softmax function for two classes is equivalent to the sigmoid function.
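A short NumPy sketch of the softmax, plus a check of the two-class case: with the second class's score fixed at 0, the softmax probability of the first class equals the sigmoid of its score.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtracting the max is for numerical stability only
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 1.3
print(softmax(np.array([z, 0.0]))[0], sigmoid(z))   # both ≈ 0.7858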


Deep Neural Network


The Linear Threshold Unit (LTU)

I Each input connection is associated with a weight.

I Computes a weighted sum of its inputs and applies a step function to that sum.

I z = w₁x₁ + w₂x₂ + · · · + wₙxₙ = wᵀx

I ŷ = step(z) = step(wᵀx)


The Perceptron

I The perceptron is a single layer of LTUs.

I Train the model.

ŷ = f_w(X)

J(w) = cost(ŷ, y)

w ← w − η∇J(w)
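As a concrete (hedged) example of such a training loop, the sketch below uses the classic perceptron learning rule, which updates the weights only on misclassified examples; it is a closely related update, not spelled out on the slide.

import numpy as np

def train_perceptron(X, y, eta=0.1, n_epochs=20):
    # X: one example per row, y: 0/1 labels
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_i, y_i in zip(X, y):
            y_hat = 1.0 if w @ x_i >= 0 else 0.0    # ŷ = step(wᵀx)
            w += eta * (y_i - y_hat) * x_i          # corrective update when ŷ ≠ y
    return w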


Feedforward Neural Network Architecture

I A feedforward neural network is composed of:
• One input layer
• One or more hidden layers
• One final output layer


Training Feedforward Neural Networks

I How to train a feedforward neural network?

I For each training instance x(i), the algorithm does the following steps:

1. Forward pass: make a prediction (i.e., ŷ(i)).
2. Measure the error (i.e., cost(ŷ(i), y(i))).
3. Backward pass: go through each layer in reverse to measure the error contribution from each connection.
4. Tweak the connection weights to reduce the error (update W and b).

I This is called the backpropagation training algorithm.
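A minimal NumPy sketch of one such step for a tiny network with one hidden layer, sigmoid activations, and a squared-error cost; the shapes, activation choice, and step size are assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, b1, w2, b2, eta=0.1):
    # 1. Forward pass: make a prediction ŷ.
    h = sigmoid(W1 @ x + b1)
    y_hat = sigmoid(w2 @ h + b2)
    # 2. Measure the error (derivative of 0.5 * (ŷ − y)²).
    error = y_hat - y
    # 3. Backward pass: error signal of each layer, hence of each connection.
    delta2 = error * y_hat * (1.0 - y_hat)
    delta1 = delta2 * w2 * h * (1.0 - h)
    # 4. Tweak the connection weights (update W and b).
    w2 -= eta * delta2 * h
    b2 -= eta * delta2
    W1 -= eta * np.outer(delta1, x)
    b1 -= eta * delta1
    return W1, b1, w2, b2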


Generalization


Generalization

I Generalization: make a model that performs well on test data.
• Have a small test error.

I Challenges
1. Make the training error small.
2. Make the gap between training and test error small.

I Overfitting vs. underfitting

[https://ml.berkeley.edu/blog/2017/07/13/tutorial-4]


Avoiding Overfitting

I Early stopping

I l1 and l2 regularization

I Max-norm regularization

I Dropout

I Data augmentation


Early Stopping

I As the training steps go by, the model's prediction error on the training/validation set naturally goes down.

I After a while the validation error stops decreasing and starts to go back up.
• The model has started to overfit the training data.

I In early stopping, we stop training when the validation error reaches a minimum.
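A sketch of an early-stopping loop, assuming hypothetical helpers train_one_epoch(model) and validation_error(model).

import copy

def train_with_early_stopping(model, train_one_epoch, validation_error, patience=5):
    best_err, best_model, bad_epochs = float("inf"), None, 0
    while bad_epochs < patience:
        train_one_epoch(model)
        err = validation_error(model)
        if err < best_err:                           # validation error still decreasing
            best_err, best_model = err, copy.deepcopy(model)
            bad_epochs = 0
        else:                                        # it started to go back up
            bad_epochs += 1
    return best_model                                # the model at the validation minimum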


l1 and l2 Regularization

I Penalize large values of the weights wⱼ.

J̃(w) = J(w) + λR(w)

I l1 regularization: R(w) = ∑_{i=1}^{n} |wᵢ| is added to the cost function.

J̃(w) = J(w) + λ ∑_{i=1}^{n} |wᵢ|

I l2 regularization: R(w) = ∑_{i=1}^{n} wᵢ² is added to the cost function.

J̃(w) = J(w) + λ ∑_{i=1}^{n} wᵢ²
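A sketch of adding the l2 penalty to an existing loss and its gradient; lam plays the role of λ (in practice the bias term is often excluded from the penalty).

import numpy as np

def l2_penalized_loss(J, w, lam):
    return J + lam * np.sum(w ** 2)          # J̃(w) = J(w) + λ ∑ wᵢ²

def l2_penalized_grad(grad_J, w, lam):
    return grad_J + 2.0 * lam * w            # the penalty adds 2λw to the gradient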


Max-Norm Regularization

I Max-norm regularization: constrains the weights wⱼ of the incoming connections for each neuron j.
• Prevents them from getting too large.

I After each training step, if ||wⱼ||₂ > r, clip wⱼ as below:

wⱼ ← wⱼ r / ||wⱼ||₂

• r is the max-norm hyperparameter.
• ||wⱼ||₂ = (∑ᵢ wᵢ,ⱼ²)^(1/2) = √(w₁,ⱼ² + w₂,ⱼ² + · · · + wₙ,ⱼ²)
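A sketch of the clipping step, assuming the incoming weights of neuron j are stored in column j of a matrix W.

import numpy as np

def max_norm_clip(W, r):
    norms = np.linalg.norm(W, axis=0, keepdims=True)        # ||wⱼ||₂ for each neuron j
    scale = np.minimum(1.0, r / np.maximum(norms, 1e-12))   # shrink only the columns with norm > r
    return W * scale                                        # wⱼ ← wⱼ r / ||wⱼ||₂ where needed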


Dropout (1/2)

I At each training step, each neuron drops out temporarily with a probability p.

• The hyperparameter p is called the dropout rate.
• A neuron will be entirely ignored during this training step.
• It may be active during the next step.
• Exclude the output neurons.

I After training, neurons don’t get dropped anymore.
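A sketch of dropout applied to a layer's activations; the rescaling by 1/(1 − p) ("inverted dropout") is a common variant assumed here, not something stated on the slide.

import numpy as np

def dropout(h, p, training=True):
    if not training:
        return h                                             # after training, nothing is dropped
    mask = (np.random.rand(*h.shape) >= p).astype(h.dtype)   # drop each unit with probability p
    return h * mask / (1.0 - p)                              # rescale to keep the expected activation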


Dropout (2/2)

I Each neuron can be either present or absent.

I 2^N possible networks, where N is the total number of droppable neurons.

• N = 4 in this figure.


Data Augmentation

I One way to make a model generalize better is to train it on more data.

I This will reduce overfitting.

I Create fake data and add it to the training set.


Batch Size


Training Deep Neural Networks

I Computationally intensive

I Time consuming

[https://cloud.google.com/tpu/docs/images/inceptionv3onc--oview.png]


Why?

I Massive amounts of training data

I Large number of parameters


Accuracy vs. Data/Model Size

[Jeff Dean at AI Frontiers: Trends and Developments in Deep Learning Research]


Scale Matters


Distributed Gradient Descent (1/2)

I Replicate a whole model on every device.

I Each device has a model replica with a copy of the model parameters.

[Tang et al., Communication-Efficient Distributed Deep Learning: A Comprehensive Survey, 2020]


Distributed Gradient Descent (2/2)

I Parameter Server (PS): maintains the global model.

I Once each device completes processing, its gradients are transferred to the PS, which aggregates all the gradients.

I The PS then sends the aggregated result back to each device.

[Tang et al., Communication-Efficient Distributed Deep Learning: A Comprehensive Survey, 2020]
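A rough sketch of one synchronous round of this scheme; the function and variable names are hypothetical.

import numpy as np

def parameter_server_round(w, worker_grads, eta):
    g = np.mean(worker_grads, axis=0)   # PS aggregates the gradients from all devices
    return w - eta * g                  # updated global model, sent back to every replica

# hypothetical round with 3 devices
w = np.zeros(4)
grads = [np.random.randn(4) for _ in range(3)]   # each device's gradient on its mini-batch
w = parameter_server_round(w, grads, eta=0.1)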


Batch Size vs. Number of GPUs

I w ← w − η (1/|β|) ∑_{x∈β} ∇l(x, w)

I The more samples processed during each batch, the faster a training job will complete.

I E.g., ImageNet + ResNet-50

[https://medium.com/@emwatz/lessons-for-improving-training-performance-part-1-b5efd0f0dcea]


Batch Size vs. Time to Accuracy

I ResNet-32 on Titan X GPU

[Peter Pietzuch - Imperial College London]


Batch Size vs. Validation Error

[Goyal et al., Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2018]


Improve the Validation Error


Improve the Validation Error

I Scaling learning rate

I Batch normalization

I Label smoothing

I Momentum


Scaling Learning Rate

I w ← w − η (1/|β|) ∑_{x∈β} ∇l(x, w).

I Linear scaling: multiply the learning rate by k when the mini-batch size is multiplied by k.

I Constant warmup: start with a small learning rate for a few epochs, and then increase it to k times the original learning rate.

I Gradual warmup: start with a small learning rate, and then gradually increase it by a constant each epoch until it reaches k times the original learning rate.

[Goyal et al., Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2018]
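A sketch of linear scaling combined with gradual warmup; the base rate, k, and warmup length are made-up values.

def learning_rate(epoch, base_lr=0.1, k=8, warmup_epochs=5):
    target = k * base_lr                          # linear scaling rule
    if epoch < warmup_epochs:                     # gradual warmup: ramp up by a constant per epoch
        return base_lr + (target - base_lr) * (epoch + 1) / warmup_epochs
    return target

print([round(learning_rate(e), 2) for e in range(7)])   # [0.24, 0.38, 0.52, 0.66, 0.8, 0.8, 0.8]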


Batch Normalization (1/2)

I Changes in minibatch size change the underlying loss function being optimized.

I Batch Normalization computes statistics along the minibatch dimension.

µβ = (1/|β|) ∑_{x∈β} x

σβ² = (1/|β|) ∑_{x∈β} (x − µβ)²


Batch Normalization (2/2)

I Zero-centering and normalizing the inputs, then scaling and shifting the result.

x̂ = (x − µβ) / √(σβ² + ε)

z = α x̂ + γ

I x̂: the zero-centered and normalized input.

I z: the output of the BN operation, which is a scaled and shifted version of the inputs.

I α: the scaling parameter vector for the layer.

I γ: the shifting parameter (offset) vector for the layer.

I ε: a tiny number to avoid division by zero.
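A sketch of the BN forward pass with the slide's naming (α scales, γ shifts), assuming activations of shape (batch, features).

import numpy as np

def batch_norm_forward(X, alpha, gamma, eps=1e-5):
    mu = X.mean(axis=0)                      # µβ: mean over the minibatch dimension
    var = X.var(axis=0)                      # σβ²
    X_hat = (X - mu) / np.sqrt(var + eps)    # zero-center and normalize
    return alpha * X_hat + gamma             # scale and shift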


Label Smoothing

I A generalization technique.

I Replaces the one-hot encoded label vector y_hot with a mixture of y_hot and the uniform distribution.

y_ls = (1 − α) y_hot + α/K

I K is the number of label classes, and α is a hyperparameter.

[Shallue et al., Measuring the Effects of Data Parallelism on Neural Network Training, 2019]
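A one-line sketch of the formula; the example α is arbitrary.

import numpy as np

def smooth_labels(y_hot, alpha):
    K = y_hot.shape[-1]                        # number of label classes
    return (1.0 - alpha) * y_hot + alpha / K   # y_ls = (1 − α) y_hot + α/K

print(smooth_labels(np.array([0.0, 1.0, 0.0]), alpha=0.1))   # [0.0333..., 0.9333..., 0.0333...]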


Momentum (1/3)

I Regular gradient descent optimization: w ← w − η∇J(w)

I At each iteration, momentum optimization adds the local gradient to the momentum vector m.

m ← βm + η∇J(w)

w ← w − m

[Aurelien Geron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2019]


Momentum (2/3)

I Nesterov momentum measures the gradient of the cost function slightly ahead in the direction of the momentum.

m ← βm + η∇J(w − βm)

w ← w − m

[Aurelien Geron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2019]
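A sketch of both updates using the slides' sign convention (other texts fold the minus sign into m); grad_J is an assumed function that returns ∇J at a given point.

def momentum_step(w, m, grad_J, eta=0.01, beta=0.9, nesterov=False):
    lookahead = w - beta * m if nesterov else w   # Nesterov evaluates the gradient slightly ahead
    m = beta * m + eta * grad_J(lookahead)
    w = w - m
    return w, m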


Momentum (3/3)

[Shallue et al., Measuring the Effects of Data Parallelism on Neural Network Training, 2019]


CROSSBOW: Scaling Deep Learning with Small Batch Sizes on Multi-GPU Servers


I How to design a deep learning system that scales training with multiple GPUs, even when the preferred batch size is small?


Crossbow

[Peter Pietzuch - Imperial College London]


Problem: Small Batches

I Small batch sizes underutilise GPUs.

I One batch per GPU: not enough data and instruction parallelism for every operator.

[Peter Pietzuch - Imperial College London]


Idea: Multiple Replicas Per GPU

I Train multiple model replicas per GPU.

I A learner is an entity that trains a single model replica independently with a given batch size.

[Peter Pietzuch - Imperial College London]

I But now we must synchronise a large number of model replicas.


Problem: Similar Starting Point

I All learners always start from the same point.

I Limited exploration of parameter space.

[Peter Pietzuch - Imperial College London]


Idea: Independent Replicas

I Maintain independent model replicas.

I Increased exploration of space through parallelism.

I Each model replica uses a small batch size.

[Peter Pietzuch - Imperial College London]


Crossbow: Synchronous Model Averaging

I Allow learners to diverge, but correct their trajectories based on the average model.

I Accelerate the average model's trajectory with momentum to find minima faster.

[Peter Pietzuch - Imperial College London]


GPUs with Synchronous Model Averaging

I Synchronously apply corrections to model replicas.

[Peter Pietzuch - Imperial College London]


GPUs with Synchronous Model Averaging

I Ensures consistent view of average model.

I Takes GPU bandwidth into account during synchronisation.

[Peter Pietzuch - Imperial College London]


Crossbow

[Peter Pietzuch - Imperial College London]


Summary


Summary

I Stochastic Gradient Descent (SGD)

I Generalization
• Regularization
• Max-norm
• Dropout

I Distributed SGD

I Batch size
• Scaling learning rate
• Batch normalization
• Label smoothing
• Momentum

I Crossbow


Reference

I P. Goyal et al., Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017

I C. Shallue et al., Measuring the Effects of Data Parallelism on Neural Network Training, 2018

I A. Koliousis et al., CROSSBOW: Scaling Deep Learning with Small Batch Sizes on Multi-GPU Servers, 2019


Questions?
