Deciding what to try next

Transcript of Deciding what to try next

Page 1: Deciding what to try next

Advice for applying machine learning

Deciding what to try next

Machine Learning

Page 2: Deciding what to try next

Andrew Ng

Debugging a learning algorithm: Suppose you have implemented regularized linear regression to predict housing prices.

However, when you test your hypothesis on a new set of houses, you find that it makes unacceptably large errors in its predictions. What should you try next?

- Get more training examples
- Try smaller sets of features
- Try getting additional features
- Try adding polynomial features
- Try decreasing λ
- Try increasing λ

Page 3: Deciding what to try next

Machine learning diagnostic: A test that you can run to gain insight into what is or isn't working with a learning algorithm, and to gain guidance on how best to improve its performance.

Diagnostics can take time to implement, but doing so can be a very good use of your time.

Page 4: Deciding what to try next

Advice for applying machine learning

Evaluating a hypothesis

Machine Learning

Page 5: Deciding what to try next

Evaluating your hypothesis

[Figure: price vs. size of house; a high-degree polynomial fit passes through every training point]

A hypothesis can fit the training set very well yet fail to generalize to new examples not in the training set. With many features, e.g.:

- size of house
- no. of bedrooms
- no. of floors
- age of house
- kitchen size
- average income in neighborhood

it becomes hard to plot the hypothesis, so we need another way to evaluate it.

Page 6: Deciding what to try next

Evaluating your hypothesis

Dataset (split into a training set, typically about the first 70%, and a test set, the remaining 30%):

Size    Price
2104    400
1600    330
2400    369
1416    232
3000    540
1985    300
1534    315
1427    199
1380    212
1494    243

Page 7: Deciding what to try next

Training/testing procedure for linear regression

- Learn parameter θ from training data (minimizing training error J(θ))

- Compute test set error:

  J_test(θ) = (1/(2·m_test)) · Σ_{i=1}^{m_test} (h_θ(x_test^(i)) − y_test^(i))²
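The procedure above can be computed directly; here is a minimal numpy sketch, assuming a one-feature hypothesis h(x) = θ0 + θ1·x (the data and helper names are illustrative, not from the course):

```python
import numpy as np

# Hypothetical housing data, already split into train and test sets.
x_train = np.array([2104, 1600, 2400, 1416, 3000, 1985, 1534], dtype=float)
y_train = np.array([400, 330, 369, 232, 540, 300, 315], dtype=float)
x_test = np.array([1427, 1380, 1494], dtype=float)
y_test = np.array([199, 212, 243], dtype=float)

def fit_linear(x, y):
    """Least-squares fit of h(x) = theta0 + theta1 * x."""
    X = np.column_stack([np.ones_like(x), x])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def squared_error_cost(theta, x, y):
    """J(theta) = 1/(2m) * sum((h(x) - y)^2)."""
    h = theta[0] + theta[1] * x
    return np.sum((h - y) ** 2) / (2 * len(x))

theta = fit_linear(x_train, y_train)          # learn theta on training data
j_train = squared_error_cost(theta, x_train, y_train)
j_test = squared_error_cost(theta, x_test, y_test)  # test set error
```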

Page 8: Deciding what to try next

Training/testing procedure for logistic regression

- Learn parameter θ from training data

- Compute test set error:

  J_test(θ) = −(1/m_test) · Σ_{i=1}^{m_test} [ y_test^(i) · log h_θ(x_test^(i)) + (1 − y_test^(i)) · log(1 − h_θ(x_test^(i))) ]

- Misclassification error (0/1 misclassification error):

  err(h_θ(x), y) = 1 if (h_θ(x) ≥ 0.5 and y = 0) or (h_θ(x) < 0.5 and y = 1); 0 otherwise

  Test error = (1/m_test) · Σ_{i=1}^{m_test} err(h_θ(x_test^(i)), y_test^(i))
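A sketch of the 0/1 misclassification error, assuming predictions are thresholded at 0.5; the probabilities and labels below are made up for illustration:

```python
import numpy as np

def misclassification_error(h, y):
    """0/1 error: count a prediction as wrong when the thresholded
    hypothesis output (h >= 0.5) disagrees with the true label y."""
    predictions = (h >= 0.5).astype(int)
    return np.mean(predictions != y)

# Hypothetical predicted probabilities and true labels.
h = np.array([0.9, 0.2, 0.6, 0.4])
y = np.array([1, 0, 0, 1])
err = misclassification_error(h, y)  # 2 of 4 wrong -> 0.5
```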

Page 9: Deciding what to try next

Advice for applying machine learning

Model selection and training/validation/test sets

Machine Learning

Page 10: Deciding what to try next

Overfitting example

[Figure: price vs. size; a high-degree polynomial that fits the training points exactly]

Once parameters were fit to some set of data (training set), the error of the parameters as measured on that data (the training error J_train(θ)) is likely to be lower than the actual generalization error.

Page 11: Deciding what to try next

Model selection

1. h_θ(x) = θ0 + θ1x (d = 1)
2. h_θ(x) = θ0 + θ1x + θ2x² (d = 2)
3. h_θ(x) = θ0 + θ1x + … + θ3x³ (d = 3)
⋮
10. h_θ(x) = θ0 + θ1x + … + θ10x¹⁰ (d = 10)

Fit each model to get parameters θ^(1), …, θ^(10), choose the degree whose hypothesis does best, and report its test set error J_test(θ^(d)). How well does the model generalize?

Problem: J_test(θ^(d)) is likely to be an optimistic estimate of generalization error, i.e. our extra parameter (d = degree of polynomial) is fit to the test set.

Page 12: Deciding what to try next

Evaluating your hypothesis

Dataset (now split three ways: training set ~60%, cross validation set ~20%, test set ~20%):

Size    Price
2104    400
1600    330
2400    369
1416    232
3000    540
1985    300
1534    315
1427    199
1380    212
1494    243
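The three-way split can be sketched as follows; the 60/20/20 proportions follow the convention above, and the helper name `split_dataset` is ours:

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Shuffle, then split into ~60% train, ~20% CV, ~20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    m = len(y)
    n_train = int(0.6 * m)
    n_cv = int(0.2 * m)
    train = idx[:n_train]
    cv = idx[n_train:n_train + n_cv]
    test = idx[n_train + n_cv:]
    return (X[train], y[train]), (X[cv], y[cv]), (X[test], y[test])

# Tiny synthetic dataset just to show the shapes of the split.
X = np.arange(10).reshape(10, 1).astype(float)
y = np.arange(10).astype(float)
train, cv, test = split_dataset(X, y)
```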

Page 13: Deciding what to try next

Train/validation/test error

Training error: J_train(θ) = (1/(2m)) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))²

Cross validation error: J_cv(θ) = (1/(2·m_cv)) · Σ_{i=1}^{m_cv} (h_θ(x_cv^(i)) − y_cv^(i))²

Test error: J_test(θ) = (1/(2·m_test)) · Σ_{i=1}^{m_test} (h_θ(x_test^(i)) − y_test^(i))²

Page 14: Deciding what to try next

Model selection

1. h_θ(x) = θ0 + θ1x (d = 1)
2. h_θ(x) = θ0 + θ1x + θ2x² (d = 2)
3. h_θ(x) = θ0 + θ1x + … + θ3x³ (d = 3)
⋮
10. h_θ(x) = θ0 + θ1x + … + θ10x¹⁰ (d = 10)

Pick the degree d with the lowest cross validation error J_cv(θ^(d)). Then estimate generalization error using the test set: J_test(θ^(d)).
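The selection procedure can be sketched end to end. This is an illustrative numpy implementation, with synthetic data drawn from a noisy quadratic so that cross validation has a clear minimum to find:

```python
import numpy as np

def poly_features(x, d):
    """Design matrix [1, x, x^2, ..., x^d]."""
    return np.column_stack([x ** j for j in range(d + 1)])

def fit(X, y):
    """Least-squares fit of the polynomial parameters."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def cost(theta, X, y):
    """Unregularized squared-error cost J = 1/(2m) * ||X theta - y||^2."""
    r = X @ theta - y
    return r @ r / (2 * len(y))

# Synthetic data from a noisy quadratic (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 60)
y = 1 + 2 * x - 1.5 * x ** 2 + rng.normal(0, 0.05, 60)
x_tr, y_tr = x[:40], y[:40]
x_cv, y_cv = x[40:], y[40:]

# Fit each candidate degree on the training set, score on the CV set,
# and pick the degree with the lowest cross-validation error.
degrees = list(range(1, 11))
cv_errors = [cost(fit(poly_features(x_tr, d), y_tr),
                  poly_features(x_cv, d), y_cv) for d in degrees]
best_d = degrees[int(np.argmin(cv_errors))]
```

Because the data is truly quadratic, a straight line (d = 1) has a much higher CV error than d = 2, which is exactly the signal the procedure exploits.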

Page 15: Deciding what to try next

Advice for applying machine learning

Diagnosing bias vs. variance

Machine Learning

Page 16: Deciding what to try next

Bias/variance

[Figure: three fits of price vs. size: a straight line (high bias, underfit), a quadratic ("just right"), and a high-degree polynomial (high variance, overfit)]

Page 17: Deciding what to try next

Bias/variance

[Figure: error vs. degree of polynomial d; J_train decreases as d grows, while J_cv is high for small d, dips, then rises again for large d]

Training error: J_train(θ) = (1/(2m)) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))²

Cross validation error: J_cv(θ) = (1/(2·m_cv)) · Σ_{i=1}^{m_cv} (h_θ(x_cv^(i)) − y_cv^(i))²

Page 18: Deciding what to try next

Diagnosing bias vs. variance

[Figure: error vs. degree of polynomial d, showing the training error and cross validation error curves]

Suppose your learning algorithm is performing less well than you were hoping (J_cv(θ) or J_test(θ) is high). Is it a bias problem or a variance problem?

Bias (underfit): J_train(θ) is high; J_cv(θ) ≈ J_train(θ)

Variance (overfit): J_train(θ) is low; J_cv(θ) ≫ J_train(θ)
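The two rules of thumb can be captured in a tiny, deliberately rough helper; the numeric `tolerance` threshold is our own simplification, not part of the course:

```python
def diagnose(j_train, j_cv, tolerance):
    """Rough rule of thumb: high bias when the training error itself is
    high and close to the CV error; high variance when the training
    error is low but the CV error is much higher."""
    if j_train > tolerance and abs(j_cv - j_train) < tolerance:
        return "high bias (underfit)"
    if j_train <= tolerance and j_cv > j_train + tolerance:
        return "high variance (overfit)"
    return "looks ok"

print(diagnose(5.0, 5.5, 1.0))  # high bias (underfit)
print(diagnose(0.2, 4.0, 1.0))  # high variance (overfit)
```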

Page 19: Deciding what to try next

Advice for applying machine learning

Regularization and bias/variance

Machine Learning

Page 20: Deciding what to try next

Linear regression with regularization

Model: h_θ(x) = θ0 + θ1x + θ2x² + θ3x³ + θ4x⁴

J(θ) = (1/(2m)) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))² + (λ/(2m)) · Σ_{j=1}^{n} θj²

[Figure: three fits of price vs. size: large λ gives high bias (underfit); intermediate λ is "just right"; small λ gives high variance (overfit)]

Page 21: Deciding what to try next


Choosing the regularization parameter

Page 22: Deciding what to try next

Choosing the regularization parameter λ

Model: h_θ(x) = θ0 + θ1x + θ2x² + θ3x³ + θ4x⁴

1. Try λ = 0
2. Try λ = 0.01
3. Try λ = 0.02
4. Try λ = 0.04
5. Try λ = 0.08
⋮
12. Try λ = 10

For each λ, minimize J(θ) to get parameters θ^(1), …, θ^(12), and pick the one with the lowest cross validation error, say θ^(5). Then report the test error: J_test(θ^(5)).
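A sketch of the λ-selection loop, using the regularized normal equation for the fit. The candidate λ values roughly double as on the slide, the data is synthetic, and the helper names are ours:

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Regularized normal equation: (X^T X + lam * I') theta = X^T y,
    where I' is the identity with its intercept entry zeroed."""
    n = X.shape[1]
    reg = lam * np.eye(n)
    reg[0, 0] = 0.0  # do not regularize the intercept term
    return np.linalg.solve(X.T @ X + reg, X.T @ y)

def cv_cost(theta, X, y):
    """Unregularized squared error, as used to compare lambda values."""
    r = X @ theta - y
    return r @ r / (2 * len(y))

# Candidate values, roughly doubling as on the slide.
lams = [0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24]

# Synthetic linear data (illustrative only).
rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 40)
y = 1 + x + rng.normal(0, 0.1, 40)
X = np.column_stack([np.ones_like(x), x])
X_tr, y_tr, X_cv, y_cv = X[:30], y[:30], X[30:], y[30:]

# Fit with each lambda on the training set, score on the CV set.
errors = [cv_cost(fit_ridge(X_tr, y_tr, lam), X_cv, y_cv) for lam in lams]
best_lam = lams[int(np.argmin(errors))]
```

Note that λ appears only in the training objective; the cross validation error used to compare candidates is the plain, unregularized squared error.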

Page 23: Deciding what to try next

Bias/variance as a function of the regularization parameter λ

[Figure: J_train(θ) and J_cv(θ) plotted against λ; J_train grows as λ increases, while J_cv is high both for very small λ (overfit) and for very large λ (underfit)]

Page 24: Deciding what to try next

Advice for applying machine learning

Learning curves

Machine Learning

Page 25: Deciding what to try next

Learning curves

[Figure: training error J_train(θ) and cross validation error J_cv(θ) plotted against training set size m; as m grows, J_train rises and J_cv falls, the two curves approaching each other]

Page 26: Deciding what to try next

High bias

[Figure: learning curves for a high-bias model, e.g. a straight-line fit to curved price-vs-size data; J_train and J_cv quickly converge to a high plateau]

If a learning algorithm is suffering from high bias, getting more training data will not (by itself) help much.

Page 27: Deciding what to try next

High variance (and small λ)

[Figure: learning curves for a high-variance model, e.g. a high-degree polynomial fit to price-vs-size data; J_train stays low while J_cv stays much higher, with a large gap between them that narrows as m grows]

If a learning algorithm is suffering from high variance, getting more training data is likely to help.
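Learning curves like those above can be generated by refitting on growing prefixes of the training set; a minimal sketch with synthetic linear data (the helper names and data are illustrative):

```python
import numpy as np

def cost(theta, X, y):
    """Unregularized squared-error cost J = 1/(2m) * ||X theta - y||^2."""
    r = X @ theta - y
    return r @ r / (2 * len(y))

def learning_curve(X, y, X_cv, y_cv, sizes):
    """For each training-set size m, fit on the first m examples and
    record both the training error and the cross-validation error."""
    j_train, j_cv = [], []
    for m in sizes:
        theta, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
        j_train.append(cost(theta, X[:m], y[:m]))
        j_cv.append(cost(theta, X_cv, y_cv))
    return j_train, j_cv

# Synthetic linear data (illustrative, not from the course).
rng = np.random.default_rng(2)
x = rng.uniform(0, 2, 60)
y = 3 + 2 * x + rng.normal(0, 0.1, 60)
X = np.column_stack([np.ones_like(x), x])
X_tr, y_tr, X_cv, y_cv = X[:40], y[:40], X[40:], y[40:]

sizes = [2, 5, 10, 20, 40]
j_train, j_cv = learning_curve(X_tr, y_tr, X_cv, y_cv, sizes)
```

With m = 2 and two parameters, the model fits its tiny training set exactly, so J_train starts near zero and rises as m grows, while J_cv falls, the shape the slides describe.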

Page 28: Deciding what to try next

Advice for applying machine learning

Deciding what to try next (revisited)

Machine Learning

Page 29: Deciding what to try next

Andrew Ng

Debugging a learning algorithm:Suppose you have implemented regularized linear regression to predict housing prices. However, when you test your hypothesis in a new set of houses, you find that it makes unacceptably large errors in its prediction. What should you try next?

- Get more training examples- Try smaller sets of features- Try getting additional features- Try adding polynomial features- Try decreasing- Try increasing

Page 30: Deciding what to try next

Neural networks and overfitting

"Small" neural network (fewer parameters; more prone to underfitting): computationally cheaper.

"Large" neural network (more parameters; more prone to overfitting): computationally more expensive. Use regularization (λ) to address overfitting.