CSE 546: Machine Learning

Transcript of CSE 546: Machine Learning

Page 1: CSE 546: Machine Learning

CSE 546: Machine Learning
Simon Du and Jamie Morgenstern

Pages 2–4: CSE 546: Machine Learning

Traditional Algorithms

[Figure: social media mentions of cats vs. dogs, with panels for Reddit, Google, and Twitter.]

Graphics courtesy of https://theoutline.com/post/3128/dogs-cats-internet-popularity?zd=1

Write a program that sorts tweets into those containing “cat”, “dog”, or other:

def sort_tweets(tweets):
    cats, dogs, other = [], [], []
    for tweet in tweets:
        if "cat" in tweet:
            cats.append(tweet)
        elif "dog" in tweet:
            dogs.append(tweet)
        else:
            other.append(tweet)
    return cats, dogs, other

Pages 5–9: CSE 546: Machine Learning

Machine Learning algorithms

Write a program that sorts images into those containing “birds”, “airplanes”, or other.

[Figure: scatter plot of images embedded in a 2-D feature space (feature 1 vs. feature 2), with points labeled airplane, bird, and other.]

def sort_images(images):
    birds, planes, other = [], [], []
    for image in images:
        if bird in image:       # pseudocode: no hand-written test like this exists
            birds.append(image)
        elif plane in image:
            planes.append(image)
        else:
            other.append(image)
    return birds, planes, other

The decision rule of “if bird in image:” is LEARNED using DATA.
The decision rule of “if ‘cat’ in tweet:” is hard-coded by an expert.
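To make “learned using data” concrete, here is a minimal sketch of a decision rule learned from labeled 2-D feature vectors, using a nearest-centroid classifier (an illustration only, not a method from the slides; the features and labels below are made up):

import numpy as np

def fit_centroids(X, y):
    # Learn one centroid per class from labeled feature vectors.
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    # Learned decision rule: assign x to the class with the nearest centroid.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Hypothetical (feature 1, feature 2) values extracted from images:
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.5, 0.5]])
y = np.array(["airplane", "airplane", "bird", "bird", "other"])
centroids = fit_centroids(X, y)
print(predict(centroids, np.array([0.85, 0.15])))  # -> airplane

Nothing here was hard-coded by an expert: change the training data and the decision rule changes with it.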

Page 10: CSE 546: Machine Learning

Machine Learning Ingredients

• Data: past observations

• Hypotheses/Models: devised to capture the patterns in data

• Prediction: apply model to forecast future observations

Page 11: CSE 546: Machine Learning

ML uses past data to make personalized predictions

You may also like…

Page 12: CSE 546: Machine Learning

Flavors of ML

Mix of statistics (theory) and algorithms (programming)

Regression
Predict a continuous value, e.g., stock market, credit score, temperature, Netflix rating

Classification
Predict a categorical value: loan or not? spam or not? what disease is this?

Unsupervised Learning
Predict structure: tree of life from DNA, find similar images, community detection

Page 13: CSE 546: Machine Learning

What this class is:
• Fundamentals of ML: bias/variance tradeoff, overfitting, optimization and computational tradeoffs, supervised learning (e.g., linear models, boosting, deep learning), unsupervised models (e.g., k-means, EM, PCA)
• Preparation for further learning: the field is fast-moving; you will be able to apply the basics and teach yourself the latest

What this class is not:
• A survey course: a laundry list of algorithms, or how to win Kaggle
• An easy course: familiarity with intro linear algebra and probability is assumed, and homework will be time-consuming

CSE 446/546: Machine Learning
Instructors: Simon Du and Jamie Morgenstern
Contact: [email protected]
Course Website: https://courses.cs.washington.edu/courses/cse446/21sp/

Page 14: CSE 546: Machine Learning

Prerequisites

■ Formally:
  ■ MATH 308, CSE 312, STAT 390, or equivalent
■ Familiarity with:
  ■ Linear algebra: linear dependence, rank, linear equations, SVD
  ■ Multivariate calculus
  ■ Probability and statistics: distributions, marginalization, moments, conditional expectation
  ■ Algorithms: basic data structures, complexity
■ “Can I learn these topics concurrently?” Use HW0 to judge your skills.
■ See the website for review materials!

Page 15: CSE 546: Machine Learning

Grading

■ 5 homework assignments (100% = 10% + 20% + 20% + 20% + 30%)
  □ Each contains both theoretical questions and programming
  □ Collaboration is okay, but you must write down who you collaborated with. You must write, submit, and understand your own answers and code (which run on the autograder).
  □ Do not Google for answers.
■ NO exams
■ We will assign random subgroups as PODs (when the dust clears)

Pages 16–17: CSE 546: Machine Learning

Homework

□ HW 0 is out (due next Wednesday, 10/6, at midnight)
  □ Short review
  □ Work individually; treat it as a barometer for readiness
□ HW 1, 2, 3, 4
  □ They are not easy or short. Start early.
□ Submit to Gradescope
□ Regrade requests go through Gradescope
□ There is no credit for late work; you have 5 late days

1. All code must be written in Python
2. All written work must be typeset (e.g., LaTeX)

See the course website for tutorials and references.

Page 18: CSE 546: Machine Learning

Homework & Grading

□ CSE 446:
  □ Just do the A problems
  □ Doing B problems will not earn a higher grade
  □ Grades are on a 4.0 scale (relative to students in 446)
□ CSE 546:
  □ Doing only the A problems earns a grade of up to 3.8
  □ The B problems are worth the remaining 0.2
  □ Final grade = A grade + B grade (relative to students in 546)

Page 19: CSE 546: Machine Learning

Communication Channels

■ Announcements, questions about class, homework help
  □ EdStem (invitation sent; contact the TAs if you need access)
  □ Weekly section
  □ Office hours
■ Regrade requests
  □ Directly on Gradescope
■ Personal concerns
  □ Email: [email protected]
■ Anonymous feedback
  □ See the website for the link

Page 20: CSE 546: Machine Learning

Textbooks

■ Required textbook:
  □ Machine Learning: A Probabilistic Perspective; Kevin Murphy

■ Optional books (free PDF):
  □ The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Trevor Hastie, Robert Tibshirani, Jerome Friedman

Page 21: CSE 546: Machine Learning

Add Codes

■ Email Elle Brown ([email protected]) for add codes

Page 22: CSE 546: Machine Learning

Enjoy!

■ ML is becoming ubiquitous in science, engineering, and beyond
■ It’s one of the hottest topics in industry today
■ This class should give you the basic foundation for applying ML and developing new methods
■ The fun begins…

Page 23: CSE 546: Machine Learning

Maximum Likelihood Estimation
Jamie Morgenstern

Page 24: CSE 546: Machine Learning

©2019 Kevin Jamieson

Your first consulting job

□ Billionaire: I have a special coin; if I flip it, what’s the probability it will be heads?

□ You: Please flip it a few times:

□ You: The probability is:

□ Billionaire: Why?

Page 25: CSE 546: Machine Learning

©2019 Kevin Jamieson

Coin – Binomial Distribution

■ Data: sequence D = (HHTHT…), k heads out of n flips
■ Hypothesis: P(Heads) = θ, P(Tails) = 1 − θ
  □ Flips are i.i.d.:
    □ Independent events
    □ Identically distributed according to a Binomial distribution

$P(D \mid \theta) = \theta^k (1-\theta)^{n-k}$

Page 26: CSE 546: Machine Learning

©2019 Kevin Jamieson

Maximum Likelihood Estimation

■ Data: sequence D = (HHTHT…), k heads out of n flips
■ Hypothesis: P(Heads) = θ, P(Tails) = 1 − θ

$P(D \mid \theta) = \theta^k (1-\theta)^{n-k}$

■ Maximum likelihood estimation (MLE): choose the θ that maximizes the probability of the observed data:

$\hat{\theta}_{\mathrm{MLE}} = \arg\max_\theta P(D \mid \theta) = \arg\max_\theta \log P(D \mid \theta)$
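As a quick numerical check of this definition (a sketch; the counts k = 3, n = 5 are taken from the example a few slides below):

import numpy as np

k, n = 3, 5  # heads, flips
thetas = np.linspace(0.001, 0.999, 999)
log_lik = k * np.log(thetas) + (n - k) * np.log(1 - thetas)  # log θ^k (1-θ)^(n-k)
print(thetas[np.argmax(log_lik)])  # ≈ 0.6 = k/n, matching the closed form derived next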

Page 27: CSE 546: Machine Learning

©2019 Kevin Jamieson

Your first learning algorithm

$\hat{\theta}_{\mathrm{MLE}} = \arg\max_\theta \log P(D \mid \theta) = \arg\max_\theta \log \theta^k (1-\theta)^{n-k}$

■ Set the derivative to zero: $\frac{d}{d\theta} \log P(D \mid \theta) = 0$
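Carrying the derivative through (the algebra the slide leaves for the board):

$\frac{d}{d\theta} \log P(D \mid \theta) = \frac{d}{d\theta}\left[k \log\theta + (n-k)\log(1-\theta)\right] = \frac{k}{\theta} - \frac{n-k}{1-\theta} = 0 \implies \hat{\theta}_{\mathrm{MLE}} = \frac{k}{n}$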

Page 28: CSE 546: Machine Learning

How many flips do I need?

©2019 Kevin Jamieson

$\hat{\theta}_{\mathrm{MLE}} = \frac{k}{n}$

■ You: flip the coin 5 times. Billionaire: I got 3 heads. $\hat{\theta}_{\mathrm{MLE}} = 3/5$
■ You: flip the coin 50 times. Billionaire: I got 20 heads. $\hat{\theta}_{\mathrm{MLE}} = 20/50$
■ Billionaire: Which one is right? Why?

Page 29: CSE 546: Machine Learning

©2019 Kevin Jamieson

Simple bound (based on Hoeffding’s inequality)

■ For n flips and k heads, the MLE $\hat{\theta}_{\mathrm{MLE}} = \frac{k}{n}$ is unbiased for the true θ*: $\mathbb{E}[\hat{\theta}_{\mathrm{MLE}}] = \theta^*$
■ Hoeffding’s inequality says that for any ε > 0:

$P(|\hat{\theta}_{\mathrm{MLE}} - \theta^*| \ge \epsilon) \le 2e^{-2n\epsilon^2}$
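A Monte Carlo sanity check of the bound (a sketch; the values θ* = 0.6, n = 50, ε = 0.1 are made up):

import numpy as np

rng = np.random.default_rng(0)
theta_star, n, eps, trials = 0.6, 50, 0.1, 100_000
flips = rng.random((trials, n)) < theta_star   # `trials` independent runs of n flips
theta_hat = flips.mean(axis=1)                 # MLE k/n for each run
empirical = np.mean(np.abs(theta_hat - theta_star) >= eps)
print(empirical, "<=", 2 * np.exp(-2 * n * eps**2))  # deviation frequency vs. Hoeffding bound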

Page 30: CSE 546: Machine Learning

©2019 Kevin Jamieson

PAC Learning

■ PAC: Probably Approximately Correct
■ Billionaire: I want to know the parameter θ*, within ε = 0.1, with probability at least 1 − δ = 0.95. How many flips?

$P(|\hat{\theta}_{\mathrm{MLE}} - \theta^*| \ge \epsilon) \le 2e^{-2n\epsilon^2}$
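Working the bound through (a step the slide leaves for lecture): setting $2e^{-2n\epsilon^2} \le \delta$ and solving for n gives

$n \ge \frac{\log(2/\delta)}{2\epsilon^2}$

With ε = 0.1 and δ = 0.05 this is $n \ge \frac{\log 40}{2(0.1)^2} \approx 184.4$, so 185 flips suffice.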

Page 31: CSE 546: Machine Learning

©2019 Kevin Jamieson

What about continuous variables?

■ Billionaire: What if I am measuring a continuous variable? ■ You: Let me tell you about Gaussians…

Page 32: CSE 546: Machine Learning

©2019 Kevin Jamieson

Some properties of Gaussians

■ Affine transformation (multiplying by a scalar and adding a constant):
  □ X ~ N(μ, σ²)
  □ Y = aX + b ⟹ Y ~ N(aμ + b, a²σ²)
■ Sum of independent Gaussians:
  □ X ~ N(μ_X, σ²_X) and Y ~ N(μ_Y, σ²_Y)
  □ Z = X + Y ⟹ Z ~ N(μ_X + μ_Y, σ²_X + σ²_Y)

Page 33: CSE 546: Machine Learning

©2019 Kevin Jamieson

MLE for Gaussian

■ Probability of i.i.d. samples D = {x_1, …, x_n} (e.g., exam scores):

$P(D \mid \mu, \sigma) = P(x_1, \ldots, x_n \mid \mu, \sigma) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^n \prod_{i=1}^n e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$

■ Log-likelihood of the data:

$\log P(D \mid \mu, \sigma) = -n \log(\sigma\sqrt{2\pi}) - \sum_{i=1}^n \frac{(x_i-\mu)^2}{2\sigma^2}$

Page 34: CSE 546: Machine Learning

©2019 Kevin Jamieson

Your second learning algorithm: MLE for the mean of a Gaussian

• What’s the MLE for the mean? Set the derivative to zero:

$\frac{d}{d\mu} \log P(D \mid \mu, \sigma) = \frac{d}{d\mu}\left[-n \log(\sigma\sqrt{2\pi}) - \sum_{i=1}^n \frac{(x_i-\mu)^2}{2\sigma^2}\right] = 0$
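Carrying the differentiation through (the step left for lecture): the first term does not depend on μ, so

$\frac{d}{d\mu} \log P(D \mid \mu, \sigma) = \sum_{i=1}^n \frac{x_i - \mu}{\sigma^2} = 0 \implies \hat{\mu}_{\mathrm{MLE}} = \frac{1}{n} \sum_{i=1}^n x_i$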

Page 35: CSE 546: Machine Learning

©2019 Kevin Jamieson

MLE for variance

• Again, set the derivative to zero:

$\frac{d}{d\sigma} \log P(D \mid \mu, \sigma) = \frac{d}{d\sigma}\left[-n \log(\sigma\sqrt{2\pi}) - \sum_{i=1}^n \frac{(x_i-\mu)^2}{2\sigma^2}\right] = 0$
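Carrying this one through as well:

$\frac{d}{d\sigma} \log P(D \mid \mu, \sigma) = -\frac{n}{\sigma} + \sum_{i=1}^n \frac{(x_i-\mu)^2}{\sigma^3} = 0 \implies \hat{\sigma}^2_{\mathrm{MLE}} = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu}_{\mathrm{MLE}})^2$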

Page 36: CSE 546: Machine Learning

©2019 Kevin Jamieson

Learning Gaussian parameters

■ MLE:

$\hat{\mu}_{\mathrm{MLE}} = \frac{1}{n} \sum_{i=1}^n x_i$
$\hat{\sigma}^2_{\mathrm{MLE}} = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{\mu}_{\mathrm{MLE}})^2$

■ The MLE for the variance of a Gaussian is biased: $\mathbb{E}[\hat{\sigma}^2_{\mathrm{MLE}}] \ne \sigma^2$
  □ Unbiased variance estimator:

$\hat{\sigma}^2_{\mathrm{unbiased}} = \frac{1}{n-1} \sum_{i=1}^n (x_i - \hat{\mu}_{\mathrm{MLE}})^2$
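In code, the two variance estimators differ only in their normalization (a sketch with made-up data; numpy’s ddof argument selects the divisor):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10)  # small made-up sample, true σ² = 4
mu_mle = x.mean()                            # (1/n) Σ x_i
var_mle = np.var(x, ddof=0)                  # divides by n: the biased MLE
var_unbiased = np.var(x, ddof=1)             # divides by n-1: the unbiased estimator
print(mu_mle, var_mle, var_unbiased)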

Pages 37–38: CSE 546: Machine Learning

©2019 Kevin Jamieson

Maximum Likelihood Estimation

Observe $X_1, X_2, \ldots, X_n$ drawn i.i.d. from $f(x; \theta)$ for some “true” $\theta = \theta^*$.

Likelihood function: $L_n(\theta) = \prod_{i=1}^n f(X_i; \theta)$
Log-likelihood function: $\ell_n(\theta) = \log L_n(\theta) = \sum_{i=1}^n \log f(X_i; \theta)$
Maximum likelihood estimator (MLE): $\hat{\theta}_{\mathrm{MLE}} = \arg\max_\theta L_n(\theta)$

Properties (under benign regularity conditions: smoothness, identifiability, etc.):
■ Asymptotically consistent and normal: $\frac{\hat{\theta}_{\mathrm{MLE}} - \theta^*}{\widehat{\mathrm{se}}} \sim N(0, 1)$
■ Asymptotically optimal: minimum variance (see the Cramér–Rao lower bound)

Page 39: CSE 546: Machine Learning

Recap

©2019 Kevin Jamieson

■ Learning is…
  □ Collect some data (e.g., coin flips)
  □ Choose a hypothesis class or model (e.g., binomial)
  □ Choose a loss function (e.g., data likelihood)
  □ Choose an optimization procedure (e.g., set the derivative to zero to obtain the MLE)
  □ Justify the accuracy of the estimate (e.g., via Hoeffding’s inequality)

Page 40: CSE 546: Machine Learning

©2019 Kevin Jamieson

Maximum Likelihood Estimation, cont.
Machine Learning – CSE 446
Jamie Morgenstern, University of Washington

Pages 41–42: CSE 546: Machine Learning

The 1d regression problem

©2018 Kevin Jamieson

[Figure: scatter plot of Sale Price vs. # square feet.]

Given past sales data on zillow.com, predict y = house sale price from x = # sq. ft.

Training data: $\{(x_i, y_i)\}_{i=1}^n$

Model (linear model + noise model):
$y_i = x_i w + b + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2)$

Loss function:
$\sum_{i=1}^n -\log p(y_i \mid x_i, w, b)$

Pages 43–44: CSE 546: Machine Learning

The 1d regression problem

©2018 Kevin Jamieson

Model: $y_i = x_i w + b + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2)$

Loss function:
$\sum_{i=1}^n -\log p(y_i \mid x_i, w, b) = \sum_{i=1}^n -\log\left(\frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(y_i - (w x_i + b))^2}{2\sigma^2}\right\}\right)$

$\arg\min_{w,b} \sum_{i=1}^n -\log p(y_i \mid x_i, w, b) \equiv \arg\min_{w,b} \sum_{i=1}^n (y_i - (w x_i + b))^2$
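Why the equivalence holds: expanding the negative log-likelihood,

$\sum_{i=1}^n -\log p(y_i \mid x_i, w, b) = n \log\sqrt{2\pi\sigma^2} + \frac{1}{2\sigma^2} \sum_{i=1}^n (y_i - (w x_i + b))^2$

The first term and the factor $\frac{1}{2\sigma^2}$ do not depend on (w, b), so minimizing the negative log-likelihood over (w, b) is the same as minimizing the sum of squared errors.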

Page 45: CSE 546: Machine Learning

The 1d regression problem

©2018 Kevin Jamieson

[Figure: scatter plot of Sale Price vs. # square feet.]

Given past sales data on zillow.com, predict y = house sale price from x = # sq. ft.

Training data: $\{(x_i, y_i)\}_{i=1}^n$

Model: $y_i = x_i w + b + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2)$

Loss function: $\sum_{i=1}^n (y_i - (w x_i + b))^2$
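The minimizer of this loss has a closed form; here is a minimal sketch of fitting it (the sales data below are made up, not from zillow.com):

import numpy as np

def fit_1d_least_squares(x, y):
    # Closed-form minimizer of Σ (y_i - (w x_i + b))²:
    #   w = Σ (x_i - x̄)(y_i - ȳ) / Σ (x_i - x̄)²,  b = ȳ - w x̄
    x_bar, y_bar = x.mean(), y.mean()
    w = ((x - x_bar) * (y - y_bar)).sum() / ((x - x_bar) ** 2).sum()
    b = y_bar - w * x_bar
    return w, b

rng = np.random.default_rng(0)
sqft = rng.uniform(500, 3000, size=100)                        # hypothetical house sizes
price = 300 * sqft + 50_000 + rng.normal(0, 20_000, size=100)  # hypothetical prices + noise
w, b = fit_1d_least_squares(sqft, price)
print(w, b)  # should recover roughly w ≈ 300, b ≈ 50,000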