A majorization-minimization algorithm for (multiple) hyperparameter learning
ICML 2009, Montreal, Canada, 17th June 2009
Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng, Stanford University

Transcript of "A majorization-minimization algorithm for (multiple) hyperparameter learning"

Page 1: (slides: ai.stanford.edu/~csfoo/slides/MMHyperLearn-slides.pdf)

A majorization-minimization algorithm

for (multiple) hyperparameter learning

ICML 2009 Montreal, Canada

17th June 2009

Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng

Stanford University

Page 2:

Supervised learning

• Training set of m IID examples

• Probabilistic model

• Estimate parameters

Labels may be real-valued, discrete, structured

Page 3:

• Regularized maximum likelihood estimation

• Also maximum a posteriori (MAP) estimation

Regularization prevents overfitting

[Equation annotations: data log-likelihood + log-prior over model parameters]

Example: L2-regularized logistic regression; C is the regularization hyperparameter
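The objective itself is an equation image not captured in the transcript; for L2-regularized logistic regression it presumably takes the standard form

```latex
\hat{w} \;=\; \arg\max_{w} \;
\underbrace{\sum_{i=1}^{m} \log p\big(y^{(i)} \mid x^{(i)}; w\big)}_{\text{data log-likelihood}}
\;\underbrace{-\; \frac{C}{2}\,\lVert w \rVert_2^2}_{\text{log-prior over } w}
```

where C > 0 is the regularization hyperparameter.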

Page 4:

How to select the hyperparameter(s)?

• Grid search

+ Simple to implement

− Scales exponentially with # hyperparameters

• Gradient-based algorithms

+ Scales well with # hyperparameters

− Non-trivial to implement

Can we get the best of both worlds?

Page 5:

Our contribution

✓ Striking ease of implementation

✓ Simple, closed-form updates for C

✓ Leverage existing solvers

✓ Scales well to the multiple-hyperparameter case

✓ Applicable to a wide range of models

Page 6:

Outline

1. Problem definition

2. The “integrate out” strategy

3. The Majorization-Minimization algorithm

4. Experiments

5. Discussion

Page 7:

The “integrate out” strategy

• Treat hyperparameter C as a random variable

• Analytically integrate out C

• Need a convenient prior p(C)

Page 8:

Integrating out a single hyperparameter

• For L2 regularization, the prior on w is Gaussian with precision C

• A convenient prior: a Gamma prior p(C) with parameters α, β

• The result, after integrating:

1. C is gone
2. Neither convex nor concave in w
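The slide's equations are not in the transcript; given the Gamma prior named later in the talk (slide 20), the integration presumably proceeds as

```latex
p(w) \;=\; \int_0^\infty p(w \mid C)\, p(C)\, dC
\;\propto\; \int_0^\infty C^{n/2} e^{-\frac{C}{2}\lVert w\rVert^2}
\cdot C^{\alpha-1} e^{-\beta C}\, dC
\;=\; \frac{\Gamma(\alpha + n/2)}{\big(\beta + \tfrac{1}{2}\lVert w\rVert^2\big)^{\alpha + n/2}},
```

so the new negative log-prior is, up to constants, (α + n/2) log(β + ½‖w‖²): C is gone, and the log term is neither convex nor concave in w.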

Page 9:

The Majorization-Minimization Algorithm

• Replace hard problem by series of easier ones

• EM-like; two steps:

1. Majorization

Upper bound the objective function

2. Minimization

Minimize the upper bound

Page 10:

MM1: Upper-bounding the new prior

• New prior:

• Linearize the log:

[Figure: y = log(x) for x in (0, 5] together with its first-order expansions (tangent lines) at x = 1, 1.5, and 2; each tangent line lies above log(x).]
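The tangent-line bound underlying the majorization step is the standard first-order expansion of the concave log:

```latex
\log x \;\le\; \log x_0 \;+\; \frac{x - x_0}{x_0}
\qquad \text{for all } x, x_0 > 0,
```

with equality at x = x₀, which is what makes the bound tight at the current iterate.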

Page 11:

MM2: Solving the resultant optimization problem

• Resultant linearized prior: quadratic in w, plus terms independent of w

• Get standard L2-regularization: use existing solvers!
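Applying the tangent bound at the current iterate w₀ to the integrated-out prior presumably yields, up to constants,

```latex
\big(\alpha + \tfrac{n}{2}\big) \log\!\big(\beta + \tfrac{1}{2}\lVert w\rVert^2\big)
\;\le\; \frac{\alpha + n/2}{\beta + \tfrac{1}{2}\lVert w_0\rVert^2}
\cdot \tfrac{1}{2}\lVert w\rVert^2 \;+\; \text{const},
```

i.e., standard L2 regularization with effective hyperparameter C = (α + n/2) / (β + ½‖w₀‖²), which gives the closed-form update for C.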

Page 12:

Visualization of the upper bound

[Figure, two panels: left, y = log(0.5x² + 1) with the upper bounds obtained by linearizing the log at x = 1, 1.5, and 2; right, y = log(x) with its tangent-line expansions at the same points. The bounds lie above each curve.]

Page 13:

Overall algorithm

1. Closed form updates for C

2. Leverage existing solvers

Converges to a local minimum
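The two steps above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the plain gradient-descent inner solver are my own choices (the slides only say to leverage existing solvers), and the closed-form C update follows from linearizing the integrated-out prior.

```python
import numpy as np

def mm_hyperlearn(X, y, alpha=0.0, beta=1.0, n_outer=15, n_inner=300, lr=0.1):
    """MM loop for L2-regularized logistic regression (sketch).

    Alternates:
      1. Minimization: solve the standard L2-regularized problem at the
         current C (gradient descent stands in for an existing solver).
      2. Majorization: closed-form update
         C = (alpha + n/2) / (beta + ||w||^2 / 2).
    """
    m, n = X.shape
    w = np.zeros(n)
    C = 1.0
    for _ in range(n_outer):
        # Step 1: inner solve with C held fixed.
        for _ in range(n_inner):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
            grad = X.T @ (p - y) + C * w         # NLL gradient + L2 term
            w -= lr * grad / m
        # Step 2: closed-form hyperparameter update.
        C = (alpha + n / 2.0) / (beta + 0.5 * (w @ w))
    return w, C
```

With α = 0, β = 1 (the setting used in the experiments), C shrinks as the fitted weights grow, so strongly-supported weights are regularized less on later iterations.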

Page 14:

What about multiple hyperparameters?

• Regularization groups: partition the weights, one hyperparameter per group

w = ( w1, w2, w3, w4, w5 )    C = ( C1, C2 )

• Mapping from weights to groups

• NLP example: unigram feature weights vs. bigram feature weights

• RNA secondary structure prediction example: hairpin loop vs. bulge loop features

“To C or not to C. That is the question…”

Page 15:

What about multiple hyperparameters?

• Separately update each regularization group

• Sum the squared weights in each group

• Weighted L2-regularization
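The grouped update can be sketched directly (function name is mine; the formula is the per-group version of the single-hyperparameter update, using only the weights mapped to each group):

```python
import numpy as np

def grouped_c_update(w, groups, alpha=0.0, beta=1.0):
    """Closed-form per-group hyperparameter update (sketch).

    Group k's update uses only the number of weights in the group (n_k)
    and the sum of their squares:
        C_k = (alpha + n_k / 2) / (beta + (1/2) * sum_{i in group k} w_i^2)
    """
    w = np.asarray(w, dtype=float)
    groups = np.asarray(groups)
    C = np.empty(groups.max() + 1)
    for k in range(len(C)):
        wk = w[groups == k]                       # weights mapped to group k
        C[k] = (alpha + len(wk) / 2.0) / (beta + 0.5 * (wk @ wk))
    return C
```

Each C_k is then used as the weight on its group's squared norm in the next weighted-L2 inner solve.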

Page 16:

Experiments

• 4 probabilistic models

– Linear regression (too easy, not shown)

– Binary logistic regression

– Multinomial logistic regression

– Conditional log-linear model

• 3 competing algorithms

– Grid search

– Gradient-based algorithm (Do et al., 2007)

– Direct optimization of new objective

• Algorithm run with α = 0, β = 1

Page 17:

Results: Binary Logistic Regression

[Figure: test accuracy (50–100%) of Grid, Grad, Direct, and MM on eleven binary classification datasets: australian, breast-cancer, diabetes, german-numer, heart, ionosphere, liver-disorders, mushrooms, sonar, splice, and w1a.]

Page 18:

Results: Multinomial Logistic Regression

[Figure: test accuracy (30–100%) of Grid, Grad, Direct, and MM on thirteen multiclass datasets: connect-4, dna, glass, iris, letter, mnist, satimage, segment, svmguide2, usps, vehicle, vowel, and wine.]

Page 19:

Results: Conditional Log-Linear Models

• RNA secondary structure

prediction

• Multiple hyperparameters

[Figure: ROC area (0.58–0.65) for Gradient, Direct, and MM, comparing a single shared hyperparameter against grouped hyperparameters.]

Example RNA sequence and secondary structure (dot-bracket notation):
AGCAGAGUGGCGCAGUGGAAGCGUGCUGGUCCCAUAACCCAGAGGUCCGAGGAUCGAAACCUUGCUCUGCUA
(((((((((((((.......))))..((((((....(((....)))....))))))......))))))))).

Page 20:

Discussion

• How to choose α, β in Gamma prior?

– Sensitivity experiments

– Simple choice reasonable

– Further investigation required

• Simple assumptions sometimes wrong

• But competitive performance with Grid, Grad

• Suited for quick-and-dirty implementations

Page 21:

Thank you!