Maximum Entropy Model (III): training and smoothing


Transcript of Maximum Entropy Model (III): training and smoothing

Page 1: Maximum Entropy Model (III): training and smoothing

Maximum Entropy Model (III): training and smoothing

LING 572: Advanced Statistical Methods for NLP

February 4, 2020

1

Page 2: Maximum Entropy Model (III): training and smoothing

Outline

● Overview

● The Maximum Entropy Principle

● Modeling**

● Decoding

● Training**

● Smoothing**

● Case study: POS tagging (already covered in LING 570)

2

Page 3: Maximum Entropy Model (III): training and smoothing

Training

3

Page 4: Maximum Entropy Model (III): training and smoothing

Algorithms

● Generalized Iterative Scaling (GIS): (Darroch and Ratcliff, 1972)

● Improved Iterative Scaling (IIS): (Della Pietra et al., 1995)

● L-BFGS:

● …

4

Page 5: Maximum Entropy Model (III): training and smoothing

GIS: setup**

● Requirements for running GIS:

  ∀(x, y) ∈ X × Y: ∑_{j=1}^{k} f_j(x, y) = C

● If that’s not the case, let

  C = max_{(x_i, y_i) ∈ S} ∑_{j=1}^{k} f_j(x_i, y_i)

● Add a “correction” feature function:

  ∀(x, y) ∈ X × Y: f_{k+1}(x, y) = C − ∑_{j=1}^{k} f_j(x, y)

5
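A minimal sketch of this setup step in Python, assuming each instance’s active features are stored as a dict mapping feature index to value; the names `instances` and `add_correction_feature` are illustrative, not from the slides:

    def add_correction_feature(instances, k):
        # instances: list of feature dicts {j: f_j(x_i, y_i)} with j in 0..k-1
        # C: the largest total feature value observed in the training data
        C = max(sum(fv.values()) for fv in instances)
        for fv in instances:
            # f_{k+1}(x, y) = C - sum_j f_j(x, y), stored at index k
            fv[k] = C - sum(fv.values())
        return C

After this step, every instance’s features sum to exactly C, which is what the GIS update assumes.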

Page 6: Maximum Entropy Model (III): training and smoothing

GIS algorithm

● Compute the empirical expectation:

  d_j = E_p̃[f_j] = (1/N) ∑_{i=1}^{N} f_j(x_i, y_i)

● Initialize λ_j^{(0)} to 0 or some other value

● Repeat until convergence, for each j:

  ● Calculate p(y | x) under the current model:

    p^{(n)}(y | x) = e^{∑_{j=1}^{k} λ_j^{(n)} f_j(x, y)} / Z

  ● Calculate the model expectation under the current model:

    E_{p^{(n)}}[f_j] = (1/N) ∑_{i=1}^{N} ∑_{y ∈ Y} p^{(n)}(y | x_i) f_j(x_i, y)

  ● Update the model parameters:

    λ_j^{(n+1)} = λ_j^{(n)} + (1/C) log( d_j / E_{p^{(n)}}[f_j] )

6
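A compact sketch of this loop, assuming dense NumPy feature vectors of length k+1 that already include the correction feature and therefore sum to C; `feats`, `data`, and `classes` are illustrative names, not from the slides:

    import numpy as np

    def train_gis(data, classes, feats, k, C, iterations=100):
        # data: list of (x, y_true) pairs; feats(x, y) -> length-(k+1) array summing to C
        N = len(data)
        lam = np.zeros(k + 1)                               # lambda_j^{(0)} = 0
        d = sum(feats(x, y) for x, y in data) / N           # empirical expectation d_j
        for _ in range(iterations):
            model_exp = np.zeros(k + 1)
            for x, _ in data:
                scores = np.array([lam @ feats(x, y) for y in classes])
                probs = np.exp(scores - scores.max())
                probs /= probs.sum()                        # p^{(n)}(y | x)
                for p_y, y in zip(probs, classes):
                    model_exp += p_y * feats(x, y)
            model_exp /= N                                  # E_{p^{(n)}}[f_j]
            # Features with zero empirical counts need special handling (see smoothing).
            lam += np.log(d / model_exp) / C                # GIS update
        return lam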

Page 7: Maximum Entropy Model (III): training and smoothing

“Until convergence”

L(p) = ∑_{(x, y) ∈ S} p̃(x, y) log p(y | x)

L(p^{(n)}) = ∑_{(x, y) ∈ S} p̃(x, y) log p^{(n)}(y | x)

Stopping criteria:

  L(p^{(n+1)}) − L(p^{(n)}) < threshold

  ( L(p^{(n+1)}) − L(p^{(n)}) ) / L(p^{(n)}) < threshold

7
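A small helper showing both stopping tests in code (illustrative only; the threshold value is not from the slides):

    def has_converged(ll_prev, ll_new, threshold=1e-4, relative=False):
        # ll_prev = L(p^(n)), ll_new = L(p^(n+1)); log-likelihoods are <= 0
        delta = ll_new - ll_prev
        if relative:
            return abs(delta / ll_prev) < threshold
        return delta < threshold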

Page 8: Maximum Entropy Model (III): training and smoothing

Calculating LL(p)

LL = 0;
for each training instance x:
    let y be the true label of x
    prob = p(y | x)    # p is the current model
    LL += log(prob)

8
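The same computation as a runnable function, assuming the current model exposes a `prob(y, x)` method returning p(y | x); this interface is an assumption for the sketch, not from the slides:

    import math

    def log_likelihood(model, data):
        # data: list of (x, y) pairs, where y is the true label of x
        ll = 0.0
        for x, y in data:
            ll += math.log(model.prob(y, x))   # p(y | x) under the current model
        return ll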

Page 9: Maximum Entropy Model (III): training and smoothing

Properties of GIS

● L(p^{(n+1)}) ≥ L(p^{(n)})

● The sequence is guaranteed to converge to p*.

● The convergence can be very slow.

● The running time of each iteration is O(NPA):
  ● N: the training set size
  ● P: the number of classes
  ● A: the average number of features that are active for an instance (x, y)

9

Page 10: Maximum Entropy Model (III): training and smoothing

L-BFGS

● BFGS stands for Broyden-Fletcher-Goldfarb-Shanno: authors of four single-authored papers published in 1970.

● L-BFGS: Limited-memory BFGS, proposed in 1980s.

● Quasi-Newton method for unconstrained optimization. **

● Especially efficient on problems involving a large number of variables.

10

Page 11: Maximum Entropy Model (III): training and smoothing

L-BFGS (cont)**

● References:
  ● J. Nocedal. Updating Quasi-Newton Matrices with Limited Storage (1980), Mathematics of Computation 35, pp. 773-782.
  ● D.C. Liu and J. Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization (1989), Mathematical Programming B, 45, 3, pp. 503-528.

● Implementations:
  ● Fortran: http://www.ece.northwestern.edu/~nocedal/lbfgs.html
  ● C: http://www.chokkan.org/software/liblbfgs/index.html
  ● Perl: http://search.cpan.org/~laye/Algorithm-LBFGS-0.12/lib/Algorithm/LBFGS.pm
  ● SciPy: https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html

11
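As a rough illustration of how the SciPy routine above can be used for MaxEnt training: minimize the negative log-likelihood and supply its gradient (empirical minus model expectations). The names `feats`, `data`, and `classes` are assumptions for this sketch, not part of the slides:

    import numpy as np
    from scipy.optimize import minimize

    def neg_ll_and_grad(lam, data, classes, feats):
        # Negative conditional log-likelihood of a MaxEnt model, and its gradient.
        nll, grad = 0.0, np.zeros_like(lam)
        for x, y_true in data:
            fvecs = {y: feats(x, y) for y in classes}
            scores = np.array([lam @ fvecs[y] for y in classes])
            scores -= scores.max()
            probs = np.exp(scores)
            probs /= probs.sum()
            nll -= np.log(probs[classes.index(y_true)])
            grad -= fvecs[y_true]                      # empirical feature counts
            for p_y, y in zip(probs, classes):
                grad += p_y * fvecs[y]                 # model expectations
        return nll, grad

    # result = minimize(neg_ll_and_grad, np.zeros(num_features),
    #                   args=(data, classes, feats), jac=True, method='L-BFGS-B')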

Page 12: Maximum Entropy Model (III): training and smoothing

Outline

● Overview

● The Maximum Entropy Principle

● Modeling**

● Decoding

● Training**

● Smoothing**

● Case study: POS tagging

12

Page 13: Maximum Entropy Model (III): training and smoothing

Smoothing

Many slides come from (Klein and Manning, 2003)

13

Page 14: Maximum Entropy Model (III): training and smoothing

Papers

● (Klein and Manning, 2003)

● Chen and Rosenfeld (1999): A Gaussian Prior for Smoothing Maximum Entropy Models, CMU Technical report (CMU-CS-99-108).

14

Page 15: Maximum Entropy Model (III): training and smoothing

Smoothing

● MaxEnt models for NLP tasks can have millions of features.

● Overfitting is a problem.

● Feature weights can be infinite, and the iterative trainers can take a long time to reach those values.

15

Page 16: Maximum Entropy Model (III): training and smoothing

An example

16

Page 17: Maximum Entropy Model (III): training and smoothing

17

Page 18: Maximum Entropy Model (III): training and smoothing

Approaches

● Early stopping

● Feature selection

● Regularization**

18

Page 19: Maximum Entropy Model (III): training and smoothing

Early Stopping

● Prior use of early stopping
  ● Decision tree heuristics

● Similarly here
  ● Stop training after a few iterations

● The values of parameters will be finite.

● Commonly used in early MaxEnt work

19

Page 20: Maximum Entropy Model (III): training and smoothing

Feature selection

● Methods:
  ● Using predefined functions: e.g., dropping features with low counts
  ● Wrapper approach: feature selection during training

● Equivalent to setting the removed features’ weights to be zero.

● Reduces model size, but performance could suffer.

20

Page 21: Maximum Entropy Model (III): training and smoothing

Regularization**

● In statistics and machine learning, regularization is any method of preventing overfitting of data by a model.

● Typical examples of regularization in statistical machine learning include ridge regression, lasso, and L2-norm in support vector machines.

● In this case, we change the objective function:

21

log P(Y, λ | X)  =  log P(λ)  +  log P(Y | X, λ)
   (Posterior)        (Prior)       (Likelihood)

Page 22: Maximum Entropy Model (III): training and smoothing

MAP estimate**

● ML (Maximum Likelihood): maximize P(X, Y | λ) or P(Y | X, λ)

● MAP (Maximum A Posteriori): maximize P(λ | X, Y) or P(Y, λ | X), where

  log P(Y, λ | X) = log P(λ) + log P(Y | X, λ)

22

Page 23: Maximum Entropy Model (III): training and smoothing

The prior**

● Uniform distribution, Exponential prior, …

● Gaussian prior:

  P(λ_i) = 1 / (σ_i √(2π)) · exp( −(λ_i − μ_i)² / (2σ_i²) )

● With this prior (taking μ_i = μ and σ_i = σ for all i):

  log P(Y, λ | X) = log P(λ) + log P(Y | X, λ)

                  = ∑_{i=1}^{k} log P(λ_i) + log P(Y | X, λ)

                  = −k log(σ √(2π)) − ∑_{i=1}^{k} (λ_i − μ)² / (2σ²) + log P(Y | X, λ)

23

Page 24: Maximum Entropy Model (III): training and smoothing

● Maximize P(Y | X, λ):

  E_p[f_j] = E_p̃[f_j]

● Maximize P(Y, λ | X):

  E_p[f_j] = E_p̃[f_j] − (λ_j − μ) / σ²

● In practice: μ = 0, 2σ² = 1

24
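In code, the Gaussian prior simply adds a quadratic penalty to the negative log-likelihood and a (λ_j − μ)/σ² term to its gradient; a hedged sketch (the helper name is illustrative):

    import numpy as np

    def add_gaussian_prior(nll, grad, lam, mu=0.0, sigma2=1.0):
        # Subtracting log P(lambda) from the objective adds the penalty below.
        nll = nll + np.sum((lam - mu) ** 2) / (2.0 * sigma2)
        grad = grad + (lam - mu) / sigma2
        return nll, grad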

Page 25: Maximum Entropy Model (III): training and smoothing

L1 or L2 regularization**

● L1 = ∑_i log P(y_i, λ | x_i) − ‖λ‖ / σ
  ● optimized with the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method (Andrew and Gao, 2007)

● L2 = ∑_i log P(y_i, λ | x_i) − ‖λ‖² / (2σ²)
  ● optimized with the L-BFGS method (Nocedal, 1980)

25
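A tiny sketch contrasting the two penalized objectives above (σ is a hyperparameter; all names are illustrative):

    import numpy as np

    def penalized_objective(loglik, lam, sigma, norm="l2"):
        # loglik: sum_i log P(y_i, lambda | x_i); lam: weight vector
        if norm == "l1":
            return loglik - np.sum(np.abs(lam)) / sigma            # L1 penalty
        return loglik - np.sum(lam ** 2) / (2.0 * sigma ** 2)      # L2 penalty

The L1 term is non-differentiable at zero, which is why the orthant-wise method is used there, while plain L-BFGS handles the smooth L2 term.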

Page 26: Maximum Entropy Model (III): training and smoothing

Example: POS tagging

26

Page 27: Maximum Entropy Model (III): training and smoothing

Benefits of smoothing**

● Softens distributions

● Pushes weights onto more explanatory features

● Allows many features to be used safely

● Can speed up convergence

27

Page 28: Maximum Entropy Model (III): training and smoothing

Summary: training and smoothing

● Training: many methods (e.g., GIS, IIS, L-BFGS).

● Smoothing:
  ● Early stopping
  ● Feature selection
  ● Regularization

● Regularization:
  ● Changing the objective function by adding the prior
  ● A common prior: the Gaussian distribution
  ● Maximizing the posterior is no longer the same as maximizing entropy.

28

Page 29: Maximum Entropy Model (III): training and smoothing

Outline

● Overview

● The Maximum Entropy Principle

● Modeling**:

● Decoding:

● Training**: compare the empirical expectation with the model expectation and adjust the weights accordingly

● Smoothing**: change the objective function

● Case study: POS tagging

29

p(y | x) = e^{∑_{j=1}^{k} λ_j f_j(x, y)} / Z

Page 30: Maximum Entropy Model (III): training and smoothing

Additional slides

30

Page 31: Maximum Entropy Model (III): training and smoothing

The “correction” feature function for GIS

31

f_{k+1}(x, y) = C − ∑_{j=1}^{k} f_j(x, y)

f_{k+1}(x_1, y) = c_1,  f_{k+1}(x_2, y) = c_2,  …

The weight of f_{k+1} will not affect P(y | x).

Therefore, there is no need to estimate the weight.

Page 32: Maximum Entropy Model (III): training and smoothing

Ex4 (cont)

32


Page 33: Maximum Entropy Model (III): training and smoothing

Training

33

Page 34: Maximum Entropy Model (III): training and smoothing

IIS algorithm

● Compute d_j, j = 1, …, k+1, and

  f^#(x, y) = ∑_{j=1}^{k} f_j(x, y)

● Initialize λ_j^{(1)} (any values, e.g., 0)

● Repeat until convergence
  ● For each j
    ● Let Δλ_j be the solution to

      ∑_{(x, y) ∈ ε} p̃(x) p^{(n)}(y | x) f_j(x, y) e^{Δλ_j f^#(x, y)} = d_j

    ● Update λ_j^{(n+1)} = λ_j^{(n)} + Δλ_j

34

Page 35: Maximum Entropy Model (III): training and smoothing

Calculating Δλ_j

If  ∀(x, y): ∑_{j=1}^{k} f_j(x, y) = C

Then  Δλ_j = (1/C) log( d_j / E_{p^{(n)}}[f_j] ),  and GIS is the same as IIS.

Else  Δλ_j must be calculated numerically.

35
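When f^#(x, y) is not constant, a one-dimensional Newton’s method is a common way to solve the IIS equation for Δλ_j. A minimal sketch, assuming `terms` holds precomputed triples (p̃(x)·p^{(n)}(y | x), f_j(x, y), f^#(x, y)) under the current model; all names are illustrative:

    import math

    def solve_delta_lambda(terms, d_j, iterations=50, tol=1e-10):
        # Find delta such that sum_i w_i * f_i * exp(delta * fhash_i) = d_j.
        delta = 0.0
        for _ in range(iterations):
            g = sum(w * f * math.exp(delta * fh) for w, f, fh in terms) - d_j
            g_prime = sum(w * f * fh * math.exp(delta * fh) for w, f, fh in terms)
            if abs(g) < tol or g_prime == 0.0:
                break
            delta -= g / g_prime                   # Newton step
        return delta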

Page 36: Maximum Entropy Model (III): training and smoothing

Feature selection

36

Page 37: Maximum Entropy Model (III): training and smoothing

Feature selection

● Throw in many features and let the machine select the weights
  ● Manually specify feature templates

● Problem: too many features

● An alternative: greedy algorithm
  ● Start with an empty set S
  ● Add a feature at each iteration

37

Page 38: Maximum Entropy Model (III): training and smoothing

Two scenarios

Scenario #1: no feature selection during training

● Define features templates

● Create the feature set

● Determine the optimum feature weights via GIS or IIS

Scenario #2: with feature selection during training

● Define feature templates

● Create a candidate feature set F

● At every iteration, choose the feature from F (with max gain) and determine its weight (or choose top-n features and their weights).

38

Page 39: Maximum Entropy Model (III): training and smoothing

Notation

39

With the feature set S: the model is p_S

After adding a feature f: the model is p_{S∪f}

The gain in the log-likelihood of the training data: L(p_{S∪f}) − L(p_S)

Page 40: Maximum Entropy Model (III): training and smoothing

Feature selection algorithm (Berger et al., 1996)

● Start with S being empty; thus p_S is uniform.

● Repeat until the gain is small enough
  ● For each candidate feature f
    ● Compute the model p_{S∪f} using IIS
    ● Calculate the log-likelihood gain

● Choose the feature with the maximal gain, and add it to S

40

➔ Problem: too expensive
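A hedged sketch of this greedy loop; `candidates`, `train_iis`, and `log_likelihood` are placeholder names for the candidate set F, an IIS trainer, and the data log-likelihood, none of which are defined in the slides:

    def greedy_feature_selection(candidates, data, train_iis, log_likelihood, min_gain=1e-3):
        S = []                                             # start with an empty feature set
        best_ll = log_likelihood(train_iis(S, data), data)
        while candidates:
            gains = {}
            for f in candidates:
                model = train_iis(S + [f], data)           # retrain with S plus the candidate
                gains[f] = log_likelihood(model, data) - best_ll
            f_best = max(gains, key=gains.get)
            if gains[f_best] < min_gain:                   # stop when the gain is small enough
                break
            S.append(f_best)
            candidates.remove(f_best)
            best_ll += gains[f_best]
        return S

This also makes the cost visible: every candidate requires a full retraining pass at every iteration, which is what motivates the gain approximation on the next slide.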

Page 41: Maximum Entropy Model (III): training and smoothing

Approximating gains (Berger et al., 1996)

● Instead of recalculating all the weights, calculate only the weight of the new feature.

41