Truncated Newton Method

• approximate Newton methods

• truncated Newton methods

• truncated Newton interior-point methods

EE364b, Stanford University


Newton’s method

• minimize convex f : R^n → R

• Newton step ∆x_nt found from the (SPD) Newton system

∇²f(x) ∆x_nt = −∇f(x)

using Cholesky factorization

• backtracking line search on function value f(x) or norm of gradient ‖∇f(x)‖

• stopping criterion based on Newton decrement λ²/2, where λ² = −∇f(x)ᵀ∆x_nt, or on norm of gradient ‖∇f(x)‖
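A minimal sketch of this iteration (not the course code), assuming user-supplied callables f, grad, and hess, with the Newton step computed by a Cholesky solve and a backtracking line search on the function value:

```python
import numpy as np

def newton(f, grad, hess, x, tol=1e-8, alpha=0.01, beta=0.5, max_iter=50):
    """Damped Newton's method with Cholesky solves and backtracking."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        # Newton step from the SPD system  ∇²f(x) ∆x_nt = -∇f(x)
        L = np.linalg.cholesky(H)
        dx = -np.linalg.solve(L.T, np.linalg.solve(L, g))
        lam_sq = -g @ dx                      # Newton decrement squared
        if lam_sq / 2.0 <= tol:               # stopping criterion λ²/2 ≤ tol
            break
        t = 1.0                               # backtracking line search on f
        while f(x + t * dx) > f(x) + alpha * t * (g @ dx):
            t *= beta
        x = x + t * dx
    return x
```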


Approximate or inexact Newton methods

• use as search direction an approximate solution ∆x of Newton system

• idea: no need to compute ∆x_nt exactly; only need a good enough search direction

• number of iterations may increase, but if effort per iteration is smaller than for Newton, we win

• examples:

– solve H∆x = −∇f(x), where H is the diagonal or a band of ∇²f(x)
– factor ∇²f(x) every k iterations and use the most recent factorization
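A sketch of both variants (hypothetical helper names, not the course code), assuming the gradient g and Hessian H at the current point are available as NumPy arrays:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def diag_step(g, H):
    """Variant 1: replace ∇²f(x) by its diagonal and solve H ∆x = -g."""
    return -g / np.diag(H)

class ReusedFactorization:
    """Variant 2: refactor ∇²f(x) only every k iterations, reuse it in between."""
    def __init__(self, k=5):
        self.k, self.count, self.factor = k, 0, None

    def step(self, g, H):
        if self.count % self.k == 0:
            self.factor = cho_factor(H)   # Cholesky factorization of current Hessian
        self.count += 1
        return -cho_solve(self.factor, g)
```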


Truncated Newton methods

• approximately solve Newton system using CG or PCG, terminating (sometimes way) early

• also called Newton-iterative methods; related to limited memory Newton (or BFGS)

• total effort is measured by cumulative sum of CG steps done

• for good performance, need to tune CG stopping criterion, to use just enough steps to get a good enough search direction

• less reliable than Newton’s method, but (with good tuning, good preconditioner, fast z → ∇²f(x)z method, and some luck) can handle very large problems
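A sketch of the outer loop (hypothetical names, not the course code): hvp_at(x) is assumed to return a fast z → ∇²f(x)z routine, and truncated_cg is an early-terminated CG solver like the one sketched below, after the CG termination rule; total effort is tracked as the cumulative number of CG steps.

```python
import numpy as np

def truncated_newton(grad, hvp_at, x, Nmax=50, eps_cg=1e-4,
                     tol=1e-8, alpha=0.01, beta=0.5, max_iter=100):
    """hvp_at(x) returns a function z -> ∇²f(x) z; the Hessian is never formed."""
    total_cg_steps = 0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # approximate Newton direction from CG, terminated (possibly way) early
        dx, steps = truncated_cg(hvp_at(x), -g, np.zeros_like(x), Nmax, eps_cg)
        total_cg_steps += steps               # total effort = cumulative CG steps
        # backtracking line search on ‖∇f(x)‖ (guard t against stalling)
        t = 1.0
        while t > 1e-10 and np.linalg.norm(grad(x + t * dx)) > (1 - alpha * t) * gnorm:
            t *= beta
        x = x + t * dx
    return x, total_cg_steps
```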


Truncated Newton method

• backtracking line search on ‖∇f(x)‖

• typical CG termination rule: stop after N_max steps or when

η = ‖∇²f(x)∆x + ∇f(x)‖ / ‖∇f(x)‖ ≤ ε_pcg

• with simple rules, N_max and ε_pcg are constant

• more sophisticated rules adapt N_max or ε_pcg as the algorithm proceeds (based on, e.g., the value of ‖∇f(x)‖, or progress in reducing ‖∇f(x)‖)

requiring η ≤ min(0.1, ‖∇f(x)‖^{1/2}) guarantees (with large N_max) superlinear convergence
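A possible inner solver matching this termination rule (a sketch, not the course code): plain CG on ∇²f(x)∆x = −∇f(x), given only a Hessian-vector product routine hvp, stopping after N_max steps or when the relative residual drops below eps.

```python
import numpy as np

def truncated_cg(hvp, b, x0, Nmax, eps):
    """CG on ∇²f(x) ∆x = b (with b = -∇f(x)), given only the product z -> ∇²f(x) z.

    Stops after Nmax steps, or when the relative residual
    ‖∇²f(x)∆x + ∇f(x)‖ / ‖∇f(x)‖ = ‖b - ∇²f(x)∆x‖ / ‖b‖ drops below eps."""
    x = x0.copy()
    r = b - hvp(x)                 # residual; equals b when x0 = 0
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(Nmax):
        if np.sqrt(rs) <= eps * bnorm:
            return x, k
        Hp = hvp(p)
        a = rs / (p @ Hp)
        x += a * p
        r -= a * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, Nmax

# adaptive tolerance for superlinear convergence: eps = min(0.1, np.sqrt(np.linalg.norm(g)))
```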


CG initialization

• we use CG to approximately solve ∇²f(x)∆x + ∇f(x) = 0

• if we initialize CG with ∆x = 0

– after one CG step, ∆x points in direction of negative gradient (so, N_max = 1 results in gradient method)

– all CG iterates are descent directions for f

• another choice: initialize with ∆x = ∆x_prev, the previous search step

– initial CG iterates need not be descent directions
– but can give an advantage when N_max is small


• simple scheme: if ∆x_prev is a descent direction (∆x_prevᵀ∇f(x) < 0)

start CG from

∆x = [ −∆x_prevᵀ∇f(x) / (∆x_prevᵀ∇²f(x)∆x_prev) ] ∆x_prev

otherwise start CG from ∆x = 0
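A sketch of this initialization rule (hypothetical names): dx_prev is the previous search step and hvp the current Hessian-vector product routine.

```python
import numpy as np

def cg_init(dx_prev, g, hvp):
    """Warm-start CG: scaled previous step if it is a descent direction, else 0."""
    if dx_prev is not None and dx_prev @ g < 0:
        # scale factor minimizes the quadratic model of f along dx_prev
        return (-(dx_prev @ g) / (dx_prev @ hvp(dx_prev))) * dx_prev
    return np.zeros_like(g)
```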


Example

ℓ2-regularized logistic regression

minimize f(w) = (1/m) ∑_{i=1}^m log(1 + exp(−b_i x_iᵀ w)) + ∑_{i=1}^n λ_i w_i²

• variable is w ∈ R^n

• problem data are x_i ∈ R^n, b_i ∈ {−1, 1}, i = 1, . . . , m, and regularization parameter λ ∈ R^n_+

• n is number of features; m is number of samples/observations


Hessian and gradient

∇²f(w) = AᵀDA + 2Λ,    ∇f(w) = Aᵀg + 2Λw

where

A = [b_1x_1 · · · b_mx_m]ᵀ,   D = diag(h),   Λ = diag(λ)

g_i = −(1/m) / (1 + exp((Aw)_i))

h_i = (1/m) exp((Aw)_i) / (1 + exp((Aw)_i))²

we never form ∇²f(w); we carry out the multiplication z → ∇²f(w)z as

∇²f(w)z = (AᵀDA + 2Λ)z = Aᵀ(D(Az)) + 2Λz
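A sketch of these formulas in Python (hypothetical function name); A may be a scipy.sparse matrix with rows b_i x_iᵀ, lam is the vector of λ_i, and scipy.special.expit is used for 1/(1 + eᵘ) to avoid overflow:

```python
import numpy as np
from scipy.special import expit

def grad_and_hvp(A, lam, w):
    """∇f(w) and a z -> ∇²f(w) z routine; A has rows b_i x_iᵀ, lam is the vector λ."""
    m = A.shape[0]
    s = expit(-(A @ w))                  # s_i = 1 / (1 + exp((Aw)_i))
    g = -(1.0 / m) * s                   # g_i = -(1/m) / (1 + exp((Aw)_i))
    h = (1.0 / m) * s * (1.0 - s)        # h_i = (1/m) exp((Aw)_i) / (1 + exp((Aw)_i))²
    grad = A.T @ g + 2.0 * lam * w       # ∇f(w) = Aᵀ g + 2Λw
    def hvp(z):
        # ∇²f(w) z = (AᵀDA + 2Λ) z = Aᵀ(D(Az)) + 2Λz, with D = diag(h)
        return A.T @ (h * (A @ z)) + 2.0 * lam * z
    return grad, hvp
```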


Problem instance

• n = 10000 features, m = 20000 samples (10000 with b_i = +1 and 10000 with b_i = −1)

• x_i have random sparsity pattern, with around 10 nonzero entries

• nonzero entries in x_i drawn from N(b_i, 1)

• λ_i = 10⁻⁸

• around 500000 nonzeros in ∇²f, and 30M nonzeros in Cholesky factor
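One way to generate a synthetic instance matching this description (a sketch; names are hypothetical, and duplicate random column indices make the nonzero count per row only approximately 10):

```python
import numpy as np
import scipy.sparse as sp

def make_instance(n=10000, m=20000, nnz_per_row=10, seed=0):
    """Random instance: rows b_i x_iᵀ with ~10 nonzeros each, x_ij ~ N(b_i, 1), λ_i = 1e-8."""
    rng = np.random.default_rng(seed)
    b = np.repeat([1.0, -1.0], m // 2)                    # 10000 samples of each class
    rows = np.repeat(np.arange(m), nnz_per_row)
    cols = rng.integers(0, n, size=m * nnz_per_row)       # random sparsity pattern
    vals = rng.normal(np.repeat(b, nnz_per_row), 1.0)     # nonzero entries ~ N(b_i, 1)
    X = sp.csr_matrix((vals, (rows, cols)), shape=(m, n))
    A = sp.diags(b) @ X                                   # rows of A are b_i x_iᵀ
    lam = 1e-8 * np.ones(n)
    return A, lam
```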


Methods

• Newton (using Cholesky factorization of ∇²f(w))

• truncated Newton with ε_cg = 10⁻⁴, N_max = 10

• truncated Newton with ε_cg = 10⁻⁴, N_max = 50

• truncated Newton with ε_cg = 10⁻⁴, N_max = 250


Convergence versus iterations

[figure: ‖∇f‖ (log scale, 10⁻⁸ to 10⁻¹) versus iteration k, for cg 10, cg 50, cg 250, and Newton]


Convergence versus cumulative CG steps

[figure: ‖∇f‖ (log scale, 10⁻⁸ to 10⁻¹) versus cumulative CG iterations (0 to 1800), for cg 10, cg 50, cg 250]


• convergence of exact Newton and of truncated Newton with N_max = 50 and 250 is essentially the same, in terms of iterations

• in terms of elapsed time (and memory!), truncated Newton methods are far better than Newton

• truncated Newton with N_max = 10 seems to jam near ‖∇f(w)‖ ≈ 10⁻⁶

• times (on AMD270, 2GHz, 12GB, Linux), in seconds:

  method     ‖∇f(w)‖ ≤ 10⁻⁵    ‖∇f(w)‖ ≤ 10⁻⁸
  Newton         1600              2600
  cg 10             4                 —
  cg 50            17                26
  cg 250           35                54


Truncated PCG Newton method

approximate search direction found via diagonally preconditioned CG (PCG)

[figure: ‖∇f‖ (log scale, 10⁻⁸ to 10⁻¹) versus cumulative CG iterations (0 to 1800), for cg 10, cg 50, cg 250, pcg 10, pcg 50, pcg 250]


• diagonal preconditioning allows N_max = 10 to achieve high accuracy; it speeds up the other truncated Newton methods as well (a sketch of such a preconditioner appears after the timing table)

• times:

method     ‖∇f(w)‖ ≤ 10⁻⁵    ‖∇f(w)‖ ≤ 10⁻⁸
Newton         1600              2600
cg 10             4                 —
cg 50            17                26
cg 250           35                54
pcg 10            3                 5
pcg 50           13                24
pcg 250          23                34

• speedups of 1600:3 and 2600:5 are not bad (and we really didn’t do much tuning . . . )
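One way to build such a diagonal preconditioner for this problem (a sketch under the same assumptions as the earlier logistic regression code): the diagonal of ∇²f(w) = AᵀDA + 2Λ is d_j = Σ_i h_i A_ij² + 2λ_j, and the preconditioner applies r → r/d to residuals inside a preconditioned CG variant (the plain CG sketch above would need the standard PCG modification to use it).

```python
import numpy as np

def diag_precond(A, lam, h):
    """Diagonal of ∇²f(w) = AᵀDA + 2Λ, for use as a PCG preconditioner."""
    # d_j = Σ_i h_i A_ij² + 2 λ_j  (A assumed scipy.sparse)
    d = np.asarray(A.multiply(A).T @ h).ravel() + 2.0 * lam
    return lambda r: r / d             # apply M⁻¹ = diag(d)⁻¹ to a residual
```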


Extensions

• can extend to (infeasible start) Newton’s method with equality constraints

• since we don’t use exact Newton step, equality constraints not guaranteed to hold after finite number of steps (but ‖r_p‖ → 0)

• can use for barrier, primal-dual methods


Truncated Newton interior-point methods

• use truncated Newton method to compute search direction in interior-point method

• tuning PCG parameters for optimal performance on a given problem class is tricky, since linear systems in interior-point methods often become ill-conditioned as the algorithm proceeds

• but can work well (with luck, good preconditioner)


Network rate control

rate control problem

minimize   −U(f) = −∑_{j=1}^n log f_j
subject to Rf ⪯ c

with variable f

• f ∈ R^n_{++} is the vector of flow rates

• U(f) = ∑_{j=1}^n log f_j is the flow utility

• R ∈ R^{m×n} is the route matrix (R_ij ∈ {0, 1})

• c ∈ R^m is the vector of link capacities


Dual rate control problem

dual problem

maximize   g(λ) = n − cᵀλ + ∑_{j=1}^n log(r_jᵀλ)
subject to λ ⪰ 0

with variable λ ∈ R^m; here r_j denotes the jth column of R

duality gap

η = −U(f) − g(λ) = −∑_{j=1}^n log f_j − n + cᵀλ − ∑_{j=1}^n log(r_jᵀλ)
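For reference, a sketch of evaluating this gap (hypothetical name; f, lam, c are NumPy arrays and R may be a scipy.sparse route matrix):

```python
import numpy as np

def duality_gap(R, c, f, lam):
    """η = -U(f) - g(λ) for the rate control problem."""
    U = np.sum(np.log(f))                              # flow utility U(f)
    g = f.size - c @ lam + np.sum(np.log(R.T @ lam))   # dual objective g(λ)
    return -U - g
```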


Primal-dual search direction (BV §11.7)

primal-dual search direction ∆f , ∆λ given by

(D1 + RᵀD2R)∆f = g1 − (1/t)Rᵀg2,    ∆λ = D2R∆f − λ + (1/t)g2

where s = c − Rf,

D1 = diag(1/f_1², . . . , 1/f_n²),   D2 = diag(λ_1/s_1, . . . , λ_m/s_m)

g1 = (1/f_1, . . . , 1/f_n),   g2 = (1/s_1, . . . , 1/s_m)


Truncated Newton primal-dual algorithm

primal-dual residual:

r = (r_dual, r_cent) = ( −g1 + Rᵀλ, diag(λ)s − (1/t)1 )

given f with Rf ≺ c; λ ≻ 0

while η/g(λ) > ε

    t := µm/η
    compute ∆f using PCG as an approximate solution of
        (D1 + RᵀD2R)∆f = g1 − (1/t)Rᵀg2
    ∆λ := D2R∆f − λ + (1/t)g2
    carry out line search on ‖r‖2, and update:
        f := f + γ∆f,   λ := λ + γ∆λ
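A sketch of one step’s linear algebra (hypothetical names): cg_solve stands for a truncated, diagonally preconditioned CG routine such as the ones sketched earlier, and R may be a scipy.sparse route matrix.

```python
import numpy as np

def pd_search_direction(R, c, f, lam, t, cg_solve):
    """One primal-dual search direction (∆f, ∆λ) via a matrix-free PCG solve."""
    s = c - R @ f                          # slacks, s ≻ 0
    g1, g2 = 1.0 / f, 1.0 / s
    d1, d2 = 1.0 / f**2, lam / s           # D1 = diag(d1), D2 = diag(d2)
    def hvp(z):                            # z -> (D1 + Rᵀ D2 R) z, never forming the matrix
        return d1 * z + R.T @ (d2 * (R @ z))
    df = cg_solve(hvp, g1 - (1.0 / t) * (R.T @ g2))
    dlam = d2 * (R @ df) - lam + (1.0 / t) * g2
    return df, dlam
```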


• problem instance

– m = 200000 links, n = 100000 flows
– average of 12 links per flow, 6 flows per link
– capacities random, uniform on [0.1, 1]

• algorithm parameters

– truncated Newton with ε_cg = min(0.1, η/g(λ)), N_max = 200 (N_max never reached)
– diagonal preconditioner
– warm start
– µ = 2
– ε = 0.001 (i.e., solve to guaranteed 0.1% suboptimality)


Primal and dual objective evolution

[figure: −U(f) and g(λ) (×10⁵, roughly 2 to 6) versus cumulative PCG iterations (0 to 350)]


Relative duality gap evolution

[figure: relative duality gap (log scale, 10⁻⁴ to 10¹) versus cumulative PCG iterations (0 to 350)]


Primal and dual objective evolution (n = 10⁶)

[figure: −U(f) and g(λ) (×10⁶, roughly 2 to 6) versus cumulative PCG iterations (0 to 400)]


Relative duality gap evolution (n = 10⁶)

[figure: relative duality gap (log scale, 10⁻⁴ to 10¹) versus cumulative PCG iterations (0 to 400)]
