
Transcript of: L. Vandenberghe, EE236C (Spring 2016), 1. Gradient method (vandenbe/236C/lectures/gradient.pdf)

Page 1:

L. Vandenberghe EE236C (Spring 2016)

1. Gradient method

• gradient method, first-order methods

• quadratic bounds on convex functions

• analysis of gradient method

1-1

Page 2:

Approximate course outline

First-order methods

• gradient, conjugate gradient, quasi-Newton methods

• subgradient, proximal gradient methods

• accelerated (proximal) gradient methods

Decomposition and splitting methods

• first-order methods and dual reformulations

• alternating minimization methods

• monotone operators and operator-splitting methods

Interior-point methods

• conic optimization

• primal-dual interior-point methods

Gradient method 1-2

Page 3:

Gradient method

to minimize a convex differentiable function f : choose initial point x(0) and repeat

x(k) = x(k−1) − tk∇f(x(k−1)), k = 1, 2, . . .

Step size rules

• fixed: tk constant

• backtracking line search

• exact line search: minimize f(x− t∇f(x)) over t

Advantages of gradient method

• every iteration is inexpensive

• does not require second derivatives

Gradient method 1-3
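To make the update above concrete, here is a minimal NumPy sketch of the gradient method with a fixed step size; it is not part of the original slides, and the quadratic test problem, the step size t = 1/L, and the stopping tolerance are illustrative choices.

    import numpy as np

    def gradient_method(grad_f, x0, t, max_iters=1000, tol=1e-8):
        """Run x_k = x_{k-1} - t * grad_f(x_{k-1}) until the gradient is small."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iters):
            g = grad_f(x)
            if np.linalg.norm(g) <= tol:
                break
            x = x - t * g
        return x

    # Example: f(x) = 1/2 x^T A x - b^T x, so grad f(x) = A x - b and L = lambda_max(A)
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    L = np.linalg.eigvalsh(A).max()
    x_star = np.linalg.solve(A, b)                  # minimizer, for reference
    x_hat = gradient_method(lambda x: A @ x - b, x0=np.zeros(2), t=1.0 / L)
    print(np.allclose(x_hat, x_star, atol=1e-6))    # True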

Page 4:

Quadratic example

f(x) = (1/2)(x₁² + γx₂²)   (with γ > 1)

with exact line search and starting point x(0) = (γ, 1)

‖x(k) − x⋆‖₂ / ‖x(0) − x⋆‖₂ = ((γ − 1)/(γ + 1))^k

[figure: iterates of the gradient method on the level curves of f, plotted in the (x₁, x₂) plane]

gradient method is often slow; convergence very dependent on scaling

Gradient method 1-4
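The rate above is easy to check numerically. The sketch below is not from the slides: it uses the standard exact line search step t = gᵀg / gᵀDg for a quadratic f(x) = (1/2)xᵀDx, the value γ = 10 and the iteration count are arbitrary choices, and x⋆ = 0 for this f.

    import numpy as np

    gamma = 10.0
    D = np.diag([1.0, gamma])            # f(x) = 1/2 (x1^2 + gamma x2^2) = 1/2 x^T D x
    x = np.array([gamma, 1.0])           # starting point x(0) = (gamma, 1)
    x0_norm = np.linalg.norm(x)          # x_star = 0, so ||x(0) - x_star||_2 = ||x(0)||_2

    for k in range(1, 6):
        g = D @ x                        # gradient
        t = (g @ g) / (g @ (D @ g))      # exact line search step for a quadratic
        x = x - t * g
        predicted = ((gamma - 1) / (gamma + 1)) ** k
        print(k, np.linalg.norm(x) / x0_norm, predicted)   # the two columns agree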

Page 5:

Nondifferentiable example

f(x) = √(x₁² + γx₂²) for |x₂| ≤ x₁,   f(x) = (x₁ + γ|x₂|) / √(1 + γ) for |x₂| > x₁

with exact line search, starting point x(0) = (γ, 1), converges to non-optimal point

[figure: iterates of the gradient method in the (x₁, x₂) plane; the iterates stall at a non-optimal point]

gradient method does not handle nondifferentiable problems

Gradient method 1-5

Page 6:

First-order methods

address one or both disadvantages of the gradient method

Methods with improved convergence

• quasi-Newton methods

• conjugate gradient method

• accelerated gradient method

Methods for nondifferentiable or constrained problems

• subgradient method

• proximal gradient method

• smoothing methods

• cutting-plane methods

Gradient method 1-6

Page 7:

Outline

• gradient method, first-order methods

• quadratic bounds on convex functions

• analysis of gradient method

Page 8:

Convex function

a function f is convex if dom f is a convex set and Jensen’s inequality holds:

f(θx+ (1− θ)y) ≤ θf(x) + (1− θ)f(y) for all x, y ∈ dom f , θ ∈ [0, 1]

First-order condition

for (continuously) differentiable f , Jensen’s inequality can be replaced with

f(y) ≥ f(x) +∇f(x)T (y − x) for all x, y ∈ dom f

Second-order condition

for twice differentiable f , Jensen’s inequality can be replaced with

∇2f(x) ⪰ 0 for all x ∈ dom f

Gradient method 1-7

Page 9:

Strictly convex function

f is strictly convex if dom f is a convex set and

f(θx + (1 − θ)y) < θf(x) + (1 − θ)f(y) for all x, y ∈ dom f, x ≠ y, and θ ∈ (0, 1)

strict convexity implies that if a minimizer of f exists, it is unique

First-order condition

for differentiable f , strict Jensen’s inequality can be replaced with

f(y) > f(x) + ∇f(x)T(y − x) for all x, y ∈ dom f, x ≠ y

Second-order condition

note that ∇2f(x) ≻ 0 is not necessary for strict convexity (cf. f(x) = x⁴)

Gradient method 1-8

Page 10:

Monotonicity of gradient

a differentiable function f is convex if and only if dom f is convex and

(∇f(x)−∇f(y))T (x− y) ≥ 0 for all x, y ∈ dom f

i.e., the gradient ∇f : Rn → Rn is a monotone mapping

a differentiable function f is strictly convex if and only if dom f is convex and

(∇f(x) − ∇f(y))T(x − y) > 0 for all x, y ∈ dom f, x ≠ y

i.e., the gradient ∇f : Rn → Rn is a strictly monotone mapping

Gradient method 1-9

Page 11:

Proof

• if f is differentiable and convex, then

f(y) ≥ f(x) +∇f(x)T (y − x), f(x) ≥ f(y) +∇f(y)T (x− y)

combining the inequalities gives (∇f(x)−∇f(y))T (x− y) ≥ 0

• if ∇f is monotone, then g′(t) ≥ g′(0) for t ≥ 0 and t ∈ dom g, where

g(t) = f(x+ t(y − x)), g′(t) = ∇f(x+ t(y − x))T (y − x)

hence

f(y) = g(1) = g(0) + ∫₀¹ g′(t) dt ≥ g(0) + g′(0) = f(x) + ∇f(x)T(y − x)

this is the first-order condition for convexity

Gradient method 1-10

Page 12:

Lipschitz continuous gradient

the gradient of f is Lipschitz continuous with parameter L > 0 if

‖∇f(x)−∇f(y)‖2 ≤ L‖x− y‖2 for all x, y ∈ dom f

• note that the definition does not assume convexity of f

• we will see that for convex f with dom f = Rn, this is equivalent to

(L/2)xTx − f(x) is convex

(i.e., if f is twice differentiable, ∇2f(x) ⪯ LI for all x)

Gradient method 1-11

Page 13:

Quadratic upper bound

suppose ∇f is Lipschitz continuous with parameter L and dom f is convex

• then g(x) = (L/2)xTx− f(x), with dom g = dom f , is convex

• convexity of g is equivalent to a quadratic upper bound on f :

f(y) ≤ f(x) + ∇f(x)T(y − x) + (L/2)‖y − x‖₂² for all x, y ∈ dom f

[figure: graph of f(y) together with the quadratic upper bound, which touches the graph at (x, f(x))]

Gradient method 1-12
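A quick numerical sanity check of the quadratic upper bound (my own illustration, not from the slides): for f(x) = log(1 + exp(aᵀx)) the gradient is Lipschitz continuous with L = ‖a‖₂²/4, a standard bound for the logistic function, so the inequality should hold at random point pairs.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(5)
    L = 0.25 * a @ a                                 # Lipschitz constant of grad f

    f = lambda x: np.logaddexp(0.0, a @ x)           # log(1 + exp(a^T x)), computed stably
    grad = lambda x: a / (1.0 + np.exp(-(a @ x)))    # sigmoid(a^T x) * a

    ok = True
    for _ in range(1000):
        x, y = rng.standard_normal(5), rng.standard_normal(5)
        bound = f(x) + grad(x) @ (y - x) + 0.5 * L * np.sum((y - x) ** 2)
        ok &= f(y) <= bound + 1e-12
    print(ok)   # True: the quadratic upper bound holds at all sampled pairs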

Page 14:

Proof

• Lipschitz continuity of ∇f and the Cauchy-Schwarz inequality imply

(∇f(x) − ∇f(y))T(x − y) ≤ L‖x − y‖₂² for all x, y ∈ dom f

this is monotonicity of the gradient ∇g(x) = Lx − ∇f(x)

• hence, g is a convex function if its domain dom g = dom f is convex

• the quadratic upper bound is the first-order condition for convexity of g

g(y) ≥ g(x) +∇g(x)T (y − x) for all x, y ∈ dom g

Gradient method 1-13

Page 15:

Consequence of quadratic upper bound

if dom f = Rn and f has a minimizer x⋆, then

(1/(2L))‖∇f(x)‖₂² ≤ f(x) − f(x⋆) ≤ (L/2)‖x − x⋆‖₂² for all x

• right-hand inequality follows from quadratic upper bound at x = x⋆

• left-hand inequality follows by minimizing quadratic upper bound

f(x⋆) ≤ inf_{y ∈ dom f} ( f(x) + ∇f(x)T(y − x) + (L/2)‖y − x‖₂² ) = f(x) − (1/(2L))‖∇f(x)‖₂²

minimizer of upper bound is y = x − (1/L)∇f(x) because dom f = Rn

Gradient method 1-14
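These two inequalities can also be checked numerically. The sketch below is an illustration, not from the slides: it uses a positive definite quadratic so that x⋆ and f⋆ are available in closed form, with random problem data.

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((6, 4))
    A = B.T @ B + 0.1 * np.eye(4)          # positive definite, so a minimizer exists
    b = rng.standard_normal(4)
    f = lambda x: 0.5 * x @ (A @ x) - b @ x
    grad = lambda x: A @ x - b
    L = np.linalg.eigvalsh(A).max()        # Lipschitz constant of the gradient
    x_star = np.linalg.solve(A, b)
    f_star = f(x_star)

    ok = True
    for _ in range(1000):
        x = rng.standard_normal(4)
        gap = f(x) - f_star
        lower = np.sum(grad(x) ** 2) / (2 * L)
        upper = 0.5 * L * np.sum((x - x_star) ** 2)
        ok &= (lower <= gap + 1e-10) and (gap <= upper + 1e-10)
    print(ok)   # True: 1/(2L)||grad f(x)||^2 <= f(x) - f* <= L/2 ||x - x*||^2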

Page 16:

Co-coercivity of gradient

if f is convex with dom f = Rn and (L/2)xTx− f(x) is convex then

(∇f(x) − ∇f(y))T(x − y) ≥ (1/L)‖∇f(x) − ∇f(y)‖₂² for all x, y

this property is known as co-coercivity of ∇f (with parameter 1/L)

• co-coercivity implies Lipschitz continuity of ∇f (by Cauchy-Schwarz)

• hence, for differentiable convex f with dom f = Rn

Lipschitz continuity of ∇f ⇒ convexity of (L/2)xTx − f(x)
⇒ co-coercivity of ∇f
⇒ Lipschitz continuity of ∇f

therefore the three properties are equivalent

Gradient method 1-15

Page 17:

Proof of co-coercivity: define two convex functions fx, fy with domain Rn

fx(z) = f(z)−∇f(x)Tz, fy(z) = f(z)−∇f(y)Tz

the functions (L/2)zTz − fx(z) and (L/2)zTz − fy(z) are convex

• z = x minimizes fx(z); from the left-hand inequality on page 1-14,

f(y) − f(x) − ∇f(x)T(y − x) = fx(y) − fx(x) ≥ (1/(2L))‖∇fx(y)‖₂² = (1/(2L))‖∇f(y) − ∇f(x)‖₂²

• similarly, z = y minimizes fy(z); therefore

f(x) − f(y) − ∇f(y)T(x − y) ≥ (1/(2L))‖∇f(y) − ∇f(x)‖₂²

combining the two inequalities shows co-coercivity

Gradient method 1-16

Page 18:

Strongly convex function

f is strongly convex with parameter m > 0 if

g(x) = f(x) − (m/2)xTx is convex

Jensen’s inequality: Jensen’s inequality for g is

f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) − (m/2)θ(1 − θ)‖x − y‖₂²

Monotonicity: monotonicity of ∇g gives

(∇f(x) − ∇f(y))T(x − y) ≥ m‖x − y‖₂² for all x, y ∈ dom f

this is called strong monotonicity (coercivity) of ∇f

Second-order condition: ∇2f(x) ⪰ mI for all x ∈ dom f

Gradient method 1-17

Page 19:

Quadratic lower bound

from the first-order condition for convexity of g:

f(y) ≥ f(x) + ∇f(x)T(y − x) + (m/2)‖y − x‖₂² for all x, y ∈ dom f

[figure: graph of f(y) together with the quadratic lower bound, which touches the graph at (x, f(x))]

• implies sublevel sets of f are bounded

• if f is closed (has closed sublevel sets), it has a unique minimizer x⋆ and

(m/2)‖x − x⋆‖₂² ≤ f(x) − f(x⋆) ≤ (1/(2m))‖∇f(x)‖₂² for all x ∈ dom f

Gradient method 1-18
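As with the bounds on page 1-14, these inequalities admit a small numerical check (my own illustration, not from the slides) on a positive definite quadratic, where m is the smallest Hessian eigenvalue and x⋆, f⋆ are available in closed form:

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.standard_normal((8, 4))
    A = B.T @ B + 0.5 * np.eye(4)          # positive definite Hessian
    b = rng.standard_normal(4)
    f = lambda x: 0.5 * x @ (A @ x) - b @ x
    grad = lambda x: A @ x - b
    m = np.linalg.eigvalsh(A).min()        # strong convexity parameter
    x_star = np.linalg.solve(A, b)
    f_star = f(x_star)

    ok = True
    for _ in range(1000):
        x = rng.standard_normal(4)
        gap = f(x) - f_star
        ok &= 0.5 * m * np.sum((x - x_star) ** 2) <= gap + 1e-10
        ok &= gap <= np.sum(grad(x) ** 2) / (2 * m) + 1e-10
    print(ok)   # True: m/2 ||x - x*||^2 <= f(x) - f* <= 1/(2m) ||grad f(x)||^2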

Page 20:

Extension of co-coercivity

• if f is strongly convex and ∇f is Lipschitz continuous, then the function

g(x) = f(x) − (m/2)‖x‖₂²

is convex and ∇g is Lipschitz continuous with parameter L−m

• co-coercivity of g gives

(∇f(x) − ∇f(y))T(x − y) ≥ (mL/(m + L))‖x − y‖₂² + (1/(m + L))‖∇f(x) − ∇f(y)‖₂²

for all x, y ∈ dom f

Gradient method 1-19

Page 21:

Outline

• gradient method, first-order methods

• quadratic bounds on convex functions

• analysis of gradient method

Page 22:

Analysis of gradient method

x(k) = x(k−1) − tk∇f(x(k−1)), k = 1, 2, . . .

with fixed step size or backtracking line search

Assumptions

1. f is convex and differentiable with dom f = Rn

2. ∇f(x) is Lipschitz continuous with parameter L > 0

3. optimal value f⋆ = inf_x f(x) is finite and attained at x⋆

Gradient method 1-20

Page 23:

Analysis for constant step size

• from quadratic upper bound (page 1-12) with y = x − t∇f(x):

f(x − t∇f(x)) ≤ f(x) − t(1 − Lt/2)‖∇f(x)‖₂²

• therefore, if x+ = x − t∇f(x) and 0 < t ≤ 1/L,

f(x+) ≤ f(x) − (t/2)‖∇f(x)‖₂²   (1)
      ≤ f⋆ + ∇f(x)T(x − x⋆) − (t/2)‖∇f(x)‖₂²
      = f⋆ + (1/(2t)) (‖x − x⋆‖₂² − ‖x − x⋆ − t∇f(x)‖₂²)
      = f⋆ + (1/(2t)) (‖x − x⋆‖₂² − ‖x+ − x⋆‖₂²)

second line follows from convexity of f

Gradient method 1-21

Page 24:

• define x = x(i−1), x+ = x(i), ti = t, and add the bounds for i = 1, . . . , k:

∑_{i=1}^k (f(x(i)) − f⋆) ≤ (1/(2t)) ∑_{i=1}^k (‖x(i−1) − x⋆‖₂² − ‖x(i) − x⋆‖₂²)
                         = (1/(2t)) (‖x(0) − x⋆‖₂² − ‖x(k) − x⋆‖₂²)
                         ≤ (1/(2t)) ‖x(0) − x⋆‖₂²

• since f(x(i)) is non-increasing (see (1))

f(x(k)) − f⋆ ≤ (1/k) ∑_{i=1}^k (f(x(i)) − f⋆) ≤ (1/(2kt)) ‖x(0) − x⋆‖₂²

Conclusion: number of iterations to reach f(x(k)) − f⋆ ≤ ε is O(1/ε)

Gradient method 1-22
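The 1/k bound can be observed directly. The sketch below is an illustration, not from the slides: it runs the gradient method with t = 1/L on a random least-squares problem and checks f(x(k)) − f⋆ ≤ ‖x(0) − x⋆‖₂²/(2kt) at every iteration; the problem size and iteration count are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((20, 10))
    b = rng.standard_normal(20)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)      # least-squares objective
    grad = lambda x: A.T @ (A @ x - b)
    L = np.linalg.eigvalsh(A.T @ A).max()
    t = 1.0 / L

    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)    # a minimizer
    f_star = f(x_star)
    x = np.zeros(10)
    R2 = np.sum((x - x_star) ** 2)                    # ||x(0) - x*||_2^2

    ok = True
    for k in range(1, 201):
        x = x - t * grad(x)
        ok &= f(x) - f_star <= R2 / (2 * k * t) + 1e-10
    print(ok)   # True: the 1/k bound holds at every iteration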

Page 25:

Backtracking line search

initialize tk at t̂ > 0 (for example, t̂ = 1); take tk := βtk until

f(x − tk∇f(x)) < f(x) − αtk‖∇f(x)‖₂²

[figure: f(x − t∇f(x)) as a function of t, shown with the lines f(x) − t‖∇f(x)‖₂² and f(x) − αt‖∇f(x)‖₂²; the line search accepts step sizes t for which the curve lies below the second line]

0 < β < 1; we will take α = 1/2 (mostly to simplify proofs)

Gradient method 1-23
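A minimal sketch of this backtracking rule (my own implementation of the rule as stated above, not code from the course): the helper name backtracking_step, the least-squares test problem, and the defaults t̂ = 1, α = 1/2, β = 1/2 are illustrative choices.

    import numpy as np

    def backtracking_step(f, grad_f, x, t_hat=1.0, alpha=0.5, beta=0.5):
        """Start at t_hat and multiply by beta until the sufficient-decrease
        condition f(x - t*g) < f(x) - alpha * t * ||g||^2 holds."""
        g = grad_f(x)
        gg = g @ g
        if gg == 0.0:                       # already stationary; any step works
            return t_hat
        t = t_hat
        while f(x - t * g) >= f(x) - alpha * t * gg:
            t *= beta
        return t

    # Example use inside the gradient method (least-squares objective as a stand-in)
    rng = np.random.default_rng(4)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)

    x = np.zeros(10)
    for _ in range(200):
        t = backtracking_step(f, grad_f, x)
        x = x - t * grad_f(x)
    print(np.linalg.norm(grad_f(x)))        # small: x is close to a minimizer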

Page 26:

Analysis for backtracking line search

line search with α = 1/2, if f has a Lipschitz continuous gradient

[figure: f(x − t∇f(x)) as a function of t, together with the upper bounds f(x) − t(1 − tL/2)‖∇f(x)‖₂² and f(x) − (t/2)‖∇f(x)‖₂², which coincide at t = 1/L; hence the condition with α = 1/2 holds for every t ≤ 1/L]

selected step size satisfies tk ≥ tmin = min{t̂, β/L}

Gradient method 1-24

Page 27:

Convergence analysis

• as on page 1-21:

f(x(i)) ≤ f(x(i−1)) − (ti/2)‖∇f(x(i−1))‖₂²
        ≤ f⋆ + ∇f(x(i−1))T(x(i−1) − x⋆) − (ti/2)‖∇f(x(i−1))‖₂²
        ≤ f⋆ + (1/(2ti)) (‖x(i−1) − x⋆‖₂² − ‖x(i) − x⋆‖₂²)
        ≤ f⋆ + (1/(2tmin)) (‖x(i−1) − x⋆‖₂² − ‖x(i) − x⋆‖₂²)

the first line follows from the line search condition

• add the upper bounds to get

f(x(k)) − f⋆ ≤ (1/k) ∑_{i=1}^k (f(x(i)) − f⋆) ≤ (1/(2k tmin)) ‖x(0) − x⋆‖₂²

Conclusion: same 1/k bound as with constant step size

Gradient method 1-25

Page 28:

Gradient method for strongly convex functions

better results exist if we add strong convexity to the assumptions on p. 1-20

Analysis for constant step size

if x+ = x − t∇f(x) and 0 < t ≤ 2/(m + L):

‖x+ − x⋆‖₂² = ‖x − t∇f(x) − x⋆‖₂²
            = ‖x − x⋆‖₂² − 2t∇f(x)T(x − x⋆) + t²‖∇f(x)‖₂²
            ≤ (1 − t·2mL/(m + L)) ‖x − x⋆‖₂² + t (t − 2/(m + L)) ‖∇f(x)‖₂²
            ≤ (1 − t·2mL/(m + L)) ‖x − x⋆‖₂²

(step 3 follows from result on p. 1-19)

Gradient method 1-26

Page 29:

Distance to optimum

‖x(k) − x⋆‖₂² ≤ c^k ‖x(0) − x⋆‖₂²,   c = 1 − t·2mL/(m + L)

• implies (linear) convergence

• for t = 2/(m + L), get c = ((γ − 1)/(γ + 1))² with γ = L/m

Bound on function value (from page 1-14)

f(x(k)) − f⋆ ≤ (L/2)‖x(k) − x⋆‖₂² ≤ (c^k L/2)‖x(0) − x⋆‖₂²

Conclusion: number of iterations to reach f(x(k)) − f⋆ ≤ ε is O(log(1/ε))

Gradient method 1-27
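A numerical illustration of this linear convergence (not from the slides): for a positive definite quadratic, m and L are the extreme Hessian eigenvalues, and with t = 2/(m + L) the squared distance to x⋆ should contract at least by the factor c = ((γ − 1)/(γ + 1))² per iteration; the problem data are random.

    import numpy as np

    rng = np.random.default_rng(5)
    B = rng.standard_normal((15, 5))
    A = B.T @ B + 1.0 * np.eye(5)          # positive definite: strongly convex quadratic
    b = rng.standard_normal(5)
    grad = lambda x: A @ x - b
    eig = np.linalg.eigvalsh(A)
    m, L = eig.min(), eig.max()
    t = 2.0 / (m + L)
    c = ((L / m - 1) / (L / m + 1)) ** 2   # contraction factor with gamma = L/m

    x_star = np.linalg.solve(A, b)
    x = np.zeros(5)
    R2 = np.sum((x - x_star) ** 2)

    ok = True
    for k in range(1, 101):
        x = x - t * grad(x)
        ok &= np.sum((x - x_star) ** 2) <= c ** k * R2 + 1e-12
    print(ok)   # True: ||x(k) - x*||^2 <= c^k ||x(0) - x*||^2 at every iteration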

Page 30:

Limits on convergence rate of first-order methods

First-order method: any iterative algorithm that selects x(k) in the set

x(0) + span{∇f(x(0)),∇f(x(1)), . . . ,∇f(x(k−1))}

Problem class: any function that satisfies the assumptions on page 1-20

Theorem (Nesterov): for every integer k ≤ (n − 1)/2 and every x(0), there exist functions in the problem class such that for any first-order method

f(x(k)) − f⋆ ≥ (3/32) L‖x(0) − x⋆‖₂² / (k + 1)²

• suggests 1/k rate for gradient method is not optimal

• recent fast gradient methods have 1/k² convergence (see later)

Gradient method 1-28

Page 31:

References

• Yu. Nesterov, Introductory Lectures on Convex Optimization. A Basic Course (2004), section 2.1 (the result on page 1-28 is Theorem 2.1.7 in the book)

• B. T. Polyak, Introduction to Optimization (1987), section 1.4

• the example on page 1-5 is from N. Z. Shor, Nondifferentiable Optimization and Polynomial Problems (1998), page 37

Gradient method 1-29