
Non-Linear Programming

© 2011 Daniel Kirschen and University of Washington 1

Motivation

• Method of Lagrange multipliers
  – Very useful insight into solutions
  – Analytical solution practical only for small problems
  – Direct application not practical for real-life problems because these problems are too large
  – Difficulties when the problem is non-convex

• Often need to search for the solution of practical optimization problems using:
  – Objective function only, or
  – Objective function and its first derivative, or
  – Objective function and its first and second derivatives

© 2011 Daniel Kirschen and University of Washington 2

Naïve One-Dimensional Search

• Suppose:
  – That we want to find the value of x that minimizes f(x)
  – That the only thing that we can do is calculate the value of f(x) for any value of x
• We could calculate f(x) for a range of values of x and choose the one that minimizes f(x)

© 2011 Daniel Kirschen and University of Washington 3

Naïve One-Dimensional Search

[Figure: f(x) evaluated over a range of values of x]

• Requires a considerable amount of computing time if the range is large and good accuracy is needed

© 2011 Daniel Kirschen and University of Washington 4
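
A minimal sketch of this naive search in Python, using an arbitrary illustrative objective f(x) = (x − 2)² + 1 and an arbitrary search range (neither is taken from the slides):

import numpy as np

def naive_search(f, x_min, x_max, n_points=1001):
    # Evaluate f on a uniform grid and return the best point found
    xs = np.linspace(x_min, x_max, n_points)
    values = f(xs)
    i_best = int(np.argmin(values))
    return xs[i_best], values[i_best]

f = lambda x: (x - 2.0) ** 2 + 1.0        # illustrative objective
x_best, f_best = naive_search(f, -10.0, 10.0)
print(x_best, f_best)                     # close to x = 2, f = 1

The accuracy is limited by the grid spacing, so covering a wide range with high accuracy requires a very large number of evaluations of f(x), which is exactly the drawback noted above.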

One-Dimensional Search

[Figure: f(x) plotted against x, with a first trial point x0]

© 2011 Daniel Kirschen and University of Washington 5

One-Dimensional Search

[Figure: f(x) plotted against x, with trial points x0 and x1]

© 2011 Daniel Kirschen and University of Washington 6

One-Dimensional Search

[Figure: f(x) plotted against x, with trial points x0, x1 and x2; the current search range spans these points]

If the function is convex, we have bracketed the optimum.

© 2011 Daniel Kirschen and University of Washington 7

One-Dimensional Search

[Figure: f(x) with trial points x0, x1, x2 and a new point x3]

The optimum is between x1 and x2; we do not need to consider x0 anymore.

© 2011 Daniel Kirschen and University of Washington 8

One-Dimensional Search

[Figure: f(x) with a further trial point x4 narrowing the search range]

Repeat the process until the range is sufficiently small.

© 2011 Daniel Kirschen and University of Washington 9

One-Dimensional Search

[Figure: f(x) with the trial points x0 through x4]

The procedure is valid only if the function is convex!

© 2011 Daniel Kirschen and University of Washington 10
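
One standard way to implement this interval-shrinking idea is a golden-section search. The sketch below is such a search, not necessarily the exact procedure the slides have in mind, and the test function is an arbitrary illustration:

import math

def golden_section_search(f, a, b, tol=1e-6):
    # Shrink the bracket [a, b] around the minimum of a convex (unimodal) f
    phi = (math.sqrt(5.0) - 1.0) / 2.0     # golden ratio factor, about 0.618
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + phi * (b - a)
    return 0.5 * (a + b)

print(golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, -10.0, 10.0))   # about 2.0

As the slides stress, the bracketing argument only holds if the function is convex (unimodal) on the interval.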

Multi-Dimensional Search

• Unidirectional search not applicable
• Naïve search becomes totally impossible as the dimension of the problem increases
• If we can calculate the first derivatives of the objective function, much more efficient searches can be developed
• The gradient of a function gives the direction in which it increases/decreases fastest

© 2011 Daniel Kirschen and University of Washington 11
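
When the first derivatives cannot be written down analytically, they can still be approximated numerically. The sketch below is a plain central-difference approximation; the objective and step size are illustrative assumptions, not from the slides:

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # Central-difference approximation of the gradient of f at x
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2      # illustrative objective
print(numerical_gradient(f, [1.0, 1.0]))        # approximately [2, 20]

The negative of this gradient is the direction of fastest decrease used by the steepest descent algorithm on the following pages.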

Steepest Ascent/Descent Algorithm

[Figures, pages 12 to 25: contour plots of the objective function in the (x1, x2) plane showing successive iterations of the steepest ascent/descent algorithm. Page 14, "Unidirectional Search", shows the objective function plotted along the gradient direction. At each iteration the algorithm performs a unidirectional search along the gradient direction and restarts from the point found, gradually closing in on the optimum.]

© 2011 Daniel Kirschen and University of Washington 12–25
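
A minimal sketch of the steepest-descent iteration illustrated above, with a crude step-halving unidirectional search; the objective, gradient and starting point are illustrative, not from the slides:

import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    # Repeat: compute the gradient, search along its negative, move there
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                  # direction of steepest descent
        alpha = 1.0
        while f(x + alpha * d) >= f(x) and alpha > 1e-12:
            alpha *= 0.5                        # crude unidirectional search
        x = x + alpha * d
    return x

f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(steepest_descent(f, grad, [5.0, 5.0]))    # approaches [0, 0]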

Choosing a Direction

• Direction of steepest ascent/descent is not always the best choice

• Other techniques have been used with varying degrees of success

• In particular, the direction chosen must be consistent with the equality constraints

© 2011 Daniel Kirschen and University of Washington 26
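
One common way to keep the search direction consistent with linear equality constraints Ax = b is to project the steepest-descent direction onto the null space of A. The sketch below illustrates that projection only; it is not necessarily the technique the slides refer to, and A and the gradient are made-up numbers:

import numpy as np

def projected_descent_direction(grad_f, A):
    # P = I - A^T (A A^T)^-1 A projects onto the null space of A,
    # so moving along the returned direction keeps A x = b satisfied
    A = np.atleast_2d(A)
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    return -P @ grad_f

A = np.array([[1.0, 1.0]])          # constraint x1 + x2 = constant
g = np.array([2.0, 6.0])            # gradient of f at the current point
d = projected_descent_direction(g, A)
print(d, A @ d)                     # A @ d is (numerically) zero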

How far to go in that direction?

• Unidirectional searches can be time-consuming
• Second-order techniques that use information about the second derivative of the objective function can be used to speed up the process
• Problem with the inequality constraints:
  – There may be a lot of inequality constraints
  – Checking all of them every time we move in one direction can take an enormous amount of computing time

© 2011 Daniel Kirschen and University of Washington 27
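
For example, if the Hessian H is available, a quadratic (second-order) model of the objective along a direction d gives a closed-form step length α = −(g·d)/(d·H·d), where g is the gradient, avoiding a full unidirectional search. A minimal sketch with illustrative numbers:

import numpy as np

def second_order_step(g, H, d):
    # Step length that minimizes the quadratic model
    # f(x + a*d) ~ f(x) + a * g.d + 0.5 * a^2 * d.H.d
    return -float(g @ d) / float(d @ H @ d)

x = np.array([5.0, 5.0])
g = np.array([2.0 * x[0], 20.0 * x[1]])   # gradient of x1^2 + 10*x2^2
H = np.diag([2.0, 20.0])                  # Hessian of the same function
d = -g                                    # steepest-descent direction
alpha = second_order_step(g, H, d)
print(alpha, x + alpha * d)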

Handling of inequality constraints

[Figure: contour plot in the (x1, x2) plane. Labels: "Move in the direction of the gradient" and "How do I know that I have to stop here?"]

© 2011 Daniel Kirschen and University of Washington 28

Handling of inequality constraints

[Figure: contour plot in the (x1, x2) plane. Label: "How do I know that I have to stop here?"]

© 2011 Daniel Kirschen and University of Washington 29

Penalty functions

[Figure: penalty plotted as a function of x, with limits xmin and xmax]

© 2011 Daniel Kirschen and University of Washington 30

Penalty functions

• Replace enforcement of inequality constraints by addition of penalty terms to the objective function

[Figure: penalty plotted as a function of x, with limits xmin and xmax; beyond the upper limit the penalty grows as K(x − xmax)²]

© 2011 Daniel Kirschen and University of Washington 31

Problem with penalty functions

• Stiffness of the penalty function must be increased progressively to enforce the constraints tightly enough
• Not a very efficient method

[Figure: penalty functions of increasing stiffness around the limits xmin and xmax]

© 2011 Daniel Kirschen and University of Washington 32
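
A minimal sketch of the penalty idea for a single upper limit x ≤ xmax, with the stiffness K increased progressively as described above; the objective, the limit and the values of K are illustrative, not from the slides:

def penalized(f, x, x_max, K):
    # Objective plus a quadratic penalty for violating x <= x_max
    violation = max(0.0, x - x_max)
    return f(x) + K * violation ** 2

f = lambda x: (x - 5.0) ** 2              # unconstrained minimum at x = 5
x_max = 3.0                               # but the limit is x <= 3
for K in [1.0, 10.0, 100.0, 1000.0]:
    xs = [1.0 + i * 1e-3 for i in range(4001)]     # naive grid search over [1, 5]
    x_best = min(xs, key=lambda x: penalized(f, x, x_max, K))
    print(K, round(x_best, 3))            # approaches 3.0 from above as K grows

The constraint is only enforced approximately, and tightening it requires ever larger values of K, which is the inefficiency noted above.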

Barrier functions

[Figure: barrier cost plotted as a function of x, with limits xmin and xmax]

© 2011 Daniel Kirschen and University of Washington 33

Barrier functions

• Barrier must be made progressively closer to the limit
• Works better than the penalty function
• Basis of interior point methods

[Figure: barrier cost plotted as a function of x, with limits xmin and xmax]

© 2011 Daniel Kirschen and University of Washington 34
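
A minimal sketch of a logarithmic barrier for xmin ≤ x ≤ xmax, with the barrier weight μ reduced progressively so that the barrier moves closer to the limits; the objective, limits and values of μ are illustrative, and practical interior point methods are far more sophisticated:

import math

def barrier_objective(f, x, x_min, x_max, mu):
    # Objective plus a log barrier; defined only strictly inside the limits
    if not (x_min < x < x_max):
        return float("inf")
    return f(x) - mu * (math.log(x - x_min) + math.log(x_max - x))

f = lambda x: (x - 5.0) ** 2                       # unconstrained minimum at x = 5
x_min, x_max = 0.0, 3.0                            # feasible range
for mu in [1.0, 0.1, 0.01, 0.001]:
    xs = [i * 3.0 / 10000 for i in range(1, 10000)]    # grid strictly inside the range
    x_best = min(xs, key=lambda x: barrier_objective(f, x, x_min, x_max, mu))
    print(mu, round(x_best, 4))                    # approaches 3.0 from inside as mu shrinks

Unlike the penalty approach, every iterate stays strictly feasible; the limit is only reached asymptotically as μ shrinks.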

Non-Robustness

[Figure: contours of a non-convex problem in the (x1, x2) plane, with starting points A, B, C and D leading to different solutions X and Y]

Different starting points may lead to different solutions if the problem is not convex.

© 2011 Daniel Kirschen and University of Washington 35
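
A small illustration of this non-robustness on an arbitrary one-dimensional non-convex function (not the problem in the figure): the same plain descent reaches different local minima from different starting points.

def descend(df, x0, step=0.01, iters=2000):
    # Plain gradient descent with a fixed step length
    x = x0
    for _ in range(iters):
        x -= step * df(x)
    return x

# f(x) = x^4 - 4x^2 + x is non-convex and has two local minima
df = lambda x: 4.0 * x ** 3 - 8.0 * x + 1.0        # its derivative
print(round(descend(df, -2.0), 3))                 # one minimum, near x = -1.47
print(round(descend(df, +2.0), 3))                 # a different minimum, near x = +1.35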

Conclusions

• Very sophisticated non-linear programming methods have been developed

• They can be difficult to use:
  – Different starting points may lead to different solutions
  – Some problems will require a lot of iterations
  – They may require a lot of “tuning”

© 2011 Daniel Kirschen and University of Washington 36