8 Numerical Methods for Constrained Optimization

Transcript of "8 Numerical Methods for Constrained Optimization.pdf" (8/11/2019, 35 pages)

Chen CL 4

Power Generation via Fuel Oil: Formulation

NLP formulation:

min  z_1
s.t. p_i = x_i1 + x_i2,  i = 1, 2
     p_1 + p_2 ≥ 50
     z_j ≥ Σ_{i=1}^{2} (a_ij0 + a_ij1 x_ij + a_ij2 x_ij^2),  j = 1, 2
     x_ij ≥ 0,  0 ≤ z_2 ≤ 10,  18 ≤ p_1 ≤ 30,  14 ≤ p_2 ≤ 25

    Chen CL 5

Power Generation via Fuel Oil: GAMS Code

    $TITLE Power Generation via Fuel Oil

    $OFFUPPER

    $OFFSYMXREF OFFSYMLIST

    OPTION SOLPRINT = OFF;

    * Define index sets

    SETS G Power Generators /gen1*gen2/

    F Fuels /oil, gas/

    K Constants in Fuel Consumption Equations /0*2/;

    * Define and Input the Problem Data

    TABLE A(G,F,K) Coefficients in the fuel consumption equations

    0 1 2

    gen1.oil 1.4609 .15186 .00145

    gen1.gas 1.5742 .16310 .001358

    gen2.oil 0.8008 .20310 .000916

    gen2.gas 0.7266 .22560 .000778;

    PARAMETER PMAX(G) Maximum power outputs of generators /

    GEN1 30.0, GEN2 25.0/;

    PARAMETER PMIN(G) Minimum power outputs of generators /

    GEN1 18.0, GEN2 14.0/;

    SCALAR GASSUP Maximum supply of BFG in units per h /10.0/

    PREQ Total power output required in MW /50.0/;

    Chen CL 6

    * Define optimization variables

    VARIABLES P(G) Total power output of generators in MW

    X(G, F) Power outputs of generators from specific fuels

    Z(F) Total Amounts of fuel purchased

    OILPUR Total amount of fuel oil purchased;

    POSITIVE VARIABLES P, X, Z;

    * Define Objective Function and Constraints

    EQUATIONS TPOWER Required power must be generated

    PWR(G) Power generated by individual generators

    OILUSE Amount of oil purchased to be minimized

FUELUSE(F) Fuel usage must not exceed purchase;

TPOWER.. SUM(G, P(G)) =G= PREQ;

    PWR(G).. P(G) =E= SUM(F, X(G,F));

    FUELUSE(F).. Z(F) =G= SUM((K,G), a(G,F,K)*X(G,F)**(ORD(K)-1));

    OILUSE.. OILPUR =E= Z("OIL");

    * Impose Bounds and Initialize Optimization Variables

    * Upper and lower bounds on P from the operating ranges

    P.UP(G) = PMAX(G);

    P.LO(G) = PMIN(G);

    * Upper bound on BFG consumption from GASSUP

    Z.UP("gas") = GASSUP;

    * Specify initial values for power outputs

    P.L(G) = .5*(PMAX(G)+PMIN(G));

    * Define model and solve

    MODEL FUELOIL /all/;

    SOLVE FUELOIL USING NLP MINIMIZING OILPUR;

    DISPLAY X.L, P.L, Z.L, OILPUR.L;

    Chen CL 7

Power Generation via Fuel Oil: Result

Power outputs for generators 1 and 2 are 30 MW and 20 MW.

All available BFG (10 units) is used, since it is free, generating 36.325 MW of the total power; 4.681 ton/h of fuel oil is purchased to generate the remaining 13.675 MW.

Generator 1 is operating at the maximum capacity of its operating range.
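The result above can be cross-checked outside GAMS. Below is a minimal sketch (my own, not part of the slides) of the same NLP using scipy's SLSQP; variable ordering and the starting point are my own choices.

```python
from scipy.optimize import minimize

# Fuel-consumption coefficients a(g, f, k) from the GAMS table above
A = {("gen1", "oil"): (1.4609, 0.15186, 0.00145),
     ("gen1", "gas"): (1.5742, 0.16310, 0.001358),
     ("gen2", "oil"): (0.8008, 0.20310, 0.000916),
     ("gen2", "gas"): (0.7266, 0.22560, 0.000778)}

def fuel_use(g, f, power):
    a0, a1, a2 = A[(g, f)]
    return a0 + a1 * power + a2 * power**2

def solve_fuel_oil():
    # x = [gen1-from-oil, gen1-from-gas, gen2-from-oil, gen2-from-gas], in MW
    oil = lambda x: fuel_use("gen1", "oil", x[0]) + fuel_use("gen2", "oil", x[2])
    cons = [
        {"type": "ineq", "fun": lambda x: sum(x) - 50.0},                    # total power
        {"type": "ineq", "fun": lambda x: 10.0 - fuel_use("gen1", "gas", x[1])
                                               - fuel_use("gen2", "gas", x[3])},  # BFG supply
        {"type": "ineq", "fun": lambda x: 30.0 - (x[0] + x[1])},             # 18 <= p1 <= 30
        {"type": "ineq", "fun": lambda x: (x[0] + x[1]) - 18.0},
        {"type": "ineq", "fun": lambda x: 25.0 - (x[2] + x[3])},             # 14 <= p2 <= 25
        {"type": "ineq", "fun": lambda x: (x[2] + x[3]) - 14.0},
    ]
    return minimize(oil, x0=[12.0, 12.0, 13.0, 12.0],
                    bounds=[(0.0, None)] * 4, constraints=cons, method="SLSQP")
```

Since the objective and constraints are convex, SLSQP from this start should match the slide's figures (p1 ≈ 30 MW, oil purchase ≈ 4.68).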


    Chen CL 8

Alkylation Process Optimization: Process Flowsheet

    Chen CL 9

Alkylation Process Optimization: Notation and Variables

    Chen CL 10

Alkylation Process Optimization: Total Profit

f(x) = C1 x4 x7 - C2 x1 - C3 x2 - C4 x3 - C5 x5   (a)

C1 = alkylate product value ($0.063/octane-barrel)
C2 = olefin feed cost ($5.04/barrel)
C3 = isobutane recycle cost ($0.035/barrel)
C4 = acid addition cost ($10.00 per thousand pounds)
C5 = isobutane makeup cost ($3.36/barrel)

    Chen CL 11

Alkylation Process Optimization: Some Regression Models

For reactor temperatures between 80 and 90 °F and reactor acid strength by weight percent between 85 and 93:

x4 = x1 (1.12 + 0.13167 x8 - 0.00667 x8^2)   (b)

The alkylate yield x4 equals the olefin feed x1 plus the isobutane makeup x5 less shrinkage. The volumetric shrinkage can be expressed as 0.22 volume per volume of alkylate yield:

x5 = 1.22 x4 - x1   (c)

The acid strength by weight percent x6 could be derived from an equation that expressed the acid addition rate x3 as a function of the alkylate yield x4, the acid dilution factor x9, and the acid strength by weight percent x6 (the addition acid was assumed to have an acid strength of 98%):

x6 = 98000 x3 / (x4 x9 + 1000 x3)   (d)


    Chen CL 12

The motor octane number x7 was a function of the external isobutane-to-olefin ratio x8 and the acid strength by weight percent x6 (for the same reactor temperatures and acid strengths as for the alkylate yield x4):

x7 = 86.35 + 1.098 x8 - 0.038 x8^2 + 0.325 (x6 - 89)   (e)

The external isobutane-to-olefin ratio x8 was equal to the sum of the isobutane recycle x2 and the isobutane makeup x5, divided by the olefin feed x1:

x8 = (x2 + x5) / x1   (f)

The acid dilution factor x9 could be expressed as a linear function of the F-4 performance number x10:

x9 = 35.82 - 0.222 x10   (g)

The last dependent variable is the F-4 performance number x10, which was expressed as a function of the motor octane number x7:

x10 = -133 + 3 x7   (h)

    Chen CL 13

Alkylation Process Optimization: Constrained NLP Problem

max  f(x) = C1 x4 x7 - C2 x1 - C3 x2 - C4 x3 - C5 x5   (a)
s.t. Eqs. (b)-(h)

(10 variables, 7 constraints)

    Chen CL 14

Alkylation Process Optimization: GAMS Code

$TITLE ALKYLATION PROBLEM FROM GINO USERS MANUAL

$OFFSYMXREF

$OFFSYMLIST

OPTION LIMROW=0;

OPTION LIMCOL=0;

    POSITIVE VARIABLES X1,X2,X3,X4,X5,X6,X7,X8,X9,X10;

    VARIABLE OBJ;

    EQUATIONS E1,E2,E3,E4,E5,E6,E7,E8;

E1..X4=E=X1*(1.12+.13167*X8-0.00667*X8**2);

    E2..X7=E=86.35+1.098*X8-0.038*X8**2+0.325*(X6-89.);

    E3..X9=E=35.82-0.222*X10;

    E4..X10=E=3*X7-133;

    E5..X8*X1=E=X2+X5;

    E6..X5=E=1.22*X4-X1;

    E7..X6*(X4*X9+1000*X3)=E=98000*X3;

    E8.. OBJ =E= 0.063*X4*X7-5.04*X1-0.035*X2-10*X3-3.36*X5;

    X1.LO = 0.;

    X1.UP = 2000.;

    X2.LO = 0.;

    X2.UP = 16000.;

    X3.LO = 0.;

    Chen CL 15

    X3.UP = 120.;

    X4.LO = 0.;

    X4.UP = 5000.;

    X5.LO = 0.;

    X5.UP = 2000.;

    X6.LO = 85.;

    X6.UP = 93.;

    X7.LO = 90;

    X7.UP = 95;

    X8.LO = 3.;

    X8.UP = 12.;

    X9.LO = 1.2;

X9.UP = 4.;

X10.LO = 145.;

    X10.UP = 162.;

    X1.L =1745;

    X2.L =12000;

    X3.L =110;

    X4.L =3048;

    X5.L =1974;

    X6.L =89.2;

    X7.L =92.8;

    X8.L =8;

X9.L =3.6;

X10.L =145;

    MODEL ALKY/ALL/;

    SOLVE ALKY USING NLP MAXIMIZING OBJ;


    Chen CL 16

Alkylation Process Optimization: Relaxation of Regression Variables

max  f(x) = C1 x4 x7 - C2 x1 - C3 x2 - C4 x3 - C5 x5   (a)
s.t. Rx4 = x1 (1.12 + 0.13167 x8 - 0.00667 x8^2)   (b)
     x5 = 1.22 x4 - x1   (c)
     x6 = 98000 x3 / (x4 x9 + 1000 x3)   (d)
     Rx7 = 86.35 + 1.098 x8 - 0.038 x8^2 + 0.325 (x6 - 89)   (e)
     x8 x1 = x2 + x5   (f)
     Rx9 = 35.82 - 0.222 x10   (g)
     Rx10 = -133 + 3 x7   (h)

0.99 x4 ≤ Rx4 ≤ 1.01 x4
0.99 x7 ≤ Rx7 ≤ 1.01 x7
0.99 x9 ≤ Rx9 ≤ 1.01 x9
0.99 x10 ≤ Rx10 ≤ 1.01 x10

    Chen CL 17

Alkylation Process Optimization: Solution

f(x^(0)) = 872.3 (initial guess)  →  f(x*) = 1768.75

    Chen CL 18

Constraint Status at a Design Point

Minimize:   f(x)
Subject to: g_j(x) ≤ 0,  j = 1, …, m
            h_i(x) = 0,  i = 1, …, p

Active constraint:    g_j(x^(k)) = 0,  h_i(x^(k)) = 0

Inactive constraint:  g_j(x^(k)) < 0

Violated constraint:  g_j(x^(k)) > 0,  h_i(x^(k)) ≠ 0

ε-active constraint:  g_j(x^(k)) < 0,  g_j(x^(k)) + ε > 0

    Chen CL 19

Conceptual Steps of Constrained Optimization Algorithms

    Initialized from A Feasible Point


    Chen CL 20

Conceptual Steps of Constrained Optimization Algorithms

    Initialized from An Infeasible Point

    Chen CL 21

Sequential Unconstrained Minimization Techniques (SUMT)

Minimize:   f(x)
Subject to: g_j(x) ≤ 0,  j = 1, …, m
            h_k(x) = 0,  k = 1, …, ℓ

Minimize:   Φ(x, r_p) = f(x) + r_p P(x)

P(x): penalty if any constraint is violated
r_p:  weighting factor

    Chen CL 22

The Exterior Penalty Function Methods

Minimize:   f(x)
Subject to: g_j(x) ≤ 0,  j = 1, …, m
            h_k(x) = 0,  k = 1, …, ℓ

min Φ(x, r_p) = f(x) + r_p P(x)

P(x) = Σ_{j=1}^{m} [max(0, g_j(x))]^2 + Σ_{k=1}^{ℓ} [h_k(x)]^2

    Chen CL 23

The Exterior Penalty Function Method: Example

Minimize:   f = (x + 2)^2 / 20
Subject to: g1 = (1 - x)/2 ≤ 0
            g2 = (x - 2)/2 ≤ 0
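The exterior penalty iteration for this one-variable example can be sketched as follows (my own illustration, assuming scipy is available; the r_p → 10 r_p schedule is a common choice, not taken from the slides):

```python
from scipy.optimize import minimize_scalar

def f(x):
    return (x + 2.0)**2 / 20.0

def penalty(x):
    g1 = (1.0 - x) / 2.0          # g1 <= 0
    g2 = (x - 2.0) / 2.0          # g2 <= 0
    return max(0.0, g1)**2 + max(0.0, g2)**2

def exterior_penalty(rp=1.0, steps=6):
    x = -4.0                      # arbitrary infeasible starting point
    for _ in range(steps):
        phi = lambda t: f(t) + rp * penalty(t)
        x = minimize_scalar(phi, bracket=(x - 1.0, x + 1.0)).x
        rp *= 10.0                # increase the penalty weight
    return x
```

As r_p grows, the unconstrained minimizer of Φ approaches the constrained optimum x* = 1 from the infeasible side.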


    Chen CL 28

Pseudo-objective function: r_p = 1.0

    Chen CL 29

Algorithm for the Exterior Penalty Function Method

r_p: small value → large value

    Chen CL 30

Advantages and Disadvantages of Exterior Penalty Function Methods

It is applicable to general constrained problems, i.e., equalities as well as inequalities can be treated.

The starting design point can be arbitrary.

The method iterates through the infeasible region, where the problem functions may be undefined.

If the iterative process terminates prematurely, the final design may not be feasible and hence not usable.

    Chen CL 31

The Interior Penalty Function Methods (Barrier Function Methods)

Minimize:   f(x)
Subject to: g_j(x) ≤ 0,  j = 1, …, m
            h_k(x) = 0,  k = 1, …, ℓ

min Φ(x, r'_p, r_p) = f(x) + r'_p Σ_{j=1}^{m} ( -1 / g_j(x) ) + r_p Σ_{k=1}^{ℓ} [h_k(x)]^2


    Chen CL 32

The Interior Penalty Function Method: Example

Minimize:   f = (x + 2)^2 / 20
Subject to: g1 = (1 - x)/2 ≤ 0
            g2 = (x - 2)/2 ≤ 0

Chen CL 33

Min: Φ(x, r_p) = (x + 2)^2 / 20 - r_p [ 2/(1 - x) + 2/(x - 2) ]
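For comparison with the exterior method, the barrier iteration can be sketched as below (my own illustration, assuming scipy; the barrier terms are written as 2/(x-1) + 2/(2-x), which is the same function as above and is positive on the feasible interval 1 < x < 2):

```python
from scipy.optimize import minimize_scalar

def interior_penalty(rp=1.0, steps=8):
    # min (x+2)^2/20  s.t. g1 = (1-x)/2 <= 0, g2 = (x-2)/2 <= 0 (feasible: 1 <= x <= 2)
    x = 1.5                                        # strictly feasible starting point
    for _ in range(steps):
        phi = lambda t: (t + 2.0)**2 / 20.0 + rp * (2.0/(t - 1.0) + 2.0/(2.0 - t))
        x = minimize_scalar(phi, bounds=(1.0 + 1e-9, 2.0 - 1e-9),
                            method="bounded").x    # barrier keeps iterates interior
        rp *= 0.1                                  # drive the barrier weight to zero
    return x
```

As r_p shrinks, the minimizer approaches x* = 1 from inside the feasible region.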

    Chen CL 34

Algorithm for the Interior Penalty Function Method

r_p: large value → small value → 0

    Chen CL 35

Advantages and Disadvantages of Interior Penalty Function Methods

    The starting design point must be feasible

    The method always iterates through the feasible region

    Chen CL 36 Chen CL 37


Augmented Lagrange Multiplier Method: Equality-Constrained Problems

Min:  f(x)
s.t.  h_k(x) = 0,  k = 1, …, ℓ

L(x, λ) = f(x) + Σ_k λ_k h_k(x)

A(x, λ, r_p) = f(x) + Σ_k { λ_k h_k(x) + r_p [h_k(x)]^2 }

NC:  ∂A/∂x_i = ∂f/∂x_i + Σ_k (λ_k + 2 r_p h_k) ∂h_k/∂x_i = 0

at x*:  ∂L/∂x_i = ∂f/∂x_i + Σ_k λ*_k ∂h_k/∂x_i = 0

λ*_k ≈ λ_k + 2 r_p h_k(x)   (update λ_k, with a finite upper bound for r_p)


    Chen CL 38

Augmented Lagrange Multiplier Method: Example

Min:  f(x) = x1^2 + x2^2
s.t.  h(x) = x1 + x2 - 1 = 0

Chen CL 39

Augmented Lagrange Multiplier Method: Example

A(x, λ, r_p) = x1^2 + x2^2 + λ (x1 + x2 - 1) + r_p (x1 + x2 - 1)^2

∇_x A(x, λ, r_p) = [ 2 x1 + λ + 2 r_p (x1 + x2 - 1) ;  2 x2 + λ + 2 r_p (x1 + x2 - 1) ] = [ 0 ; 0 ]

x1 = x2 = (2 r_p - λ) / (2 + 4 r_p)

Solve the problem for r_p = 1 with λ = 0 and λ = -2, respectively.

True optimum: λ* = -1,  x1 = x2 = 0.5
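The multiplier iteration for this example can be written out directly, since the inner minimization has the closed form given above (a sketch, not from the slides):

```python
def alm(rp=1.0, iters=40):
    # min x1^2 + x2^2  s.t. h = x1 + x2 - 1 = 0;  by symmetry x1 = x2 = x
    lam = 0.0
    for _ in range(iters):
        x = (2.0*rp - lam) / (2.0 + 4.0*rp)   # minimizer of A for fixed lambda
        h = 2.0*x - 1.0                       # constraint value h(x)
        lam += 2.0*rp*h                       # multiplier update
    return x, lam
```

With r_p = 1 the iteration converges geometrically (ratio 1/3) to x1 = x2 = 0.5 and λ = -1, without driving r_p to infinity.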

    Chen CL 40 Chen CL 41


Augmented Lagrange Multiplier Method: Inequality-Constrained Problems

Min:  f(x)
s.t.  g_j(x) ≤ 0,  j = 1, …, m   →   g_j(x) + Z_j^2 = 0

A(x, λ, Z, r_p) = f(x) + Σ_j { λ_j [ g_j(x) + Z_j^2 ] + r_p [ g_j(x) + Z_j^2 ]^2 }

A(x, λ, r_p) = f(x) + Σ_j { λ_j α_j + r_p α_j^2 }

α_j = max ( g_j(x),  -λ_j / (2 r_p) )

λ_j ← λ_j + 2 r_p α_j   (update rule)

Case 1 (g active):   Z^2 = 0,  λ* = λ + 2 r_p g > 0,  α = g > -λ/(2 r_p)

Case 2 (g inactive): Z^2 > 0,  λ* = λ + 2 r_p (g + Z^2) = 0,  α = g + Z^2 = -λ/(2 r_p) > g

Augmented Lagrange Multiplier Method: General Problems

Min:  f(x)
s.t.  g_j(x) ≤ 0,  j = 1, …, m
      h_k(x) = 0,  k = 1, …, ℓ

A(x, λ, r_p) = f(x) + Σ_j { λ_j α_j + r_p α_j^2 } + Σ_k { λ_{k+m} h_k(x) + r_p [h_k(x)]^2 }

α_j = max ( g_j(x),  -λ_j / (2 r_p) ),  λ_j ← λ_j + 2 r_p α_j(x),  j = 1, …, m

λ_{k+m} ← λ_{k+m} + 2 r_p h_k(x),  k = 1, …, ℓ

    Chen CL 42 Chen CL 43

Advantages of ALM Method

Relatively insensitive to the value of r_p

Precise g(x) = 0 and h(x) = 0 are possible

Acceleration is achieved by updating λ

Starting point may be either feasible or infeasible

At the optimum, λ_j ≠ 0 will automatically identify the active constraint set

    Chen CL 44 Chen CL 45


Generalized Reduced Gradient Method (GRG)

Min:  f(x),  x ∈ R^n
s.t.  g_j(x) ≤ 0,  j = 1, …, J
      h_k(x) = 0,  k = 1, …, K
      x_iℓ ≤ x_i ≤ x_iu

Min:  f(x),  x ∈ R^n
s.t.  g_j(x) + x_{n+j} = 0,  j = 1, …, J
      h_k(x) = 0,  k = 1, …, K
      x_iℓ ≤ x_i ≤ x_iu
      x_{n+j} ≥ 0,  j = 1, …, J

Min:  f(x),  x ∈ R^{n+J}
s.t.  h_k(x) = 0,  k = 1, …, K+J
      x_iℓ ≤ x_i ≤ x_iu,  i = 1, …, n+J

Note:

Each h_k(x) = 0 reduces one degree of freedom, i.e., one variable can be represented by the others; the K+J dependent variables can be represented by the other (n+J) - (K+J) = (n - K) independent variables.

    Chen CL 46

Divide x ∈ R^{(n+J)×1} into independent/dependent (design/state, Y/Z) variables so as to satisfy h_k(x) = 0:

x = [ Y ; Z ]  ((n+J)×1),  Y = (y_1, …, y_{n-K}),  Z = (z_1, …, z_{K+J})

Variations of objective and constraint functions:

df(x) = Σ_{i=1}^{n-K} (∂f/∂y_i) dy_i + Σ_{i=1}^{K+J} (∂f/∂z_i) dz_i = ∇_Y f^T dY + ∇_Z f^T dZ

dh_k(x) = Σ_{i=1}^{n-K} (∂h_k/∂y_i) dy_i + Σ_{i=1}^{K+J} (∂h_k/∂z_i) dz_i = ∇_Y h_k^T dY + ∇_Z h_k^T dZ

dh = [ dh_1 ⋯ dh_K ⋯ dh_{K+J} ]^T = C dY + D dZ

    Chen CL 47

C = [ ∂h_k/∂y_i ]  ((K+J) × (n-K)),  with rows ∇_Y^T h_1, …, ∇_Y^T h_K, …, ∇_Y^T h_{K+J}

D = [ ∂h_k/∂z_i ]  ((K+J) × (K+J)),  with rows ∇_Z^T h_1, …, ∇_Z^T h_K, …, ∇_Z^T h_{K+J}

    Chen CL 48 Chen CL 49


x → x + dx should maintain h = 0, i.e., dh = 0:

dh = C dY + D dZ = 0

D dZ = -C dY   or   dZ = -D^{-1} C dY

df(x) = ∇_Y f^T dY + ∇_Z f^T dZ
      = [ ∇_Y f^T - ∇_Z f^T D^{-1} C ] dY   (1 × (n-K))
      = G_R^T dY   (Generalized Reduced Gradient)

df/dY (x) = G_R = ∇_Y f - [ D^{-1} C ]^T ∇_Z f   ((n-K) × 1)

Use G_R to generate the search direction S and a step size α.

dZ = -D^{-1} C dY is based on a linear approximation, so some h_k(x + dx) ≠ 0; fix Y and correct dZ to maintain h + dh = 0 (repeat until dZ is small):

h(x) + dh(x) = 0
h(x) + C dY + D dZ = 0
dZ = -D^{-1} [ h(x) + C dY ]
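The derivation above can be exercised on a small problem. The sketch below (my own example, not from the slides) minimizes x^2 + y^2 subject to h = x y - 1 = 0, taking Y = x as the design variable and Z = y as the state variable; the inner loop is the dZ = -D^{-1} h correction.

```python
def grg_demo(x=2.0, y=0.5, alpha=0.1, iters=300):
    # min x^2 + y^2  s.t. h(x, y) = x*y - 1 = 0; optimum at (1, 1)
    for _ in range(iters):
        C, D = y, x                       # dh/dx and dh/dy (D must stay nonsingular)
        GR = 2.0*x - (C/D) * (2.0*y)      # generalized reduced gradient
        dx = -alpha * GR                  # steepest-descent move in the design variable
        x += dx
        y += -(C/D) * dx                  # linearized move in the state variable
        for _ in range(20):               # correction: restore h = 0 with Y held fixed
            h = x*y - 1.0
            if abs(h) < 1e-12:
                break
            y -= h / x                    # dZ = -D^{-1} h
    return x, y
```

Because h is linear in y here, the correction converges in one Newton step; in general several corrections are needed per move.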

    Chen CL 50

GRG: Algorithm

Specify design (Y) and state (Z) variables:
select state variables (Z) to avoid singularity of D;
any component of x that is at its lower/upper bound initially is set to be a design variable;
slack variables should be set as state variables.

Compute G_R (analytically or numerically).

Test for convergence: stop if ‖G_R‖ < ε.

Determine the search direction S:
steepest descent: S = -G_R;
Fletcher-Reeves conjugate gradients, DFP, BFGS, …

    Chen CL 51

Find the optimum step size α along S:

dY = α S = α (s_1, …, s_{n-K})^T

dZ = -D^{-1} C dY = -D^{-1} C (α S) = -α D^{-1} C S = α T,  T = (t_1, …, t_{K+J})^T


h(x_new) = 2.4684 ≠ 0  ⇒  x_new ∉ FR

D_new = [ ∂h/∂x3 ] = [ -4 (x3^3) ]_new = [ -4 (1.02595)^3 ] = [ -4.32 ]

C_new = [ ∂h/∂x1  ∂h/∂x2 ] = [ 1 + x2^2   2 x1 x2 ]_new = [ 1   -0.028 ]

dZ = -D^{-1} [ h(x) + C dY ]_new = (1/4.32) { 2.4684 + [ 1  -0.028 ] [ -2.024 ; -2.024 ] } = [ 0.116 ]

Z_new = Z_old + dZ = [ 1.02595 ] + [ 0.116 ] = [ 1.142 ]

h(x_new) = 1.876 ≠ 0  ⇒  x_new ∉ FR
……

Linearization of The Constrained Problem

Min:  f(x)
s.t.  h_j(x) = 0,  j = 1, …, p
      g_j(x) ≤ 0,  j = 1, …, m

Min:  f(x^(k) + Δx) ≈ f(x^(k)) + ∇f^T(x^(k)) Δx
s.t.  h_j(x^(k) + Δx) ≈ h_j(x^(k)) + ∇h_j^T(x^(k)) Δx = 0
      g_j(x^(k) + Δx) ≈ g_j(x^(k)) + ∇g_j^T(x^(k)) Δx ≤ 0

    Chen CL 58

    Linearization of The Constrained Problem

f_k ≡ f(x^(k)),  e_j ≡ -h_j(x^(k)),  b_j ≡ -g_j(x^(k)),
c_i ≡ ∂f(x^(k))/∂x_i,  d_i ≡ Δx_i^(k),
n_ij ≡ ∂h_j(x^(k))/∂x_i,  a_ij ≡ ∂g_j(x^(k))/∂x_i

Min:  f̄ = Σ_{i=1}^{n} c_i d_i = c^T d

s.t.  h̄_j = Σ_{i=1}^{n} n_ij d_i = e_j   →   [ n_1 ⋯ n_p ]^T d = N^T d = e

      ḡ_j = Σ_{i=1}^{n} a_ij d_i ≤ b_j   →   [ a_1 ⋯ a_m ]^T d = A^T d ≤ b

    Chen CL 59

Definition of Linearized Subproblem: Example

Min:  f(x) = x1^2 + x2^2 - 3 x1 x2

s.t.  g1(x) = (1/6) x1^2 + (1/6) x2^2 - 1 ≤ 0
      g2(x) = -x1 ≤ 0
      g3(x) = -x2 ≤ 0,   x^(0) = (1, 1)

∇f(x) = ( 2 x1 - 3 x2,  2 x2 - 3 x1 )

∇g1(x) = ( x1/3,  x2/3 ),  ∇g2(x) = ( -1, 0 ),  ∇g3(x) = ( 0, -1 )

Chen CL 60

Definition of Linearized Subproblem: Example

f(x^(0)) = -1,  b_1 = -g1(x^(0)) = 2/3,  b_2 = -g2(x^(0)) = 1,  b_3 = -g3(x^(0)) = 1

∇f(x^(0)) = ( -1, -1 )

∇g1(x^(0)) = ( 1/3, 1/3 )

A = [ 1/3  -1   0
      1/3   0  -1 ]      b = ( 2/3, 1, 1 )

    Chen CL 62

Definition of Linearized Subproblem: Example

Min:  f̄ = [ -1  -1 ] [ d1 ; d2 ]

s.t.  [ 1/3  1/3
       -1    0
        0   -1 ]  [ d1 ; d2 ]  ≤  [ 2/3 ; 1 ; 1 ]

Or

Min:  f̄ = -d1 - d2

s.t.  ḡ1 = (1/3) d1 + (1/3) d2 ≤ 2/3
      ḡ2 = -d1 ≤ 1
      ḡ3 = -d2 ≤ 1

    Chen CL 63

Definition of Linearized Subproblem: Example

Or

Min:  f̄(x) = f(x^(0)) + ∇f · (x - x^(0))
           = -1 + [ -1  -1 ] [ x1 - 1 ; x2 - 1 ]
           = -x1 - x2 + 1

s.t.  ḡ1(x) = g1(x^(0)) + ∇g1 · (x - x^(0))
            = -2/3 + [ 1/3  1/3 ] [ x1 - 1 ; x2 - 1 ]
            = (1/3)(x1 + x2 - 4) ≤ 0
      ḡ2(x) = -x1 ≤ 0
      ḡ3(x) = -x2 ≤ 0

Chen CL 64

Definition of Linearized Subproblem: Example

Chen CL 65

Sequential Linear Programming Algorithm


Definition of Linearized Subproblem: Example

Optimum solution: line D-E

d1 + d2 = 2  ⇒  (x1 - 1) + (x2 - 1) = 2  ⇒  x1 + x2 - 4 = 0

Sequential Linear Programming Algorithm: Basic Idea

Min:  f(x)
s.t.  h_j(x) = 0,  j = 1, …, p
      g_j(x) ≤ 0,  j = 1, …, m

Min:  f(x^(k) + Δx) ≈ f(x^(k)) + ∇f^T(x^(k)) Δx
s.t.  h_j(x^(k) + Δx) ≈ h_j(x^(k)) + ∇h_j^T(x^(k)) Δx = 0
      g_j(x^(k) + Δx) ≈ g_j(x^(k)) + ∇g_j^T(x^(k)) Δx ≤ 0

    Chen CL 66

Sequential Linear Programming Algorithm: Basic Idea

f_k ≡ f(x^(k)),  e_j ≡ -h_j(x^(k)),  b_j ≡ -g_j(x^(k)),
c_i ≡ ∂f(x^(k))/∂x_i,  d_i ≡ Δx_i^(k),
n_ij ≡ ∂h_j(x^(k))/∂x_i,  a_ij ≡ ∂g_j(x^(k))/∂x_i

Min:  f̄ = Σ_{i=1}^{n} c_i d_i = c^T d

s.t.  h̄_j = Σ_{i=1}^{n} n_ij d_i = e_j   →   [ n_1 ⋯ n_p ]^T d = N^T d = e

      ḡ_j = Σ_{i=1}^{n} a_ij d_i ≤ b_j   →   [ a_1 ⋯ a_m ]^T d = A^T d ≤ b

      -Δ_iℓ^(k) ≤ d_i ≤ Δ_iu^(k),  i = 1, …, n   (move limits)

    Chen CL 67

Sequential Linear Programming Algorithm: Basic Idea

Move limits: without them the linearized problem may not have a bounded solution, or the changes in design may become too large.

Selection of proper move limits is of critical importance, because it can mean success or failure of the SLP algorithm.

All b_i ≥ 0 (required for the standard Simplex form).

d_i: no sign restriction → d_i = d_i⁺ - d_i⁻,  d_i⁺ ≥ 0,  d_i⁻ ≥ 0

Stopping criteria: ḡ_j ≤ ε_1, j = 1, …, m;  |h̄_j| ≤ ε_1, j = 1, …, p;  ‖d‖ ≤ ε_2

    Chen CL 68



An SLP Algorithm

Step 1: estimate x^(0); set k = 0; specify ε_1, ε_2

Step 2: calculate f_k; b_j, j = 1, …, m; e_j, j = 1, …, p

Step 3: evaluate c_i = ∂f/∂x_i,  n_ij = ∂h_j/∂x_i,  a_ij = ∂g_j/∂x_i

Step 4: select proper move limits -Δ_iℓ^(k), Δ_iu^(k) (some fraction of the current design)

Step 5: define the LP subproblem

Step 6: use the Simplex method to solve for d^(k)

Step 7: check for convergence; stop?

Step 8: update the design x^(k+1) = x^(k) + d^(k),
set k = k + 1 and go to Step 2
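One pass of Steps 2-6 can be illustrated on the example that follows (x^(0) = (3, 3), 100% move limits). This is a sketch using scipy's linprog in place of a hand Simplex; note the LP optimum is the whole segment d1 + d2 = -2, so the solver returns one of its vertices rather than the midpoint (-1, -1), with the same objective value.

```python
from scipy.optimize import linprog

def slp_subproblem_at_3_3():
    # Linearization at x0 = (3, 3) of: min x1^2 + x2^2 - 3 x1 x2
    # s.t. g1 = x1^2/6 + x2^2/6 - 1 <= 0, x1 >= 0, x2 >= 0
    c = [-3.0, -3.0]                       # grad f(3, 3)
    A_ub = [[1.0, 1.0],                    # grad g1(3,3)^T d <= -g1(3,3) = -2
            [-1.0, 0.0],                   # -d1 <= 3  (from x1 >= 0)
            [0.0, -1.0]]                   # -d2 <= 3  (from x2 >= 0)
    b_ub = [-2.0, 3.0, 3.0]
    return linprog(c, A_ub=A_ub, b_ub=b_ub,
                   bounds=[(-3.0, 3.0)] * 2,  # 100% move limits
                   method="highs")
```

The optimal objective is -3(d1 + d2) = 6 for any point on the segment, matching the hand calculation.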

An SLP Example

Min:  f(x) = x1^2 + x2^2 - 3 x1 x2

s.t.  g1(x) = (1/6) x1^2 + (1/6) x2^2 - 1 ≤ 0
      g2(x) = -x1 ≤ 0
      g3(x) = -x2 ≤ 0,   x^(0) = (3, 3)

∇f(x) = ( 2 x1 - 3 x2,  2 x2 - 3 x1 )

∇g1(x) = ( x1/3,  x2/3 ),  ∇g2(x) = ( -1, 0 ),  ∇g3(x) = ( 0, -1 )

    Chen CL 70

An SLP Example

f(x^(0)) = -9,  b_1 = -g1(x^(0)) = -2 < 0 (violated),
b_2 = -g2(x^(0)) = 3 (inactive),  b_3 = -g3(x^(0)) = 3 (inactive)

∇f(x^(0)) = ( -3, -3 ) = c,  ∇g1(x^(0)) = ( 1, 1 )

A = [ 1  -1   0
      1   0  -1 ]      b = ( -2, 3, 3 )

    Chen CL 71

An SLP Example

Min:  f̄ = [ -3  -3 ] [ d1 ; d2 ]

s.t.  [ 1   1
       -1   0
        0  -1 ]  [ d1 ; d2 ]  ≤  [ -2 ; 3 ; 3 ]

Or

Min:  f̄ = -3 d1 - 3 d2
s.t.  ḡ1 = d1 + d2 ≤ -2
      ḡ2 = -d1 ≤ 3
      ḡ3 = -d2 ≤ 3

    Chen CL 72

An SLP Example

Chen CL 73

    SLP: Example


An SLP Example

Feasible region: A-B-C; the cost function is parallel to B-C (optimum solution: line B-C, d_{1,2} = -1)

100% move limits: -3 ≤ d1, d2 ≤ 3 (ADEF) → d1 = -1, d2 = -1 → x1 = 2, x2 = 2 (x1 = 3 + d1 = 2)

20% move limits: -0.6 ≤ d1, d2 ≤ 0.6 (A1D2E1F1) → no feasible solution

The linearized constraints are satisfied, but the original nonlinear constraint g1 is still violated.

SLP: Example

Min:  f(x) = x1^2 + x2^2 - 3 x1 x2

s.t.  g1(x) = (1/6) x1^2 + (1/6) x2^2 - 1 ≤ 0
      g2(x) = -x1 ≤ 0
      g3(x) = -x2 ≤ 0,   x^(0) = (1, 1),   ε_{1,2} = 0.001, 15% design change

Min:  f̄ = -d1 - d2
s.t.  ḡ1 = (1/3) d1 + (1/3) d2 ≤ 2/3
      ḡ2 = -d1 ≤ 1
      ḡ3 = -d2 ≤ 1
      -0.15 ≤ d1, d2 ≤ 0.15

    Chen CL 74

SLP: Example

Solution region: DEFG → point F (d1 = d2 = 0.15)

    Chen CL 75

SLP: Example

d1 = d1⁺ - d1⁻,  d2 = d2⁺ - d2⁻

Min:  f̄ = -d1⁺ + d1⁻ - d2⁺ + d2⁻

s.t.  ḡ1 = (1/3) [ (d1⁺ - d1⁻) + (d2⁺ - d2⁻) ] ≤ 2/3
      ḡ2 = -d1⁺ + d1⁻ ≤ 1
      ḡ3 = -d2⁺ + d2⁻ ≤ 1
      d1⁺ - d1⁻ ≤ 0.15
      d1⁻ - d1⁺ ≤ 0.15
      d2⁺ - d2⁻ ≤ 0.15
      d2⁻ - d2⁺ ≤ 0.15
      d1⁺, d1⁻ ≥ 0,  d2⁺, d2⁻ ≥ 0

    Chen CL 76

SLP: Example

Chen CL 77

    Observations on The SLP Algorithm


SLP: Example

d1⁺ = 0.15,  d1⁻ = 0,  d2⁺ = 0.15,  d2⁻ = 0

d1 = d1⁺ - d1⁻ = 0.15,  d2 = d2⁺ - d2⁻ = 0.15

x^(1) = x^(0) + d^(0) = (1.15, 1.15)

f(x^(1)) = -1.3225,  g1(x^(1)) = -0.6166 (inactive)

‖d‖ = 0.212 > ε_2  →  next iteration

Observations on The SLP Algorithm

The rate of convergence and performance depend to a large extent on the selection of move limits.

The method cannot be used as a black-box approach for engineering design problems (selection of move limits is a trial-and-error process).

The method is not convergent in general, since no descent function is defined.

The method can cycle between two points if the optimum solution is not a vertex of the constraint set.

Lack of robustness.

    Chen CL 78

Quadratic Programming Problem: Quadratic Step Size Constraint

Constrained nonlinear programming:
the linear programming subproblem lacks robustness; the quadratic programming subproblem comes with a descent function and a step size determination strategy.

Quadratic step size constraint:

-Δ_iℓ^(k) ≤ d_i ≤ Δ_iu^(k), i = 1, …, n  (move limits)   →   (1/2) ‖d‖^2 = (1/2) d^T d = (1/2) Σ_i (d_i)^2 ≤ ξ

    Chen CL 79

Quadratic Programming Problem: Quadratic Step Size Constraint

Min:  f̄ = Σ_{i=1}^{n} c_i d_i = c^T d

s.t.  h̄_j = Σ_{i=1}^{n} n_ij d_i = e_j   →   [ n_1 ⋯ n_p ]^T d = N^T d = e

      ḡ_j = Σ_{i=1}^{n} a_ij d_i ≤ b_j   →   [ a_1 ⋯ a_m ]^T d = A^T d ≤ b

      (1/2) Σ_i (d_i)^2 ≤ ξ   (move limits)

    Chen CL 80

Quadratic Programming (QP) Subproblem

Chen CL 81

    Quadratic Programming (QP) Subproblem


Quadratic Programming (QP) Subproblem

Min:  f̄ = Σ_{i=1}^{n} c_i d_i = c^T d

s.t.  h̄_j = Σ_{i=1}^{n} n_ij d_i = e_j   →   N^T d = e
      ḡ_j = Σ_{i=1}^{n} a_ij d_i ≤ b_j   →   A^T d ≤ b
      (1/2) Σ_i (d_i)^2 ≤ ξ   (0.5 d^T d ≤ ξ; check KTC, ξ = 1)

Min:  f̄ = c^T d + 0.5 d^T d
s.t.  N^T d = e
      A^T d ≤ b

a convex programming problem → unique solution

Quadratic Programming (QP) Subproblem

Min:  f(x) = 2 x1^3 + 15 x2^2 - 8 x1 x2 - 4 x1

s.t.  h(x) = x1^2 + x1 x2 + 1 = 0
      g(x) = x1 - (1/4) x2^2 - 1 ≤ 0,   x^(0) = (1, 1)

Opt: x* = A: (1, -2) or B: (-1, 2);  f(x*) = 74 (A), 78 (B)

    Chen CL 82

Quadratic Programming (QP) Subproblem

c = ∇f = ( 6 x1^2 - 8 x2 - 4,  30 x2 - 8 x1 )|_{x^(0)} = ( -6, 22 )

∇h = ( 2 x1 + x2,  x1 )|_{x^(0)} = ( 3, 1 )

∇g = ( 1,  -x2/2 )|_{x^(0)} = ( 1, -0.5 )

f(1, 1) = 5,  h(1, 1) = 3 ≠ 0,  g(1, 1) = -0.25 < 0

    Chen CL 83

Quadratic Programming (QP) Subproblem

LP Sub-P:  Min:  f̄ = -6 d1 + 22 d2
           s.t.  h̄ = 3 d1 + d2 = -3
                 ḡ = d1 - 0.5 d2 ≤ 0.25

50% move limits: -0.5 ≤ d1, d2 ≤ 0.5 → infeasible (HIJK)

100% move limits: -1 ≤ d1, d2 ≤ 1 → L: d1 = -2/3, d2 = -1, f̄ = -18

Quadratic Programming (QP) Subproblem

Chen CL 85

    Solution of QP Problems


Quadratic Programming (QP) Subproblem

QP Sub-P:  Min:  f̄ = -6 d1 + 22 d2 + 0.5 (d1^2 + d2^2)
           s.t.  h̄ = 3 d1 + d2 = -3
                 ḡ = d1 - 0.5 d2 ≤ 0.25

→ G: d1 = -0.5, d2 = -1.5, f̄ = -28.75
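This QP is small enough to solve by writing out its KKT system (a numpy sketch, not the slides' graphical solution): first try the equality-constrained stationary point; if the linearized inequality is violated, re-solve with it active and check the multiplier sign.

```python
import numpy as np

def solve_qp_subproblem():
    c = np.array([-6.0, 22.0])            # gradient of f at x0
    n, e = np.array([3.0, 1.0]), -3.0     # linearized h: n^T d = e
    a, b = np.array([1.0, -0.5]), 0.25    # linearized g: a^T d <= b
    # KKT with only the equality active: [I n; n^T 0] [d; v] = [-c; e]
    K = np.block([[np.eye(2), n[:, None]],
                  [n[None, :], np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.r_[-c, e])[:2]
    if a @ d > b:                         # inequality violated -> treat it as active
        K = np.block([[np.eye(2), n[:, None], a[:, None]],
                      [n[None, :], np.zeros((1, 2))],
                      [a[None, :], np.zeros((1, 2))]])
        sol = np.linalg.solve(K, np.r_[-c, e, b])
        d, u = sol[:2], sol[3]
        assert u >= 0.0                   # KKT sign condition on the multiplier
    return d
```

Running this reproduces point G: d = (-0.5, -1.5), with f̄ = -28.75.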

Solution of QP Problems

Min:  q(x) = c^T x + (1/2) x^T H x   (H = I here)

s.t.  A^T x ≤ b   (m × 1)
      N^T x = e   (p × 1)
      x ≥ 0   (n × 1)

A^T x + s = b,  s ≥ 0

L = c^T x + (1/2) x^T H x + u^T (A^T x + s - b) - ξ^T x + v^T (N^T x - e)

Chen CL 86

Solution of QP Problems

KKT conditions (v = y - z,  y, z ≥ 0):

c + H x + A u + N (y - z) - ξ = 0

A^T x + s - b = 0
N^T x - e = 0

u_j s_j = 0,  j = 1, …, m
ξ_i x_i = 0,  i = 1, …, n

s_j, u_j, ξ_i ≥ 0

    Chen CL 87

Solution of QP Problems

In matrix form (a linear complementarity problem):

[ H     A       -I_{n×n}  0_{n×m}  N       -N
  A^T   0_{m×m}  0_{m×n}  I_{m×m}  0_{m×p}  0_{m×p}
  N^T   0_{p×m}  0_{p×n}  0_{p×m}  0_{p×p}  0_{p×p} ]  [ x ; u ; ξ ; s ; y ; z ]  =  [ -c ; b ; e ]

B_{(n+m+p) × 2(n+m+p)}  X_{2(n+m+p) × 1}  =  D_{(n+m+p) × 1}

ξ_i x_i = 0,  u_j s_j = 0   →   X_i X_{n+m+i} = 0,  i = 1, …, n+m

X_i ≥ 0,  i = 1, …, 2(n+m+p)


    Chen CL 92


[Simplex tableaus for the linear complementarity problem, with basic variables among w, X1-X8 and artificial variables Y1-Y4: successive pivots drive the artificial objective from w = 17 to w = 0, with all Y_i leaving the basis in the final tableau, which yields the QP solution.]

    Chen CL 94

Sequential Quadratic Programming (SQP): Constrained Steepest Descent Method

Min:  f̄ = c^T d + (1/2) d^T d

d = -c  (the steepest-descent direction if there are no constraints); the constrained QP subproblem is solved sequentially:

Min:  f̄ = c^T d + 0.5 d^T d
s.t.  N^T d = e
      A^T d ≤ b

d: modifies -c to satisfy the constraints

Q: Descent function? Step size?

    Chen CL 95

CSD: Pshenichny's Descent Function

Φ(x) = f(x) + R V(x)

Φ(x^(k)) = f(x^(k)) + R V(x^(k)),  Φ_k = f_k + R V_k

V_k = max { 0; |h_1|, …, |h_p|; g_1, …, g_m } ≥ 0   (maximum constraint violation)

R ≥ r_k = Σ_{i=1}^{p} |v_i^(k)| + Σ_{i=1}^{m} u_i^(k)


Chen CL 100

CSD: Step Size Determination

Chen CL 101

CSD: Step Size Example


Effect of the parameter γ on step size determination: a larger γ tends to reduce the step size in order to satisfy the descent condition.

Min:  f(x) = x1^2 + 320 x1 x2

s.t.  g1(x) = x1 / (60 x2) - 1 ≤ 0
      g2(x) = 1 - x1 (x1 - x2) / 3600 ≤ 0
      g3(x) = -x1 ≤ 0
      g4(x) = -x2 ≤ 0,   x^(0) = (40, 0.5)

d^(0) = (25.6, 0.45),  u = (4880, 19400, 0, 0),  γ = 0.5

R = Σ u_i = 4880 + 19400 + 0 + 0 = 24280

β = γ ‖d^(0)‖^2 = 0.5 (25.6^2 + 0.45^2) = 328

    Chen CL 102

CSD: Step Size Example

f(x^(0)) = (40)^2 + 320 (40)(0.5) = 8000

g1(x^(0)) = 40/(60 × 0.5) - 1 = 0.333 > 0 (violation)
g2(x^(0)) = 1 - 40 (40 - 0.5)/3600 = 0.5611 > 0 (violation)
g3(x^(0)) = -40 < 0 (inactive)
g4(x^(0)) = -0.5 < 0 (inactive)

V_0 = max {0; 0.333, 0.5611, -40, -0.5} = 0.5611

Φ_0 = f_0 + R V_0 = 8000 + (24280)(0.5611) = 21624

    Chen CL 103

CSD: Step Size Example

Initial: t_0 = 1 (j = 0)

x_1^(1,0) = x_1^(0) + t_0 d_1^(0) = 40 + (1)(25.6) = 65.6
x_2^(1,0) = x_2^(0) + t_0 d_2^(0) = 0.5 + (1)(0.45) = 0.95

f_{1,0} = f(x^(1,0)) = (65.6)^2 + 320 (65.6)(0.95) = 24246

g1(x^(1,0)) = 65.6/(60 × 0.95) - 1 = 0.151 > 0 (violation)
g2(x^(1,0)) = 1 - 65.6 (65.6 - 0.95)/3600 = -0.1781 < 0 (inactive)
g3(x^(1,0)) = -65.6 < 0 (inactive)
g4(x^(1,0)) = -0.95 < 0 (inactive)

V_{1,0} = max {0; 0.151, -0.1781, -65.6, -0.95} = 0.151

Φ_{1,0} = f_{1,0} + R V_{1,0} = 24246 + (24280)(0.151) = 27912

LHS = Φ_{1,0} + t_0 β = 27912 + 328 = 28240 > 21624 = Φ_0 = RHS → reduce the step size

    Chen CL 104

CSD: Step Size Example

Chen CL 105

    SQP: CSD Algorithm


Second: t_1 = 0.5 (j = 1)

x_1^(1,1) = x_1^(0) + t_1 d_1^(0) = 40 + (0.5)(25.6) = 52.8
x_2^(1,1) = x_2^(0) + t_1 d_2^(0) = 0.5 + (0.5)(0.45) = 0.725

f_{1,1} = f(x^(1,1)) = (52.8)^2 + 320 (52.8)(0.725) = 15037

g1(x^(1,1)) = 52.8/(60 × 0.725) - 1 = 0.2138 > 0 (violation)
g2(x^(1,1)) = 1 - 52.8 (52.8 - 0.725)/3600 = 0.2362 > 0 (violation)
g3(x^(1,1)) = -52.8 < 0 (inactive)
g4(x^(1,1)) = -0.725 < 0 (inactive)

V_{1,1} = max {0; 0.2138, 0.2362, -52.8, -0.725} = 0.2362

Φ_{1,1} = f_{1,1} + R V_{1,1} = 15037 + (24280)(0.2362) = 20772

LHS = Φ_{1,1} + t_1 β = 20772 + (0.5)(328) = 20936 < 21624 = Φ_0 = RHS → accept t_1 = 0.5
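The two trials above can be automated. A sketch of the step-size loop with the slides' numbers (descent condition Φ(x + t d) + t β ≤ Φ(x), halving t until it holds):

```python
def csd_step_size(R=24280.0, gamma=0.5):
    f = lambda x1, x2: x1**2 + 320.0*x1*x2
    def V(x1, x2):                         # maximum constraint violation
        return max(0.0,
                   x1/(60.0*x2) - 1.0,
                   1.0 - x1*(x1 - x2)/3600.0,
                   -x1, -x2)
    phi = lambda x1, x2: f(x1, x2) + R*V(x1, x2)
    x, d = (40.0, 0.5), (25.6, 0.45)
    beta = gamma*(d[0]**2 + d[1]**2)       # approximately 328
    t = 1.0
    while phi(x[0] + t*d[0], x[1] + t*d[1]) + t*beta > phi(*x):
        t *= 0.5                           # halve the trial step
    return t
```

As computed by hand above, t = 1 is rejected and t = 0.5 is accepted.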

Step 1: set k = 0; choose x^(0), R_0 (= 1), 0 < γ (= 0.2) < 1; ε_1, ε_2

Step 2: evaluate f(x^(k)), h_j(x^(k)), g_j(x^(k)); V_k; ∇f(x^(k)), ∇h_j(x^(k)), ∇g_j(x^(k))

Step 3: define and solve the QP subproblem → d^(k), v^(k), u^(k)

Step 4: stop if ‖d^(k)‖ ≤ ε_2 and V_k ≤ ε_1

Step 5: calculate r_k (sum of the Lagrange multipliers); set R = max {R_k, r_k}

Step 6: set x^(k+1) = x^(k) + α_k d^(k)
(α_k minimizes, inexactly, Φ(x) = f(x) + R V(x) along d^(k))

    Chen CL 106

    SQP: CSD Algorithm

Step 7: save R_{k+1} = R, set k = k + 1 and go to Step 2

The CSD algorithm, along with the foregoing step size determination procedure, is convergent provided the second derivatives of all the functions are piecewise continuous (Lipschitz condition) and the set of design points x^(k) is bounded as follows:

Φ(x^(k)) ≤ Φ(x^(0))

    Chen CL 107

SQP: CSD Example

Min:  f(x) = x1^2 + x2^2 - 3 x1 x2

s.t.  g1(x) = (1/6) x1^2 + (1/6) x2^2 - 1 ≤ 0
      g2(x) = -x1 ≤ 0
      g3(x) = -x2 ≤ 0,   x^(0) = (1, 1)

R_0 = 10,  γ = 0.5,  ε_1 = ε_2 = 0.001

x* = (√3, √3),  u* = (3, 0, 0),  f* = -3

    Chen CL 108

    Iteration 1 (k=0)

    Chen CL 109

    linearized constraints and linearized constraint set


Step 1: k = 0; x^(0) = (1, 1), R_0 = 10, γ = 0.5; ε_1 = ε_2 = 0.001

Step 2:

f(x^(0)) = -1,  g1(x^(0)) = -2/3,
g2(x^(0)) = -1,  g3(x^(0)) = -1  (all inactive)

∇f(x^(0)) = (-1, -1),  ∇g1(x^(0)) = (1/3, 1/3),
∇g2(x^(0)) = (-1, 0),  ∇g3(x^(0)) = (0, -1)

V_0 = 0

    Chen CL 110

Step 3: define and solve the QP subproblem → d^(k), v^(k), u^(k)

Min:  f̄ = (-d1 - d2) + 0.5 (d1^2 + d2^2)
s.t.  ḡ1 = (1/3) d1 + (1/3) d2 ≤ 2/3
      -d1 ≤ 1,  -d2 ≤ 1

L = (-d1 - d2) + 0.5 (d1^2 + d2^2) + u1 [ (1/3)(d1 + d2 - 2) + s1^2 ] + u2 (-d1 - 1 + s2^2) + u3 (-d2 - 1 + s3^2)

∂L/∂d1 = -1 + d1 + (1/3) u1 - u2 = 0
∂L/∂d2 = -1 + d2 + (1/3) u1 - u3 = 0

(1/3)(d1 + d2 - 2) + s1^2 = 0
-d1 - 1 + s2^2 = 0
-d2 - 1 + s3^2 = 0

u_i s_i = 0,  u_i ≥ 0,  i = 1, 2, 3

→ d^(0) = (1, 1) (point D),  u^(0) = (0, 0, 0),  f̄ = -1
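The KKT solution above can be checked numerically: the unconstrained minimizer of the QP, d = -c = (1, 1), already satisfies every linearized constraint, so it is the subproblem solution with all multipliers zero (a small numpy sketch, my own):

```python
import numpy as np

def csd_qp_direction():
    c = np.array([-1.0, -1.0])             # gradient of f at x0 = (1, 1)
    d = -c                                 # unconstrained minimum of c^T d + 0.5 d^T d
    G = np.array([[1/3, 1/3],              # linearized g1
                  [-1.0, 0.0],             # -d1 <= 1
                  [0.0, -1.0]])            # -d2 <= 1
    b = np.array([2/3, 1.0, 1.0])
    assert np.all(G @ d <= b + 1e-12)      # feasible, so u = (0, 0, 0)
    return d
```

This confirms d^(0) = (1, 1) with u^(0) = (0, 0, 0).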

    Chen CL 111

    Step 4:||

    d(0)

    ||=

    2

    2, continue

    Step 5: r0=

    ui= 0, R= max{R0, r0} = max{10, 0} = 10
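    The Step 3 QP subproblem can be cross-checked numerically. A sketch assuming SciPy is available (SLSQP is used here merely as a convenient solver for this small QP, not as part of the CSD method itself):

```python
import numpy as np
from scipy.optimize import minimize

# QP subproblem at x(0) = (1, 1):
#   min  -d1 - d2 + 0.5*(d1^2 + d2^2)
#   s.t. (1/3)d1 + (1/3)d2 <= 2/3,  -d1 <= 1,  -d2 <= 1
obj = lambda d: -d[0] - d[1] + 0.5 * (d[0]**2 + d[1]**2)
cons = [{'type': 'ineq', 'fun': lambda d: 2/3 - (d[0] + d[1])/3},
        {'type': 'ineq', 'fun': lambda d: d[0] + 1.0},
        {'type': 'ineq', 'fun': lambda d: d[1] + 1.0}]
sol = minimize(obj, x0=[0.0, 0.0], constraints=cons)
# sol.x is approximately (1, 1) with objective -1, as derived above
```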


    SQP: Effect of γ in the CSD Method

    Same as the last example, case 1: γ = 0.5 → 0.01
    Steps 1–5: same as before

    Step 6: Iteration 1

    Φ0 = f0 + R V0 = −1 + (10)(0) = −1;  β0 = γ||d(0)||² = 0.01(1 + 1) = 0.02
    x(1,0) = x(0) + t0 d(0) = (2, 2)  (t0 = 1)
    f1,0 = f(2, 2) = −4
    V1,0 = V(2, 2) = max{0; 1/3, −2, −2} = 1/3
    Φ1,0 = f1,0 + R V1,0 = −4 + (10)(1/3) = −2/3
    Φ0 − t0 β0 = −1 − (1)(0.02) = −1.02 < Φ1,0 ⟹ reject, try t1 = 0.5
    x(1,1) = x(0) + t1 d(0) = (1.5, 1.5)  (t1 = 0.5)
    f1,1 = f(1.5, 1.5) = −2.25
    V1,1 = V(1.5, 1.5) = max{0; −1/4, −1.5, −1.5} = 0
    Φ1,1 = f1,1 + R V1,1 = −2.25 + (10)(0) = −2.25
    Φ0 − t1 β0 = −1 − (0.5)(0.02) = −1.01 ≥ Φ1,1 ⟹ α0 = t1 = 0.5, x(1) = (1.5, 1.5)

    Step 6: Iteration 2

    Φ1 = f1 + R V1 = −2.25 + (10)(0) = −2.25;  β1 = γ||d(1)||² = (0.02)(0.125) = 0.0025
    x(2,0) = x(1) + t0 d(1) = (1.75, 1.75)  (t0 = 1)
    f2,0 = f(1.75, 1.75) = −3.0625
    V2,0 = V(1.75, 1.75) = max{0; 0.0208, −1.75, −1.75} = 0.0208
    Φ2,0 = f2,0 + R V2,0 = −3.0625 + (10)(0.0208) = −2.8541
    Φ1 − t0 β1 = −2.25 − (1)(0.0025) = −2.2525 ≥ Φ2,0 ⟹ α1 = t0 = 1.0, x(2) = (1.75, 1.75)

    A smaller value of γ has no effect on the first two iterations.

    Same as the last example, case 2: γ = 0.5 → 0.9
    Iteration 2: step size t1 = 0.5
    x(2) = (1.625, 1.625), f2 = −2.641, g1 = −0.1198, V2 = 0

    A larger γ results in a smaller step size, and the new design point remains strictly feasible.

    SQP: Effect of R in the CSD Method

    Same as the last example, case 1: R = 10 → 1

    Iteration 1
    Steps 1–5: same as before
    Step 6:

    Φ0 = f0 + R V0 = −1 + (1)(0) = −1;  β0 = γ||d(0)||² = 0.5(1 + 1) = 1
    x(1,0) = x(0) + t0 d(0) = (2, 2)  (t0 = 1)
    f1,0 = f(2, 2) = −4
    V1,0 = V(2, 2) = max{0; 1/3, −2, −2} = 1/3
    Φ1,0 = f1,0 + R V1,0 = −4 + (1)(1/3) = −11/3
    Φ0 − t0 β0 = −1 − (1)(1) = −2 ≥ Φ1,0 ⟹ α0 = t0 = 1, x(1) = (2, 2)

    Iteration 2
    Step 2:
    f(x(1)) = −4, g1(x(1)) = 1/3 (violated)
    g2(x(1)) = −2, g3(x(1)) = −2 (inactive)
    ∇f(x(1)) = (−2, −2), ∇g1(x(1)) = (2/3, 2/3)
    ∇g2(x(1)) = (−1, 0), ∇g3(x(1)) = (0, −1)
    V1 = 1/3

    Step 3: define and solve the QP subproblem → d(k), v(k), u(k)

    Min: f̄ = (−2d1 − 2d2) + 0.5(d1² + d2²)
    s.t. ḡ1 = (2/3)d1 + (2/3)d2 ≤ −1/3
    −d1 ≤ 2, −d2 ≤ 2

    ⟹ d(1) = (−0.25, −0.25), u(1) = (27/8, 0, 0)

    Step 4: ||d(1)|| = 0.3535 > ε2, continue

    Step 5: r1 = Σ ui = 27/8, R = max{R1, r1} = max{1, 27/8} = 27/8

    Step 6:
    Φ1 = f1 + R V1 = −4 + (27/8)(1/3) = −2.875;  β1 = γ||d(1)||² = 0.5(0.3535)² = 0.0625
    x(2,0) = x(1) + t0 d(1) = (1.75, 1.75)  (t0 = 1)
    f2,0 = f(1.75, 1.75) = −3.0625
    V2,0 = V(1.75, 1.75) = max{0; 0.0208, −1.75, −1.75} = 0.0208
    Φ2,0 = f2,0 + R V2,0 = −3.0625 + (27/8)(0.0208) = −2.9923
    Φ1 − t0 β1 = −2.875 − (1)(0.0625) = −2.9375 ≥ Φ2,0 ⟹ α1 = t0 = 1.0, x(2) = (1.75, 1.75)

    Step 7: save R2 = R = 27/8, set k = 2, and go to Step 2

    A smaller value of R gives a larger step size in the first iteration, but the result at the end of iteration 2 is the same as before.

    Observations on the CSD Algorithm

    The CSD algorithm is a first-order method for constrained optimization.
    CSD can treat equality as well as inequality constraints.
    Golden-section search may be used to find the step size by minimizing the descent function instead of merely satisfying the descent condition (not suggested, as it is inefficient).
    The rate of convergence of CSD can be improved by including higher-order information about the problem functions in the QP subproblem.
    For now, the step size is not allowed to be greater than one; in practice, the step size can be larger than one.
    There are numerical uncertainties in the selection of the parameters γ and R0.
    The starting point can affect the performance of the algorithm.

    SQP: Constrained Quasi-Newton Methods

    To introduce curvature information for the Lagrange function into the quadratic cost function → a new QP subproblem

    Min: f(x)  s.t. hi(x) = 0, i = 1, …, p

    L(x, v) = f(x) + Σᵢ vi hi(x) = f(x) + vᵀh(x)

    KTC: ∇L(x, v) = ∇f(x) + Σᵢ vi ∇hi(x) = ∇f(x) + N v = 0
    h(x) = 0

    note: N = [∂h1/∂x1 ⋯ ∂hp/∂x1; ⋮ ⋱ ⋮; ∂h1/∂xn ⋯ ∂hp/∂xn]  (n × p)

    Let y = (x, v) and write the KTC as F(y) = 0.

    Newton: ∇Fᵀ(y(k)) Δy(k) = −F(y(k))

    [∇²L  N; Nᵀ  0](k) [Δx; Δv](k) = −[∇L; h](k)

    First row: ∇²L Δx(k) + N Δv(k) = −∇L
    ⟹ ∇²L Δx(k) + N (v(k+1) − v(k)) = −∇f(x(k)) − N v(k)
    ⟹ ∇²L Δx(k) + N v(k+1) = −∇f(x(k))

    [∇²L  N; Nᵀ  0](k) [Δx(k); v(k+1)] = −[∇f; h](k)

    Solve these equations to obtain Δx(k) and v(k+1).
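    The Newton step above is a single symmetric linear solve. A small sketch on a made-up equality-constrained problem (min x1² + x2² subject to x1 + x2 = 2; the problem and starting point are my own, not from the slides):

```python
import numpy as np

# At x(0) = (0, 0): grad f = 2x, h = x1 + x2 - 2, N = (1, 1)^T,
# and the Hessian of the Lagrangian is exactly 2I for this quadratic.
x = np.array([0.0, 0.0])
grad_f = 2.0 * x
h = np.array([x.sum() - 2.0])
N = np.array([[1.0], [1.0]])
H = 2.0 * np.eye(2)

# Assemble and solve [H N; N^T 0] [dx; v_next] = -[grad_f; h]
K = np.block([[H, N], [N.T, np.zeros((1, 1))]])
rhs = -np.concatenate([grad_f, h])
sol = np.linalg.solve(K, rhs)
dx, v_next = sol[:2], sol[2:]
x_next = x + dx   # one Newton step solves a quadratic problem exactly
```

    Here x_next = (1, 1) and v_next = −2, which satisfy ∇f + N v = 0 and h = 0 directly.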

    Chen CL 126

    Note: the same as minimizing the following constrained function

    Min: fTx + 0.5xT2Lxs.t. h + NTx = 0

    L = fT

    x + 0.5xT

    2

    Lx + vT

    (h + NT

    x)KTC: L = f+ 2Lx + N v= 0

    h + NTx = 0

    2L NNT 0

    x

    v

    = f

    h

    Chen CL 127

    Now, the solutionx should be treated as a search direction d andstep size determined by minimizing an appropriate descent function

    to obtain a convergent algorithm

    the newQP subproblem

    Min: f = cTd + 12dTHd

    s.t. h = NTd = e

    g = ATd b

    Chen CL 128

    Quasi-Newton Hessian Approximation

    H(k+1) = H(k) + D(k) − E(k)
    s(k) = αk d(k)  (change in design)
    z(k) = H(k) s(k)
    y(k) = ∇L(x(k+1), u(k), v(k)) − ∇L(x(k), u(k), v(k))
    ξ1 = s(k)·y(k),  ξ2 = s(k)·z(k)
    θ = 1 if ξ1 ≥ 0.2 ξ2;  otherwise θ = 0.8 ξ2/(ξ2 − ξ1)
    w(k) = θ y(k) + (1 − θ) z(k),  ξ3 = s(k)·w(k)
    D(k) = (1/ξ3) w(k) w(k)ᵀ,  E(k) = (1/ξ2) z(k) z(k)ᵀ
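    These update formulas translate directly to code. A minimal sketch (NumPy; the function name is my own) that reproduces the H(1) computed in the worked example below:

```python
import numpy as np

def plba_hessian_update(H, s, y):
    """Damped quasi-Newton update H(k+1) = H(k) + D(k) - E(k), with
    s = alpha*d (design change) and y = change in the gradient of L."""
    z = H @ s
    xi1, xi2 = s @ y, s @ z
    theta = 1.0 if xi1 >= 0.2 * xi2 else 0.8 * xi2 / (xi2 - xi1)
    w = theta * y + (1.0 - theta) * z
    xi3 = s @ w
    D = np.outer(w, w) / xi3
    E = np.outer(z, z) / xi2
    return H + D - E

# Values from iteration 2 of the example:
# s(0) = (0.5, 0.5), y(0) = (-0.5, -0.5), H(0) = I
H1 = plba_hessian_update(np.eye(2), np.array([0.5, 0.5]),
                         np.array([-0.5, -0.5]))
# H1 = [[0.6, -0.4], [-0.4, 0.6]]
```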

    SQP: Modified CSD Algorithm

    PLBA (Pshenichny–Lim–Belegundu–Arora) method:
    Step 1: same as CSD; let H(0) = I
    Step 2: calculate the functions, gradients, and maximum constraint violation; update the Hessian if k > 0
    Step 3: solve the QP subproblem for d(k), u(k), v(k)
    Steps 4–7: same as CSD

    Chen CL 130

    SQP:Modified CSD Algorithm ExampleMin: f(x) = x21+x

    22 3x1x2

    s.t. g1(x) = 1

    6x21+

    16x

    22 1 0

    g2(x) = x1 0g3(x) = x2 0 x(0) = (1, 1)

    R0 = 10, = 0.5, 1= 2= 0.001x = (

    3,

    3), u= (3, 0, 0), f = 3.0625

    Iteration 1: same as before
    d(0) = (1, 1), α0 = 0.5, x(1) = (1.5, 1.5)
    u(0) = (0, 0, 0), R1 = 10, H(0) = I

    Iteration 2:
    Step 2: at x(1) = (1.5, 1.5):
    f = −2.25, g1 = −0.25, g2 = g3 = −1.5
    ∇f = (−1.5, −1.5), ∇g1 = (0.5, 0.5), ∇g2 = (−1, 0), ∇g3 = (0, −1)

    Step 3: QP subproblem

    s(0) = α0 d(0) = (0.5, 0.5)
    z(0) = H(0) s(0) = (0.5, 0.5)
    y(0) = ∇f(x(1)) − ∇f(x(0)) = (−0.5, −0.5)
    ξ1 = s(0)·y(0) = −0.5,  ξ2 = s(0)·z(0) = 0.5
    θ = 0.8(0.5)/(0.5 + 0.5) = 0.4
    w(0) = 0.4(−0.5, −0.5) + (1 − 0.4)(0.5, 0.5) = (0.1, 0.1)
    ξ3 = s(0)·w(0) = 0.1

    D(0) = [0.1  0.1; 0.1  0.1],  E(0) = [0.5  0.5; 0.5  0.5]

    H(1) = [1  0; 0  1] + [0.1  0.1; 0.1  0.1] − [0.5  0.5; 0.5  0.5] = [0.6  −0.4; −0.4  0.6]

    Min: f̄ = −1.5d1 − 1.5d2 + 0.5(0.6d1² − 0.8d1d2 + 0.6d2²)
    s.t. ḡ1 = 0.5d1 + 0.5d2 ≤ 0.25
    ḡ2 = −d1 ≤ 1.5
    ḡ3 = −d2 ≤ 1.5

    ⟹ d(1) = (0.25, 0.25), u(1) = (2.9, 0, 0)

    The direction is the same as in the previous CSD method; in general, inclusion of the approximate Hessian gives different directions and better convergence.

    Newton–Raphson Method to Solve Multiple Nonlinear Equations

    F(x) = 0  (an n × 1 system)
    x(k+1) = x(k) + Δx(k), k = 0, 1, 2, …
    Stop if ||F(x(k))|| = [Σᵢ₌₁ⁿ Fi(x(k))²]^(1/2) ≤ ε

    Ex: F1(x1, x2) = 0, F2(x1, x2) = 0

    F1(x1(k), x2(k)) + (∂F1/∂x1) Δx1(k) + (∂F1/∂x2) Δx2(k) = 0
    F2(x1(k), x2(k)) + (∂F2/∂x1) Δx1(k) + (∂F2/∂x2) Δx2(k) = 0

    [F1(k); F2(k)] (= F(k)) + [∂F1/∂x1  ∂F1/∂x2; ∂F2/∂x1  ∂F2/∂x2] (= ∇Fᵀ = J) [Δx1(k); Δx2(k)] (= Δx(k)) = [0; 0]

    ⟹ Δx(k) = −J⁻¹ F(k), or solve these equations directly

    Newton–Raphson Method to Solve Multiple Nonlinear Equations: Example

    F1(x) = 1.0 − 4.0×10⁶/(x1² x2) = 0
    F2(x) = 250 − 4.0×10⁶/(x1 x2²) = 0

    J = [∂F1/∂x1  ∂F1/∂x2; ∂F2/∂x1  ∂F2/∂x2] = 4.0×10⁶ [2/(x1³ x2)  1/(x1² x2²); 1/(x1² x2²)  2/(x1 x2³)]

    Step 1: x(0) = (500, 1.0), ε = 0.1, k = 0
    Step 2: F1 = −15, F2 = −7750, ||F|| ≈ 7750 > ε
    Step 3: J(0) = [0.064  16; 16  16000]
    Step 4: solve [0.064  16; 16  16000] [Δx1(k); Δx2(k)] = [15; 7750]
    ⟹ Δx(0) = (151, 0.33), x(1) = (651, 1.33)
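    Steps 1–4 repeat until the stopping criterion is met. A sketch (NumPy) of the full recursion for this example; eliminating x1 between the two equations gives the exact root x = (1000, 4):

```python
import numpy as np

def F(x):
    x1, x2 = x
    return np.array([1.0 - 4.0e6 / (x1**2 * x2),
                     250.0 - 4.0e6 / (x1 * x2**2)])

def J(x):  # Jacobian of F
    x1, x2 = x
    return 4.0e6 * np.array([[2/(x1**3 * x2), 1/(x1**2 * x2**2)],
                             [1/(x1**2 * x2**2), 2/(x1 * x2**3)]])

x = np.array([500.0, 1.0])
for k in range(30):
    if np.linalg.norm(F(x)) <= 0.1:          # epsilon = 0.1, as in Step 1
        break
    x = x + np.linalg.solve(J(x), -F(x))     # dx = -J^{-1} F
# x converges to approximately (1000, 4)
```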
