
Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 986317, 6 pages
http://dx.doi.org/10.1155/2013/986317

Research Article
A Conjugate Gradient Type Method for the Nonnegative Constraints Optimization Problems

    Can Li

    College of Mathematics, Honghe University, Mengzi 661199, China

    Correspondence should be addressed to Can Li; canlymathe@163.com

    Received 16 December 2012; Accepted 20 March 2013

    Academic Editor: Theodore E. Simos

Copyright © 2013 Can Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We are concerned with the nonnegative constraints optimization problems. It is well known that the conjugate gradient methods are efficient methods for solving large-scale unconstrained optimization problems due to their simplicity and low storage. Combining the modified Polak-Ribiere-Polyak method proposed by Zhang, Zhou, and Li with the Zoutendijk feasible direction method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the current iteration is a feasible point, the direction generated by the proposed method is always a feasible descent direction at the current iteration. Under appropriate conditions, we show that the proposed method is globally convergent. We also present some numerical results to show the efficiency of the proposed method.

    1. Introduction

Due to their simplicity and their low memory requirement, the conjugate gradient methods play a very important role in solving unconstrained optimization problems, especially large-scale optimization problems. Over the years, many variants of the conjugate gradient method have been proposed, and some are widely used in practice. The key features of the conjugate gradient methods are that they require no matrix storage and are faster than the steepest descent method.

The linear conjugate gradient method was proposed by Hestenes and Stiefel [1] in the 1950s as an iterative method for solving linear systems

$$Ax = b, \quad x \in \mathbb{R}^n, \qquad (1)$$

where $A$ is an $n \times n$ symmetric positive definite matrix. Problem (1) can be stated equivalently as the following minimization problem:

$$\min \ \frac{1}{2} x^T A x - b^T x, \quad x \in \mathbb{R}^n. \qquad (2)$$

This equivalence allows us to interpret the linear conjugate gradient method either as an algorithm for solving linear systems or as a technique for minimizing convex quadratic functions. For any $x_0$, the sequence $\{x_k\}$ generated by the linear conjugate gradient method converges to the solution of the linear system (1) in at most $n$ steps.
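To make this concrete, here is a minimal Python/NumPy sketch of the linear conjugate gradient iteration; the function name, tolerance, and stopping rule are our own illustrative choices rather than anything specified in the paper:

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-10):
    """Solve Ax = b for symmetric positive definite A by linear CG.

    In exact arithmetic the iterates reach the solution of (1)
    in at most n steps for an n x n matrix."""
    x = x0.copy()
    r = b - A @ x                # residual; equals minus the gradient of (2)
    d = r.copy()                 # initial search direction
    for _ in range(len(b)):
        if np.linalg.norm(r) <= tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact minimizer of (2) along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # classical CG scalar
        d = r_new + beta * d              # new direction, A-conjugate to d
        r = r_new
    return x
```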

The first nonlinear conjugate gradient method was introduced by Fletcher and Reeves [2] in the 1960s. It is one of the earliest known techniques for solving large-scale nonlinear optimization problems

$$\min \ f(x), \quad x \in \mathbb{R}^n, \qquad (3)$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. The nonlinear conjugate gradient methods for solving (3) have the following form:

$$x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = \begin{cases} -\nabla f(x_k), & k = 0, \\ -\nabla f(x_k) + \beta_k d_{k-1}, & k \ge 1, \end{cases} \qquad (4)$$

where $\alpha_k$ is a steplength obtained by a line search and $\beta_k$ is a scalar which determines the different conjugate gradient methods. If we choose $f$ to be a strongly convex quadratic and $\alpha_k$ to be the exact minimizer, the nonlinear conjugate gradient method reduces to the linear conjugate gradient method.
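As an illustration only (not code from the paper), iteration (4) can be sketched with the steplength rule and the choice of $\beta_k$ left as pluggable callables; the names `grad_f`, `line_search`, and `beta_rule` are hypothetical:

```python
import numpy as np

def nonlinear_cg(grad_f, x0, beta_rule, line_search, max_iter=1000, tol=1e-6):
    """Generic nonlinear CG iteration (4): x_{k+1} = x_k + alpha_k * d_k."""
    x = x0.copy()
    g = grad_f(x)
    d = -g                            # d_0 = -grad f(x_0)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(x, d)     # steplength alpha_k from some line search
        x = x + alpha * d
        g_new = grad_f(x)
        beta = beta_rule(g_new, g, d) # scalar beta_k selecting the CG variant
        d = -g_new + beta * d         # d_k = -grad f(x_k) + beta_k * d_{k-1}
        g = g_new
    return x
```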


Several famous formulae for $\beta_k$ are the Hestenes-Stiefel (HS) [1], Fletcher-Reeves (FR) [2], Polak-Ribiere-Polyak (PRP) [3, 4], Conjugate-Descent (CD) [5], Liu-Storey (LS) [6], and Dai-Yuan (DY) [7] formulae, which are given by

$$\beta_k^{HS} = \frac{\nabla f(x_k)^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{FR} = \frac{\|\nabla f(x_k)\|^2}{\|\nabla f(x_{k-1})\|^2}, \qquad (5)$$

$$\beta_k^{PRP} = \frac{\nabla f(x_k)^T y_{k-1}}{\|\nabla f(x_{k-1})\|^2}, \qquad \beta_k^{CD} = -\frac{\|\nabla f(x_k)\|^2}{d_{k-1}^T \nabla f(x_{k-1})}, \qquad (6)$$

$$\beta_k^{LS} = -\frac{\nabla f(x_k)^T y_{k-1}}{d_{k-1}^T \nabla f(x_{k-1})}, \qquad \beta_k^{DY} = \frac{\|\nabla f(x_k)\|^2}{d_{k-1}^T y_{k-1}}, \qquad (7)$$

where $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$ and $\|\cdot\|$ stands for the Euclidean norm of vectors.
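For concreteness, the six choices (5)-(7) translate into small helpers that plug into the `beta_rule` slot of the sketch above; here `g`, `g_prev`, and `d_prev` stand for $\nabla f(x_k)$, $\nabla f(x_{k-1})$, and $d_{k-1}$ (again our own illustrative code, not the paper's):

```python
def beta_hs(g, g_prev, d_prev):
    y = g - g_prev
    return (g @ y) / (d_prev @ y)                   # Hestenes-Stiefel (5)

def beta_fr(g, g_prev, d_prev):
    return (g @ g) / (g_prev @ g_prev)              # Fletcher-Reeves (5)

def beta_prp(g, g_prev, d_prev):
    return (g @ (g - g_prev)) / (g_prev @ g_prev)   # Polak-Ribiere-Polyak (6)

def beta_cd(g, g_prev, d_prev):
    return -(g @ g) / (d_prev @ g_prev)             # Conjugate-Descent (6)

def beta_ls(g, g_prev, d_prev):
    return -(g @ (g - g_prev)) / (d_prev @ g_prev)  # Liu-Storey (7)

def beta_dy(g, g_prev, d_prev):
    return (g @ g) / (d_prev @ (g - g_prev))        # Dai-Yuan (7)
```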

In this paper, we focus our attention on the Polak-Ribiere-Polyak (PRP) method. The study of the PRP method has received much attention and has made good progress. The global convergence of the PRP method with exact line search was proved in [3] under a strong convexity assumption on $f$. However, for general nonlinear functions, an example given by Powell [8] shows that the PRP method may fail to be globally convergent even if the exact line search is used. Inspired by Powell's work, Gilbert and Nocedal [9] conducted an elegant analysis and showed that the PRP method is globally convergent if $\beta_k^{PRP}$ is restricted to be nonnegative and $\alpha_k$ is determined by a line search satisfying the sufficient descent condition $\nabla f(x_k)^T d_k \le -c \|\nabla f(x_k)\|^2$ in addition to the standard Wolfe conditions. Other conjugate gradient methods and their global convergence results can be found in [10-15] and so forth.

Recently, Li and Wang [16] extended the modified Fletcher-Reeves (MFR) method proposed by Zhang et al. [17] for solving unconstrained optimization to the nonlinear equations

$$F(x) = 0, \qquad (8)$$

where $F: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable, and proposed a descent derivative-free method for solving symmetric nonlinear equations. The direction generated by the method is descent for the residual function. Under appropriate conditions, the method is globally convergent by the use of some backtracking line search technique.

In this paper, we further study the conjugate gradient method. We focus our attention on the modified Polak-Ribiere-Polyak (MPRP) method proposed by Zhang et al. [18]. The direction generated by the MPRP method is given by

$$d_k = \begin{cases} -\nabla f(x_k), & k = 0, \\ -\nabla f(x_k) + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, & k > 0, \end{cases} \qquad (9)$$

where $\beta_k^{PRP} = \nabla f(x_k)^T y_{k-1} / \|\nabla f(x_{k-1})\|^2$, $\theta_k = \nabla f(x_k)^T d_{k-1} / \|\nabla f(x_{k-1})\|^2$, and $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$. The MPRP method not only preserves the good properties of the PRP method but also possesses another nice property: it always generates descent directions for the objective function. This property is independent of the line search used. Under suitable conditions, the MPRP method with an Armijo-type line search is also globally convergent. The purpose of this paper is to develop an MPRP type method for the nonnegative constraints optimization problems. Combining the Zoutendijk feasible direction method with the MPRP method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the initial point is feasible, the method generates a feasible point sequence. We also carry out numerical experiments to test the proposed method and compare its performance with that of the Zoutendijk feasible direction method. The numerical results show that the method we propose outperforms the Zoutendijk feasible direction method.
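A minimal sketch of direction (9), under the same naming conventions as above. The extra term $-\theta_k y_{k-1}$ cancels the contribution of $\beta_k^{PRP} d_{k-1}$ in the inner product with $\nabla f(x_k)$, which is exactly the line-search-independent descent property mentioned above:

```python
import numpy as np

def mprp_direction(g, g_prev, d_prev):
    """MPRP direction (9) of Zhang et al. [18].

    Satisfies g @ d == -||g||**2, i.e. d is always a descent
    direction, independently of the line search used."""
    y = g - g_prev
    denom = g_prev @ g_prev
    beta = (g @ y) / denom         # beta_k^PRP
    theta = (g @ d_prev) / denom   # theta_k
    return -g + beta * d_prev - theta * y
```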

    2. Algorithm

Consider the following nonnegative constraints optimization problem:

$$\min \ f(x) \quad \text{s.t.} \ x \ge 0, \qquad (10)$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Let $x_k \ge 0$ be the current iteration. Define the index sets

$$I = I(x_k) = \{ i \mid x_k(i) = 0 \}, \qquad J = \{1, 2, \ldots, n\} \setminus I, \qquad (11)$$

where $x_k(i)$ is the $i$th component of $x_k$. In fact, the index set $I$ is the active set of problem (10) at $x_k$.
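In floating-point arithmetic, the active set in (11) is usually detected with a small tolerance rather than exact equality; the sketch below makes that assumption (the tolerance `eps` is our choice, not the paper's):

```python
import numpy as np

def active_sets(x, eps=1e-12):
    """Index sets (11): I = {i | x(i) = 0} and its complement J.

    Assumes x is feasible for (10), i.e. x >= 0 componentwise."""
    I = np.flatnonzero(x <= eps)  # active (binding) nonnegativity constraints
    J = np.flatnonzero(x > eps)   # inactive components
    return I, J
```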

The purpose of this paper is to develop a conjugate gradient type method for problem (10). Since the iterative sequence is a feasible point sequence, the search directions should be feasible descent directions. Let $x_k \ge 0$ be the current iteration. By the definition of feasible direction [19], we have that $d$ is a feasible direction of (10) at $x_k$ if and only if $d_I \ge 0$. Similar to the Zoutendijk feasible direction method, we consider the following problem:

$$\min \ \nabla f(x_k)^T d \quad \text{s.t.} \ d_I \ge 0, \ \|d\| \le 1. \qquad (12)$$
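Reading the norm in (12) as the Euclidean norm, the subproblem admits a simple closed-form solution: zero out the components of $\nabla f(x_k)$ on $I$ that the constraint $d_I \ge 0$ blocks, then normalize the negative of what remains. This is our own derivation from the KKT system (14) below, offered as a hedged sketch rather than the paper's construction:

```python
import numpy as np

def solve_subproblem(g, I):
    """One solution of (12): min g @ d  s.t.  d[I] >= 0, ||d|| <= 1.

    g is grad f(x_k); I is the active index set from (11).
    Assumes the Euclidean norm in the constraint."""
    g_hat = g.copy()
    g_hat[I] = np.minimum(g[I], 0.0)   # drop components blocked by d_I >= 0
    norm = np.linalg.norm(g_hat)
    if norm == 0.0:                    # no feasible descent direction exists
        return np.zeros_like(g)
    return -g_hat / norm               # feasible, with g @ d = -norm < 0
```

If `g_hat` vanishes, $d = 0$ solves (12), which by Lemma 1 below means $x_k$ is a KKT point of (10).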

Next, we show that, if $x_k$ is not a KKT point of (10), the solution of problem (12) is a feasible descent direction of $f$ at $x_k$.

Lemma 1. Let $x_k \ge 0$ and let $d_k$ be a solution of problem (12); then $\nabla f(x_k)^T d_k \le 0$. Moreover, $\nabla f(x_k)^T d_k = 0$ if and only if $x_k$ is a KKT point of problem (10).

Proof. Since $d = 0$ is a feasible point of problem (12), there must be $\nabla f(x_k)^T d_k \le 0$. Consequently, if $\nabla f(x_k)^T d_k \ne 0$, there must be $\nabla f(x_k)^T d_k < 0$. This implies that the direction $d_k$ is a feasible descent direction of $f$ at $x_k$.

We suppose that $\nabla f(x_k)^T d_k = 0$. Problem (12) is equivalent to the following problem:

$$\min \ \nabla f(x_k)^T d \quad \text{s.t.} \ d_I \ge 0, \ \|d\|^2 \le 1. \qquad (13)$$


Then there exist multipliers $\mu$ and $\lambda$ such that the following KKT conditions hold:

$$\begin{gathered} \nabla f(x_k) - \bar{\mu} + 2\lambda d_k = 0, \\ \mu \ge 0, \quad (d_k)_I \ge 0, \quad \mu^T (d_k)_I = 0, \\ \lambda \ge 0, \quad \|d_k\| \le 1, \quad \lambda \left( \|d_k\|^2 - 1 \right) = 0, \end{gathered} \qquad (14)$$

where $\bar{\mu} = (\mu, 0)$. Multiplying the first of these expressions by $d_k^T$ and using $\mu^T (d_k)_I = 0$, we obtain

$$\nabla f(x_k)^T d_k + 2\lambda \|d_k\|^2 = 0. \qquad (15)$$

Combining the assumption $\nabla f(x_k)^T d_k = 0$ with (15) and the complementarity condition $\lambda (\|d_k\|^2 - 1) = 0$, we find that $\lambda = 0$. Substituting it into the first expression of (14), we obtain

$$\nabla_I f(x_k) - \mu = 0, \qquad \nabla_J f(x_k) = 0. \qquad (16)$$

Let $\mu_i = 0$ for $i \in J$; then $\mu_i \ge 0$ for $i = 1, \ldots, n$. Moreover, we have

$$\nabla f(x_k) - \mu = 0, \quad \mu_i \ge 0, \quad x_k(i) \ge 0, \quad \mu_i x_k(i) = 0, \quad i = 1, \ldots, n. \qquad (17)$$

This implies that $x_k$ is a KKT point of problem (10).

On the other hand, we suppose that $x_k$ is a KKT point of problem (10). Then there exist $\mu_i$, $i = 1, \ldots, n$, such that the following KKT conditions hold:

$$\nabla f(x_k) - \mu = 0, \quad \mu_i \ge 0, \quad x_k(i) \ge 0, \quad \mu_i x_k(i) = 0, \quad i = 1, \ldots, n. \qquad (18)$$

From the complementarity condition in (18) and $x_k(i) > 0$ for $i \in J$, we get $\mu_J = 0$. Substituting it into the first of these expressions, we have $\nabla_J f(x_k) = 0$ and $\nabla_I f(x_k) = \mu_I \ge 0$, so that $\nabla f(x_k)^T d_k = \nabla_I f(x_k)^T (d_k)_I \ge 0$; together with $\nabla f(x_k)^T d_k \le 0$, this gives $\nabla f(x_k)^T d_k = 0$. Ho