
Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2013, Article ID 986317, 6 pages http://dx.doi.org/10.1155/2013/986317

Research Article A Conjugate Gradient Type Method for the Nonnegative Constraints Optimization Problems

Can Li

College of Mathematics, Honghe University, Mengzi 661199, China

Correspondence should be addressed to Can Li; canlymathe@163.com

Received 16 December 2012; Accepted 20 March 2013

Academic Editor: Theodore E. Simos

Copyright Β© 2013 Can Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We are concerned with the nonnegative constraints optimization problems. It is well known that conjugate gradient methods are efficient methods for solving large-scale unconstrained optimization problems due to their simplicity and low storage. Combining the modified Polak-Ribière-Polyak method proposed by Zhang, Zhou, and Li with the Zoutendijk feasible direction method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the current iterate is a feasible point, the direction generated by the proposed method is always a feasible descent direction at the current iterate. Under appropriate conditions, we show that the proposed method is globally convergent. We also present some numerical results to show the efficiency of the proposed method.

1. Introduction

Due to their simplicity and their low memory requirement, conjugate gradient methods play a very important role in solving unconstrained optimization problems, especially large-scale problems. Over the years, many variants of the conjugate gradient method have been proposed, and some are widely used in practice. The key features of the conjugate gradient methods are that they require no matrix storage and are faster than the steepest descent method.

The linear conjugate gradient method was proposed by Hestenes and Stiefel [1] in the 1950s as an iterative method for solving linear systems

$$Ax = b, \quad x \in \mathbb{R}^n, \tag{1}$$

where $A$ is an $n \times n$ symmetric positive definite matrix. Problem (1) can be stated equivalently as the following minimization problem:

$$\min \frac{1}{2} x^\top A x - b^\top x, \quad x \in \mathbb{R}^n. \tag{2}$$
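The equivalence of (1) and (2) can be made concrete with a minimal linear conjugate gradient solver. This is a standard textbook sketch, not code from this paper; the function name and tolerances are illustrative assumptions.

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A with the
    linear conjugate gradient method of Hestenes and Stiefel."""
    n = len(b)
    if max_iter is None:
        max_iter = n  # in exact arithmetic, CG converges in at most n steps
    x = x0.astype(float).copy()
    r = b - A @ x   # residual; also the negative gradient of (1/2)x^T A x - b^T x
    d = r.copy()    # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact minimizer along d
        x += alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # conjugacy-restoring scalar
        d = r_new + beta * d
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = linear_cg(A, b, np.zeros(2))
```

Because the residual of (1) is exactly the negative gradient of the quadratic in (2), each CG step is simultaneously a linear-solve update and a descent step for the quadratic.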

This equivalence allows us to interpret the linear conjugate gradient method either as an algorithm for solving linear systems or as a technique for minimizing convex quadratic functions. For any $x_0 \in \mathbb{R}^n$, the sequence $\{x_k\}$ generated by the linear conjugate gradient method converges to the solution $x^*$ of the linear system (1) in at most $n$ steps.

The first nonlinear conjugate gradient method was introduced by Fletcher and Reeves [2] in the 1960s. It is one of the earliest known techniques for solving large-scale nonlinear optimization problems

$$\min f(x), \quad x \in \mathbb{R}^n, \tag{3}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. The nonlinear conjugate gradient methods for solving (3) have the following form:

$$x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = \begin{cases} -\nabla f(x_k), & k = 0, \\ -\nabla f(x_k) + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{4}$$

where $\alpha_k$ is a steplength obtained by a line search and $\beta_k$ is a scalar which determines the particular conjugate gradient method. If we choose $f$ to be a strongly convex quadratic and $\alpha_k$ to be the exact minimizer, the nonlinear conjugate gradient method reduces to the linear conjugate gradient method.


Several famous formulae for $\beta_k$ are the Hestenes-Stiefel (HS) [1], Fletcher-Reeves (FR) [2], Polak-Ribière-Polyak (PRP) [3, 4], Conjugate-Descent (CD) [5], Liu-Storey (LS) [6], and Dai-Yuan (DY) [7] formulae, which are given by

$$\beta_k^{HS} = \frac{\nabla f(x_k)^\top y_{k-1}}{d_{k-1}^\top y_{k-1}}, \qquad \beta_k^{FR} = \frac{\|\nabla f(x_k)\|^2}{\|\nabla f(x_{k-1})\|^2}, \tag{5}$$

$$\beta_k^{PRP} = \frac{\nabla f(x_k)^\top y_{k-1}}{\|\nabla f(x_{k-1})\|^2}, \qquad \beta_k^{CD} = -\frac{\|\nabla f(x_k)\|^2}{d_{k-1}^\top \nabla f(x_{k-1})}, \tag{6}$$

$$\beta_k^{LS} = -\frac{\nabla f(x_k)^\top y_{k-1}}{d_{k-1}^\top \nabla f(x_{k-1})}, \qquad \beta_k^{DY} = \frac{\|\nabla f(x_k)\|^2}{d_{k-1}^\top y_{k-1}}, \tag{7}$$

where $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$ and $\|\cdot\|$ stands for the Euclidean norm of vectors. In this paper, we focus our attention on the Polak-Ribière-Polyak (PRP) method. The study of the PRP method has received much attention and has made good progress. The global convergence of the PRP method with exact line search was proved in [3] under a strong convexity assumption on $f$. However, for general nonlinear functions, an example given by Powell [8] shows that the PRP method may fail to be globally convergent even if the exact line search is used. Inspired by Powell's work, Gilbert and Nocedal [9] conducted an elegant analysis and showed that the PRP method is globally convergent if $\beta_k^{PRP}$ is restricted to be nonnegative and $\alpha_k$ is determined by a line search satisfying the sufficient descent condition $\nabla f(x_k)^\top d_k \le -c\,\|\nabla f(x_k)\|^2$ in addition to the standard Wolfe conditions. Other conjugate gradient methods and their global convergence results can be found in [10-15].
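As an illustration of restricting $\beta_k^{PRP}$ to be nonnegative, here is a minimal sketch of nonlinear CG with $\beta_k = \max\{\beta_k^{PRP}, 0\}$ and a simple Armijo backtracking line search. The function names and the line search are illustrative assumptions, not the Wolfe-based scheme analyzed by Gilbert and Nocedal.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Nonlinear CG with the PRP formula restricted to be nonnegative,
    using Armijo backtracking. A sketch for illustration only."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:
            d = -g  # safeguard: restart with steepest descent
        # Armijo backtracking line search along d
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = max(0.0, (g_new @ y) / (g @ g))  # beta = max(beta_PRP, 0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# simple test problem: minimize ||x - (1, 2)||^2
target = np.array([1.0, 2.0])
f = lambda x: np.sum((x - target) ** 2)
grad = lambda x: 2.0 * (x - target)
x_star = prp_plus_cg(f, grad, np.zeros(2))
```

The truncation $\max\{\beta_k^{PRP}, 0\}$ discards negative PRP values, which is exactly the modification whose global convergence is established in [9] under Wolfe-type line searches.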

Recently, Li and Wang [16] extended the modified Fletcher-Reeves (MFR) method proposed by Zhang et al. [17] for unconstrained optimization to the nonlinear equations

$$F(x) = 0, \tag{8}$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable, and proposed a descent derivative-free method for solving symmetric nonlinear equations. The direction generated by the method is a descent direction for the residual function. Under appropriate conditions, the method is globally convergent with a backtracking line search technique.

In this paper, we further study the conjugate gradient method. We focus our attention on the modified Polak-Ribière-Polyak (MPRP) method proposed by Zhang et al. [18]. The direction generated by the MPRP method is given by

$$d_k = \begin{cases} -g(x_k), & k = 0, \\ -g(x_k) + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, & k > 0, \end{cases} \tag{9}$$

where $g(x_k) = \nabla f(x_k)$, $\beta_k^{PRP} = g(x_k)^\top y_{k-1} / \|g(x_{k-1})\|^2$, $\theta_k = g(x_k)^\top d_{k-1} / \|g(x_{k-1})\|^2$, and $y_{k-1} = g(x_k) - g(x_{k-1})$. The MPRP method not only retains the good properties of the PRP method but also possesses another nice property: it always generates descent directions for the objective function. This property is independent of the line search used. Under suitable conditions, the MPRP method with an Armijo-type line search is also globally convergent. The purpose of this paper is to develop an MPRP type method for the nonnegative constraints optimization problems. Combining the Zoutendijk feasible direction method with the MPRP method, we propose a conjugate gradient type method for solving the nonnegative constraints optimization problems. If the initial point is feasible, the method generates a feasible point sequence. We also report numerical experiments that test the proposed method and compare its performance with the Zoutendijk feasible direction method. The numerical results show that the proposed method outperforms the Zoutendijk feasible direction method.
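The descent property of (9) can be checked numerically: by construction, $g(x_k)^\top d_k = -\|g(x_k)\|^2$ for every $k$, regardless of the line search. The following sketch implements the direction formula (9); the helper name is an assumption for illustration.

```python
import numpy as np

def mprp_direction(g_k, g_prev, d_prev, k):
    """Direction of the MPRP method (9):
    d_0 = -g_0; for k > 0,
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
    beta_k  = g_k^T y_{k-1} / ||g_{k-1}||^2,
    theta_k = g_k^T d_{k-1} / ||g_{k-1}||^2."""
    if k == 0:
        return -g_k
    y = g_k - g_prev
    denom = g_prev @ g_prev
    beta = (g_k @ y) / denom
    theta = (g_k @ d_prev) / denom
    return -g_k + beta * d_prev - theta * y

# the identity g_k^T d_k = -||g_k||^2 holds for arbitrary inputs:
rng = np.random.default_rng(0)
g_prev, d_prev = rng.normal(size=5), rng.normal(size=5)
g_k = rng.normal(size=5)
d_k = mprp_direction(g_k, g_prev, d_prev, k=1)
```

The $-\theta_k y_{k-1}$ correction is what forces the two remaining inner products to cancel exactly, giving the line-search-independent descent property.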

2. Algorithm

Consider the following nonnegative constraints optimization problems:

$$\min f(x) \quad \text{s.t.} \quad x \ge 0, \tag{10}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Let $x_k \ge 0$ be the current iterate. Define the index sets

$$I_k = I(x_k) = \{\, i \mid x_k(i) = 0 \,\}, \qquad J_k = \{1, 2, \ldots, n\} \setminus I_k, \tag{11}$$

where $x_k(i)$ is the $i$th component of $x_k$. In fact the index set $I_k$ is the active set of problem (10) at $x_k$.

The purpose of this paper is to develop a conjugate gradient type method for problem (10). Since the iterative sequence is a feasible point sequence, the search directions should be feasible descent directions. Let $x_k \ge 0$ be the current iterate. By the definition of feasible direction [19], $d \in \mathbb{R}^n$ is a feasible direction of (10) at $x_k$ if and only if $d_{I_k} \ge 0$. Similar to the Zoutendijk feasible direction method, we consider the following problem:

$$\min \nabla f(x_k)^\top d$$
