
Research Article

A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

Zhou Sheng (1) and Dan Luo (2)

(1) Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, Jiangsu, China
(2) School of Mathematics and Statistics, Baise University, Baise 533000, Guangxi, China

Correspondence should be addressed to Dan Luo; bsxyldan@163.com

Received 13 July 2020; Accepted 10 August 2020; Published 24 August 2020

Guest Editor: Maojun Zhang

Copyright © 2020 Zhou Sheng and Dan Luo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, a Cauchy point direction trust region algorithm is presented to solve nonlinear equations. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction, and it enjoys the sufficient descent property and the automatic trust region property. The global convergence of the proposed algorithm is proven under some conditions. The preliminary numerical results demonstrate that the proposed algorithm is promising and has better convergence behavior than two other existing algorithms for solving large-scale nonlinear equations.

1. Introduction

The system of nonlinear equations is one of the most important mathematical models, with extensive applications in chemical equilibrium problems [1], the problem of hydraulic circuits of power plants [2], L-shaped beam structures [3], and image restoration problems [4]. Some $\ell_1$-norm regularized optimization problems in compressive sensing [5] are also obtained by reformulating systems of nonlinear equations. In this paper, we consider the following nonlinear system of equations:

$$F(x) = 0, \qquad (1)$$

where $F: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. Let

$$f(x) = \frac{1}{2}\|F(x)\|^2, \qquad (2)$$

where $f: \mathbb{R}^n \to \mathbb{R}$. It is clear that the nonlinear system of equations (1) is equivalent to the following unconstrained optimization problem:

$$\min_{x \in \mathbb{R}^n} f(x). \qquad (3)$$
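For later use, note the standard identity relating (1)–(3); it is not stated explicitly at this point of the paper, but it underlies the stopping test in Algorithm 1 below:

$$\nabla f(x) = J(x)^T F(x), \qquad J(x) := \nabla F(x),$$

so $\|J_k^T F_k\| = \|\nabla f(x_k)\|$, and the termination condition $\|J_k^T F_k\| \le \epsilon$ measures stationarity of (3).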

Many studies have proposed a variety of numerical algorithms to solve nonlinear systems of equations, such as the conjugate gradient algorithm [6–9], the Levenberg–Marquardt algorithm [10–13], and the trust region algorithm [14–21]. The trust region method plays an important role in the area of nonlinear optimization. It was proposed by Powell in [22] to solve nonlinear optimization problems by using an iterative structure. Recently, some trust region-based methods have been developed to solve nonlinear equations and have shown very promising results. For instance, Fan and Pan [16] discussed a regularized trust region subproblem with an adaptive trust region radius, which possesses quadratic convergence under the local error bound condition. Yuan et al. [17] proposed a trust region-based algorithm, in which the Jacobian is updated by the BFGS formula, for solving nonlinear equations. The algorithm showed global convergence with a quadratic convergence rate. This algorithm does not compute the Jacobian matrix in each iteration, which significantly reduces the computational burden. Esmaeili and Kimiaei [19] studied an adaptive trust region algorithm with a moderate radius size by employing the nonmonotone technique, and global and quadratic local convergence was given. However, the maximal dimension of the test problems was only 300. Qi et al. [23] proposed an asymptotic Newton search direction method, which is an adequate combination of the projected gradient direction and the projected trust region direction, for solving box-constrained nonsmooth equations and proved global and quadratic local convergence. Motivated by the search direction of Qi et al., Yuan et al. [24] used the idea of this search direction to solve nonsmooth unconstrained optimization problems and also showed global convergence.

Inspired by the fact that the search direction of Qi et al. [23] speeds up the local convergence rate compared to the gradient direction, we propose an efficient Cauchy point direction trust region algorithm to solve (1) in this paper. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction, with the automatic trust region property and the sufficient descent property, which can improve the numerical performance. We also show the global convergence of the proposed algorithm. Numerical results indicate that this algorithm has better convergence behavior than the classical trust region algorithm and the adaptive regularized trust region algorithm. However, the local convergence rate is not given.

The remainder of this paper is organized as follows. In Section 2, we introduce our motivation and a new search direction. In Section 3, we give a Cauchy point direction trust region algorithm for solving problem (1). The global convergence of the proposed algorithm is proven in Section 4. Preliminary numerical results are reported in Section 5. We conclude our paper in Section 6.

Throughout this paper, unless otherwise specified, $\|\cdot\|$ denotes the Euclidean norm of vectors or matrices.

2. Motivation and Search Direction

In this section, motivated by the definition of the Cauchy point, which is parallel to the steepest descent direction, we present a search direction based on the Cauchy point. Let $\Delta_k$ be the trust region radius with $\Delta_k > 0$.

2.1. Cauchy Point Direction. Noting that $J_k^T J_k$ is a positive semidefinite matrix, we define the Cauchy point direction at the current point $x_k$ as

$$d_k^C(\Delta_k) := -\frac{c_k \Delta_k}{\big\|J_k^T F_k\big\|}\, J_k^T F_k, \qquad (4)$$

where

$$c_k = \begin{cases} 1, & \text{if } F_k^T \big(J_k J_k^T\big)^2 F_k = 0, \\[6pt] \min\left\{1,\ \dfrac{\big\|J_k^T F_k\big\|^3}{\Delta_k\, F_k^T \big(J_k J_k^T\big)^2 F_k}\right\}, & \text{if } F_k^T \big(J_k J_k^T\big)^2 F_k > 0, \end{cases} \qquad (5)$$

$F_k = F(x_k)$, and $J_k = \nabla F(x_k)$ is the Jacobian of $F(x)$ or its approximation.

The following proposition gives a nice property of the Cauchy point direction $d_k^C$.

Proposition 1 (automatic trust region property). Suppose that $d_k^C$ is defined by (4); then $\big\|d_k^C(\Delta_k)\big\| \le \Delta_k$.

Proof. The result of this proposition follows immediately from $0 < c_k \le 1$. This completes the proof.

From the definition of $d_k^C$ and the above proposition, we can establish both the descent property and the automatic trust region property of the Cauchy point direction $d_k^C$.
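As an illustration, a minimal NumPy sketch of (4) and (5) might look as follows; the function name and the dense-Jacobian representation are our own choices, not part of the paper.

```python
import numpy as np

def cauchy_direction(J, F, delta):
    """Sketch of the Cauchy point direction (4)-(5)."""
    g = J.T @ F                              # gradient of f(x) = 0.5*||F(x)||^2
    g_norm = np.linalg.norm(g)
    if g_norm == 0.0:                        # x_k is already a stationary point
        return np.zeros_like(g)
    curv = float(g @ (J.T @ (J @ g)))        # F^T (J J^T)^2 F = g^T (J^T J) g = ||J g||^2
    if curv <= 0.0:                          # first case in (5)
        c = 1.0
    else:                                    # second case in (5)
        c = min(1.0, g_norm**3 / (delta * curv))
    return -(c * delta / g_norm) * g         # ||d_C|| = c * delta <= delta (Proposition 1)
```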

2.2. Trust Region Direction. Let the trial step $d_k^{TR}(\Delta_k)$ be a solution of the following trust region subproblem:

$$\min_{d \in \mathbb{R}^n} \; m_k(d) := \frac{1}{2}\|F_k + J_k d\|^2, \quad \text{s.t. } \|d\| \le \Delta_k, \qquad (6)$$

where $\Delta_k$ is the same as in the above definition. From the famous result of Powell [22], we give the following lemma and omit its proof.

Lemma 1. Let $d_k^{TR}(\Delta_k)$ be a solution of the trust region subproblem (6); then

$$f_k - m_k\big(d_k^{TR}(\Delta_k)\big) = \frac{1}{2}\|F_k\|^2 - \frac{1}{2}\big\|F_k + J_k d_k^{TR}(\Delta_k)\big\|^2 \ge \frac{1}{2}\big\|J_k^T F_k\big\| \min\left\{\Delta_k,\ \frac{\big\|J_k^T F_k\big\|}{\big\|J_k^T J_k\big\|}\right\}. \qquad (7)$$
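Section 5 states that the Steihaug method [27] is used to compute an approximate solution of (6). For concreteness, a minimal truncated-CG sketch along those lines is given below; the function names and dense matrix-vector products are our own choices, not the authors' code.

```python
import numpy as np

def to_boundary(d, p, delta):
    """Largest tau >= 0 with ||d + tau*p|| = delta."""
    a, b, c = p @ p, 2.0 * (d @ p), d @ d - delta**2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def steihaug_tr(J, F, delta, tol=1e-8, max_iter=None):
    """Sketch of a Steihaug truncated-CG step for subproblem (6):
    minimize g^T d + 0.5 d^T (J^T J) d with g = J^T F, subject to ||d|| <= delta."""
    n = J.shape[1]
    max_iter = max_iter or 2 * n
    d = np.zeros(n)
    g = J.T @ F                     # gradient of the model at d = 0
    r = g.copy()                    # residual of the CG system (J^T J) d = -g
    p = -r
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Bp = J.T @ (J @ p)          # Hessian-vector product (J^T J) p
        pBp = p @ Bp
        if pBp <= 0.0:              # zero/negative curvature: go to the boundary
            return d + to_boundary(d, p, delta) * p
        alpha = (r @ r) / pBp
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:   # step leaves the trust region
            return d + to_boundary(d, p, delta) * p
        r_next = r + alpha * Bp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        d, r = d_next, r_next
    return d
```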

As mentioned by Qi et al. [23] and Yuan et al. [24], when $x_k$ is far from the optimal solution, the trust region direction $d_k^{TR}(\Delta_k)$ may not possess the descent property. However, it follows from the definition of the Cauchy point direction that $d_k^C(\Delta_k)$ always possesses the descent property. Accordingly, if the search direction is defined by a suitable combination of $d_k^C$ and $d_k^{TR}$, it will be a descent direction. Hence, we give a new search direction by using a convex combination form in the next subsection.

2.3. Search Direction. Let $\lambda_k$ be a solution of the following one-dimensional minimization problem:

$$\min_{\lambda \in [0,1]} \; h_k(\lambda) := \frac{1}{2}\Big\|F_k + J_k\big(\lambda d_k^C(\Delta_k) + (1 - \lambda) d_k^{TR}(\Delta_k)\big)\Big\|^2, \qquad (8)$$

and the search direction is defined as

$$d_k(\Delta_k) = \lambda_k d_k^C(\Delta_k) + (1 - \lambda_k) d_k^{TR}(\Delta_k). \qquad (9)$$

For the above one-dimensional minimization problem, we can use the golden section method, the Fibonacci method, the parabolic method [25], etc. The idea of the search direction follows from Qi et al. [23]. However, in this paper we use the Cauchy point direction, which possesses the automatic trust region property, and obtain the optimal $\lambda_k$ by applying the golden section method to problem (8).
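A short remark of ours, not from the original text: writing $w_k := F_k + J_k d_k^{TR}(\Delta_k)$ and $v_k := J_k\big(d_k^C(\Delta_k) - d_k^{TR}(\Delta_k)\big)$, expansion of (8) gives

$$h_k(\lambda) = \frac{1}{2}\|w_k + \lambda v_k\|^2 = \frac{1}{2}\|w_k\|^2 + \lambda\, w_k^T v_k + \frac{\lambda^2}{2}\|v_k\|^2,$$

a convex quadratic in $\lambda$. Hence, when $v_k \ne 0$, the minimizer over $[0,1]$ is the unconstrained minimizer $-w_k^T v_k/\|v_k\|^2$ projected onto $[0,1]$, and the golden section iteration of Algorithm 2 converges to it.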


3. New Algorithm

In this section, let $d_k(\Delta_k)$ be the search direction given by (9). Then the actual reduction is defined as

$$\mathrm{Ared}_k := f_k - f\big(x_k + d_k(\Delta_k)\big), \qquad (10)$$

and the predicted reduction is

$$\mathrm{Pred}_k := m_k(0) - m_k\big(d_k(\Delta_k)\big) = \frac{1}{2}\|F_k\|^2 - \frac{1}{2}\big\|F_k + J_k d_k(\Delta_k)\big\|^2. \qquad (11)$$

Thus, $r_k$ is defined as the ratio between $\mathrm{Ared}_k$ and $\mathrm{Pred}_k$:

$$r_k := \frac{\mathrm{Ared}_k}{\mathrm{Pred}_k}. \qquad (12)$$
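In code, the ratio test (10)–(12) is only a few lines; the hedged sketch below assumes a residual map F, the current Jacobian J, and a trial step d, with names of our own choosing.

```python
import numpy as np

def reduction_ratio(F, x, J, d):
    """Sketch of (10)-(12): actual vs. predicted reduction of f(x) = 0.5*||F(x)||^2."""
    Fk = F(x)
    fk = 0.5 * Fk @ Fk
    ared = fk - 0.5 * np.linalg.norm(F(x + d))**2      # (10)
    pred = fk - 0.5 * np.linalg.norm(Fk + J @ d)**2    # (11)
    return ared / pred, ared, pred                      # (12)
```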

We now list the detailed steps of the Cauchy point direction trust region algorithm in Algorithm 1.

Algorithm 1. Cauchy point direction trust region algorithm for solving (3).

Step 1. Given an initial point $x_0 \in \mathbb{R}^n$ and parameters $0 \le \mu_1 < \mu_2 < 1$, $0 < \eta_1 < 1 < \eta_2$, $0 < \delta < 1$, $0 \le \epsilon \ll 1$, and $\Delta_0 > 0$. Let $k := 0$.

Step 2. If the termination condition $\|J_k^T F_k\| \le \epsilon$ is satisfied at the iteration point $x_k$, then stop. Otherwise, go to Step 3.

Step 3. Solve the trust region subproblem (6) to obtain $d_k^{TR}(\Delta_k)$, and compute $c_k$ and $d_k^C(\Delta_k)$ from (5) and (4), respectively.

Step 4. Solve the one-dimensional minimization problem (8) to obtain $\lambda_k$ by using Algorithm 2, and calculate the search direction from (9).

Step 5. Compute $\mathrm{Ared}_k$, $\mathrm{Pred}_k$, and $r_k$. If

$$\begin{cases} f_k - \dfrac{1}{2}\big\|F_k + J_k d_k\big\|^2 \ge -\delta F_k^T J_k d_k^C(\Delta_k), \\[6pt] r_k \ge \mu_1, \end{cases} \qquad (13)$$

are satisfied, let $x_{k+1} := x_k + d_k(\Delta_k)$ and update $\Delta_k$ by

$$\Delta_{k+1} := \begin{cases} \eta_1 \Delta_k, & \text{if } r_k < \mu_1, \\ \Delta_k, & \text{if } \mu_1 \le r_k < \mu_2, \\ \eta_2 \Delta_k, & \text{if } r_k \ge \mu_2. \end{cases} \qquad (14)$$

Step 6. If $r_k < \mu_1$, let $x_{k+1} := x_k$ and return to Step 3. Otherwise, let $k := k + 1$ and return to Step 2.
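To make the loop structure concrete, a minimal sketch of Algorithm 1 is given below. It reuses the cauchy_direction and steihaug_tr sketches above and a golden_section_min routine as sketched after Algorithm 2; the names, the handling of the case where the first condition of (13) fails, and other details are our simplifications, not the authors' MATLAB code.

```python
import numpy as np

def ctr_solve(F, Jac, x0, delta0=1.0, mu1=0.1, mu2=0.9, eta1=0.25, eta2=3.0,
              delta_coef=0.9, eps=1e-5, max_iter=2000):
    """Sketch of Algorithm 1 (CTR) with the parameter values reported in Section 5."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        Fk, Jk = F(x), Jac(x)
        g = Jk.T @ Fk
        if np.linalg.norm(g) <= eps:                           # Step 2: stopping test
            break
        while True:                                            # inner loop: Steps 3-6
            d_tr = steihaug_tr(Jk, Fk, delta)                  # Step 3: trust region direction
            d_c = cauchy_direction(Jk, Fk, delta)              #         Cauchy point direction (4)-(5)
            h = lambda lam: 0.5 * np.linalg.norm(
                Fk + Jk @ (lam * d_c + (1.0 - lam) * d_tr))**2
            lam = golden_section_min(h)                        # Step 4: solve (8) by Algorithm 2
            d = lam * d_c + (1.0 - lam) * d_tr                 #         search direction (9)
            fk = 0.5 * Fk @ Fk
            pred = fk - 0.5 * np.linalg.norm(Fk + Jk @ d)**2   # (11)
            ared = fk - 0.5 * np.linalg.norm(F(x + d))**2      # (10)
            rk = ared / pred                                   # (12)
            if rk >= mu1 and pred >= -delta_coef * (g @ d_c):  # acceptance test (13)
                x = x + d                                      # Step 5: accept the trial point
                if rk >= mu2:
                    delta *= eta2                              # (14): enlarge the radius
                break
            delta *= eta1                                      # Step 6 / (14): shrink and retry
    return x
```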

In Step 4 of Algorithm 1, we use the golden section algorithm, as given in Algorithm 2.

Algorithm 2. Golden section algorithm [25] for solving problem (8).

Step 1. For the given initial parameters $a_0 = 0$, $b_0 = 1$, and $\epsilon' > 0$, compute

$$p_0 := a_0 + 0.382\,(b_0 - a_0), \qquad q_0 := a_0 + 0.618\,(b_0 - a_0), \qquad (15)$$

and $h_k(p_0)$, $h_k(q_0)$. Let $k := 0$.

Step 2. If $h_k(p_k) \le h_k(q_k)$, go to Step 3. Otherwise, go to Step 4.

Step 3. If the termination condition $|q_k - a_k| \le \epsilon'$ is satisfied, stop. Otherwise, let

$$a_{k+1} := a_k, \quad b_{k+1} := q_k, \quad h_k(q_{k+1}) := h_k(p_k), \quad q_{k+1} := p_k, \quad p_{k+1} := a_{k+1} + 0.382\,(b_{k+1} - a_{k+1}), \qquad (16)$$

compute $h_k(p_{k+1})$, and go to Step 5.

Step 4. If the termination condition $|b_k - p_k| \le \epsilon'$ is satisfied, stop. Otherwise, let

$$a_{k+1} := p_k, \quad b_{k+1} := b_k, \quad h_k(p_{k+1}) := h_k(q_k), \quad p_{k+1} := q_k, \quad q_{k+1} := a_{k+1} + 0.618\,(b_{k+1} - a_{k+1}), \qquad (17)$$

compute $h_k(q_{k+1})$, and go to Step 5.

Step 5. Let $k := k + 1$ and return to Step 2.
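A compact implementation of Algorithm 2 might look as follows; the function name is ours, and the stopping rule is slightly simplified to an interval-width test rather than the per-branch tests of Steps 3 and 4.

```python
def golden_section_min(h, a=0.0, b=1.0, tol=1e-6):
    """Sketch of Algorithm 2: golden section search for the minimizer of h on [a, b]."""
    p = a + 0.382 * (b - a)
    q = a + 0.618 * (b - a)
    hp, hq = h(p), h(q)
    while (b - a) > tol:
        if hp <= hq:
            b, q, hq = q, p, hp          # keep [a, q]; reuse h(p) as the new h(q), cf. (16)
            p = a + 0.382 * (b - a)
            hp = h(p)
        else:
            a, p, hp = p, q, hq          # keep [p, b]; reuse h(q) as the new h(p), cf. (17)
            q = a + 0.618 * (b - a)
            hq = h(q)
    return 0.5 * (a + b)
```

With $h$ defined as in (8), $\lambda_k \approx$ golden_section_min(h) with tol $= 10^{-6}$, matching the choice of $\epsilon'$ reported in Section 5.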

4. Global Convergence

In this section, the global convergence of Algorithm 1 is proven. Firstly, it is easy to obtain the following lemma from Lemma 1.

Lemma 2. Let $d_k(\Delta_k)$ be defined by (9), where $\lambda_k$ is a solution of problem (8). Then

$$\mathrm{Pred}_k \ge \frac{1}{2}\big\|J_k^T F_k\big\| \min\left\{\Delta_k,\ \frac{\big\|J_k^T F_k\big\|}{\big\|J_k^T J_k\big\|}\right\}. \qquad (18)$$

Proof. From the definition of $d_k(\Delta_k)$ and the optimality of $\lambda_k$ in (8), we have $m_k(d_k(\Delta_k)) = h_k(\lambda_k) \le h_k(0) = m_k(d_k^{TR}(\Delta_k))$. It follows from the definition of $\mathrm{Pred}_k$ and Lemma 1 that $\mathrm{Pred}_k \ge f_k - m_k(d_k^{TR}(\Delta_k))$ holds. Thus, (18) holds, and therefore the proof is completed.

The next lemma indicates an important property of the Cauchy point direction $d_k^C$, namely the sufficient descent property.

Lemma 3. $\big(J_k^T F_k\big)^T d_k^C(\Delta_k) = -c_k \Delta_k \big\|J_k^T F_k\big\|$ holds for any $\Delta_k > 0$.

Proof. For any $\Delta_k$, from (4) we have

$$\big(J_k^T F_k\big)^T d_k^C(\Delta_k) = F_k^T J_k d_k^C(\Delta_k) = -\frac{c_k \Delta_k}{\big\|J_k^T F_k\big\|}\, F_k^T J_k J_k^T F_k = -c_k \Delta_k \big\|J_k^T F_k\big\|, \qquad (19)$$

and noting that $0 < c_k \le 1$, thus

$$\big(J_k^T F_k\big)^T d_k^C(\Delta_k) = -c_k \Delta_k \big\|J_k^T F_k\big\| < 0. \qquad (20)$$


This completes the proof.

To derive the global convergence of Algorithm 1, the following assumption is considered.

Assumption 1. The Jacobian $J(x)$ of $F(x)$ is bounded; i.e., there exists a positive constant $M > 0$ such that $\|J(x)\| \le M$ for any $x \in \mathbb{R}^n$.

In the sequel, we show that Steps 3–6 of Algorithm 1 do not cycle in the inner loop infinitely.

Lemma 4. Suppose that Assumption 1 holds, and let $\{x_k\}$ be generated by Algorithm 1. If $x_k$ is not a stationary point of problem (3), then Algorithm 1 is well defined; i.e., the inner cycle of Algorithm 1 is left after a finite number of internal iterations.

Proof. Since $x_k$ is not a stationary point, there exists a positive constant $\epsilon_0 > 0$ such that

$$\big\|J_k^T F_k\big\| \ge \epsilon_0, \quad \forall k. \qquad (21)$$

Since $0 < c_k \le 1$ and Assumption 1 holds, there exists a positive constant $\alpha > 0$ such that

$$\big\|J_k d_k^C\big\| \le c_k \Delta_k \|J_k\| \le \alpha \Delta_k. \qquad (22)$$

Letting $0 < \Delta_k \le 2\alpha^{-2}(1-\delta)\epsilon_0 c_k$, then from (4) and (22) we obtain

$$\begin{aligned}
f_k - \frac{1}{2}\big\|F_k + J_k d_k(\Delta_k)\big\|^2 &\ge f_k - \frac{1}{2}\big\|F_k + J_k d_k^C(\Delta_k)\big\|^2 \\
&= -F_k^T J_k d_k^C(\Delta_k) - \frac{1}{2}\big\|J_k d_k^C(\Delta_k)\big\|^2 \\
&= -\delta F_k^T J_k d_k^C(\Delta_k) - (1-\delta) F_k^T J_k d_k^C(\Delta_k) - \frac{1}{2}\big\|J_k d_k^C(\Delta_k)\big\|^2 \\
&\ge -\delta F_k^T J_k d_k^C(\Delta_k) + (1-\delta) c_k \Delta_k \big\|J_k^T F_k\big\| - \frac{1}{2}\alpha^2 \Delta_k^2,
\end{aligned} \qquad (23)$$

and noting (21), we have $(1-\delta) c_k \Delta_k \big\|J_k^T F_k\big\| - \frac{1}{2}\alpha^2 \Delta_k^2 \ge (1-\delta) c_k \Delta_k \epsilon_0 - \frac{1}{2}\alpha^2 \Delta_k^2 \ge 0$, which implies

$$f_k - \frac{1}{2}\big\|F_k + J_k d_k(\Delta_k)\big\|^2 \ge -\delta F_k^T J_k d_k^C(\Delta_k), \qquad (24)$$

and therefore the first condition in (13) holds. Next, we prove that the condition $r_k \ge \mu_1$ in (13) is also eventually satisfied. By contradiction, assume that Steps 3–6 of Algorithm 1 cycle in the inner loop infinitely; then $\Delta_k \to 0$.

From the definition of $d_k^C(\Delta_k)$, we get

$$\frac{1}{2}\big\|F_k + J_k d_k^C(\Delta_k)\big\|^2 = \frac{1}{2}\|F_k\|^2 + F_k^T J_k d_k^C(\Delta_k) + \frac{1}{2}\big\|J_k d_k^C(\Delta_k)\big\|^2 \le \frac{1}{2}\|F_k\|^2 - c_k \Delta_k \big\|J_k^T F_k\big\| + \frac{1}{2}\alpha^2 \Delta_k^2. \qquad (25)$$

It follows from (13) and Lemma 3 that there exists a positive constant $\beta > 0$ such that

$$f_k - \frac{1}{2}\big\|F_k + J_k d_k(\Delta_k)\big\|^2 \ge -\delta F_k^T J_k d_k^C(\Delta_k) = \delta c_k \Delta_k \big\|J_k^T F_k\big\| \ge \beta \Delta_k. \qquad (26)$$

By the definition of $d_k^{TR}(\Delta_k)$, we have $\big\|d_k^{TR}\big\| \le \Delta_k$; noting Proposition 1 and $0 < c_k \le 1$, we further get

$$\big\|d_k(\Delta_k)\big\| \le \Delta_k. \qquad (27)$$

Thus, from (26) and (27), we have

$$\begin{aligned}
\big|r_k - 1\big| &= \left|\frac{\mathrm{Ared}_k - \mathrm{Pred}_k}{\mathrm{Pred}_k}\right| = \left|\frac{\frac{1}{2}\big\|F_k + J_k d_k(\Delta_k)\big\|^2 - \frac{1}{2}\Big(\big\|F_k + J_k d_k(\Delta_k)\big\| + O\big(\|d_k(\Delta_k)\|^2\big)\Big)^2}{\mathrm{Pred}_k}\right| \\
&= \frac{\big\|F_k + J_k d_k(\Delta_k)\big\|\, O\big(\|d_k(\Delta_k)\|^2\big) + O\big(\|d_k(\Delta_k)\|^4\big)}{f_k - \frac{1}{2}\big\|F_k + J_k d_k\big\|^2} \le \frac{O\big(\|d_k(\Delta_k)\|^2\big)}{\beta \Delta_k} \le \frac{O\big(\Delta_k^2\big)}{\beta \Delta_k} \to 0,
\end{aligned} \qquad (28)$$

and therefore $r_k \to 1$, which means that $r_k \ge \mu_2$ when $\Delta_k$ is sufficiently small. However, the updating rule (14) of $\Delta_k$ in Step 5 of Algorithm 1 then contradicts the fact that $\Delta_k \to 0$. Hence, the cycling between Step 3 and Step 6 terminates finitely. This completes the proof.

We can obtain the global convergence of Algorithm 1 based on the above lemmas.

Theorem 1. Suppose that Assumption 1 holds and $\{x_k\}$ is generated by Algorithm 1; then

$$\liminf_{k \to \infty} \big\|J_k^T F_k\big\| = 0. \qquad (29)$$

Proof. Suppose that the conclusion is not true; then there exists a positive constant $\epsilon_1 > 0$ such that, for any $k$,

$$\big\|J_k^T F_k\big\| \ge \epsilon_1. \qquad (30)$$

From Lemma 2, Assumption 1, and the above inequality, we have

$$\mathrm{Pred}_k \ge \frac{1}{2}\big\|J_k^T F_k\big\| \min\left\{\Delta_k,\ \frac{\big\|J_k^T F_k\big\|}{\big\|J_k^T J_k\big\|}\right\} \ge \frac{1}{2}\epsilon_1 \min\left\{\Delta_k,\ \frac{\epsilon_1}{M^2}\right\}. \qquad (31)$$

Noting the updating rule of $\Delta_k$, $r_k \ge \mu_1$, (31), and Lemma 4, we deduce that $\Delta_k$ remains bounded below by some $\Delta > 0$, and hence

$$+\infty \ge \sum_{k=1}^{\infty} \left(\frac{1}{2}\|F_k\|^2 - \frac{1}{2}\|F_{k+1}\|^2\right) \ge \sum_{k=1}^{\infty} \mu_1 \mathrm{Pred}_k \ge \sum_{k=1}^{\infty} \frac{1}{2}\mu_1 \epsilon_1 \min\left\{\Delta_k,\ \frac{\epsilon_1}{M^2}\right\} \ge \sum_{k=1}^{\infty} \frac{1}{2}\mu_1 \epsilon_1 \min\left\{\Delta,\ \frac{\epsilon_1}{M^2}\right\} = +\infty, \qquad (32)$$

which yields a contradiction. This means that (30) is not true; i.e., the result (29) of the theorem holds, and therefore the proof is completed.

5. Numerical Results

In this section, the numerical results of the proposed algorithm for solving problem (1) are reported. We denote the proposed Algorithm 1 by CTR and compare it with some existing algorithms. The adaptive trust region algorithm by Fan and Pan [16] and the classical trust region algorithm [26] are denoted by ATR and TTR, respectively. In the numerical experiments, all parameters of Algorithm 1 are chosen as follows: $\mu_1 = 0.1$, $\mu_2 = 0.9$, $\eta_1 = 0.25$, $\eta_2 = 3$, $\epsilon = 10^{-5}$, $c_0 = \Delta_0 = 1$, and $\delta = 0.9$. We choose $\epsilon' = 10^{-6}$ in Algorithm 2. The Steihaug method [27] is employed to obtain the approximate trial step $d_k^{TR}$ for solving the trust region subproblem (6). We terminate the algorithms if the number of iterations exceeds 2000.

All codes were implemented in MATLAB R2010b. The numerical experiments were performed on a PC with an Intel Pentium (R) Dual-Core CPU at 3.20 GHz and 2.00 GB of RAM, running the Windows 7 operating system.

We list the test problems in Table 1; they can also be found in [28]. In Tables 1–3, "No." denotes the number of the test problem, "Problem name" denotes the name of the test problem, "$x_0$" denotes the initial point, "$n$" denotes the dimension of the problem, "NI" is the total number of iterations, "NF" is the number of function evaluations, "$F_k$" refers to the optimal function value, and "$\lambda_{\text{mean}}$" denotes the mean value of $\lambda$. It is worth mentioning that "$*$" indicates that the algorithm stopped because the number of iterations exceeded the maximum while the termination condition had not yet been satisfied.

It is shown from Tables 2 and 3 that the proposed algorithm is efficient for solving these test problems and is competitive with the other two algorithms in NI and NF for most problems.

Table 1: Test problems.

No. | Problem name | $x_0$
1 | Exponential function 1 | $(n/(n-1), \ldots, n/(n-1))$
2 | Exponential function 2 | $(1/n^2, \ldots, 1/n^2)$
3 | Extended Rosenbrock function | $(5, 1, \ldots, 5, 1)$
4 | Trigonometric function | $(101/(100n), \ldots, 101/(100n))$
5 | Singular function | $(1, \ldots, 1)$
6 | Logarithmic function | $(1, \ldots, 1)$
7 | Broyden tridiagonal function | $(-1, \ldots, -1)$
8 | Trigexp function | $(0, \ldots, 0)$
9 | Function 15 | $(-1, \ldots, -1)$
10 | Strictly convex function 1 | $(1/n, \ldots, 1/n)$
11 | Strictly convex function 2 | $(1, \ldots, 1)$
12 | Zero Jacobian function | $\big(100(n-100)/n,\ (n-1000)(n-500)/(360n^2),\ \ldots,\ (n-1000)(n-500)/(360n^2)\big)$
13 | Linear function-full rank | $(100, \ldots, 100)$
14 | Penalty function | $(1/3, \ldots, 1/3)$
15 | Brown almost-linear function | $(1/2, \ldots, 1/2)$
16 | Extended Powell singular function | $(1.5 \times 10^{-4}, \ldots, 1.5 \times 10^{-4})$
17 | Tridiagonal system | $(1/2, \ldots, 1/2)$
18 | Extended Freudenstein and Roth function | $(6, 3, \ldots, 6, 3)$
19 | Discrete boundary value problem | $\big(i(i-n-1)/(n+1)^2\big)_{i=1}^{n}$
20 | Troesch problem | $(1/2, \ldots, 1/2)$
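As an illustration of how these benchmarks enter the solver, a sketch of problem 10 (Strictly convex function 1) is given below. The residual form $F_i(x) = e^{x_i} - 1$ (the gradient of $\sum_i (e^{x_i} - x_i)$) and its diagonal Jacobian are our reading of the test set in [28], not quoted from this paper.

```python
import numpy as np

# Sketch of test problem 10 ("Strictly convex function 1"), assuming the form from [28]:
# F_i(x) = exp(x_i) - 1, the gradient of f(x) = sum_i (exp(x_i) - x_i).
def strictly_convex_1(x):
    return np.exp(x) - 1.0

def strictly_convex_1_jac(x):
    return np.diag(np.exp(x))           # Jacobian is diagonal: J_ii = exp(x_i)

n = 1000
x0 = np.full(n, 1.0 / n)                # initial point from Table 1
# x_sol = ctr_solve(strictly_convex_1, strictly_convex_1_jac, x0)  # using the Algorithm 1 sketch above
```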


Table 2: Numerical results.

No. | n | CTR NI/NF | CTR $F_k$ | CTR $\lambda_{\text{mean}}$ | ATR NI/NF | ATR $F_k$ | TTR NI/NF | TTR $F_k$
1 | 500 | 6/7 | 3.18E-06 | 3.32E-07 | 7/8 | 6.73E-06 | 6/7 | 3.18E-06
1 | 1000 | 5/6 | 8.95E-06 | 3.32E-07 | 7/8 | 4.74E-06 | 5/6 | 8.95E-06
1 | 2000 | 5/6 | 6.32E-06 | 3.32E-07 | 7/8 | 3.34E-06 | 5/6 | 6.32E-06
2 | 500 | 2/3 | 3.50E-06 | 1.57E-06 | 2/3 | 3.38E-06 | 2/3 | 3.38E-06
2 | 1000 | 2/3 | 6.67E-06 | 3.32E-07 | 2/3 | 6.69E-06 | 2/3 | 6.69E-06
2 | 2000 | 3/4 | 9.13E-06 | 3.32E-07 | 3/8 | 8.39E-06 | 3/5 | 8.44E-06
3 | 500 | 12/16 | 2.16E-07 | 3.31E-01 | 12/20 | 3.89E-06 | 16/21 | 9.44E-10
3 | 1000 | 11/15 | 5.92E-07 | 2.96E-01 | 12/20 | 5.51E-06 | 16/21 | 1.39E-09
3 | 2000 | 11/15 | 1.78E-07 | 3.50E-01 | 12/20 | 7.82E-06 | 16/21 | 1.84E-09
4 | 100 | 9/11 | 7.33E-06 | 5.28E-01 | 2001/2293 | 3.03E-03* | 2001/2270 | 3.48E-03*
5 | 500 | 22/25 | 4.39E-06 | 2.96E-01 | 18/19 | 5.84E-06 | 19/20 | 8.32E-06
5 | 1000 | 25/27 | 6.37E-06 | 4.35E-01 | 23/58 | 5.68E-06 | 23/24 | 4.33E-06
5 | 2000 | 24/28 | 5.24E-06 | 4.50E-01 | 19/20 | 5.53E-06 | 21/23 | 5.23E-06
6 | 500 | 5/6 | 2.12E-06 | 9.81E-04 | 5/6 | 4.91E-08 | 7/8 | 2.74E-08
6 | 1000 | 5/6 | 1.44E-06 | 5.05E-01 | 5/6 | 1.91E-07 | 7/8 | 1.58E-07
6 | 2000 | 6/7 | 7.69E-07 | 6.08E-01 | 5/6 | 7.56E-08 | 7/8 | 5.08E-08
7 | 500 | 6/7 | 2.57E-09 | 1.59E-01 | 6/7 | 1.51E-09 | 6/7 | 4.28E-11
7 | 1000 | 6/7 | 2.63E-06 | 1.63E-01 | 6/7 | 1.03E-08 | 6/7 | 3.83E-06
7 | 2000 | 6/7 | 5.50E-06 | 2.26E-01 | 6/7 | 2.60E-08 | 6/7 | 5.43E-06
8 | 100 | 14/16 | 4.31E-08 | 2.92E-01 | 2001/2281 | 3.76E+00* | 2001/2268 | 3.76E+00*
9 | 1000 | 14/18 | 5.42E-06 | 1.46E-02 | 14/20 | 2.76E-07 | 16/22 | 1.36E-09
10 | 500 | 6/7 | 5.54E-08 | 4.14E-01 | 6/7 | 1.33E-07 | 6/7 | 1.09E-07
10 | 1000 | 6/7 | 6.04E-08 | 3.71E-01 | 6/7 | 2.49E-07 | 6/7 | 9.91E-07
10 | 2000 | 6/7 | 1.16E-07 | 4.47E-01 | 6/7 | 4.33E-07 | 7/8 | 2.32E-07
11 | 100 | 8/9 | 3.22E-07 | 2.21E-01 | 8/9 | 3.38E-08 | 8/9 | 1.15E-06
12 | 500 | 5/6 | 3.62E-12 | 8.36E-01 | 16/17 | 5.81E-06 | 18/19 | 5.96E-06
12 | 1000 | 5/6 | 4.15E-11 | 8.82E-01 | 16/17 | 7.38E-06 | 18/19 | 9.31E-06
12 | 2000 | 6/8 | 3.42E-07 | 8.29E-01 | 16/17 | 8.23E-06 | 19/20 | 2.82E-06
13 | 500 | 9/10 | 2.41E-07 | 7.19E-01 | 4/5 | 7.67E-06 | 9/10 | 1.04E-08
13 | 1000 | 9/10 | 3.93E-07 | 7.62E-01 | 4/5 | 8.32E-06 | 9/10 | 3.37E-08
13 | 2000 | 10/11 | 7.01E-06 | 7.30E-01 | 5/6 | 1.00E-10 | 10/11 | 1.35E-07
14 | 500 | 4/5 | 6.96E-06 | 1.09E-02 | 7/10 | 5.22E-06 | 5/6 | 5.34E-06
14 | 1000 | 12/13 | 1.79E-07 | 1.77E-01 | 8/10 | 2.39E-06 | 6/8 | 2.65E-06
14 | 2000 | 10/11 | 5.39E-06 | 1.15E-01 | 11/16 | 1.70E-06 | 10/11 | 5.32E-07
15 | 500 | 3/4 | 2.85E-09 | 3.61E-01 | 3/4 | 2.48E-06 | 4/5 | 2.49E-06
15 | 1000 | 4/5 | 1.32E-07 | 5.56E-01 | 3/4 | 6.34E-07 | 6/7 | 8.80E-08
15 | 2000 | 4/5 | 1.40E-08 | 5.04E-01 | 4/5 | 3.74E-07 | 6/7 | 3.63E-07
16 | 500 | 1/2 | 1.10E-06 | 3.32E-07 | 1/2 | 1.10E-06 | 1/2 | 1.10E-06
16 | 1000 | 1/2 | 1.55E-06 | 3.32E-07 | 1/2 | 1.55E-06 | 1/2 | 1.55E-06
16 | 2000 | 1/2 | 2.20E-06 | 3.32E-07 | 1/2 | 2.20E-06 | 1/2 | 2.20E-06
17 | 1000 | 34/44 | 9.22E-09 | 3.58E-01 | 90/219 | 6.85E-09 | 63/75 | 4.89E-10

Table 3: Numerical results (continued).

No. | n | CTR NI/NF | CTR $F_k$ | CTR $\lambda_{\text{mean}}$ | ATR NI/NF | ATR $F_k$ | TTR NI/NF | TTR $F_k$
18 | 500 | 6/8 | 3.57E-07 | 4.13E-01 | 7/8 | 5.84E-08 | 7/8 | 6.12E-09
18 | 1000 | 8/10 | 1.14E-06 | 3.17E-01 | 7/8 | 7.87E-08 | 8/9 | 7.10E-08
18 | 2000 | 8/10 | 3.08E-06 | 3.17E-01 | 7/8 | 1.07E-07 | 8/9 | 2.29E-08
19 | 500 | 2/3 | 9.77E-09 | 3.32E-07 | 3/4 | 6.77E-09 | 2/3 | 9.77E-09
19 | 1000 | 2/3 | 9.10E-10 | 3.32E-07 | 3/4 | 6.17E-10 | 2/3 | 9.09E-10
19 | 2000 | 2/3 | 4.47E-10 | 3.32E-07 | 3/4 | 3.02E-10 | 2/3 | 4.47E-10
20 | 1000 | 4/5 | 1.90E-07 | 3.32E-07 | 8/9 | 8.94E-08 | 4/5 | 1.87E-07

For problems 1, 2, 16, 19, and 20, $\lambda_{\text{mean}}$ is close to zero, which implies that the trust region direction was used many times. Moreover, the number of iterations is the same as that of algorithm TTR, which means that our algorithm speeds up the local convergence of the iterates. It can be seen that $\lambda_{\text{mean}}$ is not too large for the other test problems. However, for problems 4 and 8, algorithms ATR and TTR do not obtain the optimal solution even when NI > 2000. We used the performance profiles of Dolan and Moré [29] to compare these algorithms with respect to NI and NF, as shown in Figures 1 and 2. Therefore, it is noted from Tables 2 and 3 and Figures 1 and 2 that the proposed algorithm is very promising and performs better on the test problems in comparison with the two existing algorithms. In order to investigate the convergence rate behavior of the three algorithms, we plot the curve of $\log_{10}\|x_k - x^*\|$ against $k$ for problem 10 with $n = 1000$ and $n = 2000$ in Figure 3, where $x^*$ is the optimal point. Figure 3 indicates that Algorithm 1 converges faster than the others.

Figure 1: Performance profiles of the algorithms for the test problems (NI); curves for CTR, ATR, and TTR over $\tau \in [1, 3]$.

Figure 2: Performance profiles of the algorithms for the test problems (NF); curves for CTR, ATR, and TTR over $\tau \in [1, 3]$.

Figure 3: The behavior of the convergence, $\log_{10}\|x_k - x^*\|$ versus $k$, on problem 10 (Strictly convex function 1): (a) $n = 1000$; (b) $n = 2000$. Curves for CTR, ATR, and TTR.
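For reproducibility, a minimal sketch of the Dolan–Moré performance profile [29] used for Figures 1 and 2 could look as follows; the function name and the matplotlib-based plotting are our own choices, not the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=3.0):
    """Sketch of a Dolan-More performance profile [29].
    costs: (num_problems, num_solvers) array of NI or NF values (np.inf for failures)."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)               # best cost per problem
    ratios = costs / best                                  # performance ratios r_{p,s}
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        rho = [(ratios[:, s] <= t).mean() for t in taus]   # fraction of problems solved within factor t
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel(r"$\tau$"); plt.ylabel(r"$\rho_s(\tau)$"); plt.legend()
    plt.show()

# usage sketch: performance_profile(ni_matrix, ["CTR", "ATR", "TTR"])
```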

6. Conclusions

In this paper, we presented a Cauchy point direction trust region algorithm for solving nonlinear systems of equations. This algorithm combines the trust region direction and the Cauchy point direction, with the descent property and the automatic trust region property. The optimal convex combination parameter was determined by using the golden section algorithm, which is an exact one-dimensional line search algorithm. The new algorithm was proven to have the descent property and the trust region property. We also showed the global convergence of the proposed algorithm under a suitable condition. More importantly, the numerical results demonstrated the faster convergence of the proposed algorithm over two existing methods, which is very promising for solving large-scale nonlinear equations. However, the convergence rate of the proposed algorithm remains unclear. Therefore, establishing the theoretical convergence rate of the proposed algorithm and applying it to image restoration problems and compressive sensing will be among the topics of our future study.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11261006 and 11661009) and the Guangxi Natural Science Foundation (2020GXNSFAA159069).

References

[1] K. Meintjes and A. P. Morgan, "Chemical equilibrium systems as numerical test problems," ACM Transactions on Mathematical Software, vol. 16, no. 2, pp. 143–151, 1990.

[2] A. A. Levin, V. Chistyakov, E. A. Tairov, and V. F. Chistyakov, "On application of the structure of the nonlinear equations system describing hydraulic circuits of power plants in computations," Bulletin of the South Ural State University, Series "Mathematical Modelling, Programming and Computer Software", vol. 9, no. 4, pp. 53–62, 2016.

[3] F. Georgiades, "Nonlinear equations of motion of L-shaped beam structures," European Journal of Mechanics - A/Solids, vol. 65, pp. 91–122, 2017.

[4] G. Yuan, T. Li, and W. Hu, "A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems," Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.

[5] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.

[6] Z. Papp and S. Rapajić, "FR type methods for systems of large-scale nonlinear monotone equations," Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.

[7] W. Zhou and F. Wang, "A PRP-based residual method for large-scale monotone nonlinear equations," Applied Mathematics and Computation, vol. 261, pp. 1–7, 2015.


[8] G. Yuan and M. Zhang, "A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations," Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.

[9] G. Yuan, Z. Meng, and Y. Li, "A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations," Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.

[10] J. Fan and J. Pan, "Convergence properties of a self-adaptive Levenberg–Marquardt algorithm under local error bound condition," Computational Optimization and Applications, vol. 34, no. 1, pp. 47–62, 2006.

[11] J. Fan, "Accelerating the modified Levenberg–Marquardt method for nonlinear equations," Mathematics of Computation, vol. 83, no. 287, pp. 1173–1187, 2013.

[12] X. Yang, "A higher-order Levenberg–Marquardt method for nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 22, pp. 10682–10694, 2013.

[13] K. Amini and F. Rostami, "A modified two steps Levenberg–Marquardt method for nonlinear equations," Journal of Computational and Applied Mathematics, vol. 288, pp. 341–350, 2015.

[14] J.-L. Zhang and Y. Wang, "A new trust region method for nonlinear equations," Mathematical Methods of Operations Research (ZOR), vol. 58, no. 2, pp. 283–298, 2003.

[15] J. Fan, "Convergence rate of the trust region method for nonlinear equations under local error bound condition," Computational Optimization and Applications, vol. 34, no. 2, pp. 215–227, 2006.

[16] J. Fan and J. Pan, "An improved trust region algorithm for nonlinear equations," Computational Optimization and Applications, vol. 48, no. 1, pp. 59–70, 2011.

[17] G. Yuan, Z. Wei, and X. Lu, "A BFGS trust-region method for nonlinear equations," Computing, vol. 92, no. 4, pp. 317–333, 2011.

[18] H. Esmaeili and M. Kimiaei, "A new adaptive trust-region method for system of nonlinear equations," Applied Mathematical Modelling, vol. 38, no. 11-12, pp. 3003–3015, 2014.

[19] H. Esmaeili and M. Kimiaei, "An efficient adaptive trust-region method for systems of nonlinear equations," International Journal of Computer Mathematics, vol. 92, no. 1, pp. 151–166, 2015.

[20] M. Kimiaei and H. Esmaeili, "A trust-region approach with novel filter adaptive radius for system of nonlinear equations," Numerical Algorithms, vol. 73, no. 4, pp. 999–1016, 2016.

[21] H. Esmaeili and M. Kimiaei, "A trust-region method with improved adaptive radius for systems of nonlinear equations," Mathematical Methods of Operations Research, vol. 83, no. 1, pp. 109–125, 2016.

[22] M. J. D. Powell, "Convergence properties of a class of minimization algorithms," in Nonlinear Programming, pp. 1–27, Elsevier, Amsterdam, Netherlands, 1975.

[23] L. Qi, X. J. Tong, and D. H. Li, "Active-set projected trust-region algorithm for box-constrained nonsmooth equations," Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 601–625, 2004.

[24] G. Yuan, Z. Wei, and Z. Wang, "Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization," Computational Optimization and Applications, vol. 54, no. 1, pp. 45–64, 2013.

[25] Y. X. Yuan and W. Sun, Optimization Theory and Methods, Science Press, Beijing, China, 1997.

[26] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[27] T. Steihaug, "The conjugate gradient method and trust regions in large scale optimization," SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.

[28] W. La Cruz, J. M. Martínez, and M. Raydan, "Spectral residual method without gradient information for solving large-scale nonlinear systems of equations," Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.

[29] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Mathematical Problems in Engineering 9

Page 2: A Cauchy Point Direction Trust Region Algorithm for ...downloads.hindawi.com/journals/mpe/2020/4072651.pdf · A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

gradient direction and the projected trust region directionfor solving box-constrained nonsmooth equations andproved the global and quadratic local convergence Moti-vated by the search direction by Qi et al Yuan et al [24]used the idea of the search direction to solve nonsmoothunconstrained optimization problems and also showedglobal convergence

Inspired by the fact that the search direction by Qi et al[23] speeds up the local convergence rate compared to thegradient direction we propose an efficient Cauchy pointdirection trust region algorithm to solve (1) in this paper+e search direction is an optimal convex combination ofthe trust region direction and the Cauchy point directionwith the automatic trust region property and the sufficientlydescent property which can improve the numerical per-formance We also show the global convergence of theproposed algorithm Numerical results indicate that thisalgorithm has better behavior convergence compared withthat of the classical trust region algorithm and the adaptiveregularized trust region algorithm However the localconvergence is not given

+e remainder of this paper is organized as follows InSection 2 we introduce our motivation and a new searchdirection In Section 3 we give a Cauchy point directiontrust region algorithm for solving problem (1) +e globalconvergence of the proposed algorithm is proven in Section4 Preliminary numerical results are reported in Section 5We conclude our paper in Section 6

+roughout this paper unless otherwise specified middot

denotes the Euclidean norm of vectors or matrices

2 Motivation and Search Direction

In this section motivated by the definition of the Cauchypoint which is parallel to the steepest descent direction wepresent a search direction with the Cauchy point Let Δk bethe trust region radius and Δk gt 0

21 Cauchy Point Direction Noting that JTk Jk is a semi-

definite matrix we define the Cauchy point at the currentpoint xk

dCk Δk( 1113857 ≔ minus

ckΔk

JTk Fk

J

Tk Fk (4)

where

ck

1 if FTk JkJ

Tk1113872 1113873

2Fk 0

min 1J

Tk Fk

3

ΔkFTk JkJ

Tk1113872 1113873

2Fk

⎧⎪⎨

⎪⎩

⎫⎪⎬

⎪⎭ if F

Tk JkJ

Tk1113872 1113873

2Fk gt 0

⎧⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎩

(5)

Fk F(xk) and Jk nablaF(xk) is the Jacobian of F(x) or itsapproximation

+e following lemma gives a nice property of the Cauchypoint direction dC

k

Proposition 1 (automatic trust region property) Supposethat dC

k is defined by (4) then dCk leΔk

Proof It is easy to get the result of this proposition from0lt ck le 1 +is completes the proof

From the definition of dCk and the above proposition we

can determine the descent property and the automatic trustregion property of the Cauchy point direction dC

k

22 Trust Region Direction Let the trial step dTRk (Δk) be a

solution of the following trust region subproblem

mindisinRn

mk(d) ≔12

Fk + Jkd

2

st dleΔk

(6)

where Δk is the same as the above definitionFrom the famous result of Powell [22] we give the

following lemma and omit its proof

Lemma 1 Let dTRk (Δk) be a solution of the trust region

subproblem (6) then

fk minus mk dTRk Δk( 11138571113872 1113873

12

Fk

2

minus12

Fk + JkdTRk Δk( 1113857

2

ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

(7)

As mentioned by Qi et al [23] and Yuan et al [24] whenxk is far from the optimization solution the trust regiondirection dTR

k (Δk) may not possess the descent propertyHowever it follows from the definition of the Cauchy pointdirection that dC

k (Δk) always possesses the descent propertyOn the contrary if the search direction is defined by using acombination of dC

k and dTRk it will be a descent direction

Hence we give a new search direction by using a convexcombination form in the next section

23 Search Direction Let λk be a solution of the one-di-mensional minimization problem as follows

minλisin[01]

hk(λ) ≔12

Fk + Jk λdCk Δk( 1113857 +(1 minus λ)d

TRk Δk( 11138571113872 1113873

2

(8)

and the search direction is defined as

dk Δk( 1113857 λkdCk Δk( 1113857 + 1 minus λk( 1113857d

TRk Δk( 1113857 (9)

For the above one-dimensional minimization problemwe can use the gold section method Fibonacci methodparabolic method [25] etc +e idea of the search directionfollows from Qi et al [23] However we use the Cauchypoint direction which possesses the automatic trust regionproperty to get optimal λk by using the gold section methodfor problem (6) in this paper

2 Mathematical Problems in Engineering

3 New Algorithm

In this section let dk(Δk) be the optimal solution of (3)+en the actual reduction is defined as

Aredk ≔ fk minus f xk + dk Δk( 1113857( 1113857 (10)

and the predicted reduction is

Predk ≔ mk(0) minus mk dk Δk( 1113857( 1113857

12

Fk

2

minus12

Fk + Jkdk Δk( 1113857

2

(11)

+us rk is defined by the ratio between Aredk and Predk

rk ≔Aredk

Predk

(12)

We now list the detailed steps of the Cauchy point di-rection trust region algorithm in Algorithm 1

Algorithm 1 Cauchy point direction trust region algorithmfor solving (3)

Step 1 Given initial point x0 isin Rn and parameters

0le μ1 lt μ2 lt 1 0lt η1 lt 1lt η2 0lt δ lt 1 0le ϵ≪ 1 andΔ0 gt 0 Let k ≔ 0Step 2 If the termination condition JT

k Fkle ϵ is sat-isfied at the iteration point xk then stop Otherwise goto Step 3Step 3 Solve the trust region subproblem (6) to obtaindTR

k (Δk) and compute ck and dCk (Δk) from (4) and (5)

respectivelyStep 4 To solve the one-dimensional minimizationproblem (8) giving λk by using Algorithm 2 calculatethe search direction from (6)Step 5 Compute Aredk Predk and rk if

fk minus12

Fk + Jkdk

2 ge minus δFkJ

Tk d

Ck Δk( 1113857

rk ge μ1

⎧⎪⎪⎪⎨

⎪⎪⎪⎩

(13)

are satisfied let xk+1 ≔ xk + dk(Δk) and update Δk by

Δk+1 ≔

η1Δk if rk lt μ1

Δk if μ1 le rk lt μ2

η2Δk if rk ge μ2

⎧⎪⎪⎨

⎪⎪⎩(14)

Step 6 If rk lt μ1 return to Step 3 and let xk+1 ≔ xkOtherwise return to Step 2 Let k ≔ k + 1

In Step 4 of Algorithm 1 we use the gold section al-gorithm as given in Algorithm 2

Algorithm 2 Gold section algorithm [25] for solvingproblem (8)

Step 1 For the given initial parametersa0 0 b0 1 ϵprime gt 0 compute

p0 ≔ a0 + 0382 b0 minus a0( 1113857

q0 ≔ a0 + 0618 b0 minus a0( 1113857(15)

and hk(p0) hk(q0) Let k ≔ 0Step 2 If hk(pk)le hk(qk) go to Step 3 Otherwise go toStep 4Step 3 If the termination condition |qk minus ak|le ϵprime issatisfied stop Otherwise let

ak+1 ≔ ak bk+1 ≔ qk hk qk+1( 1113857 ≔ hk pk( 1113857

qk+1 ≔ pk pk+1 ≔ ak+1 + 0382 bk+1 minus ak+1( 1113857(16)

and compute hk(pk+1) and go to Step 5Step 4 If the termination condition |bk minus pk|le ϵprime issatisfied stop Otherwise let

ak+1 ≔ pk bk+1 ≔ bk hk pk+1( 1113857 ≔ hk qk( 1113857

pk+1 ≔ qk qk+1 ≔ ak+1 + 0618 bk+1 minus ak+1( 1113857(17)

and compute hk(qk+1) and go to Step 5Step 5 Let k ≔ k + 1 return to Step 2

4 Global Convergence

In this section the global convergence of Algorithm 1 isproven Firstly it is easy to have the following lemma fromLemma 1

Lemma 2 Let dk(Δk) be defined by (9) where λk is a solutionof problem (8) 9en

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ (18)

Proof From the definition of dk(Δk) we havemk(dk(Δk))lemk(dTR

k (Δk)) It follows from the definition ofPredk and Lemma 1 that Predk gefk minus mk(dTR

k (Δk)) holds+us (16) holds and therefore the proof is completed

+e next lemma indicates an important property of theCauchy point direction dC

k which is the sufficiently descentproperty

Lemma 3 (JTk Fk)TdC

k (Δk) minus ckΔkJTk Fk holds for any

Δk gt 0

Proof For any Δk from (9) we have

JTk Fk1113872 1113873

Td

Ck Δk( 1113857 F

Tk Jkd

Ck Δk( 1113857 minus ckΔkF

Tk Jk

JTk Fk

JTk Fk

minus ckΔk JTk Fk

(19)

and note that 0lt ck le 1 thus

JTk Fk1113872 1113873

Td

Ck Δk( 1113857 minus ckΔk J

Tk Fk

lt 0 (20)

Mathematical Problems in Engineering 3

+is completes the proofTo derive the global convergence of Algorithm 1 the

following assumption is considered

Assumption 1 +e Jacobian J(x) of F(x) is bounded iethere exists a positive constant Mgt 0 such that J(x)leMfor any x isin Rn

In the sequel we show that ldquoSteps 3ndash6rdquo of Algorithm 1do not cycle in the inner loop infinitely

Lemma 4 Suppose that Assumption 1 holds let xk1113864 1113865 begenerated by Algorithm 1 If xk is not a stationary point ofproblem (3) then Algorithm 1 is well defined ie the inner

cycle of Algorithm 1 can be left in a finite number of internaliterates

Proof It follows from xk which is not a stationary point thatthere exists a positive constant ϵ0 gt 0 such that

JTk Fk

ge ϵ0 forallk (21)

Since 0lt ck le 1 and Assumption 1 holds there exists apositive constant αgt 0 such that

JkdCk

le ckΔk Jk

le αΔk (22)

Letting 0ltΔk le 2α2(1 minus δ)ϵ0ck then from (4) and (22)we obtain

fk minus12

Fk + Jkdk Δk( 1113857

2 gefk minus

12

Fk + JkdCk Δk( 1113857

2

minus FTk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

minus δFTk Jkd

Ck Δk( 1113857 minus (1 minus δ)F

Tk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

ge minus δFTk Jkd

Ck Δk( 1113857 +(1 minus δ)ckΔk J

Tk Fk

minus

12α2Δ2k

(23)

and noting (21) we have (1 minus δ)ckΔkJTk Fk minus

12α2Δ2k ge (1 minus δ)ckΔk minus 12α2Δ2k ge 0 which implies

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857 (24)

and therefore (13) holds Next we prove that (14) is alsosatisfied By contradiction we assume that ldquoStep 3ndashStep 6rdquo ofAlgorithm 1 do cycle in the inner loop infinitely thenΔk⟶ 0

From the definition of dCk (Δk) we get

12

Fk + JkdCk Δk( 1113857

2

12

Fk

2

+ FTk Jkd

Ck Δk( 1113857 +

12

JkdCk Δk( 1113857

2

le12

Fk

2

minus ckΔk JTk Fk

+

12α2Δ2k

(25)

It follows from (13) and Lemma 3 that there exists apositive constant βgt 0 such that

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857

δckΔk JTk Fk

ge βΔk

(26)

By the definition dTRk (Δk) we have dTR

k leΔk andnoting Proposition 1 and 0lt ck le 1 further we get

dk Δk( 1113857

leΔk (27)

+us from (26) and (27) we have

rk minus 11113868111386811138681113868

1113868111386811138681113868 Aredk minus Predk

Predk

11138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868

12 Fk + Jkdk Δk( 1113857

2

minus 12 Fk + Jkdk Δk( 1113857 + O dk Δk( 1113857

2

1113874 1113875

2

Predk

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

Fk + Jkdk Δk( 1113857

O dk Δk( 1113857

2

1113874 1113875 + O dk Δk( 1113857

4

1113874 1113875

fk minus 12 Fk + Jkdk

2 le

O dk Δk( 1113857

2

1113874 1113875

βΔk

leO Δk

2

1113874 1113875

βΔk

⟶ 0

(28)

4 Mathematical Problems in Engineering

and therefore rk⟶ 1 which means that rk ge μ2 when Δk issufficiently small However the definition of Δk in Step 5 ofAlgorithm 1 contradicts the fact that Δk⟶ 0 Hence thecycling between Step 3 and Step 6 terminates finitely +iscompletes the proof

We can get the global convergence of Algorithm 1 basedon the above lemmas

Theorem 1 Suppose that Assumption 1 holds and xk1113864 1113865 begenerated by Algorithm 1 then

lim infk⟶infin

JTk Fk

0 (29)

Proof Suppose that the conclusion is not true then thereexists a positive constant ϵ1 gt 0 such that for any k

JTk Fk

ge ϵ1 (30)

From Lemma 2 Assumption 1 and the above inequa-tion we have

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ ge12ϵ1 min Δk

ϵ1M

21113896 1113897 (31)

Noting the updating rule of Δk rk ge μ1 (31) and Lemma4 we deduce that

+infinge 1113944infin

k1

12

Fk

2

minus12

Fk+1

2

1113874 1113875

ge 1113944infin

k1μ1Predk

ge 1113944

infin

k1

12μ1ϵ1 min Δk

ϵ1M

21113896 1113897

ge 1113944infin

k1

12μ1ϵ1 min Δ

ϵ1M

21113896 1113897

+infin

(32)

which yields a contradiction that this means (30) is not true iethe result (29) of the theorem holds and therefore the proof iscompleted

5 Numerical Results

In this section the numerical results of the proposed algorithmfor solving problem (1) are reported We denote the proposedAlgorithm 1 by CTR and compare it with some existing al-gorithms +e adaptive trust region algorithm by Fan and Pan[16] and the classical trust region algorithm [26] are denoted byATR and TTR respectively In numerical experiments allparameters of Algorithm 1 are chosen as follows μ1 01μ2 09 η1 025 η2 3 ϵ 10minus 5 and c0 Δ0 1 δ 09We choose ϵprime 10minus 6 in Algorithm 2 Steihaug method [27] isemployed to obtain the approximative trial step dk for solvingthe trust region subproblem (6) We will terminate algorithmsif the number of iterations is larger than 2000

All codes were implemented in MATLAB R2010b +enumerical experiments were performed on a PC with anIntel Pentium (R) Dual-Core CPU at 320GHz and 200GBof RAM and using the Windows 7 operating system

We list the test problems in Table 1 that can also be found in[28] In Tables 1ndash3 ldquoNordquo denotes the number of the testproblem ldquoProblem namerdquo denotes the name of the testproblem ldquox0rdquo denotes the initial point ldquonrdquo denotes the di-mension of the problem ldquoNIrdquo is the total number of iterationsldquoNFrdquo is the number of the function evaluations ldquoFkrdquo refers tothe optimum function values and ldquoλmeanrdquo denotes the meanvalue of λ It is worth mentioning that ldquolowastrdquo represents thestopping of the algorithms in situations that the number ofiterations exceeds the maximal iterations but the terminalconditions cannot be satisfied yet

It is shown from Tables 2 and 3 that the proposed al-gorithm is efficient to solve these test problems and iscompetitive with the other two algorithms in NI and NF formost problems For problems 1 2 16 19 and 20 λmean is

Table 1 Test problems

No Problem name x0

1 Exponential function 1 (n(n minus 1) n(n minus 1))

2 Exponential function 2 (1n2 1n2)3 Extended Rosenbrock function (5 1 5 1)

4 Trigonometric function (101100n 101100n)

5 Singular function (1 1)

6 Logarithmic function (1 1)

7 Broyden tridiagonal function (minus 1 minus 1)

8 Trigexp function (0 0)

9 Function 15 (minus 1 minus 1)

10 Strictly convex function 1 (1n 1n)

11 Strictly convex function 2 (1 1)

12 Zero Jacobian function (100(n minus 100)n (n minus 1000)(n minus 500)360n2 (n minus 1000)(n minus 500)360n2)

13 Linear function-full rank (100 100)

14 Penalty function (13 13)

15 Brown almost-linear function (12 12)

16 Extended Powell singular function (15 times 10minus 4 15 times 10minus 4)

17 Tridiagonal system (12 12)

18 Extended Freudenstein and Roth function (6 3 6 3)

19 Discrete boundary value problem minus n(n + 1)2 i minus n minus 1(n + 1)2 minus 1(n + 1)2

20 Troesch problem (12 12)

Mathematical Problems in Engineering 5

Table 2 Numerical results

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

1500 67 318E minus 06 332E minus 07 78 673E minus 06 67 318E minus 061000 56 895E minus 06 332E minus 07 78 474E minus 06 56 895E minus 062000 56 632E minus 06 332E minus 07 78 334E minus 06 56 632E minus 06

2500 23 350E minus 06 157E minus 06 23 338E minus 06 23 338E minus 061000 23 667E minus 06 332E minus 07 23 669E minus 06 23 669E minus 062000 34 913E minus 06 332E minus 07 38 839E minus 06 35 844E minus 06

3500 1216 216E minus 07 331E minus 01 1220 389E minus 06 1621 944E minus 101000 1115 592E minus 07 296E minus 01 1220 551E minus 06 1621 139E minus 092000 1115 178E minus 07 350E minus 01 1220 782E minus 06 1621 184E minus 09

4 100 911 733E minus 06 528E minus 01 20012293 303E minus 03lowast 20012270 348E minus 03lowast

5500 2225 439E minus 06 296E minus 01 1819 584E minus 06 1920 832E minus 061000 2527 637E minus 06 435E minus 01 2358 568E minus 06 2324 433E minus 062000 2428 524E minus 06 450E minus 01 1920 553E minus 06 2123 523E minus 06

6500 56 212E minus 06 981E minus 04 56 491E minus 08 78 274E minus 081000 56 144E minus 06 505E minus 01 56 191E minus 07 78 158E minus 072000 67 769E minus 07 608E minus 01 56 756E minus 08 78 508E minus 08

7500 67 257E minus 09 159E minus 01 67 151E minus 09 67 428E minus 111000 67 263E minus 06 163E minus 01 67 103E minus 08 67 383E minus 062000 67 550E minus 06 226E minus 01 67 260E minus 08 67 543E minus 06

8 100 1416 431E minus 08 292E minus 01 20012281 376E + 00lowast 20012268 376E + 00lowast9 1000 1418 542E minus 06 146E minus 02 1420 276E minus 07 1622 136E minus 09

10500 67 554E minus 08 414E minus 01 67 133E minus 07 67 109E minus 071000 67 604E minus 08 371E minus 01 67 249E minus 07 67 991E minus 072000 67 116E minus 07 447E minus 01 67 433E minus 07 78 232E minus 07

11 100 89 322E minus 07 221E minus 01 89 338E minus 08 89 115E minus 06

12500 56 362E minus 12 836E minus 01 1617 581E minus 06 1819 596E minus 061000 56 415E minus 11 882E minus 01 1617 738E minus 06 1819 931E minus 062000 68 342E minus 07 829E minus 01 1617 823E minus 06 1920 282E minus 06

13500 910 241E minus 07 719E minus 01 45 767E minus 06 910 104E minus 081000 910 393E minus 07 762E minus 01 45 832E minus 06 910 337E minus 082000 1011 701E minus 06 730E minus 01 56 100E minus 10 1011 135E minus 07

14500 45 696E minus 06 109E minus 02 710 522E minus 06 56 534E minus 061000 1213 179E minus 07 177E minus 01 810 239E minus 06 68 265E minus 062000 1011 539E minus 06 115E minus 01 1116 170E minus 06 1011 532E minus 07

15500 34 285E minus 09 361E minus 01 34 248E minus 06 45 249E minus 061000 45 132E minus 07 556E minus 01 34 634E minus 07 67 880E minus 082000 45 140E minus 08 504E minus 01 45 374E minus 07 67 363E minus 07

16500 12 110E minus 06 332E minus 07 12 110E minus 06 12 110E minus 061000 12 155E minus 06 332E minus 07 12 155E minus 06 12 155E minus 062000 12 220E minus 06 332E minus 07 12 220E minus 06 12 220E minus 06

17 1000 3444 922E minus 09 358E minus 01 90219 685E minus 09 6375 489E minus 10

Table 3 Numerical results (continued)

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

18500 68 357E minus 07 413E minus 01 78 584E minus 08 78 612E minus 091000 810 114E minus 06 317E minus 01 78 787E minus 08 89 710E minus 082000 810 308E minus 06 317E minus 01 78 107E minus 07 89 229E minus 08

19500 23 977E minus 09 332E minus 07 34 677E minus 09 23 977E minus 091000 23 910E minus 10 332E minus 07 34 617E minus 10 23 909E minus 102000 23 447E minus 10 332E minus 07 34 302E minus 10 23 447E minus 10

20 1000 45 190E minus 07 332E minus 07 89 894E minus 08 45 187E minus 07

6 Mathematical Problems in Engineering

close to zero which implies that the trust region directionwas used many times Moreover the number of iteration isthe same as that of algorithm TTR which means that ouralgorithm speeds up the local convergence of the iterates Itcan be seen that λmean is not too large for other test problemsHowever for problems 4 and 8 algorithms ATR and TTRare not efficient to get the optimal solution when NIgt 2000We used the performance profiles by Dolan and More [29]to compare these algorithms with respect to NI and NF

again as shown in Figures 1 and 2 +erefore it is notedfrom Tables 2 and 3 and Figures 1 and 2 that the proposedalgorithm is well promising and has better performance intest problems in comparison with two existing algorithmsIn order to investigate the behavior of the convergence rateof three algorithms we plot the curve 0 minus k minus Lgxk minus xlowast

on problem 10 with n 1000 and n 2000 in Figure 3where xlowast is the optimal point Figure 3 indicates that Al-gorithm 1 converges faster than others

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 1 Performance profiles of the algorithms for the test problems (NI)

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 2 Performance profiles of the algorithms for the test problems (NF)

Mathematical Problems in Engineering 7

6 Conclusions

In this paper we presented a Cauchy point direction trustregion algorithm for solving the nonlinear system ofequations +is algorithm combines the trust region di-rection and the Cauchy point direction with the descentproperty and the automatic trust region +e optimalconvex combination parameters were determined by usingthe gold section algorithm which is an exact one-dimen-sional linear search algorithm +e new algorithm wasproven to have the descent property and the trust regionproperty We also showed the global convergence of theproposed algorithm under suitable condition More im-portantly the numerical results demonstrated the fasterconvergence of the proposed algorithm over two existingmethods which is very promising to solve large-scalenonlinear equations However the convergence rate of theproposed algorithm remains unclear +erefore thedemonstration of the theoretical convergence rate of theproposed algorithm application in image restorationproblems and compressive sensing will be the topicsamong others in our future study

Data Availability

+e data used to support the findings of this study are in-cluded within the article

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the National Natural ScienceFoundation of China (11261006 and 11661009) and the GuangxiNatural Science Foundation (2020GXNSFAA159069)

References

[1] K. Meintjes and A. P. Morgan, "Chemical equilibrium systems as numerical test problems," ACM Transactions on Mathematical Software, vol. 16, no. 2, pp. 143–151, 1990.

[2] A. A. Levin, V. Chistyakov, E. A. Tairov, and V. F. Chistyakov, "On application of the structure of the nonlinear equations system describing hydraulic circuits of power plants in computations," Bulletin of the South Ural State University, Series "Mathematical Modelling, Programming and Computer Software", vol. 9, no. 4, pp. 53–62, 2016.

[3] F. Georgiades, "Nonlinear equations of motion of L-shaped beam structures," European Journal of Mechanics—A/Solids, vol. 65, pp. 91–122, 2017.

[4] G. Yuan, T. Li, and W. Hu, "A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems," Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.

[5] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.

[6] Z. Papp and S. Rapajić, "FR type methods for systems of large-scale nonlinear monotone equations," Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.

[7] W. Zhou and F. Wang, "A PRP-based residual method for large-scale monotone nonlinear equations," Applied Mathematics and Computation, vol. 261, pp. 1–7, 2015.

Figure 3: The behavior of the convergence of CTR, ATR, and TTR, plotted as log10‖xk − x*‖ versus the iteration number k on problem 10 (strictly convex function 1): (a) n = 1000 and (b) n = 2000.


[8] G. Yuan and M. Zhang, "A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations," Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.

[9] G. Yuan, Z. Meng, and Y. Li, "A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations," Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.

[10] J. Fan and J. Pan, "Convergence properties of a self-adaptive Levenberg–Marquardt algorithm under local error bound condition," Computational Optimization and Applications, vol. 34, no. 1, pp. 47–62, 2006.

[11] J. Fan, "Accelerating the modified Levenberg–Marquardt method for nonlinear equations," Mathematics of Computation, vol. 83, no. 287, pp. 1173–1187, 2013.

[12] X. Yang, "A higher-order Levenberg–Marquardt method for nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 22, pp. 10682–10694, 2013.

[13] K. Amini and F. Rostami, "A modified two steps Levenberg–Marquardt method for nonlinear equations," Journal of Computational and Applied Mathematics, vol. 288, pp. 341–350, 2015.

[14] J.-L. Zhang and Y. Wang, "A new trust region method for nonlinear equations," Mathematical Methods of Operations Research (ZOR), vol. 58, no. 2, pp. 283–298, 2003.

[15] J. Fan, "Convergence rate of the trust region method for nonlinear equations under local error bound condition," Computational Optimization and Applications, vol. 34, no. 2, pp. 215–227, 2006.

[16] J. Fan and J. Pan, "An improved trust region algorithm for nonlinear equations," Computational Optimization and Applications, vol. 48, no. 1, pp. 59–70, 2011.

[17] G. Yuan, Z. Wei, and X. Lu, "A BFGS trust-region method for nonlinear equations," Computing, vol. 92, no. 4, pp. 317–333, 2011.

[18] H. Esmaeili and M. Kimiaei, "A new adaptive trust-region method for system of nonlinear equations," Applied Mathematical Modelling, vol. 38, no. 11-12, pp. 3003–3015, 2014.

[19] H. Esmaeili and M. Kimiaei, "An efficient adaptive trust-region method for systems of nonlinear equations," International Journal of Computer Mathematics, vol. 92, no. 1, pp. 151–166, 2015.

[20] M. Kimiaei and H. Esmaeili, "A trust-region approach with novel filter adaptive radius for system of nonlinear equations," Numerical Algorithms, vol. 73, no. 4, pp. 999–1016, 2016.

[21] H. Esmaeili and M. Kimiaei, "A trust-region method with improved adaptive radius for systems of nonlinear equations," Mathematical Methods of Operations Research, vol. 83, no. 1, pp. 109–125, 2016.

[22] M. J. D. Powell, "Convergence properties of a class of minimization algorithms," in Nonlinear Programming, pp. 1–27, Elsevier, Amsterdam, Netherlands, 1975.

[23] L. Qi, X. J. Tong, and D. H. Li, "Active-set projected trust-region algorithm for box-constrained nonsmooth equations," Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 601–625, 2004.

[24] G. Yuan, Z. Wei, and Z. Wang, "Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization," Computational Optimization and Applications, vol. 54, no. 1, pp. 45–64, 2013.

[25] Y. X. Yuan and W. Sun, Optimization Theory and Methods, Science Press, Beijing, China, 1997.

[26] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[27] T. Steihaug, "The conjugate gradient method and trust regions in large scale optimization," SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.

[28] W. La Cruz, J. M. Martínez, and M. Raydan, "Spectral residual method without gradient information for solving large-scale nonlinear systems of equations," Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.

[29] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.


Page 3: A Cauchy Point Direction Trust Region Algorithm for ...downloads.hindawi.com/journals/mpe/2020/4072651.pdf · A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

3 New Algorithm

In this section let dk(Δk) be the optimal solution of (3)+en the actual reduction is defined as

Aredk ≔ fk minus f xk + dk Δk( 1113857( 1113857 (10)

and the predicted reduction is

Predk ≔ mk(0) minus mk dk Δk( 1113857( 1113857

12

Fk

2

minus12

Fk + Jkdk Δk( 1113857

2

(11)

+us rk is defined by the ratio between Aredk and Predk

rk ≔Aredk

Predk

(12)

We now list the detailed steps of the Cauchy point di-rection trust region algorithm in Algorithm 1

Algorithm 1 Cauchy point direction trust region algorithmfor solving (3)

Step 1 Given initial point x0 isin Rn and parameters

0le μ1 lt μ2 lt 1 0lt η1 lt 1lt η2 0lt δ lt 1 0le ϵ≪ 1 andΔ0 gt 0 Let k ≔ 0Step 2 If the termination condition JT

k Fkle ϵ is sat-isfied at the iteration point xk then stop Otherwise goto Step 3Step 3 Solve the trust region subproblem (6) to obtaindTR

k (Δk) and compute ck and dCk (Δk) from (4) and (5)

respectivelyStep 4 To solve the one-dimensional minimizationproblem (8) giving λk by using Algorithm 2 calculatethe search direction from (6)Step 5 Compute Aredk Predk and rk if

fk minus12

Fk + Jkdk

2 ge minus δFkJ

Tk d

Ck Δk( 1113857

rk ge μ1

⎧⎪⎪⎪⎨

⎪⎪⎪⎩

(13)

are satisfied let xk+1 ≔ xk + dk(Δk) and update Δk by

Δk+1 ≔

η1Δk if rk lt μ1

Δk if μ1 le rk lt μ2

η2Δk if rk ge μ2

⎧⎪⎪⎨

⎪⎪⎩(14)

Step 6 If rk lt μ1 return to Step 3 and let xk+1 ≔ xkOtherwise return to Step 2 Let k ≔ k + 1

In Step 4 of Algorithm 1 we use the gold section al-gorithm as given in Algorithm 2

Algorithm 2 Gold section algorithm [25] for solvingproblem (8)

Step 1 For the given initial parametersa0 0 b0 1 ϵprime gt 0 compute

p0 ≔ a0 + 0382 b0 minus a0( 1113857

q0 ≔ a0 + 0618 b0 minus a0( 1113857(15)

and hk(p0) hk(q0) Let k ≔ 0Step 2 If hk(pk)le hk(qk) go to Step 3 Otherwise go toStep 4Step 3 If the termination condition |qk minus ak|le ϵprime issatisfied stop Otherwise let

ak+1 ≔ ak bk+1 ≔ qk hk qk+1( 1113857 ≔ hk pk( 1113857

qk+1 ≔ pk pk+1 ≔ ak+1 + 0382 bk+1 minus ak+1( 1113857(16)

and compute hk(pk+1) and go to Step 5Step 4 If the termination condition |bk minus pk|le ϵprime issatisfied stop Otherwise let

ak+1 ≔ pk bk+1 ≔ bk hk pk+1( 1113857 ≔ hk qk( 1113857

pk+1 ≔ qk qk+1 ≔ ak+1 + 0618 bk+1 minus ak+1( 1113857(17)

and compute hk(qk+1) and go to Step 5Step 5 Let k ≔ k + 1 return to Step 2

4 Global Convergence

In this section the global convergence of Algorithm 1 isproven Firstly it is easy to have the following lemma fromLemma 1

Lemma 2 Let dk(Δk) be defined by (9) where λk is a solutionof problem (8) 9en

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ (18)

Proof From the definition of dk(Δk) we havemk(dk(Δk))lemk(dTR

k (Δk)) It follows from the definition ofPredk and Lemma 1 that Predk gefk minus mk(dTR

k (Δk)) holds+us (16) holds and therefore the proof is completed

+e next lemma indicates an important property of theCauchy point direction dC

k which is the sufficiently descentproperty

Lemma 3 (JTk Fk)TdC

k (Δk) minus ckΔkJTk Fk holds for any

Δk gt 0

Proof For any Δk from (9) we have

JTk Fk1113872 1113873

Td

Ck Δk( 1113857 F

Tk Jkd

Ck Δk( 1113857 minus ckΔkF

Tk Jk

JTk Fk

JTk Fk

minus ckΔk JTk Fk

(19)

and note that 0lt ck le 1 thus

JTk Fk1113872 1113873

Td

Ck Δk( 1113857 minus ckΔk J

Tk Fk

lt 0 (20)

Mathematical Problems in Engineering 3

+is completes the proofTo derive the global convergence of Algorithm 1 the

following assumption is considered

Assumption 1 +e Jacobian J(x) of F(x) is bounded iethere exists a positive constant Mgt 0 such that J(x)leMfor any x isin Rn

In the sequel we show that ldquoSteps 3ndash6rdquo of Algorithm 1do not cycle in the inner loop infinitely

Lemma 4 Suppose that Assumption 1 holds let xk1113864 1113865 begenerated by Algorithm 1 If xk is not a stationary point ofproblem (3) then Algorithm 1 is well defined ie the inner

cycle of Algorithm 1 can be left in a finite number of internaliterates

Proof It follows from xk which is not a stationary point thatthere exists a positive constant ϵ0 gt 0 such that

JTk Fk

ge ϵ0 forallk (21)

Since 0lt ck le 1 and Assumption 1 holds there exists apositive constant αgt 0 such that

JkdCk

le ckΔk Jk

le αΔk (22)

Letting 0ltΔk le 2α2(1 minus δ)ϵ0ck then from (4) and (22)we obtain

fk minus12

Fk + Jkdk Δk( 1113857

2 gefk minus

12

Fk + JkdCk Δk( 1113857

2

minus FTk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

minus δFTk Jkd

Ck Δk( 1113857 minus (1 minus δ)F

Tk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

ge minus δFTk Jkd

Ck Δk( 1113857 +(1 minus δ)ckΔk J

Tk Fk

minus

12α2Δ2k

(23)

and noting (21) we have (1 minus δ)ckΔkJTk Fk minus

12α2Δ2k ge (1 minus δ)ckΔk minus 12α2Δ2k ge 0 which implies

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857 (24)

and therefore (13) holds Next we prove that (14) is alsosatisfied By contradiction we assume that ldquoStep 3ndashStep 6rdquo ofAlgorithm 1 do cycle in the inner loop infinitely thenΔk⟶ 0

From the definition of dCk (Δk) we get

12

Fk + JkdCk Δk( 1113857

2

12

Fk

2

+ FTk Jkd

Ck Δk( 1113857 +

12

JkdCk Δk( 1113857

2

le12

Fk

2

minus ckΔk JTk Fk

+

12α2Δ2k

(25)

It follows from (13) and Lemma 3 that there exists apositive constant βgt 0 such that

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857

δckΔk JTk Fk

ge βΔk

(26)

By the definition dTRk (Δk) we have dTR

k leΔk andnoting Proposition 1 and 0lt ck le 1 further we get

dk Δk( 1113857

leΔk (27)

+us from (26) and (27) we have

rk minus 11113868111386811138681113868

1113868111386811138681113868 Aredk minus Predk

Predk

11138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868

12 Fk + Jkdk Δk( 1113857

2

minus 12 Fk + Jkdk Δk( 1113857 + O dk Δk( 1113857

2

1113874 1113875

2

Predk

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

Fk + Jkdk Δk( 1113857

O dk Δk( 1113857

2

1113874 1113875 + O dk Δk( 1113857

4

1113874 1113875

fk minus 12 Fk + Jkdk

2 le

O dk Δk( 1113857

2

1113874 1113875

βΔk

leO Δk

2

1113874 1113875

βΔk

⟶ 0

(28)

4 Mathematical Problems in Engineering

and therefore rk⟶ 1 which means that rk ge μ2 when Δk issufficiently small However the definition of Δk in Step 5 ofAlgorithm 1 contradicts the fact that Δk⟶ 0 Hence thecycling between Step 3 and Step 6 terminates finitely +iscompletes the proof

We can get the global convergence of Algorithm 1 basedon the above lemmas

Theorem 1 Suppose that Assumption 1 holds and xk1113864 1113865 begenerated by Algorithm 1 then

lim infk⟶infin

JTk Fk

0 (29)

Proof Suppose that the conclusion is not true then thereexists a positive constant ϵ1 gt 0 such that for any k

JTk Fk

ge ϵ1 (30)

From Lemma 2 Assumption 1 and the above inequa-tion we have

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ ge12ϵ1 min Δk

ϵ1M

21113896 1113897 (31)

Noting the updating rule of Δk rk ge μ1 (31) and Lemma4 we deduce that

+infinge 1113944infin

k1

12

Fk

2

minus12

Fk+1

2

1113874 1113875

ge 1113944infin

k1μ1Predk

ge 1113944

infin

k1

12μ1ϵ1 min Δk

ϵ1M

21113896 1113897

ge 1113944infin

k1

12μ1ϵ1 min Δ

ϵ1M

21113896 1113897

+infin

(32)

which yields a contradiction that this means (30) is not true iethe result (29) of the theorem holds and therefore the proof iscompleted

5 Numerical Results

In this section the numerical results of the proposed algorithmfor solving problem (1) are reported We denote the proposedAlgorithm 1 by CTR and compare it with some existing al-gorithms +e adaptive trust region algorithm by Fan and Pan[16] and the classical trust region algorithm [26] are denoted byATR and TTR respectively In numerical experiments allparameters of Algorithm 1 are chosen as follows μ1 01μ2 09 η1 025 η2 3 ϵ 10minus 5 and c0 Δ0 1 δ 09We choose ϵprime 10minus 6 in Algorithm 2 Steihaug method [27] isemployed to obtain the approximative trial step dk for solvingthe trust region subproblem (6) We will terminate algorithmsif the number of iterations is larger than 2000

All codes were implemented in MATLAB R2010b +enumerical experiments were performed on a PC with anIntel Pentium (R) Dual-Core CPU at 320GHz and 200GBof RAM and using the Windows 7 operating system

We list the test problems in Table 1 that can also be found in[28] In Tables 1ndash3 ldquoNordquo denotes the number of the testproblem ldquoProblem namerdquo denotes the name of the testproblem ldquox0rdquo denotes the initial point ldquonrdquo denotes the di-mension of the problem ldquoNIrdquo is the total number of iterationsldquoNFrdquo is the number of the function evaluations ldquoFkrdquo refers tothe optimum function values and ldquoλmeanrdquo denotes the meanvalue of λ It is worth mentioning that ldquolowastrdquo represents thestopping of the algorithms in situations that the number ofiterations exceeds the maximal iterations but the terminalconditions cannot be satisfied yet

It is shown from Tables 2 and 3 that the proposed al-gorithm is efficient to solve these test problems and iscompetitive with the other two algorithms in NI and NF formost problems For problems 1 2 16 19 and 20 λmean is

Table 1 Test problems

No Problem name x0

1 Exponential function 1 (n(n minus 1) n(n minus 1))

2 Exponential function 2 (1n2 1n2)3 Extended Rosenbrock function (5 1 5 1)

4 Trigonometric function (101100n 101100n)

5 Singular function (1 1)

6 Logarithmic function (1 1)

7 Broyden tridiagonal function (minus 1 minus 1)

8 Trigexp function (0 0)

9 Function 15 (minus 1 minus 1)

10 Strictly convex function 1 (1n 1n)

11 Strictly convex function 2 (1 1)

12 Zero Jacobian function (100(n minus 100)n (n minus 1000)(n minus 500)360n2 (n minus 1000)(n minus 500)360n2)

13 Linear function-full rank (100 100)

14 Penalty function (13 13)

15 Brown almost-linear function (12 12)

16 Extended Powell singular function (15 times 10minus 4 15 times 10minus 4)

17 Tridiagonal system (12 12)

18 Extended Freudenstein and Roth function (6 3 6 3)

19 Discrete boundary value problem minus n(n + 1)2 i minus n minus 1(n + 1)2 minus 1(n + 1)2

20 Troesch problem (12 12)

Mathematical Problems in Engineering 5

Table 2 Numerical results

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

1500 67 318E minus 06 332E minus 07 78 673E minus 06 67 318E minus 061000 56 895E minus 06 332E minus 07 78 474E minus 06 56 895E minus 062000 56 632E minus 06 332E minus 07 78 334E minus 06 56 632E minus 06

2500 23 350E minus 06 157E minus 06 23 338E minus 06 23 338E minus 061000 23 667E minus 06 332E minus 07 23 669E minus 06 23 669E minus 062000 34 913E minus 06 332E minus 07 38 839E minus 06 35 844E minus 06

3500 1216 216E minus 07 331E minus 01 1220 389E minus 06 1621 944E minus 101000 1115 592E minus 07 296E minus 01 1220 551E minus 06 1621 139E minus 092000 1115 178E minus 07 350E minus 01 1220 782E minus 06 1621 184E minus 09

4 100 911 733E minus 06 528E minus 01 20012293 303E minus 03lowast 20012270 348E minus 03lowast

5500 2225 439E minus 06 296E minus 01 1819 584E minus 06 1920 832E minus 061000 2527 637E minus 06 435E minus 01 2358 568E minus 06 2324 433E minus 062000 2428 524E minus 06 450E minus 01 1920 553E minus 06 2123 523E minus 06

6500 56 212E minus 06 981E minus 04 56 491E minus 08 78 274E minus 081000 56 144E minus 06 505E minus 01 56 191E minus 07 78 158E minus 072000 67 769E minus 07 608E minus 01 56 756E minus 08 78 508E minus 08

7500 67 257E minus 09 159E minus 01 67 151E minus 09 67 428E minus 111000 67 263E minus 06 163E minus 01 67 103E minus 08 67 383E minus 062000 67 550E minus 06 226E minus 01 67 260E minus 08 67 543E minus 06

8 100 1416 431E minus 08 292E minus 01 20012281 376E + 00lowast 20012268 376E + 00lowast9 1000 1418 542E minus 06 146E minus 02 1420 276E minus 07 1622 136E minus 09

10500 67 554E minus 08 414E minus 01 67 133E minus 07 67 109E minus 071000 67 604E minus 08 371E minus 01 67 249E minus 07 67 991E minus 072000 67 116E minus 07 447E minus 01 67 433E minus 07 78 232E minus 07

11 100 89 322E minus 07 221E minus 01 89 338E minus 08 89 115E minus 06

12500 56 362E minus 12 836E minus 01 1617 581E minus 06 1819 596E minus 061000 56 415E minus 11 882E minus 01 1617 738E minus 06 1819 931E minus 062000 68 342E minus 07 829E minus 01 1617 823E minus 06 1920 282E minus 06

13500 910 241E minus 07 719E minus 01 45 767E minus 06 910 104E minus 081000 910 393E minus 07 762E minus 01 45 832E minus 06 910 337E minus 082000 1011 701E minus 06 730E minus 01 56 100E minus 10 1011 135E minus 07

14500 45 696E minus 06 109E minus 02 710 522E minus 06 56 534E minus 061000 1213 179E minus 07 177E minus 01 810 239E minus 06 68 265E minus 062000 1011 539E minus 06 115E minus 01 1116 170E minus 06 1011 532E minus 07

15500 34 285E minus 09 361E minus 01 34 248E minus 06 45 249E minus 061000 45 132E minus 07 556E minus 01 34 634E minus 07 67 880E minus 082000 45 140E minus 08 504E minus 01 45 374E minus 07 67 363E minus 07

16500 12 110E minus 06 332E minus 07 12 110E minus 06 12 110E minus 061000 12 155E minus 06 332E minus 07 12 155E minus 06 12 155E minus 062000 12 220E minus 06 332E minus 07 12 220E minus 06 12 220E minus 06

17 1000 3444 922E minus 09 358E minus 01 90219 685E minus 09 6375 489E minus 10

Table 3 Numerical results (continued)

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

18500 68 357E minus 07 413E minus 01 78 584E minus 08 78 612E minus 091000 810 114E minus 06 317E minus 01 78 787E minus 08 89 710E minus 082000 810 308E minus 06 317E minus 01 78 107E minus 07 89 229E minus 08

19500 23 977E minus 09 332E minus 07 34 677E minus 09 23 977E minus 091000 23 910E minus 10 332E minus 07 34 617E minus 10 23 909E minus 102000 23 447E minus 10 332E minus 07 34 302E minus 10 23 447E minus 10

20 1000 45 190E minus 07 332E minus 07 89 894E minus 08 45 187E minus 07

6 Mathematical Problems in Engineering

close to zero which implies that the trust region directionwas used many times Moreover the number of iteration isthe same as that of algorithm TTR which means that ouralgorithm speeds up the local convergence of the iterates Itcan be seen that λmean is not too large for other test problemsHowever for problems 4 and 8 algorithms ATR and TTRare not efficient to get the optimal solution when NIgt 2000We used the performance profiles by Dolan and More [29]to compare these algorithms with respect to NI and NF

again as shown in Figures 1 and 2 +erefore it is notedfrom Tables 2 and 3 and Figures 1 and 2 that the proposedalgorithm is well promising and has better performance intest problems in comparison with two existing algorithmsIn order to investigate the behavior of the convergence rateof three algorithms we plot the curve 0 minus k minus Lgxk minus xlowast

on problem 10 with n 1000 and n 2000 in Figure 3where xlowast is the optimal point Figure 3 indicates that Al-gorithm 1 converges faster than others

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 1 Performance profiles of the algorithms for the test problems (NI)

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 2 Performance profiles of the algorithms for the test problems (NF)

Mathematical Problems in Engineering 7

6 Conclusions

In this paper we presented a Cauchy point direction trustregion algorithm for solving the nonlinear system ofequations +is algorithm combines the trust region di-rection and the Cauchy point direction with the descentproperty and the automatic trust region +e optimalconvex combination parameters were determined by usingthe gold section algorithm which is an exact one-dimen-sional linear search algorithm +e new algorithm wasproven to have the descent property and the trust regionproperty We also showed the global convergence of theproposed algorithm under suitable condition More im-portantly the numerical results demonstrated the fasterconvergence of the proposed algorithm over two existingmethods which is very promising to solve large-scalenonlinear equations However the convergence rate of theproposed algorithm remains unclear +erefore thedemonstration of the theoretical convergence rate of theproposed algorithm application in image restorationproblems and compressive sensing will be the topicsamong others in our future study

Data Availability

+e data used to support the findings of this study are in-cluded within the article

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the National Natural ScienceFoundation of China (11261006 and 11661009) and the GuangxiNatural Science Foundation (2020GXNSFAA159069)

References

[1] K Meintjes and A P Morgan ldquoChemical equilibrium systemsas numerical test problemsrdquo ACM Transactions on Mathe-matical Software vol 16 no 2 pp 143ndash151 1990

[2] A A Levin V Chistyakov E A Tairov and V F ChistyakovldquoOn application of the structure of the nonlinear equationssystem describing hydraulic circuits of power plants incomputationsrdquo Bulletin of the South Ural State UniversitySeries ldquoMathematical Modelling Programming and ComputerSoftware vol 9 no 4 pp 53ndash62 2016

[3] F Georgiades ldquoNonlinear equations of motion of L-shapedbeam structuresrdquo European Journal of MechanicsmdashASolidsvol 65 pp 91ndash122 2017

[4] G Yuan T Li and W Hu ldquoA conjugate gradient algorithmfor large-scale nonlinear equations and image restorationproblemsrdquo Applied Numerical Mathematics vol 147pp 129ndash141 2020

[5] Y Xiao and H Zhu ldquoA conjugate gradient method to solveconvex constrained monotone equations with applications incompressive sensingrdquo Journal of Mathematical Analysis andApplications vol 405 no 1 pp 310ndash319 2013

[6] Z Papp and S Rapajic ldquoFR type methods for systems of large-scale nonlinear monotone equationsrdquo Applied Mathematicsand Computation vol 269 pp 816ndash823 2015

[7] W Zhou and F Wang ldquoA PRP-based residual method forlarge-scale monotone nonlinear equationsrdquo Applied Mathe-matics and Computation vol 261 pp 1ndash7 2015

1 2 3 4 5 6 7ndash18

ndash16

ndash14

ndash12

ndash10

ndash8

ndash6

ndash4

ndash2

0

2

4

kNo10 strictly convex function 1 (n = 1000)

Log 1

0 ||x

k ndash x

lowast||

CTRATRTTR

(a)

ndash18

ndash16

ndash14

ndash12

ndash10

ndash8

ndash6

ndash4

ndash2

0

2

4

Log 1

0 ||x

k ndash x

lowast||

1 2 3 4 5 6 7 8kNo10 strictly convex function 1 (n = 2000)

CTRATRTTR

(b)

Figure 3 +e behavior of the convergence (a) n 1000 and (b) n 2000

8 Mathematical Problems in Engineering

[8] G Yuan and M Zhang ldquoA three-terms Polak-Ribiere-Polyakconjugate gradient algorithm for large-scale nonlinearequationsrdquo Journal of Computational and Applied Mathe-matics vol 286 pp 186ndash195 2015

[9] G Yuan Z Meng and Y Li ldquoA modified Hestenes and Stiefelconjugate gradient algorithm for large-scale nonsmoothminimizations and nonlinear equationsrdquo Journal of Optimi-zation 9eory and Applications vol 168 no 1 pp 129ndash1522016

[10] J Fan and J Pan ldquoConvergence properties of a self-adaptiveLevenberg-Marquardt algorithm under local error boundconditionrdquo Computational Optimization and Applicationsvol 34 no 1 pp 47ndash62 2006

[11] J Fan ldquoAccelerating the modified Levenberg-Marquardtmethod for nonlinear equationsrdquo Mathematics of Compu-tation vol 83 no 287 pp 1173ndash1187 2013

[12] X Yang ldquoA higher-order Levenberg-Marquardt method fornonlinear equationsrdquoAppliedMathematics and Computationvol 219 no 22 pp 10682ndash10694 2013

[13] K Amini and F Rostami ldquoA modified two steps Levenberg-Marquardt method for nonlinear equationsrdquo Journal ofComputational and Applied Mathematics vol 288 pp 341ndash350 2015

[14] J-l Zhang and Y Wang ldquoA new trust region method fornonlinear equationsrdquo Mathematical Methods of OperationsResearch (ZOR) vol 58 no 2 pp 283ndash298 2003

[15] J Fan ldquoConvergence rate of the trust region method fornonlinear equations under local error bound conditionrdquoComputational Optimization and Applications vol 34 no 2pp 215ndash227 2006

[16] J Fan and J Pan ldquoAn improved trust region algorithm fornonlinear equationsrdquo Computational Optimization and Ap-plications vol 48 no 1 pp 59ndash70 2011

[17] G Yuan Z Wei and X Lu ldquoA BFGS trust-region method fornonlinear equationsrdquo Computing vol 92 no 4 pp 317ndash3332011

[18] H Esmaeili and M Kimiaei ldquoA new adaptive trust-regionmethod for system of nonlinear equationsrdquo Applied Mathe-matical Modelling vol 38 no 11-12 pp 3003ndash3015 2014

[19] H Esmaeili and M Kimiaei ldquoAn efficient adaptive trust-region method for systems of nonlinear equationsrdquo Inter-national Journal of Computer Mathematics vol 92 no 1pp 151ndash166 2015

[20] M Kimiaei and H Esmaeili ldquoA trust-region approach withnovel filter adaptive radius for system of nonlinear equationsrdquoNumerical Algorithms vol 73 no 4 pp 999ndash1016 2016

[21] H Esmaeili and M Kimiaei ldquoA trust-region method withimproved adaptive radius for systems of nonlinear equationsrdquoMathematical Methods of Operations Research vol 83 no 1pp 109ndash125 2016

[22] M J D Powell ldquoConvergence properties of a class of min-imization algorithmsrdquo in Nonlinear Programming pp 1ndash27Elsevier Amsterdam Netherlands 1975

[23] L Qi X J Tong and D H Li ldquoActive-set projected trust-region algorithm for box-constrained nonsmooth equationsrdquoJournal of Optimization 9eory and Applications vol 120no 3 pp 601ndash625 2004

[24] G Yuan Z Wei and Z Wang ldquoGradient trust region al-gorithm with limited memory BFGS update for nonsmoothconvex minimizationrdquo Computational Optimization andApplications vol 54 no 1 pp 45ndash64 2013

[25] Y X Yuan and W Sun Optimization 9eory and MethodsScience Press Beijing China 1997

[26] A R Conn N I M Gould and P L Toint Trust-regionMethods Society for Industrial and Applied MathematicsPhiladelphia PA USA 2000

[27] T Steihaug ldquo+e conjugate gradient method and trust regionsin large scale optimizationrdquo SIAM Journal on NumericalAnalysis vol 20 no 3 pp 626ndash637 1983

[28] W La Cruz J M Martınez andM Raydan ldquoSpectral residualmethod without gradient information for solving large-scalenonlinear systems of equationsrdquo Mathematics of Computa-tion vol 75 no 255 pp 1429ndash1449 2006

[29] E D Dolan and J J More ldquoBenchmarking optimizationsoftware with performance profilesrdquo Mathematical Pro-gramming vol 91 no 2 pp 201ndash213 2002

Mathematical Problems in Engineering 9

Page 4: A Cauchy Point Direction Trust Region Algorithm for ...downloads.hindawi.com/journals/mpe/2020/4072651.pdf · A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

+is completes the proofTo derive the global convergence of Algorithm 1 the

following assumption is considered

Assumption 1 +e Jacobian J(x) of F(x) is bounded iethere exists a positive constant Mgt 0 such that J(x)leMfor any x isin Rn

In the sequel we show that ldquoSteps 3ndash6rdquo of Algorithm 1do not cycle in the inner loop infinitely

Lemma 4 Suppose that Assumption 1 holds let xk1113864 1113865 begenerated by Algorithm 1 If xk is not a stationary point ofproblem (3) then Algorithm 1 is well defined ie the inner

cycle of Algorithm 1 can be left in a finite number of internaliterates

Proof It follows from xk which is not a stationary point thatthere exists a positive constant ϵ0 gt 0 such that

JTk Fk

ge ϵ0 forallk (21)

Since 0lt ck le 1 and Assumption 1 holds there exists apositive constant αgt 0 such that

JkdCk

le ckΔk Jk

le αΔk (22)

Letting 0ltΔk le 2α2(1 minus δ)ϵ0ck then from (4) and (22)we obtain

fk minus12

Fk + Jkdk Δk( 1113857

2 gefk minus

12

Fk + JkdCk Δk( 1113857

2

minus FTk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

minus δFTk Jkd

Ck Δk( 1113857 minus (1 minus δ)F

Tk Jkd

Ck Δk( 1113857 minus

12

JkdCk Δk( 1113857

2

ge minus δFTk Jkd

Ck Δk( 1113857 +(1 minus δ)ckΔk J

Tk Fk

minus

12α2Δ2k

(23)

and noting (21) we have (1 minus δ)ckΔkJTk Fk minus

12α2Δ2k ge (1 minus δ)ckΔk minus 12α2Δ2k ge 0 which implies

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857 (24)

and therefore (13) holds Next we prove that (14) is alsosatisfied By contradiction we assume that ldquoStep 3ndashStep 6rdquo ofAlgorithm 1 do cycle in the inner loop infinitely thenΔk⟶ 0

From the definition of dCk (Δk) we get

12

Fk + JkdCk Δk( 1113857

2

12

Fk

2

+ FTk Jkd

Ck Δk( 1113857 +

12

JkdCk Δk( 1113857

2

le12

Fk

2

minus ckΔk JTk Fk

+

12α2Δ2k

(25)

It follows from (13) and Lemma 3 that there exists apositive constant βgt 0 such that

fk minus12

Fk + Jkdk Δk( 1113857

2 ge minus δF

Tk Jkd

Ck Δk( 1113857

δckΔk JTk Fk

ge βΔk

(26)

By the definition dTRk (Δk) we have dTR

k leΔk andnoting Proposition 1 and 0lt ck le 1 further we get

dk Δk( 1113857

leΔk (27)

+us from (26) and (27) we have

rk minus 11113868111386811138681113868

1113868111386811138681113868 Aredk minus Predk

Predk

11138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868

12 Fk + Jkdk Δk( 1113857

2

minus 12 Fk + Jkdk Δk( 1113857 + O dk Δk( 1113857

2

1113874 1113875

2

Predk

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

11138681113868111386811138681113868111386811138681113868111386811138681113868111386811138681113868

Fk + Jkdk Δk( 1113857

O dk Δk( 1113857

2

1113874 1113875 + O dk Δk( 1113857

4

1113874 1113875

fk minus 12 Fk + Jkdk

2 le

O dk Δk( 1113857

2

1113874 1113875

βΔk

leO Δk

2

1113874 1113875

βΔk

⟶ 0

(28)

4 Mathematical Problems in Engineering

and therefore rk⟶ 1 which means that rk ge μ2 when Δk issufficiently small However the definition of Δk in Step 5 ofAlgorithm 1 contradicts the fact that Δk⟶ 0 Hence thecycling between Step 3 and Step 6 terminates finitely +iscompletes the proof

We can get the global convergence of Algorithm 1 basedon the above lemmas

Theorem 1 Suppose that Assumption 1 holds and xk1113864 1113865 begenerated by Algorithm 1 then

lim infk⟶infin

JTk Fk

0 (29)

Proof Suppose that the conclusion is not true then thereexists a positive constant ϵ1 gt 0 such that for any k

JTk Fk

ge ϵ1 (30)

From Lemma 2 Assumption 1 and the above inequa-tion we have

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ ge12ϵ1 min Δk

ϵ1M

21113896 1113897 (31)

Noting the updating rule of Δk rk ge μ1 (31) and Lemma4 we deduce that

+infinge 1113944infin

k1

12

Fk

2

minus12

Fk+1

2

1113874 1113875

ge 1113944infin

k1μ1Predk

ge 1113944

infin

k1

12μ1ϵ1 min Δk

ϵ1M

21113896 1113897

ge 1113944infin

k1

12μ1ϵ1 min Δ

ϵ1M

21113896 1113897

+infin

(32)

which yields a contradiction that this means (30) is not true iethe result (29) of the theorem holds and therefore the proof iscompleted

5 Numerical Results

In this section the numerical results of the proposed algorithmfor solving problem (1) are reported We denote the proposedAlgorithm 1 by CTR and compare it with some existing al-gorithms +e adaptive trust region algorithm by Fan and Pan[16] and the classical trust region algorithm [26] are denoted byATR and TTR respectively In numerical experiments allparameters of Algorithm 1 are chosen as follows μ1 01μ2 09 η1 025 η2 3 ϵ 10minus 5 and c0 Δ0 1 δ 09We choose ϵprime 10minus 6 in Algorithm 2 Steihaug method [27] isemployed to obtain the approximative trial step dk for solvingthe trust region subproblem (6) We will terminate algorithmsif the number of iterations is larger than 2000

All codes were implemented in MATLAB R2010b +enumerical experiments were performed on a PC with anIntel Pentium (R) Dual-Core CPU at 320GHz and 200GBof RAM and using the Windows 7 operating system

We list the test problems in Table 1 that can also be found in[28] In Tables 1ndash3 ldquoNordquo denotes the number of the testproblem ldquoProblem namerdquo denotes the name of the testproblem ldquox0rdquo denotes the initial point ldquonrdquo denotes the di-mension of the problem ldquoNIrdquo is the total number of iterationsldquoNFrdquo is the number of the function evaluations ldquoFkrdquo refers tothe optimum function values and ldquoλmeanrdquo denotes the meanvalue of λ It is worth mentioning that ldquolowastrdquo represents thestopping of the algorithms in situations that the number ofiterations exceeds the maximal iterations but the terminalconditions cannot be satisfied yet

It is shown from Tables 2 and 3 that the proposed al-gorithm is efficient to solve these test problems and iscompetitive with the other two algorithms in NI and NF formost problems For problems 1 2 16 19 and 20 λmean is

Table 1 Test problems

No Problem name x0

1 Exponential function 1 (n(n minus 1) n(n minus 1))

2 Exponential function 2 (1n2 1n2)3 Extended Rosenbrock function (5 1 5 1)

4 Trigonometric function (101100n 101100n)

5 Singular function (1 1)

6 Logarithmic function (1 1)

7 Broyden tridiagonal function (minus 1 minus 1)

8 Trigexp function (0 0)

9 Function 15 (minus 1 minus 1)

10 Strictly convex function 1 (1n 1n)

11 Strictly convex function 2 (1 1)

12 Zero Jacobian function (100(n minus 100)n (n minus 1000)(n minus 500)360n2 (n minus 1000)(n minus 500)360n2)

13 Linear function-full rank (100 100)

14 Penalty function (13 13)

15 Brown almost-linear function (12 12)

16 Extended Powell singular function (15 times 10minus 4 15 times 10minus 4)

17 Tridiagonal system (12 12)

18 Extended Freudenstein and Roth function (6 3 6 3)

19 Discrete boundary value problem minus n(n + 1)2 i minus n minus 1(n + 1)2 minus 1(n + 1)2

20 Troesch problem (12 12)

Mathematical Problems in Engineering 5

Table 2 Numerical results

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

1500 67 318E minus 06 332E minus 07 78 673E minus 06 67 318E minus 061000 56 895E minus 06 332E minus 07 78 474E minus 06 56 895E minus 062000 56 632E minus 06 332E minus 07 78 334E minus 06 56 632E minus 06

2500 23 350E minus 06 157E minus 06 23 338E minus 06 23 338E minus 061000 23 667E minus 06 332E minus 07 23 669E minus 06 23 669E minus 062000 34 913E minus 06 332E minus 07 38 839E minus 06 35 844E minus 06

3500 1216 216E minus 07 331E minus 01 1220 389E minus 06 1621 944E minus 101000 1115 592E minus 07 296E minus 01 1220 551E minus 06 1621 139E minus 092000 1115 178E minus 07 350E minus 01 1220 782E minus 06 1621 184E minus 09

4 100 911 733E minus 06 528E minus 01 20012293 303E minus 03lowast 20012270 348E minus 03lowast

5500 2225 439E minus 06 296E minus 01 1819 584E minus 06 1920 832E minus 061000 2527 637E minus 06 435E minus 01 2358 568E minus 06 2324 433E minus 062000 2428 524E minus 06 450E minus 01 1920 553E minus 06 2123 523E minus 06

6500 56 212E minus 06 981E minus 04 56 491E minus 08 78 274E minus 081000 56 144E minus 06 505E minus 01 56 191E minus 07 78 158E minus 072000 67 769E minus 07 608E minus 01 56 756E minus 08 78 508E minus 08

7500 67 257E minus 09 159E minus 01 67 151E minus 09 67 428E minus 111000 67 263E minus 06 163E minus 01 67 103E minus 08 67 383E minus 062000 67 550E minus 06 226E minus 01 67 260E minus 08 67 543E minus 06

8 100 1416 431E minus 08 292E minus 01 20012281 376E + 00lowast 20012268 376E + 00lowast9 1000 1418 542E minus 06 146E minus 02 1420 276E minus 07 1622 136E minus 09

10500 67 554E minus 08 414E minus 01 67 133E minus 07 67 109E minus 071000 67 604E minus 08 371E minus 01 67 249E minus 07 67 991E minus 072000 67 116E minus 07 447E minus 01 67 433E minus 07 78 232E minus 07

11 100 89 322E minus 07 221E minus 01 89 338E minus 08 89 115E minus 06

12500 56 362E minus 12 836E minus 01 1617 581E minus 06 1819 596E minus 061000 56 415E minus 11 882E minus 01 1617 738E minus 06 1819 931E minus 062000 68 342E minus 07 829E minus 01 1617 823E minus 06 1920 282E minus 06

13500 910 241E minus 07 719E minus 01 45 767E minus 06 910 104E minus 081000 910 393E minus 07 762E minus 01 45 832E minus 06 910 337E minus 082000 1011 701E minus 06 730E minus 01 56 100E minus 10 1011 135E minus 07

14500 45 696E minus 06 109E minus 02 710 522E minus 06 56 534E minus 061000 1213 179E minus 07 177E minus 01 810 239E minus 06 68 265E minus 062000 1011 539E minus 06 115E minus 01 1116 170E minus 06 1011 532E minus 07

15500 34 285E minus 09 361E minus 01 34 248E minus 06 45 249E minus 061000 45 132E minus 07 556E minus 01 34 634E minus 07 67 880E minus 082000 45 140E minus 08 504E minus 01 45 374E minus 07 67 363E minus 07

16500 12 110E minus 06 332E minus 07 12 110E minus 06 12 110E minus 061000 12 155E minus 06 332E minus 07 12 155E minus 06 12 155E minus 062000 12 220E minus 06 332E minus 07 12 220E minus 06 12 220E minus 06

17 1000 3444 922E minus 09 358E minus 01 90219 685E minus 09 6375 489E minus 10

Table 3 Numerical results (continued)

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

18500 68 357E minus 07 413E minus 01 78 584E minus 08 78 612E minus 091000 810 114E minus 06 317E minus 01 78 787E minus 08 89 710E minus 082000 810 308E minus 06 317E minus 01 78 107E minus 07 89 229E minus 08

19500 23 977E minus 09 332E minus 07 34 677E minus 09 23 977E minus 091000 23 910E minus 10 332E minus 07 34 617E minus 10 23 909E minus 102000 23 447E minus 10 332E minus 07 34 302E minus 10 23 447E minus 10

20 1000 45 190E minus 07 332E minus 07 89 894E minus 08 45 187E minus 07

6 Mathematical Problems in Engineering

close to zero which implies that the trust region directionwas used many times Moreover the number of iteration isthe same as that of algorithm TTR which means that ouralgorithm speeds up the local convergence of the iterates Itcan be seen that λmean is not too large for other test problemsHowever for problems 4 and 8 algorithms ATR and TTRare not efficient to get the optimal solution when NIgt 2000We used the performance profiles by Dolan and More [29]to compare these algorithms with respect to NI and NF

again as shown in Figures 1 and 2 +erefore it is notedfrom Tables 2 and 3 and Figures 1 and 2 that the proposedalgorithm is well promising and has better performance intest problems in comparison with two existing algorithmsIn order to investigate the behavior of the convergence rateof three algorithms we plot the curve 0 minus k minus Lgxk minus xlowast

on problem 10 with n 1000 and n 2000 in Figure 3where xlowast is the optimal point Figure 3 indicates that Al-gorithm 1 converges faster than others

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 1 Performance profiles of the algorithms for the test problems (NI)

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 2 Performance profiles of the algorithms for the test problems (NF)

Mathematical Problems in Engineering 7

6 Conclusions

In this paper we presented a Cauchy point direction trustregion algorithm for solving the nonlinear system ofequations +is algorithm combines the trust region di-rection and the Cauchy point direction with the descentproperty and the automatic trust region +e optimalconvex combination parameters were determined by usingthe gold section algorithm which is an exact one-dimen-sional linear search algorithm +e new algorithm wasproven to have the descent property and the trust regionproperty We also showed the global convergence of theproposed algorithm under suitable condition More im-portantly the numerical results demonstrated the fasterconvergence of the proposed algorithm over two existingmethods which is very promising to solve large-scalenonlinear equations However the convergence rate of theproposed algorithm remains unclear +erefore thedemonstration of the theoretical convergence rate of theproposed algorithm application in image restorationproblems and compressive sensing will be the topicsamong others in our future study

Data Availability

+e data used to support the findings of this study are in-cluded within the article

Conflicts of Interest

+e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

+is work was supported by the National Natural ScienceFoundation of China (11261006 and 11661009) and the GuangxiNatural Science Foundation (2020GXNSFAA159069)

References

[1] K Meintjes and A P Morgan ldquoChemical equilibrium systemsas numerical test problemsrdquo ACM Transactions on Mathe-matical Software vol 16 no 2 pp 143ndash151 1990

[2] A A Levin V Chistyakov E A Tairov and V F ChistyakovldquoOn application of the structure of the nonlinear equationssystem describing hydraulic circuits of power plants incomputationsrdquo Bulletin of the South Ural State UniversitySeries ldquoMathematical Modelling Programming and ComputerSoftware vol 9 no 4 pp 53ndash62 2016

[3] F Georgiades ldquoNonlinear equations of motion of L-shapedbeam structuresrdquo European Journal of MechanicsmdashASolidsvol 65 pp 91ndash122 2017

[4] G Yuan T Li and W Hu ldquoA conjugate gradient algorithmfor large-scale nonlinear equations and image restorationproblemsrdquo Applied Numerical Mathematics vol 147pp 129ndash141 2020

[5] Y Xiao and H Zhu ldquoA conjugate gradient method to solveconvex constrained monotone equations with applications incompressive sensingrdquo Journal of Mathematical Analysis andApplications vol 405 no 1 pp 310ndash319 2013

[6] Z Papp and S Rapajic ldquoFR type methods for systems of large-scale nonlinear monotone equationsrdquo Applied Mathematicsand Computation vol 269 pp 816ndash823 2015

[7] W Zhou and F Wang ldquoA PRP-based residual method forlarge-scale monotone nonlinear equationsrdquo Applied Mathe-matics and Computation vol 261 pp 1ndash7 2015

1 2 3 4 5 6 7ndash18

ndash16

ndash14

ndash12

ndash10

ndash8

ndash6

ndash4

ndash2

0

2

4

kNo10 strictly convex function 1 (n = 1000)

Log 1

0 ||x

k ndash x

lowast||

CTRATRTTR

(a)

ndash18

ndash16

ndash14

ndash12

ndash10

ndash8

ndash6

ndash4

ndash2

0

2

4

Log 1

0 ||x

k ndash x

lowast||

1 2 3 4 5 6 7 8kNo10 strictly convex function 1 (n = 2000)

CTRATRTTR

(b)

Figure 3 +e behavior of the convergence (a) n 1000 and (b) n 2000

8 Mathematical Problems in Engineering

[8] G Yuan and M Zhang ldquoA three-terms Polak-Ribiere-Polyakconjugate gradient algorithm for large-scale nonlinearequationsrdquo Journal of Computational and Applied Mathe-matics vol 286 pp 186ndash195 2015

[9] G Yuan Z Meng and Y Li ldquoA modified Hestenes and Stiefelconjugate gradient algorithm for large-scale nonsmoothminimizations and nonlinear equationsrdquo Journal of Optimi-zation 9eory and Applications vol 168 no 1 pp 129ndash1522016

[10] J Fan and J Pan ldquoConvergence properties of a self-adaptiveLevenberg-Marquardt algorithm under local error boundconditionrdquo Computational Optimization and Applicationsvol 34 no 1 pp 47ndash62 2006

[11] J Fan ldquoAccelerating the modified Levenberg-Marquardtmethod for nonlinear equationsrdquo Mathematics of Compu-tation vol 83 no 287 pp 1173ndash1187 2013

[12] X Yang ldquoA higher-order Levenberg-Marquardt method fornonlinear equationsrdquoAppliedMathematics and Computationvol 219 no 22 pp 10682ndash10694 2013

[13] K Amini and F Rostami ldquoA modified two steps Levenberg-Marquardt method for nonlinear equationsrdquo Journal ofComputational and Applied Mathematics vol 288 pp 341ndash350 2015

[14] J-l Zhang and Y Wang ldquoA new trust region method fornonlinear equationsrdquo Mathematical Methods of OperationsResearch (ZOR) vol 58 no 2 pp 283ndash298 2003

[15] J Fan ldquoConvergence rate of the trust region method fornonlinear equations under local error bound conditionrdquoComputational Optimization and Applications vol 34 no 2pp 215ndash227 2006

[16] J Fan and J Pan ldquoAn improved trust region algorithm fornonlinear equationsrdquo Computational Optimization and Ap-plications vol 48 no 1 pp 59ndash70 2011

[17] G Yuan Z Wei and X Lu ldquoA BFGS trust-region method fornonlinear equationsrdquo Computing vol 92 no 4 pp 317ndash3332011

[18] H Esmaeili and M Kimiaei ldquoA new adaptive trust-regionmethod for system of nonlinear equationsrdquo Applied Mathe-matical Modelling vol 38 no 11-12 pp 3003ndash3015 2014

[19] H Esmaeili and M Kimiaei ldquoAn efficient adaptive trust-region method for systems of nonlinear equationsrdquo Inter-national Journal of Computer Mathematics vol 92 no 1pp 151ndash166 2015

[20] M Kimiaei and H Esmaeili ldquoA trust-region approach withnovel filter adaptive radius for system of nonlinear equationsrdquoNumerical Algorithms vol 73 no 4 pp 999ndash1016 2016

[21] H Esmaeili and M Kimiaei ldquoA trust-region method withimproved adaptive radius for systems of nonlinear equationsrdquoMathematical Methods of Operations Research vol 83 no 1pp 109ndash125 2016

[22] M J D Powell ldquoConvergence properties of a class of min-imization algorithmsrdquo in Nonlinear Programming pp 1ndash27Elsevier Amsterdam Netherlands 1975

[23] L Qi X J Tong and D H Li ldquoActive-set projected trust-region algorithm for box-constrained nonsmooth equationsrdquoJournal of Optimization 9eory and Applications vol 120no 3 pp 601ndash625 2004

[24] G Yuan Z Wei and Z Wang ldquoGradient trust region al-gorithm with limited memory BFGS update for nonsmoothconvex minimizationrdquo Computational Optimization andApplications vol 54 no 1 pp 45ndash64 2013

[25] Y X Yuan and W Sun Optimization 9eory and MethodsScience Press Beijing China 1997

[26] A R Conn N I M Gould and P L Toint Trust-regionMethods Society for Industrial and Applied MathematicsPhiladelphia PA USA 2000

[27] T Steihaug ldquo+e conjugate gradient method and trust regionsin large scale optimizationrdquo SIAM Journal on NumericalAnalysis vol 20 no 3 pp 626ndash637 1983

[28] W La Cruz J M Martınez andM Raydan ldquoSpectral residualmethod without gradient information for solving large-scalenonlinear systems of equationsrdquo Mathematics of Computa-tion vol 75 no 255 pp 1429ndash1449 2006

[29] E D Dolan and J J More ldquoBenchmarking optimizationsoftware with performance profilesrdquo Mathematical Pro-gramming vol 91 no 2 pp 201ndash213 2002

Mathematical Problems in Engineering 9

Page 5: A Cauchy Point Direction Trust Region Algorithm for ...downloads.hindawi.com/journals/mpe/2020/4072651.pdf · A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

and therefore rk⟶ 1 which means that rk ge μ2 when Δk issufficiently small However the definition of Δk in Step 5 ofAlgorithm 1 contradicts the fact that Δk⟶ 0 Hence thecycling between Step 3 and Step 6 terminates finitely +iscompletes the proof

We can get the global convergence of Algorithm 1 basedon the above lemmas

Theorem 1 Suppose that Assumption 1 holds and xk1113864 1113865 begenerated by Algorithm 1 then

lim infk⟶infin

JTk Fk

0 (29)

Proof Suppose that the conclusion is not true then thereexists a positive constant ϵ1 gt 0 such that for any k

JTk Fk

ge ϵ1 (30)

From Lemma 2 Assumption 1 and the above inequa-tion we have

Predk ge12

JTk Fk

min Δk

JTk Fk

JTk Jk

⎧⎨

⎫⎬

⎭ ge12ϵ1 min Δk

ϵ1M

21113896 1113897 (31)

Noting the updating rule of Δk rk ge μ1 (31) and Lemma4 we deduce that

+infinge 1113944infin

k1

12

Fk

2

minus12

Fk+1

2

1113874 1113875

ge 1113944infin

k1μ1Predk

ge 1113944

infin

k1

12μ1ϵ1 min Δk

ϵ1M

21113896 1113897

ge 1113944infin

k1

12μ1ϵ1 min Δ

ϵ1M

21113896 1113897

+infin

(32)

which yields a contradiction that this means (30) is not true iethe result (29) of the theorem holds and therefore the proof iscompleted

5 Numerical Results

In this section the numerical results of the proposed algorithmfor solving problem (1) are reported We denote the proposedAlgorithm 1 by CTR and compare it with some existing al-gorithms +e adaptive trust region algorithm by Fan and Pan[16] and the classical trust region algorithm [26] are denoted byATR and TTR respectively In numerical experiments allparameters of Algorithm 1 are chosen as follows μ1 01μ2 09 η1 025 η2 3 ϵ 10minus 5 and c0 Δ0 1 δ 09We choose ϵprime 10minus 6 in Algorithm 2 Steihaug method [27] isemployed to obtain the approximative trial step dk for solvingthe trust region subproblem (6) We will terminate algorithmsif the number of iterations is larger than 2000

All codes were implemented in MATLAB R2010b +enumerical experiments were performed on a PC with anIntel Pentium (R) Dual-Core CPU at 320GHz and 200GBof RAM and using the Windows 7 operating system

We list the test problems in Table 1 that can also be found in[28] In Tables 1ndash3 ldquoNordquo denotes the number of the testproblem ldquoProblem namerdquo denotes the name of the testproblem ldquox0rdquo denotes the initial point ldquonrdquo denotes the di-mension of the problem ldquoNIrdquo is the total number of iterationsldquoNFrdquo is the number of the function evaluations ldquoFkrdquo refers tothe optimum function values and ldquoλmeanrdquo denotes the meanvalue of λ It is worth mentioning that ldquolowastrdquo represents thestopping of the algorithms in situations that the number ofiterations exceeds the maximal iterations but the terminalconditions cannot be satisfied yet

It is shown from Tables 2 and 3 that the proposed al-gorithm is efficient to solve these test problems and iscompetitive with the other two algorithms in NI and NF formost problems For problems 1 2 16 19 and 20 λmean is

Table 1 Test problems

No Problem name x0

1 Exponential function 1 (n(n minus 1) n(n minus 1))

2 Exponential function 2 (1n2 1n2)3 Extended Rosenbrock function (5 1 5 1)

4 Trigonometric function (101100n 101100n)

5 Singular function (1 1)

6 Logarithmic function (1 1)

7 Broyden tridiagonal function (minus 1 minus 1)

8 Trigexp function (0 0)

9 Function 15 (minus 1 minus 1)

10 Strictly convex function 1 (1n 1n)

11 Strictly convex function 2 (1 1)

12 Zero Jacobian function (100(n minus 100)n (n minus 1000)(n minus 500)360n2 (n minus 1000)(n minus 500)360n2)

13 Linear function-full rank (100 100)

14 Penalty function (13 13)

15 Brown almost-linear function (12 12)

16 Extended Powell singular function (15 times 10minus 4 15 times 10minus 4)

17 Tridiagonal system (12 12)

18 Extended Freudenstein and Roth function (6 3 6 3)

19 Discrete boundary value problem minus n(n + 1)2 i minus n minus 1(n + 1)2 minus 1(n + 1)2

20 Troesch problem (12 12)

Mathematical Problems in Engineering 5

Table 2 Numerical results

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

1500 67 318E minus 06 332E minus 07 78 673E minus 06 67 318E minus 061000 56 895E minus 06 332E minus 07 78 474E minus 06 56 895E minus 062000 56 632E minus 06 332E minus 07 78 334E minus 06 56 632E minus 06

2500 23 350E minus 06 157E minus 06 23 338E minus 06 23 338E minus 061000 23 667E minus 06 332E minus 07 23 669E minus 06 23 669E minus 062000 34 913E minus 06 332E minus 07 38 839E minus 06 35 844E minus 06

3500 1216 216E minus 07 331E minus 01 1220 389E minus 06 1621 944E minus 101000 1115 592E minus 07 296E minus 01 1220 551E minus 06 1621 139E minus 092000 1115 178E minus 07 350E minus 01 1220 782E minus 06 1621 184E minus 09

4 100 911 733E minus 06 528E minus 01 20012293 303E minus 03lowast 20012270 348E minus 03lowast

5500 2225 439E minus 06 296E minus 01 1819 584E minus 06 1920 832E minus 061000 2527 637E minus 06 435E minus 01 2358 568E minus 06 2324 433E minus 062000 2428 524E minus 06 450E minus 01 1920 553E minus 06 2123 523E minus 06

6500 56 212E minus 06 981E minus 04 56 491E minus 08 78 274E minus 081000 56 144E minus 06 505E minus 01 56 191E minus 07 78 158E minus 072000 67 769E minus 07 608E minus 01 56 756E minus 08 78 508E minus 08

7500 67 257E minus 09 159E minus 01 67 151E minus 09 67 428E minus 111000 67 263E minus 06 163E minus 01 67 103E minus 08 67 383E minus 062000 67 550E minus 06 226E minus 01 67 260E minus 08 67 543E minus 06

8 100 1416 431E minus 08 292E minus 01 20012281 376E + 00lowast 20012268 376E + 00lowast9 1000 1418 542E minus 06 146E minus 02 1420 276E minus 07 1622 136E minus 09

10500 67 554E minus 08 414E minus 01 67 133E minus 07 67 109E minus 071000 67 604E minus 08 371E minus 01 67 249E minus 07 67 991E minus 072000 67 116E minus 07 447E minus 01 67 433E minus 07 78 232E minus 07

11 100 89 322E minus 07 221E minus 01 89 338E minus 08 89 115E minus 06

12500 56 362E minus 12 836E minus 01 1617 581E minus 06 1819 596E minus 061000 56 415E minus 11 882E minus 01 1617 738E minus 06 1819 931E minus 062000 68 342E minus 07 829E minus 01 1617 823E minus 06 1920 282E minus 06

13500 910 241E minus 07 719E minus 01 45 767E minus 06 910 104E minus 081000 910 393E minus 07 762E minus 01 45 832E minus 06 910 337E minus 082000 1011 701E minus 06 730E minus 01 56 100E minus 10 1011 135E minus 07

14500 45 696E minus 06 109E minus 02 710 522E minus 06 56 534E minus 061000 1213 179E minus 07 177E minus 01 810 239E minus 06 68 265E minus 062000 1011 539E minus 06 115E minus 01 1116 170E minus 06 1011 532E minus 07

15500 34 285E minus 09 361E minus 01 34 248E minus 06 45 249E minus 061000 45 132E minus 07 556E minus 01 34 634E minus 07 67 880E minus 082000 45 140E minus 08 504E minus 01 45 374E minus 07 67 363E minus 07

16500 12 110E minus 06 332E minus 07 12 110E minus 06 12 110E minus 061000 12 155E minus 06 332E minus 07 12 155E minus 06 12 155E minus 062000 12 220E minus 06 332E minus 07 12 220E minus 06 12 220E minus 06

17 1000 3444 922E minus 09 358E minus 01 90219 685E minus 09 6375 489E minus 10

Table 3 Numerical results (continued)

No nCTR ATR TTR

NINF Fk λmean NINF Fk NINF Fk

18500 68 357E minus 07 413E minus 01 78 584E minus 08 78 612E minus 091000 810 114E minus 06 317E minus 01 78 787E minus 08 89 710E minus 082000 810 308E minus 06 317E minus 01 78 107E minus 07 89 229E minus 08

19500 23 977E minus 09 332E minus 07 34 677E minus 09 23 977E minus 091000 23 910E minus 10 332E minus 07 34 617E minus 10 23 909E minus 102000 23 447E minus 10 332E minus 07 34 302E minus 10 23 447E minus 10

20 1000 45 190E minus 07 332E minus 07 89 894E minus 08 45 187E minus 07

6 Mathematical Problems in Engineering

close to zero which implies that the trust region directionwas used many times Moreover the number of iteration isthe same as that of algorithm TTR which means that ouralgorithm speeds up the local convergence of the iterates Itcan be seen that λmean is not too large for other test problemsHowever for problems 4 and 8 algorithms ATR and TTRare not efficient to get the optimal solution when NIgt 2000We used the performance profiles by Dolan and More [29]to compare these algorithms with respect to NI and NF

again as shown in Figures 1 and 2 +erefore it is notedfrom Tables 2 and 3 and Figures 1 and 2 that the proposedalgorithm is well promising and has better performance intest problems in comparison with two existing algorithmsIn order to investigate the behavior of the convergence rateof three algorithms we plot the curve 0 minus k minus Lgxk minus xlowast

on problem 10 with n 1000 and n 2000 in Figure 3where xlowast is the optimal point Figure 3 indicates that Al-gorithm 1 converges faster than others

1 15 2 25 301

02

03

04

05

06

07

08

09

1

τ

Ppr

(p s

) le τ

CTRATRTTR

Figure 1 Performance profiles of the algorithms for the test problems (NI)

Figure 2: Performance profiles of the algorithms (CTR, ATR, and TTR) for the test problems with respect to NF; the horizontal axis is τ ∈ [1, 3] and the vertical axis is P(r_{p,s} ≤ τ).


6. Conclusions

In this paper, we presented a Cauchy point direction trust region algorithm for solving nonlinear systems of equations. The algorithm combines the trust region direction and the Cauchy point direction, and the resulting search direction possesses the descent property and the automatic trust region property. The optimal convex combination parameter was determined by the golden section algorithm, an exact one-dimensional line search. The new algorithm was proven to have the descent property and the trust region property, and its global convergence was established under suitable conditions. More importantly, the numerical results demonstrated faster convergence of the proposed algorithm than that of two existing methods, which makes it very promising for solving large-scale nonlinear equations. However, the local convergence rate of the proposed algorithm remains unclear. Therefore, the theoretical convergence rate of the proposed algorithm and its application to image restoration and compressive sensing problems will be among the topics of our future study.
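For readers who want a concrete picture of the one-dimensional search mentioned above, the following is a minimal golden section search over the combination parameter λ ∈ [0, 1]. The objective φ(λ) shown here (the residual norm along the convex combination of two candidate directions) and all function names are illustrative assumptions; the sketch conveys the idea rather than reproducing the exact subroutine of the paper.

```python
import numpy as np

def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
    """Minimize a unimodal scalar function phi on [a, b] by golden section search."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0      # 1 / golden ratio, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):                  # minimizer lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimizer lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Illustrative use: choose lambda so that the convex combination
# lambda * d_tr + (1 - lambda) * d_cp of a trust region direction d_tr and
# a Cauchy point direction d_cp gives the smallest residual norm at x.
# F, x, d_tr, and d_cp are assumed to be supplied by the surrounding solver.
def best_lambda(F, x, d_tr, d_cp):
    phi = lambda lam: np.linalg.norm(F(x + lam * d_tr + (1.0 - lam) * d_cp))
    return golden_section(phi)
```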

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11261006 and 11661009) and the Guangxi Natural Science Foundation (2020GXNSFAA159069).

References

[1] K. Meintjes and A. P. Morgan, "Chemical equilibrium systems as numerical test problems," ACM Transactions on Mathematical Software, vol. 16, no. 2, pp. 143–151, 1990.

[2] A. A. Levin, V. Chistyakov, E. A. Tairov, and V. F. Chistyakov, "On application of the structure of the nonlinear equations system describing hydraulic circuits of power plants in computations," Bulletin of the South Ural State University, Series "Mathematical Modelling, Programming and Computer Software", vol. 9, no. 4, pp. 53–62, 2016.

[3] F. Georgiades, "Nonlinear equations of motion of L-shaped beam structures," European Journal of Mechanics—A/Solids, vol. 65, pp. 91–122, 2017.

[4] G. Yuan, T. Li, and W. Hu, "A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems," Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.

[5] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.

[6] Z. Papp and S. Rapajic, "FR type methods for systems of large-scale nonlinear monotone equations," Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.

[7] W. Zhou and F. Wang, "A PRP-based residual method for large-scale monotone nonlinear equations," Applied Mathematics and Computation, vol. 261, pp. 1–7, 2015.

Figure 3: The behavior of the convergence on problem 10 (strictly convex function 1), plotting log10‖xk − x*‖ against the iteration number k for CTR, ATR, and TTR: (a) n = 1000 and (b) n = 2000.


[8] G. Yuan and M. Zhang, "A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations," Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.

[9] G. Yuan, Z. Meng, and Y. Li, "A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations," Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.

[10] J. Fan and J. Pan, "Convergence properties of a self-adaptive Levenberg–Marquardt algorithm under local error bound condition," Computational Optimization and Applications, vol. 34, no. 1, pp. 47–62, 2006.

[11] J. Fan, "Accelerating the modified Levenberg–Marquardt method for nonlinear equations," Mathematics of Computation, vol. 83, no. 287, pp. 1173–1187, 2013.

[12] X. Yang, "A higher-order Levenberg–Marquardt method for nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 22, pp. 10682–10694, 2013.

[13] K. Amini and F. Rostami, "A modified two steps Levenberg–Marquardt method for nonlinear equations," Journal of Computational and Applied Mathematics, vol. 288, pp. 341–350, 2015.

[14] J.-L. Zhang and Y. Wang, "A new trust region method for nonlinear equations," Mathematical Methods of Operations Research (ZOR), vol. 58, no. 2, pp. 283–298, 2003.

[15] J. Fan, "Convergence rate of the trust region method for nonlinear equations under local error bound condition," Computational Optimization and Applications, vol. 34, no. 2, pp. 215–227, 2006.

[16] J. Fan and J. Pan, "An improved trust region algorithm for nonlinear equations," Computational Optimization and Applications, vol. 48, no. 1, pp. 59–70, 2011.

[17] G. Yuan, Z. Wei, and X. Lu, "A BFGS trust-region method for nonlinear equations," Computing, vol. 92, no. 4, pp. 317–333, 2011.

[18] H. Esmaeili and M. Kimiaei, "A new adaptive trust-region method for system of nonlinear equations," Applied Mathematical Modelling, vol. 38, no. 11-12, pp. 3003–3015, 2014.

[19] H. Esmaeili and M. Kimiaei, "An efficient adaptive trust-region method for systems of nonlinear equations," International Journal of Computer Mathematics, vol. 92, no. 1, pp. 151–166, 2015.

[20] M. Kimiaei and H. Esmaeili, "A trust-region approach with novel filter adaptive radius for system of nonlinear equations," Numerical Algorithms, vol. 73, no. 4, pp. 999–1016, 2016.

[21] H. Esmaeili and M. Kimiaei, "A trust-region method with improved adaptive radius for systems of nonlinear equations," Mathematical Methods of Operations Research, vol. 83, no. 1, pp. 109–125, 2016.

[22] M. J. D. Powell, "Convergence properties of a class of minimization algorithms," in Nonlinear Programming, pp. 1–27, Elsevier, Amsterdam, Netherlands, 1975.

[23] L. Qi, X. J. Tong, and D. H. Li, "Active-set projected trust-region algorithm for box-constrained nonsmooth equations," Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 601–625, 2004.

[24] G. Yuan, Z. Wei, and Z. Wang, "Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization," Computational Optimization and Applications, vol. 54, no. 1, pp. 45–64, 2013.

[25] Y. X. Yuan and W. Sun, Optimization Theory and Methods, Science Press, Beijing, China, 1997.

[26] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[27] T. Steihaug, "The conjugate gradient method and trust regions in large scale optimization," SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.

[28] W. La Cruz, J. M. Martínez, and M. Raydan, "Spectral residual method without gradient information for solving large-scale nonlinear systems of equations," Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.

[29] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
