
Neurocomputing 74 (2011) 1710–1719


Performance analysis of gradient neural network exploited for online time-varying quadratic minimization and equality-constrained quadratic programming

Yunong Zhang*, Yiwen Yang, Gongqin Ruan

School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, China

Article info

Article history:

Received 9 December 2009

Received in revised form 18 January 2011

Accepted 7 February 2011

Communicated by J. Zhang

Available online 17 March 2011

Keywords:

Gradient neural network (GNN)

Quadratic minimization

Quadratic programming (QP)

Time-varying

Exponential convergence rate

0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2011.02.007

This work is supported by the National Natural Science Foundation of China under Grants 61075121 and 60935001, and also by the Fundamental Research Funds for the Central Universities of China.
* Corresponding author. Tel.: +86 13060687155; fax: +86 20 39943353.
E-mail addresses: [email protected], [email protected] (Y. Zhang).

Abstract

In this paper, the performance of a gradient neural network (GNN), which was designed intrinsically for solving static problems, is investigated, analyzed and simulated in the situation of time-varying coefficients. It is theoretically proved that the gradient neural network for online solution of time-varying quadratic minimization (QM) and quadratic programming (QP) problems can only approximately approach the time-varying theoretical solution, instead of converging to it exactly; that is, the steady-state error between the GNN solution and the theoretical solution cannot decrease to zero. In order to understand the situation better, the upper bound of such an error is estimated first, and then the global exponential convergence rate is investigated for such a GNN when approaching an error bound. Computer-simulation results, including those based on a six-link robot manipulator, further substantiate the performance analysis of the GNN exploited to solve online time-varying QM and QP problems.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

As an important branch of mathematical optimization, quadratic programming (QP) constrained with linear equalities [with quadratic minimization (QM) as a special case] has been theoretically analyzed [1–4] and widely applied in various scientific areas, e.g., optimal controller design [5,6], power scheduling [7], robot-arm motion planning [8–11], and digital signal processing [12]. Due to its fundamental role, many algorithms have been proposed to solve QP problems [3,13,14]. In general, numerical algorithms performed on digital computers are considered a well-accepted approach to QM and linear-equality-constrained QP problems. However, for large-scale online or real-time applications, such numerical algorithms may not be efficient enough, since the minimal number of arithmetic operations is normally proportional to the cube of the Hessian matrix's dimension n [12].

To solve optimization problems more efficiently, many parallel-processing computational methods have been proposed, analyzed, and implemented based on specific architectures, e.g., the neural-dynamic and analog solvers investigated in [15–20]. Due to the in-depth research in recurrent neural networks (RNN), such a neural-dynamic approach is now regarded as a powerful alternative for real-time computation and optimization, owing to its parallel-processing distributed nature and convenience of hardware implementation [21–29]. It is worth noting that Chua and Lin [21] developed a canonical nonlinear programming circuit (NPC) for simulating general nonlinear programs, and that Kennedy and Chua [22] explored the stability properties of such a canonical nonlinear programming circuit model with the objective function and constraints being smooth. Forti et al. [27] introduced a generalized circuit for nonsmooth programming problems, which derives from a natural extension of the NPC. Bian and Xue [28] proposed a subgradient-based neural network to solve nonsmooth nonconvex optimization problems in which the objective function is nonsmooth and nonconvex.

A number of neural-dynamic models have been proposed which are usually based on gradient-descent methods, or termed gradient neural networks (GNNs). For solving static problems, such GNNs can be proved to converge exponentially to theoretical optimal solutions [9,18,24,30]. To lay a basis for further discussion, the design of a GNN model can generally be described as follows.

• First, to solve a given problem via GNN, a scalar-valued, norm-based, nonnegative energy function e(t) ≥ 0 ∈ R is defined. Theoretically speaking, a minimum point of the energy function e(t) is achieved with e(t) = 0, corresponding to the situation that the neural solution [e.g., x(t)] is the theoretical solution (e.g., x*) of the given static problem.
• Second, a computational scheme can be designed to evolve along a descent direction of the energy function e(t), until the minimum point is reached. It is worth mentioning here that a typical descent direction is the negative gradient of e(t), i.e., −∂e/∂x.
• Third, the conventional gradient-neural-network design formula can be chosen and implemented as follows:

$$\dot{x}(t) := \frac{\mathrm{d}x(t)}{\mathrm{d}t} = -\gamma\,\frac{\partial e}{\partial x}, \qquad (1)$$

where the design parameter γ > 0 corresponds to the reciprocal of a capacitance parameter in the analog-circuit implementation of GNN (1), and should be set as large as the hardware would permit, or selected appropriately for simulative and/or experimental purposes [31,32].
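As an illustration of this three-step design procedure, the following minimal Python sketch (our own, not from the paper; the static problem data, step size, and iteration count are illustrative choices) integrates the gradient flow (1) with a simple forward-Euler rule for a static linear problem Ax = b, using the energy e = ||Ax − b||²₂/2, whose negative gradient is −Aᵀ(Ax − b):

```python
import numpy as np

# Static problem data (illustrative choice, not from the paper).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

gamma = 10.0                 # design parameter gamma > 0 in formula (1)
dt = 1e-3                    # Euler step used to simulate the continuous flow
x = np.array([5.0, -5.0])    # arbitrary initial state x(0)

for _ in range(20000):                  # simulate 20 s of the flow
    grad_e = A.T @ (A @ x - b)          # gradient of e(x) = ||Ax - b||^2 / 2
    x = x + dt * (-gamma * grad_e)      # Euler-discretized GNN flow (1)

x_star = np.linalg.solve(A, b)          # theoretical solution of the static problem
print("GNN state      :", x)
print("theoretical x* :", x_star)
print("residual error :", np.linalg.norm(x - x_star))
```

The same pattern, namely define an energy, descend its gradient, and discretize for simulation, is what the QM and QP models in the following sections instantiate.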

However, it is worth pointing out that such a GNN is designed intrinsically for solving static problems with constant coefficient matrices and vectors. When applied to time-varying matrix–vector problems, the GNN may generate a considerably large solution error, as shown in [33] by Zhang et al. Facing this less-favorable phenomenon, the authors have become interested in the underlying problems and have begun to investigate the performance of GNN-type solvers applied to time-varying quadratic optimization. The main contributions of this paper lie in the following facts.

(1) The less-favorable phenomenon of the GNN is pointed out formally and systematically; i.e., the conventional GNN, as a system, cannot exactly solve time-varying quadratic optimization problems. In other words, there always exists a nonzero steady-state solution error between the GNN solution and the time-varying theoretical solution.
(2) This paper investigates and analyzes the performance of the GNN applied to time-varying QM and QP problems. Rigorous theoretical analysis is provided for estimating the steady-state solution-error bound and the exponential convergence rate of the GNN approaching an error bound.
(3) The gain γ is widely exploited in many researchers' works. However, almost all existing works only show examples substantiating the function of this parameter for static problem solving; the gradient neural networks are then applied to time-varying problem solving, but without theoretical proof. In this paper, theoretical analysis and proof are presented for the gradient neural network solving time-varying quadratic optimization problems, including the theoretically proved important role of the design parameter γ in the time-varying quadratic optimization situation.
(4) Illustrative examples of online time-varying QM and QP problem solving substantiate well the theoretical results, e.g., those about the steady-state solution-error bound and the important roles of the design parameter γ and the solution-variation rate ζ.
(5) The application and simulation results based on a six-link robot manipulator further substantiate the performance analysis of the GNN solving time-varying QP problems.

To the best of the authors' knowledge, few studies on such a time-varying performance analysis have been published in the literature to date. Note that Myung and Kim [25] proposed a time-varying two-phase (TVTP) algorithm which can give exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. Their work is inspiring and interesting. Evidently, there are some substantial differences between Myung and Kim's pioneering work and this paper, as listed below.

(1) Myung and Kim's work considers the time-varying optimization problem when time t goes to infinity or becomes large enough. In contrast, this paper aims to find the time-varying optimal solution at any time instant.
(2) The constraints in Myung and Kim's work are static. In contrast, the coefficients of the quadratic program investigated in this paper are time-varying, i.e., with a time-varying matrix/vector constraint.
(3) Myung and Kim's work exploits a penalty-based method for handling time-varying optimization problems. In contrast, this paper exploits time-varying linear-equation solving and the Lagrange-multiplier method for handling the time-varying quadratic optimization problems at any time instant t ∈ [0, ∞).

The remainder of this paper is organized as follows. Section 2 describes the situation of using the GNN for solving the online time-varying QM problem. Online time-varying QP problem solving is further developed and investigated in Section 3. In Section 4, the authors analyze the performance of the GNN applied to time-varying QM and QP problem solving; specifically, the solution-error bound and the exponential convergence rate towards a bound are analyzed. Several illustrative examples are shown in Section 5. Section 6 further presents the application and simulation results based on a six-link planar robot manipulator. Finally, the paper is concluded with some remarks in Section 7.

2. Time-varying quadratic minimization

Consider the following time-varying quadratic minimization problem:

$$\text{minimize}\quad f(x) := x^{T}(t)P(t)x(t)/2 + q^{T}(t)x(t) \in \mathbb{R}, \qquad (2)$$

where the Hessian matrix P(t) ∈ R^{n×n} is smoothly time-varying, positive-definite and symmetric at any time instant t ∈ [0, +∞) ⊂ R, and the coefficient vector q(t) ∈ R^n is assumed to be smoothly time-varying as well. In expression (2), the unknown vector x(t) ∈ R^n is to be solved for so as to make the value of f(x) smallest at every time instant t ∈ [0, +∞).

One simple method for solving the time-varying QM problem (2) is to zero the partial derivative ∇f(x) of f(x) [12] at every time instant t; in mathematics,

$$\nabla f(x) := \frac{\partial f(x)}{\partial x} = P(t)x(t) + q(t) = 0 \in \mathbb{R}^{n}, \quad \forall t \in [0, +\infty). \qquad (3)$$

In addition, it follows from the above that the theoretical time-varying solution x*(t) ∈ R^n to (2), being the minimum point of f(x) at any time instant t, satisfies x*(t) = −P^{-1}(t)q(t). The theoretical minimum value f* := f(x*) of the time-varying quadratic function f(x) is thus f* = x*ᵀ(t)P(t)x*(t)/2 + qᵀ(t)x*(t). Note that, since such a QM problem has time-varying coefficients, the minimum value f* of f(x), together with its minimum solution x*(t), is "moving" with time t, so this time-varying QM problem (2) is considered a "moving minimum" problem to be solved online (i.e., in real time t).

According to the GNN design method described in Section 1, we can handle the time-varying QM problem depicted in (2) via the following procedure.

Step 1: The energy function is defined as e(t) = ||∇f||₂²/2 = ||P(t)x(t) + q(t)||₂²/2, where ||·||₂ denotes the two-norm of a vector.

Step 2: The design formula and dynamics of a GNN model can then be implemented as below, so as to minimize online the time-varying quadratic function xᵀ(t)P(t)x(t)/2 + qᵀ(t)x(t):

$$\dot{x}(t) = -\gamma\,\frac{\partial\,\|\nabla f\|_2^2/2}{\partial x} = -\gamma P^{T}(t)\bigl(P(t)x(t)+q(t)\bigr). \qquad (4)$$
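As a quick numerical sanity check on the gradient used in (4) (our own illustrative script, not part of the paper), the analytic expression Pᵀ(t)(P(t)x + q(t)) can be compared with a central finite-difference approximation of ∂e/∂x at a frozen time instant, here using, for concreteness, the example coefficients introduced below in (5):

```python
import numpy as np

def P(t, w=1.0):
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t)],
                     [np.sin(w * t), 0.5 * np.sin(w * t) + 2.0]])

def q(t, w=1.0):
    return np.array([np.sin(3 * w * t), np.cos(3 * w * t)])

def energy(x, t):
    r = P(t) @ x + q(t)
    return 0.5 * r @ r                      # e = ||P(t)x + q(t)||^2 / 2

t0 = 0.7                                    # frozen time instant (arbitrary)
x0 = np.array([0.3, -1.2])                  # arbitrary test point

analytic = P(t0).T @ (P(t0) @ x0 + q(t0))   # gradient used in GNN (4)

eps = 1e-6
numeric = np.zeros(2)
for i in range(2):
    d = np.zeros(2); d[i] = eps
    numeric[i] = (energy(x0 + d, t0) - energy(x0 - d, t0)) / (2 * eps)

print("analytic gradient :", analytic)
print("finite difference :", numeric)
```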

After developing the GNN model (4) for QM problem solving, we show an illustrative computer-simulation example to demonstrate the convergence characteristics of GNN (4).

[Fig. 1. Time-varying quadratic minimization (2) via GNN (4) with γ = 1; panels show x₁(t) and x₂(t) versus t (s).]

[Fig. 2. Time-varying quadratic function value f(x) minimized by GNN (4) with γ = 1, versus t (s).]


Considering the following coefficients P(t) ∈ R^{2×2} and q(t) ∈ R², with ω = 1 tested here:

$$P(t) = \begin{bmatrix} 0.5\cos\omega t + 2 & \sin\omega t \\ \sin\omega t & 0.5\sin\omega t + 2 \end{bmatrix}, \qquad q(t) = \begin{bmatrix} \sin 3\omega t \\ \cos 3\omega t \end{bmatrix}, \qquad (5)$$

and correspondingly x(t) = [x₁(t), x₂(t)]ᵀ ∈ R². As shown in Fig. 1, starting from eight randomly generated initial states x(0) = [x₁(0), x₂(0)]ᵀ, the GNN solution (denoted by blue solid curves) does not fit well with the time-varying theoretical solution (denoted by red dotted curves). The online minimization effect of the GNN on the time-varying quadratic function f(x) is depicted in Fig. 2; i.e., the GNN-based f(x) value is generally larger than the theoretical minimum value f(x*) of the time-varying quadratic function f(x).
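A rough numerical re-creation of this experiment can be sketched as follows (our own illustration under a forward-Euler discretization assumption, not the authors' original simulation code): it integrates GNN (4) with the coefficients (5) and records the solution error ||x(t) − x*(t)||₂, where x*(t) = −P⁻¹(t)q(t).

```python
import numpy as np

w = 1.0        # omega in (5)
gamma = 1.0    # design parameter value used for Figs. 1 and 2

def P(t):
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t)],
                     [np.sin(w * t), 0.5 * np.sin(w * t) + 2.0]])

def q(t):
    return np.array([np.sin(3 * w * t), np.cos(3 * w * t)])

dt, T = 1e-3, 10.0
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=2)          # one random initial state x(0)

ts = np.arange(0.0, T, dt)
err = np.empty(ts.size)
for k, t in enumerate(ts):
    x_star = -np.linalg.solve(P(t), q(t))                # time-varying theoretical solution
    err[k] = np.linalg.norm(x - x_star)
    x = x + dt * (-gamma * P(t).T @ (P(t) @ x + q(t)))   # GNN (4), Euler step

print("steady-state error (last 2 s, mean): %.3f" % err[ts > T - 2.0].mean())
```

The printed steady-state error remains visibly nonzero, which is the behavior Figs. 1 and 2 illustrate.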

3. Time-varying quadratic programming

As we know, minimizing the quadratic function (2) may not be enough to describe practical problems in some fields. For instance, in the application to robot-arm motion planning [8,10,17,31], the consideration of end-effector path tracking yields a time-varying QP formulation which is subject to a time-varying linear-equality constraint. In this section, let us consider the following time-varying convex quadratic programming problem subject to such a constraint:

$$\text{minimize}\quad x^{T}(t)P(t)x(t)/2 + q^{T}(t)x(t), \qquad (6)$$
$$\text{subject to}\quad A(t)x(t) = b(t). \qquad (7)$$

In QP (6) and (7), the time-varying decision vector x(t) ∈ Rⁿ is unknown and to be solved for at any time instant t ∈ [0, +∞). In addition to the coefficients described in Section 2, in equality constraint (7), the coefficient matrix A(t) ∈ R^{m×n}, being of full row rank, and the vector b(t) ∈ Rᵐ are both assumed to be smoothly time-varying.

Facing the time-varying quadratic program (6) and (7), and based on preliminary results on equality-constrained optimization problems [3,8,34], we have its related Lagrangian

$$L\bigl(x(t), \lambda(t), t\bigr) = x^{T}(t)P(t)x(t)/2 + q^{T}(t)x(t) + \lambda^{T}(t)\bigl(A(t)x(t) - b(t)\bigr),$$

where λ(t) ∈ Rᵐ denotes the Lagrange-multiplier vector. As we may recognize, the time-varying quadratic program (6) and (7) can then be solved by zeroing the equations below at any time instant t ∈ [0, +∞):

$$\frac{\partial L(x(t),\lambda(t),t)}{\partial x(t)} = P(t)x(t) + q(t) + A^{T}(t)\lambda(t), \qquad \frac{\partial L(x(t),\lambda(t),t)}{\partial \lambda(t)} = A(t)x(t) - b(t).$$

This can be further written as

$$\tilde{P}(t)\,\tilde{x}(t) = -\tilde{q}(t), \qquad (8)$$

where

$$\tilde{P}(t) := \begin{bmatrix} P(t) & A^{T}(t) \\ A(t) & 0 \end{bmatrix} \in \mathbb{R}^{(n+m)\times(n+m)}, \quad \tilde{x}(t) := \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} \in \mathbb{R}^{n+m}, \quad \tilde{q}(t) := \begin{bmatrix} q(t) \\ -b(t) \end{bmatrix} \in \mathbb{R}^{n+m}.$$

With matrix P(t) ∈ R^{n×n} being positive-definite and matrix A(t) ∈ R^{m×n} being of full row rank at any time instant t ∈ [0, +∞), the augmented matrix P̃(t) ∈ R^{(n+m)×(n+m)} must be nonsingular at any time instant t ∈ [0, +∞), which guarantees the solution uniqueness of Eq. (8) [3,34]. Besides, for the purposes of better understanding and comparison, the time-varying theoretical solution can be written as

$$\tilde{x}^{*}(t) = [x^{*T}(t), \lambda^{*T}(t)]^{T} = -\tilde{P}^{-1}(t)\,\tilde{q}(t) \in \mathbb{R}^{n+m}.$$

It follows that the optimal solution of the time-varying QP problem is also "moving" with time t, due to the "moving" effects of the time-varying objective function and linear constraint.
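For comparison purposes, a small sketch (our own; it uses the coefficients (5) above and, as placeholders for A(t) and b(t), the choices given below in (10)) forms the augmented matrix P̃(t) and right-hand side −q̃(t) of Eq. (8) and solves it directly at one time instant, giving the "snapshot" theoretical solution x̃*(t):

```python
import numpy as np

w = 1.0

def P(t):
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t)],
                     [np.sin(w * t), 0.5 * np.sin(w * t) + 2.0]])

def q(t):
    return np.array([np.sin(3 * w * t), np.cos(3 * w * t)])

def A(t):
    return np.array([[np.sin(4 * w * t), np.cos(4 * w * t)]])   # as in (10) below

def b(t):
    return np.array([np.cos(2 * w * t)])                        # as in (10) below

def kkt_solution(t):
    """Snapshot theoretical solution x~*(t) = -P~^{-1}(t) q~(t) of Eq. (8)."""
    P_aug = np.block([[P(t), A(t).T],
                      [A(t), np.zeros((1, 1))]])
    q_aug = np.concatenate([q(t), -b(t)])
    return np.linalg.solve(P_aug, -q_aug)      # [x*(t); lambda*(t)]

sol = kkt_solution(1.5)
print("x*(1.5)      :", sol[:2])
print("lambda*(1.5) :", sol[2:])
```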

Based on an approach similar to that for the time-varying QM problem, the gradient neural network handling the time-varying QP problem (6) and (7) can be obtained as

$$\dot{\tilde{x}}(t) = -\gamma\,\tilde{P}^{T}(t)\bigl(\tilde{P}(t)\tilde{x}(t) + \tilde{q}(t)\bigr), \qquad (9)$$

or, written in a more complete form,

$$\begin{bmatrix} \dot{x}(t) \\ \dot{\lambda}(t) \end{bmatrix} = -\gamma \begin{bmatrix} P(t) & A^{T}(t) \\ A(t) & 0 \end{bmatrix}^{T} \left( \begin{bmatrix} P(t) & A^{T}(t) \\ A(t) & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix} + \begin{bmatrix} q(t) \\ -b(t) \end{bmatrix} \right).$$

With P(t) and q(t) defined in (5), a computer-simulation example concerning the convergence of GNN model (9) is presented in this section, by considering the following coefficients A(t) and b(t):

$$A(t) = [\,\sin 4\omega t \quad \cos 4\omega t\,], \qquad b(t) = \cos 2\omega t. \qquad (10)$$

[Fig. 3. Online solution of the time-varying QP (6) and (7) via GNN (9) with γ = 1; panels show x₁(t), x₂(t), and λ₁(t) versus t (s).]

It follows from Eq. (8) that

$$\tilde{P}(t) = \begin{bmatrix} 0.5\cos\omega t + 2 & \sin\omega t & \sin 4\omega t \\ \sin\omega t & 0.5\sin\omega t + 2 & \cos 4\omega t \\ \sin 4\omega t & \cos 4\omega t & 0 \end{bmatrix}, \qquad \tilde{q}(t) = [\sin 3\omega t,\; \cos 3\omega t,\; -\cos 2\omega t]^{T},$$

and x̃(t) = [x₁(t), x₂(t), λ₁(t)]ᵀ. Then we have the simulation results.

As shown in Fig. 3, starting from eight randomly generated initial states x̃(0) = [x₁(0), x₂(0), λ₁(0)]ᵀ with ω = 1, the state vectors x̃(t) (denoted by blue solid curves) of GNN model (9) cannot converge to the time-varying theoretical optimal solution x̃*(t) (denoted by red dotted curves), but instead exhibit quite large computational errors.
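The QP counterpart of the earlier QM sketch can be written analogously (again our own illustrative code under the same Euler-discretization assumption): it integrates GNN (9) with the coefficients (5) and (10) and measures the error against x̃*(t) = −P̃⁻¹(t)q̃(t).

```python
import numpy as np

w, gamma = 1.0, 1.0
dt, T = 1e-3, 10.0

def P_aug(t):
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t),             np.sin(4 * w * t)],
                     [np.sin(w * t),             0.5 * np.sin(w * t) + 2.0, np.cos(4 * w * t)],
                     [np.sin(4 * w * t),         np.cos(4 * w * t),         0.0]])

def q_aug(t):
    return np.array([np.sin(3 * w * t), np.cos(3 * w * t), -np.cos(2 * w * t)])

rng = np.random.default_rng(1)
x_tilde = rng.uniform(-2.0, 2.0, size=3)        # [x1(0), x2(0), lambda1(0)]

ts = np.arange(0.0, T, dt)
err = np.empty(ts.size)
for k, t in enumerate(ts):
    Pa, qa = P_aug(t), q_aug(t)
    err[k] = np.linalg.norm(x_tilde + np.linalg.solve(Pa, qa))       # ||x~ - x~*||, with x~* = -Pa^{-1} qa
    x_tilde = x_tilde + dt * (-gamma * Pa.T @ (Pa @ x_tilde + qa))   # GNN (9), Euler step

print("steady-state error (last 2 s, mean): %.2f" % err[ts > T - 2.0].mean())
```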

4. Time-varying performance analysis

From the computer-simulation results of time-varying QM and QP problem solving, we clearly see the drawbacks of GNN models applied to the time-varying-coefficient situation. Figs. 1–3 all indicate that, when applied to time-varying problem solving, the computational errors generated by the GNN solvers cannot decrease to zero. The GNN model for online minimization of a quadratic function, constrained or unconstrained, can only approximately approach its time-varying theoretical optimal solution, instead of converging to it exactly. In this section, the GNN performance is analyzed; i.e., the solution-error bound and the convergence rate of the GNN exploited for time-varying QM and QP problem solving are established.

4.1. Preliminaries

As the previous sections show, the QM and QP problems can be transformed into linear equations (3) and (8), respectively. For further discussion and presentation convenience, the QM and QP problems are both formulated in the unified manner

$$W(t)y(t) = u(t), \qquad (11)$$

where

$$W(t) := \begin{cases} P(t), & \text{in QM}, \\ \tilde{P}(t), & \text{in QP}, \end{cases} \qquad y(t) := \begin{cases} x(t), & \text{in QM}, \\ \tilde{x}(t), & \text{in QP}, \end{cases} \qquad u(t) := \begin{cases} -q(t), & \text{in QM}, \\ -\tilde{q}(t), & \text{in QP}. \end{cases}$$

The time-varying theoretical optimal solution can be written as y*(t) = W^{-1}(t)u(t) ∈ R^N. Given Eq. (11), the unified gradient neural network model handling the QM and QP problems can be written as

$$\dot{y}(t) = -\gamma\,W^{T}(t)\bigl(W(t)y(t) - u(t)\bigr). \qquad (12)$$

To lay a basis for further analysis, the following regularity condition is presented.

Regularity condition: There exists a positive real number δ > 0 such that

$$\min_{i \in \{1,2,\ldots,N\}} \lambda_i\bigl(W^{T}(t)W(t)\bigr) \geq \delta, \quad \forall t \geq 0, \qquad (13)$$

where λ_i(·) denotes the ith eigenvalue of its matrix argument.

If the above regularity condition holds true, there evidently exists a unique solution to problem (11); i.e., the time-varying solution y*(t) = W^{-1}(t)u(t) exists for all t ≥ 0. Besides, the parameter δ of regularity condition (13) is presented only for analysis purposes, and there is no need to know the exact value of δ in practice.
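Although δ (and the solution-variation rate ζ used below) need not be known in practice, they can be estimated numerically for the illustrative coefficients above. A minimal sketch of such an estimate (our own, based on sampling a time grid and a finite-difference approximation of d(W⁻¹(t)u(t))/dt):

```python
import numpy as np

w = 1.0

def W(t):   # unified coefficient matrix: here the augmented QP matrix built from (5) and (10)
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t),             np.sin(4 * w * t)],
                     [np.sin(w * t),             0.5 * np.sin(w * t) + 2.0, np.cos(4 * w * t)],
                     [np.sin(4 * w * t),         np.cos(4 * w * t),         0.0]])

def u(t):   # unified right-hand side u(t) = -q~(t)
    return -np.array([np.sin(3 * w * t), np.cos(3 * w * t), -np.cos(2 * w * t)])

ts = np.linspace(0.0, 20.0, 20001)
h = ts[1] - ts[0]

y_star = np.array([np.linalg.solve(W(t), u(t)) for t in ts])            # y*(t) = W^{-1}(t) u(t)
zeta = np.max(np.linalg.norm(np.gradient(y_star, h, axis=0), axis=1))   # max ||d y*/dt||_2 on the grid
delta = np.min([np.linalg.eigvalsh(W(t).T @ W(t)).min() for t in ts])   # min eigenvalue of W^T W

print("estimated zeta  : %.3f" % zeta)
print("estimated delta : %.3f" % delta)
print("bound zeta/(gamma*delta) for gamma = 1: %.3f" % (zeta / delta))
```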

4.2. Tight error bound

When we apply GNN model (12) to the time-varying QM and QP problems, the following theorem about its steady-state solution-error bound can be derived.

Theorem 1. Consider a time-varying matrix W(t) ∈ R^{N×N} satisfying regularity condition (13), of which the solution-variation rate is uniformly bounded as ||d(W^{-1}(t)u(t))/dt||₂ ≤ ζ, ∀t ∈ [0, ∞), with 0 ≤ ζ < ∞. Starting from any initial state y(0) ∈ R^N, the steady-state solution error of gradient neural network (12) solving for W^{-1}(t)u(t) is upper bounded tightly as

$$\lim_{t \to +\infty} \bigl\|y(t) - W^{-1}(t)u(t)\bigr\|_2 \leq \frac{\zeta}{\gamma\delta}, \qquad (14)$$

where the design parameter γ of GNN (12) should be set large enough. Evidently, it follows from (14) that, when the solution-variation rate ζ increases, the GNN steady-state solution-error upper bound increases; and, when the design parameter γ or the regularity-condition parameter δ increases, the GNN steady-state solution-error upper bound decreases.

Proof. For gradient neural network (12), let us define the solution error e(t) = y(t) − y*(t) ∈ R^N with y*(t) := W^{-1}(t)u(t). In other words, e(t) denotes the difference between the GNN-computed solution y(t) and the theoretical optimal solution y*(t) of W(t)y(t) = u(t). Then we have y(t) = e(t) + y*(t) and its time-derivative equation ẏ(t) = ė(t) + ẏ*(t).

Consequently, gradient neural network (12) can be transformed into the following dynamic equation in terms of e(t) ∈ R^N:

$$\dot{e}(t) = -\gamma\,W^{T}(t)W(t)e(t) - \dot{y}^{*}(t), \qquad (15)$$

where the initial state is e(0) = y(0) − y*(0). To analyze (15) as well as GNN (12), we first define a Lyapunov function candidate ξ(t) = ||e(t)||₂²/2; evidently ξ(t) is positive-definite, in view of ξ(t) = eᵀ(t)e(t)/2 > 0 for e(t) ≠ 0 and ξ(t) = 0 only for e(t) = 0.

[Fig. 4. Solution error e(t) of GNN (12) globally converges to the ball of radius ζ/(γδ); the outer dotted circle denotes the looser ball of radius ζ/(αγδ).]


Second, we derive the time derivative of ξ(t) along the state trajectory of (15) as follows (with the argument t sometimes omitted for presentation convenience):

$$\dot{\xi}(t) = \frac{\mathrm{d}\xi}{\mathrm{d}t} = \frac{\mathrm{d}\,\|e(t)\|_2^2/2}{\mathrm{d}t} = e^{T}\frac{\mathrm{d}e}{\mathrm{d}t} = e^{T}\dot{e} = e^{T}\bigl(-\gamma W^{T}We - \dot{y}^{*}\bigr) = -\gamma e^{T}W^{T}We - e^{T}\dot{y}^{*}. \qquad (16)$$

Before analyzing the time derivative ξ̇(t), we handle the two terms in the last expression of Eq. (16) individually. For the first term −γeᵀWᵀWe, we have

$$-\gamma e^{T}W^{T}We \leq -\gamma\delta\,e^{T}e = -\gamma\delta\,\|e\|_2^2, \qquad (17)$$

where δ > 0, as defined in regularity condition (13), denotes the uniform lower bound on the minimum eigenvalue of Wᵀ(t)W(t). On the other hand, for the second term −eᵀẏ*, we have the following result from Cauchy's inequality [35]:

$$-e^{T}\dot{y}^{*} \leq \zeta\|e\|_2, \qquad (18)$$

where ||ẏ*||₂ = ||d(W^{-1}(t)u(t))/dt||₂ ≤ ζ.

Then, substituting (17) and (18) into (16) yields

$$\dot{\xi}(t) = -\gamma\bigl(e^{T}W^{T}We\bigr) - \bigl(e^{T}\dot{y}^{*}\bigr) \leq -\gamma\delta\|e\|_2^2 + \zeta\|e\|_2 = -\|e\|_2\bigl(\gamma\delta\|e\|_2 - \zeta\bigr). \qquad (19)$$

During the time evolution of e(t), the above equation falls into one of the following three situations: (i) γδ||e||₂ − ζ > 0; (ii) γδ||e||₂ − ζ = 0; and (iii) γδ||e||₂ − ζ < 0.

• If, in a time interval [t₀, t₁), the trajectory of error system (15) is in the first situation [i.e., ||e||₂ > ζ/(γδ)], then ξ̇ < 0, which implies that e(t) approaches 0 ∈ R^N [i.e., y(t) approaches y*(t)] as time evolves.
• If, at some time t, the trajectory of error system (15) is in the second situation [i.e., ||e||₂ = ζ/(γδ), the so-called ball surface], then ξ̇ ≤ 0, which implies that e(t) approaches 0 ∈ R^N [i.e., y(t) approaches y*(t)] or stays on the ball surface with ||e||₂ = ζ/(γδ) [i.e., ||y(t) − y*(t)||₂ = ζ/(γδ)], in view of ξ̇ ≤ 0 containing the sub-situations ξ̇ < 0 and ξ̇ = 0, respectively. Simply put, in this situation, e(t) will not go outside the ball of radius ζ/(γδ).
• For any time t at which the system trajectory falls into the third situation [i.e., ||e||₂ < ζ/(γδ), inside the ball], it follows from (19) that ξ̇(t) is only bounded above by a positive scalar (containing the sub-situations ξ̇ ≤ 0 and ξ̇ > 0), and thus the distance ||e(t)||₂ between y(t) and y*(t) may not keep decreasing. Let us analyze the worst case, i.e., ξ̇(t) > 0: then ξ(t) and ||e||₂ increase, which also increases γδ||e||₂ − ζ as time evolves. So there must exist a certain time instant t₂ such that γδ||e||₂ − ζ = 0, which returns to the second situation, i.e., ξ̇ ≤ 0, and the worst case is ||e||₂ = ζ/(γδ).

For a better understanding of the above analysis, Fig. 4 is presented. Summarizing the above three situations, the solution error lim_{t→+∞}||e(t)||₂ of gradient neural network (12) is upper bounded by ζ/(γδ), i.e., Eq. (14). Or, in mathematics, lim_{t→+∞}||e(t)||₂ = lim_{t→+∞}||y(t) − y*(t)||₂ = lim_{t→+∞}||y(t) − W^{-1}(t)u(t)||₂ ≤ ζ/(γδ), from which the remaining statements of Theorem 1 follow readily. The proof is thus completed. □

Table 1
Steady-state solution-error bound, exponential convergence rate and convergence time of GNN (12).

α        Error bound      Convergence rate    Convergence time t_c for ||e(0)||₂ ≥ ζ/(αγδ)
0.01     100ζ/(γδ)        0.99γδ              ln(0.01γδ||e(0)||₂/ζ)/(0.99γδ)
0.02     50ζ/(γδ)         0.98γδ              ln(0.02γδ||e(0)||₂/ζ)/(0.98γδ)
0.08     25ζ/(2γδ)        0.92γδ              ln(0.08γδ||e(0)||₂/ζ)/(0.92γδ)
0.10     10ζ/(γδ)         0.90γδ              ln(0.10γδ||e(0)||₂/ζ)/(0.90γδ)
0.30     10ζ/(3γδ)        0.70γδ              ln(0.30γδ||e(0)||₂/ζ)/(0.70γδ)
0.70     10ζ/(7γδ)        0.30γδ              ln(0.70γδ||e(0)||₂/ζ)/(0.30γδ)
0.90     10ζ/(9γδ)        0.10γδ              ln(0.90γδ||e(0)||₂/ζ)/(0.10γδ)
0.99     1.01ζ/(γδ)       0.01γδ              100 ln(0.99γδ||e(0)||₂/ζ)/(γδ)
0.999    1.001ζ/(γδ)      0.001γδ             1000 ln(0.999γδ||e(0)||₂/ζ)/(γδ)
1.00     ζ/(γδ)           0 (AC)              +∞ (theoretically)
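One way to exercise Theorem 1 numerically (our own check, not from the paper; the Euler discretization and the grid-based estimates of ζ and δ are assumptions) is to run the unified GNN (12), here in its QM case with W(t) = P(t) from (5), and verify that the late-time error stays below ζ/(γδ):

```python
import numpy as np

w, gamma, dt, T = 1.0, 50.0, 1e-4, 10.0

def W(t):   # QM case of (11): W(t) = P(t) from (5)
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t)],
                     [np.sin(w * t), 0.5 * np.sin(w * t) + 2.0]])

def u(t):   # u(t) = -q(t)
    return -np.array([np.sin(3 * w * t), np.cos(3 * w * t)])

ts = np.arange(0.0, T, dt)
y = np.array([2.0, -2.0])                                   # arbitrary initial state y(0)
y_star = np.array([np.linalg.solve(W(t), u(t)) for t in ts])

err = np.empty(ts.size)
for k, t in enumerate(ts):
    err[k] = np.linalg.norm(y - y_star[k])
    y = y + dt * (-gamma * W(t).T @ (W(t) @ y - u(t)))      # GNN (12), Euler step

zeta = np.max(np.linalg.norm(np.gradient(y_star, dt, axis=0), axis=1))   # estimate of zeta
delta = np.min([np.linalg.eigvalsh(W(t).T @ W(t)).min() for t in ts])    # estimate of delta

print("max error over last 2 s : %.4f" % err[ts > T - 2.0].max())
print("Theorem 1 bound z/(g*d) : %.4f" % (zeta / (gamma * delta)))
```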

4.3. Exponential convergence rate

In the preceding subsection, a tight steady-state solution-error bound ζ/(γδ) of gradient neural network (12) solving online time-varying QM and QP problems is presented in Theorem 1. The analysis shows that the solution error generated by the GNN globally converges to the ball of radius ζ/(γδ), instead of decreasing to zero. However, the nature of Theorem 1 is just asymptotic convergence (AC) [i.e., lim_{t→+∞}||e(t)||₂ ≤ ζ/(γδ)], which may not be good enough in practice, as it requires an infinitely long time period to generate the solution. For completeness of the analysis, the error dynamic equation (15) of gradient neural network (12) is investigated further in this subsection by analyzing (19) more deeply. This leads to the following results on the global exponential convergence rate and finite convergence time of GNN (12) to a relatively loose error bound ζ/(αγδ), with 0 < α < 1 chosen by the GNN user.

Theorem 2. Consider a time-varying matrix W(t) ∈ R^{N×N} satisfying regularity condition (13), of which the solution-variation rate is uniformly bounded as ||d(W^{-1}(t)u(t))/dt||₂ ≤ ζ, ∀t ∈ [0, ∞). Starting from any initial state y(0) ∈ R^N, the error ||y(t) − W^{-1}(t)u(t)||₂ of gradient neural network (12) is globally exponentially convergent to, or stays within, the error bound ζ/(αγδ), where, for any α ∈ (0,1), the exponential convergence rate is (1−α)γδ, and the convergence time is ln(αγδ||e(0)||₂/ζ)/((1−α)γδ) for ||e(0)||₂ ≥ ζ/(αγδ), or 0 for ||e(0)||₂ ≤ ζ/(αγδ). In addition, the analysis result can be seen from Fig. 4 (i.e., with respect to the outer ball denoted by the dotted circle) and Table 1 (for different values of α).


Proof. Following the proof of Theorem 1, for gradient neural network (12), we define the solution error e(t) = y(t) − y*(t) ∈ R^N with y*(t) := W^{-1}(t)u(t). Gradient neural network (12) is then transformed into the following error dynamic equation [with e(0) = y(0) − y*(0)]:

$$\dot{e}(t) = -\gamma\,W^{T}(t)W(t)e(t) - \dot{y}^{*}(t).$$

Define a Lyapunov-like positive-definite function ξ(t) = ||e(t)||₂²/2. The time derivative of ξ(t) along the state trajectory of (15) is derived as in (19), and is repeated below for the reader's convenience:

$$\dot{\xi}(t) \leq -\gamma\delta\|e\|_2^2 + \zeta\|e\|_2. \qquad (20)$$

Let us pay attention to the above inequality. In Theorem 1, we proved the global asymptotic convergence of error dynamics (15) [equivalently, GNN (12)] to the steady-state error bound ζ/(γδ). Now, we show its global exponential convergence to a relatively loose error bound ζ/(αγδ) with α ∈ (0,1). That is, expression (20) can be rewritten as

$$\dot{\xi}(t) \leq -\gamma\delta\|e\|_2^2 + \zeta\|e\|_2 = -(1-\alpha)\gamma\delta\|e\|_2^2 + \bigl(-\alpha\gamma\delta\|e\|_2^2 + \zeta\|e\|_2\bigr), \qquad (21)$$

where the weighting parameter α ∈ (0,1) can be termed a "loosening ratio".

Evidently, on the right-hand side of (21), the first term −(1−α)γδ||e||₂² ≤ 0.

• For a solution error e(t) satisfying −αγδ||e||₂² + ζ||e||₂ ≤ 0 [i.e., ||e(t)||₂ ≥ ζ/(αγδ), outside or on the surface of the new ball of radius ζ/(αγδ), depicted in Fig. 4 by a dotted circle], the second term on the right-hand side of (21) can be dropped. It then follows from (21) that

$$\dot{\xi}(t) \leq -(1-\alpha)\gamma\delta\|e\|_2^2 = -2(1-\alpha)\gamma\delta\,\xi(t), \qquad \xi(t) \leq \exp\bigl(-2(1-\alpha)\gamma\delta t\bigr)\xi(0),$$

and thus

$$\|e(t)\|_2 \leq \exp\bigl(-(1-\alpha)\gamma\delta t\bigr)\|e(0)\|_2, \quad \forall t \in [0, t_c], \qquad (22)$$

where the exponential convergence rate is clearly (1−α)γδ, and the convergence time is t_c := ln(αγδ||e(0)||₂/ζ)/((1−α)γδ), in view of

$$\exp\bigl(-(1-\alpha)\gamma\delta t_c\bigr)\|e(0)\|_2 = \zeta/(\alpha\gamma\delta), \qquad \bigl((1-\alpha)\gamma\delta\bigr)t_c = \ln\bigl(\alpha\gamma\delta\|e(0)\|_2/\zeta\bigr).$$

• For an error e(t) entering the new ball of radius ζ/(αγδ) [i.e., ||e(t)||₂ < ζ/(αγδ)], such an e(t) can never leave the ball. This is in view of the first-situation analysis of (19) in the proof of Theorem 1: ξ̇ < 0 for any e(t) outside the small ball of radius ζ/(γδ). So is the situation with ||e(0)||₂ < ζ/(αγδ), of which the resultant e(t) trajectory can never leave the ball of radius ζ/(αγδ).

Thus, from (22) and the above analysis, defining the loosening ratio α ∈ (0,1) and the convergence time t_c = ln(αγδ||e(0)||₂/ζ)/((1−α)γδ), we have

$$\|e(t)\|_2 \begin{cases} \leq \exp\bigl(-(1-\alpha)\gamma\delta t\bigr)\|e(0)\|_2, \;\forall t \in [0, t_c], \;\text{and} \;\leq \zeta/(\alpha\gamma\delta), \;\forall t \in [t_c, \infty), & \text{if } \|e(0)\|_2 \geq \zeta/(\alpha\gamma\delta); \\ \leq \zeta/(\alpha\gamma\delta), \;\forall t \in [0, \infty), & \text{if } \|e(0)\|_2 \leq \zeta/(\alpha\gamma\delta), \end{cases} \qquad (23)$$

where, even in the worst case, the exponential convergence rate is (1−α)γδ. Given different values of α ∈ (0, 1], we can readily generate Table 1, which shows the estimated exponential convergence rates and finite convergence times of gradient neural network (12) approaching the loose error bound ζ/(αγδ). The proof is thus completed. □
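The entries of Table 1 follow directly from Theorem 2; a small helper (our own, with hypothetical function and parameter names, and purely illustrative numbers) computes the loose bound, the exponential rate, and t_c for given α, γ, δ, ζ and ||e(0)||₂:

```python
import numpy as np

def theorem2_quantities(alpha, gamma, delta, zeta, e0_norm):
    """Loose error bound, exponential rate and convergence time from Theorem 2."""
    bound = zeta / (alpha * gamma * delta)      # loose steady-state bound zeta/(alpha*gamma*delta)
    rate = (1.0 - alpha) * gamma * delta        # exponential convergence rate (1-alpha)*gamma*delta
    if e0_norm <= bound:
        t_c = 0.0                               # already inside the loose ball
    else:
        t_c = np.log(alpha * gamma * delta * e0_norm / zeta) / rate
    return bound, rate, t_c

# Illustrative numbers (assumed, not taken from the paper's simulations).
for alpha in (0.1, 0.5, 0.9):
    b, r, tc = theorem2_quantities(alpha, gamma=50.0, delta=1.0, zeta=2.0, e0_norm=3.0)
    print("alpha=%.1f  bound=%.4f  rate=%.1f  t_c=%.4f s" % (alpha, b, r, tc))
```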

5. Simulative verification

As Section 4 proves, the steady-state error cannot decrease to zero when the GNN model is exploited for solving time-varying QM and QP problems, and the solution can only converge exponentially to a loose error bound. In this section, we present some illustrative examples to show the validity of the above theoretical results, and to show how the parameters γ and ω impact the performance of the GNN exploited for solving the time-varying QM and QP problems.

Let us consider the time-varying QM problem first. Fig. 5 shows the convergence of the computational error ||x(t) + P^{-1}(t)q(t)||₂ synthesized by gradient neural network (12) with ω = 1. Starting from eight randomly generated initial states x(0), the solution error e(t) of GNN (12) always converges to an error bound. Moreover, by increasing the design parameter γ from 5 to 50, such a steady-state solution-error bound is decreased from roughly 0.8 to 0.2. In addition, as the variation rate ω in P(t) and q(t) is related to the solution-variation rate ζ (used for the theoretical analysis in Section 4), we increase ω from 1 to 5 and simulate the convergence performance of GNN (12) again. As observed from Fig. 6, the solution error e(t) converges to an error bound as well, but the upper bound of the steady-state solution error lim_{t→+∞}||e(t)||₂ in the case of ω = 5 is evidently larger than that of ω = 1, and the steady-state solution error for ω = 5 exhibits larger oscillation. Furthermore, in the case of ω = 5, by increasing γ from 5 to 50, the maximum steady-state solution error of e(t) decreases from 1.5 to 0.6. The above results are reasonable and consistent with the presented theoretical analysis, because when ω becomes larger, the solution of the time-varying QM problem changes faster, which makes it more difficult for the GNN to solve the time-varying QM problem.

[Fig. 5. The γ effects on the convergence and error bounds of ||x(t) + P^{-1}(t)q(t)||₂ synthesized by GNN (12) solving the QM problem with ω = 1; panels: γ = 5 and γ = 50.]

[Fig. 6. The γ effects on the convergence and error bounds of ||x(t) + P^{-1}(t)q(t)||₂ synthesized by GNN (12) solving the QM problem with ω = 5; panels: γ = 5 and γ = 50.]

[Fig. 7. The γ effects on the convergence and error bounds of ||x̃(t) + P̃^{-1}(t)q̃(t)||₂ synthesized by GNN (12) solving the QP problem with ω = 1; panels: γ = 5, 50, 500, and 5000.]

[Fig. 8. The γ effects on the convergence and error bounds of ||x̃(t) + P̃^{-1}(t)q̃(t)||₂ synthesized by GNN (12) solving the QP problem with ω = 5; panels: γ = 5 and γ = 50.]


Similarly, computer simulation of the time-varying QP problem solving is conducted and presented: the solution errors e(t) of GNN (12) also converge to an error bound. In the case of ω = 1, shown in Fig. 7, the upper bound of the steady-state solution error is roughly 3.9. When γ increases from 5 to 50, the upper bound of the steady-state solution error decreases to roughly 1.5.


If γ continues to increase to 500 (or 5000), such a steady-state solution-error bound can be decreased to roughly 0.27 (or 0.028, respectively). In addition, by increasing ω from 1 to 5, as shown in Fig. 8, the upper bound of the steady-state solution error increases to roughly 4 with more oscillation, but decreases to roughly 3 when γ is increased from 5 to 50. The results are similar to those of the time-varying QM problem solving, and both are highly consistent with the theoretical analysis presented in Section 4.
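The roughly inverse dependence of the steady-state error on γ reported above can be probed with a short parameter sweep (our own sketch, not the authors' simulation setup; the Euler step is shrunk as γ grows to keep the explicit discretization stable):

```python
import numpy as np

w, T = 1.0, 10.0

def P_aug(t):
    return np.array([[0.5 * np.cos(w * t) + 2.0, np.sin(w * t),             np.sin(4 * w * t)],
                     [np.sin(w * t),             0.5 * np.sin(w * t) + 2.0, np.cos(4 * w * t)],
                     [np.sin(4 * w * t),         np.cos(4 * w * t),         0.0]])

def q_aug(t):
    return np.array([np.sin(3 * w * t), np.cos(3 * w * t), -np.cos(2 * w * t)])

for gamma in (5.0, 50.0, 500.0):
    dt = min(1e-3, 0.05 / gamma)         # smaller steps for larger gamma (stability)
    ts = np.arange(0.0, T, dt)
    x = np.array([1.0, -1.0, 0.0])
    tail = []                            # errors over the last 2 s
    for t in ts:
        Pa, qa = P_aug(t), q_aug(t)
        if t > T - 2.0:
            tail.append(np.linalg.norm(x + np.linalg.solve(Pa, qa)))
        x = x + dt * (-gamma * Pa.T @ (Pa @ x + qa))     # GNN (9)/(12), Euler step
    print("gamma=%5.0f  steady-state error ~ %.4f" % (gamma, np.mean(tail)))
```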

6. Application to robot tracking

As we know and have experienced [8,10,17,31,36], gradient-type neural networks have been exploited in the motion planning and control of redundant robots with end-effectors tracking desired trajectories. As pointed out in the authors' previous work [36],

[Fig. 9. Motion trajectories (X–Y plane, m) of a six-link planar manipulator synthesized by GNN (12) with design parameter γ = 10⁴ when its end-effector tracks a square path.]

[Fig. 10. Position errors eX, eY (m) of the six-link planar robot's end-effector synthesized by GNN (12) with different values of the design parameter γ when tracking a square path.]

the motion planning problem of robot manipulators can be formulated as the following time-varying quadratic program subject to a time-varying linear-equality constraint:

$$\text{minimize}\quad x^{T}(t)Px(t)/2 + q^{T}(t)x(t), \qquad (24)$$
$$\text{subject to}\quad A(t)x(t) = b(t), \qquad (25)$$

with P := I, q(t) := μ(θ(t) − θ(0)), A(t) := J(t), and b(t) := ṙ(t), where θ(t) ∈ Rⁿ denotes the joint-variable vector, μ > 0 is a design parameter used to scale the magnitude of the manipulator response to joint displacements, x(t) corresponds to θ̇(t) ∈ Rⁿ, which is to be solved online, J(t) denotes the related Jacobian matrix, and ṙ(t) ∈ Rᵐ denotes the desired end-effector velocity vector. It is worth mentioning that the previous and current values of these matrices and vectors are known (or can be measured/estimated quite accurately) in the robot application. The GNN model (12) and the theoretical results obtained in previous sections can thus be used to analyze and decrease the tracking errors of the robots; i.e., by increasing the neural-network parameter γ, the tracking errors can be decreased effectively.
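To make the construction concrete, the following sketch (our own simplification: a two-link planar arm with unit link lengths and an arbitrary desired velocity, not the six-link manipulator or square path used in the paper) shows how P, q(t), A(t), and b(t) of (24) and (25) would be assembled at one time instant before being handed to GNN (12) through the augmented form (8):

```python
import numpy as np

mu = 6.0                                         # design parameter mu scaling joint displacements
theta0 = np.array([np.pi / 6, np.pi / 3])        # initial joint angles (illustrative 2-link arm)

def jacobian(theta, l1=1.0, l2=1.0):
    """Planar 2-link end-effector Jacobian J(t)."""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta.sum()), np.cos(theta.sum())
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def qp_data(theta, r_dot):
    """Coefficients of the robot QP (24)-(25): P = I, q = mu*(theta - theta0), A = J, b = r_dot."""
    P = np.eye(theta.size)
    q = mu * (theta - theta0)
    A = jacobian(theta)
    b = r_dot
    return P, q, A, b

# One time instant: current joints slightly displaced, desired end-effector velocity assumed.
theta = theta0 + np.array([0.05, -0.02])
r_dot = np.array([0.1, 0.0])

P, q, A, b = qp_data(theta, r_dot)
W = np.block([[P, A.T], [A, np.zeros((2, 2))]])  # augmented matrix P~(t) of (8)
u = -np.concatenate([q, -b])                     # u(t) = -q~(t) in the unified form (11)
print("snapshot y*(t) = W^{-1} u :", np.linalg.solve(W, u))   # [theta_dot*; lambda*]
```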

[Fig. 11. Solution error ||y(t) − W^{-1}(t)u(t)||₂ of GNN (12) solving the time-varying QP (24) and (25), with γ = 10⁴ and γ = 10⁵; the red dashed lines correspond to the theoretical error bound ζ/(γδ).]


For example, as synthesized by GNN (12) with different values of the design parameter γ, a six-link planar redundant robot manipulator is simulated for further verification. The six-link robot manipulator has four redundant degrees of freedom (n = 6 and m = 2), and the desired path of its end-effector is a square with side length 1.414 m. The task duration is 20 s, the initial joint variable is θ(0) = [−π/12, π/12, π/6, −π/4, π/3, π/12]ᵀ rad, and the design parameter μ is set to 6. The simulation results are shown in Figs. 9–11. Fig. 9 illustrates the motion trajectories of the six-link planar robot manipulator operating in two-dimensional space, as synthesized by GNN (12) with γ = 10⁴; the arrow in Fig. 9 shows the motion direction. From Fig. 9, we see that the actual trajectory of the robot end-effector (denoted by blue asterisk lines) is very close to the desired square path (denoted by red dashed lines). In addition, as seen from Fig. 10, by increasing the parameter γ, e.g., from 10⁴ to 10⁵, the maximum end-effector position error resulting from the GNN (12) solution decreases from 4.71 × 10⁻⁵ m to 4.39 × 10⁻⁶ m. Moreover, the solution error ||y(t) − W^{-1}(t)u(t)||₂ of GNN (12) solving the time-varying QP problem (24) and (25) is shown in Fig. 11. From this figure and the simulation data, we can observe and verify that ||y(t) − W^{-1}(t)u(t)||₂ ≤ ζ/(γδ) for t ∈ [0, 20] s, where ζ is taken to be the maximum value of ||d(W^{-1}(t)u(t))/dt||₂ and δ the minimum eigenvalue of Wᵀ(t)W(t) over all time instants t ∈ [0, 20] s. This substantiates well the performance analysis of GNN (12) solving the time-varying QP problem.

7. Conclusions

This paper has investigated the performance of the gradient neural network (GNN) exploited for solving online time-varying quadratic minimization (QM) and equality-constrained quadratic programming (QP) problems. As seen from the theoretical results presented in this paper, the conventional gradient-type neural network cannot exactly solve the time-varying quadratic optimization problem. Computer-simulation results have substantiated the performance analysis of such a GNN model exploited to solve online time-varying QM and QP problems.

Before ending the paper, it is worth pointing out further that this paper focuses on the unique-solution case of time-varying problem solving via gradient neural networks, which therefore requires that the regularity condition hold true. In future work we will consider loosening or removing this regularity condition. Moreover, other types of neural networks, e.g., the projection-type neural networks [8–10,18], could be investigated for solving time-varying quadratic programs as one of the future research directions, which, with substantial effort and devotion, might achieve fruitful results.

Acknowledgements

The authors would like to sincerely thank the editors and anonymous reviewers for their constructive comments and suggestions, which have greatly improved the presentation and quality of this paper.

References

[1] Y. Ye, S. Zhang, New results on quadratic minimization, SIAM J. Optim. 14 (2003) 245–267.
[2] Z. Luo, N.D. Sidiropoulos, P. Tseng, S. Zhang, Approximation bounds for quadratic optimization with homogeneous quadratic constraints, SIAM J. Optim. 18 (2007) 1–28.
[3] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, New York, 2004.
[4] W. Li, Error bounds for piecewise convex quadratic programs and applications, SIAM J. Control Optim. 33 (1995) 1510–1529.
[5] T.A. Johansen, T.I. Fossen, S.P. Berge, Constrained nonlinear control allocation with singularity avoidance using sequential quadratic programming, IEEE Trans. Control Syst. Technol. 12 (2004) 211–216.
[6] B. Fares, D. Noll, P. Apkarian, Robust control via sequential semidefinite programming, SIAM J. Control Optim. 40 (2002) 1791–1820.
[7] N. Grudinin, Reactive power optimization using successive quadratic programming method, IEEE Trans. Power Syst. 13 (1998) 1219–1225.
[8] J. Wang, Y. Zhang, Recurrent neural networks for real-time computation of inverse kinematics of redundant manipulators, in: Machine Intelligence: Quo Vadis?, World Scientific, Singapore, 2004.
[9] Y. Zhang, W. Ma, X. Li, H. Tan, K. Chen, MATLAB Simulink modeling and simulation of LVI-based primal-dual neural network for solving linear and quadratic programs, Neurocomputing 72 (2009) 1679–1687.
[10] Y. Zhang, K. Li, Bi-criteria velocity minimization of robot manipulators using LVI-based primal-dual neural network and illustrated via PUMA560 robot arm, Robotica 28 (2010) 525–537.
[11] Y. Zhang, A set of nonlinear equations and inequalities arising in robotics and its online solution via a primal neural network, Neurocomputing 70 (2006) 513–524.
[12] W.E. Leithead, Y. Zhang, O(N²)-operation approximation of covariance matrix inverse in Gaussian process regression based on quasi-Newton BFGS method, Commun. Stat. Simul. Comput. 36 (2007) 367–380.
[13] W. Li, J. Swetits, A new algorithm for solving strictly convex quadratic programs, SIAM J. Optim. 7 (1997) 595–619.
[14] F. Leibfritz, E.W. Sachs, Inexact SQP interior point methods and large scale optimal control problems, SIAM J. Control Optim. 38 (1999) 272–293.
[15] D.W. Tank, J.J. Hopfield, Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (1986) 533–541.
[16] G. Costantini, R. Perfetti, M. Todisco, Quasi-Lagrangian neural network for convex quadratic optimization, IEEE Trans. Neural Networks 19 (2008) 1804–1809.
[17] Y. Zhang, Revisit the analog computer and gradient-based neural system for matrix inversion, in: Proceedings of the IEEE International Symposium on Intelligent Control, 2005, pp. 1411–1416.
[18] Y. Li, F. Cao, Projection type neural network and its convergence analysis, J. Control Theor. Appl. 4 (2006) 286–290.
[19] M. Di Marco, M. Forti, M. Grazzini, P. Nistri, L. Pancioni, Lyapunov method and convergence of the full-range model of CNNs, IEEE Trans. Circuits Syst. I 55 (2008) 3528–3541.
[20] H. Zhao, J. Zhang, Nonlinear dynamic system identification using pipelined functional link artificial recurrent neural network, Neurocomputing 72 (2009) 3046–3054.
[21] L.O. Chua, G. Lin, Nonlinear programming without computation, IEEE Trans. Circuits Syst. CAS-31 (1984) 182–188.
[22] M.P. Kennedy, L.O. Chua, Neural networks for nonlinear programming, IEEE Trans. Circuits Syst. 35 (1988) 554–562.
[23] A. Cichocki, R. Unbehauen, Neural networks for solving systems of linear equations and related problems, IEEE Trans. Circuits Syst. 39 (1992) 124–138.
[24] C.Y. Maa, M.A. Shanblatt, Linear and quadratic programming neural network analysis, IEEE Trans. Neural Networks 3 (1992) 580–594.
[25] H. Myung, J. Kim, Time-varying two-phase optimization and its application to neural-network learning, IEEE Trans. Neural Netw. 8 (1997) 1293–1300.
[26] J. Wang, Electronic realisation of recurrent neural network for solving simultaneous linear equations, Electron. Lett. 28 (1992) 493–495.
[27] M. Forti, P. Nistri, M. Quincampoix, Generalized neural network for nonsmooth nonlinear programming problems, IEEE Trans. Circuits Syst. I 51 (2004) 1741–1754.
[28] W. Bian, X. Xue, Subgradient-based neural networks for nonsmooth nonconvex optimization problems, IEEE Trans. Neural Netw. 20 (2009) 1024–1038.
[29] P. Ferreira, P. Ribeiro, A. Antunes, F.M. Dias, A high bit resolution FPGA implementation of a FNN with a new algorithm for the activation function, Neurocomputing 71 (2007) 71–77.
[30] Y. Zhang, K. Chen, Global exponential convergence and stability of Wang neural network for solving online linear equations, Electron. Lett. 44 (2008) 145–146.
[31] Y. Zhang, S.S. Ge, Design and analysis of a general recurrent neural network model for time-varying matrix inversion, IEEE Trans. Neural Netw. 16 (2005) 1477–1490.
[32] Y. Zhang, D. Jiang, J. Wang, A recurrent neural network for solving Sylvester equation with time-varying coefficients, IEEE Trans. Neural Netw. 13 (2002) 1053–1063.
[33] Y. Zhang, Z. Li, Zhang neural network for online solution of time-varying convex quadratic program subject to time-varying linear-equality constraints, Phys. Lett. A 373 (2009) 1639–1643.
[34] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, Berlin, Heidelberg, 1999.
[35] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, 1972.
[36] Y. Zhang, Z. Tan, K. Chen, Z. Yang, X. Lv, Repetitive motion of redundant robots planned by three kinds of recurrent neural networks and illustrated with a four-link planar manipulator's straight-line example, Robot. Auton. Syst. 57 (2009) 645–651.

Yunong Zhang is a professor at the School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou, China. He received his B.S., M.S., and Ph.D. degrees from Huazhong University of Science and Technology, South China University of Technology, and the Chinese University of Hong Kong, in 1996, 1999, and 2003, respectively. Before joining SYSU in 2006, Yunong Zhang had been with the National University of Ireland, the University of Strathclyde, and the National University of Singapore since 2003. His main research interests include neural networks, robotics, and scientific computing.

Yiwen Yang was born in Guangdong, China, in 1986. He received the B.S. degree in Software Engineering from Sun Yat-sen University (SYSU), Guangzhou, China, in 2009. He is currently pursuing the M.S. degree in Communication and Information Systems at the School of Information Science and Technology, Sun Yat-sen University. His research interests include neural networks, nonlinear systems, machine learning, and robotics.

Gongqin Ruan was born in Guangdong, China, in 1985. He received the B.S. degree in Electronic Information Engineering from Guangdong University of Technology (GDUT), China, in 2008. He received the M.S. degree in Communication and Information Systems at the School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou, China, in 2010. His current research interests include neural networks, nonlinear systems, and intelligent information processing.