
10 June 2002

Physics Letters A 298 (2002) 271–278

www.elsevier.com/locate/pla

A dual neural network for convex quadratic programming subject to linear equality and inequality constraints

Yunong Zhang1, Jun Wang∗

Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

Received 31 October 2001; accepted 1 April 2002

Communicated by A.R. Bishop

Abstract

A recurrent neural network called the dual neural network is proposed in this Letter for solving strictly convex quadratic programming problems. Compared to other recurrent neural networks, the proposed dual network with fewer neurons can solve quadratic programming problems subject to equality, inequality, and bound constraints. The dual neural network is shown to be globally exponentially convergent to optimal solutions of quadratic programming problems. In addition, compared to neural networks containing high-order nonlinear terms, the dynamic equation of the proposed dual neural network is piecewise linear, and the network architecture is thus much simpler. The global convergence behavior of the dual neural network is demonstrated by an illustrative numerical example. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Dual neural network; Quadratic programming; Linear constraint; Projection operator; Global convergence

1. Introduction

Linearly constrained quadratic programming problems, due to their fundamental role, arise in numerous areas of application such as manufacturing, economic, social, and public planning. Optimization problems with nonlinear objective functions are usually approximated by a second-order quadratic system and then solved sequentially by a standard quadratic programming technique. The computational complexity of serial-processing algorithms performed on digital computers may limit their usage in large-scale or online optimization applications, such as rigid body mechanics [13], fluid dynamics and elastic-plastic torsion [7].

This research was supported by the Hong Kong Research Grants Council under Grant CUHK4165/98E.

* Corresponding author. E-mail address: [email protected] (J. Wang).

1 The author is currently a Ph.D. student in the Department of Automation and Computer-Aided Engineering, the Chinese University of Hong Kong, Shatin, Hong Kong.

The dynamical system approach is one of the important methods for solving optimization problems; it was first proposed by Pyne (see [1]) in the late 1950s. Recently, owing to in-depth research in neural networks, numerous dynamic solvers based on neural networks have been developed and investigated (see [2–13]). Specifically, Tank and Hopfield proposed their working neural network implemented on analogue circuits, which opened a new avenue for neural networks to solve optimization problems (see [2]). The neural network approach is now regarded as a powerful tool for real-time optimization, in view of the nature of parallel distributed computation and hardware implementability.

In the past two decades, various neural network models have been developed for solving linearly constrained quadratic optimization problems, e.g., those based on the penalty-parameter method [3], the Lagrange method [5], the gradient and projection method [7,10], the primal–dual method [9,11], and the dual method [12,13]. It is well known that the neural network model in [3] contains finite penalty parameters and generates approximate solutions only. When solving inequality-constrained quadratic programs, the Lagrange neural network may exhibit the premature defect. In addition, the dimensionality of the Lagrange network is much larger than that of the original problem, due to the introduction of slack and surplus variables. The gradient and projection methods [7,10] were proposed for solving special quadratic programs with simple bound constraints only, and cannot be generalized to solve generally constrained quadratic programming problems. As a more flexible tool for exactly solving quadratic programs, the primal–dual neural networks [9,11] were developed with the feature that they handle the primal quadratic program and its dual problem simultaneously, minimizing the duality gap by means of the Karush–Kuhn–Tucker condition and the gradient method. Unfortunately, the dynamic equations of the primal–dual neural network are usually complicated and may contain high-order nonlinear terms. Moreover, the network size is usually larger than the dimensionality of the primal quadratic program plus its dual problem.

As stated in [4], there exist two types of methods for formulating an optimization problem solvable by a neural network. One approach commonly used in developing an optimization neural network is to first convert the constrained optimization problem into an associated unconstrained optimization problem, and then design a neural network that solves the unconstrained problem with a gradient descent. The other approach is to construct a set of differential equations such that their equilibrium points correspond to the desired solutions, and then find an appropriate Lyapunov function such that all trajectories of the system converge to the equilibrium points. As a special case of the primal–dual neural network, the dual neural network is proposed by using the dual decision variables only. But different from the primal–dual network, the dual neural network is developed with the latter of the above design methodologies, in order to reduce network complexity and increase computational efficiency. Up to now, only a few dual neural network models have been developed, for quadratic programming subject to inequality constraints [12] or simple bound constraints [13]. Linear constraints of the general form, which may include equality, inequality and bound constraints simultaneously, should be addressed to cover the needs of engineering applications. In addition to the aim of developing a neural network with a much simpler architecture, the above consideration is the motivation of this study.

The remainder of this Letter is organized as follows. Section 2 provides the background information and problem formulation of a quadratic program under equality, inequality and bound constraints. Section 3 proposes a dual neural network model for solving quadratic programming problems subject to general linear constraints. The results on global exponential convergence are given in Section 4. Section 5 presents an illustrative example of a quadratic program under linear equality, inequality and bound constraints solved by using the proposed dual neural network. Section 6 concludes the Letter with final remarks.

2. Preliminaries

Consider the following quadratic programming problem subject to various linear constraints:

$$
\begin{aligned}
&\text{Minimize} && \tfrac{1}{2}\,x^T Q x + c^T x, && (1)\\
&\text{Subject to} && A x = b, && (2)\\
& && C x \le d, && (3)\\
& && x^- \le x \le x^+, && (4)
\end{aligned}
$$

where x is the n-dimensional decision vector, Q ∈ R^{n×n} is a positive-definite symmetric matrix, the other coefficient matrices and vectors are defined, respectively, as c ∈ R^n, A ∈ R^{p×n}, b ∈ R^p, C ∈ R^{q×n}, d ∈ R^q, and the n-dimensional vectors x^- and x^+ are, respectively, the lower and upper bounds of x.

The Lagrange neural network [5] was developed originally for solving equality-constrained quadratic programming problems. If the Lagrange neural network is extended to solve the above generally constrained quadratic program, the network architecture is eight-layered, with the total number of neurons equal to 5n + p + 2q. Additionally, the premature defect of such a network appears: in some situations the equilibria of the Lagrange neural network may ensure feasibility only, rather than optimality. The reason is that the design procedure does not take into account the complementary slackness condition.

The single-layered recurrent neural networks [7,10], based on the gradient and projection method, are an effective tool for solving the bound-constrained quadratic programming problem, i.e., (1) subject to (4) only. The most general form of the simple bound constraint (4) that can be handled is x^- ≤ Px ≤ x^+, where P is a nonsingular square matrix of dimension n. If P is singular or not a square matrix, then a transform involved in the design procedure becomes inapplicable and the network cannot be built. That is why such neural networks cannot be extended to handle the general quadratic program (1)–(4).

In the literature on recurrent neural networks for quadratic programming, few studies have been conducted that explicitly include all types of linear constraints such as (2)–(4) simultaneously. The usual approach is to design a recurrent neural network (e.g., the primal–dual neural networks [9,11]) by considering only one or two extendable basic kinds of constraints, and then to include the other constraints by converting them into the basic ones. However, the dimensionality of the resultant neural network for solving hybrid-constrained quadratic programs may be much larger than expected. For instance, a single-layered dual neural network [12] was developed for solving the quadratic program under the inequality constraint (3) only. Its dynamic equation and output equation are

$$
\dot{u} = \bigl(I_q + CQ^{-1}C^T\bigr)\Bigl\{ g\bigl((I_q - CQ^{-1}C^T)\,u - CQ^{-1}c - d\bigr) - u \Bigr\},
\qquad
x = -Q^{-1}\bigl(C^T u + c\bigr), \qquad (5)
$$

where the dual decision variable vector u ∈ R^q represents the states of the neurons, I_q denotes the q × q identity matrix, and the vector-valued function g(u) = [g_1(u_1), ..., g_q(u_q)]^T is defined as g_i(u_i) = max(0, u_i), i = 1, ..., q. Now let us convert the constraints (2)–(4) as

$$
Ax \le b, \quad -Ax \le -b, \qquad Cx \le d, \qquad I_n x \le x^+, \quad -I_n x \le -x^-.
$$

The coefficient matrix and vector of the resulting inequality constraint $\bar{C}x \le \bar{d}$ are thus defined as

$$
\bar{C} = \begin{pmatrix} A \\ -A \\ C \\ I_n \\ -I_n \end{pmatrix},
\qquad
\bar{d} = \begin{pmatrix} b \\ -b \\ d \\ x^+ \\ -x^- \end{pmatrix}.
$$

Clearly, to solve (1)–(4), the dimensionality of a dual neural network generalized from (5) is 2p + q + 2n, which is smaller than that of the aforementioned Lagrange neural network. But such a generalized neural network is still not sufficiently economical in terms of the number of neurons and the network architecture. Besides, for network implementation, the hardware complexity of analogue circuits increases substantially as the total number of neurons increases.
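
For concreteness, the constraint conversion just described can be sketched in a few lines of numpy (the names A, b, C, d, x_lo and x_hi are placeholder variables for the data of (1)–(4); this is an illustrative sketch, not code from the Letter):

    import numpy as np

    def stack_as_one_sided(A, b, C, d, x_lo, x_hi):
        """Rewrite Ax = b, Cx <= d, x_lo <= x <= x_hi as a single one-sided
        inequality C_bar x <= d_bar, as described in the text."""
        n = A.shape[1]
        I = np.eye(n)
        C_bar = np.vstack([A, -A, C, I, -I])
        d_bar = np.concatenate([b, -b, d, x_hi, -x_lo])
        return C_bar, d_bar

The row count of C_bar is exactly the dual-space dimension 2p + q + 2n noted above.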

Before ending this section, it is worth discussing the reason why the aforementioned neural network approaches are not economical. To solve (1)–(4), the usual neural network design methods convert the equality constraint and the two-sided inequality constraint into two one-sided inequality constraints each, which unnecessarily increases the dimension of the dual space and thus introduces excessive neurons. If, instead, we treat the equality constraint (2) as a special two-sided bound constraint (with lower bounds equal to upper ones) and the one-sided inequality constraint (3) as a two-sided bound constraint (with lower bounds equal to "negative infinity"), we obtain a unified treatment of equality, inequality and simple bound constraints, and the resultant neural network for solving the general-form quadratic program (1)–(4) is of size n + p + q only.

3. Model description

Fig. 1. Projection operator g_i(u_i) on [r_i^-, r_i^+].

As a generalization of the design procedure of the dual neural network [13] for quadratic programming with simple bound constraints, we can reformulate the generally constrained quadratic program (1)–(4) into a unified form. That is, to treat equality and one-sided inequality constraints as special cases of two-sided bound constraints, we define

$$
r^- := \begin{pmatrix} b \\ d^- \\ x^- \end{pmatrix},\qquad
r^+ := \begin{pmatrix} b \\ d \\ x^+ \end{pmatrix},\qquad
J := \begin{pmatrix} A \\ C \\ I_n \end{pmatrix},
$$

where d^- ∈ R^q and, for every j ∈ {1, ..., q}, d_j^- ≤ 0 is of sufficiently large magnitude to represent −∞. Then (1)–(4) are rewritten in the following form:

$$
\begin{aligned}
&\text{Minimize} && \tfrac{1}{2}\,x^T Q x + c^T x,\\
&\text{Subject to} && r^- \le J x \le r^+. && (6)
\end{aligned}
$$

In the above formulation, the generalized feasibility region [r^-, r^+] is constructed as a closed convex set to facilitate the design and analysis of the dual neural network via the Karush–Kuhn–Tucker condition and the projection operator.
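
As a small illustration (a sketch only, assuming numpy; NEG_INF is a hypothetical large negative constant standing in for the entries of d^-), the unified data J, r^- and r^+ of (6) can be assembled directly from the data of (1)–(4):

    import numpy as np

    NEG_INF = -1.0e12  # hypothetical stand-in for the "-infinity" entries d^-

    def unified_form(A, b, C, d, x_lo, x_hi):
        """Assemble J, r_minus, r_plus so that (1)-(4) read
        r_minus <= J x <= r_plus, with J of size (p + q + n) x n."""
        n = A.shape[1]
        q = C.shape[0]
        J = np.vstack([A, C, np.eye(n)])
        r_minus = np.concatenate([b, np.full(q, NEG_INF), x_lo])
        r_plus = np.concatenate([b, d, x_hi])
        return J, r_minus, r_plus

Each row of J then carries a two-sided bound: the equality rows have r^- = r^+, the inequality rows have r^- = NEG_INF, and the bound rows keep x^- and x^+.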

It follows from the Karush–Kuhn–Tucker condition that x is a solution to (6) if and only if there exists u ∈ R^{p+q+n} such that Qx − J^T u + c = 0 and

$$
\begin{cases}
[Jx]_i = r_i^-, & \text{if } u_i > 0,\\
[Jx]_i = r_i^+, & \text{if } u_i < 0,\\
r_i^- \le [Jx]_i \le r_i^+, & \text{if } u_i = 0.
\end{cases}
\qquad (7)
$$

The complementarity condition (7) is equivalent to the system of piecewise linear equations Jx = g(Jx − u) [14–16], where the vector-valued function g(u) = [g_1(u_1), ..., g_{p+q+n}(u_{p+q+n})]^T is defined as

$$
g_i(u_i) =
\begin{cases}
r_i^-, & \text{if } u_i < r_i^-,\\
u_i, & \text{if } r_i^- \le u_i \le r_i^+,\\
r_i^+, & \text{if } u_i > r_i^+,
\end{cases}
\qquad i = 1, \ldots, p + q + n, \qquad (8)
$$

which, by the definitions of r^+ and r^-, covers the three situations depicted in Fig. 1. Therefore, x is a solution to (6) if and only if there exists a dual decision vector u ∈ R^{p+q+n} such that Qx − J^T u + c = 0 and Jx = g(Jx − u); that is,

$$
\begin{aligned}
x &= Q^{-1}J^T u - Q^{-1}c,\\
g\bigl(JQ^{-1}J^T u - JQ^{-1}c - u\bigr) &= JQ^{-1}J^T u - JQ^{-1}c.
\end{aligned}
\qquad (9)
$$
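
In implementation terms, the function g in (8) and (9) is nothing more than a componentwise projection (clipping) onto the box [r^-, r^+]; a one-line numpy sketch, reusing the names of the sketch above:

    import numpy as np

    def g(u, r_minus, r_plus):
        """Componentwise projection of u onto [r_minus, r_plus], as in (8)."""
        return np.clip(u, r_minus, r_plus)

With this g, evaluating the right-hand sides of (9) and of the dynamic equation below requires only matrix–vector products and a single clipping operation.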

The dual neural network model for solving (6) is thus developed with the following dynamical equation and output equation

$$
\Lambda \dot{u} = -JQ^{-1}J^T u + g\bigl(JQ^{-1}J^T u - u - JQ^{-1}c\bigr) + JQ^{-1}c,
\qquad
x = Q^{-1}J^T u - Q^{-1}c, \qquad (10)
$$

where Λ ∈ R^{(p+q+n)×(p+q+n)} is a positive diagonal matrix used to scale the convergence rate of the proposed dual network.

Clearly, to solve the quadratic program with general linear constraints, the dynamic equation of the dual neural network is piecewise linear and does not contain any high-order nonlinear term or penalty parameter. Compared to other recurrent neural networks, the dual neural network model is single-layered, has no more than p + q + n neurons, and is much simpler in terms of network architecture. The block diagram of the dual neural network system is depicted in Fig. 2. A circuit realizing the dual neural network consists of summers, integrators and weighted connections, and the piecewise linear activation function g_i(u_i) may be implemented by using an operational amplifier.
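
To make the dynamics concrete, the following sketch integrates (10) by a simple forward-Euler scheme (a discretization chosen here purely for illustration; the Letter analyses the continuous-time system). It assumes Λ = lam·I and uses the clipping function of (8); the default step sizes are illustrative and may need tuning for a given problem.

    import numpy as np

    def simulate_dual_network(Q, c, J, r_minus, r_plus,
                              lam=1e-6, dt=5e-9, steps=500_000, u0=None):
        """Forward-Euler integration of the dual network (10), with Lambda = lam * I:
           Lambda du/dt = -J Q^{-1} J^T u + g(J Q^{-1} J^T u - u - J Q^{-1} c) + J Q^{-1} c,
           x            =  Q^{-1} J^T u - Q^{-1} c.
        """
        Qinv = np.linalg.inv(Q)
        W = J @ Qinv @ J.T                    # J Q^{-1} J^T
        s = J @ Qinv @ c                      # J Q^{-1} c
        u = np.zeros(J.shape[0]) if u0 is None else np.asarray(u0, dtype=float)
        for _ in range(steps):
            rhs = -W @ u + np.clip(W @ u - u - s, r_minus, r_plus) + s
            u = u + (dt / lam) * rhs          # Euler step of du/dt = Lambda^{-1} * rhs
        x = Qinv @ (J.T @ u - c)              # output equation of (10)
        return x, u

In hardware the continuous-time system would be realized with integrators as in Fig. 2; the discretization above is only a software stand-in for examining the behavior of (10).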


Fig. 2. Block diagram of the dual neural network for solving quadratic programs.

4. Convergence results

In this section, we show the global exponential convergence of the proposed dual neural network for solving quadratic programs.

Related definitions and a lemma are presented first. A neural network is said to be globally convergent if, starting from any initial point taken in the associated Euclidean space, every state trajectory of the neural network converges to an equilibrium point that depends on the initial state of the trajectory. Furthermore, the neural network is said to be globally exponentially convergent if every trajectory starting from any initial point x(t0) satisfies

$$
\bigl\|x(t) - x^*\bigr\| \le \alpha \bigl\|x(t_0) - x^*\bigr\| \exp\bigl(-\beta(t - t_0)\bigr), \qquad \forall t \ge t_0 \ge 0,
$$

where α and β are positive constants and x^* is an equilibrium point. Exponential convergence is the most desirable convergence property.

Lemma 1 [15,17–19]. Assume that the set Ω ⊂ R^q is a closed convex set. Then the following two inequalities hold:

$$
\bigl(v - P_\Omega(v)\bigr)^T \bigl(P_\Omega(v) - u\bigr) \ge 0, \qquad \forall v \in R^q,\ u \in \Omega,
$$
$$
\bigl\|P_\Omega(u) - P_\Omega(v)\bigr\| \le \|u - v\|, \qquad \forall u, v \in R^q,
$$

where P_Ω(x) = argmin_{v∈Ω} ‖x − v‖ is the projection operator from R^q onto Ω.
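
As a quick numerical illustration of Lemma 1 (a throwaway sketch with random data; the box and its projection are exactly the ones realized by g):

    import numpy as np

    rng = np.random.default_rng(0)
    r_minus = np.array([-1.0, 0.0, -2.0])   # an arbitrary box Omega = [r_minus, r_plus]
    r_plus = np.array([1.0, 0.5, 3.0])

    def P(v):
        # projection onto Omega, i.e., the operator P_Omega of Lemma 1
        return np.clip(v, r_minus, r_plus)

    for _ in range(1000):
        v = 5.0 * rng.normal(size=3)             # arbitrary point of R^3
        w = rng.uniform(r_minus, r_plus)         # arbitrary point of Omega
        assert (v - P(v)) @ (P(v) - w) >= -1e-12                              # first inequality
        assert np.linalg.norm(P(v) - P(w)) <= np.linalg.norm(v - w) + 1e-12   # nonexpansiveness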

It is clear that the set Ω := {u ∈ R^{p+q+n} | r^- ≤ u ≤ r^+} is a closed convex set, and that g(·) in (8) possesses the above projection property. The convergence results of the dual neural network for constrained quadratic programming are discussed as follows.

Theorem 1. The state of the dual neural network (10) is globally exponentially convergent to an equilibrium point u*.

Proof. To show the convergence property, the following inequalities are derived.

At u*, we have the following inequality property [16,19]:

$$
\bigl(v - JQ^{-1}J^T u^* + JQ^{-1}c\bigr)^T u^* \ge 0, \qquad \forall v \in \Omega, \qquad (11)
$$

which can be obtained by discussing the following three cases:

Case 1. If for some i ∈ {1, ..., p + q + n}, u*_i = 0 and r_i^- ≤ [Jx*]_i ≤ r_i^+, then (v_i − [Jx*]_i)u*_i = 0;

Case 2. If for some j ∈ {1, ..., p + q + n}, u*_j > 0, [Jx*]_j = r_j^- and r_j^- ≤ v_j ≤ r_j^+, then v_j − [Jx*]_j ≥ 0 and thus (v_j − [Jx*]_j)u*_j ≥ 0;

Case 3. If for some k ∈ {1, ..., p + q + n}, u*_k < 0, [Jx*]_k = r_k^+ and r_k^- ≤ v_k ≤ r_k^+, then v_k − [Jx*]_k ≤ 0 and thus (v_k − [Jx*]_k)u*_k ≥ 0.

Therefore it follows from (11) that

$$
\Bigl(g\bigl(JQ^{-1}J^T u - JQ^{-1}c - u\bigr) - JQ^{-1}J^T u^* + JQ^{-1}c\Bigr)^T u^* \ge 0. \qquad (12)
$$

Defining ū := JQ^{-1}J^T u − JQ^{-1}c, it follows from Lemma 1 that, ∀u ∈ R^{p+q+n},

$$
\Bigl(g(\bar{u} - u) - JQ^{-1}J^T u^* + JQ^{-1}c\Bigr)^T \bigl(\bar{u} - u - g(\bar{u} - u)\bigr) \ge 0. \qquad (13)
$$

Then, adding (12) and (13) yields

$$
\Bigl(g(\bar{u} - u) - JQ^{-1}J^T u^* + JQ^{-1}c\Bigr)^T \bigl(u^* + \bar{u} - u - g(\bar{u} - u)\bigr) \ge 0. \qquad (14)
$$

Defining

$$
\bar{g} := g\bigl(JQ^{-1}J^T u - JQ^{-1}c - u\bigr) - JQ^{-1}J^T u + JQ^{-1}c,
$$

i.e., ḡ = g(ū − u) − ū, (14) is reformulated as

$$
\bigl(\bar{g} + JQ^{-1}J^T(u - u^*)\bigr)^T \bigl((u - u^*) + \bar{g}\bigr) \le 0,
$$


and subsequently as

$$
(u - u^*)^T \bar{g} + \bar{g}^T JQ^{-1}J^T (u - u^*)
\le -\|\bar{g}\|^2 - (u - u^*)^T JQ^{-1}J^T (u - u^*). \qquad (15)
$$

Now we choose a Lyapunov function candidate as

$$
V\bigl(u(t)\bigr) = \tfrac{1}{2}\bigl\|M\bigl(u(t) - u^*\bigr)\bigr\|^2, \qquad (16)
$$

where the matrix M is symmetric positive definite and M^2 = (I + JQ^{-1}J^T)Λ > 0. Clearly, V(u) is positive definite (i.e., V > 0 if u ≠ u* and V = 0 iff u = u*) for any u taken in the domain Γ(u*) ⊂ R^{p+q+n} (i.e., the attraction region of u*). dV/dt is negative definite since, in view of (15),

$$
\frac{dV}{dt} = (u - u^*)^T M^2 \dot{u}
= (u - u^*)^T \bigl(I + JQ^{-1}J^T\bigr)\bar{g}
\le -\|\bar{g}\|^2 - (u - u^*)^T JQ^{-1}J^T (u - u^*) \le 0, \qquad (17)
$$

and dV/dt = 0 iff u = u* in Γ(u*). Thus it follows that the dual neural network (10) is globally convergent to an equilibrium point u*, which depends on the initial state of the trajectory.

Furthermore, to show the global exponential convergence [20,21], we examine V(u) and dV/dt again. It follows from (16) that c1‖u − u*‖² ≤ V(u) ≤ c2‖u − u*‖², where c2 ≥ c1 > 0 are, respectively, one half of the maximal and minimal eigenvalues of (I + JQ^{-1}J^T)Λ. Clearly, c1 and c2 are proportional to the design parameter Λ. Moreover, it is reasonable to assume that there exists ρ > 0 such that ‖ḡ‖² ≥ ρ‖u − u*‖², since ‖ḡ‖ > 0 for any u(t) ≠ u* in Γ(u*), and ‖ḡ‖ = 0 amounts to u = u*. In addition, by analyzing the linear/saturation cases of g_j([JQ^{-1}J^T u − JQ^{-1}c − u]_j), the range of ρ is (0, 1]. Therefore, from (15), we have

$$
\frac{dV(u)}{dt} \le -\rho\|u - u^*\|^2 - (u - u^*)^T JQ^{-1}J^T (u - u^*)
= -(u - u^*)^T \bigl(\rho I + JQ^{-1}J^T\bigr)(u - u^*)
\le -\beta V\bigl(u(t)\bigr),
$$

where β = ρ/c2 > 0. Thus we have V(u(t)) = O(exp(−β(t − t0))), ∀t ≥ t0, and hence ‖u(t) − u*‖ = O(exp(−β(t − t0)/2)), ∀t ≥ t0.

Theorem 2. The output x* = Q^{-1}J^T u* − Q^{-1}c is the optimal solution to the constrained quadratic programming problem (1)–(4), where u* denotes the equilibrium point of the dual neural network.

Proof. The equilibrium of the dual neural network satisfies the following equations:

$$
g\bigl(JQ^{-1}J^T u^* - JQ^{-1}c - u^*\bigr) - JQ^{-1}J^T u^* + JQ^{-1}c = 0,
\qquad
x^* = Q^{-1}J^T u^* - Q^{-1}c,
$$

which is equivalent to

$$
g\bigl(Jx^* - u^*\bigr) - Jx^* = 0. \qquad (18)
$$

Let u*_A, u*_C and u*_I denote the vectors of dual decision variables corresponding to the constraints (2), (3), (4), respectively, i.e., u* = [(u*_A)^T, (u*_C)^T, (u*_I)^T]^T. Eq. (18) becomes

$$
g\left(\begin{bmatrix} Ax^* - u_A^* \\ Cx^* - u_C^* \\ x^* - u_I^* \end{bmatrix}\right)
- \begin{bmatrix} Ax^* \\ Cx^* \\ x^* \end{bmatrix} = 0. \qquad (19)
$$

By the definition of g(·), it follows from (19) that b − Ax* = 0 with u*_A unrestricted, and

$$
Cx^* - d \le 0, \qquad u_C^* \le 0, \qquad (u_C^*)^T (Cx^* - d) = 0,
$$
$$
x_i^* = x_i^-\ \text{if } [u_I^*]_i > 0; \qquad
x_i^* = x_i^+\ \text{if } [u_I^*]_i < 0; \qquad
x_i^- \le x_i^* \le x_i^+\ \text{if } [u_I^*]_i = 0.
$$

In addition to Qx* − A^T u*_A − C^T u*_C − u*_I + c = 0, implied by x* = Q^{-1}J^T u* − Q^{-1}c, the above equations constitute the Karush–Kuhn–Tucker optimality conditions of (1)–(4), which completes the proof.

Corollary. The output x(t) of the dual neural network is globally exponentially convergent to the unique equilibrium point x*, which is the solution to the constrained quadratic programming problem (1)–(4).

Proof. Since the objective function in (1) is strictly convex due to the positive definiteness of Q, and the constraint region constructed by (2)–(4) is a convex set, by Theorem 3.4.2 of [22] the constrained minimizer x* of the quadratic programming problem (1)–(4) is unique.


It follows from the output equation

$$
x(t) = Q^{-1}J^T u(t) - Q^{-1}c,
$$

and Theorem 2 that

$$
\bigl\|x(t) - x^*\bigr\| = \bigl\|Q^{-1}J^T\bigl(u(t) - u^*\bigr)\bigr\|
\le \bigl\|Q^{-1}J^T\bigr\|\,\bigl\|u(t) - u^*\bigr\|. \qquad (20)
$$

In view of the result of Theorem 1 that ‖u(t) − u*‖ = O(exp(−β(t − t0)/2)), ∀t ≥ t0 ≥ 0, inequality (20) implies that the output x(t) of the dual neural network (10) exponentially converges to x*, which is the unique solution to the constrained quadratic program (1)–(4).
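
The optimality certified by Theorem 2 and the Corollary can also be spot-checked numerically. The following sketch (assuming numpy and the partition u = (u_A, u_C, u_I) used above; the helper name and tolerances are illustrative, not from the Letter) tests stationarity, feasibility and the complementarity conditions derived in the proof of Theorem 2 at a candidate pair (x, u):

    import numpy as np

    def check_kkt(Q, c, A, b, C, d, x_lo, x_hi, x, uA, uC, uI, tol=1e-8):
        """Check the optimality conditions of (1)-(4) at (x, u), as listed in the
        proof of Theorem 2, with u partitioned into (uA, uC, uI)."""
        ok = np.allclose(Q @ x - A.T @ uA - C.T @ uC - uI + c, 0.0, atol=tol)   # stationarity
        ok &= np.allclose(A @ x, b, atol=tol)                                   # A x = b
        ok &= np.all(C @ x <= d + tol) and np.all(uC <= tol)                    # C x <= d, u_C <= 0
        ok &= abs(uC @ (C @ x - d)) <= tol                                      # complementary slackness
        ok &= np.all(x >= x_lo - tol) and np.all(x <= x_hi + tol)               # bound feasibility
        ok &= np.all((uI <= tol) | (np.abs(x - x_lo) <= tol))                   # [u_I]_i > 0 only if x_i = x_i^-
        ok &= np.all((uI >= -tol) | (np.abs(x - x_hi) <= tol))                  # [u_I]_i < 0 only if x_i = x_i^+
        return bool(ok)

For instance, the equilibrium reported in Section 5 (x* = [2, 1, −6]^T and u* = [0, −2, 0, 0, 0, 0]^T, partitioned as u_A = [0], u_C = [−2, 0], u_I = [0, 0, 0]) satisfies all of these conditions.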

5. Illustrative example

In this section, a numerical example in [12] is discussed to demonstrate the effectiveness and performance behavior of the proposed dual neural network for solving quadratic programs subject to general linear constraints:

$$
\begin{aligned}
&\text{Minimize} && 11x_1^2 + x_2^2 + x_3^2 - 2x_1x_2 + 6x_1x_3 - 4x_1,\\
&\text{Subject to} && 2x_1 + 2x_2 + x_3 = 0,\\
& && -x_1 + x_2 \le -1,\\
& && 3x_1 + x_3 \le 4,\\
& && -6 \le x_i \le 6, \quad i = 1, 2, 3.
\end{aligned}
$$

That is, A = [2, 2, 1], b = 0, x^+ = −x^- = [6, 6, 6]^T, and

$$
Q = \begin{bmatrix} 22 & -2 & 6 \\ -2 & 2 & 0 \\ 6 & 0 & 2 \end{bmatrix},\qquad
c = \begin{bmatrix} -4 \\ 0 \\ 0 \end{bmatrix},\qquad
C = \begin{bmatrix} -1 & 1 & 0 \\ 3 & 0 & 1 \end{bmatrix},\qquad
d = \begin{bmatrix} -1 \\ 4 \end{bmatrix}.
$$
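
For reference, a minimal numpy transcription of this example follows, using the same illustrative forward-Euler discretization as the sketch in Section 3 (the step sizes and iteration count are illustrative, not values from the Letter); the optimum reported below is x* = [2, 1, −6]^T with u* = [0, −2, 0, 0, 0, 0]^T.

    import numpy as np

    # Problem data of the example
    Q = np.array([[22., -2., 6.], [-2., 2., 0.], [6., 0., 2.]])
    c = np.array([-4., 0., 0.])
    A, b = np.array([[2., 2., 1.]]), np.array([0.])
    C, d = np.array([[-1., 1., 0.], [3., 0., 1.]]), np.array([-1., 4.])
    x_lo, x_hi = np.full(3, -6.), np.full(3, 6.)

    # Unified form (6): r_minus <= J x <= r_plus, with p + q + n = 6 dual neurons
    NEG_INF = -1e12                               # stand-in for the "-infinity" entries d^-
    J = np.vstack([A, C, np.eye(3)])
    r_minus = np.concatenate([b, [NEG_INF, NEG_INF], x_lo])
    r_plus = np.concatenate([b, d, x_hi])

    # Forward-Euler simulation of the dual network (10) with Lambda = lam * I
    Qinv = np.linalg.inv(Q)
    W, s = J @ Qinv @ J.T, J @ Qinv @ c
    lam, dt = 1e-6, 5e-9
    rng = np.random.default_rng(0)
    u = rng.uniform(-5.0, 5.0, size=6)            # random initial state in [-5, 5]^6
    for _ in range(500_000):
        u += (dt / lam) * (-W @ u + np.clip(W @ u - u - s, r_minus, r_plus) + s)
    x = Qinv @ (J.T @ u - c)
    print(np.round(x, 4))                         # expected to approach [ 2.  1. -6.]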

Using the Lagrange neural network [5] to solve the above quadratic program, the total number of neurons is 20, in addition to the premature defect. Using the primal–dual neural networks or other kinds of dual networks such as [9,11,12], the total number of neurons is usually more than 10. In comparison, the size of the proposed dual neural network for solving the above quadratic programming problem is only 6. As opposed to primal–dual neural networks, the complexity reduction of the proposed dual network is more than (10 − 6)/6 ≈ 66%.

Fig. 3. The state transients of the dual neural network for solving the constrained quadratic program.

Fig. 4. The output transients of the dual neural network for solving the constrained quadratic program.

The dual neural network, with its scaling parameter set to 10^{-8}, is simulated, and the results are illustrated in Figs. 3–6. As shown in Figs. 3 and 4, starting from an initial state randomly selected within [−5, 5]^6, the dual neural network converges to the optimal solution of the constrained quadratic program. That is, within 5 × 10^{-7} second (i.e., 0.5 µs), the network can approach x* = [2, 1, −6]^T and u* = [0, −2, 0, 0, 0, 0]^T without any appreciable error. In addition, the difference between the neurally computed solution and the theoretical solution x* is less than 10^{-12}. Fig. 5 illustrates the transient of ‖ḡ‖/‖u − u*‖ during the solution of the constrained quadratic program; accordingly, the constant ρ appearing in the proof of Theorem 1 is about 0.15 in this example. The nonzero ρ guarantees the exponential convergence property of the dual neural network. As illustrated in Fig. 6, the states of the dual neural network, starting from five different initial states, all converge to the optimal solution of the minimization problem. This substantiates the global convergence property of the dual neural network.

Fig. 5. The transients of ‖ḡ‖/‖u − u*‖ in solving the constrained quadratic program.

Fig. 6. Spatial trajectories of (x1, x2, x3) of the dual neural network starting from five different initial states.

6. Concluding remarks

This Letter presents a single-layer dual neural network for solving constrained quadratic programming problems. Compared to other neural networks for quadratic programming, the proposed dual neural network is designed with fewer neurons and has the global exponential convergence property. Moreover, the dynamic equation of the dual neural network is piecewise linear and does not contain any high-order nonlinear term, and thus the architecture is much simpler. The simulation results also substantiate the theoretical results and demonstrate the superior performance of the dual neural network.

References

[1] I.B. Pyne, Trans. Am. Inst. Elec. Eng. 75 (1956) 139.
[2] D.W. Tank, J.J. Hopfield, IEEE Trans. Circuits Systems 33 (1986) 533.
[3] M.P. Kennedy, L.O. Chua, IEEE Trans. Circuits Systems 35 (5) (1988) 554.
[4] Y. Xia, J. Wang, Recurrent neural networks for optimization: the state of the art, in: L.R. Medsker, L.C. Jain (Eds.), CRC Press, New York, 2000, pp. 13–47, Chapter 2.
[5] S. Zhang, A.G. Constantinides, IEEE Trans. Circuits Systems 39 (7) (1992) 441.
[6] S. Sudharsanan, M. Sundareshan, Neural Networks 4 (5) (1991) 599.
[7] A. Bouzerdoum, T.R. Pattison, IEEE Trans. Neural Networks 4 (2) (1993) 293.
[8] J. Wang, Neural Networks 7 (4) (1994) 629.
[9] Y. Xia, IEEE Trans. Neural Networks 7 (6) (1996) 1544.
[10] X. Liang, J. Wang, IEEE Trans. Neural Networks 11 (6) (2000) 1251.
[11] Q. Tao, J. Cao, M. Xue, H. Qiao, Phys. Lett. A 288 (2) (2001) 88.
[12] J. Wang, Y. Xia, Intl. Joint Conf. Neural Net. 1 (1999) 588.
[13] Y. Xia, J. Wang, IEEE Trans. Systems Man Cybernet. 31 (1) (2001) 147.
[14] O.L. Mangasarian, J. Optim. Theory Appl. 22 (2) (1979) 465.
[15] D.P. Bertsekas, Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[16] W. Li, J. Swetits, SIAM J. Optim. 7 (3) (1997) 595.
[17] J.S. Pang, Math. Oper. Res. 12 (1987) 474.
[18] J.S. Pang, J.C. Yao, SIAM J. Control Optim. 33 (1995) 168.
[19] Y. Xia, J. Wang, IEEE Trans. Neural Networks 9 (6) (1998) 1331.
[20] Y. Xia, J. Wang, IEEE Trans. Neural Networks 11 (4) (2000) 1017.
[21] Y. Xia, J. Wang, IEEE Trans. Automat. Control 46 (4) (2001) 635.
[22] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming—Theory and Algorithms, Wiley, New York, 1993.