A General Projection Neural Network for Solving Optimization and Related Problems


Youshen Xia and Jun Wang
Department of Automation and Computer-Aided Engineering

The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong

Abstract—In this paper, we propose a general projection neural network for solving a wider class of optimization and related problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria for two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.

I. INTRODUCTION

Many engineering problems can be formulated as constrained nonlinear optimization problems and complementarity problems [1]. Real-time solutions of these problems are often needed in engineering systems, such as signal processing, system identification, and robot motion control [2]-[4]. The numbers of decision variables and constraints are usually very large, and large-scale optimization problems are even more challenging when they have to be solved in real time to optimize the performance of dynamical systems. For such applications, conventional numerical methods may not be adequate due to the problem dimensionality and the stringent requirement on computational time.

A promising approach to solving such problems in real time is to employ artificial neural networks based on circuit implementation [5]. As parallel computational models, neural networks possess many desirable properties, such as real-time information processing. Therefore, neural networks for optimization, control, and signal processing have received tremendous interest. In the past two decades, the theory, methodology, and applications of neural networks have been widely investigated (see [5]-[18] and the references therein). Tank and Hopfield [5] first proposed a neural network for solving linear programming problems that was mapped onto a closed-loop circuit [6]. Kennedy and Chua [7] extended their work and proposed a neural network for solving nonlinear convex programming problems. Because that network contains a penalty parameter, the true minimizer can be obtained only when the penalty parameter is infinite; moreover, the network has both implementation and convergence problems when the penalty parameter is very large. To avoid using penalty parameters, some significant work has been done in recent years. Rodríguez-Vázquez et al. proposed a switched-capacitor neural network for solving nonlinear convex programming problems, where the optimal solution is assumed to be inside the feasible set [8]. Zhang et al. proposed a second-order neural network for solving nonlinear convex programming problems with equality constraints [9]. The second-order neural network is complex in implementation due to the need for computing time-varying inverse matrices. Bouzerdoum and Pattison presented a neural network for solving quadratic convex optimization problems with bounded constraints [10]. Recently, we developed several neural networks: primal-dual neural networks for solving linear and quadratic convex programming problems and monotone linear complementarity problems, a dual neural network for solving strictly convex quadratic programming problems, and a projection neural network for solving a class of nonlinear convex programming problems and monotone nonlinear complementarity problems [12]-[16].

In this paper, based on a generalized equation in [20], we propose a general projection neural network for solving a wider class of optimization and related problems. The proposed neural network is a significant generalization of existing neural networks for optimization, such as the primal-dual neural network, the dual neural network, and the projection neural network. In addition to its low complexity for implementation, the proposed neural network is shown to be stable in the sense of Lyapunov and globally convergent, globally asymptotically stable, or globally exponentially stable under different mild conditions. Moreover, several improved stability conditions for two special cases of the general projection neural network are obtained under weaker conditions. Illustrative examples demonstrate the performance and effectiveness of the proposed neural network.

This paper is organized as follows. In the next section, the general projection neural network and its advantages are described. In Section III, the global convergence properties of the proposed neural network, including global asymptotic stability and global exponential stability, are studied under some mild conditions. In Section IV, several illustrative examples are presented. Section V gives the conclusions of this paper.



II. MODEL DESCRIPTION

We propose a general projection neural network with its dynamical equation defined as

$$ \frac{du}{dt} = \Lambda\{P_X(G(u) - F(u)) - G(u)\}, \qquad (1) $$

where u ∈ R^n is the state vector, Λ = diag(λ_i) is a positive diagonal matrix, F(u) and G(u) are continuously differentiable vector-valued functions from R^n into R^n, X = {u ∈ R^n | l_i ≤ u_i ≤ h_i, i = 1, ..., n}, and P_X : R^n → X is a projection operator defined by P_X(u) = [P_X(u_1), ..., P_X(u_n)]^T, where

$$ P_X(u_i) = \begin{cases} l_i, & u_i < l_i, \\ u_i, & l_i \le u_i \le h_i, \\ h_i, & u_i > h_i. \end{cases} $$

The dynamical equation in (1) can be easily realized by a recurrent neural network with a single-layer structure, as shown in Fig. 1. The projection operator P_X(·) can be implemented by a piecewise-linear activation function. It can be seen that the circuit realizing the proposed neural network consists of 2n summers, n integrators, n piecewise-linear activation functions, and n processors for G(u) and F(u). Therefore, the network complexity depends only on the mappings G(u) and F(u).

Fig. 1. A block diagram of the general projection neural network in (1).
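To make the model concrete, the following is a minimal numerical sketch (ours, not from the paper) that integrates (1) with the forward Euler method. The particular F, G, box bounds, step size, and horizon are illustrative assumptions.

```python
import numpy as np

def box_projection(u, l, h):
    """Piecewise-linear activation P_X: clip each component to [l_i, h_i]."""
    return np.clip(u, l, h)

def simulate_gpnn(F, G, Lam, l, h, u0, dt=1e-3, steps=20000):
    """Forward-Euler integration of du/dt = Lam @ (P_X(G(u) - F(u)) - G(u))."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        u += dt * (Lam @ (box_projection(G(u) - F(u), l, h) - G(u)))
    return u

# Illustrative instance (assumed, not from the paper): affine F, identity G.
n = 3
M = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
q = np.array([-1.0, -2.0, 0.5])
F = lambda u: M @ u + q
G = lambda u: u
Lam = np.eye(n)                      # positive diagonal scaling matrix
l, h = np.zeros(n), np.ones(n)       # box constraint set X = [0, 1]^n
u_eq = simulate_gpnn(F, G, Lam, l, h, u0=np.random.rand(n))
# At an equilibrium the right-hand side of (1) vanishes, so this residual ~ 0.
print(u_eq, np.linalg.norm(box_projection(G(u_eq) - F(u_eq), l, h) - G(u_eq)))
```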

In addition to its low complexity for realization, the general projection neural network in (1) has several advantages. First, it is a significant generalization of some existing neural networks for optimization. For example, let G(u) = u; then the proposed neural network model becomes the projection neural network model [16] given by

$$ \frac{du}{dt} = \Lambda\{P_X(u - F(u)) - u\}. \qquad (2) $$

In the affine case that F(u) = Mu + q, where M ∈ R^{n×n} is a positive semi-definite matrix and q ∈ R^n, the proposed neural network model becomes the primal-dual neural network model [13]

$$ \frac{du}{dt} = \Lambda\{P_X(u - (Mu + q)) - u\}. \qquad (3) $$

Let F(u) = u; then the proposed neural network model becomes

$$ \frac{du}{dt} = \Lambda\{P_X(G(u) - u) - G(u)\}. \qquad (4) $$

In the affine case that G(u) = Wu + q, where W ∈ R^{n×n} is a positive semi-definite matrix and q ∈ R^n, the proposed neural network model becomes the dual neural network model [15]

$$ \frac{du}{dt} = \Lambda\{P_X(Wu + q - u) - Wu - q\}. \qquad (5) $$

Next, the general projection neural network in (1) is useful for solving optimization and related problems. This is because it is intimately related to the following general variational inequality (GVI) [20]: find u* ∈ X such that G(u*) ∈ X and

$$ (u - G(u^*))^T F(u^*) \ge 0, \quad \forall u \in X. \qquad (6) $$

From [20] it can be seen that solving the GVI is equivalent to finding a zero of the generalized equation

$$ P_X(G(u) - F(u)) - G(u) = 0. \qquad (7) $$

Therefore, the equilibrium point of the general projection neural network in (1) solves the GVI. This property shows that the existence of an equilibrium point of (1) is equivalent to the existence of a solution of the GVI; thus (1) has at least one equilibrium point whenever the GVI has a solution. As for the existence of solutions of the GVI, the reader is referred to the related papers [19], [20]. It is well known that the GVI has been viewed as a general framework unifying the treatment of many optimization, economic, and engineering problems [22]-[24]. For example, the GVI includes two useful models: the variational inequality problem and the general complementarity problem. The variational inequality problem is to find a u* ∈ X such that

$$ (u - u^*)^T F(u^*) \ge 0, \quad \forall u \in X. \qquad (8) $$

The general complementarity problem is to find a u* ∈ R^n such that

$$ G(u^*) \ge 0, \quad F(u^*) \ge 0, \quad G(u^*)^T F(u^*) = 0. \qquad (9) $$

Because solving the GVI is equivalent to finding an equilibrium point of (1), the desired solutions of the GVI can be obtained by tracking the continuous trajectory of (1). Therefore, the proposed neural network in (1) is an attractive alternative as a real-time solver for many optimization and related problems.

III. CONVERGENCE RESULTS

We establish the following results on the general projection neural network.

A. General Case

Theorem 1. Assume that F(u) is G-monotone at an equilibrium point u* of (1):

$$ \langle F(u) - F(u^*),\, G(u) - G(u^*) \rangle \ge 0, \quad \forall u \in R^n, $$

where ⟨·, ·⟩ denotes an inner product. If ∇F(u) + ∇G(u) is symmetric and positive semi-definite in R^n, then the general projection neural network in (1) is stable in the sense of Lyapunov and is globally convergent to an equilibrium point of (1). In particular, the general projection neural network in (1) is globally asymptotically stable if the equilibrium point is unique.
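To make the hypotheses of Theorem 1 concrete, here is a hedged toy instance (our construction, not the paper's). With G(u) = u and F(u) = Au + b for a symmetric positive definite A, F is G-monotone and ∇F(u) + ∇G(u) = A + I is symmetric and positive definite, so trajectories started from different random points should all settle at the same (unique) equilibrium.

```python
import numpy as np

# Assumed toy data: A symmetric positive definite, so
# <F(u) - F(v), G(u) - G(v)> = (u - v)^T A (u - v) >= 0  (G-monotonicity).
A = np.array([[2.0, 1.0], [1.0, 1.0]])
b = np.array([-1.0, 0.5])
l, h = np.zeros(2), 2.0 * np.ones(2)    # box X = [0, 2]^2

def rhs(u):
    Fu, Gu = A @ u + b, u
    return np.clip(Gu - Fu, l, h) - Gu  # right-hand side of (1) with Lambda = I

dt, rng, finals = 1e-3, np.random.default_rng(0), []
for _ in range(5):                      # several random initial states
    u = rng.uniform(-3.0, 3.0, size=2)
    for _ in range(40000):
        u += dt * rhs(u)
    finals.append(u.copy())
# All rows should agree, consistent with global asymptotic stability.
print(np.array(finals))
```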




Theorem 2. Assume that F(u) is G-strongly monotone at u*:

$$ \langle F(u) - F(u^*),\, G(u) - G(u^*) \rangle \ge \beta \|u - u^*\|^2, \quad \forall u \in R^n, $$

where ‖·‖ denotes the l_2-norm on R^n and β > 0 is a constant. If ∇F(u) + ∇G(u) is symmetric and positive semidefinite, and has an upper bound in R^n, then the general projection neural network in (1) is globally exponentially stable at u*.
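A hedged numerical illustration of Theorem 2 (an assumed instance, not data from the paper): with G(u) = u and F(u) = Au + b for a symmetric positive definite A, F is G-strongly monotone, and the slope of log ‖u(t) − u*‖ against t estimates the exponential decay rate.

```python
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 2.0]])   # symmetric positive definite (assumed)
b = np.array([1.0, -1.0])
l, h = np.zeros(2), np.ones(2)
dt = 1e-3

def step(u):
    """One Euler step of (1) with G(u) = u, F(u) = A u + b, Lambda = I."""
    return u + dt * (np.clip(u - (A @ u + b), l, h) - u)

u = np.array([5.0, -4.0])
for _ in range(200000):                  # run long enough to locate u* accurately
    u = step(u)
u_star = u.copy()

u, errs = np.array([5.0, -4.0]), []
for _ in range(5000):
    u = step(u)
    errs.append(np.linalg.norm(u - u_star))
# Fit log ||u(t) - u*|| ~ c - eta * t; the slope estimates -eta < 0.
t = dt * np.arange(1, len(errs) + 1)
print("estimated decay rate:", np.polyfit(t, np.log(np.array(errs) + 1e-300), 1)[0])
```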

As an immediate corollary of Theorems 1 and 2, we have the following results.

Corollary 1. Assume that F(u) = u. If ∇G(u) is symmetric and positive semidefinite, then the neural network in (4) is stable in the sense of Lyapunov and is globally convergent to an equilibrium point of (4). If ∇G(u) is symmetric and uniformly positive definite and has an upper bound in R^n, then the neural network in (4) is globally exponentially stable.

As a special case in which G(u) = Wu + q, where W ∈ R^{n×n} is symmetric and positive semidefinite and q ∈ R^n, the result of Corollary 1 is presented in [15].

B. Two Special Cases

As for the projection neural network in (2), we have two further results below.

Theorem 3. Assume that ∇F(u) is positive semidefinite in R^n. If

$$ (P_X(u - F(u)) - u)^T \nabla F(u) (P_X(u - F(u)) - u) = 0, \quad F(u)^T (u - u^*) = 0 $$

implies that u is a solution to (8), then the projection neural network in (2) is stable in the sense of Lyapunov and globally convergent to an equilibrium point of (2).

Theorem 4. If ∇F(u) is uniformly positive definite in R^n, then the projection neural network in (2) is globally exponentially stable.

In the affine case that F(u) = Mu + q and G(u) = Nu + c, where M, N ∈ R^{n×n} and q, c ∈ R^n, the neural network model becomes

$$ \frac{du}{dt} = \Lambda\{P_X((N - M)u + c - q) - Nu - c\}. \qquad (10) $$

As for the convergence of (10), we have the following results.

Theorem 5. Assume that Λ = N^T + M^T. The neural network in (10) is globally convergent to an equilibrium point of (10) when M^T N is positive semi-definite, and is globally exponentially stable when M^T N is positive definite.

As an immediate corollary of Theorem 5, we have the following result, which is presented in [13].

Corollary 2. Assume that N = I and Λ = I + M^T. The neural network in (10) is globally convergent to an equilibrium point of (10) when M is positive semi-definite, and is globally exponentially stable when M is positive definite.

IV. ILLUSTRATIVE EXAMPLES

In order to demonstrate the effectiveness and performance of the general projection neural network, in this section we give several illustrative examples.

Example 1. Consider the implicit complementarity problem (ICP) [28]: find u ∈ R^n such that

$$ u - \Phi(u) \ge 0, \quad F(u) \ge 0, \quad (u - \Phi(u))^T F(u) = 0, $$

where F(u) = Mu + q for a given matrix M ∈ R^{n×n} and vector q ∈ R^n, and the components of Φ(u) are given by Φ(u)_i = −1.5u_i + 0.25u_i². It can be seen that the ICP can be viewed as the GNCP, i.e., problem (9), where G(u) = u − Φ(u) and F(u) = Mu + q. The general projection neural network in (1) is thus applied to solve the above ICP, and its state equation is given by

$$ \frac{du}{dt} = \Lambda\{(u - \Phi(u) - F(u))^+ - u + \Phi(u)\}, \qquad (11) $$

where (u)^+ = [(u_1)^+, ..., (u_n)^+]^T and (u_i)^+ = max{0, u_i} for i = 1, ..., n.
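The following rough sketch simulates (11). Φ is taken componentwise as in the text; the matrix M and the vector q of the example are not recoverable from the source scan, so a symmetric tridiagonal stand-in and q = (1, ..., 1)^T are assumed purely for illustration (for this assumed data the ICP solution is u* = 0).

```python
import numpy as np

n = 10
# Assumed stand-in data (the example's own M and q are illegible in the scan).
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiag(-1, 2, -1)
q = np.ones(n)

phi = lambda u: -1.5 * u + 0.25 * u**2   # Phi(u), componentwise, from the text
F = lambda u: M @ u + q
G = lambda u: u - phi(u)

# State equation (11): du/dt = Lambda {(u - Phi(u) - F(u))^+ - u + Phi(u)},
# i.e. Lambda {(G(u) - F(u))^+ - G(u)}, here with Lambda = 2I.
u, dt = np.random.rand(n), 1e-3
for _ in range(50000):
    u += dt * 2.0 * (np.maximum(G(u) - F(u), 0.0) - G(u))

# The complementarity error (u - Phi(u))^T F(u) should approach zero (cf. Fig. 2),
# with G(u) >= 0 and F(u) >= 0 in the limit.
print("complementarity error:", G(u) @ F(u))
print("min G(u):", G(u).min(), "min F(u):", F(u).min())
```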



All simulation results show that the trajectory of (11) is always globally convergent to a solution of the ICP. For example, let Λ = 2I and let the initial point be random. Fig. 2 shows the transient behavior of the complementarity error (u − Φ(u))^T F(u) based on (11) with n = 10, 30, 60, 90. Fig. 3 shows the transient behavior of the proposed neural network with n = 10.

Fig. 2. The complementarity error based on the proposed neural network in (11) for solving the ICP in Example 1.

Fig. 3. Global convergence of the proposed neural network in (11) for solving the ICP in Example 1.

Example 2. Consider the variational inequality problem (VIP) with nonlinear constraints: find x* ∈ X such that

$$ (x - x^*)^T f(x^*) \ge 0, \quad \forall x \in X, \qquad (12) $$

where X = {x ∈ R^{10} | h(x) ≤ 0, x ≥ 0} and

$$ h(x) = \begin{pmatrix} 3(x_1-2)^2 + 4(x_2-3)^2 + 2x_3^2 - 7x_4 - 120 \\ 5x_1^2 + 8x_2 + (x_3-6)^2 - 2x_4 - 40 \\ (x_1-8)^2/2 + 2(x_2-4)^2 + 3x_5^2 - x_6 - 30 \\ x_1^2 + 2(x_2-2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \\ 4x_1 + 5x_2 - 3x_7 + 9x_8 - 105 \\ 10x_1 - 8x_2 - 17x_7 + 2x_8 \\ 12(x_9-8)^2 - 3x_1 + 6x_2 - 7x_{10} \\ -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \end{pmatrix}. $$

This problem has an optimal solution given in [27]:

$$ x^* = [2.172, 2.364, 8.774, 5.096, 0.991, 1.431, 1.321, 9.829, 8.280, 8.376]^T. $$

By the Kuhn-Tucker condition [1] we see that x* solves (12) if and only if there exists y* ∈ R^8 such that u* = (x*, y*) ∈ R^{18} solves the GNCP, where F(u) = u and

$$ G(u) = \begin{pmatrix} f(x) + \nabla h(x)^T y \\ -h(x) \end{pmatrix}. $$

The general projection neural network in (1) is thus applied to solve the GNCP, and its state equation is

$$ \frac{du}{dt} = \Lambda\{(G(u) - u)^+ - G(u)\}, \qquad (13) $$

where (u)^+ = [(u_1)^+, ..., (u_{18})^+]^T and (u_i)^+ = max{0, u_i} for i = 1, ..., 18. All simulation results show that the trajectory of (13) is always globally convergent to u* = (x*, y*). For example, let Λ = 2I and let the initial point be zero. The following solution to the GNCP is obtained at time t = 6:

$$ \hat{x} = [2.178, 2.368, 8.743, 5.09, 0.991, 1.430, 1.322, 9.823, 8.286, 8.371]^T. $$

Fig. 4 shows the transient behavior of x(t) with a random initial point.

Fig. 4. Global convergence of the proposed neural network in (13) for solving the GNCP in Example 2.
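The sketch below shows one way to simulate (13). The constraint function h is the one recovered above; the mapping f is not legible in the source, so a hypothetical stand-in (the gradient of a classic ten-variable test objective) is assumed here, and ∇h(x)^T y is formed by central finite differences. The step size and horizon are arbitrary choices.

```python
import numpy as np

def h(x):
    """Constraint function h(x) <= 0 of Example 2 (eight components, from the text)."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 = x
    return np.array([
        3*(x1 - 2)**2 + 4*(x2 - 3)**2 + 2*x3**2 - 7*x4 - 120,
        5*x1**2 + 8*x2 + (x3 - 6)**2 - 2*x4 - 40,
        (x1 - 8)**2 / 2 + 2*(x2 - 4)**2 + 3*x5**2 - x6 - 30,
        x1**2 + 2*(x2 - 2)**2 - 2*x1*x2 + 14*x5 - 6*x6,
        4*x1 + 5*x2 - 3*x7 + 9*x8 - 105,
        10*x1 - 8*x2 - 17*x7 + 2*x8,
        12*(x9 - 8)**2 - 3*x1 + 6*x2 - 7*x10,
        -8*x1 + 2*x2 + 5*x9 - 2*x10 - 12,
    ])

def f(x):
    """HYPOTHETICAL stand-in for f (illegible in the source): the gradient of a
    classic ten-variable test objective; the example's actual f may differ."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 = x
    return np.array([2*x1 + x2 - 14, 2*x2 + x1 - 16, 2*(x3 - 10), 8*(x4 - 5),
                     2*(x5 - 3), 4*(x6 - 1), 10*x7, 14*(x8 - 11),
                     4*(x9 - 10), 2*(x10 - 7)])

def grad_h_T_y(x, y, eps=1e-6):
    """Central-difference gradient of g(x) = y^T h(x), i.e. (grad h(x))^T y."""
    g = np.empty(10)
    for i in range(10):
        e = np.zeros(10); e[i] = eps
        g[i] = (y @ h(x + e) - y @ h(x - e)) / (2 * eps)
    return g

def G(u):
    x, y = u[:10], u[10:]
    return np.concatenate([f(x) + grad_h_T_y(x, y), -h(x)])

# State equation (13): du/dt = Lambda {(G(u) - u)^+ - G(u)}, with Lambda = 2I.
u, dt = np.zeros(18), 5e-5
for _ in range(120000):                  # integrate up to t = 6, as in the text
    u += dt * 2.0 * (np.maximum(G(u) - u, 0.0) - G(u))
print("x =", np.round(u[:10], 3))        # compare with the solution quoted above
```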

Example 3. Consider the nonlinear complementarity problem (NCP): find u ∈ R^3 such that



$$ u \ge 0, \quad F(u) \ge 0, \quad u^T F(u) = 0, $$

where

$$ F(u) = \begin{pmatrix} 2u_1 e^{u_1^2 + (u_2-1)^2} + u_1 - u_2 - u_3 + 1 \\ 2(u_2 - 1) e^{u_1^2 + (u_2-1)^2} - u_1 + 2u_2 + 2u_3 + 3 \\ -u_1 + 2u_2 + 3u_3 \end{pmatrix}. $$

This problem has only one solution u* = [0, 0.167, 0]^T with F(u*) = [0.833, 0, 0.334]^T. According to [19], u* is a solution to the above NCP if and only if u* satisfies the following equation:

$$ P_X(u - F(u)) = u, \qquad (15) $$

where X = R^3_+, P_X(u) = [(u_1)^+, ..., (u_3)^+]^T, and (u_i)^+ = max{0, u_i} for i = 1, 2, 3. It can be seen that F(u) is strongly monotone on R^3_+. We use the projection neural network in (2) to solve the above NCP. All simulation results show that the corresponding neural network in (2) is always globally exponentially stable at u*. For example, Fig. 5 displays the exponential stability of the trajectory of (2) with 20 random initial points, where Λ = 4I.

Fig. 5. Global exponential stability of the projection neural network in (2) for solving the NCP in Example 3.
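As a quick check of Example 3, here is a hedged sketch that integrates the projection neural network (2) on X = R^3_+ with Λ = 4I; the step size and horizon are arbitrary choices.

```python
import numpy as np

def F(u):
    """The NCP mapping of Example 3, as recovered from the text."""
    u1, u2, u3 = u
    e = np.exp(u1**2 + (u2 - 1.0)**2)
    return np.array([
        2.0*u1*e + u1 - u2 - u3 + 1.0,
        2.0*(u2 - 1.0)*e - u1 + 2.0*u2 + 2.0*u3 + 3.0,
        -u1 + 2.0*u2 + 3.0*u3,
    ])

# Network (2) on X = R^3_+: du/dt = Lambda {(u - F(u))^+ - u}, Lambda = 4I.
rng = np.random.default_rng(1)
u, dt = rng.uniform(0.0, 1.0, 3), 1e-4
for _ in range(100000):
    u += dt * 4.0 * (np.maximum(u - F(u), 0.0) - u)

print("u    ~", np.round(u, 3))      # expected approx [0, 0.167, 0]
print("F(u) ~", np.round(F(u), 3))   # expected approx [0.833, 0, 0.334]
```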

Example 4. Consider the general linear-quadratic optimization problem (GLQP) [29]:

$$ \min_{x \ge 0} \max_{y \ge 0} \; f(x, y) = q^T x - p^T y + \frac{1}{2} x^T A x - x^T H y - \frac{1}{2} y^T Q y \qquad (16) $$

subject to 0 ≤ Bx ≤ b and 0 ≤ Cy ≤ d, where A ∈ R^{3×3}, H ∈ R^{3×4}, Q ∈ R^{4×4}, B ∈ R^{3×3}, C ∈ R^{4×4}, q, b ∈ R^3, and p, d ∈ R^4 are given problem data. This problem has an optimal solution (x*, y*) = (0.5, 1.5, 1, 0, 0, 0). According to the well-known saddle point theorem [1], it can be seen that the above GLQP can be converted into a general linear variational inequality (GLVI): find z* ∈ X such that

$$ (z - Nz^*)^T (Mz^* + \bar{q}) \ge 0, \quad \forall z \in X, $$

where z = (x, y, s, w) ∈ R^3 × R^4 × R^3 × R^4, z* = (x*, y*, s*, w*)^T,

$$ X = \{(x, y, s, w) \in R^{14} \mid x \ge 0, \; y \ge 0, \; 0 \le s \le b, \; 0 \le w \le d\}, $$

and the block matrices M and N and the vector q̄ are assembled from the problem data together with the identity matrices I_3 ∈ R^{3×3} and I_4 ∈ R^{4×4}; in particular, the first two block rows of M are (A, −H, B^T, 0) and (H^T, Q, 0, C^T). The neural network in (10) is applied to solve the above GLVI, and it becomes

$$ \frac{dz}{dt} = \lambda (M^T + N^T)\{P_X(Nz - Mz - \bar{q}) - Nz\}. \qquad (17) $$

All simulation results show that the trajectory of (17) is always globally convergent to z*. For example, let λ = 2 and let the initial point be zero. The following solution to the GLVI is obtained at time t = 5:

$$ (\hat{x}, \hat{y}) = (0.499, 1.504, 1.004, 0.012, 0.012, 0, 0). $$

Fig. 6 shows the transient behavior of (x(t), y(t)) with a random initial point.

Fig. 6. Global convergence of (x(t), y(t)) based on the neural network in (17) in Example 4.

V. CONCLUSION

Optimization has found wide applications in various areas such as control and signal processing.




In this paper, we have proposed a general projection neural network, which has a simple structure and low complexity for implementation. The general projection neural network includes existing neural networks for optimization, such as the primal-dual neural network, the dual neural network, and the projection neural network, as special cases. Moreover, its equilibrium points are able to solve a wide variety of optimization and related problems. Under mild conditions, we have shown that the general projection neural network has global convergence, global asymptotic stability, and global exponential stability, respectively. Since the general projection neural network contains the existing neural networks in (2)-(5) as special cases, the obtained stability results naturally generalize the existing ones for those special cases. Furthermore, we have obtained several improved stability results on two special cases of the general projection neural network under weaker conditions. The obtained results are helpful for wide applications of the general projection neural network. Illustrative examples with applications to optimization and related problems show that the proposed neural network is effective in solving these problems. Further investigations will be aimed at the improvement of the stability conditions and at engineering applications of the general projection neural network to robot motion control, signal processing, etc.

ACKNOWLEDGMENT

This work was supported by the Hong Kong Research Grants Council under Grant CUHK4174/00E.

REFERENCES

[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms (2nd ed.), John Wiley, New York, 1993.

[2] T. Yoshikawa, Foundations of Robotics: Analysis and Control, MIT Press, Cambridge, MA, 1990.

[3] N. Kalouptsidis, Signal Processing Systems: Theory and Design, Wiley, New York, 1997.

[4] B. Kosko, Neural Networks for Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1992.

[5] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141-152, 1985.

[6] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533-541, 1986.

[7] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554-562, 1988.

[8] A. Rodríguez-Vázquez, R. Domínguez-Castro, A. Rueda, J. L. Huertas, and E. Sánchez-Sinencio, "Nonlinear switched-capacitor 'neural' networks for optimization problems," IEEE Transactions on Circuits and Systems, vol. 37, no. 3, pp. 384-397, 1990.

[9] S. Zhang, X. Zhu, and L.-H. Zou, "Second-order neural networks for constrained optimization," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 1021-1024, 1992.

[10] A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293-304, 1993.

[11] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, England, 1993.

[12] Y. Xia, "A new neural network for solving linear programming problems and its applications," IEEE Transactions on Neural Networks, vol. 7, pp. 525-529, 1996.

[13] Y. Xia, "A new neural network for solving linear and quadratic programming problems," IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 1544-1547, 1996.

[14] Y. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Transactions on Neural Networks, vol. 9, pp. 1331-1343, 1998.

[15] Y. Xia and J. Wang, "A dual neural network for kinematic control of redundant robot manipulators," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 31, no. 1, pp. 147-154, 2001.

[16] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems, Part I, vol. 49, no. 4, pp. 447-458, 2002.

[17] C. Y. Maa and M. A. Shanblatt, "Linear and quadratic programming neural network analysis," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 580-594, 1992.

[18] W. E. Lillo, M. H. Loh, S. Hui, and S. H. Zak, "On solving constrained optimization problems with neural networks: a penalty method approach," IEEE Transactions on Neural Networks, vol. 4, no. 6, pp. 931-939, 1993.

[19] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.

[20] J.-S. Pang and J.-C. Yao, "On a generalization of a normal map and equation," SIAM Journal on Control and Optimization, vol. 33, pp. 168-184, 1995.

[21] M. Fukushima, "Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems," Mathematical Programming, vol. 53, pp. 99-110, 1992.

[22] T. L. Friesz, D. H. Bernstein, N. J. Mehta, R. L. Tobin, and S. Ganjlizadeh, "Day-to-day dynamic network disequilibria and idealized traveler information systems," Operations Research, vol. 42, pp. 1120-1136, 1994.

[23] M. C. Ferris and J. S. Pang, "Engineering and economic applications of complementarity problems," SIAM Review, vol. 39, pp. 669-713, 1997.

[24] L. Vandenberghe, B. L. De Moor, and J. Vandewalle, "The generalized linear complementarity problem applied to the complete analysis of resistive piecewise-linear circuits," IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1382-1391, 1989.

[25] R. K. Miller and A. N. Michel, Ordinary Differential Equations, Academic Press, New York, 1980.

[26] J. M. Ortega and W. G. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[27] C. Charalambous, "Nonlinear least pth optimization and nonlinear programming," Mathematical Programming, vol. 12, pp. 195-225, 1977.

[28] R. Andreani, A. Friedlander, and S. A. Santos, "On the resolution of the generalized nonlinear complementarity problem," SIAM Journal on Optimization, vol. 12, no. 2, pp. 303-321, 2001.

[29] R. T. Rockafellar and R. J.-B. Wets, "Linear-quadratic programming and optimal control," SIAM Journal on Control and Optimization, vol. 25, pp. 781-814, 1987.
