On constraint qualifications in nonlinear programming



Nonlinear Analysis 69 (2008) 3249–3258
www.elsevier.com/locate/na


Marco Castellani

University of L’Aquila, Department S.I.E., 67100 L’Aquila, Italy

Received 19 June 2007; accepted 13 September 2007

Abstract

In this paper we give conditions for deriving the inconsistency of an inequality system of positively homogeneous functions starting from the inconsistency of another one. When the impossibility of the starting system represents a necessary optimality condition for an inequality constrained extremum problem and the positively homogeneous functions involved have suitable properties of convexity, such conditions collapse into the well known constraint qualifications.
© 2008 Published by Elsevier Ltd

MSC: 90C26; 90C30

Keywords: Inequality system; Positively homogeneous function; Constraint qualification; Inequality constrained extremum problem

1. Introduction

We consider the following inequality constrained extremum problem:

min{ f(x) : gi(x) ≤ 0, i ∈ I },   (1)

where f, gi : X −→ R, X is a Banach space and I = {1, . . . , m} is a finite index set. If f and gi are differentiable and x0 is a local optimal solution for (1), the well known John necessary optimality condition affirms that there exist θ, λi ≥ 0, not all zero, such that

0 = θ∇ f(x0) + ∑_{i∈I(x0)} λi∇gi(x0),

where I(x0) = {i ∈ I : gi(x0) = 0}. A constraint qualification is any assumption on the problem (1) which allows one to deduce θ ≠ 0, obtaining the Kuhn–Tucker necessary condition

0 = ∇ f(x0) + ∑_{i∈I(x0)} λi∇gi(x0).

Such a necessary condition can also be generalized for much larger classes of functions (convex, Lipschitz, epi-Lipschitz and so on) by means of suitable directional derivatives with “nice” structures of convexity (see for instance [3,4,7,10–14,16]). A number of calculus rules can be derived for these derivatives and moreover a large amount of generalizations of the well known constraint qualifications are achieved. The aim of this paper is to develop general constraint qualifications for the problem (1) avoiding any convexity hypothesis on the directional derivatives. These constraint qualifications collapse into the classical ones when the directional derivatives are sublinear functions or differences of sublinear functions. Moreover, by using the concept of the pointwise minimum of sublinear functions introduced in [5], we establish a necessary optimality condition of Kuhn–Tucker type for a suitable extremum problem.

E-mail address: [email protected].

0362-546X/$ - see front matter © 2008 Published by Elsevier Ltd. doi:10.1016/j.na.2007.09.014
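As a concrete illustration of the Kuhn–Tucker condition above, the following sketch (a toy instance of ours, not taken from the paper) checks it numerically: minimize f(x) = x1 + x2 over the unit disk g(x) = x1² + x2² − 1 ≤ 0. The origin is a Slater point, a classical constraint qualification, so θ can be normalized to 1.

```python
import math

# Toy instance of problem (1): minimize f(x) = x1 + x2 subject to
# g(x) = x1^2 + x2^2 - 1 <= 0.  The origin is a Slater point, so a
# classical constraint qualification holds and theta can be taken as 1.

def grad_f(x):
    return (1.0, 1.0)

def grad_g(x):
    return (2.0 * x[0], 2.0 * x[1])

# Known minimizer: the unit vector opposite to grad f.
x0 = (-1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))

# Kuhn-Tucker condition: 0 = grad f(x0) + lam * grad g(x0).
# Solve for lam from the first component and check the full residual.
gf, gg = grad_f(x0), grad_g(x0)
lam = -gf[0] / gg[0]
residual = max(abs(gf[i] + lam * gg[i]) for i in range(2))

print(lam)       # 1/sqrt(2) > 0
print(residual)  # stationarity residual, numerically zero
```

The multiplier comes out positive and the stationarity residual vanishes, as the Kuhn–Tucker condition predicts when the constraint is active.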

In what follows X is a real Banach space with norm ‖ · ‖, X∗ is the topological dual space of X endowed with the weak∗ topology and ⟨·, ·⟩ denotes the canonical pairing of X∗ and X. For a set A, we denote the closure, the convex hull and the conical hull of A by cl A, co A and cone A respectively. If A is a subset of X∗, the support function associated with A is

σ(x, A) = sup{⟨x∗, x⟩ : x∗ ∈ A}.

For a function ϕ : X −→ R ∪ {+∞} we denote the domain of ϕ by

dom ϕ = {x ∈ X : ϕ(x) < +∞}

and ϕ is said to be proper if dom ϕ ≠ ∅. Moreover, if ϕ is a proper positively homogeneous function, we define the recession function associated with ϕ by

ϕ∞(x) = sup{ϕ(x + y) − ϕ(y) : y ∈ dom ϕ}.
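The supremum defining ϕ∞ can be probed numerically on a small example of our own choosing: for the positively homogeneous, non-sublinear function ϕ(x) = min(x, 0) on R one gets ϕ∞(x) = max(x, 0). A sketch, replacing the supremum over dom ϕ = R by a maximum over a finite grid:

```python
# Numerical probe of the recession function phi_inf(x) = sup{phi(x+y) - phi(y)}
# for the positively homogeneous (but not sublinear) phi(x) = min(x, 0) on R;
# analytically phi_inf(x) = max(x, 0).

def phi(x):
    return min(x, 0.0)

def phi_inf(x, grid):
    # finite-grid surrogate for the supremum over y in dom phi = R
    return max(phi(x + y) - phi(y) for y in grid)

grid = [k / 10.0 for k in range(-1000, 1001)]
results = [(x, phi_inf(x, grid), max(x, 0.0)) for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
for x, approx, exact in results:
    print(x, approx, exact)
```

On this grid the surrogate matches max(x, 0) exactly; one also sees that ϕ∞ dominates ϕ and is sublinear even though ϕ is not.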

The recession function plays a fundamental role in the definition of constraint qualifications. For this reason we recall its main properties. First of all, it is easy to show that the epigraph of ϕ∞ is the recession cone of the epigraph of ϕ, i.e.

epi ϕ∞ = (epi ϕ)∞ = {(x, y) ∈ X × R : (x, y) + epi ϕ ⊆ epi ϕ};

therefore ϕ∞ is sublinear and ϕ∞(0) = 0. Moreover ϕ∞(x) ≥ ϕ(x) for all x ∈ dom ϕ and, clearly, if ϕ(0) = 0, the inequality holds for all x ∈ X. Nevertheless, if ϕ is also sublinear then ϕ∞ = ϕ. Referring to the problem (1), if x ∈ X is a feasible point, we denote by I(x) = {i ∈ I : gi(x) = 0} the index set of the active constraints; moreover we denote by

gmax(x) = max{gi (x) : i ∈ I }

the pointwise maximum of the constraints gi. We conclude with the main directional derivatives that we will use in the sequel. Let ϕ : X −→ R ∪ {+∞} be finite at x ∈ X and let v ∈ X be a fixed direction; the upper and the lower Dini derivatives are

D+ϕ(x, v) = lim sup_{t→0+} [ϕ(x + tv) − ϕ(x)]/t

and

D−ϕ(x, v) = lim inf_{t→0+} [ϕ(x + tv) − ϕ(x)]/t;

if ϕ is locally Lipschitz, the Clarke directional derivative is

ϕ◦(x, v) = lim sup_{(x′,t)→(x,0+)} [ϕ(x′ + tv) − ϕ(x′)]/t.

It is well known that ϕ◦(x, ·) is a continuous sublinear function and therefore there exists a nonempty convex compact set ∂◦ϕ(x), called the Clarke subdifferential, such that ϕ◦(x, v) = σ(v, ∂◦ϕ(x)). In the examples we use the bold notation x = (x1, . . . , xn) in order to denote vectors of Rn.
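The difference between the upper and the lower Dini derivative can be seen numerically on an oscillating example of our own (not from the paper): for ϕ(x) = x sin(1/x), ϕ(0) = 0, the difference quotient at x = 0 in the direction v = 1 equals sin(1/t), so D+ϕ(0, 1) = 1 while D−ϕ(0, 1) = −1.

```python
import math

# Upper vs. lower Dini derivative of phi(x) = x*sin(1/x) (phi(0) = 0)
# at x = 0 in the direction v = 1: the quotient sin(1/t) oscillates,
# so the lim sup is +1 while the lim inf is -1.

def phi(x):
    return x * math.sin(1.0 / x) if x != 0.0 else 0.0

x, v = 0.0, 1.0
ts = [1.0 / n for n in range(1, 2001)]  # sample sequence t -> 0+
quotients = [(phi(x + t * v) - phi(x)) / t for t in ts]

D_plus = max(quotients)   # finite-sample surrogate of the lim sup
D_minus = min(quotients)  # finite-sample surrogate of the lim inf
print(D_plus, D_minus)    # close to +1 and -1 respectively
```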


2. Regularity for inequality systems

It is easy to show that, if gi are u.s.c. at x0 for all i ∉ I(x0), then a necessary optimality condition for (1) can be expressed by means of the impossibility of the system

D− f(x0, v) < 0
D+gi(x0, v) < 0, i ∈ I(x0).   (2)

In the last few years many other directional derivatives have been introduced and it has been proved (see for instance [1,14]) that Abadie type necessary optimality conditions can be expressed via the impossibility of systems of positively homogeneous functions

D f(x0, v) < 0
Di gi(x0, v) < 0, i ∈ I(x0),   (3)

where D f(x0, ·) and Di gi(x0, ·) are suitable directional derivatives associated with f and gi respectively. Moreover, if D f(x0, ·) and Di gi(x0, ·) are sublinear functions (or differences of sublinear functions or, more generally, pointwise minima of sublinear functions) it is possible to prove, via a generalization of the Gordan Theorem of the alternative [2], that the impossibility of (3) is equivalent to a generalized John necessary optimality condition. Any constraint qualification permits one to deduce a generalized Kuhn–Tucker optimality condition from the John one, and moreover, via a generalization of the Farkas Lemma [5,6], this condition is equivalent to the impossibility of the system

D f(x0, v) < 0
Di gi(x0, v) ≤ 0, i ∈ I(x0).   (4)

The following general scheme summarizes the previous observations: let D f(x0, ·) and Di gi(x0, ·) be “nice” directional derivatives with a “nice” structure of convexity; then, roughly speaking, a first-order necessary optimality condition can be expressed by

Impossibility of system (3)  ⟷  Generalization of John condition   (Gordan Theorem)
        ↓ constraint qualification
Impossibility of system (4)  ⟷  Generalization of Kuhn–Tucker condition   (Farkas Lemma)

From the scheme we observe that a constraint qualification can be interpreted as any assumption which permits one to deduce the impossibility of (4) from the impossibility of (3). In this section we exploit this point of view and therefore we are interested in studying the connection between the impossibility of the system

F(x) < 0
Gi(x) < 0, i ∈ I,   (5)

and that of the system

F(x) < 0
Gi(x) ≤ 0, i ∈ I,   (6)

where F, Gi : X −→ R ∪ {+∞} are proper positively homogeneous functions, with i ∈ I = {1, . . . , m} and ⋂_{i∈I} dom Gi ≠ ∅. No assumptions of convexity, or its generalizations, are required. Consider the following four conditions:

⋂_{i∈I} {x ∈ X : G∞i(x) < 0} ≠ ∅,   (7)

{x ∈ X : G∞max(x) < 0} ≠ ∅,   (8)


∀x ∈ X : Gmax(x) = 0, ∃v ∈ X : D−Gmax(x, v) < 0, (9)

cl{x ∈ X : Gmax(x) < 0} = {x ∈ X : Gmax(x) ≤ 0}. (10)

Theorem 2.1. The following relations hold:

(7) ⇒ (8) ⇒ (9).

Moreover, if Gmax is l.s.c., then (9) implies (10).

Proof. The first implication is trivial since G∞i(x) ≥ G∞max(x). For the second one, let x̄ ∈ X be such that G∞max(x̄) = −L < 0 and let x ∈ X be such that Gmax(x) = 0. Then, for each t > 0,

−Lt = G∞max(t x̄) ≥ Gmax(x + t x̄) − Gmax(x).

Therefore D−Gmax(x, x̄) ≤ −L < 0. For the last implication, from the lower semicontinuity we have

cl{x ∈ X : Gmax(x) < 0} ⊆ {x ∈ X : Gmax(x) ≤ 0}.

Vice versa, if Gmax(x) = 0, there exists v ∈ X such that D−Gmax(x, v) < 0 and then for each δ > 0 there exists t ∈ (0, δ) such that

0 > [Gmax(x + tv) − Gmax(x)]/t = Gmax(x + tv)/t;

this completes the proof. �

The converses of the implications in Theorem 2.1 do not hold in general.

Example 2.1. (8) ⇏ (7). Consider G1, G2 : R −→ R defined by G1(x) = x and G2(x) = −|x|; then G∞1(x) = G∞max(x) = x and G∞2(x) = |x|. Therefore (8) holds for x < 0 but (7) is not satisfied.

(9) ⇏ (8). Consider G : R −→ R defined by G(x) = −|x|. Then G(x) = 0 for x = 0 and (9) is satisfied for v ≠ 0; nevertheless, since G∞(x) = |x|, (8) does not hold.

(10) ⇏ (9). Consider G : R2 −→ R defined by

G(x) = −x2²/x1, if 0 < |x2| ≤ x1;   x1, if x1 ≤ 0 and x2 = 0;   |x2|, otherwise;

then G is l.s.c. and

{x ∈ R2 : G(x) < 0} = {x ∈ R2 : x1 < 0, x2 = 0} ∪ {x ∈ R2 : 0 < |x2| ≤ x1}

and hence (10) holds. Vice versa, choosing x = (1, 0) we get D−G(x, v) = 0 for all v ∈ R2 and (9) is not verified.

(9) ⇏ (10) if G is not l.s.c. Consider G : R2 −→ R defined by

G(x) = 0, if x1 ≥ 0 and x2 = 0;   −x1, if x1 < 0 and x2 = 0;   −√(x1² + x2²), otherwise;

then cl{x ∈ R2 : G(x) < 0} = R2 and

{x ∈ R2 : G(x) ≤ 0} = R2 \ {x ∈ R2 : x1 < 0, x2 = 0};

therefore (10) does not hold. But G(x1, 0) = 0 if and only if x1 ≥ 0, and

D−G((x1, 0), (0, 1)) = lim inf_{t→0+} −√(x1² + t²)/t = −1, if x1 = 0;  −∞, if x1 > 0;

thus (9) is satisfied.
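The (9) ⇏ (8) part of the example can be double-checked numerically (a sketch of ours, with the supremum and the lim inf replaced by finite samples): for G(x) = −|x| every lower Dini quotient at the origin is negative, while the grid surrogate of the recession function reproduces |x| ≥ 0.

```python
# Numerical check of the (9) but not (8) part of Example 2.1, G(x) = -|x|:
# D-G(0, v) = -|v| < 0 for v != 0 (so (9) holds at the unique zero of G),
# while the recession function G_inf(x) = |x| is never negative, so (8) fails.

def G(x):
    return -abs(x)

def lower_dini(x, v, n=2000):
    # finite-sample surrogate of the lim inf of the difference quotient
    return min((G(x + v / k) - G(x)) * k for k in range(1, n + 1))

def G_inf(x, grid):
    # finite-grid surrogate of sup{G(x + y) - G(y) : y in R}
    return max(G(x + y) - G(y) for y in grid)

grid = [k / 10.0 for k in range(-1000, 1001)]
vs = (-2.0, -1.0, 1.0, 2.0)
xs = (-1.0, -0.5, 0.5, 1.0)
dini_values = [lower_dini(0.0, v) for v in vs]
rec_values = [G_inf(x, grid) for x in xs]
print(dini_values)  # all negative: condition (9) holds
print(rec_values)   # equal to |x| >= 0: condition (8) fails
```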


Corollary 2.1. If the Gi are sublinear with Gi(0) = 0, then (7)–(9) are equivalent; moreover, if Gmax is l.s.c., then (10) is equivalent to (7).

Proof. Suppose that (9) holds; since Gmax(0) = 0 there exists v ∈ X such that

0 > D−Gmax(0, v) = Gmax(v) ≥ Gi(v) = G∞i(v), ∀i ∈ I.

The last equivalence is trivial. �

Now we prove the main result of this section.

Theorem 2.2. Let (5) be impossible;

(a) if dom F∞ ∩ {x ∈ X : G∞max(x) < 0} ≠ ∅, then (6) is impossible,
(b) if (9) holds and F is u.s.c., then (6) is impossible,
(c) if (10) holds and F is u.s.c., then (6) is impossible.

Proof. (a) Let x ∈ dom F∞ be such that G∞max(x) < 0 and, by contradiction, suppose there exists x′ ∈ X, a solution of (6). Two cases are possible. If F∞(x) ≤ 0, we have

0 ≥ F∞(x) ≥ F(x + x′) − F(x′) > F(x + x′)

and moreover, for each i ∈ I,

0 > G∞max(x) ≥ Gmax(x + x′) − Gmax(x′) = Gmax(x + x′) ≥ Gi(x + x′).

Therefore x + x′ is a solution of (5) and we achieve a contradiction. Instead, if F∞(x) = M > 0 and F(x′) = −L < 0, we put x̄ = (L/2M)x and we have

L/2 = F∞(x̄) ≥ F(x̄ + x′) − F(x′) = F(x̄ + x′) + L;

then F(x̄ + x′) ≤ −L/2 < 0. Moreover, for each i ∈ I,

0 > G∞max(x̄) ≥ Gmax(x̄ + x′) − Gmax(x′) = Gmax(x̄ + x′) ≥ Gi(x̄ + x′).

Therefore x̄ + x′ is a solution of (5) and we achieve a contradiction.

(b) By contradiction, let x ∈ X be a solution of (6); then Gmax(x) = 0 and hence there exists v ∈ X such that D−Gmax(x, v) < 0. Then, for each δ > 0, there exists t ∈ (0, δ) such that

Gmax(x + tv) < 0.   (11)

Since F(x) < 0 and F is u.s.c., there exists δ′ > 0 such that, for each t ∈ (0, δ′) satisfying (11), x + tv is a solution of (5), which contradicts the assumption.

(c) By contradiction, let x ∈ X be a solution of (6); then Gmax(x) = 0 and for each δ > 0 there exists x̄ with ‖x̄ − x‖ < δ such that Gmax(x̄) < 0. Since F(x) < 0 and F is u.s.c., we have F(x̄) < 0 and thus x̄ is a solution of (5), which contradicts the assumption. �

Remark 2.1. Without the assumption of upper semicontinuity of F, (b) and (c) of Theorem 2.2 do not hold. Consider F, G : R2 −→ R, defined by

F(x) = −x1, if x1 ≥ 0 and x2 = 0;   √(x1² + x2²), otherwise,

and

G(x) = 0, if x1 ≥ 0 and x2 = 0;   −√(x1² + x2²), otherwise.

Page 6: On constraint qualifications in nonlinear programming

3254 M. Castellani / Nonlinear Analysis 69 (2008) 3249–3258

The system (5) is impossible but every point of {x ∈ R2 : x1 > 0, x2 = 0} is a solution of (6). We observe that F is not u.s.c. and G satisfies (9). In fact G(x1, 0) = 0 if x1 ≥ 0 and

D−G((x1, 0), (0, 1)) = lim inf_{t→0+} −√(x1² + t²)/t = −∞, if x1 > 0;  −1, if x1 = 0.

Moreover it is immediately observed that also (10) holds.

3. The quasidifferentiable problem

Kuntz and Scholtes introduced in [9] a constraint qualification for the problem (1) when X = Rn and f, gi are quasidifferentiable functions (in the Demyanov–Rubinov sense [4]). Moreover they studied the connections with other kinds of constraint qualifications introduced in [4] and [15]. In this section we describe the main results contained in [9] as a particular case of the theory exposed in Section 2. We recall that a function ϕ : Rn −→ R is said to be quasidifferentiable at x ∈ Rn if the directional derivative

ϕ′(x, v) = lim_{t→0+} [ϕ(x + tv) − ϕ(x)]/t

is a well defined finite function of the direction v ∈ Rn and there exist two convex compact sets ∂ϕ(x) and ∂̄ϕ(x), called subdifferential and superdifferential, respectively, such that

ϕ′(x, v) = σ(v, ∂ϕ(x)) − σ(v, ∂̄ϕ(x)).

The pair (∂ϕ(x), ∂̄ϕ(x)) is called the quasidifferential. Let C be a nonempty convex compact subset of Rn and v ∈ Rn; the max-face of C in the direction v is defined by

F(v, C) = {x ∈ C : σ(v, C) = ⟨x, v⟩}.

It is easy to see [8] that F(v, C) = ∂σ(v, C) for all v ∈ Rn, where ∂ is the (Fréchet) subdifferential. We conclude by recalling that a pair (C1, C2) of nonempty convex compact subsets of Rn is in general position if there is no direction v ∈ Rn such that F(v, C2) ⊆ F(v, C1). Now we are able to recall the following constraint qualifications for (1) which were studied in [9]. Let x ∈ Rn be a feasible point:

(W) The Ward constraint qualification [15] affirms that

0 ∉ co ⋃_{i∈I(x)} ∂◦g′i(x, 0).

(KS) The Kuntz–Scholtes constraint qualification [9] is 0 ∉ ∂◦g′max(x, 0), where it is easy to show that g′max(x, v) = max{g′i(x, v) : i ∈ I(x)}.

(DR) The Demyanov–Rubinov constraint qualification [4] affirms that the pair (∂gmax(x), ∂̄gmax(x)) is in general position.

(ND) The nondegeneracy condition affirms that

cl{v ∈ Rn : g′max(x, v) < 0} = {v ∈ Rn : g′max(x, v) ≤ 0}.
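The difference-of-support-functions representation on which the conditions above rest can be illustrated with a toy quasidifferentiable function of our own: ϕ(x) = |x1| − |x2| at the origin, with the two convex compact sets taken as the segments [−1, 1] × {0} and {0} × [−1, 1] (one admissible choice; the pair is not unique).

```python
# phi(x) = |x1| - |x2| is quasidifferentiable at the origin:
# phi'(0, v) = sigma(v, A) - sigma(v, B) with A = [-1,1] x {0} and
# B = {0} x [-1,1], one admissible (non-unique) pair of convex compact sets.

def support(v, points):
    # support function of a sampled convex compact set
    return max(v[0] * p[0] + v[1] * p[1] for p in points)

A = [(k / 100.0, 0.0) for k in range(-100, 101)]  # samples of [-1,1] x {0}
B = [(0.0, k / 100.0) for k in range(-100, 101)]  # samples of {0} x [-1,1]

def dir_deriv(v):
    return support(v, A) - support(v, B)

vs = ((1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-2.0, 3.0))
results = [(v, dir_deriv(v), abs(v[0]) - abs(v[1])) for v in vs]
for v, approx, exact in results:
    print(v, approx, exact)
```

On each sampled direction the difference of support functions reproduces |v1| − |v2| exactly.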

The following theorem gives an equivalent formulation of the previous conditions (such a result was sketched also in [9]). For the sake of completeness we give a proof of it.

Theorem 3.1. Let gi be quasidifferentiable; then

(a) (W) holds if and only if there exists v ∈ Rn such that (g′i)∞(x, v) < 0 for all i ∈ I(x).
(b) (KS) holds if and only if there exists v ∈ Rn such that (g′max)∞(x, v) < 0.
(c) (DR) holds if and only if for each v ∈ Rn with g′max(x, v) = 0 there exists a direction z ∈ Rn such that g′max(x, v, z) < 0, where

g′max(x, v, z) = lim_{t→0+} [g′max(x, v + tz) − g′max(x, v)]/t

is the directional derivative of g′max(x, ·) at v in the direction z.


Proof. (a) As described in [9], we observe that if ϕ is directionally differentiable at x ∈ Rn, then the Clarke directional derivative of ϕ′(x, ·) at 0 ∈ Rn in the direction v ∈ Rn coincides with (ϕ′)∞(x, v). Since (W) is equivalent to affirming that there exists v ∈ Rn such that

(g′i)◦(x, 0, v) = lim sup_{(z,t)→(0,0+)} [g′i(x, z + tv) − g′i(x, z)]/t < 0, ∀i ∈ I(x),

the thesis holds.

(b) See (a).

(c) Let (∂gmax(x), ∂̄gmax(x)) be in general position and let v ∈ Rn be such that g′max(x, v) = 0. Therefore there exists x∗ ∈ F(v, ∂̄gmax(x)) \ F(v, ∂gmax(x)) and, from the classical separation theorem, there exist z ∈ Rn and ε > 0 such that

⟨x∗, z⟩ ≥ σ(z, F(v, ∂gmax(x))) + ε.

Moreover, as previously observed, we get

σ(z, ∂σ(v, ∂gmax(x))) − σ(z, ∂σ(v, ∂̄gmax(x))) ≤ −ε,

and thus

g′max(x, v, z) = σ′(v, z, ∂gmax(x)) − σ′(v, z, ∂̄gmax(x)) = σ(z, ∂σ(v, ∂gmax(x))) − σ(z, ∂σ(v, ∂̄gmax(x))) ≤ −ε.

For the converse, we prove that, for all v ∈ Rn, there exists z ∈ Rn such that g′max(x, v, z) < 0. If g′max(x, v) = 0, the thesis is true by hypothesis; if g′max(x, v) > 0, choosing z = −v we get

g′max(x, v, −v) = lim_{t→0+} [g′max(x, v − tv) − g′max(x, v)]/t = −g′max(x, v) < 0;

finally, if g′max(x, v) < 0, it is sufficient to choose z = v. But we have previously seen that

g′max(x, v, z) = σ(z, F(v, ∂gmax(x))) − σ(z, F(v, ∂̄gmax(x)));

then, for each v ∈ Rn there exists z ∈ Rn such that

σ(z, F(v, ∂gmax(x))) < σ(z, F(v, ∂̄gmax(x)))

and hence F(v, ∂̄gmax(x)) ⊈ F(v, ∂gmax(x)), for all v ∈ Rn. �

Collecting the results given in Theorems 2.1, 2.2 and 3.1, we achieve the following result.

Corollary 3.1. Let f and gi be quasidifferentiable for all i ∈ I(x0) and gi u.s.c. at x0 for all i ∉ I(x0); then the following relations hold:

(W) ⇒ (KS) ⇒ (DR) ⇒ (ND).

Moreover they are constraint qualifications and, if x0 is a local optimal solution, then the system

f′(x0, v) < 0
g′i(x0, v) ≤ 0, i ∈ I(x0)

is impossible.

In order to get a necessary optimality condition in dual form, we need a generalization of the Farkas Lemma for difference sublinear systems. Such a result has been presented in [6, Corollary 2.2].

Theorem 3.2. Let I be a finite index set and ∂pi(0), ∂̄pi(0), with i ∈ I ∪ {0}, be nonempty compact convex sets. Define

pi(x) = σ(x, ∂pi(0)) − σ(x, ∂̄pi(0)), i ∈ I ∪ {0};

then the following are equivalent:

(a) the system

p0(x) < 0
pi(x) ≤ 0, i ∈ I

is impossible;
(b) for each x∗i ∈ ∂̄pi(0), with i ∈ I ∪ {0},

0 ∈ ∂p0(0) − x∗0 + cl(∑_{i∈I} cone(∂pi(0) − x∗i)).

Unfortunately we cannot place the closure after the sum as the following example shows.

Example 3.1. Consider the functions pi : R3 −→ R defined as

p0(x) = x1,  p1(x) = −x3,  p2(x) = x2 + x3 + √(x1² + x2²);

then the system

p0(x) < 0
pi(x) ≤ 0, i = 1, 2

is impossible. Moreover ∂̄pi(0) = {0} for all i = 0, 1, 2 and ∂p0(0) = {(1, 0, 0)}, ∂p1(0) = {(0, 0, −1)} and ∂p2(0) = {x ∈ R3 : x1² + (x2 − 1)² ≤ 1, x3 = 1}; it is easy to see that 0 does not belong to the set

∂p0(0) + ∑_{i=1}^{2} cl cone ∂pi(0) = {x ∈ R3 : x2 > 0} ∪ {x ∈ R3 : x1 = 1, x2 = 0}.

On the contrary,

0 ∈ ∂p0(0) + cl ∑_{i=1}^{2} cone ∂pi(0) = {x ∈ R3 : x2 ≥ 0}

as stated in Theorem 3.2. For this reason the statement of Proposition 1.1 in [9] is not exactly correct.
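The closure phenomenon in Example 3.1 can be made tangible with a short computation (ours): the points w(r) below belong to ∂p0(0) + cone ∂p1(0) + cone ∂p2(0) for every r > 0, use only the data of the example, and converge to the origin, so 0 lies in the closure of the sum even though it does not lie in the sum itself.

```python
import math

# Points of the set S = subdiff p0(0) + cone subdiff p1(0) + cone subdiff p2(0)
# from Example 3.1 converging to the origin: 0 is in cl S although 0 is not in S.

def w(r):
    a = -1.0 / r                      # first coordinate of a disk point
    b = 1.0 - math.sqrt(1.0 - a * a)  # chosen so that a^2 + (b - 1)^2 = 1
    # (1,0,0) + r*(a, b, 1) + r*(0, 0, -1) is an element of S
    return (1.0 + r * a, r * b, r * 1.0 - r)

rs = (10.0, 100.0, 1000.0, 10000.0)
norms = [math.sqrt(sum(c * c for c in w(r))) for r in rs]
print(norms)  # shrinks roughly like 1/(2r)
```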

4. An application to extremum problems

This section is concerned with the application of the results obtained in Section 2 to the problem (1). In order to determine necessary optimality conditions in a dual form we recall the following definition [5].

Definition 4.1. A function ϕ : X −→ R ∪ {+∞} is said to be a pointwise minimum of sublinear functions (or MSL function) if there exist an index set T and a family {∂(ϕ, t) : t ∈ T} of nonempty closed convex sets in X∗ such that

ϕ(x) = min_{t∈T} σ(x, ∂(ϕ, t)), ∀x ∈ X.
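A minimal one-dimensional illustration of Definition 4.1 (our toy example): the positively homogeneous function ϕ(x) = −|x| on R is the pointwise minimum of the two linear (hence sublinear) functions x ↦ σ(x, {1}) = x and x ↦ σ(x, {−1}) = −x, so it is an MSL function with T = {1, 2}.

```python
# phi(x) = -|x| as a pointwise minimum of sublinear functions:
# phi(x) = min( sigma(x, {1}), sigma(x, {-1}) ) = min(x, -x).

def sigma(x, point):
    # support function of the singleton {point} in R
    return x * point

family = {1: 1.0, 2: -1.0}  # index set T = {1, 2} and the generating singletons

def phi_msl(x):
    return min(sigma(x, p) for p in family.values())

results = [(x, phi_msl(x), -abs(x)) for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
for x, value, exact in results:
    print(x, value, exact)
```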

In [2] it was shown that every positively homogeneous function vanishing at the origin is an MSL function. For this class of functions the following generalized Farkas Lemma for MSL systems was derived in [5].

Theorem 4.1. Assume that F and Gi are MSL functions with respect to the index sets T and Ti and the families of nonempty closed convex sets {∂(F, t) : t ∈ T} and {∂(Gi, ti) : ti ∈ Ti}, respectively; then the following statements are equivalent:

(a) the system (6) is impossible,
(b) for each t ∈ T and ti ∈ Ti, i ∈ I,

0 ∈ cl(∂(F, t) + cone co ⋃_{i∈I} ∂(Gi, ti)).


We introduce the following concept.

Definition 4.2. The function ϕ : X −→ R ∪ {+∞} is said to be locally exact at x ∈ dom ϕ along the direction v ∈ X if there exist t̄ > 0 and M̄ > 0 such that

ϕ(x) ≤ ϕ(x + tv) + Mt, ∀M ≥ M̄ and t ∈ (0, t̄).   (12)

If (12) holds along every direction, ϕ is said to be directionally locally exact at x.

Remark 4.1. Clearly every locally Lipschitz function is directionally locally exact but the converse does not hold, as ϕ(x) = √|x| shows.

We have observed that if x0 is a local optimal solution for the problem (1) and gi are u.s.c. at x0 for all i ∉ I(x0), then the system (2) is impossible. If f and gi, i ∈ I(x0), are directionally locally exact at x0, then D− f(x0, ·) and D+gi(x0, ·) are proper positively homogeneous functions and therefore, from [2, Theorem 1.2], they are MSL functions. On this account there exist families {∂D−t f(x0) : t ∈ T} and {∂D+ti gi(x0) : ti ∈ Ti} of nonempty closed convex sets of X∗ such that

D− f(x0, v) = min_{t∈T} σ(v, ∂D−t f(x0)),   (13)

D+gi(x0, v) = min_{ti∈Ti} σ(v, ∂D+ti gi(x0)), i ∈ I(x0).   (14)

Under these hypotheses we are able to prove the following result.

Theorem 4.2. Let x0 be a local optimal solution for (1) with gi, i ∉ I(x0), u.s.c. at x0 and f and gi, i ∈ I(x0), directionally locally exact at x0. Assume that (13) and (14) are satisfied and one of the following assumptions holds:

(a) dom (D− f)∞(x0, ·) ∩ {v ∈ X : (D+gmax)∞(x0, v) < 0} ≠ ∅,
(b) D− f(x0, ·) is u.s.c. and for each v ∈ X such that D+gmax(x0, v) = 0, there exists w ∈ X such that D−D+gmax(x0, v, w) < 0,
(c) D− f(x0, ·) is u.s.c. and

cl{v ∈ X : D+gmax(x0, v) < 0} = {v ∈ X : D+gmax(x0, v) ≤ 0};

then, for each t ∈ T and ti ∈ Ti, i ∈ I(x0),

0 ∈ cl(∂D−t f(x0) + cone co ⋃_{i∈I(x0)} ∂D+ti gi(x0)).

In particular, if (a) holds, D+gmax(x0, ·) is l.s.c. and ∂D−t f(x0) and ∂D+ti gi(x0) are compact sets, for each t ∈ T and ti ∈ Ti, i ∈ I(x0), there exist λi ≥ 0 such that

0 ∈ ∂D−t f(x0) + ∑_{i∈I(x0)} λi ∂D+ti gi(x0).

Proof. The assumption of upper semicontinuity of the nonactive constraints implies that the system (2) is impossible. If a constraint qualification holds, from Theorem 2.2 applied to F = D− f(x0, ·) and Gi = D+gi(x0, ·), we get the impossibility of the system

D− f(x0, v) < 0
D+gi(x0, v) ≤ 0, i ∈ I(x0).

Since D− f(x0, ·) and D+gi(x0, ·) are MSL functions, from Theorem 4.1 we achieve the thesis. Now let us prove the last part of the theorem. The function

ψ(v) = max{D− f(x0, v), D+gmax(x0, v)}


assumes a global minimum at v = 0 with ψ(0) = 0. For each t ∈ T and for each ti ∈ Ti, i ∈ I(x0), we get

D− f(x0, v) ≤ σ(v, ∂D−t f(x0)),

D+gmax(x0, v) ≤ (D+gmax)∞(x0, v),

D+gi(x0, v) ≤ σ(v, ∂D+ti gi(x0))

and therefore

D+gmax(x0, v) = max_{i∈I(x0)} D+gi(x0, v) ≤ max_{i∈I(x0)} σ(v, ∂D+ti gi(x0)).

From [10, Corollary 3.5] there exists a compact set C in X∗ such that

D+gmax(x0, v) ≤ σ(v, C) ≤ min{(D+gmax)∞(x0, v), max_{i∈I(x0)} σ(v, ∂D+ti gi(x0))}.

Then the sublinear function

ϕ(v) = max{σ(v, ∂D−t f(x0)), σ(v, C)} ≥ ψ(v)

assumes a minimum at the origin; hence

0 ∈ ∂ϕ(0) ⊆ co(∂D−t f(x0), C) ⊆ co(∂D−t f(x0), ∂(D+gmax)∞(x0, 0) ∩ co(⋃_{i∈I(x0)} ∂D+ti gi(x0))).

Thus there exist λ ∈ [0, 1], x∗ ∈ ∂D−t f(x0) and

z∗ ∈ ∂(D+gmax)∞(x0, 0) ∩ co(⋃_{i∈I(x0)} ∂D+ti gi(x0))

such that 0 = λx∗ + (1 − λ)z∗. Since there exists v ∈ X such that (D+gmax)∞(x0, v) < 0, then 0 ∉ ∂(D+gmax)∞(x0, 0) and hence z∗ ≠ 0; therefore λ ≠ 0 and

0 ∈ ∂D−t f(x0) + ((1 − λ)/λ) co(⋃_{i∈I(x0)} ∂D+ti gi(x0)),

which proves the statement. �

References

[1] M. Castellani, M. Pappalardo, First order cone approximations and necessary optimality conditions, Optimization 35 (1995) 113–126.
[2] M. Castellani, A dual representation for proper positively homogeneous functions and application to nonlinear programming, J. Global Optim. 16 (2000) 393–400.
[3] F.H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, 1983.
[4] V.F. Demyanov, A.M. Rubinov, Constructive Nonsmooth Analysis, Peter Lang, Frankfurt am Main, 1995.
[5] B. Glover, Y. Ishizuka, V. Jeyakumar, H.D. Tuan, Complete characterizations of global optimality for problems involving the pointwise minimum of sublinear functions, SIAM J. Optim. 6 (1996) 362–372.
[6] B. Glover, V. Jeyakumar, W. Oettli, A Farkas lemma for difference sublinear systems and quasidifferentiable programming, Math. Program. 63 (1994) 109–125.
[7] J.B. Hiriart-Urruty, On optimality conditions in nondifferentiable programming, Math. Program. 14 (1978) 73–86.
[8] J.B. Hiriart-Urruty, C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer-Verlag, Berlin, 1993.
[9] L. Kuntz, S. Scholtes, Constraint qualifications in quasidifferentiable optimization, Math. Program. 60 (1993) 339–347.
[10] R.R. Merkovsky, D.E. Ward, Upper D.S.L. approximates and nonsmooth optimization, Optimization 21 (1990) 163–177.
[11] J.P. Penot, Variations on the theme of nonsmooth analysis: Another subdifferential, in: V.F. Demyanov, D. Pallaschke (Eds.), Nondifferentiable Optimization: Motivations and Applications, in: Lecture Notes in Econom. and Math. Systems, vol. 225, Springer-Verlag, Berlin, 1985, pp. 41–54.
[12] B.N. Pshenichnyi, Necessary Conditions for an Extremum, Marcel Dekker, New York, 1971.
[13] R.T. Rockafellar, Generalized directional derivatives and subgradients of nonconvex functions, Canad. J. Math. 32 (1980) 257–280.
[14] D.E. Ward, Isotone tangent cones and nonsmooth optimization, Optimization 18 (1987) 769–783.
[15] D.E. Ward, A constraint qualification in quasidifferentiable optimization, Optimization 22 (1991) 661–668.
[16] D.E. Ward, J.M. Borwein, Nonsmooth calculus in finite dimensions, SIAM J. Control Optim. 25 (1987) 1312–1340.