
# Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization

Kwan D Bae and Do S Kim*


Department of Applied Mathematics, Pukyong National University, Busan 608-737, Korea


Fixed Point Theory and Applications 2011, 2011:42 doi:10.1186/1687-1812-2011-42

 Received: 3 March 2011 Accepted: 25 August 2011 Published: 25 August 2011

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

In this paper, we consider a class of nonsmooth multiobjective programming problems. Necessary and sufficient optimality conditions are obtained under higher-order strong convexity for Lipschitz functions. We formulate a Mond-Weir type dual problem and establish weak and strong duality theorems for a strict minimizer of order m.

##### Keywords:
Nonsmooth multiobjective programming; strict minimizers; optimality conditions; duality

### 1 Introduction

Nonlinear analysis is an important area of the mathematical sciences and has become a fundamental research tool in contemporary mathematical analysis. Many nonlinear analysis problems arise from optimization theory, game theory, differential equations, mathematical physics, convex analysis, and nonlinear functional analysis. Park [1-3] has devoted extensive study to nonlinear analysis, and his results have strongly influenced research on equilibrium, complementarity, and optimization problems. Nonsmooth phenomena occur naturally and frequently in mathematics and optimization. Rockafellar [4] has pointed out that in many practical applications of applied mathematics the functions involved are not necessarily differentiable. Thus it is important to deal with non-differentiable mathematical programming problems.

The field of multiobjective programming has grown remarkably in different directions in the setting of optimality conditions and duality theory since the 1980s. In 1983, Vial [5] studied a class of sets and functions depending on the sign of a constant ρ, provided characteristic properties of this class of sets, and related them to strongly and weakly convex sets.

Auslender [6] obtained necessary and sufficient conditions for a strict local minimizer of first and second order, supposing that the objective function f is locally Lipschitz and that the feasible set S is closed. Studniarski [7] extended Auslender's results to any extended real-valued function f and any subset S, encompassing strict minimizers of order greater than 2. Necessary and sufficient conditions for a strict minimizer of order m in nondifferentiable scalar programs were studied by Ward [8]. Based on this result, Jimenez [9] extended the notion of a strict minimum of order m from scalar optimization problems to vector optimization. Jimenez and Novo [10,11] presented first and second order sufficient conditions for strict local Pareto minima and strict local minima of first and second order in multiobjective and vector optimization problems. Subsequently, Bhatia [12] considered the notion of a strict minimizer of order m for a multiobjective optimization problem and established optimality conditions for this concept under higher-order strong convexity for Lipschitz functions.

In 2008, Kim and Bae [13] formulated nondifferentiable multiobjective programs involving the support functions of compact convex sets. Bae et al. [14] also established duality theorems for nondifferentiable multiobjective programming problems under generalized convexity assumptions.

Very recently, Kim and Lee [15] studied nonsmooth multiobjective programming problems involving locally Lipschitz functions and support functions. They introduced Karush-Kuhn-Tucker optimality conditions with support functions and established duality theorems for (weak) Pareto-optimal solutions.

In this paper, we consider nonsmooth multiobjective programming problems involving the support function of a compact convex set. In Section 2, we introduce the concept of a strict minimizer of order m and higher-order strong convexity for Lipschitz functions. In Section 3, necessary and sufficient optimality theorems are established for a strict minimizer of order m under generalized strong convexity assumptions. In Section 4, we formulate a Mond-Weir type dual problem and obtain weak and strong duality theorems for a strict minimizer of order m.

### 2 Preliminaries

Let ℝn be the n-dimensional Euclidean space and let ℝn+ be its nonnegative orthant.

Let x, y ∈ ℝn. The following notation will be used for vectors in ℝn:

x ≦ y if and only if xi ≦ yi for all i = 1, ... , n;

x ≤ y if and only if x ≦ y and x ≠ y;

x < y if and only if xi < yi for all i = 1, ... , n.

For x, u ∈ ℝ, x ≦ u and x < u have the usual meaning.

Definition 2.1 [16] Let D be a compact convex set in ℝn. The support function s(·|D) is defined by

s(x|D) := max{xT y : y ∈ D}.

The support function s(·|D) has a subdifferential. The subdifferential of s(·|D) at x is given by

∂s(x|D) := {z ∈ D : zT x = s(x|D)}.

The support function s(·|D), being convex and everywhere finite, has a subdifferential at every x, that is, there exists z ∈ D such that

s(y|D) ≧ s(x|D) + zT (y − x) for all y ∈ D.

Equivalently, z ∈ ∂s(x|D) if and only if

zT x = s(x|D).
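As a concrete illustration of Definition 2.1, the sketch below computes s(x|D) and a maximizer z ∈ ∂s(x|D) when D is a polytope given by its vertices; since the maximum of a linear function over a polytope is attained at a vertex, a vertex scan suffices. The unit square used here is a hypothetical example set, not one from the paper.

```python
# Sketch: support function of a polytope D = conv{v1, ..., vk}.
# s(x|D) = max_{y in D} x^T y is attained at a vertex of D, and any
# maximizer z satisfies z^T x = s(x|D), i.e. z lies in ∂s(x|D).

def support(x, vertices):
    """Return (s(x|D), z) for a polytope D given by its vertices."""
    values = [sum(xi * vi for xi, vi in zip(x, v)) for v in vertices]
    best = max(range(len(values)), key=lambda i: values[i])
    return values[best], vertices[best]

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
s_val, z = support((2.0, -1.0), square)
# s((2,-1)|D) = max{0, 2, -1, 1} = 2, attained at z = (1, 0)
```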

We consider the following multiobjective programming problem:

(MOP) Minimize f(x) + s(x|D) := (f1(x) + s(x|D1), ... , fp(x) + s(x|Dp))

subject to gj(x) ≦ 0, j = 1, ... , q, x ∈ X,

where f and g are locally Lipschitz functions from ℝn to ℝp and from ℝn to ℝq, respectively, and Di, for each i ∈ P := {1, 2, ... , p}, is a compact convex set in ℝn. Further, let S := {x ∈ X | gj(x) ≦ 0, j = 1, ... , q} be the feasible set of (MOP), let B(x0, ε) denote the open ball with center x0 and radius ε, and set I(x0) := {j | gj(x0) = 0, j = 1, ... , q}.

We introduce the following definitions due to Jimenez [9].

Definition 2.2 A point x0 ∈ S is called a strict local minimizer for (MOP) if there exists an ε > 0 such that there is no x ∈ B(x0, ε) ∩ S satisfying fi(x) + s(x|Di) < fi(x0) + s(x0|Di) for all i ∈ {1, 2, ... , p}.

Definition 2.3 Let m ≧ 1 be an integer. A point x0 ∈ S is called a strict local minimizer of order m for (MOP) if there exist an ε > 0 and a constant c ∈ int ℝp+ such that there is no x ∈ B(x0, ε) ∩ S satisfying fi(x) + s(x|Di) < fi(x0) + s(x0|Di) + ci ‖x − x0‖m for all i = 1, ... , p.

Definition 2.4 Let m ≧ 1 be an integer. A point x0 ∈ S is called a strict minimizer of order m for (MOP) if there exists a constant c ∈ int ℝp+ such that there is no x ∈ S satisfying fi(x) + s(x|Di) < fi(x0) + s(x0|Di) + ci ‖x − x0‖m for all i = 1, ... , p.

Definition 2.5 [16] Suppose that h: X→ℝ is Lipschitz on X. The Clarke generalized directional derivative of h at x ∈ X in the direction v ∈ ℝn, denoted by h0(x; v), is defined as

h0(x; v) := lim sup y→x, t↓0 [h(y + tv) − h(y)] / t.

Definition 2.6 [16] The Clarke generalized gradient of h at x ∈ X, denoted by ∂h(x), is defined as

∂h(x) := {ξ ∈ ℝn : h0(x; v) ≧ ξT v for all v ∈ ℝn}.
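Definition 2.5 can be explored numerically. The sketch below samples the difference quotients on a finite grid of (y, t) pairs; it is only a rough approximation of the lim sup, shown for the illustrative choice h(x) = |x| at x = 0, where h0(0; v) = |v|.

```python
# Rough finite sampling of Clarke's generalized directional derivative
#   h0(x; v) = limsup_{y -> x, t -> 0+} (h(y + t*v) - h(y)) / t.
# This grid approximation only illustrates Definition 2.5; it is not
# the exact limsup.

def clarke_dd(h, x, v, eps=1e-4, samples=50):
    best = float("-inf")
    for i in range(1, samples + 1):
        y = x + eps * (2 * i / samples - 1)   # sample points y near x
        t = eps * i / samples                 # step lengths t -> 0+
        best = max(best, (h(y + t * v) - h(y)) / t)
    return best

approx = clarke_dd(abs, 0.0, -1.0)   # expected to be close to |-1| = 1
```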

We recall the notion of strong convexity of order m introduced by Lin and Fukushima in [17].

Definition 2.7 A function h: X→ℝ is said to be strongly convex of order m if there exists a constant c > 0 such that for all x1, x2 ∈ X and t ∈ [0, 1],

h(tx1 + (1 − t)x2) ≦ t h(x1) + (1 − t) h(x2) − c t(1 − t) ‖x1 − x2‖m.

For m = 2, the function h is referred to as strongly convex in [5].
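For intuition, the inequality in Definition 2.7 can be checked numerically. The sketch below evaluates it for h(x) = x², which is strongly convex of order m = 2 with c = 1 on the real line (in fact with equality); the grid of t values is an illustrative sample, not a proof.

```python
# Numerical check (illustration only) of the inequality in Definition 2.7
# for h(x) = x^2, strongly convex of order m = 2 with c = 1 on R:
#   h(t*x1 + (1-t)*x2) <= t*h(x1) + (1-t)*h(x2) - c*t*(1-t)*|x1 - x2|**m.

def strong_convexity_gap(h, x1, x2, t, c, m):
    """Left-hand side minus right-hand side; <= 0 when the inequality holds."""
    lhs = h(t * x1 + (1 - t) * x2)
    rhs = t * h(x1) + (1 - t) * h(x2) - c * t * (1 - t) * abs(x1 - x2) ** m
    return lhs - rhs

def h(x):
    return x * x

gaps = [strong_convexity_gap(h, -2.0, 3.0, t / 10, 1.0, 2) for t in range(11)]
# for h(x) = x^2 the inequality holds with equality, so every gap is ~0
```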

Proposition 2.1 [17] If each hi, i = 1, ... , p, is strongly convex of order m on a convex set X, then Σi ti hi and max1 ≦ i ≦ p hi are also strongly convex of order m on X, where ti ≧ 0, i = 1, ... , p.

Theorem 2.1 Let X and S be nonempty convex subsets of ℝn and X, respectively. Suppose that x0 ∈ S is a strict local minimizer of order m for (MOP) and that the functions fi: X→ℝ, i = 1, ... , p, are strongly convex of order m on X. Then x0 is a strict minimizer of order m for (MOP).

Proof. Since x0 ∈ S is a strict local minimizer of order m for (MOP), there exist an ε > 0 and constants ci > 0, i = 1, ... , p, such that

that is, there exists no x ∈ B(x0, ε) ∩ S such that

If x0 is not a strict minimizer of order m for (MOP) then there exists some z S such that

(2.1)

Since S is convex, λz + (1 − λ)x0 ∈ B(x0, ε) ∩ S for sufficiently small λ ∈ (0, 1). As fi, i = 1, ... , p, are strongly convex of order m on X, we have for z, x0 ∈ S,

or

Combining this with (2.1), we have

which implies that x0 is not a strict local minimizer of order m, a contradiction. Hence, x0 is a strict minimizer of order m for (MOP). □

Motivated by the above result, we give two obvious generalizations of strong convexity of order m which will be used to derive the optimality conditions for a strict minimizer of order m.

Definition 2.8 The function h is said to be strongly pseudoconvex of order m and Lipschitz on X if there exists a constant c > 0 such that for x1, x2 ∈ X,

ξT (x1 − x2) ≧ 0 for some ξ ∈ ∂h(x2) implies h(x1) ≧ h(x2) + c ‖x1 − x2‖m.

Definition 2.9 The function h is said to be strongly quasiconvex of order m and Lipschitz on X if there exists a constant c > 0 such that for x1, x2 ∈ X,

h(x1) ≦ h(x2) implies ξT (x1 − x2) + c ‖x1 − x2‖m ≦ 0 for all ξ ∈ ∂h(x2).

We obtain the following lemma, due to Theorem 4.1 of Chankong and Haimes [18].

Lemma 2.1 x0 is an efficient point for (MOP) if and only if x0 solves

Pk(x0): Minimize fk(x) + s(x|Dk) subject to fi(x) + s(x|Di) ≦ fi(x0) + s(x0|Di), i ≠ k, x ∈ S,

for every k = 1, ... , p.
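Lemma 2.1 can be illustrated on a toy instance. The sketch below (with Di = {0}, so the support terms vanish) brute-forces the scalarized problems Pk(x0) on a grid for a hypothetical bi-objective problem in which every feasible point is efficient; the grid search only illustrates the characterization and is not a solver.

```python
# Grid-based illustration of Lemma 2.1 (Chankong-Haimes scalarization):
# f1(x) = x, f2(x) = 1 - x on S = [0, 1], where every point is efficient.
# x0 is efficient iff it solves P_k(x0) for every k:
#   minimize f_k(x) s.t. f_i(x) <= f_i(x0) for all i != k, x in S.

f = [lambda x: x, lambda x: 1 - x]
grid = [i / 100 for i in range(101)]   # brute-force stand-in for S
x0 = 0.4

def solves_Pk(k, x0):
    feasible = [x for x in grid
                if all(f[i](x) <= f[i](x0) for i in range(len(f)) if i != k)]
    return min(f[k](x) for x in feasible) >= f[k](x0)

efficient = all(solves_Pk(k, x0) for k in range(len(f)))   # expect True
```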

We introduce the following definition for (MOP) based on the idea of Chandra et al. [19].

Definition 2.10 Let x0 be a feasible solution for (MOP). We say that the basic regularity condition (BRC) is satisfied at x0 if there exists r ∈ {1, 2, ... , p} such that the only scalars λi ≧ 0, wi ∈ Di, i = 1, ... , p, i ≠ r, and μj ≧ 0, j ∈ I(x0), where I(x0) = {j | gj(x0) = 0, j = 1, ... , q}, which satisfy

0 ∈ Σi≠r λi (∂fi(x0) + wi) + Σj∈I(x0) μj ∂gj(x0)

are λi = 0 for all i = 1, ... , p, i ≠ r, and μj = 0, j = 1, ... , q.

### 3 Optimality Conditions

In this section, we establish Fritz John and Karush-Kuhn-Tucker necessary optimality conditions and Karush-Kuhn-Tucker sufficient optimality conditions for a strict minimizer of (MOP).

Theorem 3.1 (Fritz John Necessary Optimality Conditions) Suppose that x0 is a strict minimizer of order m for (MOP) and that the functions fi, i = 1, ... , p, and gj, j = 1, ... , q, are Lipschitz at x0. Then there exist λi ≧ 0, wi ∈ Di, i = 1, ... , p, and μj ≧ 0, j = 1, ... , q, not all zero, such that

0 ∈ Σi λi (∂fi(x0) + wi) + Σj μj ∂gj(x0), μj gj(x0) = 0, j = 1, ... , q, (x0)T wi = s(x0|Di), i = 1, ... , p.

Proof. Since x0 is a strict minimizer of order m for (MOP), it is a strict minimizer. It can be seen that x0 solves the following unconstrained scalar problem:

Minimize F(x), x ∈ ℝn,

where

F(x) := max{fi(x) + s(x|Di) − fi(x0) − s(x0|Di), i = 1, ... , p; gj(x), j = 1, ... , q}.

If this were not so, there would exist x1 ∈ ℝn such that F(x1) < F(x0). Since x0 is a strict minimizer of (MOP), we have gj(x0) ≦ 0 for all j = 1, ... , q. Thus F(x0) = 0, and hence F(x1) < 0. This implies that x1 is a feasible solution of (MOP), contradicting the fact that x0 is a strict minimizer of (MOP).

Since x0 minimizes F(x), it follows from Proposition 2.3.2 in Clarke [16] that 0 ∈ ∂F(x0). Using Proposition 2.3.12 of [16], it follows that

Thus,

Hence there exist λi ≧ 0, μj ≧ 0, not all zero, and wi ∈ Di such that the conditions of the theorem hold. □

Theorem 3.2 (Karush-Kuhn-Tucker Necessary Optimality Conditions) Suppose that x0 is a strict minimizer of order m for (MOP) and that the functions fi, i = 1, ... , p, and gj, j = 1, ... , q, are Lipschitz at x0. Assume that the basic regularity condition (BRC) holds at x0. Then there exist λi ≧ 0, wi ∈ Di, i = 1, ... , p, and μj ≧ 0, j = 1, ... , q, such that

0 ∈ Σi λi (∂fi(x0) + wi) + Σj μj ∂gj(x0), (3.1)

μj gj(x0) = 0, j = 1, ... , q, (3.2)

(x0)T wi = s(x0|Di), i = 1, ... , p, (3.3)

(λ1, ... , λp) ≠ (0, ... , 0). (3.4)

Proof. Since x0 is a strict minimizer of order m for (MOP), by Theorem 3.1 there exist λi ≧ 0, μj ≧ 0, not all zero, and wi ∈ Di, i = 1, ... , p, such that

Since the BRC holds at x0, suppose, to the contrary, that λi = 0 for all i = 1, ... , p. Then we have

for each k ∈ P = {1, ... , p}. By the assumptions of the basic regularity condition, we have λk = 0, k ∈ P, k ≠ i, and μj = 0, j ∈ I(x0). This contradicts the fact that λi, λk, k ∈ P, k ≠ i, and μj, j ∈ I(x0), are not all simultaneously zero. Hence (λ1, ... , λp) ≠ (0, ... , 0). □

Theorem 3.3 (Karush-Kuhn-Tucker Sufficient Optimality Conditions) Let the Karush-Kuhn-Tucker necessary optimality conditions be satisfied at x0 ∈ S. Suppose that fi(·) + (·)T wi, i = 1, ... , p, are strongly convex of order m on X and that gj(·), j ∈ I(x0), are strongly quasiconvex of order m on X. Then x0 is a strict minimizer of order m for (MOP).

Proof. As fi(·) + (·)T wi, i = 1, ... , p, are strongly convex of order m on X, there exist constants ci > 0, i = 1, ... , p, such that for all x ∈ S, ξi ∈ ∂fi(x0), and wi ∈ Di, i = 1, ... , p,

(3.5)

For λi ≧ 0, i = 1, ... , p, we obtain

(3.6)

Now, for x ∈ S,

As gj(·), j ∈ I(x0), are strongly quasiconvex of order m on X, it follows that there exist constants cj > 0 and ηj ∈ ∂gj(x0), j ∈ I(x0), such that

For μj ≧ 0, j ∈ I(x0), we obtain

As μj = 0 for j ∉ I(x0), we have

(3.7)

By (3.6), (3.7) and (3.1), we get

where a := min1 ≦ i ≦ p ci. This implies that

(3.8)

where c = ae and e = (1, ... , 1)T ∈ ℝp. It follows from (3.8) that there exists c ∈ int ℝp+ such that, for all x ∈ S,

Since (x0)T wi = s(x0|Di) and xT wi ≦ s(x|Di), i = 1, ... , p, we have

i.e.

This implies that x0 is a strict minimizer of order m for (MOP). □
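To make the Karush-Kuhn-Tucker conditions of Theorem 3.2 concrete, here is a hedged sketch in the smooth, single-objective special case with D1 = {0} (so w1 = 0 and the Clarke gradients reduce to ordinary derivatives). The problem, point, and multipliers below are a textbook-style illustration chosen by the editor, not an example from the paper.

```python
# Smooth special case: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# The minimizer is x0 = 1 with multipliers lam = 1, mu = 2.

x0 = 1.0
lam, mu = 1.0, 2.0

def df(x):            # f'(x) for f(x) = x^2
    return 2 * x

def dg(x):            # g'(x) for g(x) = 1 - x
    return -1.0

def g(x):
    return 1 - x

stationarity = lam * df(x0) + mu * dg(x0)   # analogue of (3.1): expect 0
slackness = mu * g(x0)                      # analogue of (3.2): expect 0
```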

Remark 3.1 If Di = {0}, i = 1, ... , p, then our optimality results reduce to those of Bhatia [12].

### 4 Duality Theorems

In this section, we formulate a Mond-Weir type dual problem and establish duality theorems for a strict minimizer of order m. We propose the following Mond-Weir type dual (MOD) to (MOP):

(MOD) Maximize f(u) + uT w := (f1(u) + uT w1, ... , fp(u) + uT wp)

subject to 0 ∈ Σi λi (∂fi(u) + wi) + Σj μj ∂gj(u), (4.1)

μj gj(u) ≧ 0, j = 1, ... , q, (4.2)

λi ≧ 0, i = 1, ... , p, λT e = 1, μj ≧ 0, j = 1, ... , q, wi ∈ Di, i = 1, ... , p,

where e = (1, ... , 1)T ∈ ℝp.

Theorem 4.1 (Weak Duality) Let x and (u, w, λ, μ) be feasible solutions for (MOP) and (MOD), respectively. Assume that fi(·) + (·)T wi, i = 1, ... , p, are strongly convex of order m on X and that gj(·), j ∈ I(u), where I(u) = {j | gj(u) = 0}, are strongly quasiconvex of order m on X. Then the following cannot hold:

fi(x) + s(x|Di) ≦ fi(u) + uT wi, i = 1, ... , p, with strict inequality for at least one i. (4.3)

Proof. Since x is a feasible solution for (MOP) and (u, w, λ, μ) is a feasible solution for (MOD), we have

For every j ∈ I(u), as gj, j ∈ I(u), are strongly quasiconvex of order m on X, it follows that there exist constants cj > 0 and ηj ∈ ∂gj(u), j ∈ I(u), such that

This together with μj ≧ 0, j I (u), imply

As μj = 0 for j ∉ I(u), we have

(4.4)

Now, suppose to the contrary that (4.3) holds. Since xT wi ≦ s(x|Di), i = 1, ... , p, we obtain

As fi(·) + (·)T wi, i = 1, ... , p, are strongly convex of order m on X, there exist constants ci > 0, i = 1, ... , p, such that for all x ∈ S and ξi ∈ ∂fi(u), i = 1, ... , p,

(4.5)

For λi ≧ 0, i = 1, ... , p, (4.5) yields

(4.6)

By (4.4),(4.6) and (4.1), we get

(4.7)

where a := min1 ≦ i ≦ p ci. This implies that

(4.8)

where c = ae, since λT e = 1. It follows from (4.8) that there exists c ∈ int ℝp+ such that, for all x ∈ S,

Since xT wi ≦ s(x|Di), i = 1, ... , p, and c ∈ int ℝp+, we have

which contradicts the fact that (4.3) holds. □
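The conclusion of Theorem 4.1 can be sanity-checked numerically in the same smooth special case (Di = {0}, p = 1): for a dual-feasible pair, the dual objective value should not exceed any primal-feasible objective value. The problem and the dual point below are hypothetical illustrations, not from the paper.

```python
# Weak-duality sanity check: primal min x^2 s.t. 1 - x <= 0, with a
# Mond-Weir dual point (u, mu) satisfying lam*f'(u) + mu*g'(u) = 0 and
# mu*g(u) >= 0 for lam = 1.

def f(x):
    return x * x

def g(x):
    return 1 - x

def df(x):
    return 2 * x

def dg(x):
    return -1.0

u, mu = 0.5, 1.0
dual_feasible = (df(u) + mu * dg(u) == 0) and (mu * g(u) >= 0)

primal_values = [f(1.0 + k / 10) for k in range(11)]   # feasible x >= 1
weak_duality_ok = dual_feasible and all(f(u) <= val for val in primal_values)
```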

Theorem 4.2 (Strong Duality) Suppose that x0 is a strict minimizer of order m for (MOP) and that the basic regularity condition (BRC) holds at x0. Then there exist λ0 ∈ ℝp, wi0 ∈ Di, i = 1, ... , p, and μ0 ∈ ℝq such that (x0, w0, λ0, μ0) is a feasible solution for (MOD) and the objective values of (MOP) and (MOD) at the respective points are equal. Moreover, if the assumptions of weak duality are satisfied, then (x0, w0, λ0, μ0) is a strict minimizer of order m for (MOD).

Proof. By Theorem 3.2, there exist λ0 ∈ ℝp, wi0 ∈ Di, i = 1, ... , p, and μ0 ∈ ℝq such that

Thus (x0, w0, λ0, μ0) is feasible for (MOD) and (x0)T wi0 = s(x0|Di), i = 1, ... , p. By Theorem 4.1, we obtain that the following cannot hold:

where (u, w, λ, μ) is any feasible solution of (MOD). Hence there exists c ∈ int ℝp+ such that, for all x0, u ∈ S,

Thus (x0, w0, λ0, μ0) is a strict minimizer of order m for (MOD). Hence the result holds. □

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

DSK presented necessary and sufficient optimality conditions, formulated the Mond-Weir type dual problem, and established weak and strong duality theorems for a strict minimizer of order m. KDB carried out the optimality and duality studies and drafted the manuscript. All authors read and approved the final manuscript.

### Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2010-0012780). The authors are indebted to the referee for valuable comments and suggestions which helped to improve the presentation.

### References

1. Park, S: Generalized equilibrium problems and generalized complementarity problems. Journal of Optimization Theory and Applications. 95(2), 409–417 (1997)

2. Park, S: Remarks on equilibria for g-monotone maps on generalized convex spaces. Journal of Mathematical Analysis and Applications. 269, 244–255 (2002)

3. Park, S: Generalizations of the Nash equilibrium theorem in the KKM theory. Fixed Point Theory and Applications. 2010, Article ID 234706, 23 pp (2010)

4. Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton, NJ (1970)

5. Vial, JP: Strong and weak convexity of sets and functions. Mathematics of Operations Research. 8, 231–259 (1983)

6. Auslender, A: Stability in mathematical programming with nondifferentiable data. SIAM Journal on Control and Optimization. 22, 239–254 (1984)

7. Studniarski, M: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM Journal on Control and Optimization. 24, 1044–1049 (1986)

8. Ward, DE: Characterizations of strict local minima and necessary conditions for weak sharp minima. Journal of Optimization Theory and Applications. 80, 551–571 (1994)

9. Jimenez, B: Strict efficiency in vector optimization. Journal of Mathematical Analysis and Applications. 265, 264–284 (2002)

10. Jimenez, B, Novo, V: First and second order sufficient conditions for strict minimality in multiobjective programming. Numerical Functional Analysis and Optimization. 23, 303–322 (2002)

11. Jimenez, B, Novo, V: First and second order sufficient conditions for strict minimality in nonsmooth vector optimization. Journal of Mathematical Analysis and Applications. 284, 496–510 (2003)

12. Bhatia, G: Optimality and mixed saddle point criteria in multiobjective optimization. Journal of Mathematical Analysis and Applications. 342, 135–145 (2008)

13. Kim, DS, Bae, KD: Optimality conditions and duality for a class of nondifferentiable multiobjective programming problems. Taiwanese Journal of Mathematics. 13(2B), 789–804 (2009)

14. Bae, KD, Kang, YM, Kim, DS: Efficiency and generalized convex duality for nondifferentiable multiobjective programs. Journal of Inequalities and Applications. 2010, Article ID 930457, 10 pp (2010)

15. Kim, DS, Lee, HJ: Optimality conditions and duality in nonsmooth multiobjective programs. Journal of Inequalities and Applications. 2010, Article ID 939537, 12 pp (2010)

16. Clarke, FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York (1983)

17. Lin, GH, Fukushima, M: Some exact penalty results for nonlinear programs and mathematical programs with equilibrium constraints. Journal of Optimization Theory and Applications. 118, 67–80 (2003)

18. Chankong, V, Haimes, YY: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York (1983)

19. Chandra, S, Dutta, J, Lalitha, CS: Regularity conditions and optimality in vector optimization. Numerical Functional Analysis and Optimization. 25, 479–501 (2004)