
Int. J. Mathematics in Operational Research, Vol. 6, No. 3, 2014

Second order duality for non-differentiable minimax programming problems involving generalised α-type-I functions

Anurag Jayswal* and Krishna Kummari


Department of Applied Mathematics,
Indian School of Mines,
Dhanbad 826 004, India
E-mail: anurag_jais123@yahoo.com
E-mail: krishna.maths@gmail.com
*Corresponding author

Abstract: In this paper, we focus our study on a non-differentiable minimax programming problem and its two types of second order dual models. We establish weak, strong and strict converse duality theorems under the assumptions of generalised second-order α-type I functions.

Keywords: non-differentiable minimax programming; second order duality; α-invexity; type I functions; Schwarz inequality.

Reference to this paper should be made as follows: Jayswal, A. and Kummari, K. (2014) ‘Second order duality for non-differentiable minimax programming problems involving generalised α-type-I functions’, Int. J. Mathematics in Operational Research, Vol. 6, No. 3, pp.393–406.

Biographical notes: Anurag Jayswal is an Assistant Professor in the Department of Applied Mathematics, Indian School of Mines, Dhanbad, India. He obtained his MA and PhD degrees from Banaras Hindu University, Varanasi, India. He has published many papers in the area of mathematical programming. His research interests include interval-valued programming and generalised convexity.

Krishna Kummari is a research scholar in the Department of Applied Mathematics, Indian School of Mines, Dhanbad, India. He received his MSc degree from Pondicherry University, Puducherry, India. He is pursuing his PhD degree under the guidance of Anurag Jayswal.

1 Introduction

The notion of convexity plays a dominant role in mathematical programming. Many generalisations of convex functions proposed in recent years have proved to be useful for extending optimality conditions and duality results for convex programs to larger classes of optimisation problems. A significant generalisation of convex functions is that of invex functions, introduced by Hanson (1981). Hanson's initial results inspired a great deal of subsequent work, which has greatly expanded the role and applications of invexity in non-linear optimisation and other branches of pure and applied sciences.




Noor (2004) introduced a new class of α-invex functions by relaxing the definition of an invex function, and studied some properties of α-preinvex functions and their differentials.

The concept of type I functions was introduced by Hanson and Mond (1982) as a generalisation of convexity. Subsequently, Rueda and Hanson (1988) defined pseudo-type I and quasi-type I functions and obtained sufficient optimality conditions involving these functions. Aghezzaf and Hachimi (2000) introduced new classes of generalised type I vector-valued functions and obtained certain duality results for multiobjective programming problems. Recently, Jayswal (2010) introduced new classes of generalised α-univex type I vector-valued functions and obtained sufficient optimality conditions and Mond-Weir type duality results for multiobjective programming problems.

The study of second order duality is significant due to its computational advantage over first order duality, as it provides tighter bounds for the value of the objective function when approximations are used. One of the generalisations of Hanson's definition of an invex function is the second-order invexity notion introduced by Bector and Bector (1986). Mangasarian (1975) first introduced the concept of second-order duality for non-linear programming problems. By introducing an additional vector $p \in R^n$, he formulated the second-order dual and established duality theorems. Mond (1974) reproved second-order duality results involving simpler assumptions than those previously given in Mangasarian (1975). Ahmad and Sharma (2007) were concerned with second-order duality for a class of non-differentiable multiobjective programming problems; the usual duality theorems were proved for two types of dual models under generalised convexity assumptions. In order to generalise the notion of convexity to second and higher order, and to extend the validity of results to larger classes of optimisation problems, various attempts have been made.
Recently, several authors have been interested in optimality conditions and duality results for minimax programming problems. Liu (1999) discussed second-order duality results for differentiable minimax programming problems. Very recently, Sharma and Gulati (2012) established weak, strong and strict converse duality results for two second order duals of a minimax fractional programming problem under the assumption of generalised α-type I univexity, which is a second order extension of the α-type I assumptions of Jayswal (2010). Hu et al. (2012) formulated a second-order dual model for a non-differentiable minimax fractional programming problem and proved weak, strong and strict converse duality theorems under generalised η-bonvexity assumptions. Ahmad et al. (2008) considered two types of second-order dual models for a non-differentiable minimax programming problem and established duality results involving generalised (F, α, ρ, d)-type I functions.

In this paper, we propose a new class of generalised type I functions, called second-order α-type I functions, and establish weak, strong and strict converse duality theorems for two types of dual models associated with a non-differentiable minimax programming problem under the assumptions of second-order generalised α-type I functions.

2 Notation and preliminaries

Throughout the paper, let $R^n$ denote the $n$-dimensional Euclidean space and $R^n_+$ its non-negative orthant. Also, let $X$ be a non-empty subset of $R^n$. First, we recall the following definition.

Definition 2.1 (Noor, 2004): A subset $X$ is said to be an α-invex set if there exist $\eta : X \times X \to R^n$ and $\alpha : X \times X \to R_+$ such that
$$\bar{x} + k\,\alpha(x, \bar{x})\,\eta(x, \bar{x}) \in X, \quad \forall\, x, \bar{x} \in X,\ k \in [0, 1].$$

Remark 2.1:

1 if $\eta(x, \bar{x}) = x - \bar{x}$ and $0 < \alpha(x, \bar{x}) < 1$, then the set $X$ is called a star-shaped set

2 if $\eta(x, \bar{x}) = x - \bar{x}$ and $\alpha(x, \bar{x}) = 1$, then the set $X$ is called a convex set

3 if $\alpha(x, \bar{x}) = 1$, then the set $X$ is called an invex (η-connected) set.

It is well known that an α-invex set need not be a convex set; see Noor (2004).
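To illustrate this (an example we add here; it is not taken from Noor (2004)), the disconnected set $X = [-2, -1] \cup [1, 2]$ is α-invex but not convex: take $\alpha(x, \bar{x}) \equiv 1$ and
$$\eta(x, \bar{x}) = \begin{cases} x - \bar{x}, & \text{if } x \text{ and } \bar{x} \text{ lie in the same component of } X, \\ 0, & \text{otherwise.} \end{cases}$$
Then $\bar{x} + k\,\alpha(x, \bar{x})\,\eta(x, \bar{x})$ remains in $X$ for every $k \in [0, 1]$, while the segment joining $-1$ and $1$ leaves $X$.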
From now onward we assume that the set X is a non-empty α-invex set with respect
to α(.,.) and η(.,.) unless otherwise specified.
Let $f : X \times Y \to R$ and $g : X \to R^m$ be twice differentiable functions. Let $B$ be an $n \times n$ positive semidefinite symmetric matrix. Suppose that $Y$, an α-invex set, is a compact subset of $R^l$. Consider the following non-differentiable minimax programming problem:
$$\text{(P)} \qquad \text{Minimise } \psi(x) = \sup_{y \in Y} f(x, y) + \left( x^T B x \right)^{1/2}$$
$$\text{subject to } g(x) \le 0, \quad x \in X.$$

Let $S = \{x \in X : g(x) \le 0\}$ be the set of all feasible solutions of (P). Let $M = \{1, 2, \ldots, m\}$. For each $(x, y) \in S \times Y$, we define:
$$J(x) = \left\{ j \in M : g_j(x) = 0 \right\},$$
$$Y(x) = \left\{ y \in Y : f(x, y) + \left( x^T B x \right)^{1/2} = \sup_{z \in Y} f(x, z) + \left( x^T B x \right)^{1/2} \right\},$$
and
$$K(x) = \Big\{ (s, t, y) \in N \times R^s_+ \times R^{ls} : 1 \le s \le n + 1,\ t = (t_1, t_2, \ldots, t_s) \in R^s_+ \text{ with } \sum_{i=1}^{s} t_i = 1,\ y = (y_1, y_2, \ldots, y_s) \text{ with } y_i \in Y(x),\ i = 1, 2, \ldots, s \Big\}.$$

We shall need the following generalised Schwarz inequality in our discussion:
$$x^T B w \le \left( x^T B x \right)^{1/2} \left( w^T B w \right)^{1/2} \quad \text{for all } x, w \in R^n. \tag{1}$$
The equality holds when $Bx = \lambda B w$ for some $\lambda \ge 0$. Evidently, if $\left( w^T B w \right)^{1/2} \le 1$, we have
$$x^T B w \le \left( x^T B x \right)^{1/2}. \tag{2}$$
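A remark we add for readability: the dual constraints $u^T B u \le 1$ in (4) and (9) below are exactly what is needed to apply (2) with $w = u$, namely
$$x^T B u \le \left( x^T B x \right)^{1/2} \left( u^T B u \right)^{1/2} \le \left( x^T B x \right)^{1/2};$$
this is the step invoked as ‘By (2) and (4)’ and ‘By (2) and (9)’ in the weak duality proofs.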

Let $\phi : X \times Y \to R$ and $g_j : X \to R$, $j \in M$, be twice differentiable functions. Now, we introduce the following definitions.

Definition 2.2: For each $j \in M$, $(\phi, g_j)$ is said to be second-order α-type I at $\bar{x} \in X$, if there exist $\alpha$ and $\eta$ such that for all $x \in S$, $y_i \in Y(x)$ and $p \in R^n$, we have:
$$\phi(x, y_i) - \phi(\bar{x}, y_i) + \frac{1}{2} p^T \nabla^2 \phi(\bar{x}, y_i) p \ge \left\langle \alpha(x, \bar{x}) \left\{ \nabla \phi(\bar{x}, y_i) + \nabla^2 \phi(\bar{x}, y_i) p \right\},\ \eta(x, \bar{x}) \right\rangle, \quad i = 1, 2, \ldots, s,$$
$$-g_j(\bar{x}) + \frac{1}{2} p^T \nabla^2 g_j(\bar{x}) p \ge \left\langle \alpha(x, \bar{x}) \left\{ \nabla g_j(\bar{x}) + \nabla^2 g_j(\bar{x}) p \right\},\ \eta(x, \bar{x}) \right\rangle, \quad j = 1, 2, \ldots, m.$$
In the above definition, if the inequalities appear as strict inequalities, then we say that $(\phi, g_j)$ is second-order strict α-type I at $\bar{x} \in X$.
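As a sanity check (our remark, not part of the source), setting $p = 0$ in Definition 2.2 removes every second-order term and recovers the first-order α-type I condition used in Jayswal (2010); taking in addition $\alpha(x, \bar{x}) \equiv 1$ gives the classical type I condition of Hanson and Mond (1982):
$$\phi(x, y_i) - \phi(\bar{x}, y_i) \ge \left\langle \nabla \phi(\bar{x}, y_i),\ \eta(x, \bar{x}) \right\rangle, \qquad -g_j(\bar{x}) \ge \left\langle \nabla g_j(\bar{x}),\ \eta(x, \bar{x}) \right\rangle.$$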
Definition 2.3: For each $j \in M$, $(\phi, g_j)$ is said to be second-order pseudoquasi α-type I at $\bar{x} \in X$, if there exist $\alpha$ and $\eta$ such that for all $x \in S$, $y_i \in Y(x)$ and $p \in R^n$, we have:
$$\phi(x, y_i) - \phi(\bar{x}, y_i) + \frac{1}{2} p^T \nabla^2 \phi(\bar{x}, y_i) p < 0 \;\Longrightarrow\; \left\langle \alpha(x, \bar{x}) \left\{ \nabla \phi(\bar{x}, y_i) + \nabla^2 \phi(\bar{x}, y_i) p \right\},\ \eta(x, \bar{x}) \right\rangle < 0, \quad i = 1, 2, \ldots, s,$$
$$-g_j(\bar{x}) + \frac{1}{2} p^T \nabla^2 g_j(\bar{x}) p \le 0 \;\Longrightarrow\; \left\langle \alpha(x, \bar{x}) \left\{ \nabla g_j(\bar{x}) + \nabla^2 g_j(\bar{x}) p \right\},\ \eta(x, \bar{x}) \right\rangle \le 0, \quad j = 1, 2, \ldots, m.$$
In the above definition, if
$$\left\langle \alpha(x, \bar{x}) \left\{ \nabla \phi(\bar{x}, y_i) + \nabla^2 \phi(\bar{x}, y_i) p \right\},\ \eta(x, \bar{x}) \right\rangle \ge 0 \;\Longrightarrow\; \phi(x, y_i) - \phi(\bar{x}, y_i) + \frac{1}{2} p^T \nabla^2 \phi(\bar{x}, y_i) p > 0, \quad i = 1, 2, \ldots, s,$$
then we say that $(\phi, g_j)$ is second-order strictly pseudoquasi α-type I at $\bar{x} \in X$.

The following result is a special case of Theorem 3.1 in Lai et al. (1999) and will be needed in the sequel.
Theorem 2.1 (necessary conditions): If $x^*$ is a solution (local or global) of problem (P) satisfying $x^{*T} B x^* > 0$, and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent, then there exist $(s, t, y) \in K(x^*)$, $u \in R^n$ and $\mu \in R^m_+$ such that:
$$\nabla \sum_{i=1}^{s} t_i f(x^*, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(x^*) = 0,$$
$$\sum_{j=1}^{m} \mu_j g_j(x^*) = 0,$$
$$t_i \ge 0, \quad i = 1, 2, \ldots, s, \qquad \sum_{i=1}^{s} t_i = 1,$$
$$\left( x^{*T} B x^* \right)^{1/2} = x^{*T} B u,$$
$$u^T B u \le 1.$$
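A remark we add to connect this with the duality models below: if one sets $p = 0$, the first condition of Theorem 2.1 is exactly the dual constraint (3) (and (7)) evaluated at $z = x^*$, since the second-order terms vanish:
$$\nabla \sum_{i=1}^{s} t_i f(x^*, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(x^*) + \underbrace{\nabla^2 \sum_{i=1}^{s} t_i f(x^*, y_i)\, p + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(x^*)\, p}_{=\,0 \text{ when } p = 0} = 0,$$
and $u^T B u \le 1$ is the constraint (4) (and (9)). This is the key step in verifying feasibility of $(x^*, u^*, \mu^*, p^* = 0)$ in the strong duality Theorems 3.2 and 4.2.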

3 First duality model

In this section, we consider the following dual to (P) (Ahmad et al., 2008):
$$\text{(WD)} \qquad \max_{(s, t, y) \in K(z)} \; \sup_{(z, u, \mu, p) \in H_1(s, t, y)} \; \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p,$$
where $H_1(s, t, y)$ denotes the set of all $(z, u, \mu, p) \in R^n \times R^n \times R^m_+ \times R^n$ satisfying:
$$\nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p = 0, \tag{3}$$
$$u^T B u \le 1. \tag{4}$$
If, for a triplet $(s, t, y) \in K(z)$, the set $H_1(s, t, y) = \emptyset$, then we define the supremum over it to be $-\infty$.

Theorem 3.1 (weak duality): Let $x$ and $(z, u, \mu, s, t, y, p)$ be feasible solutions of (P) and (WD), respectively. Assume that $\left[ f(\cdot, y_i) + (\cdot)^T B u,\ g_j(\cdot) \right]$, $i = 1, 2, \ldots, s$, $j = 1, 2, \ldots, m$, is second-order α-type I at $z$. Then:
$$\sup_{y \in Y} f(x, y) + \left( x^T B x \right)^{1/2} \ge \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p.$$

Proof: Suppose, contrary to the result, that:
$$\sup_{y \in Y} f(x, y) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p.$$
Thus, we have:
$$f(x, y_i) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p,$$
for all $y_i \in Y(x)$, $i = 1, 2, \ldots, s$.
It follows, since $t_i \ge 0$, $i = 1, 2, \ldots, s$, that:
$$t_i \left[ \left\{ f(x, y_i) + \left( x^T B x \right)^{1/2} \right\} - \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p \right\} \right] \le 0, \quad i = 1, 2, \ldots, s,$$

with at least one strict inequality, since $t = (t_1, t_2, \ldots, t_s) \ne 0$. Taking summation over $i$ and using $\sum_{i=1}^{s} t_i = 1$, $t_i \ge 0$, we have:
$$\sum_{i=1}^{s} t_i f(x, y_i) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p.$$
By (2) and (4), we obtain:
$$\sum_{i=1}^{s} t_i f(x, y_i) + x^T B u < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right\} p. \tag{5}$$

From the second-order α-type I assumption on $\left[ f(\cdot, y_i) + (\cdot)^T B u,\ g_j(\cdot) \right]$, $i = 1, 2, \ldots, s$, $j = 1, 2, \ldots, m$, at $z$, we have:
$$f(x, y_i) + x^T B u - f(z, y_i) - z^T B u + \frac{1}{2} p^T \nabla^2 f(z, y_i) p \ge \left\langle \alpha(x, z) \left\{ \nabla f(z, y_i) + Bu + \nabla^2 f(z, y_i) p \right\},\ \eta(x, z) \right\rangle, \quad i = 1, 2, \ldots, s,$$
$$-g_j(z) + \frac{1}{2} p^T \nabla^2 g_j(z) p \ge \left\langle \alpha(x, z) \left\{ \nabla g_j(z) + \nabla^2 g_j(z) p \right\},\ \eta(x, z) \right\rangle, \quad j = 1, 2, \ldots, m.$$

Multiplying the first inequality by $t_i \ge 0$, $i = 1, 2, \ldots, s$, and the second by $\mu_j \ge 0$, $j = 1, 2, \ldots, m$, with $\sum_{i=1}^{s} t_i = 1$, we get:
$$\sum_{i=1}^{s} t_i f(x, y_i) + x^T B u - \sum_{i=1}^{s} t_i f(z, y_i) - z^T B u + \frac{1}{2} p^T \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p \ge \left\langle \alpha(x, z) \left\{ \nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p \right\},\ \eta(x, z) \right\rangle,$$
$$-\sum_{j=1}^{m} \mu_j g_j(z) + \frac{1}{2} p^T \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p \ge \left\langle \alpha(x, z) \left\{ \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p \right\},\ \eta(x, z) \right\rangle.$$

The above inequalities together yield:


$$\begin{aligned} &\sum_{i=1}^{s} t_i f(x, y_i) + x^T B u - \sum_{i=1}^{s} t_i f(z, y_i) - z^T B u - \sum_{j=1}^{m} \mu_j g_j(z) + \frac{1}{2} p^T \nabla^2 \left( \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \right) p \\ &\quad \ge \left\langle \alpha(x, z) \left\{ \nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p \right\},\ \eta(x, z) \right\rangle. \end{aligned}$$

Since $\alpha(x, z) > 0$, the above inequality together with (5) yields:
$$\left\langle \nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p,\ \eta(x, z) \right\rangle < 0,$$
which contradicts the dual constraint (3), since $\left\langle 0, \eta(x, z) \right\rangle = 0$. This completes the proof. □

Theorem 3.2 (strong duality): Assume that $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then there exist $(s^*, t^*, y^*) \in K(x^*)$ and $(x^*, u^*, \mu^*, p^* = 0) \in H_1(s^*, t^*, y^*)$ such that $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ is a feasible solution of (WD) and the two objectives have the same values. Further, if the hypothesis of the weak duality Theorem 3.1 holds for all feasible solutions $(z, u, \mu, s, t, y, p)$ of (WD), then $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ is an optimal solution of (WD).

Proof: Since $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent, by Theorem 2.1 there exist $(s^*, t^*, y^*) \in K(x^*)$ and $(x^*, u^*, \mu^*, p^* = 0) \in H_1(s^*, t^*, y^*)$ such that $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ is a feasible solution of (WD) and the two objectives have the same values. Optimality of $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ for (WD) thus follows from the weak duality Theorem 3.1. □

Theorem 3.3 (strict converse duality): Let $x^*$ and $(z^*, u^*, \mu^*, s^*, t^*, y^*, p^*)$ be optimal solutions of (P) and (WD), respectively. Suppose that $\left[ f(\cdot, y_i^*) + (\cdot)^T B u^*,\ g_j(\cdot) \right]$, $i = 1, 2, \ldots, s^*$, $j = 1, 2, \ldots, m$, is second-order strict α-type I at $z^*$, and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then $z^* = x^*$; that is, $z^*$ is an optimal solution of (P).

Proof: Suppose, contrary to the result, that $z^* \ne x^*$. Since $x^*$ and $(z^*, u^*, \mu^*, s^*, t^*, y^*, p^*)$ are optimal solutions of (P) and (WD), respectively, and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent, from the strong duality Theorem 3.2 we obtain:
$$\sup_{y \in Y} f(x^*, y) + \left( x^{*T} B x^* \right)^{1/2} = \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + z^{*T} B u^* + \sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2} p^{*T} \nabla^2 \left\{ \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + \sum_{j=1}^{m} \mu_j^* g_j(z^*) \right\} p^*.$$
Thus, we have:
$$f(x^*, y_i^*) + \left( x^{*T} B x^* \right)^{1/2} \le \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + z^{*T} B u^* + \sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2} p^{*T} \nabla^2 \left\{ \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + \sum_{j=1}^{m} \mu_j^* g_j(z^*) \right\} p^*,$$
for all $y_i^* \in Y(x^*)$, $i = 1, 2, \ldots, s^*$.


Now proceeding as in Theorem 3.1, we get:

$$\sum_{i=1}^{s^*} t_i^* f(x^*, y_i^*) + x^{*T} B u^* \le \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + z^{*T} B u^* + \sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2} p^{*T} \nabla^2 \left\{ \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + \sum_{j=1}^{m} \mu_j^* g_j(z^*) \right\} p^*. \tag{6}$$

From the second-order strict α-type I assumption on $\left[ f(\cdot, y_i^*) + (\cdot)^T B u^*,\ g_j(\cdot) \right]$, $i = 1, 2, \ldots, s^*$, $j = 1, 2, \ldots, m$, at $z^*$, we have:
$$f(x^*, y_i^*) + x^{*T} B u^* - f(z^*, y_i^*) - z^{*T} B u^* + \frac{1}{2} p^{*T} \nabla^2 f(z^*, y_i^*) p^* > \left\langle \alpha(x^*, z^*) \left\{ \nabla f(z^*, y_i^*) + B u^* + \nabla^2 f(z^*, y_i^*) p^* \right\},\ \eta(x^*, z^*) \right\rangle, \quad i = 1, 2, \ldots, s^*,$$
$$-g_j(z^*) + \frac{1}{2} p^{*T} \nabla^2 g_j(z^*) p^* > \left\langle \alpha(x^*, z^*) \left\{ \nabla g_j(z^*) + \nabla^2 g_j(z^*) p^* \right\},\ \eta(x^*, z^*) \right\rangle, \quad j = 1, 2, \ldots, m.$$
Multiplying the first inequality by $t_i^* \ge 0$, $i = 1, 2, \ldots, s^*$, and the second by $\mu_j^* \ge 0$, $j = 1, 2, \ldots, m$, respectively, with $\sum_{i=1}^{s^*} t_i^* = 1$, we get:
$$\sum_{i=1}^{s^*} t_i^* f(x^*, y_i^*) + x^{*T} B u^* - \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) - z^{*T} B u^* + \frac{1}{2} p^{*T} \nabla^2 \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*)\, p^* > \left\langle \alpha(x^*, z^*) \left\{ \nabla \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + B u^* + \nabla^2 \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*)\, p^* \right\},\ \eta(x^*, z^*) \right\rangle,$$
$$-\sum_{j=1}^{m} \mu_j^* g_j(z^*) + \frac{1}{2} p^{*T} \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^* > \left\langle \alpha(x^*, z^*) \left\{ \nabla \sum_{j=1}^{m} \mu_j^* g_j(z^*) + \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^* \right\},\ \eta(x^*, z^*) \right\rangle.$$

The above inequalities together yield:



$$\begin{aligned} &\sum_{i=1}^{s^*} t_i^* f(x^*, y_i^*) + x^{*T} B u^* - \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) - z^{*T} B u^* + \frac{1}{2} p^{*T} \nabla^2 \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*)\, p^* - \sum_{j=1}^{m} \mu_j^* g_j(z^*) + \frac{1}{2} p^{*T} \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^* \\ &\quad > \left\langle \alpha(x^*, z^*) \left\{ \nabla \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + B u^* + \nabla^2 \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*)\, p^* + \nabla \sum_{j=1}^{m} \mu_j^* g_j(z^*) + \nabla^2 \sum_{j=1}^{m} \mu_j^* g_j(z^*)\, p^* \right\},\ \eta(x^*, z^*) \right\rangle. \end{aligned}$$

Since $\alpha(x^*, z^*) > 0$, the above inequality together with (3) yields:
$$\sum_{i=1}^{s^*} t_i^* f(x^*, y_i^*) + x^{*T} B u^* > \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + z^{*T} B u^* + \sum_{j=1}^{m} \mu_j^* g_j(z^*) - \frac{1}{2} p^{*T} \nabla^2 \left( \sum_{i=1}^{s^*} t_i^* f(z^*, y_i^*) + \sum_{j=1}^{m} \mu_j^* g_j(z^*) \right) p^*,$$
which contradicts (6). Hence, $z^* = x^*$. This completes the proof. □

4 Second duality model

In this section, we consider the following second dual to (P):
$$\text{(MD)} \qquad \max_{(s, t, y) \in K(z)} \; \sup_{(z, u, \mu, p) \in H_2(s, t, y)} \; \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p,$$
where $H_2(s, t, y)$ denotes the set of all $(z, u, \mu, p) \in R^n \times R^n \times R^m_+ \times R^n$ satisfying:
$$\nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p = 0, \tag{7}$$
$$\sum_{j \in J_\beta} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \sum_{j \in J_\beta} \mu_j g_j(z)\, p \ge 0, \quad \beta = 1, 2, \ldots, r, \tag{8}$$
$$u^T B u \le 1, \tag{9}$$
where $J_\beta \subseteq M$, $\beta = 0, 1, 2, \ldots, r$, with $\bigcup_{\beta = 0}^{r} J_\beta = M$ and $J_\beta \cap J_\gamma = \emptyset$ if $\beta \ne \gamma$. If, for a triplet $(s, t, y) \in K(z)$, the set $H_2(s, t, y) = \emptyset$, then we define the supremum over it to be $-\infty$.
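A remark we add for orientation (not in the source): the partition $\{J_\beta\}$ only redistributes the terms $\mu_j g_j(z)$ between the objective and the constraints (8). In particular, taking $J_0 = M$ (so that the constraints (8) are vacuous) reduces (MD) to (WD), since then
$$\sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \sum_{j \in J_0} \mu_j g_j(z)\, p = \sum_{j=1}^{m} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p.$$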



Theorem 4.1 (weak duality): Let $x$ and $(z, u, \mu, s, t, y, p)$ be feasible solutions of (P) and (MD), respectively. Suppose that $\left[ \sum_{i=1}^{s} t_i f(\cdot, y_i) + (\cdot)^T B u + \sum_{j \in J_0} \mu_j g_j(\cdot),\ \sum_{j \in J_\beta} \mu_j g_j(\cdot) \right]$, $\beta = 1, 2, \ldots, r$, is second-order pseudoquasi α-type I at $z$. Then:
$$\sup_{y \in Y} f(x, y) + \left( x^T B x \right)^{1/2} \ge \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p.$$

Proof: Suppose, contrary to the result, that:
$$\sup_{y \in Y} f(x, y) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p.$$
Thus, we have:
$$f(x, y_i) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p,$$
for all $y_i \in Y(x)$, $i = 1, 2, \ldots, s$.


It follows, since $t_i \ge 0$, $i = 1, 2, \ldots, s$, that:
$$t_i \left[ \left\{ f(x, y_i) + \left( x^T B x \right)^{1/2} \right\} - \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p \right\} \right] \le 0, \quad i = 1, 2, \ldots, s,$$

with at least one strict inequality, since $t = (t_1, t_2, \ldots, t_s) \ne 0$. Taking summation over $i$ and using $\sum_{i=1}^{s} t_i = 1$, we have:
$$\sum_{i=1}^{s} t_i f(x, y_i) + \left( x^T B x \right)^{1/2} < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p.$$

By (2) and (9), we obtain:
$$\sum_{i=1}^{s} t_i f(x, y_i) + x^T B u < \sum_{i=1}^{s} t_i f(z, y_i) + z^T B u + \sum_{j \in J_0} \mu_j g_j(z) - \frac{1}{2} p^T \nabla^2 \left\{ \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j \in J_0} \mu_j g_j(z) \right\} p. \tag{10}$$
Also, from (8), we have:
$$-\sum_{j \in J_\beta} \mu_j g_j(z) + \frac{1}{2} p^T \nabla^2 \sum_{j \in J_\beta} \mu_j g_j(z)\, p \le 0, \quad \beta = 1, 2, \ldots, r. \tag{11}$$

From the inequalities (10) and (11) and the second-order pseudoquasi α-type I assumption on $\left[ \sum_{i=1}^{s} t_i f(\cdot, y_i) + (\cdot)^T B u + \sum_{j \in J_0} \mu_j g_j(\cdot),\ \sum_{j \in J_\beta} \mu_j g_j(\cdot) \right]$, $\beta = 1, 2, \ldots, r$, at $z$, we have:
$$\left\langle \alpha(x, z) \left\{ \nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla \sum_{j \in J_0} \mu_j g_j(z) + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla^2 \sum_{j \in J_0} \mu_j g_j(z)\, p \right\},\ \eta(x, z) \right\rangle < 0,$$
$$\left\langle \alpha(x, z) \left\{ \nabla \sum_{j \in J_\beta} \mu_j g_j(z) + \nabla^2 \sum_{j \in J_\beta} \mu_j g_j(z)\, p \right\},\ \eta(x, z) \right\rangle \le 0, \quad \beta = 1, 2, \ldots, r.$$

Since $\alpha(x, z) > 0$, the above inequalities together yield:
$$\left\langle \nabla \sum_{i=1}^{s} t_i f(z, y_i) + Bu + \nabla \sum_{j=1}^{m} \mu_j g_j(z) + \nabla^2 \sum_{i=1}^{s} t_i f(z, y_i)\, p + \nabla^2 \sum_{j=1}^{m} \mu_j g_j(z)\, p,\ \eta(x, z) \right\rangle < 0,$$
which contradicts the dual constraint (7), since $\left\langle 0, \eta(x, z) \right\rangle = 0$. This completes the proof. □

The proofs of the following theorems are similar to those of Theorems 3.2 and 3.3, respectively, and are hence omitted.

Theorem 4.2 (strong duality): Assume that $x^*$ is an optimal solution of (P) and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then there exist $(s^*, t^*, y^*) \in K(x^*)$ and $(x^*, u^*, \mu^*, p^* = 0) \in H_2(s^*, t^*, y^*)$ such that $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ is a feasible solution of (MD) and the two objectives have the same values. Further, if the hypothesis of the weak duality Theorem 4.1 holds for all feasible solutions $(z, u, \mu, s, t, y, p)$ of (MD), then $(x^*, u^*, \mu^*, s^*, t^*, y^*, p^* = 0)$ is an optimal solution of (MD).

Theorem 4.3 (strict converse duality): Let $x^*$ and $(z^*, u^*, \mu^*, s^*, t^*, y^*, p^*)$ be optimal solutions of (P) and (MD), respectively. Suppose that $\left[ \sum_{i=1}^{s^*} t_i^* f(\cdot, y_i^*) + (\cdot)^T B u^* + \sum_{j \in J_0} \mu_j^* g_j(\cdot),\ \sum_{j \in J_\beta} \mu_j^* g_j(\cdot) \right]$, $\beta = 1, 2, \ldots, r$, is second-order strictly pseudoquasi α-type I at $z^*$ and $\nabla g_j(x^*)$, $j \in J(x^*)$, are linearly independent. Then $z^* = x^*$; that is, $z^*$ is an optimal solution of (P).

5 Special cases

1 if we set $B = 0$ and $J_0 = \emptyset$ in (MD), then we get another dual obtained in Bector et al. (1991)

2 let $B = 0$; then (P) and (MD) reduce to the primal and dual problems of Liu (1999)

3 let $B = 0$ and $p = 0$; then (P) and (WD) become the problems proposed by Tanimoto (1981), as sketched below.
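As an illustration of item 3 (our sketch of the reduction, under exactly those substitutions), setting $B = 0$ and $p = 0$ in (WD) removes the term $z^T B u$, the second-order correction and the constraint (4), leaving the first-order dual
$$\max_{(s, t, y) \in K(z)} \; \sup \; \sum_{i=1}^{s} t_i f(z, y_i) + \sum_{j=1}^{m} \mu_j g_j(z) \quad \text{subject to} \quad \nabla \sum_{i=1}^{s} t_i f(z, y_i) + \nabla \sum_{j=1}^{m} \mu_j g_j(z) = 0, \quad \mu \in R^m_+,$$
which is of the form considered by Tanimoto (1981).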

6 Concluding remarks and future work

In this paper, we have discussed duality results for non-differentiable minimax programming problems under the assumptions of generalised second-order α-type I functions. It will be interesting to see whether or not the duality results developed in this paper still hold for the following complex minimax programming problem:
$$\text{(CP)} \qquad \text{Minimise } f(\xi) = \sup_{\nu \in W} \operatorname{Re} \left[ \phi(\xi, \nu) + \left( z^H B z \right)^{1/2} \right],$$
$$\text{subject to } \xi \in S^0 = \left\{ \xi \in C^{2n} : -g(\xi) \in S \right\},$$

where $\xi = (z, \bar{z})$, $\nu = (w, \bar{w})$ for $z \in C^n$, $w \in C^l$. Here $\phi(\cdot, \cdot) : C^{2n} \times C^{2l} \to C$ is analytic with respect to $\xi$, $W$ is a specified compact subset of $C^{2l}$, $S$ is a polyhedral cone in $C^m$ and $g : C^{2n} \to C^m$ is analytic. Also, $B \in C^{n \times n}$ is a positive semi-definite Hermitian matrix. This will orient the future research of the authors.

Acknowledgements

The authors are thankful to the anonymous referee for his or her valuable suggestions,
which have substantially improved the presentation of the paper.
The research of the first author is financially supported by the University Grants Commission, New Delhi, India through Grant No. F. No. 41-801/2012(SR).

References
Aghezzaf, B. and Hachimi, M. (2000) ‘Generalized invexity and duality in multiobjective
programming problems’, Journal of Global Optimization, Vol. 18, No. 1, pp.91–101.
Ahmad, I. and Sharma, S. (2007) ‘Second-order duality for nondifferentiable multiobjective
programming problems’, Numerical Functional Analysis and Optimization, Vol. 28,
Nos. 9–10, pp.975–988.
Ahmad, I., Hussain, Z. and Sharma, S. (2008) ‘Second order duality in nondifferentiable
minmax programming involving type-I functions’, Journal of Computational and Applied
Mathematics, Vol. 215, No. 1, pp.91–102.
Bector, C.R. and Bector, B.K. (1986) ‘On various duality theorems for second-order duality in
nonlinear programming’, Cahiers du Centre d’Études de Recherche Opérationnelle, Vol. 28,
No. 4, pp.283–292.
Bector, C.R., Chandra, S. and Hussain, I. (1991) ‘Second order duality for a minimax programming problem’, Opsearch, Vol. 28, pp.249–263.
Hanson, M.A. (1981) ‘On sufficiency of the Kuhn-Tucker conditions’, Journal of Mathematical
Analysis and Applications, Vol. 80, No. 2, pp.545–550.
Hanson, M.A. and Mond, B. (1982) ‘Further generalizations of convexity in mathematical
programming’, Journal of Information and Optimization Sciences, Vol. 3, pp.25–32.
Hu, Q., Chen, Y. and Jian, J. (2012) ‘Second-order duality for non-differentiable minimax
fractional programming’, International Journal of Computer Mathematics, Vol. 89, No. 1,
pp.11–16.
Jayswal, A. (2010) ‘On sufficiency and duality in multiobjective programming problem under
generalized α-type I univexity’, Journal of Global Optimization, Vol. 46, No. 2, pp.207–216.
Lai, H.C., Liu, J.C. and Tanaka, K. (1999) ‘Necessary and sufficient conditions for minimax
fractional programming’, Journal of Mathematical Analysis and Applications, Vol. 230, No. 2,
pp.311–328.
Liu, J.C. (1999) ‘Second order duality for minimax programming’, Utilitas Mathematica, Vol. 56,
No. 4, pp.53–63.
Mangasarian, O.L. (1975) ‘Second and higher order duality in nonlinear programming’, Journal of
Mathematical Analysis and Applications, Vol. 51, No. 3, pp.607–620.
Mond, B. (1974) ‘Second order duality for nonlinear programs’, Opsearch, Vol. 11, Nos. 2–3,
pp.90–99.
Noor, M.A. (2004) ‘On generalized preinvex functions and monotonicities’, Journal of Inequalities
in Pure and Applied Mathematics, Vol. 5, No. 4, pp.1–9.
Rueda, N.G. and Hanson, M.A. (1988) ‘Optimality criteria in mathematical programming involving
generalized invexity’, Journal of Mathematical Analysis and Applications, Vol. 130, No. 2,
pp.375–385.
Sharma, S. and Gulati, T.R. (2012) ‘Second order duality in minmax fractional programming with
generalized univexity’, Journal of Global Optimization, Vol. 52, No. 1, pp.161–169.
Tanimoto, S. (1981) ‘Duality for a class of nondifferentiable mathematical programming
problems’, Journal of Mathematical Analysis and Applications, Vol. 79, No. 2, pp.286–294.
