
Journal of Intelligent & Fuzzy Systems 36 (2019) 101–115

DOI:10.3233/JIFS-18040
IOS Press

A novel divergence measure and its based TOPSIS method for multi criteria decision-making under single-valued neutrosophic environment
Nancy and Harish Garg∗
School of Mathematics, Thapar Institute of Engineering & Technology (Deemed University),
Patiala, Punjab, India

Abstract. The theme of this work is to present an axiomatic definition of divergence measure for single-valued neutrosophic
sets (SVNSs). The properties of the proposed divergence measure have been studied. Further, we develop a novel technique
for order preference by similarity to ideal solution (TOPSIS) method for solving single-valued neutrosophic multi-criteria
decision-making with incomplete weight information. Finally, a numerical example is presented to verify the proposed
approach and to present its effectiveness and practicality.

Keywords: Single-valued Neutrosophic set, Divergence measure, TOPSIS, multi criteria decision-making

1. Introduction

Intuitionistic fuzzy set (IFS), developed by [1], is a useful tool characterized by a membership and a non-membership degree to describe data more accurately. Since an IFS allows two degrees of freedom in a set description, it gives us an additional possibility to represent uncertain data when solving decision-making problems. Under this environment, [2, 3] presented distance and similarity measures between IFSs. [4] developed distance measures between type-2 intuitionistic fuzzy sets. [5] developed some series of distance measures, while [6] presented IF cross-entropy for IFSs. [7] presented generalized directed divergence measures under the IFS environment. [8] extended the IFS to the complex IFS and presented a series of distance measures for it. [9] presented some new similarity measures of IFSs based on set pair analysis theory. Apart from these, several other tools, namely score functions [10–12], aggregation operators [13, 14], ranking methods [15–17], etc., under the IFS environment have gained much attention from researchers.

From this survey, it is remarked that neither fuzzy set (FS) theory nor IFS theory is able to deal with indeterminate and inconsistent data. For instance, consider an expert who gives an opinion about a certain object in such a way that 0.5 is the "possibility that the statement is true", 0.7 the "possibility that the statement is false" and 0.2 the "possibility that he or she is not sure". To resolve this, [18] introduced a new component called the "indeterminacy-membership function" and added it to the "truth-membership function" and "falsity-membership function", all of which are independent of each

∗ Corresponding author. Harish Garg, School of Mathematics, Thapar Institute of Engineering & Technology (Deemed University), Patiala, Punjab, India. Tel.: +91 8699031147; E-mail: harish.garg@thapar.edu, http://sites.google.com/site/harishg58iitr/

ISSN 1064-1246/19/$35.00 © 2019 – IOS Press and the authors. All rights reserved

other and lying in ]0−, 1+[; the corresponding set is known as a neutrosophic set (NS). Since it is hard to apply the concept of NSs to practical problems, [19] introduced the concept of a single-valued neutrosophic set (SVNS), a particular class of NS. After these came into existence, researchers made efforts to enrich the concept of information measures, used to reflect the decision maker's imperatives and the nature of individual choice, in the neutrosophic environment. [20] introduced distance measures, while [21, 22] presented the correlation for single-valued neutrosophic numbers (SVNNs). Also, [23] improved the concept of cosine similarity for SVNSs, which was first introduced by [24] in a neutrosophic environment. [25] presented an improved score function for ranking SVNNs. Apart from these, various authors have incorporated SVNS theory into information measures [26–29] and aggregation operators [31–33] and applied them to solve decision-making problems.

TOPSIS, developed by [15], is a well-known decision-making approach for finding the best alternative(s) based on ideal values. The chief advantage of TOPSIS is that it considers the positive and negative ideal solutions as anchor points to reflect the contrast of the currently achievable criterion performances. In TOPSIS, the preferred alternative should have the shortest distance from the positive-ideal solution and the farthest distance from the negative-ideal solution. The difference between TOPSIS and neutrosophic TOPSIS lies in their rating approaches: the merit of neutrosophic TOPSIS is that it designates the importance of attributes and the performance of alternatives with respect to the various attributes by using SVNNs instead of precise numbers. Under this environment, [34–36] presented TOPSIS methods for decision-making problems based on distance measures to rank the alternatives.

Consequently, keeping the flexibility and efficiency of SVNSs, the theme of this work is to present a novel divergence measure for SVNSs to find the discrimination between them. Further, based on it, a TOPSIS method for solving neutrosophic decision-making problems is presented. As far as we know, there is no investigation on divergence measures in the neutrosophic environment. The proposed measure has elegant properties, which are expressed and tested in the paper to enhance its employability. In contrast to the classical TOPSIS method, which is based on a distance measure, this paper applies the proposed divergence measure to establish the comparative index of the closeness coefficient. The strength of this extension is demonstrated by an example of the decision-making process. Finally, through an example, the superiority of the divergence-based TOPSIS method over the classical TOPSIS is shown.

The rest of the text is organized as follows: Section 2 introduces the divergence measure in the neutrosophic environment, and its various properties are investigated in detail. Section 3 describes the TOPSIS approach for solving decision-making problems based on the divergence measure, followed by an illustrative example. Further, various test criteria are applied to check its applicability and explore its effectiveness. Finally, the paper arrives at its conclusion in Section 4.

2. Proposed divergence measure for single-valued neutrosophic set

In this section, we present a divergence measure between two SVNSs. Further, we discuss its various properties apart from the basic axioms.

2.1. Basic concepts

Firstly, some basic definitions related to NSs and SVNSs on the universal set X are discussed.

Definition 2.1. [18] A neutrosophic set (NS) A in X is

A = {⟨x_j, ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j)⟩ | x_j ∈ X}    (1)

where ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) represent the truth-, indeterminacy- and falsity-membership functions respectively, and are real standard or non-standard subsets of ]0−, 1+[ such that 0− ≤ sup ζ_A(x_j) + sup ρ_A(x_j) + sup ϑ_A(x_j) ≤ 3+. Here, sup represents the supremum of the set.

Definition 2.2. [18, 19] A single-valued neutrosophic set (SVNS) A in X is defined as

A = {⟨x_j, ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j)⟩ | x_j ∈ X}    (2)

where ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) ∈ [0, 1] such that 0 ≤ ζ_A(x_j) + ρ_A(x_j) + ϑ_A(x_j) ≤ 3 for all x_j ∈ X. For convenience, we denote such a triple as A = ⟨ζ_A, ρ_A, ϑ_A⟩ throughout this article and call it a single-valued neutrosophic number (SVNN).

Definition 2.3. Let A = {⟨x_j, ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j)⟩ | x_j ∈ X} and B = {⟨x_j, ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j)⟩ | x_j ∈

X} be two SVNSs. Then the following operations are defined as [19]:

(i) A ⊆ B if ζ_A(x_j) ≤ ζ_B(x_j), ρ_A(x_j) ≥ ρ_B(x_j) and ϑ_A(x_j) ≥ ϑ_B(x_j) for all x_j in X;
(ii) A = B if and only if A ⊆ B and B ⊆ A;
(iii) A^c = {⟨x_j, ϑ_A(x_j), ρ_A(x_j), ζ_A(x_j)⟩ | x_j ∈ X};
(iv) A ∩ B = {⟨x_j, min(ζ_A(x_j), ζ_B(x_j)), max(ρ_A(x_j), ρ_B(x_j)), max(ϑ_A(x_j), ϑ_B(x_j))⟩ | x_j ∈ X};
(v) A ∪ B = {⟨x_j, max(ζ_A(x_j), ζ_B(x_j)), min(ρ_A(x_j), ρ_B(x_j)), min(ϑ_A(x_j), ϑ_B(x_j))⟩ | x_j ∈ X}.

2.2. Proposed divergence measure

Let Φ(X) be the class of SVNSs over the universal set X. Then for any A, B ∈ Φ(X), a real function D : Φ(X) × Φ(X) → R⁺ is called a divergence measure, denoted by D(A|B), if it satisfies the following axioms:

(P1) D(A|B) ≥ 0;
(P2) D(A|B) = D(B|A);
(P3) D(A|B) = 0 if A = B;
(P4) D(A|B) = D(A^c|B^c).

Definition 2.4. For two SVNSs A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩ and B = ⟨ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) | x_j ∈ X⟩, the divergence measure of A against B, which measures the degree of discrimination between the pair, is defined as:

D(A|B) = (1/(n(√2 − 1))) Σ_{j=1}^{n} [ (√(((ζ_A(x_j))² + (ζ_B(x_j))²)/2) − (ζ_A(x_j) + ζ_B(x_j))/2)
+ (√(((ρ_A(x_j))² + (ρ_B(x_j))²)/2) − (ρ_A(x_j) + ρ_B(x_j))/2)
+ (√(((ϑ_A(x_j))² + (ϑ_B(x_j))²)/2) − (ϑ_A(x_j) + ϑ_B(x_j))/2) ]    (3)

Theorem 2.1. The divergence measure D(A|B), as defined in Definition 2.4, satisfies the four axioms (P1)–(P4) for any two SVNSs A and B.

Proof 2.1. For two SVNSs A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩ and B = ⟨ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) | x_j ∈ X⟩, we have:

(P1) Since 0 ≤ ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) ≤ 1 and 0 ≤ ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) ≤ 1, the quadratic–arithmetic mean inequality implies √(((ζ_A(x_j))² + (ζ_B(x_j))²)/2) ≥ (ζ_A(x_j) + ζ_B(x_j))/2, √(((ρ_A(x_j))² + (ρ_B(x_j))²)/2) ≥ (ρ_A(x_j) + ρ_B(x_j))/2 and √(((ϑ_A(x_j))² + (ϑ_B(x_j))²)/2) ≥ (ϑ_A(x_j) + ϑ_B(x_j))/2 for each j. Therefore, from Equation (3), we get D(A|B) ≥ 0.

(P2) It is trivial from Equation (3), which is symmetric in A and B.

(P3) Assume that A = B, which implies ζ_A(x_j) = ζ_B(x_j), ρ_A(x_j) = ρ_B(x_j) and ϑ_A(x_j) = ϑ_B(x_j) for each j. Then every summand of Equation (3) reduces to the form √(a²) − a = 0, so that D(A|B) = (1/(n(√2 − 1))) Σ_{j=1}^{n} [(ζ_A(x_j) − ζ_A(x_j)) + (ρ_A(x_j) − ρ_A(x_j)) + (ϑ_A(x_j) − ϑ_A(x_j))] = 0.

(P4) For the SVNSs A and B we have A^c = ⟨ϑ_A(x_j), ρ_A(x_j), ζ_A(x_j)⟩ and B^c = ⟨ϑ_B(x_j), ρ_B(x_j), ζ_B(x_j)⟩, so by Equation (3) the expansion of D(A^c|B^c) consists of the same three summands as that of D(A|B) with the ζ- and ϑ-summands interchanged. Hence D(A^c|B^c) = D(A|B).

Hence, D(A|B) is a valid divergence measure.

Also, it is observed that D(A|B) satisfies certain further properties, which are stated below.

Theorem 2.2. For an SVNS A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩, D(A|A^c) = 0 if and only if ζ_A(x_j) = ϑ_A(x_j) for each x_j ∈ X.

Proof 2.2. For an SVNS A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩, we have A^c = ⟨ϑ_A(x_j), ρ_A(x_j), ζ_A(x_j) | x_j ∈ X⟩. Thus, from Equation (3), in which the ρ-summand vanishes, we get

D(A|A^c) = 0
⇔ (1/(n(√2 − 1))) Σ_{j=1}^{n} 2(√(((ζ_A(x_j))² + (ϑ_A(x_j))²)/2) − (ζ_A(x_j) + ϑ_A(x_j))/2) = 0
⇔ √(((ζ_A(x_j))² + (ϑ_A(x_j))²)/2) − (ζ_A(x_j) + ϑ_A(x_j))/2 = 0 for each x_j ∈ X
⇔ (ζ_A(x_j) − ϑ_A(x_j))² = 0 for each x_j ∈ X
⇔ ζ_A(x_j) = ϑ_A(x_j) for each x_j ∈ X

Hence, D(A|A^c) = 0 if and only if ζ_A(x_j) = ϑ_A(x_j) for each x_j ∈ X.

Theorem 2.3. For an SVNS A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩, D(A|A^c) = 1 if and only if either ζ_A(x_j) = 0, ϑ_A(x_j) = 1 or ζ_A(x_j) = 1, ϑ_A(x_j) = 0.

Proof 2.3. For an SVNS A, we have

D(A|A^c) = 1
⇔ (1/(n(√2 − 1))) Σ_{j=1}^{n} 2(√(((ζ_A(x_j))² + (ϑ_A(x_j))²)/2) − (ζ_A(x_j) + ϑ_A(x_j))/2) = 1
⇔ √(((ζ_A(x_j))² + (ϑ_A(x_j))²)/2) − (ζ_A(x_j) + ϑ_A(x_j))/2 = (√2 − 1)/2 for each x_j.

On [0, 1]² the left-hand side attains the value (√2 − 1)/2 only when (ζ_A(x_j))² + (ϑ_A(x_j))² = 1 and ζ_A(x_j) + ϑ_A(x_j) = 1, which together imply ζ_A(x_j)ϑ_A(x_j) = 0. Hence D(A|A^c) = 1 if and only if either ζ_A(x_j) = 0, ϑ_A(x_j) = 1 or ζ_A(x_j) = 1, ϑ_A(x_j) = 0.

Theorem 2.4. For SVNSs A and B, we have D(A|B^c) = D(A^c|B).

Proof 2.4. Let A^c = ⟨ϑ_A(x_j), ρ_A(x_j), ζ_A(x_j) | x_j ∈ X⟩ and B^c = ⟨ϑ_B(x_j), ρ_B(x_j), ζ_B(x_j) | x_j ∈ X⟩ be the complements of the SVNSs A and B. Then, from Equation (3),

D(A^c|B) = (1/(n(√2 − 1))) Σ_{j=1}^{n} [ (√(((ϑ_A(x_j))² + (ζ_B(x_j))²)/2) − (ϑ_A(x_j) + ζ_B(x_j))/2) + (√(((ρ_A(x_j))² + (ρ_B(x_j))²)/2) − (ρ_A(x_j) + ρ_B(x_j))/2) + (√(((ζ_A(x_j))² + (ϑ_B(x_j))²)/2) − (ζ_A(x_j) + ϑ_B(x_j))/2) ]

which, on reordering the summands, is exactly the expansion of D(A|B^c). Hence D(A^c|B) = D(A|B^c).

Divide the universe X into two disjoint parts X₁ and X₂, where X₁ = {x_j : x_j ∈ X, A(x_j) ⊆ B(x_j)} and X₂ = {x_j : x_j ∈ X, B(x_j) ⊆ A(x_j)}. Based on these considerations, we further propose some properties of the divergence measure, as follows.

Theorem 2.5. If A and B are two SVNSs defined on the universal set X, then D(A ∩ B|A ∪ B) = D(A|B).

Proof 2.5. Let A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩ and B = ⟨ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) | x_j ∈ X⟩ be two SVNSs defined on the universal set X. By Definition 2.3, A ∩ B = A and A ∪ B = B on X₁, while A ∩ B = B and A ∪ B = A on X₂. Hence, by Equation (3),

D(A ∩ B|A ∪ B) = (1/(n(√2 − 1))) Σ_{x_j ∈ X₁} [ the ζ-, ρ- and ϑ-summands of Equation (3) for the pair (A, B) ] + (1/(n(√2 − 1))) Σ_{x_j ∈ X₂} [ the same summands for the pair (B, A) ] = D(A|B),

since each summand is symmetric in its two arguments. Thus, the result holds.

Theorem 2.6. For two SVNSs A and B, we have
(i) D(A|A ∪ B) = D(B|A ∩ B);
(ii) D(A|A ∩ B) = D(B|A ∪ B);
(iii) D(A|A ∪ B) + D(A|A ∩ B) = D(A|B);
(iv) D(B|A ∪ B) + D(B|A ∩ B) = D(A|B).

Proof 2.6. We prove the first part; the rest can be proved in the same manner. By the definition of the divergence measure, we have

D(A|A ∪ B) = (1/(n(√2 − 1))) Σ_{j=1}^{n} [ (√(((ζ_A(x_j))² + (ζ_{A∪B}(x_j))²)/2) − (ζ_A(x_j) + ζ_{A∪B}(x_j))/2) + (√(((ρ_A(x_j))² + (ρ_{A∪B}(x_j))²)/2) − (ρ_A(x_j) + ρ_{A∪B}(x_j))/2) + (√(((ϑ_A(x_j))² + (ϑ_{A∪B}(x_j))²)/2) − (ϑ_A(x_j) + ϑ_{A∪B}(x_j))/2) ]
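The set-theoretic identities of Theorems 2.4–2.6 are also easy to confirm numerically; a small sketch under the same list-of-triples encoding (helper names are ours):

```python
from math import sqrt

def d(A, B):
    # D(A|B) of Equation (3) over (truth, indeterminacy, falsity) triples
    s = sum(sqrt((a*a + b*b) / 2) - (a + b) / 2
            for x, y in zip(A, B) for a, b in zip(x, y))
    return s / (len(A) * (sqrt(2) - 1))

comp = lambda S: [(f, i, t) for (t, i, f) in S]
inter = lambda A, B: [(min(ta, tb), max(ia, ib), max(fa, fb))
                      for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]
union = lambda A, B: [(max(ta, tb), min(ia, ib), min(fa, fb))
                      for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

A = [(0.5, 0.3, 0.4), (0.2, 0.2, 0.6), (0.7, 0.3, 0.2)]
B = [(0.7, 0.1, 0.3), (0.6, 0.3, 0.2), (0.4, 0.5, 0.2)]

assert abs(d(A, comp(B)) - d(comp(A), B)) < 1e-12                    # Theorem 2.4
assert abs(d(inter(A, B), union(A, B)) - d(A, B)) < 1e-12            # Theorem 2.5
assert abs(d(A, union(A, B)) - d(B, inter(A, B))) < 1e-12            # Theorem 2.6(i)
assert abs(d(A, union(A, B)) + d(A, inter(A, B)) - d(A, B)) < 1e-12  # Theorem 2.6(iii)
```

The identities hold elementwise for each of the three membership components, so the test sets need not be nested in one another.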


Since A ∪ B = B on X₁ and A ∪ B = A on X₂, the summands over X₂ vanish and

D(A|A ∪ B) = (1/(n(√2 − 1))) Σ_{x_j ∈ X₁} [ (√(((ζ_A(x_j))² + (ζ_B(x_j))²)/2) − (ζ_A(x_j) + ζ_B(x_j))/2) + (√(((ρ_A(x_j))² + (ρ_B(x_j))²)/2) − (ρ_A(x_j) + ρ_B(x_j))/2) + (√(((ϑ_A(x_j))² + (ϑ_B(x_j))²)/2) − (ϑ_A(x_j) + ϑ_B(x_j))/2) ]    (4)

Also, since A ∩ B = A on X₁ and A ∩ B = B on X₂, the summands of D(B|A ∩ B) over X₂ vanish and

D(B|A ∩ B) = (1/(n(√2 − 1))) Σ_{x_j ∈ X₁} [ the same ζ-, ρ- and ϑ-summands for the pair (A, B) ]    (5)

Then, from Equations (4) and (5), we get D(A|A ∪ B) = D(B|A ∩ B).

Theorem 2.7. If A, B and C are SVNSs defined on the universal set X, then
(i) D(A|C) + D(B|C) − D(A ∪ B|C) ≥ 0;
(ii) D(A|C) + D(B|C) − D(A ∩ B|C) ≥ 0.

Proof 2.7. We prove only the first part, the proof of the second being analogous. By Equation (3), D(A|C) and D(B|C) are, up to the factor 1/(n(√2 − 1)), the sums over all x_j ∈ X of the ζ-, ρ- and ϑ-summands of Equation (3) for the pairs (A, C) and (B, C) respectively. Moreover, since A ∪ B = B on X₁ and A ∪ B = A on X₂,

D(A ∪ B|C) = (1/(n(√2 − 1))) [ Σ_{x_j ∈ X₁} (summands for the pair (B, C)) + Σ_{x_j ∈ X₂} (summands for the pair (A, C)) ].

Then,

D(A|C) + D(B|C) − D(A ∪ B|C) = (1/(n(√2 − 1))) [ Σ_{x_j ∈ X₁} (summands of Equation (3) for the pair (A, C)) + Σ_{x_j ∈ X₂} (summands for the pair (B, C)) ].

Since ζ(x_j), ρ(x_j), ϑ(x_j) ∈ [0, 1] for all x_j ∈ X, every summand of Equation (3) is non-negative. Thus, we get D(A|C) + D(B|C) − D(A ∪ B|C) ≥ 0.

Theorem 2.8. For three SVNSs A, B and C, we have

D(A ∩ B|C) + D(A ∪ B|C) = D(A|C) + D(B|C)

Proof 2.8. For three SVNSs A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩, B = ⟨ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) | x_j ∈ X⟩ and C = ⟨ζ_C(x_j), ρ_C(x_j), ϑ_C(x_j) | x_j ∈ X⟩, the definition of the divergence measure gives, since A ∩ B = A on X₁ and A ∩ B = B on X₂,

D(A ∩ B|C) = (1/(n(√2 − 1))) [ Σ_{x_j ∈ X₁} (summands of Equation (3) for the pair (A, C)) + Σ_{x_j ∈ X₂} (summands for the pair (B, C)) ].

Also, since A ∪ B = B on X₁ and A ∪ B = A on X₂,

D(A ∪ B|C) = (1/(n(√2 − 1))) [ Σ_{x_j ∈ X₁} (summands for the pair (B, C)) + Σ_{x_j ∈ X₂} (summands for the pair (A, C)) ].

Thus, by adding these equations, we get D(A ∩ B|C) + D(A ∪ B|C) = D(A|C) + D(B|C).

Theorem 2.9. For two SVNSs A and B, we have
(i) D(A|B) = D(A^c|B^c);
(ii) D(A|B^c) = D(A^c|B);
(iii) D(A|B) + D(A^c|B) = D(A^c|B^c) + D(A|B^c).

Proof 2.9. The first and second parts are proved similarly, and the third part follows by adding the first and second. The proof of part (ii) follows from Theorem 2.4.

Definition 2.5. For two SVNSs A = ⟨ζ_A(x_j), ρ_A(x_j), ϑ_A(x_j) | x_j ∈ X⟩ and B = ⟨ζ_B(x_j), ρ_B(x_j), ϑ_B(x_j) | x_j ∈ X⟩, the weighted divergence measure of A against B, which measures the degree of discrimination between the pair, is defined as:

D_w(A|B) = (1/(√2 − 1)) Σ_{j=1}^{n} w_j [ (√(((ζ_A(x_j))² + (ζ_B(x_j))²)/2) − (ζ_A(x_j) + ζ_B(x_j))/2)
+ (√(((ρ_A(x_j))² + (ρ_B(x_j))²)/2) − (ρ_A(x_j) + ρ_B(x_j))/2)
+ (√(((ϑ_A(x_j))² + (ϑ_B(x_j))²)/2) − (ϑ_A(x_j) + ϑ_B(x_j))/2) ]    (6)
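As a sanity check on Equation (6): with uniform weights w_j = 1/n it reduces to Equation (3). A minimal sketch (encoding and names are ours):

```python
from math import sqrt

def term(x, y):
    # the bracketed summand shared by Equations (3) and (6)
    return sum(sqrt((a*a + b*b) / 2) - (a + b) / 2 for a, b in zip(x, y))

def d(A, B):                       # Equation (3)
    return sum(term(x, y) for x, y in zip(A, B)) / (len(A) * (sqrt(2) - 1))

def dw(A, B, w):                   # Equation (6); w sums to one
    return sum(wj * term(x, y) for wj, x, y in zip(w, A, B)) / (sqrt(2) - 1)

A = [(0.5, 0.3, 0.4), (0.2, 0.2, 0.6), (0.7, 0.1, 0.3)]
B = [(0.7, 0.1, 0.3), (0.6, 0.3, 0.2), (0.4, 0.5, 0.2)]
n = len(A)

assert abs(dw(A, B, [1 / n] * n) - d(A, B)) < 1e-12  # uniform weights recover Eq. (3)
```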

3. TOPSIS approach based on the proposed divergence measure for MCDM problems

In this section, a TOPSIS method based on the proposed divergence measure for SVNSs is presented, followed by an illustrative example to demonstrate the approach.

3.1. Maximizing divergence method for determining the weights

In this subsection, we construct a nonlinear programming model by maximizing the divergence value D(w) to find the optimal weights of the criteria. For this, let Δ be the set of the partially known weight information. For the j-th criterion, the divergence of the i-th alternative from all the other alternatives can be defined as follows:

D_ij = Σ_{q=1}^{m} D(A_ij|A_qj)

where D(A_ij|A_qj) denotes the divergence measure, defined in Definition 2.4, between the two ratings A_ij and A_qj.

Let D_j = Σ_{i=1}^{m} D_ij = Σ_{i=1}^{m} Σ_{q=1}^{m} D(A_ij|A_qj) represent the total divergence value of all the alternatives from the other alternatives for the j-th attribute, and hence

D(w) = Σ_{j=1}^{n} w_j D_j = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{q=1}^{m} w_j D(A_ij|A_qj)

represents the total divergence value of all the alternatives with respect to all criteria.

Under these, a nonlinear optimization model is constructed to determine the optimal weight vector, as follows:

max D(w) = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{q=1}^{m} w_j D(A_ij|A_qj)
subject to w_j ∈ Δ, w_j ≥ 0, Σ_{j=1}^{n} w_j = 1    (7)

By solving this model, we get the optimal weights w = (w₁, w₂, ..., w_n)ᵀ of the criteria.

On the other hand, if the information about the criteria weights is completely unknown, then we establish another nonlinear optimization model as

max D(w) = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{q=1}^{m} w_j D(A_ij|A_qj)
subject to w_j ≥ 0, Σ_{j=1}^{n} w_j² = 1    (8)

Solving this model by the Lagrangian multiplier method and normalizing, we get the weight vector as

w_j = Σ_{i=1}^{m} Σ_{q=1}^{m} D(A_ij|A_qj) / Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{q=1}^{m} D(A_ij|A_qj)    (9)

3.2. Proposed TOPSIS approach based on the divergence measure

The TOPSIS method [15] is a simple and effective tool for solving decision-making problems, which aims to pick out the best alternative(s) with the shortest distance from the positive ideal solution (PIS) and the farthest distance from the negative ideal solution (NIS). In this section, instead of using a distance measure to compute the relative closeness coefficient, the concept of the divergence measure is applied to the main structure of the TOPSIS method. For this, assume that there are m alternatives, denoted A₁, A₂, ..., A_m, which are evaluated with respect to n criteria C₁, C₂, ..., C_n by an expert. The preferences related to each alternative A_i (i = 1, 2, ..., m) are represented in the SVNS environment as α_ij = ⟨ζ_ij, ρ_ij, ϑ_ij⟩, i = 1, 2, ..., m; j = 1, 2, ..., n, where ζ_ij, ρ_ij and ϑ_ij represent the degrees of satisfaction, indeterminacy and dissatisfaction, respectively, of A_i with respect to criterion C_j, such that 0 ≤ ζ_ij, ρ_ij, ϑ_ij ≤ 1 and ζ_ij + ρ_ij + ϑ_ij ≤ 3. Then, the following steps of the TOPSIS method are summarized for solving decision-making problems under SVNN information by using the proposed measures:
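Because model (7) is linear in w, with box bounds and Σ w_j = 1 it can be solved by a simple greedy allocation. The sketch below, using the decision matrix of the illustrative example in Section 3.3, reproduces the example's objective coefficients and weight vector; the helper names and the greedy solver are ours:

```python
from math import sqrt

def term(x, y):  # bracketed summand of Equation (3)
    return sum(sqrt((a*a + b*b) / 2) - (a + b) / 2 for a, b in zip(x, y))

# ratings of the five alternatives (rows) under the four criteria (columns)
D = [[(0.5,0.3,0.4), (0.5,0.2,0.5), (0.2,0.2,0.6), (0.3,0.2,0.4)],
     [(0.7,0.1,0.3), (0.7,0.2,0.3), (0.6,0.3,0.2), (0.6,0.4,0.2)],
     [(0.5,0.3,0.4), (0.6,0.2,0.4), (0.6,0.1,0.2), (0.5,0.1,0.3)],
     [(0.7,0.3,0.2), (0.7,0.2,0.2), (0.4,0.5,0.2), (0.5,0.2,0.2)],
     [(0.4,0.1,0.3), (0.5,0.1,0.2), (0.4,0.1,0.5), (0.4,0.3,0.6)]]
n = len(D[0])

# D_j: total pairwise divergence per criterion, normalized as in Equation (3)
Dj = [sum(term(ri[j], rq[j]) for ri in D for rq in D) / (n * (sqrt(2) - 1))
      for j in range(n)]
print([round(v, 4) for v in Dj])   # [0.3344, 0.2372, 0.767, 0.4519]

# greedy solution of the linear model (7) under the example's box bounds:
# start at the lower bounds, then spend the remaining mass on the largest D_j first
lo, hi = [0.10, 0.20, 0.20, 0.15], [0.20, 0.30, 0.25, 0.25]
w, slack = lo[:], 1 - sum(lo)
for j in sorted(range(n), key=lambda k: -Dj[k]):
    add = min(hi[j] - w[j], slack)
    w[j], slack = w[j] + add, slack - add
print([round(x, 2) for x in w])    # [0.2, 0.3, 0.25, 0.25]
```

The greedy fill is optimal here because the objective is linear and the feasible set is a box intersected with the simplex; for the completely unknown case, Equation (9) is just Dj normalized to sum to one.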

Step 1: Arrange the collective information: The information related to each alternative A_i (i = 1, 2, ..., m) with respect to the different criteria C_j (j = 1, 2, ..., n) is arranged in the form of the neutrosophic decision matrix D, represented as

          C₁    C₂   ...  C_n
  A₁    ( α₁₁   α₁₂  ...  α₁ₙ )
  A₂    ( α₂₁   α₂₂  ...  α₂ₙ )
  ...     ...   ...  ...  ...
  A_m   ( α_m1  α_m2 ...  α_mn )

Step 2: Normalize the decision matrix: There are two types of criteria, benefit type and cost type. Therefore, to balance the physical dimensions and eliminate the influence of the criterion types, the matrix D = (α_ij)_{m×n} is converted to the standard matrix R = (r_ij)_{m×n} by transforming the rating values of cost-type criteria, if any, into benefit type using the normalization formula:

r_ij = ⟨ζ_ij, ρ_ij, ϑ_ij⟩ for benefit-type criteria; r_ij = ⟨ϑ_ij, ρ_ij, ζ_ij⟩ for cost-type criteria.

Step 3: Determine the attribute weights: The weights of the different criteria w = (w₁, w₂, ..., w_n)ᵀ are determined by solving optimization model (7) or (8), according to whether the information about the criteria weights is partially known or completely unknown.

Step 4: Determine the discrimination of each alternative from the ideal and anti-ideal alternatives: Since 0 ≤ ζ_ij, ρ_ij, ϑ_ij ≤ 1, all rating values in the decision matrix R = (r_ij)_{m×n} are SVNNs. Accordingly, the membership degrees of the relative positive ideal solution (RPIS) may be expressed as A⁺ = ⟨1, 0, 0⟩_{1×n}, and similarly the membership degrees of the relative negative ideal solution (RNIS) may be expressed as A⁻ = ⟨0, 1, 1⟩_{1×n}. From these, it is seen that A⁺ and A⁻ are complements of each other. Furthermore, instead of fixing the degrees of A⁺ at 1, 0 and 0, the decision maker may vary them by defining A⁺ and A⁻ respectively as ⟨ζ_j⁺, ρ_j⁺, ϑ_j⁺⟩_{1×n} and ⟨ζ_j⁻, ρ_j⁻, ϑ_j⁻⟩_{1×n}, where ζ_j⁺ = max_i{ζ_ij | i = 1, 2, ..., m}, ρ_j⁺ = min_i{ρ_ij | i = 1, 2, ..., m}, ϑ_j⁺ = min_i{ϑ_ij | i = 1, 2, ..., m}, ζ_j⁻ = min_i{ζ_ij | i = 1, 2, ..., m}, ρ_j⁻ = max_i{ρ_ij | i = 1, 2, ..., m} and ϑ_j⁻ = max_i{ϑ_ij | i = 1, 2, ..., m}. From this, it is clearly seen that ⟨ζ_j⁻, ρ_j⁻, ϑ_j⁻⟩ ⊆ ⟨ζ_ij, ρ_ij, ϑ_ij⟩ ⊆ ⟨ζ_j⁺, ρ_j⁺, ϑ_j⁺⟩.

In order to compare the different alternatives, the weighted divergence measure defined in Equation (6) is used to measure the degree of discrimination between an alternative A_i and the RPIS A⁺, as well as the RNIS A⁻, where r_ij = ⟨ζ_ij, ρ_ij, ϑ_ij⟩, as follows:

D_w(A_i|A⁺) = (1/(√2 − 1)) Σ_{j=1}^{n} w_j [ (√((ζ_ij² + (ζ_j⁺)²)/2) − (ζ_ij + ζ_j⁺)/2) + (√((ρ_ij² + (ρ_j⁺)²)/2) − (ρ_ij + ρ_j⁺)/2) + (√((ϑ_ij² + (ϑ_j⁺)²)/2) − (ϑ_ij + ϑ_j⁺)/2) ]    (10)

and

D_w(A_i|A⁻) = (1/(√2 − 1)) Σ_{j=1}^{n} w_j [ (√((ζ_ij² + (ζ_j⁻)²)/2) − (ζ_ij + ζ_j⁻)/2) + (√((ρ_ij² + (ρ_j⁻)²)/2) − (ρ_ij + ρ_j⁻)/2) + (√((ϑ_ij² + (ϑ_j⁻)²)/2) − (ϑ_ij + ϑ_j⁻)/2) ]    (11)

Step 5: Compute the relative closeness coefficient: Based on Equations (10) and (11), the relative closeness coefficient of the alternative A_i (i = 1, 2, ..., m) with respect to A⁺ and A⁻ is defined as follows:

R_i = D_w(A_i|A⁻) / (D_w(A_i|A⁻) + D_w(A_i|A⁺))    (12)

provided D_w(A_i|A⁺) ≠ 0. It can be seen that 0 ≤ D_w(A_i|A⁻) ≤ D_w(A_i|A⁻) + D_w(A_i|A⁺), and hence 0 ≤ R_i ≤ 1.

Step 6: Rank the alternatives: Based on the descending order of the values of R_i, we rank the alternatives A_i (i = 1, 2, ..., m) and select the best alternative(s).

3.3. Illustrative example

A travel agency named Marricot Trip mate has excelled in providing travel-related services to domestic and inbound tourists. The agency wants to provide more facilities, such as detailed information, online booking capabilities, and the ability to book and sell airline tickets, car rentals, hotels and other travel-related services, to its customers. For this purpose, the agency intends to find an appropriate information technology (IT) software development company that delivers affordable solutions through software development. To this end, the agency forms a set of five companies (alternatives), namely Zensar Tech (A₁), NIIT Tech (A₂), HCL Tech (A₃), Hexaware Tech (A₄) and Tech Mahindra (A₅), and the selection is held on the basis of different criteria, namely Technology Expertise (C₁), Service Quality (C₂), Project Management (C₃) and Industry Experience (C₄). Now, we can obtain the evaluation of an alternative A_i (i = 1, 2, 3, 4, 5) with respect to the criterion C_j (j = 1, 2, 3, 4) from the questionnaire of a domain expert. For instance, when we ask the opinion of an expert about alternative A₁ with respect to criterion C₁, he or she may state that the "possibility degree in which the statement is good" is 0.5, "the statement is false" is 0.4 and "the degree in which he or she is unsure" is 0.3. In this case, the evaluation of this alternative is represented as the SVNN α₁₁ = ⟨0.5, 0.3, 0.4⟩. In a similar manner, we can obtain all the evaluations of the alternatives A_i with respect to the criteria C_j. Then, the following steps of the proposed approach are executed to find the most desirable alternative(s).

Step 1: The complete rating values of all the alternatives under single-valued neutrosophic information for the above four criteria are summarized in the following single-valued neutrosophic decision matrix D = (α_ij):

        C₁               C₂               C₃               C₄
A₁  ⟨0.5, 0.3, 0.4⟩  ⟨0.5, 0.2, 0.5⟩  ⟨0.2, 0.2, 0.6⟩  ⟨0.3, 0.2, 0.4⟩
A₂  ⟨0.7, 0.1, 0.3⟩  ⟨0.7, 0.2, 0.3⟩  ⟨0.6, 0.3, 0.2⟩  ⟨0.6, 0.4, 0.2⟩
A₃  ⟨0.5, 0.3, 0.4⟩  ⟨0.6, 0.2, 0.4⟩  ⟨0.6, 0.1, 0.2⟩  ⟨0.5, 0.1, 0.3⟩
A₄  ⟨0.7, 0.3, 0.2⟩  ⟨0.7, 0.2, 0.2⟩  ⟨0.4, 0.5, 0.2⟩  ⟨0.5, 0.2, 0.2⟩
A₅  ⟨0.4, 0.1, 0.3⟩  ⟨0.5, 0.1, 0.2⟩  ⟨0.4, 0.1, 0.5⟩  ⟨0.4, 0.3, 0.6⟩

Step 2: Since all the criteria are of benefit type, there is no need for the normalization process.

Step 3: Consider the partial information about the criterion weights given by Δ = {0.10 ≤ w₁ ≤ 0.2, 0.2 ≤ w₂ ≤ 0.3, 0.2 ≤ w₃ ≤ 0.25, 0.15 ≤ w₄ ≤ 0.25, w_j ≥ 0, Σ_{j=1}^{4} w_j = 1}; hence the optimization model (7) is constructed as follows:

maximize D(w) = 0.3344w₁ + 0.2372w₂ + 0.7670w₃ + 0.4519w₄
subject to w ∈ Δ

By solving this model, we get the optimal weight vector of the criteria w = (0.2, 0.3, 0.25, 0.25)ᵀ.

Step 4: The RPIS and RNIS are calculated from the given information as A⁺ = {(C₁, ⟨0.7, 0.1, 0.2⟩), (C₂, ⟨0.7, 0.1, 0.2⟩), (C₃, ⟨0.6, 0.1, 0.2⟩), (C₄, ⟨0.6, 0.1, 0.2⟩)} and A⁻ = {(C₁, ⟨0.4, 0.3, 0.4⟩), (C₂, ⟨0.5, 0.2, 0.5⟩), (C₃, ⟨0.2, 0.5, 0.6⟩), (C₄, ⟨0.3, 0.4, 0.6⟩)}. Thus, the degrees of discrimination between each alternative A_i (i = 1, 2, 3, 4, 5) and A⁺, A⁻ are computed by using Equations (10) and (11) as

$$
\begin{aligned}
D_w(A_1|A^+) = \frac{1}{\sqrt{2}-1}\Bigg[\,
&0.2\left\{\sqrt{\tfrac{0.5^2+0.7^2}{2}}-\tfrac{0.5+0.7}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.5^2+0.7^2}{2}}-\tfrac{0.5+0.7}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.2^2+0.6^2}{2}}-\tfrac{0.2+0.6}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.3^2+0.6^2}{2}}-\tfrac{0.3+0.6}{2}\right\}\\
+\,&0.2\left\{\sqrt{\tfrac{0.3^2+0.1^2}{2}}-\tfrac{0.3+0.1}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.2^2+0.1^2}{2}}-\tfrac{0.2+0.1}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.2^2+0.1^2}{2}}-\tfrac{0.2+0.1}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.2^2+0.1^2}{2}}-\tfrac{0.2+0.1}{2}\right\}\\
+\,&0.2\left\{\sqrt{\tfrac{0.4^2+0.2^2}{2}}-\tfrac{0.4+0.2}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.5^2+0.2^2}{2}}-\tfrac{0.5+0.2}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.6^2+0.2^2}{2}}-\tfrac{0.6+0.2}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.4^2+0.2^2}{2}}-\tfrac{0.4+0.2}{2}\right\}\Bigg]\\
=\,&0.1487
\end{aligned}
$$

and

$$
\begin{aligned}
D_w(A_1|A^-) = \frac{1}{\sqrt{2}-1}\Bigg[\,
&0.2\left\{\sqrt{\tfrac{0.5^2+0.4^2}{2}}-\tfrac{0.5+0.4}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.5^2+0.5^2}{2}}-\tfrac{0.5+0.5}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.2^2+0.2^2}{2}}-\tfrac{0.2+0.2}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.3^2+0.3^2}{2}}-\tfrac{0.3+0.3}{2}\right\}\\
+\,&0.2\left\{\sqrt{\tfrac{0.3^2+0.3^2}{2}}-\tfrac{0.3+0.3}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.2^2+0.2^2}{2}}-\tfrac{0.2+0.2}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.2^2+0.5^2}{2}}-\tfrac{0.2+0.5}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.2^2+0.4^2}{2}}-\tfrac{0.2+0.4}{2}\right\}\\
+\,&0.2\left\{\sqrt{\tfrac{0.4^2+0.4^2}{2}}-\tfrac{0.4+0.4}{2}\right\}
+0.3\left\{\sqrt{\tfrac{0.5^2+0.5^2}{2}}-\tfrac{0.5+0.5}{2}\right\}\\
+\,&0.25\left\{\sqrt{\tfrac{0.6^2+0.6^2}{2}}-\tfrac{0.6+0.6}{2}\right\}
+0.25\left\{\sqrt{\tfrac{0.4^2+0.6^2}{2}}-\tfrac{0.4+0.6}{2}\right\}\Bigg]\\
=\,&0.0357
\end{aligned}
$$
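The two weighted divergence values just computed can be cross-checked with a short numerical sketch. The helper name `svns_divergence` and the data layout are ours, not from the paper; the sketch assumes the divergence of Equations (10)–(11) has the form visible in the expanded computation, i.e., a criterion-weighted sum of √((x² + y²)/2) − (x + y)/2 terms over the three membership grades, normalized by √2 − 1:

```python
import math

def svns_divergence(a, b, w):
    """Weighted divergence between two SVNSs a and b, each given as a list of
    (truth, indeterminacy, falsity) triples, with criterion weights w."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb), wj in zip(a, b, w):
        for x, y in ((ta, tb), (ia, ib), (fa, fb)):
            total += wj * (math.sqrt((x * x + y * y) / 2) - (x + y) / 2)
    return total / (math.sqrt(2) - 1)

# Alternative A1 and the ideal solutions, read off the computation above
A1   = [(0.5, 0.3, 0.4), (0.5, 0.2, 0.5), (0.2, 0.2, 0.6), (0.3, 0.2, 0.4)]
Apos = [(0.7, 0.1, 0.2), (0.7, 0.1, 0.2), (0.6, 0.1, 0.2), (0.6, 0.1, 0.2)]
Aneg = [(0.4, 0.3, 0.4), (0.5, 0.2, 0.5), (0.2, 0.5, 0.6), (0.3, 0.4, 0.6)]
w    = [0.2, 0.3, 0.25, 0.25]

print(round(svns_divergence(A1, Apos, w), 4))  # 0.1487
print(round(svns_divergence(A1, Aneg, w), 4))  # 0.0357
```

Each summand is nonnegative because the quadratic mean dominates the arithmetic mean, so the divergence vanishes exactly when the two SVNSs coincide.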
Similarly, we can calculate the others and get Dw(A2|A+) = 0.0512; Dw(A2|A−) = 0.1453; Dw(A3|A+) = 0.0466; Dw(A3|A−) = 0.1457; Dw(A4|A+) = 0.0661; Dw(A4|A−) = 0.1298; Dw(A5|A+) = 0.0914 and Dw(A5|A−) = 0.0933.

Step 5: The closeness coefficient of each alternative Ai (i = 1, 2, 3, 4, 5) is calculated by using Equation (12); the values obtained are R1 = 0.1936; R2 = 0.7396; R3 = 0.7577; R4 = 0.6628; and R5 = 0.5052.

Step 6: Therefore, the optimal ranking of these five alternatives is A3 ≻ A2 ≻ A4 ≻ A5 ≻ A1, and thus the best alternative is A3, namely HCL Tech.

3.4. Test criteria for the proposed approach

Some decision-making methods suffer from ranking irregularities, which may lead to undesirable results when some of the information in the given decision matrix is changed. Such approaches are not acceptable for ranking the alternatives, and hence the validity of a newly proposed method must be tested. [37] established the following test criteria to evaluate the validity of such methods.

Test criterion 1: "An effective decision-making method should not change the indication of the best alternative on replacing a non-optimal alternative by another worse alternative without changing the relative importance of each decision criterion."

Test criterion 2: "An effective decision-making method should follow the transitive property."

Test criterion 3: "When a decision-making problem is decomposed into smaller problems and the same method is applied to these smaller problems to rank the alternatives, the combined ranking of the alternatives should be identical to the original ranking of the un-decomposed problem."

The validity of the proposed TOPSIS method is tested using these test criteria.

3.4.1. Validity test under test criterion 1

Under this test criterion, the rating values of the non-optimal alternative A1 and the worse alternative A5 are replaced with other ones, and hence the updated decision matrix is summarized as

D' =
         C1                C2                C3                C4
A1   ⟨0.4, 0.3, 0.5⟩   ⟨0.5, 0.2, 0.5⟩   ⟨0.6, 0.2, 0.2⟩   ⟨0.4, 0.2, 0.3⟩
A2   ⟨0.7, 0.1, 0.3⟩   ⟨0.7, 0.2, 0.3⟩   ⟨0.6, 0.3, 0.2⟩   ⟨0.6, 0.4, 0.2⟩
A3   ⟨0.5, 0.3, 0.4⟩   ⟨0.6, 0.2, 0.4⟩   ⟨0.6, 0.1, 0.2⟩   ⟨0.5, 0.1, 0.3⟩
A4   ⟨0.7, 0.3, 0.2⟩   ⟨0.7, 0.2, 0.2⟩   ⟨0.4, 0.5, 0.2⟩   ⟨0.5, 0.2, 0.2⟩
A5   ⟨0.3, 0.1, 0.4⟩   ⟨0.2, 0.1, 0.5⟩   ⟨0.5, 0.1, 0.4⟩   ⟨0.6, 0.3, 0.4⟩

Based on this information, by applying the proposed approach, we compute the divergence measures of each alternative Ai (i = 1, 2, . . . , 5) to the RPIS (A+) and the RNIS (A−) as Dw(A1|A+) = 0.0889, Dw(A1|A−) = 0.0703, Dw(A2|A+) = 0.0512, Dw(A2|A−) = 0.1307, Dw(A3|A+) = 0.0466, Dw(A3|A−) = 0.1247, Dw(A4|A+) = 0.0661, Dw(A4|A−) = 0.1337, Dw(A5|A+) = 0.1309 and Dw(A5|A−) = 0.0650. Thus, the values of the relative closeness coefficient of each alternative are computed by using Equation (12) as R1 = 0.4416; R2 = 0.7187; R3 = 0.7279; R4 = 0.6693 and R5 = 0.3317. According to the descending order of these values, the alternatives are ranked as A3 ≻ A2 ≻ A4 ≻ A1 ≻ A5. Thus, the best alternative remains unchanged, i.e., A3. Hence the proposed method is valid under test criterion 1.

3.4.2. Validity test using test criteria 2 and 3

Here, if we decompose the original problem into the subproblems {A1, A2, A4}, {A1, A3, A5}, {A2, A4, A5}, {A2, A3, A5} and {A1, A3, A4} and apply the proposed approach to each of them, the ranking orders of the alternatives are A2 ≻ A4 ≻ A1, A3 ≻ A5 ≻ A1, A2 ≻ A4 ≻ A5, A3 ≻ A2 ≻ A5 and A3 ≻ A4 ≻ A1, respectively. Hence the combined ranking order is A3 ≻ A2 ≻ A4 ≻ A5 ≻ A1, which is identical to that of the original problem. Therefore, the method displays the transitive property, and hence the proposed method is valid under test criteria 2 and 3.
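Steps 4–6 of the worked example can likewise be reproduced numerically. The sketch below assumes Equation (12) is the standard relative closeness Ri = Dw(Ai|A−)/(Dw(Ai|A+) + Dw(Ai|A−)); feeding it the rounded divergence values of Step 4 recovers the reported coefficients up to rounding of the intermediate divergences, and the same ranking:

```python
# Divergences from Step 4 (d_pos: to the ideal A+, d_neg: to the negative ideal A-)
d_pos = [0.1487, 0.0512, 0.0466, 0.0661, 0.0914]
d_neg = [0.0357, 0.1453, 0.1457, 0.1298, 0.0933]

# Relative closeness coefficient (assumed form of Equation (12)); larger is better
R = [dn / (dp + dn) for dp, dn in zip(d_pos, d_neg)]

ranking = sorted(range(len(R)), key=lambda i: R[i], reverse=True)
print([round(r, 4) for r in R])            # [0.1936, 0.7394, 0.7577, 0.6626, 0.5051]
print(["A%d" % (i + 1) for i in ranking])  # ['A3', 'A2', 'A4', 'A5', 'A1']
```

The small discrepancies in the fourth decimal (e.g., 0.7394 vs. the paper's 0.7396) come from using the rounded divergences rather than full-precision values; the ranking A3 ≻ A2 ≻ A4 ≻ A5 ≻ A1 is unaffected.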
3.5. Superiority of the proposed approach over the existing approaches

In this section, we present some counterexamples to show that the existing approaches [20, 23, 26] under the SVNS environment fail to rank the given alternatives, while the proposed divergence measure overcomes this shortcoming.

Example 3.1. Consider a decision-making problem in which two alternatives, A1 and A2, are evaluated by a decision maker under three criteria, denoted C1, C2 and C3. The preference values of each alternative given by the decision maker are summarized under the SVNS environment as follows:

K =
         C1                C2                C3
A1   ⟨0.5, 0.2, 0.3⟩   ⟨0.6, 0.3, 0.1⟩   ⟨0.3, 0.2, 0.4⟩
A2   ⟨0.4, 0.3, 0.2⟩   ⟨0.6, 0.2, 0.2⟩   ⟨0.4, 0.3, 0.3⟩        (13)

In order to find the best alternative, we utilize the existing TOPSIS approach proposed by [36], whose steps are as follows:

Step 1: The information of each alternative in the SVNS environment is represented in the form of the decision matrix given in Equation (13).

Step 2: The positive and negative ideal solutions are obtained as A+ = {⟨0.5, 0.2, 0.2⟩, ⟨0.6, 0.2, 0.1⟩, ⟨0.4, 0.2, 0.3⟩} and A− = {⟨0.4, 0.3, 0.3⟩, ⟨0.6, 0.3, 0.2⟩, ⟨0.3, 0.3, 0.4⟩}, respectively.

Step 3: Based on these values, the Hamming distance measures between the alternatives Ai (i = 1, 2) and the ideal solutions are evaluated as d(A1, A+) = 0.0444, d(A1, A−) = 0.0444, d(A2, A+) = 0.0444 and d(A2, A−) = 0.0444.

Step 4: Thus, the values of the relative closeness coefficient
$R_i = \dfrac{d(A_i, A^-)}{d(A_i, A^+) + d(A_i, A^-)}$
of each alternative Ai (i = 1, 2) are computed as R1 = 0.5 and R2 = 0.5. Hence, this method is unable to rank the given alternatives.

On the other hand, if we utilize the proposed divergence measure to compute the discrimination between the alternatives and the ideal solutions, then the measurement values are D(A1|A+) = 0.0137, D(A2|A+) = 0.0167, D(A1|A−) = 0.0167 and D(A2|A−) = 0.0137. Thus, the relative closeness coefficients Ri (i = 1, 2) of the alternatives Ai (i = 1, 2) are computed by using Equation (12) as R1 = 0.5500 and R2 = 0.4500. Since R1 > R2, the alternative A1 is better than A2. Therefore, the proposed divergence-based TOPSIS approach is able to rank the alternatives in a situation where the existing approach fails.

Example 3.2. Consider two SVNSs A1 and A2 defined over the universal set X = {x1, x2, . . . , x5} as

A1 = {⟨x1, 0.5, 0.3, 0.2⟩, ⟨x2, 0.5, 0.2, 0.3⟩, ⟨x3, 0.9, 0.0, 0.1⟩, ⟨x4, 0.5, 0.4, 0.1⟩, ⟨x5, 0.7, 0.1, 0.2⟩}

and

A2 = {⟨x1, 0.7, 0.2, 0.1⟩, ⟨x2, 0.5, 0.4, 0.1⟩, ⟨x3, 0.9, 0.1, 0.0⟩, ⟨x4, 0.6, 0.3, 0.1⟩, ⟨x5, 0.8, 0.0, 0.2⟩}.

The aim of this problem is to classify the unknown pattern B ∈ SVNS(X), defined as

B = {⟨x1, 0.7, 0.1, 0.2⟩, ⟨x2, 0.6, 0.3, 0.1⟩, ⟨x3, 0.7, 0.1, 0.2⟩, ⟨x4, 0.5, 0.4, 0.1⟩, ⟨x5, 0.4, 0.5, 0.1⟩},

into one of the classes A1 and A2. For it, if we apply the existing normalized Hamming (dN) and Euclidean (qN) distances [20] between the alternatives Ai (i = 1, 2) and the unknown pattern B, defined respectively as

$$
d_N(A_i, B) = \frac{1}{3n}\sum_{j=1}^{n}\Big( |\zeta_{A_i}(x_j) - \zeta_B(x_j)| + |\rho_{A_i}(x_j) - \rho_B(x_j)| + |\vartheta_{A_i}(x_j) - \vartheta_B(x_j)| \Big) \qquad (14)
$$

and

$$
q_N(A_i, B) = \sqrt{\frac{1}{3n}\sum_{j=1}^{n}\Big( \big(\zeta_{A_i}(x_j) - \zeta_B(x_j)\big)^2 + \big(\rho_{A_i}(x_j) - \rho_B(x_j)\big)^2 + \big(\vartheta_{A_i}(x_j) - \vartheta_B(x_j)\big)^2 \Big)} \qquad (15)
$$

then we get the measurement values corresponding to the sets A1 and A2 as dN(A1, B) =
dN(A2, B) = 0.1333 and qN(A1, B) = qN(A2, B) = 0.1751. Thus, this existing approach is unable to classify the pattern B into either of the classes A1 and A2. On the other hand, if we apply the proposed divergence measure between the alternatives Ai (i = 1, 2) and B, then we get D(A1|B) = 0.0901 and D(A2|B) = 0.1061. Hence, we conclude that the unknown pattern B most likely belongs to the class of A2. Therefore, the proposed divergence measure works successfully in those cases where the existing measures fail.

Example 3.3. Consider two SVNSs A1 = {⟨x1, 0.6, 0.1, 0.2⟩, ⟨x2, 0.5, 0.1, 0.3⟩, ⟨x3, 0.4, 0.1, 0.1⟩} and A2 = {⟨x1, 0.5, 0.1, 0.2⟩, ⟨x2, 0.4, 0.1, 0.3⟩, ⟨x3, 0.4, 0.1, 0.1⟩} defined over the universal set X = {x1, x2, x3}. Let the ideal solution considered by the decision maker be B = {⟨x1, 0.6, 0.1, 0.2⟩, ⟨x2, 0.4, 0.1, 0.3⟩, ⟨x3, 0.5, 0.1, 0.1⟩}. In order to rank these alternatives, we apply the existing improved cosine similarity measures [23], defined in Equations (16) and (17), as

$$
SC_1(A_i, B) = \frac{1}{n}\sum_{j=1}^{n}\cos\left[\frac{\pi}{2}\max\Big( |\zeta_{A_i}(x_j) - \zeta_B(x_j)|,\, |\rho_{A_i}(x_j) - \rho_B(x_j)|,\, |\vartheta_{A_i}(x_j) - \vartheta_B(x_j)| \Big)\right] \qquad (16)
$$

and

$$
SC_2(A_i, B) = \frac{1}{n}\sum_{j=1}^{n}\cos\left[\frac{\pi}{6}\Big( |\zeta_{A_i}(x_j) - \zeta_B(x_j)| + |\rho_{A_i}(x_j) - \rho_B(x_j)| + |\vartheta_{A_i}(x_j) - \vartheta_B(x_j)| \Big)\right] \qquad (17)
$$

and hence obtain the corresponding measurement values as SC1(A1, B) = SC1(A2, B) = 0.3292 and SC2(A1, B) = SC2(A2, B) = 0.3315. Thus, this approach is unable to rank the alternatives. On the other hand, if we compute the proposed divergence measure for these two alternatives, then we get D(A1, B) = 0.0045 and D(A2, B) = 0.0041 and hence conclude that A1 is a better alternative than A2.

Hence, we can say that the proposed divergence measure, as well as its corresponding TOPSIS approach, is able to solve decision-making problems in a better way under the SVNS environment where the existing studies fail to rank the alternatives.

4. Conclusion

In this manuscript, the theory of SVNSs has been enriched by proposing a new information measure, called a divergence measure, to evaluate the discrimination between two SVNSs. Some properties and relations of the measure have been investigated in detail. A maximizing divergence method has been presented to determine the optimal criterion weights under the SVN environment. Then, a single-valued neutrosophic TOPSIS is proposed to solve decision-making problems. A practical example is given to validate its effectiveness and practicality. From the study, we conclude that the proposed measure shows its superiority also in those cases where the existing measures fail to classify the objects. In the future, we will extend the proposed approach to the Pythagorean fuzzy set environment [38–40] and to uncertain and fuzzy environments [41–43].

Acknowledgments

The authors are thankful to the editor and anonymous reviewers for their constructive comments and suggestions that helped us in improving the paper significantly.

References

[1] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems 20 (1986), 87–96.
[2] E. Szmidt and J. Kacprzyk, Distances between intuitionistic fuzzy sets, Fuzzy Sets and Systems 114 (2000), 505–518.
[3] E. Szmidt and J. Kacprzyk, A similarity measure for intuitionistic fuzzy sets and its application in supporting medical diagnostic reasoning, Lecture Notes in Computer Science 3070 (2004), 388–393.
[4] S. Singh and H. Garg, Distance measures between type-2 intuitionistic fuzzy sets and their application to multicriteria decision-making process, Applied Intelligence 46(4) (2017), 788–799.
[5] W. Wang and X. Xin, Distance measure between intuitionistic fuzzy sets, Pattern Recognition Letters 26(13) (2005), 2063–2069.
[6] P. Wei and J. Ye, Improved intuitionistic fuzzy cross-entropy and its application to pattern recognitions, International Conference on Intelligent Systems and Knowledge Engineering (2010), 114–116.
[7] H. Garg, N. Agarwal and A. Tripathi, A novel generalized parametric directed divergence measure of intuitionistic fuzzy sets with its application, Annals of Fuzzy Mathematics and Informatics 13(6) (2017), 703–727.
[8] D. Rani and H. Garg, Distance measures between the complex intuitionistic fuzzy sets and its applications to the decision-making process, International Journal for Uncertainty Quantification 7(5) (2017), 423–439.
[9] H. Garg and K. Kumar, An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making, Soft Computing 22(15) (2018), 4959–4970.
[10] H. Garg, A new generalized improved score function of interval-valued intuitionistic fuzzy sets and applications in expert systems, Applied Soft Computing 38 (2016), 988–999.
[11] J. Ye, Multicriteria fuzzy decision-making method based on a novel accuracy function under interval-valued intuitionistic fuzzy environment, Expert Systems with Applications 36 (2009), 6899–6902.
[12] X.D. Peng and H. Garg, Algorithms for interval-valued fuzzy soft sets in emergency decision making based on WDBA and CODAS with new information measure, Computers & Industrial Engineering 119 (2018), 439–452.
[13] H. Garg, Novel intuitionistic fuzzy decision making method based on an improved operation laws and its application, Engineering Applications of Artificial Intelligence 60 (2017), 164–174.
[14] H. Garg and R. Arora, A nonlinear-programming methodology for multi-attribute decision-making problem with interval-valued intuitionistic fuzzy soft sets information, Applied Intelligence 48(8) (2018), 2031–2046.
[15] C.L. Hwang and K. Yoon, Multiple Attribute Decision Making Methods and Applications: A State-of-the-Art Survey, Springer-Verlag, Berlin Heidelberg, 1981.
[16] K. Kumar and H. Garg, TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment, Computational and Applied Mathematics 37(2) (2018), 1319–1329.
[17] K. Kumar and H. Garg, Connection number of set pair analysis based TOPSIS method on intuitionistic fuzzy sets and their application to decision making, Applied Intelligence 48(8) (2018), 2112–2119.
[18] F. Smarandache, Neutrosophy. Neutrosophic Probability, Set, and Logic, ProQuest Information & Learning, Ann Arbor, Michigan, USA, 1998.
[19] H. Wang, F. Smarandache, Y.Q. Zhang and R. Sunderraman, Single valued neutrosophic sets, Multispace Multistructure 4 (2010), 410–413.
[20] P. Majumdar, Neutrosophic sets and its applications to decision making, in: Computational Intelligence for Big Data Analysis: Frontier Advances and Applications, Springer, Cham, 2015, pp. 97–115.
[21] S. Broumi and F. Smarandache, Correlation coefficient of interval neutrosophic set, Applied Mechanics and Materials 436 (2013), 511–517.
[22] J. Ye, Multicriteria decision making method using the correlation coefficient under single-valued neutrosophic environment, International Journal of General Systems 42(4) (2013), 386–394.
[23] J. Ye, Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses, Artificial Intelligence in Medicine 63 (2015), 171–179.
[24] L. Kong, Y. Wu and J. Ye, Misfire fault diagnosis method of gasoline engines using the cosine similarity measure of neutrosophic numbers, Neutrosophic Sets and Systems 8 (2015), 42–45.
[25] Nancy and H. Garg, An improved score function for ranking neutrosophic sets and its application to decision-making process, International Journal for Uncertainty Quantification 6(5) (2016), 377–385.
[26] J. Ye, Clustering methods using distance-based similarity measures of single-valued neutrosophic sets, Journal of Intelligent Systems 23(4) (2014), 379–389.
[27] Nancy and H. Garg, Novel single-valued neutrosophic decision making operators under Frank norm operations and its application, International Journal for Uncertainty Quantification 6(4) (2016), 361–375.
[28] J. Ye and J. Fu, Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function, Computer Methods and Programs in Biomedicine 123 (2016), 142–149.
[29] H. Garg and Nancy, New logarithmic operational laws and their applications to multiattribute decision making for single-valued neutrosophic numbers, Cognitive Systems Research 52 (2018), 931–946.
[30] R. Chatterjee, P. Majumdar and S. Samanta, On some similarity measures and entropy on quadripartitioned single valued neutrosophic sets, Journal of Intelligent and Fuzzy Systems 30(4) (2016), 2475–2485.
[31] H. Garg and Nancy, Some hybrid weighted aggregation operators under neutrosophic set environment and their applications to multicriteria decision-making, Applied Intelligence 48(12) (2018), 4871–4888.
[32] H. Garg and Nancy, Linguistic single-valued neutrosophic prioritized aggregation operators and their applications to multiple-attribute group decision-making, Journal of Ambient Intelligence and Humanized Computing 9(6) (2018), 1975–1997.
[33] H. Garg and Nancy, Multi-criteria decision-making method based on prioritized Muirhead mean aggregation operator under neutrosophic set environment, Symmetry 10(7) (2018), 280. doi:10.3390/sym10070280
[34] J. Ye, An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers, Journal of Intelligent & Fuzzy Systems 28(1) (2015), 247–255.
[35] H. Garg and Nancy, Non-linear programming method for multicriteria decision making problems under interval neutrosophic set environment, Applied Intelligence 48(8) (2018), 2199–2213.
[36] P. Biswas, S. Pramanik and B.C. Giri, TOPSIS method for multiattribute group decision-making under single-valued neutrosophic environment, Neural Computing and Applications 27(3) (2016), 727–737.
[37] X. Wang and E. Triantaphyllou, Ranking irregularities when evaluating alternatives by using some ELECTRE methods, Omega – International Journal of Management Science 36 (2008), 45–63.
[38] H. Garg, New exponential operational laws and their aggregation operators for interval-valued Pythagorean fuzzy multicriteria decision-making, International Journal of Intelligent Systems 33(3) (2018), 653–683.
[39] H. Garg, Some methods for strategic decision-making problems with immediate probabilities in Pythagorean fuzzy environment, International Journal of Intelligent Systems 33(4) (2018), 687–712.
[40] H. Garg, New logarithmic operational laws and their aggregation operators for Pythagorean fuzzy set and their applications, International Journal of Intelligent Systems 34(1) (2019), 82–106.
[41] W. Zhou and J.-M. He, Intuitionistic fuzzy geometric Bonferroni means and their application in multicriteria decision making, International Journal of Intelligent Systems 27(12) (2012), 995–1019.
[42] W. Zhou, Z. Xu and M. Chen, Preference relations based on hesitant-intuitionistic fuzzy information and their application in group decision making, Computers & Industrial Engineering 87 (2015), 163–175.
[43] W. Zhou and Z. Xu, Extended intuitionistic fuzzy sets based on the hesitant fuzzy membership and their application in decision making with risk preference, International Journal of Intelligent Systems 33(2) (2018), 417–443.