
International Journal of Approximate Reasoning 115 (2019) 221–234


Towards quantification of incompleteness in the pairwise comparisons methods

Konrad Kułakowski a,∗, Jacek Szybowski b, Anna Prusak c

a AGH University of Science and Technology, the Department of Applied Computer Science, Poland
b AGH University of Science and Technology, the Faculty of Applied Mathematics, Poland
c Cracow University of Economics, the Department of Process Management, Poland

Article info

Article history:
Received 6 March 2019
Received in revised form 3 October 2019
Accepted 4 October 2019
Available online 9 October 2019

Keywords:
Decision analysis
Pairwise comparisons
Incompleteness
Data quality
AHP

Abstract

Apart from consistency, the completeness of information is one of the key factors influencing data quality. In the case of the pairwise comparisons (PC) method, much space in the literature is devoted to the quantitative analysis of the former, while the latter has not been properly studied. The presented article is an attempt to bridge this gap. Its aim is to examine how the incompleteness of a set of paired comparisons influences the sensitivity of the PC method. During the research, two important factors related to the incompleteness of PC matrices have been identified, namely the number of missing pairwise comparisons and their arrangement. Accordingly, an easy-to-calculate incompleteness index has been developed. It takes into account both the total number of missing data and their distribution in the PC matrix. The properties of this index have been examined in a series of Monte Carlo experiments, which demonstrated that the incompleteness and inconsistency of data contribute almost equally to the sensitivity of the PC matrix. The relative simplicity of the proposed index may help decision makers to quickly estimate the impact of missing comparisons on the quality of final results.
© 2019 Elsevier Inc. All rights reserved.

1. Introduction

The pairwise comparisons method is referred to as a process of comparing objects in pairs to judge which of them is preferred [30]. The first evidence of pairwise judgments comes from the 13th-century philosopher Ramon Llull [7]. As the basis of electoral systems, pairwise comparisons were of interest to Condorcet, Dodgson, Copeland, and others [34,20,12]. They were also used in psychology and psychometrics [37].
One widely known application of the PC method is in the Analytic Hierarchy Process (AHP) and the Analytic Network Process (ANP), the multi-criteria decision support techniques developed in the 1970s by the American mathematician T.L. Saaty [36]. Besides the AHP/ANP methods, other multi-criteria decision techniques based on comparisons of alternatives include ELECTRE, PROMETHEE, MACBETH, and BWM [14,33].
Despite its long history, the PC method (especially in relation to the AHP/ANP and pairwise comparison matrices) is among the prevalent topics in recent studies, exploring problems such as inconsistency [5], rank reversal [29,38] and

* Corresponding author.
E-mail addresses: konrad.kulakowski@agh.edu.pl (K. Kułakowski), szybowsk@agh.edu.pl (J. Szybowski), anna.prusak@uek.krakow.pl (A. Prusak).

https://doi.org/10.1016/j.ijar.2019.10.002
0888-613X/ 2019 Elsevier Inc. All rights reserved.

Fig. 1. From alternatives to ranking - the pairwise comparisons approach.

incomplete judgments [30]. The PC method is also very often used in practice; examples include the works [40,28,27,6]. A more comprehensive review of applications of the PC method can be found in [17].
Many scientific articles deal with inconsistency in the pairwise comparisons method, and many methods for measuring inconsistency have been proposed and thoroughly investigated. The following works may act as a guide to this rich literature [21,2,3,24,31]. Surprisingly, the same does not apply to incompleteness. Although some researchers have proposed methods for calculating the ranking for incomplete paired comparisons, the influence of incompleteness on the final result has not been sufficiently studied. Notable exceptions are Harker [16] and Carmone Jr. et al. [4]. These works, however, do not provide us with methods to measure incompleteness. Therefore, the purpose of this research is twofold: to study the impact of data incompleteness on the correctness of the PC based ranking, and to construct a method for measuring the level of incompleteness.
During the work, we have identified two important factors related to incompleteness that affect the quality of data: the number of missing pairwise comparisons and the arrangement of the missing comparisons. For this reason, we propose an incompleteness index depending on both the total number of missing data and their distribution in the pairwise comparisons matrix. The performed Monte Carlo experiments suggest its usefulness as a fast and simple test of data quality.
The presented paper is composed of 6 sections, including an introduction (Section 1) and a summary (Section 6). Section 2 outlines the theory of the pairwise comparisons method and PC matrices, explaining phenomena such as incompleteness, inconsistency, and sensitivity. Section 3 presents the incompleteness index, allowing users to estimate the extent to which a given PC matrix-based ranking may be at risk due to the incompleteness of data. In Section 4, numerical experiments are presented, illustrating relationships between incompleteness, inconsistency, and sensitivity. Section 5 discusses the results and the limitations of the proposed index.

2. Preliminaries

2.1. Pairwise comparisons

The pairwise comparisons method is very often used as a way to allow experts to create a ranking based on a series
of individual comparisons. The subjects of comparisons are alternatives. Beginning the ranking procedure, experts compare
alternatives in pairs. Then the results of individual comparisons are used as an input to the appropriate mathematical
procedure, which allows the final numerical ranking to be computed (Fig. 1).
Let A = {a_1, . . . , a_n} be a finite set of alternatives representing options from which a decision maker can choose. Similarly, let C = {c_ij ∈ R_+ : i, j = 1, . . . , n} be a set of expert judgments about each pair (a_i, a_j) ∈ A × A, so that c_ij is the result of comparing a_i against a_j. Assigning a certain real value v ∈ R_+ represents the expert's opinion that the alternative a_i is v times more important than a_j. It is convenient to represent the set of comparisons in the form of a matrix C = (c_ij), hereinafter referred to as the PC (pairwise comparisons) matrix. Since a comparison of a given alternative to itself does not indicate the advantage of any of the two alternatives being compared, the diagonal of C is composed of ones. Similarly, in most cases, it is assumed that if a_i is v times more important than a_j, then a_j is also v times less important than a_i. The latter observation leads to the equality c_ij = 1/c_ji. A matrix C = (c_ij) is said to be reciprocal if for all i, j = 1, . . . , n it holds that c_ij = 1/c_ji.
The pairwise comparisons method aims to transform the set of paired comparisons (i.e., the PC matrix) into a ranking
vector (Fig. 1). Let us define the function that assigns the weight (also called the importance or the priority) to every single
alternative. Every PC matrix can also be naturally presented in the form of a graph [24].

Definition 1. Let G_C = (V, E) be a labeled, undirected graph with the set of vertices V = {a_1, . . . , a_n} and the set of edges E ⊆ 2^V such that every e ∈ E contains exactly two different elements, and if e = {a_i, a_j} then c_ij is defined. G_C is said to be induced by the matrix C.

In such a graph, the vertices correspond to alternatives and edges correspond to the comparisons among the alternatives.

Definition 2. Let the degree of a_i be denoted by deg(a_i) and be given as

deg(a_i) = |{ a_j : {a_i, a_j} ∈ E }|.
It is easy to observe that the degree of the vertex a_i is equal to the number of comparisons of alternative a_i with the others. Following [23], let us define the ranking function.

Definition 3. The ranking function for A is the function w : A → R+ that assigns a positive real number to every alternative
a ∈ A.

The role of the ranking function is to determine the value of w for every alternative. We often write the list of all values w(a_1), . . . , w(a_n) in the form of a transposed vector w:

w = [w(a_1), . . . , w(a_n)]^T.    (1)


Very often, w is referred to interchangeably as a priority or weight vector. There are several methods of transforming paired comparisons into the ranking. According to the most popular one, referred to in the literature as the eigenvalue method (EVM), the ranking is formed as the appropriately rescaled principal eigenvector [35]. Thus, to calculate w in EVM, one has to solve the equation

C w_max = λ_max w_max,    (2)

where λ_max is the spectral radius (principal eigenvalue) of C, and then rescale w_max so that all its entries sum up to 1:

w = [s · w_max(a_1), . . . , s · w_max(a_n)]^T,  where  s = ( Σ_{i=1}^{n} w_max(a_i) )^{-1}.

There are a dozen other weighting methods for PC matrices [18,39,41,22,10]. Among them, the geometric mean method
(GMM) deserves particular attention. According to GMM, the priority of the i-th alternative is formed as the appropriately
rescaled geometric mean of the i-th row of the matrix C . Due to its relative simplicity and theoretical properties, it has
recently gained many supporters.

Example 1. Consider a pairwise comparison matrix

C = [ 1     1      2     0.5
      1     1      0.25  8
      0.5   4      1     1
      2     0.125  1     1 ].

Its principal eigenvalue equals λ_max ≈ 5.8875 and its principal eigenvector is given by

w_max = [1.32571, 2.0096, 1.9849, 1]^T.

The sum of its coordinates equals 6.32021, so, after normalization, we obtain the priority vector [0.20976, 0.31796, 0.31406, 0.15822]^T. This determines the order of alternatives: a_2, a_3, a_1, a_4. Notice that, according to EVM, alternative a_2 is slightly better than a_3. However, since the geometric means of the second and third rows of C are equal, GMM assigns the same weights to both alternatives.
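For readers who want to reproduce these numbers, the short sketch below (an illustration written for this text, not code from the paper; all names are our own) computes the EVM and GMM priorities with NumPy for the matrix of Example 1.

```python
# Minimal sketch of the two prioritization methods discussed above (EVM and GMM).
import numpy as np

def evm_priorities(C):
    """EVM: principal eigenvector of C, rescaled so that its entries sum up to 1."""
    eigenvalues, eigenvectors = np.linalg.eig(C)
    k = np.argmax(eigenvalues.real)          # index of the principal eigenvalue
    w = np.abs(eigenvectors[:, k].real)      # principal eigenvector, made positive
    return w / w.sum()

def gmm_priorities(C):
    """GMM: rescaled geometric means of the rows of C."""
    g = np.prod(C, axis=1) ** (1.0 / C.shape[0])
    return g / g.sum()

C = np.array([[1.0, 1.0,   2.0,  0.5],
              [1.0, 1.0,   0.25, 8.0],
              [0.5, 4.0,   1.0,  1.0],
              [2.0, 0.125, 1.0,  1.0]])

print(evm_priorities(C))   # approx. [0.2098, 0.3180, 0.3141, 0.1582]
print(gmm_priorities(C))   # rows 2 and 3 receive equal weights under GMM
```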

2.2. Incompleteness

The priority deriving methods mentioned in the previous section assume that the set of paired comparisons is complete, i.e., every entry c_ij of C is known and available. In practice, this condition is not always met, for many reasons. After taking reciprocity into account, the number of all possible comparisons for n alternatives is n(n−1)/2. Thus, when the number of alternatives is large, comparing all of them in pairs requires considerable effort. This is not always possible, for example, because of the limited and expensive working time of experts. Harker [15] also points out that an expert, when faced with a comparison between two alternatives a_i and a_j, sometimes would rather not compare them directly. This may happen when, e.g., they do not yet have a good understanding of their preferences for this particular pair of alternatives. Sometimes experts evade answering, especially when taking a position on the given comparison is morally or ethically tricky, e.g., comparing mortality risk vs. cost. Finally, some data may be lost or damaged.

Fig. 2. The graph of C .

In response to the above problems, methods of calculating the ranking based on an incomplete set of pairwise comparisons arose. Probably the most popular (and the first) one is the Harker method [15]. According to this method, based on the matrix C, a new auxiliary matrix B = (b_ij) is created, where

b_ij = c_ij, if c_ij exists and i ≠ j,
b_ij = 0, if c_ij does not exist and i ≠ j,
b_ij = b_ii, if i = j,

and b_ii is the number of unanswered questions (missing comparisons) in the i-th row of C. Harker has shown that the non-negative quasi-reciprocal matrix B + Id can be used for calculating the priority ranking as a replacement for the original PC matrix. The natural limitation of the Harker method is that in C there must be a chain of comparisons between every two alternatives a_i and a_j, i.e., comparisons c_{i,k_1}, c_{k_1,k_2}, . . . , c_{k_q,j} must exist. In other words, every two alternatives must be comparable at least indirectly.
Every matrix C for which the above condition holds is irreducible, and the corresponding graph G = (V, E), in which the set of vertices V = {v_1, . . . , v_n} corresponds to the set of alternatives a_1, . . . , a_n, and the set of edges E corresponds to the PC matrix C = [c_ij] so that the edge {v_i, v_j} belongs to E if c_ij is known and defined, is strongly connected [32]. Let us consider the following example.

Example 2. Let C be the incomplete PC matrix

C = [ 1    3    ?
      1/3  1    3
      ?    1/3  1 ],

hence, Harker's auxiliary matrix is

B + Id = [ 1    3    0 ]   [ 1  0  0 ]   [ 2    3    0 ]
         [ 1/3  0    3 ] + [ 0  1  0 ] = [ 1/3  1    3 ].
         [ 0    1/3  1 ]   [ 0  0  1 ]   [ 0    1/3  2 ]

Thus, the rescaled ranking vector obtained by EVM is

w = [0.692, 0.23, 0.0769]^T,

which means that the priority of the first alternative is w(a_1) = 0.692, and of the second and third: w(a_2) = 0.23 and w(a_3) = 0.0769, correspondingly.
The corresponding graph is shown in Fig. 2.
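A minimal sketch of the Harker completion described above (our illustration, assuming missing comparisons are encoded as NaN; the function name is ours) reproduces the ranking vector of Example 2.

```python
# Harker's method: replace missing entries with 0, put the number of missing
# comparisons of each row on the diagonal (plus 1 from the identity matrix),
# and apply EVM to the resulting matrix B + Id.
import numpy as np

def harker_priorities(C):
    """C contains np.nan in place of missing comparisons."""
    n = C.shape[0]
    B = np.where(np.isnan(C), 0.0, C)
    for i in range(n):
        B[i, i] = np.isnan(C[i]).sum() + 1.0   # b_ii + 1
    eigenvalues, eigenvectors = np.linalg.eig(B)
    w = np.abs(eigenvectors[:, np.argmax(eigenvalues.real)].real)
    return w / w.sum()

nan = np.nan
C = np.array([[1.0, 3.0, nan],
              [1/3, 1.0, 3.0],
              [nan, 1/3, 1.0]])

print(harker_priorities(C))   # approx. [0.692, 0.231, 0.077], as in Example 2
```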

2.3. Inconsistency

As c_ik represents the result of the comparison between the i-th and the k-th alternative, and c_kj expresses the outcome of the comparison between the k-th and the j-th alternative, it is natural to expect that c_ij = c_ik c_kj. However, the entries of the PC matrix C represent the subjective opinions of experts and, due to human imperfection, it is possible that c_ij ≠ c_ik c_kj. Whenever this happens, we will call such a situation an inconsistency. If the difference between c_ij and c_ik c_kj is small or happens rarely, it probably will not have much impact on the final result. However, if the difference is large and occurs relatively often, then the results of the pairwise comparisons may be considered unreliable and, therefore, the ranking results may not be trustworthy. This observation leads to a question about the degree of inconsistency of the PC matrix C.
A popular way of determining the level of inconsistency in a set of pairwise comparisons is the use of inconsistency indices.
Probably the best-known index is the one proposed by Saaty in 1977 [35]. It is defined as:

CI = (λ_max − n) / (n − 1),

where λ_max is the principal eigenvalue of C, and n is the number of alternatives. It has been proven that CI reaches 0 when the PC matrix C is fully consistent, and it gets higher values when C is more inconsistent [35]. Since then, many different inconsistency indices have been created. A comprehensive overview of inconsistency indices can be found in [3,2,26,31].
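As a small worked illustration (reusing the matrix of Example 1, which is strongly inconsistent), Saaty's index can be computed directly from the principal eigenvalue; the helper name below is our own.

```python
# CI = (lambda_max - n) / (n - 1) for a complete PC matrix.
import numpy as np

def saaty_ci(C):
    n = C.shape[0]
    lambda_max = np.max(np.linalg.eigvals(C).real)   # principal eigenvalue
    return (lambda_max - n) / (n - 1)

C = np.array([[1.0, 1.0,   2.0,  0.5],
              [1.0, 1.0,   0.25, 8.0],
              [0.5, 4.0,   1.0,  1.0],
              [2.0, 0.125, 1.0,  1.0]])
print(saaty_ci(C))   # approx. (5.8875 - 4) / 3 ≈ 0.63
```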

2.4. Sensitivity

Another factor that affects the credibility of the ranking is the sensitivity of the result. By sensitivity we mean the extent to which perturbations of the input data can change the final result. If a small disturbance can significantly modify the result, then the ranking is unstable and, therefore, not credible (we cannot be sure that the final result is not accidental). Conversely, if reasonably small changes in the input data do not cause noticeable modifications of the result, then we can trust that the result obtained is a consequence of the decision-making data deliberately introduced into the system. Simply put, it can be assumed that sensitivity can be used to determine data quality. The problem is, however, that sensitivity is very hard to measure. What does "a small change in input" mean? What kind of change, and how can it be quantified? What do "noticeable modifications of the result" mean? A person who wants to deal with sensitivity analysis must answer all these questions. For the purpose of this article, we have assumed that the inconsistency index determines the input data perturbations. To measure the extent to which the results have been modified, we use two methods: the Manhattan distance¹ and the rescaled Kendall tau distance.
The Manhattan distance between two priority vectors w and u is defined as follows:

M_d(w, u) = Σ_{i=1}^{n} |w(a_i) − u(a_i)|.    (3)

This metric provides us with information about the total difference between two sets of priorities assigned to the same alternatives. As all the entries of the priority vectors sum up to 1, the result satisfies M_d(w, u) ≤ 2.
Very often, the ranking results are interpreted only qualitatively. This means that the decision makers are interested in who the winner is, and who is in second and third place, but not in what the numerical priorities of the alternatives are. Let O : R_+^n → {1, . . . , n}^n be the mapping assigning to every ranking vector w its ordinal counterpart in such a way that the i-th element of O(w) indicates the position of the i-th alternative in the ranking (1). For example, if

w = [0.3, 0.5, 0.2]^T

then its ordinal vector is

O(w) = [2, 1, 3]^T.    (4)


Qualitative interpretation of ranking vectors leads to a question about the extent to which O(w) and O(u) differ from each other. The answer can be the Kendall tau rank distance, which counts the number of pairwise disagreements between two ranking lists [19,11]. Let us define the Kendall tau distance formally:

K_d(p, q) = |{ (i, j) : i < j and sign(p(a_i) − p(a_j)) ≠ sign(q(a_i) − q(a_j)) }|,

where p, q are ordinal vectors. Since the maximal value of K_d(p, q) for two n-element vectors is n(n − 1)/2, it is convenient to use the rescaled Kendall tau distance, i.e.

K_rd(p, q) = 2 K_d(p, q) / (n(n − 1)),

so that 0 ≤ K_rd(p, q) ≤ 1. The rescaled Kendall tau distance is the second method used in the article for measuring discordance between ranking results. Since the vectors produced by EVM, GMM or the Harker method are not ordinal, before applying K_rd they have to be transformed to their ordinal counterparts using the mapping O.
Sometimes, the Kendall tau distance is called the bubble sort distance. The reason is that, when there are no ties, its value represents the number of swaps performed by the bubble sort algorithm [8] when transforming the first list into the second one.

¹ In [16] Harker used the Chebyshev distance ‖·‖∞ for this purpose.

Example 3. Let us consider two ordinal vectors p = [1, 2, 4, 3]^T and q = [3, 4, 1, 2]^T. It is easy to observe that K_d(p, q) = 5, as the discordant pairs of indices are: (1, 3), (1, 4), (2, 3), (2, 4), (3, 4). Indeed, there are five binary swaps needed to transform p into q. They are:

1. p = [1, 2, 4, 3]^T → [1, 2, 3, 4]^T,
2. [1, 2, 3, 4]^T → [1, 3, 2, 4]^T,
3. [1, 3, 2, 4]^T → [3, 1, 2, 4]^T,
4. [3, 1, 2, 4]^T → [3, 1, 4, 2]^T,
5. [3, 1, 4, 2]^T → [3, 4, 1, 2]^T = q.

Example 4. Assuming n = 4, the rescaled value is K_rd(p, q) = 2 · 5/(4 · 3) = 5/6.
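The three measures used in this subsection can be sketched as follows (an illustration; the helper names are our own). The last lines reproduce the ordinal vector (4) and the values of Examples 3 and 4.

```python
# Manhattan distance, the ordinal mapping O, and the (rescaled) Kendall tau distance.
import numpy as np

def manhattan(w, u):
    return np.sum(np.abs(np.asarray(w) - np.asarray(u)))

def ordinal(w):
    """O(w): position of every alternative in the ranking (1 = the highest priority)."""
    order = np.argsort(-np.asarray(w))            # indices sorted by decreasing priority
    positions = np.empty(len(w), dtype=int)
    positions[order] = np.arange(1, len(w) + 1)
    return positions

def kendall(p, q):
    """Number of index pairs ordered differently by the ordinal vectors p and q."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if np.sign(p[i] - p[j]) != np.sign(q[i] - q[j]))

def kendall_rescaled(p, q):
    n = len(p)
    return 2.0 * kendall(p, q) / (n * (n - 1))

print(ordinal([0.3, 0.5, 0.2]))                   # [2 1 3], as in (4)
p, q = [1, 2, 4, 3], [3, 4, 1, 2]
print(kendall(p, q), kendall_rescaled(p, q))      # 5 and 5/6 ≈ 0.833
```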

3. Measures of incompleteness

3.1. Incompleteness and sensitivity

According to EVM, the priority vector satisfies equation (2). In other words, the weight of every alternative w(a_i) satisfies the equation

w(a_i) = (1/λ_max) Σ_{j=1}^{n} c_ij w(a_j).    (5)

Hence, the priority of one alternative is expressed by the weighted average of all other alternatives. The same regularity also holds in the case of GMM [25]. Equation (5) suggests that the perturbation of one single element c_ij, assuming that the other elements have not changed, should not significantly affect the value of w(a_i). However, in the case of an incomplete PC matrix, the relationships between alternatives are weakened. The priorities of individual alternatives are determined by fewer expressions of the form c_ij w(a_j) than normally. This suggests that the susceptibility to perturbations of rankings calculated based on incomplete PC matrices is higher than normal. This, of course, should translate into a usually higher sensitivity of such decision models, which means that the completeness of the matrix correlates with the sensitivity of the method: the more comparisons available, the less vulnerable the model. One may ask whether the number of missing elements alone is not enough as an index. To answer this, let us consider the following two PC matrices with three (six, when the reciprocal elements are taken into account) missing comparisons.
C1 = [ 1    c12  ?    ?    ?
       c21  1    c23  c24  c25
       ?    c32  1    c34  c35        (6)
       ?    c42  c43  1    c45
       ?    c52  c53  c54  1 ],

C2 = [ 1    c12  ?    ?    c15
       c21  1    c23  ?    c25
       ?    c32  1    c34  c35        (7)
       ?    ?    c43  1    c45
       c51  c52  c53  ?    1 ].

In the first matrix, a1 is compared only with a2. Thus, a perturbation of c12 completely changes the value w(a1). In the second matrix, a1 is compared with a2 and a5. Therefore, the same perturbation of c12 will have less impact on the priority w(a1). In Section 4 this intuition will be confirmed by a Monte Carlo experiment. The above consideration leads us to the conclusion that an incompleteness index which is to be useful in determining the sensitivity of the decision model should also take into account the arrangement of missing comparisons.

3.2. Regularity index

In graph theory, a regular graph is one whose vertices all have the same degree [9]. Breaking this rule is considered as the introduction of irregularity. The larger the deviation from the average, the higher the irregularity of the graph. One way to measure the average distance from the average value of a set of numbers is the standard deviation. This concept will allow us to construct a regularity index for an incomplete PC matrix. Let Avd(C) be the average degree of G_C = (V, E) [9, p. 5], i.e.

Avd(C) = (1/n) Σ_{i=1}^{n} deg(a_i).
Hence, let the regularity index of the PC matrix C be defined as:

IR(C) = sqrt( (1/(n − 1)) Σ_{i=1}^{n} (Avd(C) − deg(a_i))^2 ),

i.e., the (sample) standard deviation of the vertex degrees. The regularity index allows us to detect differences between matrices such as C1 and C2. In particular, IR(C1) = sqrt(6/5) = 1.095 and IR(C2) = sqrt(7/10) = 0.836. Thus, the second matrix C2 is considered by IR as more regular than C1. The regularity index is insensitive to the number of comparisons in the PC matrix. For this reason, it is best used to compare two matrices with the same number of comparisons, differing only in their distribution.
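A minimal numerical check of the regularity index (our sketch, assuming the 1/(n − 1) normalization given above, i.e. the sample standard deviation of the vertex degrees):

```python
# Regularity index: sample standard deviation of the vertex degrees of G_C.
import numpy as np

def regularity_index(degrees):
    d = np.asarray(degrees, dtype=float)
    avd = d.mean()                                   # average degree Avd(C)
    return np.sqrt(np.sum((avd - d) ** 2) / (len(d) - 1))

deg_C1 = [1, 4, 3, 3, 3]     # degrees in C1: a1 is compared only with a2
deg_C2 = [2, 3, 3, 2, 4]     # degrees in C2: missing comparisons spread more evenly
print(regularity_index(deg_C1))   # approx. 1.095
print(regularity_index(deg_C2))   # approx. 0.837
```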

3.3. Incompleteness index

In an n × n PC matrix, a single alternative can be compared with, at most, n − 1 other alternatives. Therefore, the maximal value of deg(a_i) for i = 1, . . . , n is n − 1 (see Definition 2). Similarly, the number of missing comparisons for a_i is given by n − 1 − deg(a_i). Because the desired behavior is that the newly constructed index should be higher for C1 than for C2, a higher value of the expression e_i = n − 1 − deg(a_i) for some particular i should contribute more to the value of the index than two or more smaller expressions such that e_{q_1} + . . . + e_{q_r} = e_i. To achieve this, let us square the expression, i.e. consider (n − 1 − deg(a_i))^2. Thus, the expression

S(C) = Σ_{i=1}^{n} (n − 1 − deg(a_i))^2,    (8)

where C is a PC matrix, combines two features. It rises when the number of missing comparisons increases and, provided that there are two matrices of the same size and with the same number of missing comparisons, it is higher for the matrix that has larger irregularities in the distribution of missing values. Let us compute the mean of the squared numbers of missing comparisons. As a result, we get the formula

(1/n) S(C),    (9)

which preserves both important features, and its value is bounded and varies within the range [0, (n − 1)^2]. Hence, in order to get the final form of the index, let us divide (9) by (n − 1)^2, i.e.

II(C) = ( (1/n) S(C) ) / (n − 1)^2.    (10)

It is clear that 0 ≤ II(C) ≤ 1. When the PC matrix is fully incomplete, i.e. there are no comparisons between alternatives, II(C) is 1. Conversely, if C is complete, i.e. all the comparisons are defined, II(C) equals 0. Provided that the PC matrix is reciprocal, every alternative has to be compared with at least one other alternative for a ranking to exist. The maximal value of II(C) that allows the ranking to be created is reached when just one alternative is compared with all the others. It is then given by ((n − 1)/n) · ((n − 2)/(n − 1))^2. The condition

II(C) ≤ ((n − 1)/n) · ((n − 2)/(n − 1))^2

is necessary, but not sufficient. Hence, there may be PC matrices for which II is smaller than ((n − 1)/n) · ((n − 2)/(n − 1))^2 but, in spite of this, one cannot create the ranking.

Example 5. Consider the matrices C1 and C2 given by (6) and (7). Let us calculate their incompleteness indices:

II(C1) = ( (1/5) Σ_{i=1}^{5} (4 − deg(a_i))^2 ) / 16 = (9 + 0 + 1 + 1 + 1)/80 = 0.15,

II(C2) = ( (1/5) Σ_{i=1}^{5} (4 − deg(a_i))^2 ) / 16 = (4 + 1 + 1 + 4 + 0)/80 = 0.125.

As we can see, the index of the first matrix is greater than the index of the second one, which reflects the fact that the distribution of the missing items in the rows of C2 is more even than in C1. However, both indices are quite small, as both matrices lack only 6 entries (out of 20 off-diagonal ones).
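The incompleteness index reduces to counting the missing comparisons in every row. The sketch below (our illustration; names are assumptions) reproduces the values of Example 5 from the patterns of (6) and (7).

```python
# Incompleteness index II(C), formula (10), computed from a boolean "known" pattern.
import numpy as np

def incompleteness_index(known):
    """known: boolean n x n matrix, known[i, j] == True iff c_ij is available."""
    n = known.shape[0]
    deg = known.sum(axis=1) - 1                 # comparisons of a_i with the others
    s = np.sum((n - 1 - deg) ** 2)              # S(C), formula (8)
    return s / (n * (n - 1) ** 2)               # formula (10)

def known_from_missing(n, missing_pairs):
    known = np.ones((n, n), dtype=bool)
    for i, j in missing_pairs:
        known[i, j] = known[j, i] = False
    return known

C1 = known_from_missing(5, [(0, 2), (0, 3), (0, 4)])   # c13, c14, c15 missing
C2 = known_from_missing(5, [(0, 2), (0, 3), (1, 3)])   # c13, c14, c24 missing
print(incompleteness_index(C1))   # 0.15
print(incompleteness_index(C2))   # 0.125
```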

3.4. Properties of incompleteness index

Although the regularity index does not take into account the number of pairwise comparisons in the PC matrix, the incompleteness index does. However, it seems interesting to consider the relationship between one and the other. In particular, we may expect that if one matrix is considered more irregular by the regularity index than the other, then it is also considered more incomplete by the incompleteness index. Indeed, such a relationship holds.

Theorem 1. For two incomplete PC matrices C1 and C2 with the same number of comparisons it holds that

IR(C1) > IR(C2) ⇔ II(C1) > II(C2).

Proof. Let A = {a_1, . . . , a_n} be the set of alternatives. Obviously, the condition IR(C1) > IR(C2) is equivalent to IR^2(C1) > IR^2(C2) which, as both matrices have the same dimension, is equivalent to

Σ_{i=1}^{n} (Avd(C1) − deg_1(a_i))^2 > Σ_{i=1}^{n} (Avd(C2) − deg_2(a_i))^2,    (11)

where deg_j(a_i) denotes the degree of a_i in the graph induced by C_j. Since in every undirected graph the degrees of all vertices sum up to twice the number of edges, for every n × n incomplete PC matrix C it holds that

Avd(C) = 2k/n,

where k is the number of comparisons in C (excluding the diagonal). As C1 and C2 have the same number of comparisons, we have Avd(C1) = Avd(C2) = 2k/n. Thus, inequality (11) can be written as

Σ_{i=1}^{n} (q − deg_1(a_i))^2 > Σ_{i=1}^{n} (q − deg_2(a_i))^2,

where q = 2k/n. By transforming the above expression we get the following equivalent inequalities:

Σ_{i=1}^{n} (q − deg_1(a_i))^2 − Σ_{i=1}^{n} (q − deg_2(a_i))^2 > 0,

Σ_{i=1}^{n} ( deg_1^2(a_i) − 2q·deg_1(a_i) − deg_2^2(a_i) + 2q·deg_2(a_i) ) > 0,

Σ_{i=1}^{n} ( deg_1^2(a_i) − deg_2^2(a_i) ) − 2q Σ_{i=1}^{n} ( deg_1(a_i) − deg_2(a_i) ) > 0,

and

Σ_{i=1}^{n} ( deg_1^2(a_i) − deg_2^2(a_i) ) − 2q ( Σ_{i=1}^{n} deg_1(a_i) − Σ_{i=1}^{n} deg_2(a_i) ) > 0.

As Σ_i deg_1(a_i) = Σ_i deg_2(a_i) = 2k, the above inequality boils down to

Σ_{i=1}^{n} ( deg_1^2(a_i) − deg_2^2(a_i) ) − 2q · 0 > 0.

Obviously, the value of q does not affect the truth of the above expression. Hence, let q = n − 1. In particular, starting from

Σ_{i=1}^{n} ( deg_1^2(a_i) − deg_2^2(a_i) ) − 2(n − 1) · 0 > 0,

and repeating the above reasoning in the opposite direction, we get

Σ_{i=1}^{n} (n − 1 − deg_1(a_i))^2 > Σ_{i=1}^{n} (n − 1 − deg_2(a_i))^2,

which means (see (8)) that

S(C1) > S(C2).

Due to the definition of the incompleteness index (10) we get

II(C1) > II(C2).

As all the above transformations are equivalences, the reverse implication also holds. ❑

The natural question that arises is what happens to the incompleteness index II if we add one more comparison to C without changing the arrangement of its other elements. Of course, the completed matrix has a lower value of the incompleteness index.

Theorem 2. For two incomplete PC matrices C1 and C2, where C2 was obtained from C1 by adding a single comparison, it holds that

II(C1) > II(C2).

Proof. To prove the above property it is enough to observe that by adding one comparison to C1 we increase the degrees of some two vertices a_p and a_r. Hence, providing that

II(C1) = (1/(n(n − 1)^2)) Σ_{i=1}^{n} (n − 1 − deg_1(a_i))^2,

the incompleteness index II(C2) takes the form

II(C2) = (1/(n(n − 1)^2)) [ Σ_{i≠p,r} (n − 1 − deg_1(a_i))^2 + Σ_{i=p,r} (n − 2 − deg_1(a_i))^2 ].

As n − 2 − deg_1(a_i) ≥ 0 for i = p, r, then II(C1) > II(C2). ❑
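A quick numerical sanity check of Theorem 2 (a hypothetical example built on the C1 pattern from (6), chosen only for illustration): adding back a single comparison lowers the index.

```python
import numpy as np

def ii(known):
    n = known.shape[0]
    deg = known.sum(axis=1) - 1
    return np.sum((n - 1 - deg) ** 2) / (n * (n - 1) ** 2)

known = np.ones((5, 5), dtype=bool)
for i, j in [(0, 2), (0, 3), (0, 4)]:           # the C1 pattern from (6)
    known[i, j] = known[j, i] = False

before = ii(known)
known[0, 2] = known[2, 0] = True                # add the comparison c13 back
after = ii(known)
print(before, after, before > after)            # 0.15 0.075 True
```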

4. Properties of incompleteness index - a numerical study

4.1. Relationship between incompleteness, inconsistency and sensitivity

An entirely consistent matrix is resistant to reductions in the set of paired comparisons. That is because it is sufficient to compare one alternative with another, already ranked, one to precisely determine the ranking of the former. Hence, as long as it is possible to compute the ranking, i.e., the PC matrix is irreducible, the calculated ranking is the same regardless of which comparisons are missing. However, if a PC matrix is inconsistent, missing comparisons start to matter.
In order to investigate the impact of inconsistency and incompleteness on the sensitivity, we randomly prepare 1000 complete and consistent PC matrices C = {C_1, . . . , C_1000}. Then, every matrix from C is perturbed so that we obtain 41 sets C^1, . . . , C^41 of matrices with increasing average inconsistency CI_avg, given as

CI_avg(C^j) = (1/1000) Σ_{i=1}^{1000} CI(C_i^j),  for j = 1, . . . , 41,

where C_i^j denotes the perturbed copy of C_i belonging to C^j.

The average of the inconsistencies² of those groups starts from CI_avg(C^1) = 0.001, CI_avg(C^2) = 0.004, CI_avg(C^3) = 0.008 and, finally, reaches CI_avg(C^41) = 0.3385. Next, we extend every C^j by adding irreducible incomplete matrices³ randomly obtained from those originally located there. Let us denote the extended C^j by Ĉ^j and its elements by C_i^{j,k} ∈ Ĉ^j, where k means the number of missing comparisons and i indicates the consistent PC matrix C_i ∈ C from which C_i^{j,k} originated. For every C_i^{j,k} we compute the incompleteness index II(C_i^{j,k}) and the measures of sensitivity, i.e. the Kendall distance K_rd(w(C_i), w(C_i^{j,k})) and the Manhattan distance M_d(w(C_i), w(C_i^{j,k})). The priority vectors w(C_i) and w(C_i^{j,k}) are calculated using EVM and the Harker method, correspondingly.
In Fig. 3, we can see the relationship between the average value of sensitivity for the matrices C_i^{j,k} with a given average inconsistency CI_avg(C^j) and the average incompleteness given in the form of the index II(C_i^{j,k}). When the considered PC matrices are consistent, i.e. CI_avg(C^j) = 0, the resulting rankings do not depend on incompleteness either: the distance between the rankings obtained from consistent complete and incomplete matrices is 0. However, when inconsistency starts increasing, the impact of incompleteness becomes apparent.

² Note that up to now all considered matrices are complete, so we can examine their consistency using Saaty's consistency index CI.
³ As an irreducible n × n matrix must have at least n − 1 comparisons (we count only comparisons above the diagonal), for every inconsistent matrix C ∈ C^j we generate n(n − 1)/2 − (n − 1) = (n² − 3n + 2)/2 incomplete matrices.
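The experiment is described above only at a high level; the sketch below is therefore a hypothetical reconstruction of a single step of the pipeline. The perturbation range, the removal rule, and all names are assumptions made for illustration, not the authors' actual procedure.

```python
# One step: consistent matrix -> multiplicative perturbation -> removal of k
# comparisons (keeping the chain c_12, c_23, ... so that the matrix stays
# irreducible) -> Harker/EVM ranking -> Manhattan distance to the original ranking.
import numpy as np

rng = np.random.default_rng(0)

def harker_priorities(C):
    n = C.shape[0]
    B = np.where(np.isnan(C), 0.0, C)
    for i in range(n):
        B[i, i] = np.isnan(C[i]).sum() + 1.0
    vals, vecs = np.linalg.eig(B)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

n, k = 9, 10                                   # matrix size, number of removed comparisons
w = rng.uniform(1.0, 10.0, n)
C = np.outer(w, 1.0 / w)                       # complete, fully consistent matrix

P = C.copy()                                   # perturbed (inconsistent) copy
for i in range(n):
    for j in range(i + 1, n):
        P[i, j] = C[i, j] * rng.uniform(0.5, 2.0)
        P[j, i] = 1.0 / P[i, j]

removable = [(i, j) for i in range(n) for j in range(i + 2, n)]   # spare the superdiagonal
for idx in rng.choice(len(removable), size=k, replace=False):
    i, j = removable[idx]
    P[i, j] = P[j, i] = np.nan                 # drop the comparison and its reciprocal

md = np.sum(np.abs(harker_priorities(P) - w / w.sum()))
print(md)                                      # Manhattan distance between the two rankings
```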

Fig. 3. Relationship between the average consistency level CIavg , incompleteness index II and sensitivity given as the average Manhattan distance Md
between rankings obtained from consistent and inconsistent (and incomplete) 9 × 9 matrices.

Fig. 4. Relationship between the average consistency level CIavg , incompleteness index II and sensitivity given as the average Kendall distance Krd between
rankings obtained from consistent and inconsistent (and incomplete) 9 × 9 matrices.

An increase in both inconsistency and incompleteness translates into an increase in the average Manhattan distance. For very small values of inconsistency (CI_avg ≈ 0.001), the Manhattan distance is about 0.01 and, following the increase of II, it takes values near 0.04. For larger values, e.g. CI_avg ≈ 0.11, the value of M_d ranges between 0.1 and 0.4, and similarly, for CI_avg ≈ 0.38, the average values of M_d are between 0.2 and 0.8. This observation indicates that highly incomplete PC matrices are almost four times more vulnerable to random perturbations than complete matrices. As the maximal possible value of the Manhattan distance for vectors whose elements add up to 1 is 2, the value M_d = 0.4 means that this index reaches 20% of its maximal value.
When the inconsistency is small (CI_avg ≈ 0.001), the average values of the Kendall distance span between 0.005 and 0.025. Then, for moderately inconsistent matrices (CI_avg ≈ 0.11), they range between 0.05 and 0.15, and for CI_avg ≈ 0.38 the values of the Kendall index go from 0.09 to 0.25.
This means that, for PC matrices with reasonably high inconsistency, we may expect that 25% or more pairs may randomly change their order. Similarly as before, incompleteness may significantly increase (from three to four times) the sensitivity of the PC method.

4.2. Regular and irregular matrices - case study

We may suppose that the more missing comparisons for a given alternative, the more vulnerable its weight and its position in the ranking. In the extreme case, if a given alternative a_i is compared to only one other alternative a_j, i.e., except for c_ij (where i ≠ j) all other entries in the i-th row and the i-th column of C are undefined, the ranking of a_i depends primarily on c_ij. Any perturbation of c_ij can translate into significant changes in the weight of the i-th alternative. In the opposite case, the missing comparisons are evenly distributed between alternatives. This ensures the relative safety of each alternative, providing, of course, that the number of missing comparisons is not too high. The above observations allow us to indicate examples of a fairly regular and an irregular PC matrix with a fixed number of missing comparisons.
Let us number the selected entries in the n × n PC matrix in such a way that, in the first row, c13 corresponds to 1, c14 to 2, and c_{1,n} has the assigned number n − 2. Similarly, in the second row c24 gets the number n − 1, c25 gets n, and the last element in the row, c_{2,n}, gets 2n − 4. Finally, the last element c_{n−2,n} gets the number (n² − 3n + 2)/2. Elements directly above the diagonal are not indexed (the above numbering scheme is shown in the form of the matrix C_w).

C_w = [ 1   c12   c^(1)    c^(2)      · · ·   c^(n−2)
            1     c23      c^(n−1)    · · ·   c^(2n−4)
                  1        c34        · · ·   ...
                           ...                c^((n²−3n+2)/2)
                                      1       c_{n−1,n}
                                              1 ]

(only entries on and above the diagonal are shown, the entries below follow from reciprocity; c^(m) denotes the entry with the assigned number m).
Then, in order to prepare a highly irregular (and possibly highly sensitive) matrix with x missing comparisons, it is enough to remove the comparisons with assigned numbers from 1 to x and their counterparts below the diagonal. For example, a highly irregular 7 × 7 PC matrix with 9 missing comparisons may look like:

C_w^(9) = [ 1    c12  ?    ?    ?    ?    ?
            c21  1    c23  ?    ?    ?    ?
            ?    c32  1    c34  c35  c36  c37
            ?    ?    c43  1    c45  c46  c47
            ?    ?    c53  c54  1    c56  c57
            ?    ?    c63  c64  c65  1    c67
            ?    ?    c73  c74  c75  c76  1 ].
For the purpose of creating matrices with a fairly even distribution of missing values, corresponding to a more regular graph G_{C_b}, we use another numbering scheme. Let us assign the number 1 to c13, 2 to c24, 3 to c35, and so on, up to n − 2 for c_{n−2,n}. The number n − 1 is assigned to c14, n to c25 and, finally, 2n − 5 to c_{n−3,n}. The last numbered element is c_{1,n}, with the index value (n² − 3n + 2)/2 (the fairly regular numbering scheme is shown as the matrix C_b):

C_b = [ 1   c12   c^(1)   c^(n−1)   · · ·   c^((n²−3n+2)/2)
            1     c23     c^(2)     · · ·   ...
                  1       c34       ...     ...
                          ...               c^(n−2)
                                    1       c_{n−1,n}
                                            1 ].

For example, the fairly regular 7 × 7 PC matrix with 9 missing comparisons is as follows:

C_b^(9) = [ 1    c12  ?    ?    c15  c16  c17
            c21  1    c23  ?    ?    c26  c27
            ?    c32  1    c34  ?    ?    c37
            ?    ?    c43  1    c45  ?    ?
            c51  ?    ?    c54  1    c56  ?
            c61  c62  ?    ?    c65  1    c67
            c71  c72  c73  ?    ?    c76  1 ].

It is easy to observe that in C_b^(9) the missing comparisons are spread fairly evenly among the alternatives (every alternative is still compared with at least two others), while in C_w^(9) alternative a1 is compared only with a2, and a2 is compared only with a1 and a3. In the worst case, perturbations of c12 and c23 may lead to significant weight changes for a1 and a2.
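The two numbering schemes can be written compactly as removal orders (a sketch for illustration; function names are our own). Taking the first x pairs of each list gives the comparisons removed in C_w^(x) and C_b^(x), respectively.

```python
# Removal orders corresponding to the C_w (row-by-row) and C_b (diagonal-by-diagonal)
# numbering schemes; indices are 1-based, as in the text.
def cw_order(n):
    """c_13, c_14, ..., c_1n, c_24, ..., c_2n, ..., c_{n-2,n}."""
    return [(i, j) for i in range(1, n - 1) for j in range(i + 2, n + 1)]

def cb_order(n):
    """c_13, c_24, ..., c_{n-2,n}, then c_14, c_25, ..., and finally c_1n."""
    return [(i, i + d) for d in range(2, n) for i in range(1, n - d + 1)]

n, x = 7, 9
print(cw_order(n)[:x])   # removed in C_w(9): rows 1 and 2 lose almost all comparisons
print(cb_order(n)[:x])   # removed in C_b(9): two full off-diagonals, spread over all rows
```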
A question arises regarding the extent to which the regular and irregular distributions of missing comparisons translate to the measured sensitivity and, of course, to the values of the incompleteness index. In order to answer these questions, we prepared 1000 random inconsistent 9 × 9 PC matrices with the average inconsistency CI ≈ 0.1, then we removed their elements according to both the regular C_b^(i) and the irregular C_w^(i) patterns, subsequently assuming i = 0, 1, 2, . . . , 28 missing elements.⁴ Then we measured the average distance between the ranking vectors obtained from C_b^(i) and C_w^(i) and those obtained from the corresponding complete matrices (with no comparisons removed), and, similarly, we computed the average value of the incompleteness index II.
In Figs. 5a and 5b we can see two plots. The lower plot in both figures represents the average sensitivity of incomplete PC matrices with the missing values distributed according to the C_b scheme. The upper plot corresponds to the average sensitivity of incomplete PC matrices with the missing values distributed according to C_w. Both plots look quite similar. They grow as the number of missing comparisons increases, but the plot corresponding to the irregular incompleteness scheme

⁴ Note that for n = 9 we get (n² − 3n + 2)/2 = 28.

Fig. 5. Impact of the distribution of missing comparisons (the lower the better), measured in the group of random PC matrices 9 × 9 with the average
inconsistency CI ≈ 0.1.

Fig. 6. Impact of the distribution of missing comparisons on the index of incompleteness, measured in the group of random 9 × 9 PC matrices.

grows faster. It is interesting to note that, starting from thirteen missing comparisons, the difference in sensitivity between matrices in the form C_b and C_w reaches almost 40%. This shows how important the distribution of missing comparisons is for sensitivity. It is also worth noting that the shape of both plots does not depend on the assumed level of CI: for CI other than 0.1, both plots (Figs. 5a and 5b) look as if they had been rescaled by an appropriately selected constant factor (greater than 1 for higher CI).
In Fig. 6, we can see the plot of II. It should be noted that, as the incompleteness index does not depend on inconsistency, these plots do not depend on CI. Although the index rises alongside the increase in the number of missing values, its growth differs from that of the sensitivity plots. For a low number of missing values (here up to 14, which is 50% of all comparisons possible to remove) the index seems to mimic the sensitivity charts (Fig. 5). However, for larger numbers of missing comparisons, the differences between the PC matrices formed according to C_b and C_w are less pronounced in the index than in the measured sensitivity.

4.3. Results of the experiments

The first experiment (Section 4.1) clearly shows that both inconsistency and incompleteness contribute almost equally to the sensitivity of a given PC matrix. It means that, when assessing the quality of the matrix, its completeness cannot be ignored. On the other hand, Figs. 3 and 4 suggest that when the number of missing elements is small, the impact of this deficiency on the final ranking is almost negligible. However, when many comparisons are missing, the ranking can be significantly changed due to incompleteness. It is worth noting that the plots for the Manhattan distance (Fig. 3) and the Kendall distance (Fig. 4) do not differ significantly. This means that the risk caused by incompleteness affects both rankings interpreted quantitatively and rankings interpreted qualitatively.
The incompleteness index aims to capture not only the number of missing comparisons but also the regularity of their distribution. Therefore, in the second experiment (Section 4.2), we considered the influence of the distribution of missing elements on the sensitivity of the PC method and on the values of the incompleteness index. It was surprising to observe how vital this distribution is. As we can see, an unfavorable distribution may almost double the value of sensitivity (Fig. 5). For small numbers of missing comparisons, the incompleteness index mimics this phenomenon. For larger quantities of missing comparisons, the difference between favorable and unfavorable distributions is smaller (Fig. 6) than in the case of sensitivity measured directly (Fig. 5).

5. Discussion

The incompleteness index takes into account the number of comparisons and the regularity of their distribution. The Monte Carlo experiments carried out show that the more comparisons in the PC matrix, the better. So, whenever we can increase their number, we should do it. Of course, very often, this is limited by the profitability or reasonableness of making one more comparison. Fig. 5 indicates that the rate of decrease in sensitivity becomes smaller as the number of comparisons increases. In other words, the fewer comparisons there are in the matrix, the more valuable and vital adding one more comparison is. At some point, however, we may recognize that there is no point in adding further comparisons, as the decrease in sensitivity is not big enough. The incompleteness index can help in estimating the speed of this drop.
The second important assumption regarding the incompleteness index is regularity. The assumption of regularity is based on the observation that, since the ranking value w(a_i) depends on comparisons with other alternatives, the more direct comparisons there are, the smaller the chance of calculating an incorrect value of w(a_i). In a situation where there are many comparisons in the matrix, this assumption works quite well. In particular, the index behavior observed in Fig. 6 seems to reflect well the actual sensitivity of the model (Fig. 5). However, when the number of comparisons decreases, the difference in sensitivity between the regular and irregular distributions of comparisons does not translate into the index values. This observation suggests that predicting the sensitivity cannot be boiled down only to analyzing the number of comparisons and the regularity of their distribution.
The second Monte Carlo experiment is a preliminary study. In particular, we do not analyze all of the possible PC matrices but only those generated following the C_b and C_w schemas. Adopting such a random matrix generation strategy, on the one hand, allowed us to limit the number of analyzed matrices (the set of all possible distributions for n alternatives contains 2^{n(n−1)/2} elements); on the other hand, it makes it impossible to draw general and far-reaching conclusions. This preliminary research indicates, however, that computing and analyzing the incompleteness index cannot replace (at least not yet) the classical sensitivity analysis. However, we believe that in many cases the value of this index may be a useful heuristic allowing the number of distribution variants considered to be limited. Thus, the incompleteness index should be treated as a kind of yardstick which allows quick detection that incompleteness may be a problem and should be addressed.
The problem of an incomplete set of pairwise comparisons and its sensitivity to disturbances goes beyond the AHP method. Hence, the research undertaken may also apply to other methods using comparisons of alternatives, such as BWM and MACBETH [33,1], and others; the incompleteness of the set of comparisons is, in fact, the very basis of the first of them. The incompleteness index may also find less apparent applications. An example of its potential use is the sequencing of pairwise comparisons, as described by Fedrizzi and Giove [13].
The great advantage of the defined index is the ease of its calculation. As it only requires counting the missing comparisons in an n × n PC matrix, it needs at most O(n²) operations. Performing sensitivity analysis is usually much more time and resource consuming. Even worse, as sensitivity analysis tries to answer questions about how changes in the input data translate to the method outcome, it is possible that incompleteness, as an actual source of problems, can be overlooked. The incompleteness index reduces this danger. Due to its simplicity, it is suitable for a quick and simple test of the completeness of paired decision data.

6. Summary

This paper has developed an incompleteness index for use with the quantitative pairwise comparisons method with incomplete sets of comparisons. The index can be used as a fast and computationally simple data quality test. It has been examined in Monte Carlo experiments. The conducted trials showed the significant impact of incompleteness, as expressed by this index, on the sensitivity of the pairwise comparisons based decision model. Although it is clear that incompleteness is just one of the factors affecting sensitivity, the defined index can help decision makers to discover the risks to sensitivity arising out of data incompleteness.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.

Acknowledgement

The research was supported by the National Science Centre, Poland, as a part of the project no. 2017/25/B/HS4/01617 and by the Polish Ministry of Science and Higher Education, AGH contract no. 16.16.420.054. Special thanks are due to Ian Corkill for his editorial help.

References

[1] C.A. Bana e Costa, J.M. De Corte, J.C. Vansnick, On the mathematical foundation of MACBETH, in: J. Figueira, S. Greco, M. Ehrgott (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer Verlag, Boston, Dordrecht, London, 2005, pp. 409–443.

[2] M. Brunelli, L. Canal, M. Fedrizzi, Inconsistency indices for pairwise comparison matrices: a numerical study, Ann. Oper. Res. 211 (February 2013) 493–509.
[3] M. Brunelli, M. Fedrizzi, Axiomatic properties of inconsistency indices for pairwise comparisons, J. Oper. Res. Soc. 66 (1) (Jan 2015) 1–15.
[4] F.J. Carmone Jr., A. Kara, S.H. Zanakis, A Monte Carlo investigation of incomplete pairwise comparison matrices in AHP, Eur. J. Oper. Res. 102 (3) (November 1997) 538–553.
[5] V. Čerňanová, W.W. Koczkodaj, J. Szybowski, Inconsistency of special cases of pairwise comparisons matrices, Int. J. Approx. Reason. 95 (2018) 36–45.
[6] Z. Chen, Z. Ning, Q. Xiong, M.S. Obaidat, A collaborative filtering recommendation-based scheme for WLANs with differentiated access service, IEEE
Syst. J. 12 (1) (March 2018) 1004–1014.
[7] J.M. Colomer, Ramon Llull: from ‘Ars electionis’ to social choice theory, Soc. Choice Welf. 40 (2) (October 2011) 317–328.
[8] T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein, Introduction to Algorithms, 3rd edition, MIT Press, 2009.
[9] Reinhard Diestel, Graph Theory, Springer Verlag, 2005.
[10] Y. Dong, Y. Xu, H. Li, M. Dai, A comparative study of the numerical scales and the prioritization methods in AHP, Eur. J. Oper. Res. 186 (1) (March 2008)
229–242.
[11] R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, E. Vee, Comparing partial rankings, SIAM J. Discrete Math. 20 (3) (2006) 628–648.
[12] P. Faliszewski, E. Hemaspaandra, L.A. Hemaspaandra, J. Rothe, Llull and Copeland voting computationally resist bribery and constructive control, J. Artif. Intell. Res. 35 (2009) 275–341.
[13] M. Fedrizzi, S. Giove, Optimal sequencing in incomplete pairwise comparisons for large-dimensional problems, Int. J. Gen. Syst. 42 (4) (February 2013)
366–375.
[14] J. Figueira, M. Ehrgott, S. Greco (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, 2016.
[15] P.T. Harker, Alternative modes of questioning in the analytic hierarchy process, Math. Model. 9 (3) (1987) 353–360.
[16] P.T. Harker, Incomplete pairwise comparisons in the analytic hierarchy process, Math. Model. 9 (11) (1987) 837–848.
[17] W. Ho, X. Ma, The state-of-the-art integrations and applications of the analytic hierarchy process, Eur. J. Oper. Res. 267 (2018) 399–414.
[18] J. Jablonsky, Analysis of selected prioritization methods in the analytic hierarchy process, J. Phys. Conf. Ser. 622 (1) (2015) 1–7.
[19] M.G. Kendall, A new measure of rank correlation, Biometrika 30 (1/2) (1938) 81.
[20] C. Klamler, A comparison of the Dodgson method and the Copeland rule, Econ. Bull. (January 2003) 1–6.
[21] W.W. Koczkodaj, R. Urban, Axiomatization of inconsistency indicators for pairwise comparisons, Int. J. Approx. Reason. 94 (March 2018) 18–29.
[22] G. Kou, C. Lin, A cosine maximization method for the priority vector derivation in AHP, Eur. J. Oper. Res. 235 (1) (May 2014) 225–232.
[23] K. Kułakowski, On the properties of the priority deriving procedure in the pairwise comparisons method, Fundam. Inform. 139 (4) (July 2015) 403–419.
[24] K. Kułakowski, Inconsistency in the ordinal pairwise comparisons method with and without ties, Eur. J. Oper. Res. 270 (1) (2018) 314–327.
[25] K. Kułakowski, A. Kedzior, Some remarks on the mean-based prioritization methods in AHP, in: Ngoc-Thanh Nguyen, Lazaros Iliadis, Yannis Manolopoulos, Bogdan Trawiński (Eds.), Lecture Notes in Computer Science, Computational Collective Intelligence: 8th International Conference, ICCCI 2016, Halkidiki, Greece, September 28-30, 2016, Proceedings, Part I, Springer International Publishing, 2016, pp. 434–443.
[26] K. Kułakowski, J. Szybowski, The new triad based inconsistency indices for pairwise comparisons, Proc. Comput. Sci. 35 (2014) 1132–1137.
[27] R. Li, L.J. Sun, H. Zhang, C. Yao, G. Luo, Research on DC voltage class series with AHP, J. Eng. 2017 (13) (2017) 1993–1998.
[28] K.K. Mohan, K. Prashanthi, R. Hull, C.D. Montemagno, Risk assessment of a multiplexed carbon nanotube network biosensor, IEEE Sens. J. 18 (11) (June
2018) 4517–4528.
[29] S. Mufazzal, S.M. Muzakkir, A new multi-criterion decision making (MCDM) method based on proximity indexed value for minimizing rank reversals,
Comput. Ind. Eng. 119 (May 2018) 427–438.
[30] D. Pan, X. Liu, J. Liu, Y. Deng, A ranking procedure by incomplete pairwise comparisons using information entropy and Dempster-Shafer evidence theory, Sci. World J. (August 2014) 1–11.
[31] J.I. Pelaez, E.A. Martinez, L.G. Vargas, Consistency in positive reciprocal matrices: an improvement in measurement methods, IEEE Access 6 (2018) 25600–25609.
[32] A. Quarteroni, R. Sacco, F. Saleri, Numerical Mathematics, Springer Verlag, 2000.
[33] J. Rezaei, Best-worst multi-criteria decision-making method, Omega 53 (C) (June 2015) 49–57.
[34] D.G. Saari, Condorcet domains: a geometric perspective, in: The Mathematics of Preference, Choice and Order, Springer Berlin Heidelberg, Berlin,
Heidelberg, 2009, pp. 161–182.
[35] T.L. Saaty, A scaling method for priorities in hierarchical structures, J. Math. Psychol. 15 (3) (1977) 234–281.
[36] T.L. Saaty, Relative measurement and its generalization in decision making. Why pairwise comparisons are central in mathematics for the measurement
of intangible factors. The analytic hierarchy/network process, Estad. Investig. Oper. (Statist. Oper. Res. (RACSAM)) 102 (November 2008) 251–318.
[37] L.L. Thurstone, A law of comparative judgment, Psychol. Rev. 101 (1994) 266–270, reprint of an original work published in 1927.
[38] Y. Wang, T.M.S. Elhag, An approach to avoiding rank reversal in AHP, Decis. Support Syst. 42 (3) (December 2006) 1474–1480.
[39] Ying-Ming Wang, C. Parkan, Y. Luo, Priority estimation in the AHP through maximization of correlation coefficient, Appl. Math. Model. 31 (12) (December 2007) 2711–2718.
[40] Q. Xu, C. Tan, Z. Fan, W. Zhu, Y. Xiao, F. Cheng, Secure multi-authority data access control scheme in cloud storage system based on attribute-based
signcryption, IEEE Access 6 (2018) 34051–34074.
[41] K.K.F. Yuen, Membership maximization prioritization methods for fuzzy analytic hierarchy process, Fuzzy Optim. Decis. Mak. 11 (2) (June 2012) 113–133.
