
A problem-based selection of multi-attribute decision-making methods
Chung-Hsing Yeh
School of Business Systems, Monash University, Clayton, Victoria, 3800, Australia
E-mail: ChungHsing.Yeh@infotech.monash.edu.au
Received 14 July 1999; received in revised form 23 April 2001; accepted 29 May 2001
Abstract
Different multi-attribute decision-making (MADM) methods often produce different outcomes for selecting or
ranking a set of decision alternatives involving multiple attributes. This paper presents a new approach to the
selection of compensatory MADM methods for a specific cardinal ranking problem via sensitivity analysis of
attribute weights. In line with the context-dependent concept of informational importance, the approach examines
the consistency degree between the relative degree of sensitivity of individual attributes using an MADM method
and the relative degree of influence of the corresponding attributes indicated by Shannon's entropy concept. The
approach favors the method that has the highest consistency degree, as it best reflects the decision information
embedded in the problem data set. An empirical study of a scholarship student selection problem is used to
illustrate how the approach can validate the ranking outcome produced by different MADM methods. The
empirical study shows that different problem data sets may result in a different method being selected. This
approach is particularly applicable to large-scale cardinal ranking problems where the ranking outcomes of
different methods differ significantly.
Keywords: multi-attribute decision-making, cardinal ranking, validation, sensitivity analysis, attribute weights, entropy
1. Introduction
Multi-attribute decision-making (MADM) has been widely used in ranking or selecting one or more
alternatives from a finite number of alternatives with respect to multiple, usually conflicting, criteria or
attributes. Tremendous efforts have been spent and significant advances have been made towards the
development of numerous MADM models for solving different types of decision problems (Hwang and
Yoon, 1981; Zeleny, 1982; Colson and de Bruyn, 1989; Dyer et al., 1992; Stewart, 1992; Olson, 1996).
Despite all these, there is no best method for the general MADM problem, and the validity of the
ranking outcome remains an open issue. In some specific decision situations, such as selecting an
alternative from a shortlist, the decision outcome produced by some MADM methods may not differ
significantly (Belton, 1986; Karni et al., 1990). However, in decision situations where cardinal ranking
of all or a subset of the alternatives is required, different methods often produce inconsistent rankings
for the same problem (Voogd, 1983; Zanakis et al., 1998). In other words, the ranking outcome is
dependent on the method used. The outcome inconsistency of MADM methods increases as the
number of alternatives to be selected or ranked increases, or when the alternatives have similar
performance (Olson et al., 1995). Selecting a valid method for reflecting the values of the decision-
maker (DM) is thus important, in particular if there are a number of MADM methods available and the
alternatives involved have similar performance. If the ranking outcome of different methods differs
significantly, the validity issue becomes crucial (Hobbs et al., 1992).
The problem of selecting an MADM method has been addressed in various decision contexts.
Most studies in the literature focus on experimental comparisons of MADM methods in order to
examine their appropriateness of use and/or theoretical validity. Zanakis et al. (1998) give a good
review of these studies. Although some results of these comparative studies are significant for the
decision problems examined, these results cannot be used as guidelines for a DM to select a proper
MADM method for an application (Ozernoy, 1992).
Because there exists a large variety of decision problems solvable by various MADM methods, the
selection of an MADM method for a given problem has been regarded as an MADM problem itself (e.g.
Hwang and Yoon, 1981; Evans, 1984; Nijkamp and Blaas, 1994). Along this line of research, method
selection procedures have been proposed based on the comparison between specific characteristics of
the decision problem and distinct features of the available methods. Most of these procedures have been
implemented in the form of decision support systems (DSS) (e.g. Minch and Sanders, 1986; Hong and
Vogel, 1991; Ozernoy, 1992; Poh, 1998), or as general selection principles (Guitouni and Martel, 1998).
Some procedures have been included as a subsystem of multi-criteria decision support systems
(MCDM-DSS), integrated with artificial intelligence techniques (Siskos and Spyridakos, 1999). These
systems sound promising in theory, but they may not be appreciated by MADM users in practice because
of the huge cost involved (Yoon and Hwang, 1995). In addition, these procedures may not always make a
clear, unequivocal choice (Guitouni and Martel, 1998), in particular between methods of the same
category or class. Due to their implicit and explicit assumptions, the applicability of the methods
selected remains uncertain (Nijkamp and Blaas, 1994). This weakness is evidenced by the fact that these
selection procedures do not normally examine the validity of the ranking outcome. To address this problem,
this paper aims at developing a simple and objective approach for selecting from among MADM
methods of the same class for a given decision problem by examining their relative degrees of validity.
Zanakis et al. (1998) conduct a comprehensive simulation comparison of eight MADM methods in
terms of performance measures of similarity. Their experiments show that the final rankings of the
alternatives vary across methods more in problems with a larger number of alternatives. This finding
highlights the importance of selecting an appropriate method for MADM problems of large size.
However, as in other studies, the appropriateness or validity of the methods considered is not addressed.
The results of existing studies in MADM research suggest that validation of MADM methods
remains a major challenging issue (Stewart, 1997). To help the DM make valid decisions, a mechanism
is required for selecting an appropriate method for a given MADM problem. To this end, this paper
presents a new validation approach that can validate the ranking outcome of three commonly used
compensatory MADM methods by examining their ability in reecting the decision information
embedded in the problem data set. The three methods considered are (a) the simple additive weighting
(SAW) method, (b) the weighted product (WP) method, and (c) the technique for order preference by
similarity to ideal solution (TOPSIS). These methods are based on a multi-attribute value function for
representing the DM's preference structure (Keeney and Raiffa, 1993), thus producing a cardinal
preference of the alternatives. Research has shown that MADM methods based on additive value
functions are favored by practical DMs (Hobbs et al., 1992; Zanakis et al., 1998). These methods are
considered because they are applicable to large-scale decision problems where the ranking outcome
produced by different methods is most likely to be significantly different. In practical applications,
these methods are intuitively appealing to the DM because of their simplicity in both concept and
computation.
In subsequent sections, we first describe the MADM problem under consideration, together with the
three MADM methods to be examined. We then present an approach for examining the validity of
the ranking outcome produced by the three methods via sensitivity analysis of attribute weights. Finally,
we conduct an empirical study of a scholarship student selection problem to demonstrate the
effectiveness of the approach.
2. The multi-attribute decision-making problem and methods
The MADM problem involves a set of $m$ alternatives $A_i$ ($i = 1, 2, \ldots, m$). These alternatives are to be
evaluated with respect to a set of $n$ attributes (or criteria) $C_j$ ($j = 1, 2, \ldots, n$), which are independent
of each other. A decision matrix for $m$ alternatives and $n$ attributes is to be given as

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix} \qquad (1)$$
where $x_{ij}$ represents the performance rating of alternative $A_i$ ($i = 1, 2, \ldots, m$) with respect to attribute
$C_j$ ($j = 1, 2, \ldots, n$). A weighting vector representing the relative importance of the attributes is to be
given as

$$W = (w_1, w_2, \ldots, w_n) \qquad (2)$$
The performance ratings in (1) and the attribute weights in (2) are cardinal values that represent the
DM's absolute preferences. The decision problem is to rank all the alternatives in terms of their overall
preference value, which is obtained based on the data in (1) and (2).
For simplicity, and to suit the case study exemplified in this paper, the MADM problem given
above is a single-level case. The three compensatory methods described below for solving the above
problem, and the approach for comparing them (to be presented in the next section), are applicable to
problems involving attributes of multi-level hierarchies, although the pair-wise comparison technique
of Saaty's analytic hierarchy process (AHP) (e.g. Saaty, 1994) should be applied if the attribute
hierarchy has more than three levels (Hwang and Yoon, 1981).
2.1. The simple additive weighting (SAW) method
The SAW method, also known as the weighted sum method, is probably the best known and most
widely used MADM method (Hwang and Yoon, 1981). The basic logic of SAW is to obtain a weighted
sum of the performance ratings of each alternative over all attributes (Fishburn, 1967; MacCrimmon,
1968). The SAW method normally requires normalizing the decision matrix (X) to allow a comparable
scale for all ratings in X by
$$r_{ij} = \begin{cases} \dfrac{x_{ij}}{\max_i x_{ij}}, & \text{if } j \text{ is a benefit attribute} \\[1ex] \dfrac{\min_i x_{ij}}{x_{ij}}, & \text{if } j \text{ is a cost attribute} \end{cases} \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n. \qquad (3)$$
where $r_{ij}$ ($0 \le r_{ij} \le 1$) is defined as the normalized performance rating of alternative $A_i$ on attribute
$C_j$. This normalization process transforms all the ratings in a linear (proportional) way, so that the
relative order of magnitude of the ratings remains equal (Nijkamp and van Delft, 1977). The overall
preference value of each alternative ($V_i$) is obtained by

$$V_i = \sum_{j=1}^{n} w_j r_{ij}, \qquad i = 1, 2, \ldots, m. \qquad (4)$$
The greater the value $V_i$, the more preferred the alternative $A_i$. Research results have shown that the
linear form of trade-offs between attributes used by the SAW method produces extremely close
approximations to complicated nonlinear forms, while being far easier to use and understand (Hwang
and Yoon, 1981).
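To make the computation concrete, (3) and (4) reduce to a few lines of vectorized code. The following is a minimal sketch, not the author's implementation; it assumes Python with numpy, and the decision matrix, weights, and attribute types are hypothetical.

```python
import numpy as np

def saw(X, w, benefit):
    """SAW: linear-scale normalization, eq. (3), then weighted sum, eq. (4)."""
    X = np.asarray(X, dtype=float)
    # eq. (3): divide by the column maximum (benefit attribute) or divide the
    # column minimum by the rating (cost attribute), giving 0 <= r_ij <= 1
    R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    return R @ w  # eq. (4): overall preference values V_i

# Hypothetical data: 4 alternatives, 3 benefit attributes, equal weights.
X = [[3, 4, 5], [5, 2, 4], [4, 4, 3], [2, 5, 5]]
w = np.full(3, 1 / 3)
V = saw(X, w, benefit=np.array([True, True, True]))
print(V, np.argsort(-V) + 1)  # preference values and the implied ranking
```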
2.2. The weighted product (WP) method
The WP method uses multiplication for connecting attribute ratings, each of which is raised to the
power of the corresponding attribute weight (Bridgman, 1922; Starr, 1972; Yoon, 1989). This
multiplication process has the same effect as the normalization process for handling different measurement
units. The logic of WP is to penalize alternatives with poor attribute values more heavily (Chen and
Hwang, 1992). The overall preference score of each alternative ($S_i$) is given by

$$S_i = \prod_{j=1}^{n} x_{ij}^{w_j}, \qquad i = 1, 2, \ldots, m. \qquad (5)$$
where $\sum_{j=1}^{n} w_j = 1$. $w_j$ is a positive power for benefit criteria and a negative power for cost criteria.
In this study, for easy comparison with the other methods, the relative preference value of each
alternative ($V_i$) is given by

$$V_i = \frac{\prod_{j=1}^{n} x_{ij}^{w_j}}{\prod_{j=1}^{n} (x_j^*)^{w_j}}, \qquad i = 1, 2, \ldots, m. \qquad (6)$$

where $x_j^* = \max_i x_{ij}$ and $0 \le V_i \le 1$. The greater the value $V_i$, the more preferred the alternative
$A_i$.
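A corresponding sketch of the WP preference value in (6), under the same hedges as above (numpy assumed, data hypothetical, ratings strictly positive so the powers are well defined; cost attributes would enter with negative powers, as in (5)):

```python
import numpy as np

def wp(X, w):
    """WP: raw product scores, eq. (5), scaled by the ideal ratings, eq. (6)."""
    X = np.asarray(X, dtype=float)
    S = np.prod(X ** w, axis=1)           # eq. (5): S_i for benefit attributes
    S_star = np.prod(X.max(axis=0) ** w)  # product score of the ideal ratings x_j*
    return S / S_star                     # eq. (6): 0 <= V_i <= 1

X = [[3, 4, 5], [5, 2, 4], [4, 4, 3], [2, 5, 5]]
w = np.full(3, 1 / 3)
print(wp(X, w))
```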
2.3. The technique for order preference by similarity to ideal solution (TOPSIS)
TOPSIS is based on the concept that the most preferred alternative should not only have the shortest
distance from the positive ideal solution, but also have the longest distance from the negative ideal
solution (Hwang and Yoon, 1981; Zeleny, 1982). This concept has been widely used in various MADM
models for solving practical decision problems (e.g. Hwang et al., 1993; Liang, 1999; Yeh et al., 2000).
This is due to: (a) its simplicity and comprehensibility in concept; (b) its computational efficiency; and
(c) its ability to measure the relative performance of the decision alternatives in a simple mathematical
form.
TOPSIS normally requires normalizing the performance ratings of alternative $A_i$ on attribute $C_j$ by

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n. \qquad (7)$$
The positive ideal solution $A^+$ and the negative ideal solution $A^-$ can be determined based on the
weighted normalized ratings ($y_{ij}$), given by

$$y_{ij} = w_j r_{ij}, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n. \qquad (8)$$

$$A^+ = (y_1^+, y_2^+, \ldots, y_n^+); \qquad A^- = (y_1^-, y_2^-, \ldots, y_n^-) \qquad (9)$$

where

$$y_j^+ = \begin{cases} \max_i y_{ij}, & \text{if } j \text{ is a benefit attribute} \\ \min_i y_{ij}, & \text{if } j \text{ is a cost attribute} \end{cases}; \qquad y_j^- = \begin{cases} \min_i y_{ij}, & \text{if } j \text{ is a benefit attribute} \\ \max_i y_{ij}, & \text{if } j \text{ is a cost attribute} \end{cases}; \qquad j = 1, 2, \ldots, n.$$
The distances between alternative $A_i$ and the positive ideal solution and the negative ideal solution can
be calculated respectively by

$$D_i^+ = \sqrt{\sum_{j=1}^{n} (y_j^+ - y_{ij})^2}; \qquad D_i^- = \sqrt{\sum_{j=1}^{n} (y_{ij} - y_j^-)^2}; \qquad i = 1, 2, \ldots, m. \qquad (10)$$
The overall preference value of each alternative ($V_i$) is given by

$$V_i = \frac{D_i^-}{D_i^+ + D_i^-}, \qquad i = 1, 2, \ldots, m. \qquad (11)$$

The greater the value $V_i$, the more preferred the alternative $A_i$.
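The whole chain (7) to (11) is equally compact. Again a minimal sketch under the same assumptions as the earlier examples (numpy, hypothetical data):

```python
import numpy as np

def topsis(X, w, benefit):
    """TOPSIS preference values, eqs. (7)-(11)."""
    X = np.asarray(X, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))                    # eq. (7): vector normalization
    Y = w * R                                                # eq. (8): weighted ratings
    y_pos = np.where(benefit, Y.max(axis=0), Y.min(axis=0))  # eq. (9): A+
    y_neg = np.where(benefit, Y.min(axis=0), Y.max(axis=0))  # eq. (9): A-
    d_pos = np.sqrt(((y_pos - Y) ** 2).sum(axis=1))          # eq. (10): D_i+
    d_neg = np.sqrt(((Y - y_neg) ** 2).sum(axis=1))          # eq. (10): D_i-
    return d_neg / (d_pos + d_neg)                           # eq. (11): V_i

X = [[3, 4, 5], [5, 2, 4], [4, 4, 3], [2, 5, 5]]
w = np.full(3, 1 / 3)
print(topsis(X, w, benefit=np.array([True, True, True])))
```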
2.4. Differences between the three methods
The main differences between the three methods described above lie in: (a) the normalization process
for comparing all performance ratings on a common scale, and (b) the aggregation of the normalized
decision matrix and weighting vector for obtaining an overall preference value for each alternative.
Due to these structural differences, the rankings produced by the three methods may not be consistent
for a given decision matrix and weighting vector. In fact, the empirical study presented in this paper
shows that the rankings can be so different that the validity of the methods used has to be examined in
order to help the DM make rational decisions.
3. The approach to validation of MADM methods
The decision matrix for the performance ratings of alternatives contains a certain amount of decision
information for the MADM problem. For a given weighting vector, the ranking outcome is largely
dependent on the degree of divergence of the alternatives' performance ratings on individual attributes.
The more divergent the performance ratings for an attribute, the more important the attribute is for the
problem (Zeleny, 1982; Shipley et al., 1991). This means that the attribute has more influence on the
ranking outcome, thus transmitting more information to the DM. This also implies that an attribute is
less important or influential for a specific problem if all alternatives have similar performance ratings
on that attribute. This concept of decisive information has been used to derive objective weights of attribute
importance in inter-company comparison problems that need to be conducted on a commonly
accepted basis (Diakoulaki et al., 1995; Deng et al., 2000).
Shannon's entropy concept (Shannon and Weaver, 1947) is well suited for measuring the relative
contrast intensities of performance ratings to represent the average intrinsic information transmitted to
the DM (Hwang and Yoon, 1981). This concept coincides with the context-dependent concept of
informational importance (Zeleny, 1982). For the decision matrix $X$ in (1), the expected information
content emitted from each attribute $C_j$ can be measured by the entropy value ($e_j$) as

$$e_j = -k \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \qquad j = 1, 2, \ldots, n. \qquad (12)$$

where $k = 1/\ln m$ is a constant which guarantees $0 \le e_j \le 1$, and

$$p_{ij} = \frac{x_{ij}}{\sum_{q=1}^{m} x_{qj}}, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n. \qquad (13)$$
The degree of divergence ($d_j$) of the average intrinsic information provided by the corresponding
performance ratings on attribute $C_j$ can be defined as

$$d_j = 1 - e_j, \qquad j = 1, 2, \ldots, n. \qquad (14)$$

The value of $d_j$ in (14) represents the inherent contrast intensity of attribute $C_j$. The relative degree
of influence of attribute $C_j$ on the ranking outcome can be determined by

$$f_j = \frac{d_j}{\sum_{q=1}^{n} d_q}, \qquad j = 1, 2, \ldots, n. \qquad (15)$$
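The influence degrees can be computed directly from the decision matrix. A minimal numpy sketch of (12) to (15), with hypothetical data; a zero rating would require the usual convention $0 \ln 0 = 0$, which the sketch does not handle:

```python
import numpy as np

def influence_degrees(X):
    """Entropy-based influence degrees, eqs. (12)-(15), for positive ratings."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                         # eq. (13): p_ij
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # eq. (12): e_j, with k = 1/ln m
    d = 1.0 - e                                   # eq. (14): divergence d_j
    return d / d.sum()                            # eq. (15): relative influence f_j

X = [[3, 4, 5], [5, 2, 4], [4, 4, 3], [2, 5, 5]]
print(influence_degrees(X).round(4))
```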
The value of $f_j$ in (15) is determined by the decision matrix of the problem. The ranking outcome of
an MADM method, produced from the same decision matrix, should ideally reflect this decisive
information implicitly transmitted to the DM. However, this requirement may only be met to some
degree by the MADM method due to its structural characteristics. The degree to which the method
meets this requirement can be reflected by the relative degree of influence of individual attributes on
the ranking outcome, as compared to other attributes. The degree of influence of an attribute can be
measured by the degree of sensitivity of the ranking outcome to changes in the attribute weight. This
can be carried out by a typical sensitivity analysis process.
Triantaphyllou and Sanchez (1997) give a good review of how sensitivity analysis can be used in
MADM for making better decisions in specific situations. As examined by Mareschal (1988) and
Fischer (1995), sensitivity analysis gives the DM flexibility in judging attribute weights and helps the
DM understand how attribute weights affect the ranking outcome. In addition, sensitivity analysis has
been used to deal with the problem of inconsistent outcomes by different methods (Voogd, 1983; von
Winterfeldt and Edwards, 1986). However, it may not be helpful when different outcomes are obtained
in different scenarios (Olson et al., 1995).
In this paper, we use sensitivity analysis as a means of determining how sensitive (the degree of
sensitivity) the ranking outcome of an MADM method is to changes in the attribute weights. This
degree of sensitivity implies the relevance (the degree of influence) of the attribute to the ranking
outcome. The method is valid if the relative degree of sensitivity of individual attributes is consistent with
the value of $d_j$ in (14). This indicates that the method has the ability to reflect the decision information
embedded in the problem data set. In the selection of MADM methods for a given problem in terms of
this validity measure, the method with the highest degree of consistency should be used. As a result,
the most rational ranking outcome can be identified, as it best reflects the decision information
embedded in the problem defined by a given decision matrix and weighting vector.
4. Empirical study
An Australian university department has recently offered a number of industry-sponsored scholarships
to the first-year students in its business-oriented undergraduate course on a yearly basis. The scholarships
are for three years, subject to the scholarship holders' satisfactory progress in their studies.
During their studies, scholarship students are required to work with industry sponsors for a total period
of one year under an industry-based learning program. Students who fail to pass a performance review
process at the end of an academic year will have their scholarships withdrawn. This is not desirable, as
it wastes the limited financial resources and has a negative impact on the department's overall
profile. Despite the fact that there is no `correct' decision and the performance of scholarship students
is generally beyond the department's control, the department needs to justify that scholarships are
granted to the best-qualified candidates in a fair and rational manner.
The candidates for scholarships are selected based on their performance on non-academic, qualitative
attributes (selection criteria), assessed via an interview process. The reason for excluding the
academic attributes is that all the candidates have overcome a considerable academic hurdle to become
eligible for study in the program. They are therefore expected to have the capability to complete all the
specified academic requirements of the scholarship. Based on comprehensive discussions with industry
sponsors, a set of eight attributes relevant to the industry-based learning program was determined. These
attributes are briefly discussed below:
(1) Community services ($C_1$). Voluntary work within the community by candidates is viewed
favorably. Examples include activities involved in social welfare, coaching, peer support, etc.
(2) Sports/Hobbies ($C_2$). Non-work related activities that the candidates are involved in are deemed
beneficial to the candidates' `well-roundedness'. Candidates with a wider range of interests are favored.
(3) Work experience ($C_3$). This is concerned with the degree of the candidates' participation in any
paid activities. Experience in more relevant areas and/or with higher responsibility is preferred.
(4) Energy ($C_4$). Future demands placed on the candidates will require energy that indicates a positive
attitude and a willingness to participate in demanding tasks.
(5) Communication skills ($C_5$). The candidates' ability to communicate is important, as they need to
interact with other individuals in their industry-based learning placements. Their manner of
speaking, writing ability, and appearance are all communication enablers or disablers.
(6) Attitude to business ($C_6$). Most candidates, after finishing their studies, will work in the business
world. Their attitude to, and ambitions in, the corporate world are crucial in indicating what kinds
of employees they will make.
(7) Maturity ($C_7$). This is related to the candidates' willingness and ability to take on responsibility for
their current situations. The candidates' performance in their academic studies and industry-based
learning placements is highly dependent on the degree of responsibility they take.
(8) Leadership ($C_8$). Potential leadership qualities are preferred as they reflect on the candidates'
overall performance in their academic studies and industry-based learning placements.
In the scholarship student selection problem, each attribute is weighted equally. This is because the
DMs (interviewers) and the stakeholders (industry sponsors) cannot determine other acceptable weights
in a fair and convincing manner. This setting is in line with the principle of insufficient reason (Starr
and Greenwood, 1977), which suggests the use of equal weights if the DM has no reason to prefer one
attribute to another. This is due to the fact that no single attribute weighting method can guarantee a
more accurate result, and the same DM may elicit different weights using different methods (Weber
and Borcherding, 1993; Doyle et al., 1997; Yeh et al., 1999). In practical applications, this implies that
there is no easy way of determining attribute weights, and there are no criteria for determining what
the true weight is (Weber and Borcherding, 1993).
To illustrate how the validation approach can help the DM select the most rational ranking outcome,
we first used the 1998 data. There were 57 candidates who attended the interview. Their performance
on the eight evaluation attributes was assessed on a 6-point Likert-type scale, ranging from 5
(extremely high) to 0 (extremely low). The result of this interview process constituted the decision
matrix $X$, expressed as in (1) with $m = 57$ and $n = 8$. The weighting vector used was $W = (0.125,
0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125)$, which satisfies $\sum_{j=1}^{n} w_j = 1$.
Clearly, the SAW, WP, and TOPSIS methods can be used to obtain a cardinal ranking of candidates
based on their overall preference values. Because of the use of equal weights and the same measurement unit
for all attributes, a traditional scoring method, called the simple summation (SS) method, can be used
for the problem. The SS method simply adds up the performance ratings of a candidate on all attributes,
and then compares the total scores. To facilitate the comparison between the SS method and the other
three MADM methods, the overall preference value of each alternative ($V_i$) by SS is given as

$$V_i = \frac{\sum_{j=1}^{n} w_j x_{ij}}{M}, \qquad i = 1, 2, \ldots, m. \qquad (16)$$
where $M$ is a constant which equals the maximum score on the measurement scale (e.g. $M = 5$ in the
empirical study). The SS method is the same as SAW, except for the normalization process.
The ranking outcomes obtained by the four methods are not consistent, which may cause some
decision difficulties. As an illustration, Table 1 shows the top ten rankings given by the four methods. For
easy comparison, candidates $A_i$ ($i = 1, 2, \ldots, 57$) are denoted in order of their overall preference value
$V_i$ ($i = 1, 2, \ldots, 57$) by the SS method. If there were only ten candidates to be selected, $A_6$ would not
be selected using TOPSIS, and $A_{11}$ would not be selected using SS, SAW, or WP. For most decision
situations where the number of candidates to be selected varies, some candidates will be included by
some methods and excluded by others.
To examine the validity of the ranking outcomes produced by the four methods, a sensitivity analysis
procedure was carried out for each method. The procedure aimed at determining the degree of
sensitivity of each attribute to the ranking outcome of each method. The procedure carried out for an
attribute using a given MADM method is as follows (a code sketch of the procedure is given below):

1. Assign all attributes a weight value of 1, called basic weights (i.e., $b_j = 1$; $j = 1, 2, \ldots, 8$).
2. Change the weight for the attribute in the range between 1 and 2, with an increment of 0.1, while
the other attributes are kept at their basic weights.
3. Normalize the modified attribute weights by $w_j = b_j / \sum_{j=1}^{8} b_j$ to satisfy $\sum_{j=1}^{n} w_j = 1$.
4. Apply the method with the weights obtained at step 3.
5. Compute the percentage of ranking changes, as compared to the ranking outcome with equal
weights.

The range setting of attribute weight changes used in the above procedure is based on the assumption
that no single attribute is more than twice as important as any other attribute; a setting confirmed by
the DM. The result of this analysis is summarized in Fig. 1, which shows the average degree (in
percentage) of influence of individual attributes on the ranking outcome using the four different methods.
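The five-step procedure can be sketched as follows. This is one plausible reading, assuming numpy: the paper does not spell out the exact measure behind "percentage of ranking changes", so the sketch takes it to be the share of alternatives whose rank position differs from the equal-weight ranking; the method and data are hypothetical.

```python
import numpy as np

def sensitivity_degree(method, X, k, steps=np.arange(1.1, 2.01, 0.1)):
    """Average % of rank positions changed when attribute k's basic weight
    is raised from 1 toward 2 in increments of 0.1 (steps 1-5)."""
    n = X.shape[1]
    basic = np.ones(n)                                    # step 1: basic weights
    ranks = lambda w: np.argsort(np.argsort(-method(X, w)))
    ref = ranks(basic / n)                                # equal-weight ranking
    pct = []
    for s in steps:
        b = basic.copy()
        b[k] = s                                          # step 2: perturb one weight
        w = b / b.sum()                                   # step 3: renormalize
        pct.append(100.0 * np.mean(ranks(w) != ref))      # steps 4-5: % changed
    return float(np.mean(pct))

# Hypothetical usage with a SAW-style method (benefit attributes only).
# With toy data this small, many perturbations leave the ranking unchanged.
saw = lambda X, w: (X / X.max(axis=0)) @ w
X = np.array([[3.0, 4, 5], [5, 2, 4], [4, 4, 3], [2, 5, 5]])
print([round(sensitivity_degree(saw, X, k), 1) for k in range(X.shape[1])])
```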
The result in Fig. 1 indicates that the attribute weights have a significant influence on the ranking
outcome. Although the sensitivity of an attribute weight is largely dependent on the data in the decision
matrix, its relative degree is also influenced by the method used.
Table 1
Comparison of top ten rankings between four methods

Ranking   SS            SAW           WP            TOPSIS
          Ai     Vi     Ai     Vi     Ai     Vi     Ai     Vi
1         A1     1.000  A1     1.000  A1     1.000  A1     1.000
2         A2     0.975  A2     0.975  A2     0.973  A2     0.915
3         A3     0.950  A3     0.950  A3     0.946  A3     0.892
4         A4     0.925  A4     0.925  A4     0.920  A4     0.868
5         A5     0.900  A5     0.900  A5     0.895  A7     0.855
6         A6     0.900  A6     0.900  A7     0.895  A5     0.847
7         A7     0.900  A7     0.900  A9     0.895  A9     0.846
8         A8     0.900  A8     0.900  A6     0.887  A10    0.831
9         A9     0.900  A9     0.900  A8     0.887  A11    0.815
10        A10    0.875  A10    0.875  A10    0.870  A8     0.811
For all the methods used, $C_5$ is the least sensitive attribute, while the most sensitive attribute is $C_1$ or $C_3$,
depending on the method used. The average degrees of influence of all attributes by SS, SAW, WP, and
TOPSIS are 43.4%, 43.1%, 35.2%, and 53.0% respectively. This indicates that, as a whole, TOPSIS is
the most sensitive method, while WP is the least sensitive method for the 1998 problem data set.
The degree to which each method reflects the decision information embedded in the problem data set
can be measured by the correlation (or consistency degree) between the relative influence degrees
of individual attributes obtained by sensitivity analysis for the method (as shown in Fig. 1) and the
relative influence degrees of the corresponding attributes ($f_j$) indicated by the entropy concept (as given in
Table 2). The results for SS, SAW, WP, and TOPSIS using Pearson's correlation coefficients
(Spearman's rank correlation coefficients in parentheses) are 0.77 (0.62), 0.75 (0.62), 0.69 (0.45), and 0.94 (0.85)
respectively. Clearly, the ranking outcome produced by TOPSIS can best match the decision information
embedded in the decision matrix.
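The consistency degree itself is just a correlation between two length-$n$ vectors: the per-attribute sensitivity degrees of a method and the entropy-based influence degrees $f_j$. A sketch assuming scipy; the $f_j$ values are taken from Table 2, while the sensitivity degrees are purely illustrative stand-ins for the Fig. 1 values, which are not reproduced in the text:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# f_j for the 1998 data (Table 2)
f = np.array([0.2227, 0.0718, 0.2010, 0.0980, 0.0790, 0.1034, 0.0842, 0.1398])
# Illustrative sensitivity degrees for one method (NOT the actual Fig. 1 values)
sens = np.array([55.1, 40.2, 58.3, 48.7, 39.5, 50.0, 45.6, 52.9])

print(pearsonr(f, sens)[0], spearmanr(f, sens)[0])
# The method whose sensitivity profile yields the highest coefficients is selected.
```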
To examine whether different data sets of the same problem structure result in a different
method being selected, we applied the validation approach to the 1995, 1996, and 1997 data (in
which the numbers of candidates were 65, 61, and 69 respectively). Table 3 summarizes the results,
together with the result for 1998.
Fig. 1. Degree of influence of individual attributes on the ranking outcome by four different methods.
Table 2
Values for informational importance of attributes for the 1998 data

                              C1      C2      C3      C4      C5      C6      C7      C8
Entropy value e_j             0.9778  0.9928  0.9800  0.9902  0.9921  0.9897  0.9916  0.9861
Diversification degree d_j    0.0222  0.0072  0.0200  0.0098  0.0079  0.0103  0.0084  0.0139
Influence degree f_j          0.2227  0.0718  0.2010  0.0980  0.0790  0.1034  0.0842  0.1398
The results in Table 3 show that different problem data sets may result in a different method being
selected. This suggests that no single best method can be assumed for the general cardinal ranking
problem, and that, for a given problem data set, the most appropriate method can be identified by the
proposed validation approach.

Table 3
Correlation coefficients between the influence degree f_j and the sensitivity degrees of the four
methods (Pearson's coefficients, with Spearman's rank coefficients in parentheses)

Problem data set   SS            SAW           WP            TOPSIS        Method selected
1995               0.79 (0.69)   0.72 (0.75)   0.92 (1.00)   0.82 (0.89)   WP
1996               0.85 (0.69)   0.21 (0.32)   0.87 (0.41)   0.66 (0.45)   WP
1997               0.74 (0.45)   0.79 (0.49)   0.66 (0.40)   0.77 (0.45)   SAW
1998               0.77 (0.62)   0.75 (0.62)   0.69 (0.45)   0.94 (0.85)   TOPSIS
5. Conclusion
A number of compensatory MADM methods can be used to solve cardinal ranking problems, defined
by a given decision matrix and weighting vector. Different ranking outcomes are often produced by
different methods. Despite the importance of the validity of the ranking outcome, very few studies have
been conducted to help the DM make valid decisions. In this paper, we have presented a validation
approach to the selection of eligible MADM methods for a given problem data set. The most
appropriate method is the one that best reflects the decision information content, indicated by the relative
contrast intensity of the alternatives' performance ratings on each attribute based on Shannon's entropy
concept. An empirical study of a scholarship student selection problem has been conducted to illustrate
how the approach can be used to help select the most valid method for a given data set. Different
problem data sets may result in a different method being selected. With its simplicity in both concept
and computation, the approach can be applied to the general cardinal ranking problem solvable by
compensatory MADM methods. It is particularly suited to large-scale problems where the ranking
outcomes produced by different methods differ significantly.
Acknowledgments
The author would like to thank Professor Theodor J. Stewart and two anonymous referees for their
valuable comments and advice.
References
Belton, V., 1986. A comparison of the analytic hierarchy process and a simple multi-attribute value function. European
Journal of Operational Research 26, 7–21.
Bridgman, P.W., 1922. Dimensional Analysis. Yale University Press, New Haven, CT.
Chen, S.J., Hwang, C.L., 1992. Fuzzy Multiple Attribute Decision Making: Methods and Applications. Springer-Verlag, New York.
Colson, G., de Bruyn, C., 1989. Models and Methods in Multiple Criteria Decision Making. Pergamon, Oxford.
Deng, H., Yeh, C.-H., Willis, R.J., 2000. Inter-company comparison using modified TOPSIS with objective weights. Computers & Operations Research 27 (10), 963–973.
Diakoulaki, D., Mavrotas, G., Papayannakis, L., 1995. Determining objective weights in multiple criteria problems: the CRITIC method. Computers & Operations Research 22 (7), 763–770.
Doyle, J.R., Green, R.H., Bottomley, P.A., 1997. Judging relative importance: direct rating and point allocation are not equivalent. Organizational Behavior and Human Decision Processes 70, 65–72.
Dyer, J.S., Fishburn, P.C., Steuer, R.E., Wallenius, J., Zionts, S., 1992. Multiple criteria decision making, multiattribute utility theory: the next ten years. Management Science 38, 645–653.
Evans, G.W., 1984. An overview of techniques for solving multiobjective mathematical problems. Management Science 30 (6), 1268–1282.
Fischer, G.W., 1995. Range sensitivity of attribute weights in multiattribute value models. Organizational Behavior and Human Decision Processes 62 (3), 252–266.
Fishburn, P.C., 1967. Additive Utilities with Incomplete Product Set: Applications to Priorities and Assignments. ORSA Publication, Baltimore, MD.
Guitouni, A., Martel, J.-M., 1998. Tentative guidelines to help choosing an appropriate MCDA method. European Journal of Operational Research 109, 501–521.
Hobbs, B.F., Chankong, V., Hamadeh, W., Stakhiv, E.Z., 1992. Does choice of multicriteria method matter? An experiment in water resources planning. Water Resources Research 28, 1767–1780.
Hong, I.B., Vogel, D.R., 1991. Data and model management in a generalized MCDM-DSS. Decision Sciences 22, 1–25.
Hwang, C.L., Lai, Y.J., Liu, T.Y., 1993. A new approach for multiple objective decision making. Computers & Operations Research 20, 889–899.
Hwang, C.L., Yoon, K., 1981. Multiple Attribute Decision Making: Methods and Applications. A State-of-the-Art Survey. Springer-Verlag, New York.
Karni, R., Sanchez, P., Tummala, V.M.R., 1990. A comparative study of multiattribute decision making methodologies. Theory and Decision 29, 203–222.
Keeney, R., Raiffa, H., 1993. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press, New York.
Liang, G.S., 1999. Fuzzy MCDM based on ideal and anti-ideal concepts. European Journal of Operational Research 112, 682–691.
MacCrimmon, K.R., 1968. Decision Making among Multiple Attribute Alternatives: A Survey and Consolidated Approach. Rand Memorandum RM-4823-ARPA, Washington, DC.
Mareschal, B., 1988. Weight stability intervals in multicriteria decision aid. European Journal of Operational Research 33 (1), 54–64.
Minch, R.P., Sanders, G.L., 1986. Computerized information systems supporting multicriteria decision making. Decision Sciences 17, 395–413.
Nijkamp, P., Blaas, E., 1994. Impact Assessment and Evaluation in Transportation Planning. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Nijkamp, P., van Delft, A., 1977. Multi-Criteria Analysis and Regional Decision-Making. Martinus Nijhoff Social Sciences Division, Leiden, The Netherlands.
Olson, D.L., 1996. Decision Aids for Selection Problems. Springer, New York.
Olson, D.L., Moshkovich, H.M., Schellenberger, R., Mechitov, A.I., 1995. Consistency and accuracy in decision aids: experiments with four multiattribute systems. Decision Sciences 26, 723–748.
Ozernoy, V.M., 1992. Choosing the best multiple criteria decision-making method. Information Systems and Operational Research 30 (2), 159–171.
Poh, K.L., 1998. A knowledge-based guidance system for multi-attribute decision making. Artificial Intelligence in Engineering 12, 315–326.
Saaty, T.L., 1994. How to make a decision: the analytic hierarchy process. Interfaces 24, 19–43.
Shannon, C.E., Weaver, W., 1947. The Mathematical Theory of Communication. The University of Illinois Press, Urbana.
Shipley, M.F., de Korvin, A., Obid, R., 1991. A decision making model for multi-attribute problems incorporating uncertainty and bias measures. Computers & Operations Research 18, 335–342.
Siskos, Y., Spyridakos, A., 1999. Intelligent multicriteria decision support: overview and perspectives. European Journal of Operational Research 113, 236–246.
Starr, M.K., 1972. Production Management. Prentice-Hall, Englewood Cliffs, NJ.
Starr, M.K., Greenwood, L.H., 1977. Normative generation of alternatives with multiple criteria evaluation. In: Starr, M.K., Zeleny, M. (Eds.), Multiple Criteria Decision Making. North-Holland, New York, pp. 111–128.
Stewart, T.J., 1992. A critical study on the status of multiple criteria decision making: theory and practice. Omega 20, 569–586.
Stewart, T.J., 1997. Future trends in MCDM. In: Climaco, J. (Ed.), Multicriteria Analysis. Springer, Berlin, pp. 590–595.
Triantaphyllou, E., Sanchez, A., 1997. A sensitivity analysis approach for some deterministic multi-criteria decision making methods. Decision Sciences 28 (1), 151–194.
von Winterfeldt, D., Edwards, W., 1986. Decision Analysis and Behavioral Research. Cambridge University Press, Cambridge.
Voogd, H., 1983. Multicriteria Evaluation for Urban and Regional Planning. Pion, London.
Weber, M., Borcherding, K., 1993. Behavioral influences on weight judgments in multiattribute decision making. European Journal of Operational Research 67, 1–12.
Yeh, C.-H., Willis, R.J., Deng, H., Pan, H., 1999. Task oriented weighting in multi-criteria analysis. European Journal of Operational Research 119 (1), 130–146.
Yeh, C.-H., Deng, H., Chang, Y.-H., 2000. Fuzzy multicriteria analysis for performance evaluation of bus companies. European Journal of Operational Research 126 (3), 459–473.
Yoon, K.P., 1989. The propagation of errors in multi-attribute decision analysis: a practical approach. Journal of the Operational Research Society 40, 681–686.
Yoon, K.P., Hwang, C.-L., 1995. Multiple Attribute Decision Making: An Introduction. Sage Publications, Thousand Oaks, CA.
Zanakis, S.H., Solomon, A., Wishart, N., Dublish, S., 1998. Multi-attribute decision making: a simulation comparison of select methods. European Journal of Operational Research 107, 507–529.
Zeleny, M., 1982. Multiple Criteria Decision Making. McGraw-Hill, New York.