
Journal of the Operational Research Society (2011) 62, 2002–2012 © 2011 Operational Research Society Ltd. All rights reserved. 0160-5682/11


www.palgrave-journals.com/jors/

Discrete cooperative covering problems


O Berman¹, Z Drezner² and D Krass¹

¹University of Toronto, Toronto, Canada; and ²California State University-Fullerton, Fullerton, USA

Correspondence: Z Drezner, College of Business and Economics, California State University-Fullerton, Fullerton, CA 92834, USA.
E-mail: zdrezner@fullerton.edu
A family of discrete cooperative covering problems is analysed in this paper. Each facility emits a signal
that decays with distance, and each demand point observes the total signal emitted by all facilities.
A demand point is covered if its cumulative signal exceeds a given threshold. We wish to maximize
coverage by selecting locations for p facilities from a given set of potential sites. Two other problems that
can be solved by the max-cover approach are the equivalents to set covering and p-centre problems. The
problems are formulated, analysed and solved on networks. Optimal and heuristic algorithms
are proposed and extensive computational experiments reported.
Journal of the Operational Research Society (2011) 62, 2002–2012. doi:10.1057/jors.2010.176
Published online 15 December 2010

Keywords: cooperative cover; facility location; multiple facilities

1. Introduction

One of the classic concepts in the facility location literature is that of 'coverage', where a customer is considered to be covered if a facility is located within a certain coverage radius of the customer. The literature on coverage problems is quite rich; for a review the reader is referred to Berman et al (2010b); Schilling et al (1993); Daskin (1995); Current et al (2002); Plastria (2002). Note that under the standard coverage framework, only one facility (namely the closest one) determines whether a customer node is covered or not. As discussed in Berman et al (2010a), this 'individual cover' assumption may not be appropriate in many potential application areas.

For example, in a case where facilities emit physical signals (light, sound, etc), each customer receives signals from all facilities, not just the closest one. 'Coverage' occurs when the total (aggregated) signal received by the customer exceeds a certain acceptable threshold. Note that in this case the facilities combine (ie, cooperate) to provide coverage. We refer to this as the 'cooperative cover' framework. This framework occurs for the case of non-physical signals as well, for example, in the case of the backup coverage models (Hogan and Revelle, 1986). We refer the reader to Berman et al (2010a) for further discussion.

The cooperative covering problem in the plane was introduced in Berman et al (2010a). The goal of the current paper is to analyse the cooperative cover framework for the discrete location case—that is, when the set of potential facility locations is discrete and the facilities must be chosen from a finite set of locations. We note that in Berman et al (2010a) it was shown that the results of ignoring the cooperative aspect of the coverage (ie, using the individual cover model in a setting where cooperative cover is more appropriate) can be quite dramatic—leading to solutions that require twice (or more!) the number of facilities that are necessary to cover the same amount of demand when the cooperation is properly taken into account. Similar results are obtained for the discrete case in the current paper (see Section 4.1).

The plan for the paper is as follows. In Section 2 we formulate three variants of the cooperative cover problem, all of which are direct extensions of their standard cover counterparts. In Section 3 exact and approximate algorithmic approaches are discussed. An extensive series of computational experiments is presented in Section 4. We close with some concluding remarks in Section 5.

2. Formulation

Let N be a discrete set of customer locations ('nodes'). A demand weight wᵢ > 0 is associated with node i ∈ N, and W = ∑_{i=1}^{n} wᵢ represents the total available demand. We assume that X is a discrete set of potential facility locations and that a subset of X of cardinality p is sought for the location of the source points (facilities). Without loss of generality, we can assume X ⊆ N. For i, j ∈ N let d(i, j) represent the distance between i and j (any valid distance metric can be used here). One possible representation of this setting is to view N as the nodes of a network G with link set L and shortest distance metric d(i, j), i, j ∈ N, where facilities must be located at nodes.
As in Berman et al (2010a), we consider the following mechanism of coverage: each source point j emits a signal that decays over distance according to some known non-negative and non-increasing function of distance f(d), for example f(d) = 1/d². Each demand point i ∈ N is affected by the sum of the signals from all source points, that is, the total signal at i is given by ∑_{j=1}^{p} f(d(i, j)). The signal at a demand point must exceed a certain threshold T in order for the demand point to be covered, that is, demand point i ∈ N is covered if and only if

    ∑_{j=1}^{p} f(d(i, j)) ≥ T.    (1)

The summation is referred to as the aggregation operator since it aggregates signal strengths from individual facilities. Of course, other aggregation operators (eg, max, truncated sum, etc) can also be considered. However, in the current paper, as in Berman et al (2010a), we focus on the summation operator.

We note that the mechanism above represents a direct generalization of the standard cover framework. Indeed, in the standard framework, for each potential location j ∈ N a coverage radius Rⱼ is specified such that a demand point i ∈ N is covered from j if and only if d(i, j) ≤ Rⱼ. We note that when signal strength functions are given by fᴵⱼ(d) = T if d ≤ Rⱼ and 0 otherwise, for each j ∈ N, then (1) represents the standard (individual) coverage condition.
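To make the cooperative coverage condition (1) concrete, the following short Python sketch (added here for illustration; the function names and the toy data are not part of the original paper) computes the cumulative signal at every node for a candidate set of facility locations and returns the total covered demand weight, using the decay function f(d) = 1/d² mentioned above.

```python
# Illustrative sketch of cooperative coverage, condition (1); all names and data are illustrative.

def decay(d):
    """Signal strength f(d) = 1/d^2; an infinite value at d = 0 means a node covers itself."""
    return float("inf") if d == 0 else 1.0 / d ** 2

def cumulative_signals(dist, facilities):
    """S_i = sum of f(d(i, j)) over all located facilities j (the aggregation operator)."""
    n = len(dist)
    return [sum(decay(dist[i][j]) for j in facilities) for i in range(n)]

def covered_weight(dist, weights, facilities, T):
    """Total demand weight of the nodes whose cumulative signal reaches the threshold T."""
    S = cumulative_signals(dist, facilities)
    return sum(w for w, s in zip(weights, S) if s >= T)

if __name__ == "__main__":
    # Toy 4-node example with symmetric, made-up distances.
    dist = [[0, 1, 2, 2],
            [1, 0, 1, 2],
            [2, 1, 0, 1],
            [2, 2, 1, 0]]
    weights = [1.0, 1.0, 1.0, 1.0]
    # With facilities at nodes 1 and 2 and T = 1.5, only the two facility nodes are covered,
    # while nodes 0 and 3 receive 1 + 0.25 = 1.25 < 1.5 and remain uncovered.
    print(covered_weight(dist, weights, facilities=[1, 2], T=1.5))  # prints 2.0
```

Under the individual cover model only the closest facility would matter; the cooperative model lets the signals of several facilities add up at a demand point.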
Three cooperative cover models are considered, which are direct generalizations of the corresponding problems in the classical cover framework: (i) finding the minimum number of source points required to attain a given coverage, (ii) covering the maximum weight of demand points with p source points, and (iii) for a given number of source points p, find the maximum possible threshold that yields a given coverage. These problems are described in more detail below.

2.1. Notation

Let

N       be the set of nodes,
wᵢ      be the weight associated with node i ∈ N,
W       be the total weight, W = ∑_{i∈N} wᵢ,
p       be the number of source points (known or unknown),
P       be the set {1, . . . , p},
dᵢⱼ     be the distance between nodes i ∈ N and j ∈ P,
f(d)    be the strength of the signal at distance d (monotonically decreasing),
fᴵⱼ(d)  be the signal strength used in the standard individual cover models,
fᵢⱼ     = f(dᵢⱼ),
T       be the threshold for coverage.

2.2. The problems

Problems 1, 2 and 3 are formulated as integer programming problems using the following two sets of decision variables:

    xⱼ = 1 if a facility is located at node j, and 0 otherwise;
    yᵢ = 1 if node i is covered, and 0 otherwise.

The problems are formulated for a general decay function f(d), for example f(d) = 1/d².

Problem 1: Cooperative Location Set Cover Problem (CLSCP) is to find the minimum number of source points p required to attain a given coverage αW (0 < α ≤ 1) with a given threshold T:

    Problem P1:  min ∑_{j=1}^{n} xⱼ
    s.t.  ∑_{j=1}^{n} f(dᵢⱼ) xⱼ ≥ T yᵢ,   i ∈ N
          ∑_{i=1}^{n} wᵢ yᵢ ≥ αW
          xⱼ ∈ {0, 1}, yᵢ ∈ {0, 1},   i, j ∈ N.    (2)

When α = 1 the problem is called the Cooperative Set Covering Problem (CSCP). We also note that with the signal strength function fᴵ(d) defined earlier, this problem becomes the standard (individual cover) Location Set Cover Problem (LSCP).

Problem 2: Cooperative Maximum Covering Location Problem (CMCLP) with p source points (facilities) and a given threshold T is given by

    Problem P2:  max ∑_{i=1}^{n} wᵢ yᵢ
    s.t.  ∑_{j=1}^{n} f(dᵢⱼ) xⱼ ≥ T yᵢ,   i ∈ N
          ∑_{j=1}^{n} xⱼ = p
          xⱼ ∈ {0, 1}, yᵢ ∈ {0, 1},   i, j ∈ N.    (3)

This problem generalizes the Maximal Cover Location Problem (MCLP), which is obtained when the signal strength fᴵ(d) is used in the formulation above.
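The formulations in this paper are solved with the CPLEX IP solver (see Section 4.2). Purely to illustrate how formulation (3) translates into code, the sketch below states the CMCLP in the open-source PuLP modeller; the use of PuLP (and its bundled CBC solver) is an assumption made for this example only and is not what was used in the reported experiments.

```python
# Illustrative statement of formulation (3), the CMCLP, in PuLP; data handling is illustrative.
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def solve_cmclp(dist, weights, p, T):
    """Maximize covered weight with exactly p facilities and coverage threshold T."""
    n = len(dist)
    # f_ij = f(d_ij) = 1/d_ij^2, with the convention f_ij = T when d_ij = 0
    # so that a node always covers itself (see Section 3.2).
    f = [[T if dist[i][j] == 0 else 1.0 / dist[i][j] ** 2 for j in range(n)] for i in range(n)]

    prob = LpProblem("CMCLP", LpMaximize)
    x = LpVariable.dicts("x", range(n), cat=LpBinary)  # x_j = 1 if a facility is located at node j
    y = LpVariable.dicts("y", range(n), cat=LpBinary)  # y_i = 1 if node i is covered

    prob += lpSum(weights[i] * y[i] for i in range(n))                  # objective: covered weight
    for i in range(n):                                                  # cooperative signal constraints
        prob += lpSum(f[i][j] * x[j] for j in range(n)) >= T * y[i]
    prob += lpSum(x[j] for j in range(n)) == p                          # locate exactly p facilities

    prob.solve()
    chosen = [j for j in range(n) if x[j].value() > 0.5]
    covered = sum(weights[i] for i in range(n) if y[i].value() > 0.5)
    return chosen, covered
```

As discussed in Section 3.1, the signal constraints make this model far less 'integer friendly' than the classical MCLP, so solve times grow quickly with n.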
Problem 3: Generalized Cooperative p-Centre (GCpC) is to find the maximum possible threshold T that yields a given coverage αW with a given number of facilities p:

    Problem P3:  max T
    s.t.  ∑_{j=1}^{n} f(dᵢⱼ) xⱼ ≥ T yᵢ,   i ∈ N
          ∑_{i=1}^{n} wᵢ yᵢ ≥ αW
          ∑_{j=1}^{n} xⱼ = p
          xⱼ ∈ {0, 1}, yᵢ ∈ {0, 1},   i, j ∈ N.    (4)

For α = 1 we obtain the Cooperative p-Centre Problem (CpCP). As before, with the signal strength function fᴵ(d), the problem becomes the standard p-centre problem.

We assume that a node can hold at most one facility. While this assumption can be made without loss of generality with the 'standard' set covering, maximum covering or p-centre problems, because there is no advantage to locating more than one facility on the same node, such is not the case for the cooperative covering case. In fact, as the following theorem shows, a solution involving co-location can outperform a non-co-located solution by an arbitrarily large value.

Theorem 1: When more than one facility can be located at the same node, the optimal value of the objective function can increase by an arbitrarily large value.

Proof: We prove the theorem by an example. Consider the following star network with n nodes. Node #1 is linked to all other nodes with a link of length 1, and there are no other links. The distance between node #1 and all other nodes is equal to 1, and the distance between two nodes (neither of which is node #1) is equal to 2. We use the decay function f(d) = 1/d², T = 1.5 and all wᵢ = 1. Consider the CMCLP where p = 2 facilities need to be located. When both facilities are located at node #1, the signal covers all n nodes because the cumulative signal is equal to 2. Since the maximum possible cover is n, this is the optimal solution. When no two facilities can be located at the same node, the total maximum coverage is equal to 3. To show this, consider locating one facility on node #1 and a second facility on another node. Only these two nodes are covered because all other nodes receive a signal of 1.25 and are not covered. If both facilities are located on nodes different from node #1, node #1 is covered by a signal of 2, and the nodes where the facilities are located are also covered. All other nodes receive a signal of 0.5 and are not covered. Therefore, the maximum possible cover is equal to 3. The ratio between the two solutions is n/3, which can be made arbitrarily large by selecting an appropriate n. The same example can be easily extended to Problems 2 and 3. □

Of course, the co-locations can easily be represented in the model above by creating p replications of each potential facility location, all located at 0 distance from each other. However, this obviously significantly increases the dimensionality of the models.

The following theorem was proven in Berman et al (2010a) for these problems in the plane. The same proof applies to the network case as well.

Theorem 2: Assume the total available weight W increases at most polynomially with the number of customer locations n. For a given accuracy ε > 0, suppose an optimal algorithm with worst case performance of O(C(n)) is available for the solution of one of the three problems (1, 2 or 3), where C(n) is some function of the instance size. Then the other two problems can be optimally solved in time of at most O(C(n) log n).

As remarked in Berman et al (2010a), the reduction arguments in the proof of the previous result can be applied to heuristic procedures as well, that is, if a heuristic procedure exists for one of the three problems, a procedure with similar performance and running time exists for the other problems as well. We can thus concentrate on deriving efficient exact or approximate algorithms for any one of the problems above.
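To illustrate one direction of this reduction, the sketch below (added for illustration; not the implementation behind Theorem 2) recovers a solution to Problem 3 by a bisection search over the threshold T, calling any available solver for Problem 2 at each step. The solver interface `cmclp_cover` and the tolerance `eps` are assumptions made for the example.

```python
# Sketch: solving Problem 3 (GCpC) by bisection over T, using a Problem 2 (CMCLP) solver.

def gcpc_by_bisection(cmclp_cover, alpha, total_weight, t_low, t_high, eps=1e-6):
    """Return (approximately) the largest threshold T whose optimal covered weight is >= alpha * W.

    `cmclp_cover(T)` is assumed to return the optimal objective value of Problem 2 for threshold T.
    Because raising T only tightens the signal constraints, the achievable cover is
    non-increasing in T, so bisection applies; t_low must already meet the target.
    """
    target = alpha * total_weight
    while t_high - t_low > eps:
        t_mid = 0.5 * (t_low + t_high)
        if cmclp_cover(t_mid) >= target:
            t_low = t_mid      # the coverage target is still met: try a larger threshold
        else:
            t_high = t_mid     # the target is missed: the sought threshold lies below t_mid
    return t_low
```

A bisection of this kind is only meant to convey the idea behind the O(C(n) log n) bound; the formal argument is given in Berman et al (2010a).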
3. Solution approaches

We concentrate on solving Problem 2. The other problems can be solved by applying Theorem 2.

3.1. Exact approaches

We note that since Problem 2 was formulated as a linear Integer Program (IP), any off-the-shelf IP solver is capable of producing a solution to this problem. The standard (individual cover) MCLP is known to be 'integer friendly'—that is, the optimal solution to the LP relaxation frequently has only integral entries, thus providing the solution to the IP as well. We note that in view of the first constraint in Problem 2, the integer-friendly property is not likely to hold in the current case. Computational results using the CPLEX IP solver are presented in Section 4 below. Indeed, it appears that the CMCLP is significantly harder to solve than its standard (individual cover) counterpart.

3.1.1. Lagrangian relaxation. Since Lagrangian relaxation is very efficient for solving the standard maximal covering problems (Daskin, 1995, 2006), it seems natural to attempt the same approach in the current framework. To this end, we formulate Problem 2 as a Lagrangian relaxation approach embedded in a branch and bound algorithm.

The Lagrangian problem is:

    Problem L(λ):  L(λ) = max { ∑_{i=1}^{n} wᵢ yᵢ + ∑_{i=1}^{n} λᵢ [ ∑_{j=1}^{n} f(dᵢⱼ) xⱼ − T yᵢ ] }
    s.t.  ∑_{j=1}^{n} xⱼ = p
          λᵢ ≥ 0
          xⱼ ∈ {0, 1}, yᵢ ∈ {0, 1},   i, j ∈ N.    (5)

The objective function of L(λ) can be written as

    max { ∑_{i=1}^{n} (wᵢ − λᵢ T) yᵢ + ∑_{j=1}^{n} [ ∑_{i=1}^{n} λᵢ f(dᵢⱼ) ] xⱼ }.

Obviously, the optimal solution of L(λ) is:

    yᵢ = 1 if wᵢ ≥ λᵢ T, and 0 otherwise,

and xⱼ = 1 for the p largest values of ∑_{i=1}^{n} λᵢ f(dᵢⱼ). The Lagrangian dual can be solved using subgradient optimization.
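These two observations make each evaluation of L(λ) straightforward. The sketch below (added for illustration; the production code of this paper is in Fortran) evaluates the bound for a given multiplier vector and performs one projected subgradient step on λ.

```python
# Sketch: evaluating the Lagrangian bound L(lambda) of (5) and taking one subgradient step.

def evaluate_lagrangian(f, w, T, lam, p):
    """f[i][j] = f(d_ij).  Returns (bound L(lam), facility set, y vector) for fixed lam."""
    n = len(w)
    # y_i = 1 exactly when its coefficient w_i - lam_i * T is non-negative.
    y = [1 if w[i] - lam[i] * T >= 0 else 0 for i in range(n)]
    # x_j = 1 for the p largest column scores sum_i lam_i * f_ij.
    score = [sum(lam[i] * f[i][j] for i in range(n)) for j in range(n)]
    chosen = sorted(range(n), key=lambda j: score[j], reverse=True)[:p]
    bound = sum((w[i] - lam[i] * T) * y[i] for i in range(n)) + sum(score[j] for j in chosen)
    return bound, chosen, y

def subgradient_step(f, T, lam, chosen, y, step):
    """Move lam against the subgradient of the relaxed constraints, projected onto lam >= 0."""
    n = len(lam)
    g = [sum(f[i][j] for j in chosen) - T * y[i] for i in range(n)]  # slack of relaxed constraint i
    return [max(0.0, lam[i] - step * g[i]) for i in range(n)]
```

As reported next, even with such a dual scheme the resulting bound is weak for the cooperative model, so the approach was not pursued further.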
We were somewhat disappointed that the bound obtained by the Lagrangian relaxation is very poor. For example, for the smallest test problem (n = 100, p = 5) the optimal solution is 10 (see Table 2). The Lagrangian bound at the root is equal to 18, leading to a very ineffective upper bound. In fact, total enumeration for this problem was much faster than the branch and bound approach based on the Lagrangian relaxation because only a small number of branches were eliminated. The following example shows that indeed the upper bound of the Lagrangian relaxation can be very poor. In conclusion, it appears that the Lagrangian relaxation is not an effective approach for solving discrete cooperative cover models.

3.1.2. An example for which Lagrangian relaxation yields a poor bound. Let wᵢ = 1, T = 1 and f(dᵢⱼ) = 1/(2p). No point is covered by selecting p facilities (the total signal is 0.5), thus the optimal solution is 0. Problem L for a given set of λᵢ ≥ 0 is

    F(λ) = max { ∑_{i=1}^{n} (1 − λᵢ) yᵢ + ∑_{i=1}^{n} λᵢ (1/(2p)) ∑_{j=1}^{n} xⱼ }.

Since ∑_{j=1}^{n} xⱼ = p,

    F(λ) = max { ∑_{i=1}^{n} (1 − λᵢ) yᵢ + ½ ∑_{i=1}^{n} λᵢ } = ∑_{i=1}^{n} [ max{0, 1 − λᵢ} + λᵢ/2 ].

The minimum of F(λ) is obtained for λᵢ = 1 and its value is n/2. The gap for this example is n/2, which can be made as large as desired.

3.2. Heuristic algorithms

We design algorithms to solve Problem 2. The other problems can be solved by applying the algorithm for the solution of Problem 2 (Theorem 2).

Once a set P of p nodes is selected, the value of the objective function F(P) can be directly calculated. Therefore, a reasonable neighborhood for the search is all possible exchanges of one selected node with one non-selected node. This is similar to the exchange algorithm suggested by Teitz and Bart (1968) for the solution of the p-median problem. The cardinality of such a neighborhood is p(n − p).

In preparation for any of the heuristic search algorithms, the distance matrix {dᵢⱼ} between nodes is calculated and a second matrix with the values fᵢⱼ = f(dᵢⱼ) is calculated. Note that if the decay function used is f(d) = 1/d² (or any function which is infinite at zero distance), then when dᵢⱼ = 0 we use fᵢⱼ = T. This guarantees that each node covers itself.

3.2.1. The ascent approach. The ascent algorithm is obvious (for complete details see, for example, Berman et al, 2009). We show how to execute it in an efficient way. Let P be a selected set of p nodes. Calculating the value of the objective function directly requires O(np) time. Therefore, calculating the p(n − p) values of the objective function in the neighborhood directly requires O[np²(n − p)] time. We can streamline the calculation and reduce the complexity by a factor of p. We maintain a vector of the cumulative signal received by each node: Sᵢ = ∑_{j∈P} fᵢⱼ. Once Sᵢ is known, the value of the objective function is ∑_{i: Sᵢ ≥ T} wᵢ. When a node k ∈ P is taken out and a node l ∉ P is added to P, the new value of Sᵢ is Sᵢ − fᵢₖ + fᵢₗ, which is calculated in O(1). Therefore, calculating the value of the objective function following such an exchange requires O(n) time and evaluating all exchanges in the neighborhood requires O[np(n − p)] time. To avoid accumulation of round-off errors, the values Sᵢ for i = 1, . . . , n are recalculated at the end of each iteration in O(np) time. The complexity of one iteration is therefore O[np(n − p)].
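The sketch below (added for illustration; the algorithms of this paper were coded in Fortran) implements one full pass over the exchange neighborhood exactly as described: the vector S is maintained so that each tentative swap is evaluated in O(n), and S is recomputed from scratch after an accepted move to avoid round-off drift.

```python
# Sketch of the ascent (best-improvement exchange) step with incremental updates of S_i.
import random

def ascent(fmat, w, T, p, rng=random):
    """fmat[i][j] = f_ij (with f_ii = T so a node covers itself). Returns a locally optimal set P."""
    n = len(w)
    P = set(rng.sample(range(n), p))
    S = [sum(fmat[i][j] for j in P) for i in range(n)]          # cumulative signal at each node

    def cover(sig):
        return sum(w[i] for i in range(n) if sig[i] >= T)

    while True:
        best_val, best_move = cover(S), None
        for k in P:
            for l in range(n):
                if l in P:
                    continue
                # The swap k -> l is evaluated in O(n) using S_i - f_ik + f_il.
                val = sum(w[i] for i in range(n) if S[i] - fmat[i][k] + fmat[i][l] >= T)
                if val > best_val:
                    best_val, best_move = val, (k, l)
        if best_move is None:
            return P                                            # local maximum reached
        k, l = best_move
        P.remove(k)
        P.add(l)
        S = [sum(fmat[i][j] for j in P) for i in range(n)]      # full recomputation avoids round-off
```

One pass over the neighborhood costs O[np(n − p)], matching the bound derived above; restarting the procedure from many random initial sets (1000 runs in Section 4.3) gives the multi-start ascent reported in Table 3.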
3.2.2. The improved ascent approach. We noticed that the ascent algorithm did not perform very well, especially when the cover was not much greater than the value of p. Every random selection of p nodes covers at least the p selected nodes. Therefore, only a few iterations are performed by the ascent algorithm when the cover does not exceed p by much. If all wᵢ = 1 (or all of them are integer), then the cover must be integer and an improvement in the value of the objective function must be at least 1. A local maximum obtained by the ascent approach may be improved if we modify the algorithm to break ties between two solutions that have the same value of the objective function. This can be done by creating a well defined hierarchy between solutions that have the same cover. Let the set of all non-covered nodes for a set P be N′(P). For every i ∈ N′(P), Sᵢ < T. As a secondary value, H(P) = min_{i∈N′(P)} [T − Sᵢ] is calculated. Two sets P₁ and P₂ satisfy P₁ ≻ P₂ if either F(P₁) > F(P₂), or F(P₁) = F(P₂) and H(P₁) < H(P₂). This means that if two solutions have the same cover, the solution in which the non-covered node closest to the threshold is 'closer' to it is preferred. Solutions that contain a node 'very close' to the threshold are more likely to generate an improving move in subsequent iterations.

Two variants of the improved approach are suggested. In Variant #1 we just prefer to select a solution with a lower value of H(P) when there is a tie among the best improving solutions. In Variant #2 we proceed with the ascent iterations to solutions with the same cover but with a smaller value of H(P). The algorithm is terminated when neither an improving exchange nor an exchange leading to a better value of H(P) with the same cover exists.

Variant #1: When there is a tie between the best improving exchanges, the exchange with the lowest value of H(P) is selected.

Variant #2: In addition to the rule of Variant #1, if there is no improving exchange but an exchange with the same cover and a lower value of H(P) exists, a move to a solution with the same cover and the lowest value of H(P) is performed.

Variant #2 allows the ascent approach to possibly move to solutions with the same cover if no improvement is possible, with the hope that following a few iterations of this type an improving move will be found. Also, as is the case for Variant #1, even when an improved solution exists in the neighborhood, we would prefer to move to a solution with one non-covered node close to the threshold, hoping that this node will be covered in subsequent moves. Note that these variants can operate as a regular ascent algorithm by using F(P) − εH(P) for a small enough ε > 0 as the objective function.
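A minimal sketch of this comparison rule (added for illustration only): solutions are ranked by the pair (F(P), −H(P)), so that for equal cover the set whose closest non-covered node is nearest the threshold is preferred.

```python
# Sketch of the (F, H) hierarchy used by the improved ascent: larger F first, then smaller H.

def f_and_h(S, w, T):
    """Return the cover F(P) and the secondary value H(P) from the signal vector S."""
    cover = sum(w[i] for i in range(len(w)) if S[i] >= T)
    gaps = [T - S[i] for i in range(len(w)) if S[i] < T]
    h = min(gaps) if gaps else 0.0           # H(P) = min over non-covered nodes of T - S_i
    return cover, h

def preferred(S1, S2, w, T):
    """True if the solution with signal vector S1 is preferred to the one with S2."""
    f1, h1 = f_and_h(S1, w, T)
    f2, h2 = f_and_h(S2, w, T)
    return (f1, -h1) > (f2, -h2)
```

Variant #1 applies this rule only to break ties among the best improving exchanges, while Variant #2 also accepts a move with equal cover whenever it strictly lowers H(P).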
3.2.3. Simulated annealing. The simulated annealing (Kirkpatrick et al, 1983) simulates the cooling process of hot melted metals. We follow the approach adopted in Berman et al (2009). The variant used in this paper depends on three parameters: the starting temperature T₀, the number of iterations I, and the factor α < 1 by which the temperature is lowered every iteration.

1. Set the temperature T to T₀. Randomly select a set P of p nodes, and set the best found solution to F(P). Set the iteration counter to zero.
2. Randomly select a node in P for removal and a node in N − P for addition to P.
3. Calculate ΔF, the change in the value of the objective function by this exchange.
4. If ΔF ≥ 0, then:
   — Perform the exchange.
   — If the new F(P) is better than the best found solution, update the best found solution.
   — Go to Step 6.
5. If ΔF < 0, calculate δ = ΔF/T. Perform the exchange with probability e^δ.
6. Multiply T by the factor α, increase the iteration counter by one, and return to Step 2 unless the number of iterations reaches I.

Note that calculating ΔF can be done in O(n) time by maintaining a vector Sᵢ as explained in Section 3.2.1. We use the cover F(P) as the value of the objective function. There is no need to consider H(P) to break ties as suggested in the improved ascent algorithms because a move to a solution with the same cover will be accepted regardless of the temperature.

Following extensive experiments with the 40 Beasley (1990) problems, and attempting not to exceed the time required for the ascent algorithm by more than 100 fold, the following parameters were selected: T₀ = 10, I = 3000p(n − p) and α = 1 − 5/I.
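The listing below transcribes Steps 1–6 into Python (added here for illustration only); the parameter defaults follow the values just quoted, and ΔF is computed with the incremental update of Sᵢ from Section 3.2.1.

```python
# Sketch of the simulated annealing of Section 3.2.3 (Steps 1-6).
import math
import random

def simulated_annealing(fmat, w, T_cov, p, T0=10.0, I=None, alpha=None, rng=random):
    """fmat[i][j] = f_ij, T_cov is the coverage threshold.  Returns the best cover found."""
    n = len(w)
    I = I if I is not None else 3000 * p * (n - p)          # iteration count used in the paper
    alpha = alpha if alpha is not None else 1.0 - 5.0 / I   # cooling factor 1 - 5/I
    P = rng.sample(range(n), p)
    S = [sum(fmat[i][j] for j in P) for i in range(n)]

    def cover(sig):
        return sum(w[i] for i in range(n) if sig[i] >= T_cov)

    best, temp = cover(S), T0
    for _ in range(I):
        k = rng.choice(P)                                           # node leaving P (Step 2)
        l = rng.choice([j for j in range(n) if j not in P])         # node entering P
        S_new = [S[i] - fmat[i][k] + fmat[i][l] for i in range(n)]  # O(n) update (Section 3.2.1)
        dF = cover(S_new) - cover(S)                                # Step 3
        if dF >= 0 or rng.random() < math.exp(dF / temp):           # Steps 4-5: accept with prob e^(dF/T)
            P[P.index(k)] = l
            S = S_new
            best = max(best, cover(S))
        temp *= alpha                                               # Step 6
    return best
```

With the default I the loop is long (3000·p·(n − p) iterations), in line with the time budget described above.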


3.2.4. Tabu search. Tabu search (Glover, 1977, 1986; Glover and Laguna, 1997) proceeds from the terminal solution of the ascent algorithm by allowing downward moves, hoping to obtain a better solution in subsequent iterations. A tabu list of forbidden moves is maintained. Tabu moves stay in the tabu list for tabu tenure iterations. To avoid cycling, the forbidden moves are the reverse of recent moves. Similarly to the ascent algorithm, the changes in the value of the objective function in the neighbourhood are evaluated. If a move leads to a solution better than the best found solution, this move is executed and the tabu list emptied. If none of the moves leads to a solution better than the best found solution, the best permissible move (disregarding moves in the tabu list), whether improving or not, is executed.

Each move involves removing a node in P and substituting it with a node not in P. The tabu list consists of nodes recently removed from P so that they are not allowed to re-enter P. An exchange between two nodes is permissible if the node entering P is not in the tabu list. The maximum possible number of entries in the tabu list is n − p. Following extensive experiments, we used for the tabu tenure a randomly generated value between 10% and 50% of n − p, rounded down. The tabu tenure is randomly generated in each iteration. A wide range for the tabu tenure was shown to yield good results in Drezner and Marcoulides (2009).

We tested three variants of the tabu search. One employs the regular ascent approach to compare two solutions and the other two employ Variant #1 or Variant #2 of the improved ascent criterion for comparing two solutions. The results obtained by using Variant #2 of the improved ascent criterion were the best on our set of test problems. Therefore, we recommend applying the Variant #2 improved ascent approach in the tabu search.

Let h be the number of iterations performed by Variant #2 of the improved ascent approach. In order to require a run time comparable with that of Variant #2 of the improved ascent approach, the tabu algorithm was run for an additional 29h iterations. Each problem was solved 10 times.
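A compact sketch of this tabu loop follows (for illustration only; it compares solutions by the plain cover F(P) rather than the recommended Variant #2 criterion, and it omits the restart from an ascent solution).

```python
# Sketch of the tabu search of Section 3.2.4: recently removed nodes may not re-enter P.
import random

def tabu_search(fmat, w, T, p, iterations, rng=random):
    n = len(w)
    P = set(rng.sample(range(n), p))

    def cover(Q):
        return sum(w[i] for i in range(n) if sum(fmat[i][j] for j in Q) >= T)

    best_set, best_val = set(P), cover(P)
    tabu_until = {}                                      # node -> first iteration at which it may re-enter P
    for it in range(iterations):
        tenure = rng.randint((n - p) // 10, (n - p) // 2)   # 10%-50% of n - p, rounded down
        move, move_val = None, float("-inf")
        for k in P:
            for l in range(n):
                if l in P:
                    continue
                val = cover(P - {k} | {l})
                permissible = tabu_until.get(l, 0) <= it
                # A move producing a new overall best is always allowed (aspiration).
                if (permissible or val > best_val) and val > move_val:
                    move, move_val = (k, l), val
        if move is None:
            continue
        k, l = move
        P = P - {k} | {l}
        tabu_until[k] = it + tenure                      # the removed node becomes tabu
        if move_val > best_val:
            best_set, best_val = set(P), move_val
            tabu_until.clear()                           # empty the tabu list on a new best solution
    return best_set, best_val
```

In practice each candidate move would be evaluated with the incremental update of Sᵢ from Section 3.2.1 rather than the full recomputation used here for brevity.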
3.2.5. A genetic algorithm. Genetic algorithms (Holland, 1975; Drezner and Drezner, 2005) simulate the evolution of species and the survival of the fittest. Constructing a good genetic algorithm for our problem turned out to be quite a challenge. Following extensive experiments and testing many of the improvements proposed in Drezner and Drezner (2005), we recommend the following parameters of the genetic algorithm. We maintain a population of pop = max{2n/p, 100} solutions (a solution is a list of p nodes) and ran the algorithm for 10·pop generations. At each generation, two parents are randomly selected from the population and produce an offspring. We used the distance criterion (Drezner and Marcoulides, 2003; Drezner and Drezner, 2005) for parent selection. Specifically, we randomly selected one parent and selected two candidates for the second parent. The distances between the two candidates and the first parent (distance between two solutions is defined as the number of different selected nodes in the two solutions) are calculated, and the farther candidate is selected as the second parent. If the offspring is better than the worst population member, it replaces it unless it is identical to an existing population member.

3.2.6. The merging process. The crucial part of the algorithm is the merging process of two parents to produce an offspring. Many genetic algorithms use a crossover operator for the creation of the offspring. We followed the approach suggested in Berman and Drezner (2007) and Alp et al (2003), which takes advantage of the special structure of our problem.

Each parent consists of a list of p nodes. The two lists of p nodes are merged and the result is a list of cardinality p ≤ p′ ≤ 2p. When there are c nodes common to the two parents, p′ = 2p − c. Four additional nodes are added to the list as a mutation, forming a list L of p′ + 4 nodes (the selection of four additional nodes is a result of extensive experiments with various numbers of additional nodes). The c common nodes must be part of the offspring. The remaining p − c nodes need to be selected from the remaining p′ + 4 − c = 2(p − c) + 4 nodes. This means that all the nodes which are not in L cannot join the offspring. Selecting these p − c nodes is done by Variant #2 of the ascent algorithm based on the 2(p − c) + 4 candidate nodes. We tried the ascent algorithm and the two variants of the improved ascent algorithm, and Variant #2 clearly provided the best results. This restricted ascent algorithm is faster than the regular ascent algorithm because it involves the selection of p − c nodes out of 2(p − c) + 4 nodes rather than selecting p out of n nodes.
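The merging step can be sketched as follows (for illustration only; a plain greedy restricted ascent stands in for the Variant #2 procedure that is actually used, and the helper names are ours).

```python
# Sketch of the merging process of Section 3.2.6: offspring nodes come only from the merged list.
import random

def merge_parents(parent1, parent2, n, p, fmat, w, T, extra=4, rng=random):
    """Keep the common nodes, then complete the offspring from the restricted candidate list."""
    common = set(parent1) & set(parent2)
    candidates = set(parent1) | set(parent2)               # p <= |candidates| <= 2p
    outside = [j for j in range(n) if j not in candidates]
    candidates |= set(rng.sample(outside, min(extra, len(outside))))   # four mutation nodes

    def cover(Q):
        return sum(w[i] for i in range(n) if sum(fmat[i][j] for j in Q) >= T)

    offspring = set(common)
    pool = candidates - offspring                          # the 2(p - c) + 4 candidate nodes
    while len(offspring) < p:
        # Greedy stand-in for the restricted Variant #2 ascent over the candidate list.
        best_node = max(pool, key=lambda j: cover(offspring | {j}))
        offspring.add(best_node)
        pool.remove(best_node)
    return sorted(offspring)
```

Because the completion works on only 2(p − c) + 4 candidates instead of all n nodes, producing an offspring is much cheaper than a full ascent run.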
3.2.7. The improved genetic algorithm. We also propose to enhance the genetic algorithm similarly to the suggestion in Drezner (2005). The idea is to generate a better starting population, and this is achieved by applying Variant #2 to each starting population member. To run the algorithm with population pop 10 times requires 10·pop applications of Variant #2. Therefore, if we used a population of 100 for all problems, it would require 1000 applications of Variant #2, which takes a much longer time than the total time required for the application of Variant #2 as reported in Table 3. We therefore experimented with a population of pop = 30. Running this improved version of the genetic algorithm 10 times requires a longer time than 300 runs of the Variant #2 improved ascent algorithm because the time of the evolution is added to the time of the population construction.

Another way to interpret running the improved genetic algorithm 10 times is as the generation of 300 solutions by Variant #2. These 300 solutions are divided into 10 populations of 30 solutions each, and the evolutionary process is applied to each of these 10 populations for 300 generations, attempting to improve the best solution in each population by generated offspring (each of which requires a restricted Variant #2 improved ascent algorithm). The improved genetic algorithm requires a longer run time than running Variant #2 300 times because of the added time for the evolution.

4. Computational experiments

We tested the algorithms on the set of 40 problems proposed by Beasley (1990) for testing algorithms for p-median problems. We used f(d) = 1/d² and T = 0.01p × 10⁻⁶. The following computational experiments were performed:

• Comparison of cooperative cover and individual cover solutions. The results are reported in Section 4.1 and in Table 1.
• Optimal solutions to the cooperative cover model using CPLEX. These results are reported in Section 4.2 and in Table 2.
• Heuristic solutions to the cooperative cover model. These results are reported in Section 4.3 and in Tables 3–5.

4.1. Comparison of cooperative cover and individual cover solutions

The purpose of this set of experiments was to compare the optimal solutions to the cooperative and individual cover models. Specifically, we wanted to estimate the 'penalty' of using the classic individual cover objective when the cooperative cover objective is more appropriate. Clearly, the value of the cooperative cover solution is always at least as large as the value of the individual cover solution. However, for practical applications, it is the size of the gap between the two that is important: a large gap implies that recognizing that a given situation requires the cooperative rather than the individual cover framework is very important.

We used the first 10 Beasley (1990) problems in these experiments (the ones consisting of 100 and 200 node networks with various numbers of facilities). Each instance was solved to optimality using the cooperative and individual (standard) cover objectives. The results are presented in Table 1. The relative difference between the cooperative and individual cover optimal solutions ranges from 0 to 67% with the average difference being 31%. This shows that, as in the continuous location case described in Berman et al (2010a), the differences between the cooperative and individual cover solutions are quite large, and the choice of the appropriate framework for a given application is one of the most important decisions a location modeller has to make.

Table 1  Comparison of cooperative and individual cover solutions

n     p     Optimal solution                 Rel. difference (%)
            Coop cover    Indiv cover
100    5        10             6                   67
100   10        18            13                   38
100   10        15            13                   15
100   20        20            20                    0
100   33        35            33                    6
200    5        16            11                   45
200   10        24            16                   50
200   20        32            20                   60
200   40        47            40                   18
200   67        75            67                   12

4.2. Optimal solutions to the cooperative cover model using CPLEX

In this section we demonstrate the difficulty of solving the integer programming formulation for the cooperative cover model to optimality. CPLEX was applied to solve the integer programming formulation (Problem P2). The results are summarized in Table 2. If no optimal solution was found within 1 h, CPLEX was terminated and the current best solution recorded. This solution was then compared to the heuristic solutions for the same instances that are described in the following section.

Table 2  Solving max-cover problems by CPLEX

n     p    Result   Time (sec)      n     p    Result   Time (sec)
100    5     10*       0.03        500    5     38         **
100   10     18*       0.02        500   10     42†        **
100   10     15*       0.02        500   50    105         **
100   20     20*       0.01        500  100    146*       1.93
100   33     35*       0.02        500  167    187†        **
200    5     16*       7.21        600    5     52         **
200   10     24*       3.07        600   10     65         **
200   20     32*       0.13        600   60    134         **
200   40     47*       0.07        600  120    190†        **
200   67     75*       0.08        600  200    211*     122.84
300    5     27†        **         700    5     65         **
300   10     25†        **         700   10     61         **
300   30     58†        **         700   70    160†        **
300   60     80*       0.31        700  140    223         **
300  100    120*       0.30        800    5     82         **
400    5     35†        **         800   10     81         **
400   10     44†        **         800   80    174         **
400   40     71         **         900    5    109         **
400   80    112*       1.03        900   10    117         **
400  133    144*       2.52        900   90    205         **

*Optimal.
**Run terminated after 1 h.
†Best known.

Table 3 Solving max-cover problems by ascent algorithms


n p Best known Ascent (1000 runs) Variant 1 (1000 runs) Variant 2 (300 runs)

†  ‡  Time (min)    †  ‡  Time (min)    †  ‡  Time (min)

100 5 10* 973 0.27 0.05 942 0.58 0.05 283 0.57 0.02
100 10 18* 1000 0.00 0.14 1000 0.00 0.15 300 0.00 0.07
100 10 15* 1000 0.00 0.09 1000 0.00 0.10 300 0.00 0.06
100 20 20* 1000 0.00 0.04 1000 0.00 0.04 300 0.00 0.18
100 33 35* 1000 0.00 0.10 1000 0.00 0.10 300 0.00 0.42
200 5 16* 964 0.25 0.18 970 0.24 0.21 299 0.04 0.07
200 10 24* 962 0.17 0.64 883 0.54 0.72 272 0.39 0.25
200 20 32* 143 2.68 1.26 198 2.51 1.44 261 0.41 1.06
200 40 47* 1000 0.00 1.51 1000 0.00 1.64 300 0.00 2.96
200 67 75* 421 0.77 1.88 505 0.66 2.04 300 0.00 6.15
300 5 27 218 3.08 0.53 383 2.30 0.67 300 0.00 0.25
300 10 25 977 0.09 1.58 948 0.21 1.89 300 0.00 0.86
300 30 58 0 11.05 9.99 0 8.63 12.06 7 2.65 7.18
300 60 80* 1000 0.00 18.10 1000 0.00 20.24 300 0.00 23.23
300 100 120* 768 0.21 19.08 832 0.15 20.82 300 0.00 56.42
400 5 35 817 0.84 1.01 746 1.35 1.22 272 0.53 0.41
400 10 44 990 0.05 3.64 985 0.06 4.39 297 0.02 1.38
400 40 72 0 11.80 28.55 0 10.30 34.98 2 4.08 22.27
400 80 112* 0 2.50 60.53 0 2.46 67.62 34 0.85 78.00
400 133 144* 0 4.37 20.03 0 4.35 22.09 162 0.40 167.24
500 5 39 82 2.82 1.63 119 2.68 1.91 247 0.85 0.76
500 10 42 926 0.24 5.39 950 0.16 6.73 299 0.01 2.36
500 50 106 0 5.01 104.97 1 4.28 125.69 8 2.10 56.25
500 100 146* 914 0.06 169.35 915 0.06 190.50 300 0.00 200.16
500 167 187 0 6.59 54.97 0 6.47 62.79 5 1.36 451.28
600 5 54 186 2.23 2.44 89 1.91 2.96 108 1.23 0.97
600 10 66 130 1.94 8.44 354 1.47 10.38 237 0.56 3.87
600 60 137 0 2.40 259.15 26 2.93 279.15 190 0.33 122.45
600 120 190 0 7.46 387.20 0 6.48 442.78 28 0.68 488.21
600 200 211* 0 3.45 68.27 0 3.40 77.96 84 0.53 888.94
700 5 70 614 1.00 3.69 741 0.78 4.34 247 0.49 1.42
700 10 67 64 2.98 11.37 212 2.08 14.34 241 0.50 5.12
700 70 160 126 0.87 486.07 104 2.31 525.87 169 0.38 233.20
700 140 225 0 15.98 507.41 0 13.49 634.45 0 2.28 843.46
800 5 89 929 0.21 4.48 861 0.29 5.49 278 0.13 1.91
800 10 86 79 3.40 16.69 138 2.52 19.99 76 1.64 6.63
800 80 177 0 1.63 801.64 5 2.35 901.21 157 0.31 386.80
900 5 118 251 1.78 5.71 249 1.72 7.02 86 1.54 2.21
900 10 131 134 2.79 22.76 136 2.45 27.41 80 1.53 9.19
900 90 209 0 1.66 1340.74 11 3.45 1386.34 113 0.45 624.00
Average 441.7 2.57 110.78 457.6 2.39 122.68 196.1 0.67 117.52
*Optimal by Table 2.
†Number of times best known solution found.
‡Percent of average solution below the best known solution.

While all instances with n ≤ 200 were solved to optimality in a relatively short run time, CPLEX failed to find optimal solutions to the majority of instances with n ≥ 300 within 1 h of CPU time. It is interesting to note that for the majority of instances with n ≤ 500, CPLEX found the best known solution.

4.3. Computational results with heuristic algorithms

The six heuristic algorithms were coded in Fortran, compiled by Intel Fortran 9.0 and run on a 2.8 GHz Pentium 4 desktop computer with 256 MB RAM. The ascent algorithm and Variant #1 of the improved ascent were replicated 1000 times for each of the problems. In order to require comparable run times by Variant #2 of the ascent algorithm, it was run 300 times. The simulated annealing, tabu search and the genetic algorithm were run 10 times each for every problem. The results are summarized in Tables 3 and 4.

Table 4 Solving max-cover problems by metaheuristics


n p Best known Sim. annealing (10 runs) Tabu search (10 runs) Genetic alg. (10 runs)

†  ‡  Time (min)    †  ‡  Time (min)    †  ‡  Time (min)

100 5 10* 10 0.00 0.12 10 0.00 0.02 10 0.00 0.02


100 10 18* 10 0.00 0.30 10 0.00 0.07 10 0.00 0.07
100 10 15* 10 0.00 0.29 10 0.00 0.06 10 0.00 0.07
100 20 20* 10 0.00 0.59 10 0.00 0.01 10 0.00 0.09
100 33 35* 10 0.00 0.89 10 0.00 0.36 10 0.00 1.11
200 5 16* 10 0.00 0.43 10 0.00 0.07 10 0.00 0.05
200 10 24* 10 0.00 0.97 10 0.00 0.25 10 0.00 0.12
200 20 32* 10 0.00 2.14 10 0.00 0.96 10 0.00 0.49
200 40 47* 10 0.00 4.62 10 0.00 2.90 10 0.00 6.01
200 67 75* 10 0.00 7.07 10 0.00 5.98 10 0.00 12.45
300 5 27 10 0.00 1.09 10 0.00 0.25 10 0.00 0.12
300 10 25 10 0.00 2.20 10 0.00 0.89 10 0.00 0.36
300 30 58 5 0.86 7.92 8 0.34 7.60 10 0.00 4.40
300 60 80* 10 0.00 17.85 10 0.00 23.22 10 0.00 47.50
300 100 120* 10 0.00 28.05 10 0.00 51.38 10 0.00 51.28
400 5 35 10 0.00 1.94 10 0.00 0.40 10 0.00 0.37
400 10 44 10 0.00 4.14 10 0.00 1.45 10 0.00 0.66
400 40 72 10 0.00 19.12 9 0.14 22.97 10 0.00 17.35
400 80 112* 0 1.79 43.42 8 0.18 83.06 0 0.89 125.84
400 133 144* 0 0.90 67.72 7 0.21 165.48 10 0.00 191.51
500 5 39 10 0.00 2.94 10 0.00 0.69 10 0.00 0.75
500 10 42 10 0.00 6.13 10 0.00 2.39 10 0.00 1.09
500 50 106 10 0.00 40.08 10 0.00 58.73 10 0.00 34.62
500 100 146* 10 0.00 88.77 10 0.00 193.32 10 0.00 329.37
500 167 187 0 2.62 136.45 1 0.91 442.58 10 0.00 329.15
600 5 54 10 0.00 4.38 10 0.00 1.20 10 0.00 1.49
600 10 66 10 0.00 9.33 10 0.00 3.92 10 0.00 2.16
600 60 137 10 0.00 72.05 10 0.00 126.63 10 0.00 71.04
600 120 190 0 1.32 152.86 1 0.47 502.91 4 0.32 451.57
600 200 211* 0 1.42 237.27 5 0.24 905.08 10 0.00 790.03
700 5 70 10 0.00 6.00 10 0.00 1.52 10 0.00 2.45
700 10 67 10 0.00 11.92 10 0.00 4.77 10 0.00 3.27
700 70 160 10 0.00 115.30 10 0.00 232.63 10 0.00 144.35
700 140 225 0 1.42 243.73 1 1.11 826.92 0 0.62 726.95
800 5 89 10 0.00 8.05 10 0.00 1.96 10 0.00 3.38
800 10 86 10 0.00 16.48 10 0.00 6.81 10 0.00 4.43
800 80 177 10 0.00 171.03 10 0.00 408.75 10 0.00 233.35
900 5 118 10 0.00 10.68 10 0.00 2.13 10 0.00 4.27
900 10 131 10 0.00 22.58 10 0.00 10.48 7 0.69 5.75
900 90 209 10 0.00 251.84 10 0.00 647.09 10 0.00 330.07
Average 8.38 0.26 45.47 9.00 0.09 118.70 9.28 0.06 98.24
*Optimal by Table 2.
†Number of times best known solution found.
‡Percent of average solution below the best known solution.

In Table 3 we report the results for the three variants of the ascent algorithms. For each problem we depict: the best known solution, the number of times that the best known solution was found, the percentage of the average result below the best known solution and the time required for all runs in minutes.

The ascent algorithm failed to find the best known solution for 12 problems while Variant #1 failed to find it for eight problems and Variant #2 for only one problem (it found a solution of 224 four times for the n = 700, p = 140 problem). The average result of the ascent algorithm was 2.57% below the best known solution while the average result of Variant #1 was 2.39% below the best known solution, and for Variant #2 it was only 0.67% below the best known solution. The ascent algorithm found the best known solution in 44.2% of the runs while Variant #1 found it in 45.8% of the runs and Variant #2 found it in 65.4% of the runs. Run time was about 11% longer for Variant #1 than the 'standard' ascent because some effort is required to calculate H(P) and on the average it required more iterations because it found better solutions. Variant #2 required about the same run time for 300 replications. In conclusion, Variant #2 clearly outperformed the other ascent algorithms. Variant #1 improves the performance of the ascent algorithm.

Table 5 Solving max-cover problems by the improved genetic algorithm


n  p  BK  †  ‡  Time (min)    n  p  BK  †  ‡  Time (min)

100 5 10* 10 0.00 0.02 500 5 39 10 0.00 0.80


100 10 18* 10 0.00 0.10 500 10 42 10 0.00 2.55
100 10 15* 10 0.00 0.06 500 50 106 10 0.00 57.87
100 20 20* 10 0.00 0.21 500 100 146* 10 0.00 319.15
100 33 35* 10 0.00 0.81 500 167 187 9 0.05 471.12
200 5 16* 10 0.00 0.08 600 5 54 10 0.00 1.03
200 10 24* 10 0.00 0.26 600 10 66 10 0.00 4.03
200 20 32* 10 0.00 1.11 600 60 137 10 0.00 126.77
200 40 47* 10 0.00 5.33 600 120 190 9 0.05 527.68
200 67 75* 10 0.00 11.18 600 200 211* 10 0.00 1001.94
300 5 27 10 0.00 0.26 700 5 70 10 0.00 1.50
300 10 25 10 0.00 0.88 700 10 67 10 0.00 5.49
300 30 58 10 0.00 7.82 700 70 160 10 0.00 250.33
300 60 80* 10 0.00 42.12 700 140 225 2 0.36 900.18
300 100 120* 10 0.00 57.68 800 5 89 10 0.00 2.03
400 5 35 10 0.00 0.43 800 10 86 10 0.00 6.98
400 10 44 10 0.00 1.41 800 80 177 10 0.00 417.59
400 40 72 10 0.00 24.23 900 5 118 10 0.00 2.38
400 80 112* 10 0.00 115.61 900 10 131 10 0.00 9.62
400 133 144* 10 0.00 216.38 900 90 209 10 0.00 632.33
Average 9.75 0.01 130.68
*Optimal by Table 2.
†Number of times best known solution (BK) found.
‡Percent of average solution below the best known solution.

In Table 4 we report the results for the simulated annealing, tabu search and the genetic algorithm. The simulated annealing algorithm failed to find the best known solution for six problems while tabu search found it for all 40 problems. The genetic algorithm failed to find it for two problems (for problem n = 400, p = 80 it found a solution of 111 10 times, and for problem n = 700, p = 140 it found a solution of 224 six times and 223 four times). The average result of simulated annealing was, on the average, 0.26% below the best known solution while the average result of tabu search was 0.09% below the best known solution and for the genetic algorithm it was only 0.06% below the best known solution. The simulated annealing algorithm found the best known solution in 83.8% of the runs while tabu search found it in 90% of the runs and the genetic algorithm found it in 92.8% of the runs. The simulated annealing algorithm found the best known solution in all 10 runs for 33 of the 40 problems. The tabu search found it in all 10 runs for 32 of the 40 problems while the genetic algorithm found it for 36 of the 40 problems. Run time was about 45 min per problem for simulated annealing, 119 min for tabu search and 98 min for the genetic algorithm. In conclusion, tabu search and the genetic algorithm performed better than simulated annealing. Except for the two problems for which the genetic algorithm failed to find the best known solution while the tabu search did, the genetic algorithm performed the best.

Since the improved genetic algorithm takes longer than the other algorithms, we report its results separately in Table 5. By the run times reported in Table 5, each evolutionary process takes, on the average, 1.3 min per population for a total of about 13 min for all 10 runs. The improved genetic algorithm found the best known solution for all 40 problems. For 37 problems it found it in all runs. It found the best known solution 97.5% of the time and in the 10 instances out of 400 that it failed to find the best known solution, it found a solution with only one fewer node covered. The average solution was only 0.01% below the best known solution. Run time was about 131 min, which is somewhat longer than the other techniques. In conclusion, the improved genetic algorithm is clearly the best one and the extra run time required by it is definitely worth it.

5. Conclusions and future research

The discrete cooperative covering problem was analysed and solved. The problem is based on a new definition of coverage. Each facility is sending a signal that decays over distance. A node is considered covered if the total signal received by the node exceeds a certain threshold. Three cover-type problems were defined and formulated as linear IPs.
We showed, through computational experiments, that the differences between the cooperative and individual (traditional) cover solutions are substantial, indicating the importance of using the cooperative cover approach for the applications where it is appropriate.

We proposed and tested seven heuristic approaches: an ascent approach, two improvements on the ascent approach, simulated annealing, tabu search, a genetic algorithm and an improved genetic algorithm. All algorithms were tested on 40 test problems ranging between 100 and 900 nodes and between 5 and 200 facilities. For the maximal covering problem an integer programming formulation was solved by CPLEX. Experiments with Lagrangian relaxation failed to provide a good bound for a branch and bound algorithm. The improved genetic algorithm provided the best results. Variant #2 of the improved ascent algorithm was also effective and provided good results compared with the other ascent algorithms.

The obnoxious version of the problem is of interest. The objective is changed from a maximal cover objective to a minimal cover one. The continuous version of the problem, where the facilities can be located anywhere on the network and not necessarily on nodes, is a very challenging problem and deserves attention.

Acknowledgements—This research was supported, in part, by the Natural Sciences and Engineering Research Council of Canada.

References

Alp O, Drezner Z and Erkut E (2003). An efficient genetic algorithm for the p-median problem. Ann Opns Res 122: 21–42.
Beasley JE (1990). OR-library—Distributing test problems by electronic mail. J Opl Res Soc 41: 1069–1072.
Berman O and Drezner Z (2007). The multiple server location problem. J Opl Res Soc 58: 91–99.
Berman O, Drezner Z and Wesolowsky GO (2009). The maximal covering problem with some negative weights. Geogr Anal 41: 30–42.
Berman O, Drezner Z and Krass D (2010a). Cooperative cover location problems: The planar case. IIE Trans 42: 232–246.
Berman O, Drezner Z and Krass D (2010b). Generalized coverage: New developments in covering location models. Comput Opns Res 37: 1675–1687.
Current J, Daskin M and Schilling D (2002). Discrete network location models. In: Drezner Z and Hamacher HW (eds). Facility Location: Applications and Theory. Springer-Verlag: Berlin, pp 81–118.
Daskin MS (1995). Network and Discrete Location: Models, Algorithms, and Applications. John Wiley & Sons: New York.
Daskin MS (2006). Sitation. http://sitemaker.umich.edu/msdaskin/software#SITATION_Software.
Drezner T and Drezner Z (2005). Genetic algorithms: Mimicking evolution and natural selection in optimization models. In: Bar-Cohen Y (ed). Biomimetics—Biologically Inspired Technologies. CRC Press: Boca Raton, FL, pp 157–175.
Drezner Z (2005). Compounded genetic algorithms for the quadratic assignment problem. Opns Res Lett 33: 475–480.
Drezner Z and Marcoulides GA (2003). A distance-based selection of parents in genetic algorithms. In: Resende MGC and de Sousa JP (eds). Metaheuristics: Computer Decision-making. Kluwer Academic Publishers: Boston, MA, pp 257–278.
Drezner Z and Marcoulides GA (2009). On the range of tabu tenure in solving quadratic assignment problems. In: Petratos P and Marcoulides GA (eds). Recent Advances in Computing and Management Information Systems. Athens Institute for Education and Research: Athens, Greece, pp 157–168.
Glover F (1977). Heuristics for integer programming using surrogate constraints. Decision Sci 8: 156–166.
Glover F (1986). Future paths for integer programming and links to artificial intelligence. Comput Opns Res 13: 533–549.
Glover F and Laguna M (1997). Tabu Search. Kluwer Academic Publishers: Boston, MA.
Hogan K and Revelle C (1986). Backup coverage concepts in the location of emergency service. Mngt Sci 32: 1434–1444.
Holland JH (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press: Ann Arbor, MI.
Kirkpatrick S, Gelatt CD and Vecchi MP (1983). Optimization by simulated annealing. Science 220: 671–680.
Plastria F (2002). Continuous covering location problems. In: Drezner Z and Hamacher HW (eds). Facility Location: Applications and Theory. Springer-Verlag: Berlin, pp 37–79.
Schilling DA, Vaidyanathan J and Barkhi R (1993). A review of covering problems in facility location. Location Sci 1: 25–55.
Teitz MB and Bart P (1968). Heuristic methods for estimating the generalized vertex median of a weighted graph. Opns Res 16: 955–961.

Received December 2009;
accepted September 2010 after one revision
