
1. Introduction

This paper presents a tabu search approach to solving the traveling-salesman problem with time windows
(TSPTW) as defined by Baker (1983), Dumas et al. (1993), and Desrosiers et al. (1995). The objective of the
TSPTW is to find an optimal tour where a single vehicle is required to visit each of a given set of locations
(customers) exactly once and then return to its starting location. The vehicle must visit each location within
a specified time window, defined by an earliest service start time and latest service start time. If the vehicle
arrives at a service location before the earliest service start time, it is permitted to wait until the earliest
service start time is reached. The vehicle conducts its service for a known period of time and immediately
departs for the location of the next scheduled customer. All parameters of the problem are assumed to be
known with certainty.

The parameters required to define a specific instance of the problem are the travel times between all pairs
of locations, [travelTime.sub.ij]; the customer's service time, [serviceTime.sub.i]; the earliest service start
time, [earliestTime.sub.i]; and latest service start time, [latestTime.sub.i], for each of n customers. For
algorithmic simplicity, any nonzero [serviceTime.sub.i] is added to [travelTime.sub.ij] for customer i and all
customers j. If [latestTime.sub.i] is assumed to be the latest time that a service may end, then
[serviceTime.sub.i] is subtracted from [latestTime.sub.i]. Thus any nonzero [serviceTime.sub.i] need not be
considered explicitly by the solution algorithm after the effects are accounted for within the
[travelTime.sub.ij] and [latestTime.sub.i]. We assume the [travelTime.sub.ij] satisfy the triangle inequality.

A tour is defined by the order in which the n customers are served and may be represented by

$$T = \{\tau_0, \tau_1, \tau_2, \ldots, \tau_n, \tau_{n+1}\}, \qquad (1)$$

where $\tau_i$ is the index of the customer in the $i$th position of the route. For notational convenience, in the explanations that appear below we will presume, without loss of generality, that $\tau_i = i$ for every $i$. By convention the 'customer' in position 0 and position $n + 1$ is the depot, and the remaining customers may occupy any position from 1 to $n$ inclusive. Define $N$ = {the set of all customers including the depot}, so that $|N| = n + 1$ and $|T| = n + 2$. The variables required to define the tour's solution value are:

$$A_i = D_{i-1} + t_{i-1,i}, \quad \forall i, \qquad (2)$$

$$D_i = \max[A_i,\ \mathit{earliestTime}_i], \qquad (3)$$

$$W_i = \max[0,\ \mathit{earliestTime}_i - A_i] = D_i - A_i, \qquad (4)$$

where $A_i$ is the arrival time, $D_i$ is the departure time, and $W_i$ is the waiting time at customer i. Whenever $W_i > 0$, the vehicle arrives at customer i before $\mathit{earliestTime}_i$.
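The recursion in Eqs. (2)-(4) can be sketched as a single forward pass over the tour; the array names, the flat travel-time layout, and the assumption $D_0 = 0$ (used later in the paper) are illustrative choices, not the authors' code.

```c
#include <stddef.h>

/* Propagate arrival (A), departure (D) and waiting (W) times along the tour
 * tau[0..n+1], which holds customer indices with the depot at both ends.
 * A, D and W are indexed by tour position. */
void schedule_tour(size_t n, const int tau[], const double *travelTime,
                   const double earliestTime[],
                   double A[], double D[], double W[])
{
    A[0] = D[0] = W[0] = 0.0;           /* vehicle leaves the depot at time 0 */
    for (size_t i = 1; i <= n + 1; i++) {
        A[i] = D[i - 1] + travelTime[tau[i - 1] * (n + 2) + tau[i]]; /* Eq. (2) */
        double e = earliestTime[tau[i]];
        D[i] = (A[i] > e) ? A[i] : e;                                /* Eq. (3) */
        W[i] = D[i] - A[i];                                          /* Eq. (4) */
    }
}
```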

There are several objective functions that are commonly addressed in the literature (Desrosiers et al., 1995;
Savelsbergh, 1992b). Some authors desire to minimize the tour's total amount of travel time, [Z.sub.t](T),
whereas others prefer to minimize the tour completion time, [Z.sub.c](T), which equals [Z.sub.t](T) plus the
sum of all the [W.sub.i]. If the time that the truck is actually absent from the depot is important, then
[Z.sub.c](T) may be appropriate. However, if the actual vehicle usage is the dominant concern then [Z.sub.t]
(T) may be selected.

Our preferred objective function is hierarchical in form. First we wish to minimize $Z_c(T)$; then, if there is a set of two or more tours that yield the optimal completion time $Z_c^*$, we wish to minimize $Z_t(T)$, the total travel time, over all tours with $Z_c(T) = Z_c^*$. This hierarchical
objective is practical from an operational standpoint and demonstrates the power and flexibility of the tabu
search approach. This is particularly true from the view of a manager who will often desire a minimal
mission completion time. Following that primary goal, a manager will desire minimum wear-and-tear or
usage of the equipment. We express our hierarchical objective as

$$\text{Minimize}\ \{Z_c(T)\,[\,Z_t(T)\,]\}, \qquad (5)$$

where

$$Z_c(T) = Z_t(T) + \sum_{i=0}^{n+1} W_i \qquad \text{and} \qquad Z_t(T) = \sum_{i=0}^{n} t_{i,i+1}. \qquad (6)$$

Because the [earliestTime.sub.i] constraints are 'soft', only the following constraints must be enforced:

$$D_i \le \mathit{latestTime}_i \quad \forall i \in N. \qquad (7)$$

The [earliestTime.sub.i] become important to the problem because the [W.sub.i] are added to the objective function. Since $D_i = W_i + A_i$, it follows that

$$D_{n+1} = W_{n+1} + A_{n+1} = W_{n+1} + t_{n,n+1} + D_n$$

$$= W_{n+1} + W_n + t_{n,n+1} + A_n$$

$$= W_{n+1} + W_n + t_{n,n+1} + t_{n-1,n} + D_{n-1}, \qquad (8)$$

so that

$$D_{n+1} = \sum_{i=0}^{n} t_{i,i+1} + \sum_{i=0}^{n+1} W_i + D_0. \qquad (9)$$

Therefore $Z_c(T) = D_{n+1} - D_0$. If $D_0 \neq 0$, and $D_0$ is itself a decision
variable, then a significantly different problem arises (Savelsbergh, 1992b). In this paper we consider only
the case where [D.sub.0] = 0.

We describe below a tabu search approach for the TSPTW that provides optimal or near-optimal solutions in
a fraction of the time required by other methods described in the literature. Our search is not constrained to
feasible solutions. During the search we penalize infeasible solutions by incorporating a penalty term, P(T):

$$P(T) = K \sum_{i=0}^{n} \max\{0,\ D_i - \mathit{latestTime}_i\}, \qquad (10)$$

where K is an input parameter.
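For concreteness, the objective components of Eqs. (6) and (10) can be sketched as follows, building on the schedule_tour() sketch above; the function names and array layouts are again assumptions.

```c
#include <stddef.h>

/* Total travel time Z_t(T) of Eq. (6). */
double travel_time_Zt(size_t n, const int tau[], const double *travelTime)
{
    double Zt = 0.0;
    for (size_t i = 0; i <= n; i++)
        Zt += travelTime[tau[i] * (n + 2) + tau[i + 1]];
    return Zt;
}

/* Tour completion time Z_c(T) = D_{n+1} - D_0. */
double completion_time_Zc(size_t n, const double D[])
{
    return D[n + 1] - D[0];
}

/* Lateness penalty of Eq. (10): only positive lateness is counted.
 * latestTime[tau[0]] is assumed to be a non-binding deadline for the depot. */
double lateness_penalty_P(size_t n, const int tau[], const double D[],
                          const double latestTime[], double K)
{
    double P = 0.0;
    for (size_t i = 0; i <= n; i++)
        if (D[i] > latestTime[tau[i]])
            P += D[i] - latestTime[tau[i]];
    return K * P;
}
```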

Tabu search uses adaptive memory structures based on identifying and monitoring solution attributes
during the course of the search. Our approach combines two different themes from the literature, which
alternately focus on the use of fine-gauge and broad-gauge attribute definitions. The fine-gauge definitions
that we employ follow the model of reactive tabu search, which generates attributes by hashing functions
with the goal of allowing a single attribute to correspond uniquely to a given solution. The approach
encourages diversification of the search as a result of its ability to detect cycling and to alter the search
parameters to avoid cycling during subsequent iterations. The broad-gauge definitions that we employ
follow the theme of allowing an attribute implicitly to represent a subset of solutions, and thus encourage
diversification at a different level.

The paper is organized as follows. Section 2 gives a brief overview of the literature relevant to our approach
to the TSPTW. Section 3 describes our tabu search approach to the TSPTW. Section 4 presents the results of
an extensive computational analysis using a standard set of test problems. Finally, Section 5 outlines
conclusions and recommendations for further research.

2. Literature review

The TSPTW has not received as much attention as its special case, the traveling-salesman problem (TSP), or
as its generalizations in the class of vehicle routing problems with time windows (VRPTW). In a very real
sense, it stands in the gap between these two more popular problems. The TSPTW is most closely related to
the shortest-path problem with time windows (SPPTW), the vehicle routing problem with time windows
(VRPTW), and the pickup and delivery problem with time windows (PDPTW). To understand the effects of
the time window constraints on the TSPTW an acquaintance with previous work on these related problems
is necessary. Desrosiers et al. (1995) and Savelsbergh (1992a) present very good overviews of the preceding
work on the TSPTW. Solomon and Desrosiers (1988) present a brief survey of the TSPTW and related time
window constrained problems. Baker (1983) presents a very compact formulation of the TSPTW. Most
approaches to the problem that seek to obtain and prove a globally optimal solution are based on dynamic
programming (DP) as in Christofides et al. (1981), Dumas et al. (1993), and Desrosiers et al. (1986). Other
optimal approaches are branch-and-bound based algorithms such as that presented in Baker (1983). The
most well-known heuristic approaches are the k-opt approach of Savelsbergh (1985, 1990) and the more
recent work of Van der Bruggen (1993).

Analysts use three primary formulations to study the structure of the TSPTW. The first formulation
(Desrosiers et al., 1994; Solomon and Desrosiers, 1988) is based on a set of continuous time window
feasibility variables, and a set of (0,1) variables that indicate whether a pair of customers are adjacent on the
tour and visited in a specific order. The second class of formulations is based on dynamic programming, as presented in Christofides et al. (1981) and Dumas et al. (1992). The third formulation (Baker, 1983) uses only continuous
decision variables on a disjunctive network to specify the time that the vehicle encounters each node. This
formulation employs constraints that are based on the absolute value of the difference of the decision
variables. While Baker's formulation is linear in terms of the decision variables, the constraints would lead to
a possible set of $2^n$ linear programs if every combination were analyzed. Baker's algorithm branches
on the selection of the node ordering and then solves the dual of the problem to determine the bound to
the problem. Baker's dual problem is an easily solved longest path network formulation.

A common characteristic of optimal algorithms is their heavy reliance on time window relationships to
sufficiently reduce the dynamic programming state-space and to restrict the number of feasible solutions so
that dynamic programming approaches are computationally practicable. Ultimately, it is the time window
constraints that determine whether the problem is tractable from an optimal algorithm perspective.

The TSPTW is not as 'well solved' as the TSP, primarily because the nature of the time window constraints
requires that two separate but related questions be answered. The first question is whether a specific
TSPTW possesses a feasible solution. The second question is associated with the difficulty of finding an
optimal solution to the TSPTW. Savelsbergh (1985, 1992a) proves that finding a feasible solution to the
general TSPTW is a strongly NP-complete problem. Thus finding an optimal solution to the TSPTW is a
strongly NP-hard problem.

However, in some TSPTW instances it is very easy to find a feasible solution. The special case where the
smallest [latestTime.sub.i] is greater than the length of the longest tour in the problem, and all
[earliestTime.sub.i] = 0 reduces to the TSP where any permutation of customers is a feasible solution. In the
Vehicle Scheduling Problem (Bodin et al., 1983), a special case of the TSPTW where [latestTime.sub.i] = [earliestTime.sub.i] for all i, it is a simple matter to find a feasible solution, if one exists. In general, as
the width of the time windows increases, it is easier to find a feasible solution to the TSPTW because the
TSPTW approaches the TSP.

In practice, many heuristic or approximate algorithms described in the literature, e.g., Savelsbergh (1985) and Van der Bruggen et al. (1993), start from a feasible solution and require that all subsequent solutions be
feasible. This is a viable approach for many problem instances.

The VRPTW is a generalization of the TSPTW where more than one vehicle with finite capacity is available to
serve the customers. Desrochers et al. (1992) formulate the VRPTW by employing a set-partitioning model
and using a column generation approach to solve the problem in two phases. First, they use linear
programming (LP) to solve a set-covering relaxation of the original problem. The dual variables are used as
input to a DP subproblem to determine the minimum-cost routes that satisfy feasibility constraints. The DP
subproblems output new feasible routes, which are then appended to the master problem, which is again
solved by LP. The procedure iterates between the master problem and subproblems until no improving
routes are found. The algorithm ensures feasibility by employing a branch-and-bound scheme to force a
feasible integer solution to the problem.

The DP subproblem is of interest here because it determines single-vehicle subtours that are feasible with
respect to the time-window and vehicle capacity constraints, i.e., each subproblem is a capacitated SPPTW.
It is in this context that Desrochers et al. (1992) address the computational complexity of the SPPTW as a
function of the time window parameters and propose three DP solution procedures for the subproblems
(Desrochers et al., 1992, p. 347). Many authors cite the width of the time windows as the main determinant
of the ability of an optimal procedure to solve a particular problem (Solomon and Desrosiers, 1988; Dumas
et al., 1993).

Because the width of the time windows determines the solution difficulty for a TSPTW, a great amount of
attention has been given to methods to reduce time window width. Desrochers et al. (1992) outline four
procedures that are applied sequentially until no further reduction in the time windows is found. In related
work, many researchers have developed methods to eliminate arcs from the underlying DP network
structure, thereby eliminating the need to consider arcs and states that lead to obvious infeasibilities.
Desrosiers et al. (1995) cite arc elimination work done by Langevin et al. (1990) to reduce the set of feasible
arcs under consideration.

If the TSPTW time windows are so wide that they present no limitation, any ordering of the customers is
feasible. However, as the widths of the time windows narrow, these constraints may cause feasible solutions
to the TSPTW to be more and more remote from one another. In fact, Van der Bruggen et al. (1993)
construct examples where no reasonable neighborhood structure allows connectivity of these feasible
solutions. Indeed, it is possible to construct example problems where the unique optimum solution is
disjoint from all other feasible solutions for reasonable search neighborhoods.

3. A tabu search approach to the TSPTW

3.1. Overview

The rich nature of the TSPTW requires an algorithm that is robust with respect to parameter settings across
a wide spectrum of constraints, objective functions and time window widths. These considerations led to
the selection of tabu search as our primary metaheuristic search technique. Because several authors have given in-depth discussions of tabu search (see, for example, Glover and Laguna, 1993), we will forgo a
detailed discussion of tabu search. In essence, tabu search avoids becoming trapped in local optima by
exploiting memory and data structures that prevent immediately moving back to a previously examined
solution, and more generally prevent moving to solutions that share certain attributes with previous
solutions. Tabu search proceeds by defining a current solution. A neighborhood structure is then imposed
that enables the algorithm to develop other solutions from the current solution. A candidate list of the
neighbors is examined to determine the best move available in terms of the selected objective function. The
current solution is updated to the best of the neighbors that is not tabu. Common forms of memory include
recency-based and frequency-based memory. Defined relative to recency-based memory, a neighbor is tabu
if it has an 'attribute' (which may be a combination or function of elementary attributes) that has appeared
in a solution within a designated number of previous algorithm steps. The primary parameter required for
this type of memory is the 'tabu length', which is usually the number of algorithm steps for which the
designated attribute is declared tabu. Often, tabu search algorithms incorporate 'aspiration criteria' to allow
a tabu move to be accepted whenever such a move is deemed advantageous to the search. Other
procedures can be designed to encourage the algorithm to intensify or diversify the search. Those
incorporating frequency-based memory keep track of how often solution attributes occur in different classes
of solutions visited. A strict diversification approach, for example, keeps track of attribute frequencies over
all solutions visited during the history of the search (or of the search within a given region). Then
diversification is initiated by seeking to discourage moves to solutions that embody higher-frequency
attributes or to encourage moves to solutions that embody lower-frequency attributes. A strict
intensification approach instead keeps track of frequencies over sets of elite solutions (which may be
differentiated by clustering). During an intensification phase, high-frequency attributes from an elite domain
are encouraged or even 'locked in'. Ideally, intensification and diversification should not be independent but
allowed to interact so that each makes reference to the relevant influences from the other. Within a
framework that varies the emphasis on such influences, the tabu search method terminates after a
designated number of iterations or a stated amount of computation time.

Tabu search may be described as a feedback-driven approach, where the reliance on adaptive memory
determines both the form and the use of this feedback. Motivated by the concern for robustness, we
decided to adapt the Reactive Tabu Search procedures described by Battiti and Tecchiolli (1994) to the
TSPTW. Reactive tabu search uses a memory structure based on coded attributes, determined by hashing,
which provide a fine-gauge screening to differentiate between different individual solutions. Other forms of
tabu search typically rely on memory structures based on a coarser level of differentiation, embodied in broad-gauge attributes, thus focusing more strongly on 'features' that may be shared by multiple
solutions. Neither reactive tabu search nor standard tabu search excludes the types of memory used by the
other, but each has its apparent predominant emphasis. Our approach blends these two themes, by
memory structures designed to derive the benefits of both fine-gauge and broad-gauge attribute definitions.

Reactive tabu search stresses the use of routines that automatically adjust the search parameters based on
the quality of the search. Such an automatic response mechanism is characteristic of many tabu search
procedures, according to how 'quality' is defined. In the fine-grain view of reactive tabu search, quality is
determined by the number of iterations since the last time a particular solution was visited. High-quality
searches do not frequently cycle through the same solution. (In other tabu search approaches, quality often
depends instead on seeking to avoid repetition of 'key attributes' at a broader level, via frequency-based
memory.) The analyst chooses the minimum cycle length and the multiplicative factors by which the tabu length is increased or decreased as input parameters. The algorithm proceeds as follows (within the context of the usual tabu
scheme):

1. The tabu algorithm moves to a neighbor solution.

2. The algorithm determines whether this particular solution has been visited before.

(a) If the solution has been visited within the designated minimum cycle length, the tabu length is
increased by a predetermined factor.

(b) If the solution has never been visited, it is added to the solution structure.

The algorithm also tracks the number of iterations since the last time a change in the tabu length occurred.
If a selected number of iterations has passed, then the tabu length is decreased. As recommended by Battiti
and Tecchiolli (1994), our algorithm computes a moving average of the cycle lengths that are less than the
allowed length. The tabu length is decreased if the algorithm performs more iterations than this moving
average without having changed the tabu length.

3. If all candidate neighbors are tabu and none meets the aspiration criteria, the algorithm will allow a move
to the neighbor with the smallest move value regardless of its tabu status. A concurrent decrease in the tabu
length is performed. (This can occur when the tabu length becomes very large and the current solution has a
very small number of permissible moves.)

3.2. Reactive tabu search and hashing structures

Reactive tabu search requires efficient identification of previously visited solutions. For this, as previously
intimated, it effectively creates coded attributes by hashing. (This is an instance of the general approach of
creating new attributes as functions of other attributes.) We use a two-level open hashing structure
(Horowitz et al., 1993) for this identification process. One characteristic of an effective hashing structure is
that it must minimize the occurrence of collisions (Woodruff and Zemel, 1993), i.e., when two nonidentical
tours are incorrectly determined to be identical. A collision causes the reactive tabu search to proceed
incorrectly, behaving as if a previously visited tour has been found. The only way to be certain that collisions
are avoided is to compare the current tour's vector (customer sequence) to the solution vector of every tour
previously visited. The algorithm would have to store the complete solution vector for all tours visited, and
then compare the vectors position by position. This would be very inefficient. The two-level hashing,
described below, yields a very small chance of collisions, and is efficient in terms of computational effort and
memory use.

The first level of hashing uses an array called the hash table that enables the algorithm to efficiently store
and access tours. The 'hashing function', f(T), assigns the tour T to an element in the hash table. The actual
value stored in the array element is the location of the tour's identification values in the computer's
memory. The hashing function we used is based on the tour's objective function value:

$$f(T) = [Z(T)] \bmod k. \qquad (11)$$

Note that 0 ≤ f(T) ≤ k - 1, and that the hash table is an array of k elements. In our computational work we set k = 1009 because it is a prime number, does not force unreasonable
storage requirements, and is larger than most of the objective function values that we expected to
encounter.
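A sketch of this first-level hashing function (Eq. (11)) is shown below; truncating the objective value to an integer before taking the remainder is an assumption made for non-integer objectives.

```c
#define HASH_TABLE_SIZE 1009            /* prime table size, as in the paper */

/* First-level hash: the (truncated) objective value modulo the table size. */
unsigned hash_index(double Z)
{
    return (unsigned)((long)Z % HASH_TABLE_SIZE);   /* 0 .. k-1 */
}
```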

Because more than one tour might yield the same f(T), f(T) cannot be used to verify identical tours. To
overcome this limitation, we store four values associated with each tour within the hash table as shown in
Fig. 1:

1. The computer memory location of another tour with the same value f(T).

2. P(T), the infeasibility penalty term for the tour.

3. The tour hashing value (thv), based on the order of the customers in the tour.

4. The iteration when the tour was last visited.

The first value links all tours with the same f(T) in one 'chain'. If their penalty terms are not identical, two
tours on the same chain are not identical. If P(T) for the current tour matches P(T) for a tour in the chain, the
second level of hashing is invoked by comparing the tour hashing values (Woodruff and Zemel, 1993). The
tour hashing value is the transformation of a tour solution vector into an integer. To compute the tour
hashing value, we generate a vector of random numbers $\Psi(i)$ in the range (1, 131072) for each i = 0 to n + 1.
The tour hashing value is computed as follows:

$$\mathrm{thv}(T) = \sum_{i=0}^{n} \Psi(\tau_i)\,\Psi(\tau_{i+1}). \qquad (12)$$

In our application, the tour hashing value is stored as an unsigned integer value, which allows integers up to $2^{32} - 1$ (4 bytes). There is an extremely small chance that a collision will occur with this hashing
function (Woodruff and Zemel, 1993; Carlton and Barnes, 1995). Another direct benefit of having this
information available is the ability to determine with reasonable certainty how many different tours are
visited as well as the number of feasible tours that are visited.
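Eq. (12) can be sketched as follows; letting the 32-bit products and sum wrap around (i.e., arithmetic modulo $2^{32}$) is an implementation assumption, as are the array names.

```c
#include <stdint.h>
#include <stddef.h>

/* Second-level tour hashing value of Eq. (12).  psi[] holds the random
 * integers in (1, 131072) drawn once at start-up; unsigned arithmetic makes
 * the wrap-around well defined. */
uint32_t tour_hash_value(size_t n, const int tau[], const uint32_t psi[])
{
    uint32_t thv = 0;
    for (size_t i = 0; i <= n; i++)
        thv += psi[tau[i]] * psi[tau[i + 1]];
    return thv;
}
```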

In summary, the two-level hashing proceeds as follows:

1. Once a tour is accepted as the incumbent tour, compute f(T) and thv(T).

2. Compare the values of P(T) and thv(T) for the incumbent tour with the values of every tour
in the chain linked to hash table element f(T).

3. If both values P(T) and thv(T) match the stored values, then the tour is being revisited.

4. If the tour is revisited, compute the cycle length and change the tabu length if required.

5. If no match is found, add the tour to the chain.
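The chain walk in steps 2-3 above might look like the following sketch; the record layout mirrors the four values listed with Fig. 1, while the struct and function names are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct tour_record {
    struct tour_record *next;          /* 1. next tour with the same f(T)   */
    double              P;             /* 2. infeasibility penalty P(T)     */
    uint32_t            thv;           /* 3. tour hashing value             */
    long                last_visited;  /* 4. iteration of the last visit    */
} tour_record;

/* Walk the chain stored at hash_table[f(T)]; a record matching both P(T)
 * and thv(T) signals a revisited tour, NULL signals a new one. */
tour_record *find_revisit(tour_record *chain, double P, uint32_t thv)
{
    for (tour_record *r = chain; r != NULL; r = r->next)
        if (r->P == P && r->thv == thv)
            return r;
    return NULL;
}
```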

3.3. The starting solution

Because the search will not be constrained to feasible solutions, any TSP tour will suffice as a starting
solution. However, it is reasonable that a starting tour should satisfy the condition: if $e_j + t_{ji} > l_i$, then i precedes j. While enforcing this condition, we select the initial tour as the tour whose order corresponds to the order of the midpoints of the respective time windows. Specifically,

$$\text{if } \tfrac{1}{2}(l_i + e_i) < \tfrac{1}{2}(l_j + e_j), \text{ then } i \text{ precedes } j. \qquad (13)$$

After implementing the iterative four-step process of time window reduction as outlined in Desrochers et al.
(1992), the starting tour is the tour that results by sorting the customers into the order of increasing average
time window value while enforcing the initial time window feasibility condition. The midpoint of the time
window is suggested by the work of Van der Bruggen et al. (1993). The algorithm computes the initial
[Z.sub.c](T), [Z.sub.t](T), P(T), thv(T) and initializes the parameters in preparation for the search.
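A sketch of this construction is given below; it covers only the midpoint sort of condition (13), and the precedence repair for strongly time-window infeasible pairs described above would follow as a separate pass. The qsort-based implementation and all names are assumptions.

```c
#include <stdlib.h>

static const double *g_mid;           /* midpoints, set before calling qsort */

static int by_midpoint(const void *a, const void *b)
{
    double da = g_mid[*(const int *)a], db = g_mid[*(const int *)b];
    return (da > db) - (da < db);
}

/* Fill tour[1..n] with the customers sorted by time-window midpoint. */
void initial_tour(size_t n, const double earliestTime[],
                  const double latestTime[], int tour[])
{
    double *mid = malloc((n + 1) * sizeof *mid);
    for (size_t i = 1; i <= n; i++) {
        tour[i] = (int)i;
        mid[i]  = 0.5 * (earliestTime[i] + latestTime[i]);
    }
    g_mid = mid;
    qsort(tour + 1, n, sizeof tour[0], by_midpoint);
    free(mid);
}
```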

3.4. The neighborhood structure

Tabu search uses the neighborhood structure to develop new solutions that are related to the current
solution by an elementary rearrangement of the solution values. Savelsbergh (1985) uses a 2-opt structure
for a symmetric TSPTW, and other authors use the move procedure proposed by Or (1976). The Or-opt move
structure is a subset of the 3-opt neighborhood. The structure we first investigated was a subset of 1-Or
moves and is based on the ideas suggested by the disjunctive graph formulation in Baker (1983).

Van Laarhoven et al. (1992) and Barnes and Chambers (1995) model the general job shop scheduling
problem by using a disjunctive graph network as the underlying structure. Their neighborhood structure
selects two jobs that are adjacent on the critical path and reverses their position. This accomplishes two
significant tasks. First, reversals of arcs on the critical path are the only moves that may lead to improved solution values. Secondly, reversal of arcs on the critical path will never lead to a subsequent cyclic digraph
(Van Laarhoven et al., 1992, p. 118). In the TSPTW context, Fig. 2 illustrates that a similar reversal of adjacent
customers on the critical path corresponds to a 'swap' of the customers' positions within the tour.

This reversal of customers in the tour corresponds to one element of the 2-opt neighborhood of moves. If the adjacent customers are i and i + 1, then arcs (i - 1, i) and (i + 1, i + 2) are replaced by arcs (i - 1, i + 1) and (i, i + 2). This 'swap' move is also one element of two Or-opt move families, where customer i is repositioned after customer i + 1 (1-Orf), or customer i + 1 is repositioned before customer i (1-Orb). If the symmetry property ($t_{ij} = t_{ji}$ for all i, j) is violated, there can be no true 2-opt neighborhood, because the reversal of two adjacent customers implies that the arc (i, i + 1) is traversed in the opposite direction. Because symmetry is lost in any instance that does not have equal service times at all customers, practical routing and scheduling problems with service times at each customer rarely possess the symmetry property. In the absence of symmetry, the arc (i, i + 1) must be replaced with the arc (i + 1, i) and the resulting potential change in the path must be considered. Similarly, for any removal of arcs (i - 1, i) and (j, j + 1) where i and j may not be adjacent, the arcs in the path (i, j) must be replaced with arcs in the path (j, i).

Because this neighborhood is a subset of the 2-opt and Or-opt neighborhood, it is limited and our initial
computational studies indicated the need to increase the size of the neighborhood while retaining the
properties related to the disjunctive graph representation of the problem. The next neighborhood we
considered was the neighborhood of 'insertion' moves. As illustrated in Fig. 3, an insertion move
corresponds to an Or-opt move of one customer who is removed from its current position and placed d
positions later (for d [greater than] 0) or earlier (for d [less than] 0) in the tour. We therefore define the
neighborhood of moves as follows: given a tour at some iteration k, [T.sub.k] = {0, 1, 2, . . ., i, i 1, . . ., n, n 1};
the neighborhood is the set of all moves that remove customer i from its current tour position, and then
insert customer i later or earlier in the tour. The number of positions that customer i is moved is the depth,
d, of its insertion. For example, if we insert customer i earlier into the tour before customer j, where j = i - 3,
then this is an I(i, -3) move. If customer i is inserted later in the tour after customer j = i 4, then this is an I(i,
4) move. Note that move I(i - 1, 1) is equivalent to move I(i, -1). An insertion move, I(i, [ or -]d), is equivalent
to a sequence of d swap moves and therefore corresponds to a sequential application exchanging the orders

9
of adjacent customers one at a time. This permits the application of a sequence of incremental move value
evaluations.
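The equivalence between an insertion and a run of adjacent swaps can be sketched as follows; indices are tour positions and the function names are illustrative.

```c
/* Swap the customers in tour positions p and p+1. */
static void swap_adjacent(int tour[], long p)
{
    int tmp = tour[p];
    tour[p] = tour[p + 1];
    tour[p + 1] = tmp;
}

/* Apply the insertion move I(i, d), d != 0, as a sequence of |d| adjacent
 * swaps, which is what allows incremental move-value evaluation. */
void apply_insertion(int tour[], long i, long d)
{
    if (d > 0)                               /* move customer i later   */
        for (long k = 0; k < d; k++)
            swap_adjacent(tour, i + k);
    else                                     /* move customer i earlier */
        for (long k = 0; k > d; k--)
            swap_adjacent(tour, i + k - 1);
}
```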

There are two natural ways to restrict the candidate set of moves. One is to limit the 'depth' of the search. The other is to use the strong time-window conditions, restricting the candidate set to those moves that do not violate the strong time-window infeasibility conditions described in the next section.

3.5. Strong time-window infeasibility

The most powerful way to exploit an inherent time-window relationship between customers is to detect a
condition of 'strong time-window infeasibility' and to use this condition to restrict the candidate moves
under consideration. We use such relationships to eliminate an entire set of moves from further
consideration without unduly restricting the algorithm's ability to move to 'good' regions of the solution
space. We begin with the following definitions:

* A customer i is said to be 'strongly time-window infeasible' with respect to customer j if $e_j + t_{ji} > l_i$. This implies that customer i must always precede customer j in the tour. If both $e_j + t_{ji} > l_i$ and $e_i + t_{ij} > l_j$, then the TSPTW has no feasible solution.

* A customer i is said to be 'weakly time-window infeasible' with respect to customer j if $D_j + t_{ji} > l_i$.

* IE(i, j) is the earlier insertion of customer i immediately before customer j.

* IL(i, j) is the later insertion of customer i immediately before customer j.

Proposition 1. If customer j is strongly time-window infeasible with respect to customer i, then IE(i, j - k) is infeasible for every k: 0 ≤ k < j.

Proof:

$$D_i = \max(e_i,\ D_{j-k-1} + t_{j-k-1,\,i}) \ \ge\ e_i. \qquad (14)$$

$$D_i + t_{i,\,j-k} + \sum_{p=j-k}^{j-1} t_{p,\,p+1} \ \ge\ e_i + t_{ij} \ >\ l_j. \qquad (15)$$

Similarly, IL(j, i + k) is infeasible for every k: 1 ≤ k < n + 1 - i. The proof follows likewise.

This implies that if customer i is inserted before customer j or customer j is inserted after customer i, there
can be no resulting tour that will be feasible. Unfortunately, weak time-window infeasibility does not offer
the same restrictive properties.
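The precedence implied by strong time-window infeasibility can be tested with a simple predicate such as the sketch below, which a candidate-list filter can use to discard insertions that would place i after j; the flat travel-time layout and the names are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

/* true if customer i must precede customer j in every feasible tour, i.e.
 * customer i is strongly time-window infeasible with respect to j:
 * e_j + t_ji > l_i. */
bool must_precede(size_t i, size_t j, size_t n, const double *travelTime,
                  const double earliestTime[], const double latestTime[])
{
    return earliestTime[j] + travelTime[j * (n + 2) + i] > latestTime[i];
}
```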

Proposition 2. If customer j is weakly time-window infeasible with respect to customer i, IE(i, j - k) may be
feasible for some k. This occurs if both the travel time from j - k - 1 to j decreases more than [D.sub.j] -
[l.sub.j] and the departure time at customer j can be moved earlier by at least [D.sub.j] - [l.sub.j].

Fig. 4 illustrates this concept.

In summary, if a customer j is time-window infeasible, then some insertions may eliminate or reduce the
infeasibility at customer j unless the customer is strongly time-window infeasible with respect to customer i.
In this case the only insertions that may lead to time-window feasibility are the moves IL(i, j + k) for k ≥ 1, or IE(j, i - k) for k ≥ 0. Conversely, if customer i is strongly time-window infeasible with respect to customer j, then for a tour {0, . . ., i, . . ., j, . . ., n + 1}, it will never be feasible to perform the move IE(j, i - k) for k ≥ 0, or IL(i, j + k) for k ≥ 1.

3.6. The move evaluation

Multiple move structures and choice rules, executed either in alternation or in combination (as by 'voting'),
are a common theme in the tabu search literature, particularly in conjunction with 'strategic oscillation'
processes (see, for example, Glover, 1977). Van der Bruggen et al. (1993) construct a local search algorithm
by repeatedly cycling through a series of different move structures. They limit their approach to a simple
form of frequency-based memory for guidance, in what may be viewed as a classical type of intensification
procedure. They state 'the arc-exchange procedures have been selected such that the most effective
procedures are used most frequently' (Van der Bruggen et al., 1993, p. 305). The procedures used most
frequently are the 2-opt, 1-Orf, and 1-Orb moves. The 1-Orf is equivalent to the I(i, d) move for d [greater
than] 0, and the 1-Orb is equivalent to the I(i, d) move for d [less than] 0. A k-Or move repositions a string of
k nodes either later or earlier in the tour. Define an Or move at depth 1 as 1-Orb = 1-Orf = I(i, 1) = I(i + 1, -1). These moves are exactly the moves in which customer i swaps places with customer j = i + 1 in the tour. Now also define a 1-Or move at depth k to be a move in which a customer is moved |k| places earlier in the tour for k < 0, or k places later in the tour for k > 0. Note that a 1-Or depth (-2) is exactly the same as a 2-Or depth (1), and that a 1-Or depth (3) = 3-Or depth (-1). In general:

$$1\text{-Or depth}(\pm k) \;=\; k\text{-Or depth}(\mp 1). \qquad (16)$$

It has already been noted that the moves 1-Or depth(±1) are 'swap' moves, so consequently the moves

$$k\text{-Or depth}(j) = k \times \bigl(1\text{-Or depth}(j)\bigr)$$

$$= k \times \bigl(j\text{-Or depth}(\pm 1)\bigr)$$

$$= kj \times \bigl(1\text{-Or depth}(\pm 1)\bigr)$$

$$= kj \times (\text{'swap' moves}). \qquad (17)$$

These results establish several things. First, the 'swap' move is a reasonable foundation for a move structure.
Every Or move can be replicated with a sequence of swap moves. Secondly, swap moves are subsets of the
2-opt neighborhood for a problem that has a symmetric time/distance matrix, and of the 3-opt neighborhood for the asymmetric case. Thirdly, the 1-Or family of moves is efficient because it evaluates a subset of the other k-Or moves directly as a by-product; there would be duplication of effort if one searched all of the 1-Or moves and then separately examined all of the k-Or moves. Finally, as noted by Savelsbergh
(1985), the k-Or family of moves preserves the natural ordering of any sequence of time-window feasible
customers.

Given this information and a starting tour, $T_0$, for i = 1 to n - 1, we incrementally search all of the transitions that examine I(i, d) insertions of customer i, d positions later in the tour, stopping when i is positioned before customer n + 1 (the depot). Then, for i = 3 to n, we incrementally examine all I(i, -d) insertions of customer i, d positions earlier in the tour, for d ≥ 2. We begin at depth 2 because all moves I(i, 1) = I(i + 1, -1) of every customer were previously examined. We further restrict the search by using the
conditions of strong time-window infeasibility so that if customer i is strongly time-window infeasible with
respect to customer j, then i is never inserted after j and j is never inserted before i. This sequence of
potential moves is evaluated incrementally by performing a sequence of swap moves based on the
incumbent tour.

At iteration k, let $T_k$ be the current tour and let its neighborhood be the set of tours reachable in one move from $T_k$. The tabu search algorithm must determine which member of this neighborhood will be selected as $T_{k+1}$. This choice is made by selecting the nontabu move I(i, ±d) with the best move value, i.e., the one that yields the algebraically smallest value of $Z(T_{k+1})$.

Any move value consists of three components: the change in the travel time, $\Delta T$; the change in the waiting time, $\Delta W$; and the change in the penalty for lateness, $\Delta P$. Although $\Delta T$ is easy to compute in constant time, $\Delta W$ and $\Delta P$ are more difficult because they can be computed only by traversing the entire proposed $T_{k+1}$. Rather than compute each component individually, we chose to compute

$$\Delta(\text{tour completion time}) = \Delta T + \Delta W,$$

with the move cost for I(i, ±d) equal to $\Delta(\text{tour completion time}) + \Delta P$. This value is computed
incrementally as i moves through the tour, but is still the most time-consuming procedure within the
algorithm.
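A non-incremental version of this evaluation is sketched below for clarity; the implementation described above updates the same quantity incrementally along the swap sequence, and all names here are assumptions.

```c
#include <stddef.h>

/* Move value of a candidate tour tau[0..n+1]: re-schedule it and return
 * Delta(tour completion time) + Delta P relative to the incumbent.
 * earliestTime[0] and latestTime[0] are assumed to describe the depot. */
double move_value(size_t n, const int tau[],
                  const double *travelTime, const double earliestTime[],
                  const double latestTime[], double K,
                  double incumbent_Zc, double incumbent_P)
{
    double D = 0.0;                      /* departure time, D_0 = 0        */
    double P = 0.0;                      /* lateness penalty, Eq. (10)     */
    for (size_t i = 1; i <= n + 1; i++) {
        double A = D + travelTime[tau[i - 1] * (n + 2) + tau[i]];
        double e = earliestTime[tau[i]];
        D = (A > e) ? A : e;
        if (i <= n && D > latestTime[tau[i]])
            P += D - latestTime[tau[i]];
    }
    return (D - incumbent_Zc) + (K * P - incumbent_P);  /* Z_c = D_{n+1}  */
}
```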

The addition of time windows makes the efficient computation of move values difficult. Savelsbergh (1985)
has developed constant-time methods for evaluating moves that are a subset of the 2-opt class. However,
Savelsbergh's procedure requires that the search remain feasible at every step. The problem becomes much
more complicated when infeasible moves are allowed because of the potential effect that a move has on
customers that are serviced later in the tour. For example, a move that reduces the travel time between
customers i and j in a feasible $T_k$ will yield a feasible $T_{k+1}$. However, if a move reduces the travel time between customers i and j in an infeasible $T_k$, there is no easy way to assess the feasibility of $T_{k+1}$. The feasibility of the customers after j in $T_{k+1}$ is a function of both the amount of infeasibility after customer j and the waiting time after customer j. A further complication is that the subsequent feasibility is affected by where the waiting and the infeasibility occur in the tour after customer j. In short, given that one starts from an infeasible $T_k$, there is apparently no good way to assess the feasibility of $T_{k+1}$ unless one completely evaluates $T_{k+1}$.

3.7. The tabu criteria, tabu length, and tabu data structures

Tabu search algorithms use short-term memory functions to determine whether a solution with a particular
attribute has been visited before. With recency-based memory, if the algorithm detects that the candidate
solution (or, more precisely, a particular collection of its attributes) has been visited within a prespecified
number of iterations, then the candidate move is declared 'tabu'. The number of iterations for which this
move is not allowed is called the tabu length. The selection of the attribute to be examined, the data
structure used, and the tabu length are critical to the success and efficiency of the search. Another
important consideration in the design of the tabu search algorithm is the selection of the aspiration
criterion. In our algorithm, the aspiration criterion is invoked whenever a move is found that yields an
objective function value lower than any previously found value.

The data structure we employ for a broad-gauge definition of move attributes is an (n + 1) x (n + 1) array called tabu_list(i, j). The rows of tabu_list(i, j) correspond to the customer numbers and the columns correspond to the tour positions. Thus the attributes we track are the occurrences of customers in given tour positions. This array stores the value of the current iteration, k, plus the current value of the tabu length, i.e., k + tabu_length. Again, assume that at the current iteration all customers are renumbered so that their position in the tour is their customer number. If move I(i, d) is accepted, the value k + tabu_length is stored at tabu_list(i, i). This prevents any 'return' move of the customer to position i for tabu_length iterations.

Customer i could be moved from its new position by being directly chosen for another move at some future
iteration. Customer i also could move from its new position indirectly as the result of the movement of other
customers. Our tabu restriction takes both of these possibilities into consideration because failure to do so
could cause indefinite 'indirect cycling' between two or more customers. Discussions of such memory
structures and associated tabu restrictions may be found in Glover and Laguna (1993).

Using this broad-gauge attribute memory, the algorithm can verify all of the tabu conditions by a one-step
check of the candidate move under consideration, I(i, d). If the current iteration k ≤ tabu_list(i, i + d), then the move is tabu and is not allowed unless it leads to a value for the objective function that is lower than any value previously encountered.
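The positional tabu test and update can be sketched as below; storing the array row-major with a caller-supplied column count, and the function names, are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Move I(i, d) is tabu while the current iteration is still within the
 * tenure recorded for customer i at destination position i + d. */
bool is_tabu(const long *tabu_list, size_t ncols, size_t i, long d, long iter)
{
    size_t dest = (size_t)((long)i + d);
    return iter <= tabu_list[i * ncols + dest];
}

/* After accepting I(i, d), forbid customer i from returning to position i. */
void make_tabu(long *tabu_list, size_t ncols, size_t i, long iter, long tabu_length)
{
    tabu_list[i * ncols + i] = iter + tabu_length;
}
```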
One possible benefit of the fine-gauge memory of the hashing structure is that one can identify precisely
whether a specific tour has been visited. Because the tour hashing value and the iteration when the tour was last visited are immediately available within the hashing table, we applied an 'exact' tabu criterion to the
algorithm. This criterion was not based on a broad-gauge attribute of the move. Rather, the tabu status of a
candidate tour was determined by searching the hashing table to see whether a tour with the same hashing
value had been previously visited. The exact criterion simply computed the tour hashing value for the
candidate move and if that hashing value appeared in the structure, the iteration when it was last visited
was returned. Then the sum of the iteration last visited and tabu_length was compared with the current iteration to
determine the tour's tabu status. This approach yielded results poorer than those from the tabu criteria
based on broad-gauge positional attributes as described above.

The attribute we selected to define the tabu status is derived from attributes described by Malek et al.
(1989). They also report that tabu 'list sizes of the order of the size of the problem' seemed to perform well.
This observation and some initial computational studies indicated that we should set tabu_length = min(30,
number of customers). This initial tabu length value has given consistently good performance in conjunction
with the previously described reactive tabu search attributes.

3.8. Selection of the new incumbent tour

Most tabu search algorithms choose to move to the neighbor tour that has the smallest move value on a
candidate list. Indeed, this is what our algorithm does if we are minimizing either [Z.sub.c](T) or [Z.sub.t](T).
For the hierarchical objective, [Z.sub.c](T)[[Z.sub.t](T)], simply moving to the tour that has the smallest
[Z.sub.c](T) is insufficient. The difference is that we move to the neighbor that has the smallest [Z.sub.t](T)
from among the neighbors that have the smallest [Z.sub.c](T), provided that the move is not tabu.

4. Computational results

The 145 symmetric, Euclidean TSPTW problems we studied are from Dumas et al. (1993). Exclusive of the
depot, the problems range from 20 to 200 customers with varying time windows and customer coordinates
randomly chosen on the interval (0, 50). Inter-customer Euclidean distances are truncated integers, and are
modified whenever the triangle inequality is not satisfied. Using a second-nearest-neighbor TSP tour based
on the coordinates, Dumas et al. (1993) computed the arrival time at each customer and then used that
time as the 'midpoint' of the time window for customer i: mp(i). Next, for a specified time window width w,
the authors generated two random numbers, [r.sub.1](i) and [r.sub.2](i) in the range (0, w) for each
customer. The time window for customer i is [earliestTime.sub.i] = max(0,mp(i) - [r.sub.1](i)) and
[latestTime.sub.i] = mp(i) [r.sub.2](i). This yields an average time window width of w for the particular
problem set and guarantees at least one feasible tour for each problem. Note that the arrival time at each
customer corresponding to the second-nearest neighbor tour is not the midpoint of the customer's time
window. Dumas et al. (1993) have noted that the optimal TSPTW tours have very few arcs in common with
the second-nearest neighbor tour. (We generated very few feasible initial tours with the method described
in Section 3.3.) Dumas et al. (1992) used average time window widths of 20, 40, 60, 80, and 100 units to
generate the time windows. There are five problems for each combination of the number of customers and
time window widths in Table 1. Our algorithms are coded in C and the tests were conducted on an IBM RISC
6000 workstation. The code was compiled with the standard C compiler using optimization flag '-O3'.
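For reference, the time-window generation just described can be sketched as below; the rand()-based uniform draws approximating the range (0, w) are an illustrative assumption, not the generator used by Dumas et al.

```c
#include <stdlib.h>

/* Build the window [e_i, l_i] around the arrival time mp of a
 * second-nearest-neighbour tour, for a target average width w. */
void make_time_window(double mp, double w, double *earliest, double *latest)
{
    double r1 = w * ((double)rand() / RAND_MAX);
    double r2 = w * ((double)rand() / RAND_MAX);
    *earliest = (mp - r1 > 0.0) ? mp - r1 : 0.0;   /* e_i = max(0, mp - r1) */
    *latest   = mp + r2;                           /* l_i = mp + r2         */
}
```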

For comparison purposes with Dumas et al. (1993), the algorithm was run on all problems with the objective
minimize [Z.sub.t](T). Table 2 shows our reactive tabu search algorithm results for the following parameter
settings:

Penalty factor, K = 1.0;

Maximal depth, d = the number of customers in the problem;

Tabu_length = min (30, number of customers);

Tabu_length increase factor = 1.2;

Tabu_length decrease factor = 0.9;

Minimum cycle length allowed = 50.

Table 1. Layout of TSPTW test problems (number of problems per class)

           Average time window width
Nodes     20    40    60    80   100
  20       5     5     5     5     5
  40       5     5     5     5     5
  60       5     5     5     5     5
  80       5     5     5     5     5
 100       5     5     5     5     -
 150       5     5     5     -     -
 200       5     5     -     -     -

In six of the more difficult problem classes, those with 100 or more customers having time window widths of
40 or greater, an initialization phase of 100 moves with K = 0.25 was implemented before continuing the
search with K = 1.0. The smaller penalty in the initialization phase frees the earliest stages of the search from too strong an affinity for feasible solutions and allows a quicker approach to good solutions in these more difficult problem classes.

The first two columns of Table 2 identify the problem class addressed in each row. Column four gives the
average value of the minimal [Z.sub.t](T) for the five problems in each class and column three gives the
average [Z.sub.c](T) associated with the minimal values of [Z.sub.t](T). Columns five and six give the average
number of moves and the average computation times (in seconds), respectively, required to obtain the
minimal [Z.sub.t](T). Column seven gives the optimal average solution for each class and column eight gives
the average seconds of time required by the algorithm of Dumas et al. (1993). Column nine gives the time
for which our algorithm was allowed to run on each problem in the class. Unfortunately, optimal results are available from Dumas et al. only for each class of five problems and not for individual problems (E. Gelinas,
personal communication). Finally, the last column gives the average percentage deviation from the optimum
for our algorithm's best solutions to the five problems in the class.

Dumas et al. (1993) do not provide the average optimal solutions for the n80w100, the n100w80, or the
n150w60 problem sets. This is because either the memory requirements were too large, or the run time was
excessive when applying their algorithm. Our tabu search algorithm does not exhibit such problems and can attack larger problems, both in the number of customers and in the width of the time windows.

[Tabular data for Table 2 omitted.]

Our computation times do not include the time required to convert the customer (x, y) coordinates to a time/distance matrix, but they do include the time required to perform the time window reductions and to construct the initial solution. In all cases, the average [Z.sub.t]
(T) produced by the tabu search algorithm is no worse than 0.78% higher than the average optimal solution
for a class. In the classes where the optimal solution is known but is not found for all problems in the class, the algorithm averages within 0.36% of the optimal solution. This implies that the tabu search finds the
optimal solution for many of the individual problems, and is very close to the optimal solution for the
remaining problems.

Table 3 presents the results for the tabu search algorithm applied against the hierarchical objective function
min {[Z.sub.c](T)[[Z.sub.t](T)]}. The results show that the algorithm is flexible and can be easily applied to
different objective functions. In 23 of the 30 problem classes, the hierarchical objective returned smaller values of [Z.sub.c](T), and it failed to return a smaller [Z.sub.c](T) in only one class, n60w60.

Because the algorithm is not restricted to remain feasible, it often returns many 'super-optimal' solutions
that are only slightly infeasible and may be attractive alternatives to the decision maker because of their
significant decrease in travel time and/or completion times. For example, consider the last of the problems
with 150 customers and time windows of width 60 units. Our best feasible solution for the study
documented in Table 2 yielded [Z.sub.c](T) = 1001 and [Z.sub.t](T) = 850. However, the method also provided
a routing with [Z.sub.t](T) = 835, which incurred a P(T) = 4 where 2 of the 150 customers experienced an
extension of 1 unit in their required [latestTime.sub.i], and one customer experienced a 2 unit lateness of
departure. The decision maker could use this information to assess whether the marginal violations of the [latestTime.sub.i] would be justifiable in view of a reduction of 15 units in the route's total travel time. [Tabular data for Table 3 omitted.] Indeed, the method identified 113 routes that had a [Z.sub.t](T) less than
850 with P(T) less than or equal to 5 units. This kind of information can be extremely valuable to practical
decision makers where the [latestTime.sub.i] constraints can often be negotiated with individual customers
for a mutual gain.

The results presented in Tables 2 and 3 were obtained without 'tuning' the algorithm to the problem set
studied. The reactive tabu search seems to be robust across a wide range of problem types and parameter
settings.

5. Recommendations for further research

The primary goal for future research is to extend and apply the general framework presented above to
problems of a more complicated nature that often occur in practical settings. Augmentations to be
addressed in the near future include limitations on the length of the route and limitations on the duration
for which a vehicle may operate. The next major area for further research is the application of this approach
to problems with multiple vehicles. The multiple-vehicle TSP (m-TSP) is easily transformed into an equivalent
TSP. Based on this transformation, the tabu search approach to the TSPTW detailed here should be
applicable to the m-TSPTW problem with only minor modifications. After refining the approach for the
multiple-vehicle TSPTW, the addition of vehicle capacity constraints will permit extension of these
techniques to the design and implementation of practical vehicle routing and scheduling problems faced
daily by various entities within the private and public sectors.

Acknowledgements

We thank Jacques Desrosiers and Eric Gelinas for providing the test problems and optimal results and also
for providing helpful comments and insights to the TSPTW. We also acknowledge insights provided by
Roberto Battiti, Giampietro Tecchiolli, and Dave Woodruff that assisted us in the design and implementation
of reactive tabu search and hashing schemes used herein.
