Article history:
Received 25 June 2007
Accepted 20 July 2010
Available online 10 August 2010

Keywords:
Flexible job-shop scheduling
Multi-objective optimization
Particle swarm optimization
Local search

Abstract

The job-shop scheduling problem is one of the most arduous combinatorial optimization problems. The flexible job-shop problem is an extension of the job-shop problem that allows an operation to be processed by any machine from a given set along different routes. This paper presents a new approach based on a hybridization of the particle swarm and local search algorithms to solve the multi-objective flexible job-shop scheduling problem. Particle swarm optimization is a highly efficient and relatively new evolutionary computation technique inspired by birds' flight and communication behaviors. The multi-objective particle swarm algorithm is applied to the flexible job-shop scheduling problem based on priority. The presented approach is also evaluated for its efficiency against the results reported for similar algorithms (weighted summation of objectives and Pareto approaches). The results indicate that the proposed algorithm satisfactorily captures the multi-objective flexible job-shop problem and competes well with similar approaches.
© 2010 Elsevier B.V. All rights reserved.
doi:10.1016/j.ijpe.2010.08.004
G. Moslehi, M. Mahnam / Int. J. Production Economics 129 (2011) 14–22
method. Gao et al. (2008) have developed a new approach hybridizing genetic algorithm with variable neighborhood descent, exploiting the "global search ability" of the genetic algorithm and the "local search ability" of variable neighborhood descent, for solving the multi-objective problem. Zhang et al. (2009) proposed hybridizing two optimization algorithms, particle swarm optimization (PSO) and tabu search (TS), into an effective hybrid approach for the multi-objective FJSP. Recently, Xing et al. (2009a) presented a simulation model framework to solve the multi-objective flexible job-shop scheduling problem by the weighted summation method. They improved the sequencing performance by using the ant colony optimization algorithm (ACO). Also, Xing et al. (2009b) developed their previous approach further and proposed an efficient search method for the problem.

As mentioned before, the integrated approach considers assignment and scheduling at the same time. Hurink et al. (1994) proposed an approach using the tabu search algorithm in which re-assignment and re-scheduling are considered as two different types of moves. An integrated approach presented by Dauzère-Pérès and Paulli (1997) defined a neighborhood structure for the problem. Chen et al. (1999) also applied a genetic algorithm to solve the FJSP and introduced a new coding for each solution and different crossover and mutation operators to minimize the makespan for all jobs. Mastrolilli and Gambardella (2002) improved Dauzère-Pérès' tabu search and presented two neighborhood functions. Baykasoğlu (2002) proposed an approach based on linguistics, simulated annealing and a dispatching-rule-based heuristic; he defined the FJSP with alternative process plans (operation sequences). Also, Baykasoğlu et al. (2004) presented the FJSP as a grammar, defined the productions in the grammar as controls, and used multiple-objective tabu search to solve the multi-objective flexible job-shop scheduling problem; they selected makespan, total tardiness and load balance as the objective functions. Chan et al. (2006) used a genetic algorithm to solve the FJSP under resource constraints, which follows the integrated approach. Also, Tay and Ho (2008) investigated the potential use of genetic programming (GP) for evolving effective and robust composite dispatching rules for solving the multi-objective FJSP with respect to minimum makespan, mean tardiness, and mean flow time objectives.

In this paper, we propose an integrated multi-objective approach based on a hybridization of particle swarm optimization and a local search algorithm to solve the flexible job-shop scheduling problem. PSO allows an extensive search of the solution space, while the local search algorithm is employed to reassign machines to operations and to reschedule the results obtained from the PSO, which enhances convergence speed.

The remainder of the paper is organized as follows: In Section 2, we describe the assumptions and formulation of the flexible job-shop scheduling problem in detail. Section 3 describes the standard PSO algorithm and its application to the multi-objective FJSP. In Section 4, we present our approach to solving the FJSP. Then, the experimental results are illustrated and analyzed in Section 5. Finally, Section 6 provides conclusions and suggestions for further study of this problem.

2. Problem formulation

In the flexible job-shop problem, n jobs are scheduled on m machines so that each job i has ni operations in the form Oi,1, Oi,2, ..., Oi,ni, and the set of machines M = {M1, M2, ..., Mm} is available. For each operation Oi,j, there is a set of machines capable of performing it, and Oi,j should be assigned to one of these machines (routing). The second problem is sequencing the operations on each machine in order to optimize some criteria (scheduling). In this study, the objective is to find an assignment and a schedule that minimize the makespan, or maximal completion time over machines (F1); the total workload of the machines (F2), which represents the total working time of all machines; and the critical machine workload (F3). Several multi-objective approaches, such as Pareto and the weighted summation of objectives, have been developed, but we will focus only on the Pareto approach.

For the purposes of this study, we assume that machines are independent from each other and that setup times and move times between operations are negligible. Each job has a release time ri, indicating the earliest possible time that the job can be processed, and a due date di. Preemption is not allowed, and each job can be processed by only one machine at a time. Also, each machine can perform at most one operation at a time; machines never break down and are available throughout the scheduling period. The processing time of an operation Oi,j on machine k is predefined, and no operation can be interrupted.

3. Particle swarm optimization algorithm

Particle swarm optimization (PSO) is an evolutionary computation and population-based stochastic optimization technique proposed by Eberhart and Kennedy (1995). Because of its two main features, namely optimization via social evolution and simplicity of use, PSO has become the focus of attention of many researchers. PSO requires only primitive and simple mathematical operators, and it is computationally inexpensive in terms of both memory requirements and time. Members of the swarm in PSO can be considered as agents searching through the solution space. Each single solution is like a 'bird' in the D-dimensional search space, called a 'particle'. All particles have fitness values, which are evaluated by the fitness function, and velocities, which direct the flight of the particles. Initially, PSO consists of a randomly produced population and velocities in the user's considered domain. The swarm size is problem dependent, with sizes of 20–50 being the most common (Hu et al., 2004). Then, the velocity is dynamically adjusted at each step according to the experience of the particle itself and of its colleagues, as given by Eq. (1). The first part of Eq. (1) represents the inertia of the previous velocity. The second part is the 'cognition' part, representing individual thinking, and the third part is 'social' awareness, representing cooperation among the particles. The new particle position is found by adding the new velocity to the current position according to Eq. (2):

Vi,t+1 = w·Vi,t + Rand·C1·(Pi − Xi,t) + rand·C2·(Pg − Xi,t)    (1)

Xi,t+1 = Xi,t + Vi,t+1    (2)

wherein i is the ith particle; Xi,t is the position of particle i in iteration t; Vi,t is the velocity of particle i in iteration t; Pi is the best previous position of particle i so far, also called pbest (memorized by every particle); and Pg is the best previous position among all particles, called gbest (memorized in a common repository). w is the inertia weight, whose function is to balance the global and local exploitation of the swarm. One of the most widely used methods of inertia weighting is linear decreasing, determined by the following equation:

w = wmax − ((wmax − wmin)/itermax)·iter    (3)

where wmax is the initial value of the weighting coefficient; wmin, the final value of the weighting coefficient; itermax, the maximum number of iterations; and iter, the current iteration. C1 and C2 are two learning factors which control the influence of pbest and gbest on the search process; the most common value for them is C1 = C2 = 2. Rand and rand are two random numbers within the range [0,1].
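The update rules of Eqs. (1)–(3) can be sketched as a minimal single-objective PSO loop. The function and parameter names below (pso_minimize, f, bounds) and the sphere-function demo are our own illustrative assumptions, not the authors' Matlab implementation:

```python
import random

def pso_minimize(f, dim, bounds, swarm=30, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimal PSO sketch of Eqs. (1)-(3); minimizes f over [lo, hi]^dim."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # pbest positions
    Pf = [f(x) for x in X]                      # pbest objective values
    g = min(range(swarm), key=lambda i: Pf[i])  # best initial particle
    G, Gf = P[g][:], Pf[g]                      # gbest and its value
    for it in range(iters):
        # linearly decreasing inertia weight, Eq. (3)
        w = w_max - (w_max - w_min) / iters * it
        for i in range(swarm):
            r1, r2 = random.random(), random.random()
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + r1 * c1 * (P[i][d] - X[i][d])
                           + r2 * c2 * (G[d] - X[i][d]))  # Eq. (1)
                X[i][d] += V[i][d]                         # Eq. (2)
            fx = f(X[i])
            if fx < Pf[i]:                  # update pbest
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:                 # update gbest
                    G, Gf = X[i][:], fx
    return G, Gf
```

For instance, minimizing the sphere function sum(v*v for v in x) in two dimensions over [-5, 5] drives the gbest value close to zero within the default 100 iterations.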
The procedure for standard PSO is summarized as follows:

Step 1: Initialize a population of particles with random positions and velocities in the D-dimensional problem space.
Step 2: Evaluate the objective values of all particles, set the pbest of each particle equal to its current position, and set gbest equal to the position of the best initial particle.
Step 3: Update the velocity and position of the particles according to Eqs. (1) and (2).
Step 4: Evaluate the objective values of all particles.
Step 5: For each particle, compare its current objective value with its pbest value. If the current value is better, then update pbest with the current position and objective value.
Step 6: Determine the particle of the whole current population with the best objective value. If this objective value is better than that of gbest, then update gbest with the current best particle.
Step 7: If a stopping criterion is met, then output gbest and its objective value; otherwise, go back to Step 3.

3.1. Neighborhood structure

As mentioned earlier, gbest is the best global solution among all particles. gbest is an extreme case of the nbest version, in which all particles of the population are considered neighbors of each particle. We can use nbest as the best position that the n neighbor particles have achieved so far. The neighborhood of a particle is the social environment the particle encounters; thus, a particle is affected only by its neighbors, not by the whole population. It is generally accepted that a larger neighborhood size makes the particles converge faster, while a small neighborhood size helps prevent premature convergence (Hu et al., 2004).

Kennedy and Mendes (2002) investigated various neighborhood structures and their influence on the performance of the algorithm. Ring and star topologies are the most widely used neighborhood structures. In the ring topology, the neighborhood size is assumed to be three and each particle can communicate with its two nearest neighbors, while in the star topology the particle communicates with all neighbors. To determine the neighbors, a list of particles is simply generated regardless of their positions. This structure, called social, has a simple definition, which is a great advantage in discrete problems, whereas the geographical neighborhood, a different structure, needs to compute the distances between particles and find the positions of the nearest particles in a neighborhood. In this study, the star and social neighborhood structures are employed, which has an important effect on controlling the convergence of the algorithm.

3.2. Multi-objective particle swarm optimization

In PSO, pbest and nbest (gbest) play crucial roles in guiding the particle's search. Transforming PSO into MOPSO requires a redefinition of the nbest (gbest) selection rule in order to obtain an optimal front solution. Naturally, this selection is made from the set of Pareto-optimal solutions. This problem is both difficult and important for attaining convergence and diversity of solutions. Since each particle of the swarm should select one of the Pareto-optimal solutions as its global best particle, the one selected is called the best local guide of that particle. So, the most important part of MOPSO is the determination of the best local guide of each particle. In this method, an elitism strategy is adopted in order not to lose the non-dominated solutions over the generations. According to the elitism strategy, the algorithm evaluates the particles in the population and compares the members of the current population with the members of the actual archive. Then, it surveys the particles to be considered for insertion into the archive and those to be removed. Thus, if no two points in the archive dominate each other, the archive is called 'domination-free'. Fig. 1 shows how to find the best local guide through the elitist strategy using the sigma method.

The selection of the best local guide for each particle of the population from a set of Pareto-optimal solutions has, across different multi-objective particle swarm optimization methods, a great impact on the convergence and diversity of solutions. In the multi-objective PSO presented in this paper, we used the sigma method introduced by Mostaghim and Teich (2003). According to this method, a value σi is assigned to each point with coordinates (f1i, f2i), where σ is defined as σ = (f1² − f2²)/(f1² + f2²). Thus, all points lying on the line f2 = a·f1 will have the same value σ = (1 − a²)/(1 + a²). In its general form, σ is a vector of C(k,2) elements, where k is the dimension of the objective space. For example, for the three coordinates f1, f2, f3, σ is defined as

σ = (f1² − f2², f2² − f3², f3² − f1²)ᵀ / (f1² + f2² + f3²)    (4)

Different values of σ are shown in Fig. 2.
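A minimal sketch of the domination-free archive and the sigma-based guide selection described above, assuming minimization of positive objectives; the helper names (dominates, update_archive, best_local_guide) are ours, not from the paper, and the sigma vector uses one entry per cyclic pair of objectives, which matches Eq. (4) for three objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, point):
    """Insert point only if non-dominated; drop members it dominates,
    so the archive stays 'domination-free'."""
    if any(dominates(a, point) for a in archive):
        return archive
    return [a for a in archive if not dominates(point, a)] + [point]

def sigma(f):
    """Sigma vector of a point in objective space (cf. Eq. (4)):
    entries fi^2 - fj^2 over cyclic pairs, normalised by the sum of squares."""
    k = len(f)
    denom = float(sum(v * v for v in f))
    return tuple((f[i] ** 2 - f[(i + 1) % k] ** 2) / denom for i in range(k))

def best_local_guide(particle_objs, archive_objs):
    """Index of the archive member whose sigma value is closest
    (Euclidean distance) to the particle's sigma."""
    s_p = sigma(particle_objs)
    return min(range(len(archive_objs)),
               key=lambda j: sum((x - y) ** 2
                                 for x, y in zip(s_p, sigma(archive_objs[j]))))
```

For two objectives the sigma vector is (σ, −σ) with σ as defined in the text, so nearest-sigma selection is equivalent to comparing the scalar sigma values.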
Fig. 1. Finding the best local guide with elitist strategy using the sigma method (Mostaghim and Teich, 2003).
Fig. 2. Sigma method for a three-objective space (Mostaghim and Teich, 2003).
In order to find the best local guide, in the first step, we assign the value σj to each particle j in the archive. In the second stage, σi for particle i of the population is calculated. Then, the distance between σi and the σ values of all the archive points is calculated. Finally, the particle k in the archive whose σk has the minimum distance from σi is selected as the best local guide for particle i. In other words, the archive member with the sigma value closest to that of the particle is selected as its best local guide. The sigma method can be used for different, but positive, objectives. An important point to note here is that the range of equiponderance of the objectives has a great impact on the convergence and diversity of solutions.

The allowed delay time of each operation in the current iteration is calculated from the following equation:

Delayj = X(n+j) · 1.5 · Pmax    (5)

where Delayj is the allowed delay time of operation j and Pmax is the maximum processing time over all operations. In each iteration, the operation with the highest priority is selected among the operations whose precedence constraints are met in the interval [t, t + Delayj] (set E). This operation is scheduled at the earliest time allowed by its release time and the precedence and capacity constraints. Then, the time t is updated and the procedure is repeated. This algorithm continues until all operations are scheduled.
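The delay-window dispatching loop just described can be sketched as follows. The data layout (dictionaries keyed by (job, operation) for processing times, machine assignments, priorities and delays) is an assumption for illustration, and release times are taken as zero:

```python
def list_schedule(n_ops, proc, assign, priority, delay):
    """Delay-window list scheduler sketch. n_ops[i]: operation count of
    job i; proc[(i, j)]: processing time of O(i,j) on its assigned machine
    assign[(i, j)]; priority and delay come from the particle (delay via
    Eq. (5)). Returns start and finish times per operation."""
    start, finish = {}, {}
    next_op = {i: 0 for i in n_ops}     # next unscheduled op of each job
    mach_free = {}                      # machine availability times
    unscheduled = {(i, j) for i in n_ops for j in range(n_ops[i])}

    def ready_time(op):
        i, j = op
        prev = finish.get((i, j - 1), 0)    # precedence within the job
        return max(prev, mach_free.get(assign[op], 0))

    while unscheduled:
        # operations whose precedence is met
        candidates = [op for op in unscheduled if op[1] == next_op[op[0]]]
        t = min(ready_time(op) for op in candidates)
        # set E: operations startable within [t, t + Delay_j]
        E = [op for op in candidates if ready_time(op) <= t + delay[op]]
        op = max(E, key=lambda o: priority[o])   # highest priority wins
        s = ready_time(op)
        start[op], finish[op] = s, s + proc[op]
        mach_free[assign[op]] = s + proc[op]
        next_op[op[0]] += 1
        unscheduled.remove(op)
    return start, finish
```

Since t is taken as the minimum ready time over the candidates, set E is never empty, so the loop always makes progress.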
operation            1    2    1    2    3    1    2
particle position    5    4    2    5    1    3    1
processing machine   M2   M3   M1   M4   M2   M4   M2
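The representation above (one particle dimension per operation holding its priority, plus a machine-assignment vector) can be read off as follows; the example reproduces the row above for jobs with 2, 3 and 2 operations, and the variable names are ours:

```python
# Operations of the example, in particle order: jobs 1-3 with 2, 3, 2 ops.
ops = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2)]
position = [5, 4, 2, 5, 1, 3, 1]       # priority part of the particle
machines = ['M2', 'M3', 'M1', 'M4', 'M2', 'M4', 'M2']

priority = dict(zip(ops, position))    # (job, op) -> dispatching priority
machine_of = dict(zip(ops, machines))  # (job, op) -> assigned machine

# e.g. the operation that wins a pure priority comparison (ties keep the
# earliest operation in particle order):
best = max(ops, key=lambda o: priority[o])
```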
To avoid falling into local optima, a mutation operator, based on the two parameters m1 and m2, is employed in the intermediate stages. This operator chooses particles with a probability of m1 and randomly changes each dimension of the particle with a probability of m2. Using a neighborhood (nbest) is another method of preventing the particle from pre-convergence; the social neighborhood is, thus, applied in this approach. But to provide the possibility of searching the boundaries between neighborhoods, if after g1 iterations the Pareto-optimal solutions show no improvement, then the global neighborhood will replace the local one. Also, if the algorithm reaches a local optimum in the second stage – i.e., the result is not improved after g2 iterations – the swarm will be randomly reproduced again.

Because the PSO algorithm is sensitive to the primary swarm, it would be better to use some heuristic method to find proper initial points. One of the favorable assignments could be the first operation priority (the machine with the lowest processing time), which is identified as a dispatching rule. Also, a greedy heuristic algorithm as introduced by Kacem et al. (2002a) is employed to assign appropriate machines to operations. This procedure enables us to assign each operation to the right machine while taking into account the processing times and the workloads of machines resulting from the operations already assigned. Below is a brief summary of the assignment algorithm (Kacem et al., 2002a):

Assignment procedure:

Pijk: processing time of operation Oi,j on machine k
Sijk: assignment of operation Oi,j to machine k

Initialization: Set all elements of S to 0 and copy P into P′.
For i = 1 to n
  For j = 1 to ni
    Min = ∞;
    position = 1;
    For k = 1 to m
      If (P′ijk < Min) Then
        Min = P′ijk;
        position = k;
      EndIf
    EndFor
    Si,j,position = 1;
    For j′ = j + 1 to ni
      P′i,j′,position = P′i,j′,position + Pi,j,position;
    EndFor
    For i′ = i + 1 to n
      For j′ = 1 to ni′
        P′i′,j′,position = P′i′,j′,position + Pi,j,position;
      EndFor
    EndFor
  EndFor
EndFor

5. Computational results

Computational experiments were carried out to compare the approaches and to evaluate the efficiency of our proposed method. The algorithms were implemented in Matlab 7 on a Pentium IV running at 2 GHz under the Windows XP operating system. In this section, the presented approach is compared with other methods proposed for solving the FJSP. The other algorithms are categorized into two groups, weighted summation of objectives and Pareto approaches, each of which is inspected separately below due to their different natures.

5.1. Weighted summation of objectives approach

AL+CGA (Kacem et al., 2002a), PSO+SA (Xia and Wu, 2005), GA+VND (Gao et al., 2008), PSO+TS (Zhang et al., 2009), SACO (Xing et al., 2009a) and ESM (Xing et al., 2009b) are the approaches applied to the FJSP that used the weighted summation of objectives. In these algorithms, three instance problems based on practical data in three sizes – small (8×8), medium (10×10), and large (15×10) – were employed as benchmark problems. Different release times are not considered by these algorithms; so, the release time values in the proposed algorithm (MOPSO+LS) are set to zero. Because the computation times, maximum runs, and weights are not reported in the literature, an accurate comparison seems unachievable. However, the comparison of the approach presented here with the other algorithms shows that the priority-based approach is competitive with all of them. Even if the proposed algorithm had not competed well with the algorithms using the weighted summation of objectives, this could not be considered a weakness. The values selected for the parameters are shown in Table 1. Also, thirty runs were made for each case to check consistency.

Table 2 shows the results obtained from MOPSO+LS and from the previous methods for all instances. The best solutions are indicated. The results indicate that the ESM algorithm has the best results among the previous algorithms. However, this is because that algorithm was run with different weight sets for different objectives and the best results were presented. In the 8×8 problem, the solutions of MOPSO+LS dominate AL+CGA in all iterations. Also, the solutions of the other algorithms are among the MOPSO+LS solutions but never dominate them. So, in small-sized problems, our proposed approach has no dominating solution but is not dominated by the others, either. In the 10×10 problem, our algorithm often has better, or at least equiponderant, solutions compared with the others. The MOPSO+LS algorithm was able to dominate the previous algorithms for problems of size 15×10, and this result is based on the best solution found in all iterations. In addition, the results indicate that in 70% of the iterations, there is always one solution better than, or at least equal to, the best previous solution. Finally, although we were not able to compare

Table 1
MOPSO+LS algorithm parameters in comparison with weighted summation methods.

n×m     Swarm size   Neighborhood size   Iterations   Wmax   Wmin   C1    C2    g1   g2
8×8     70           35                  65           0.9    0.4    0.8   1.2   7    15
10×10   70           35                  65           0.9    0.4    0.8   1.2   7    15
15×10   100          33                  100          0.9    0.4    0.8   1.2   14   18
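The assignment procedure of Kacem et al. (2002a) summarized earlier can be transcribed into Python roughly as follows; the function name kacem_assignment and the nested-list layout of proc are our own illustrative assumptions, not the authors' code:

```python
def kacem_assignment(proc):
    """Greedy machine assignment transcribed from the pseudocode above
    (after Kacem et al., 2002a). proc[i][j][k] is the processing time of
    operation O(i,j) on machine k (use float('inf') where machine k is
    not capable). Returns the chosen machine index for every operation."""
    p = [[row[:] for row in job] for job in proc]   # P' starts as a copy of P
    choice = [[0] * len(job) for job in proc]
    for i in range(len(proc)):
        for j in range(len(proc[i])):
            # machine with the minimum updated time P'(i,j,k)
            position = min(range(len(p[i][j])), key=lambda k: p[i][j][k])
            choice[i][j] = position
            added = proc[i][j][position]
            # charge the chosen time to later operations of the same job ...
            for j2 in range(j + 1, len(proc[i])):
                p[i][j2][position] += added
            # ... and to every operation of the following jobs
            for i2 in range(i + 1, len(proc)):
                for j2 in range(len(proc[i2])):
                    p[i2][j2][position] += added
    return choice
```

Updating P′ after every choice is what balances the workload: once a machine has been loaded, it looks correspondingly slower to all operations that have not yet been assigned.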
Table 2
Comparison of MOPSO+LS with weighted summation of objectives approaches.

Size (n×m)  Objective            AL+CGA     PSO+SA     GA+VND  PSO+TS     SACO       ESM                 MOPSO+LS
                                 1    2     1    2     1       1    2     1    2     1    2    3    4    1    2    3    4    5
10×10       Makespan (F1)        7    –     7    –     7       7    –     7    8     8    7    8    –    8    7    8    7    –
            Total workload (F2)  45   –     44   –     43      43   –     42   42    41   42   42   –    41   42   42   43   –
            Max workload (F3)    5    –     6    –     5       6    –     6    5     7    6    5    –    7    6    5    5    –
15×10       Makespan (F1)        23   24    12   –     11      11   –     11   11    11   –    –    –    12   11   –    –    –
            Total workload (F2)  95   91    91   –     91      93   –     91   93    91   –    –    –    93   91   –    –    –
            Max workload (F3)    11   11    11   –     11      11   –     11   10    11   –    –    –    10   11   –    –    –
Kacem, I., Hammadi, S., Borne, P., 2002a. Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Transactions on Systems, Man, and Cybernetics, 1–13.
Kacem, I., Hammadi, S., Borne, P., 2002b. Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic. Mathematics and Computers in Simulation 60, 245–276.
Kennedy, J., Mendes, R., 2002. Population structure and particle swarm performance. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676.
Mastrolilli, M., Gambardella, L.M., 2002. Effective neighborhood functions for the flexible job shop problem. Journal of Scheduling 3, 3–20.
Mati, Y., Rezg, N., Xie, X., 2001. An integrated greedy heuristic for a flexible job shop scheduling problem. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 2534–2539.
Mostaghim, S., Teich, J.R., 2003. Strategies for finding local guides in multi-objective particle swarm optimization (MOPSO). In: Proceedings of the IEEE Swarm Intelligence Symposium, pp. 26–33.
Tay, J.C., Ho, N.B., 2008. Evolving dispatching rules using genetic programming for solving multi-objective flexible job-shop problems. Computers and Industrial Engineering 54, 453–473.
Tseng, L.Y., Lin, Y.T., 2010. A genetic local search algorithm for minimizing total flow time in the permutation flow shop scheduling problem. International Journal of Production Economics 127 (1), 121–128.
Tung, L.F., Lin, L., Nagi, R., 1999. Multi-objective scheduling for the hierarchical control of flexible manufacturing systems. The International Journal of Flexible Manufacturing Systems 11, 379–409.
Wu, Z., Weng, M.X., 2005. Multiagent scheduling method with earliness and tardiness objectives in flexible job shops. IEEE Transactions on Systems, Man, and Cybernetics-Part B 35 (2), 293–301.
Xia, W., Wu, Z., 2005. An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems. Computers and Industrial Engineering 48, 409–425.
Xing, L.-N., Chen, Y.-W., Yang, K.-W., 2009a. Multi-objective flexible job shop schedule: design and evaluation by simulation modeling. Applied Soft Computing 9, 362–376.
Xing, L.-N., Chen, Y.-W., Yang, K.-W., 2009b. An efficient search method for multi-objective flexible job shop scheduling problems. Journal of Intelligent Manufacturing 20 (3), 283–293.
Zhang, G., Shao, X., Li, P., Gao, L., 2009. An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem. Computers and Industrial Engineering 56 (4), 1309–1318.