
Original Article

Proc IMechE Part B: J Engineering Manufacture
1–13
© IMechE 2015
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0954405414564410
pib.sagepub.com

An improved meta-heuristic approach for solving identical parallel processor scheduling problem

S Bathrinath1, S Saravanasankar1, SS Mahapatra2, Manas Ranjan Singh2 and SG Ponnambalam3

Abstract
This article considers the problem of scheduling n jobs on m identical parallel processors, where an optimal schedule is defined as one that produces the minimum makespan (the completion time of the last job) and total tardiness among the set of schedules. Such a problem is known as the identical parallel processor makespan and total tardiness problem. In order to minimize the makespan and total tardiness of identical parallel processors, improved versions of particle swarm optimization and the harmony search algorithm are proposed to enhance scheduling performance with less computational burden. The major drawback of particle swarm optimization, premature convergence in the initial iterations, is avoided through the use of mutation, a commonly used operator in genetic algorithms, which introduces diversity into the solution. The proposed algorithm is termed particle swarm optimization with mutation. The convergence rate of the harmony search algorithm is improved by fine tuning parameters such as the pitch adjusting rate and bandwidth. The performance of the schedules is evaluated in terms of makespan and total tardiness, and the results are analyzed in terms of the percentage deviation of the solution from the lower bound on makespan. The results indicate that particle swarm optimization with mutation produces better solutions than genetic algorithm and particle swarm optimization in terms of average percentage deviation. However, the harmony search algorithm outperforms genetic algorithm, particle swarm optimization, and particle swarm optimization with mutation in terms of average percentage deviation. In certain instances, the solution obtained by the harmony search algorithm outperforms the existing clonal selection particle swarm optimization.

Keywords
Identical parallel processors, genetic algorithm, particle swarm optimization, mutation, harmony search algorithm, makespan, total tardiness

Date received: 29 June 2014; accepted: 17 November 2014

1Department of Mechanical Engineering, Kalasalingam University, Krishnankoil, India
2Department of Mechanical Engineering, National Institute of Technology Rourkela, Rourkela, India
3School of Engineering, Monash University Malaysia, Bandar Sunway, Malaysia

Corresponding author:
SS Mahapatra, Department of Mechanical Engineering, National Institute of Technology Rourkela, Rourkela 769008, India.
Email: mahapatrass2003@yahoo.com

Introduction

Scheduling is a decision-making process primarily concerned with the allocation of operations to machines in such a manner that process performance measures such as makespan, tardiness, earliness, and flow time are minimized.1,2 The scheduling procedures available today are not only quite realistic but also possess fast solution-generating capability. Scheduling finds widespread application in both manufacturing and service industries.1 Parallel processor scheduling is one of the scheduling scenarios that arise when a set of n jobs available at time 0 needs to be processed on m identical parallel processors (IPPs; machines). Each job has to be processed without interruption on one of the available m machines with processing time pij, where the jth job is processed on machine i. Each machine can process only one job at a time. Furthermore, a job cannot be processed on more than one machine at the same time. The objective is to find a job schedule that minimizes the makespan and total tardiness. The problem of

Downloaded from pib.sagepub.com by guest on February 18, 2015



scheduling n jobs on m IPPs, where an optimal schedule is defined as one that gives the smallest makespan (the completion time of the last job) and the minimum total tardiness among the set of schedules, is known as the IPP makespan and total tardiness problem (IPPMST). Using the standard three-field notation, the makespan problem is normally denoted P‖Cmax and the total tardiness problem P‖ΣTj, where P designates IPPs with jobs that are not otherwise constrained, Cmax represents the makespan, and ΣTj denotes the total tardiness. Such a scheduling scenario is treated as an NP-hard problem, and exact solutions for small-size problems have been reported using traditional operations research methods such as dynamic programming and mixed-integer linear programming.3–8 However, the computational time and cost increase in a non-polynomial manner for problems of reasonable size. In practical situations, large-scale problems are dealt with a view to obtaining quick solutions at less computational effort. As exact models cannot address this issue, a large number of heuristics have been proposed for solving IPPMST.9–15 Although heuristics generate good solutions, the solution quality can be far from optimal. Therefore, nature-inspired algorithms such as tabu search (TS), simulated annealing (SA), genetic algorithm (GA), and particle swarm optimization (PSO) have been proposed recently to generate approximate solutions close to the optimum with considerably less computational time.13,15–17 Among these algorithms, PSO is considered robust for its computational efficiency compared with other algorithms, and it needs very few parameters for tuning. In PSO, the whole algorithm rests on two equations, namely, the velocity and position updating rules. Normally, swarm-based optimization algorithms become trapped at local optima when the algorithm loses its diversity after a few iterations. To avoid premature convergence, the mutation operator has been borrowed from GA and embedded in PSO. Furthermore, this algorithm has been compared with a recent search-based algorithm known as the harmony search algorithm (HSA).

In this article, the smallest position value (SPV) rule proposed by Tasgetiren et al.,18 originally borrowed from the random key representation of Bean,19 is used for problem representation in PSO for sequencing the jobs in IPPMST. Moreover, a non-uniform mutation operator is incorporated when it is observed that the search procedure starts losing diversity and tends toward premature convergence.20 The results obtained from particle swarm optimization with mutation (MPSO) have further been validated using HSA with SPV rule problem representation integrating Mahdavi et al.'s21 fine tuning schemes.

Literature review

In order to solve the identical parallel machine scheduling problem, Lawler2 has proposed a transportation problem model for the case where the job processing times are identical. Dogramaci3 has proposed a dynamic programming algorithm to minimize total weighted tardiness. Elmaghraby and Park4 have used a branch and bound (B&B) technique to minimize penalty functions of tardiness considering identical due dates and processing times. Later, this method was modified by Barnes and Brennan,5 Azizoglu and Kirca,6 and Yalaoui and Chu7 to reduce the computational burden. Tanaka and Araki8 have proposed a solution strategy based on a B&B algorithm with a Lagrangian relaxation technique to obtain a tight lower bound (LB). Most of the exact methods are limited in solving the generalized IPPMST because they consider equal processing times, a common due date, or both. Therefore, a number of heuristic methods have been proposed to solve the problem. To minimize mean tardiness on a single machine, Wilkerson and Irwin22 have presented a heuristic method based on neighborhood search. This method has been extended to the parallel machine case by Baker and Scudder.23 Kim et al.24 have considered the problem of determining the allocation and sequence of jobs on parallel machines with the objective of minimizing total tardiness. Kumar et al.25 have proposed an SA-fuzzy logic approach to select the optimal weighted earliness–tardiness combinations in a non-identical parallel machine environment. Dogramaci and Surkis26 have implemented a heuristic method based on list algorithms designed for the parallel machine problem. Ho and Chang27 have proposed a new measurement technique called the traffic congestion ratio, combining processing times and due dates to obtain a traffic priority index for job assignment. Simon and Andrew28 and Patrizia et al.29 have compared various heuristics on similar instances to study their effectiveness. With the development of evolutionary computing techniques, meta-heuristic procedures are used to solve IPPMST problems and generate approximate solutions close to the optimum with considerably less computational effort. GA has been applied by Bean19 to solve IPPMST, where a representation technique called random keys is proposed to maintain feasibility from parent to offspring. With the objective of workflow balancing, Rajakumar et al.30 used GA to solve the parallel machine scheduling problem. Chaudhry and Drake31 have presented a GA approach to minimize the total tardiness of a set of tasks for identical parallel machines and the assignment of workers to machines. Koulamas32 has proposed a polynomial decomposition heuristic and a hybrid SA heuristic for the problem. Kim and Shin33 have implemented a TS algorithm that schedules jobs on parallel machines when release times and due dates of jobs are known a priori and setup times are sequence dependent. Bilge et al.34 have developed a TS approach to solve the IPPMST with sequence-dependent setups on uniform parallel machines. Anghinolfi and Paolucci35 have proposed a hybrid


meta-heuristic approach that integrates several features of TS, SA, and variable neighborhood search (VNS) for solving IPPMST.

For minimizing the makespan, Graham9 has used the well-known longest processing time (LPT) rule for assigning the list of jobs to the least loaded machines. Coffman et al.10 have proposed the MULTIFIT heuristic and proved the close relation between the bin-packing problem and the maximum completion time problem. Lee and Massey11 have proposed the COMBINE heuristic, which utilizes the LPT rule for an initial solution and improves the schedule by the MULTIFIT algorithm. Gupta and Ruiz Torres12 have proposed the LISTFIT heuristic for minimizing makespan on identical parallel machines with a new LB. Kurz and Askin36 have investigated a hybrid flexible flow line environment with identical parallel machines and non-anticipatory sequence-dependent setup times, with the objective of minimizing the makespan.

Min and Cheng13 have proposed a GA for solving the large-scale identical parallel machine scheduling problem with many jobs and machines to minimize makespan. Kennedy and Eberhart37 have proposed the PSO algorithm, which updates the positions and velocities of particles and converges quickly to near-optimal solutions. Tasgetiren et al.18 have proposed a PSO for the permutation flow shop scheduling problem. Liao et al.38 and Liu et al.39 have also used PSO for flow shop scheduling problems. Niu et al.40 have proposed a clonal selection algorithm to improve swarm diversity and avoid premature convergence. Geem et al.41 have proposed HSA, a nature-inspired algorithm mimicking the improvisation of music players. The harmony in music is analogous to the optimization solution vector, and the musicians' improvisations are analogous to the local and global search schemes in optimization techniques. It has been successfully applied to various optimization problems such as sequential quadratic programming, structural optimization, and so on.21,42–44 Zhang et al.45 have addressed the dynamic job shop scheduling problem with random job arrivals and machine breakdowns. Renna46 has adopted a pheromone-based approach to creating schedules for the job shop environment in cellular manufacturing systems. Rui et al.47 have developed a bidirectional convergence ant colony algorithm to solve the integrated job shop scheduling problem with tool flow in a flexible manufacturing system (FMS). Kim and Lee48 have considered the scheduling problem in hybrid flow shops with parallel machines at each serial production stage, where each job may visit each stage several times (re-entrant flows). Marimuthu et al.49 have addressed the problem of making sequencing and scheduling decisions for n jobs in m-machine flow shops with a lot-sizing constraint. Pan et al.50 have presented a novel multiobjective particle swarm optimization (MOPSO) algorithm for solving no-wait flow shop scheduling problems with makespan and maximum tardiness criteria. Rossi and Dini51 have presented a GA for generalized job shop problem-solving, where the generalization includes feeding times, sequences of setup-dependent operations, and jobs with different routings among work centers, including ''multi-identical'' machines. Wang and Brunn52 have proposed an effective GA for job shop sequencing and scheduling, where a simple heuristic rule is adapted and embedded into the GA to avoid producing infeasible solutions.

Scheduling on IPPs

The scheduling of IPPs is regarded as one of the most studied classical scheduling problems; it involves the assignment of multiple jobs to machines. Among the performance measures considered, makespan, total tardiness, and earliness are commonly used to evaluate schedules. The problem can be stated as follows: there are n jobs waiting to be scheduled on m IPPs so as to minimize makespan and total tardiness. The basic assumptions to be considered in solving IPPMST problems are as follows:

1. All jobs can be processed on any of the parallel machines.
2. Each of the parallel machines can process at most one job at a time.
3. Each job has to be processed without interruption on one of the machines.
4. All jobs are available to be processed at time 0.
5. No job can be processed by more than one machine.
6. The processing times and due dates of the jobs, the number of jobs, and the number of machines are given and fixed.
7. No downtime is considered, and setup time is included in processing time.

A set N = {J1, J2, ..., Jn} of n jobs is to be scheduled on a set M = {M1, M2, ..., Mm} of m identical parallel machines. The first objective is to find the optimal schedule S = {S1, S2, ..., Sm}, where Sj is the subset of jobs assigned to machine Mj, such that max{C1(S), C2(S), ..., Cm(S)} = Cmax(S) is minimum, where

Cj(S) = Σ_{pi ∈ Sj} pi    (1)

The following notation is used for finding the total tardiness: i is the index of machines, i = 1, 2, ..., m, where m is the number of machines; Π = {S1, S2, ..., Sj, ..., Ski} is the sequence of jobs assigned to machine i; Sj is the index of a job, where j = 1, 2, ..., ki, and ki represents the total number of jobs to be processed by machine i; pSj is the processing time of job Sj; STSj is the starting time of job Sj; cSj is the completion time of job Sj; and dSj is the due date of job Sj.

The IPP total tardiness can be represented mathematically in the following manner


ST_{S1} = 0    (2)

Constraint (2) states that the starting time of the first job on each machine is equal to 0.

c_{S1} = ST_{S1} + p_{S1}    (3)

Constraint (3) denotes the completion time of the first job in the sequence, which is the sum of the starting time and the processing time of that job.

ST_{Sj} = c_{S(j-1)}, j > 1    (4)

c_{Sj} = ST_{Sj} + p_{Sj}    (5)

The tardiness of job Sj is calculated using the following equation

T_{Sj} = max(0, c_{Sj} - d_{Sj})    (6)

The second objective of this article is to minimize the total tardiness, which is defined using equation (7)

T = Σ_{i=1}^{m} Σ_{j=1}^{ki} T_{Sj}    (7)

The mixed-integer programming formulation of the IPP machine scheduling problem given above can be solved by the B&B technique, but the computational effort is prohibitive for large-scale problems.

PSO

PSO is inspired by the social behavior of bird flocking and fish schooling. It is a population-based, bio-inspired optimization algorithm originally introduced by Kennedy and Eberhart.37 In PSO, particles represent potential solutions; each particle adjusts its position based on its own experience as well as the experience of neighboring particles, by utilizing its best known position in the search space as well as the entire swarm's best known position. By balancing global and local exploration capabilities, PSO maintains its flexibility in reaching globally optimal solutions. The particles in PSO have their own memory, which enables good solutions to be retained among all particles; in GA, by contrast, the previous knowledge of the problem is destroyed once the population changes and chromosomes share their information with each other. In PSO, the previous best value of a particle is called its Pbest, and the best value among all the Pbest values in the swarm is called Gbest. Compared with GA, all the particles in PSO tend to converge to the best solution quickly, even in the local version in most cases, and PSO does not require classical optimization methods such as gradient descent or quasi-Newton to solve the optimization problem. Due to its simplicity, easy implementation, and quick convergence, PSO has gained much attention among researchers and has been successfully applied to a wide range of applications in almost all fields of engineering.18,20

The basic elements of PSO are summarized as follows. The particle X_i^t denotes the ith particle in the swarm at iteration t and is represented by D dimensions as X_i^t = [x_{i1}^t, x_{i2}^t, ..., x_{iD}^t], where x_{ij}^t is the position value of the ith particle with respect to the jth dimension (j = 1, 2, ..., D). The population pop^t is the set of r particles in the swarm at iteration t, pop^t = [X_1^t, X_2^t, ..., X_r^t]. The permutation of the jobs is represented by π_i^t = [π_{i1}^t, π_{i2}^t, ..., π_{in}^t], where π_{ij}^t is the assignment of job j of particle i in the permutation at iteration t. The velocity of the particle is represented as V_i^t = [v_{i1}^t, v_{i2}^t, ..., v_{iD}^t], where v_{ij}^t is the velocity of particle i at iteration t with respect to the jth dimension. The inertia weight w^t is a parameter that controls the impact of the previous velocities on the current velocity. For each particle, the personal best is defined as P_i^t = [p_{i1}^t, p_{i2}^t, ..., p_{iD}^t], where p_{ij}^t is the position value of the ith personal best with respect to the jth dimension. The global best is defined as G^t = [g_1^t, g_2^t, ..., g_D^t], where g_j^t is the position value of the global best with respect to the jth dimension. The velocity and position of the particles are updated using equations (8) and (9)

v_{ij}^t = w^{t-1} v_{ij}^{t-1} + c1 r1 (p_{ij}^{t-1} - x_{ij}^{t-1}) + c2 r2 (g_j^{t-1} - x_{ij}^{t-1})    (8)

x_{ij}^t = x_{ij}^{t-1} + v_{ij}^t    (9)

where c1 and c2 are the cognitive and social parameters, respectively, and r1 and r2 are uniform random numbers in (0, 1)

w^t = w^{t-1} × β    (10)

where β is the decrement factor. Updating with equations (8) and (9) moves the particles toward a compound vector of the global and local best solutions.

Proposed PSO algorithm

Solution representation

The solution representation is one of the important issues when designing PSO. To maintain the mapping between the problem domain and PSO particles for IPPMST, D dimensions are used to represent n jobs. The particle X_i^t = [x_{i1}^t, x_{i2}^t, ..., x_{iD}^t] corresponds to the continuous position values for the n jobs in IPPMST. The particle does not represent a permutation by itself. Hence, Tasgetiren et al.18 introduced the SPV rule to determine the permutation implied by the position values x_{ij}^t of particle X_i^t, as shown in Table 1.

The smallest position value in the table is x_{i4}^t = -2.92. So, dimension j = 4 is assigned to the first job π_{i1}^t = 4 in the permutation π_i^t. Similarly, the next smallest values are arranged in sequence. Different permutations at each iteration t can be obtained by updating the position of each particle. The initial sequence of jobs is π_{ij}^t = [4, 6, 3, 5, 2, 1]. After calculating the sequence of jobs by the SPV rule, the jobs have been


Table 1. SPV rule.

Dimension, j     1      2      3      4      5      6
x_ij^t           2.13   1.21   -1.20  -2.92  0.97   -1.91
v_ij^t           1.78   -0.34  -1.8   1.54   -0.81  0.6
Jobs, π_ij^t     4      6      3      5      2      1

SPV: smallest position value. The smallest position value (-2.92, at dimension j = 4) determines the first job in the permutation.
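The SPV decoding of Table 1, together with the machine assignment illustrated in Table 3 below, can be sketched in Python (the paper's experiments use MATLAB; this illustrative translation uses our own function names):

```python
def spv_permutation(positions):
    """SPV rule: 1-based job labels ordered by ascending position value."""
    return [j + 1 for j in sorted(range(len(positions)), key=lambda j: positions[j])]

def assign_jobs(perm, proc_times, m):
    """List-schedule the SPV permutation: each job in turn goes to the
    machine that becomes free first; returns the per-machine loads."""
    loads = [0] * m
    for job in perm:
        k = loads.index(min(loads))   # machine with the smallest load so far
        loads[k] += proc_times[job - 1]
    return loads

x = [2.13, 1.21, -1.20, -2.92, 0.97, -1.91]   # position values from Table 1
p = [6, 8, 10, 9, 12, 7]                      # processing times from Table 2
perm = spv_permutation(x)                     # [4, 6, 3, 5, 2, 1]
loads = assign_jobs(perm, p, m=2)             # [27, 25], so the makespan is 27
```

This reproduces the worked example: machine 1 finishes at 27 units of time, machine 2 at 25, and the makespan is the maximum, 27.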

Table 2. Processing times for the jobs.

Jobs, j   1   2   3    4   5    6
pj        6   8   10   9   12   7

Table 3. Assignment of jobs using SPV rule.

Machines, mi   1       2
pij            9(4)    7(6)
               12(5)   10(3)
               6(1)    8(2)

SPV: smallest position value.

assigned to the machines in the same sequence. Let pj be the processing time of each of the n jobs, as shown in Table 2.

For example, there are two machines (m = 2) and six jobs (n = 6). From the SPV rule, the fourth job occupies the first position in the initial sequence. The corresponding processing time of that job is taken from Table 2 and placed on the first machine, as shown in Table 3. Similarly, the sixth job in the initial sequence occupies the second position, so its processing time from Table 2 is placed on the second machine. Both machines have now been assigned one job each. Among these machines, the machine that completes its assigned job first receives the next job from the sequence in Table 1. In this case, the second machine, assigned job number 6, completes first, as the processing time of that job is 7 units of time. The third position in the sequence is job number 3, with a processing time of 10 units of time; hence, the second machine is next occupied by job number 3. This process is repeated until all the jobs are assigned to machines. From Table 3, it is evident that the completion time of the first machine is 27 units of time and that of the second machine is 25 units of time. The makespan is the maximum of the completion times over all machines; in this case, the makespan is 27 units of time. In Table 3, pij denotes the processing time of the jth job on the ith machine, where 9(4) indicates that the fourth job is processed with 9 units of time.

Mutation operator

The mutation operator adopted from GA alters one or more gene values in a chromosome from their initial state. This leads to the generation of new gene values that are added to the gene pool, through which GA can attain better solutions than before. Hence, mutation is an important operator of the genetic search, as it helps to avoid local optima by maintaining diversity in the population of chromosomes. In PSO, the lack of population diversity in the swarm is considered a factor in convergence to local minima. Hence, PSO has been augmented with a mutation operator to enhance its global search capacity and improve performance. In this article, the non-uniform mutation operator of Michalewicz20 has been incorporated in PSO; it mutates some particles selected randomly from the swarm. If the number of iterations with no diversity in the solution exceeds a threshold, mutation is carried out. This operator works by changing a particle position dimension (x_iD)

Mutate(x_iD) = { x_iD + delta(t, U - x_iD)  if rb = 1
               { x_iD - delta(t, x_iD - L)  if rb = 0    (11)

where t is the current iteration number, U is the upper bound of the particle dimension, L is the LB of the particle dimension, rb is a randomly generated bit, and delta(t, y) returns a value in the range [0, y]

delta(t, y) = y (1 - r^((1 - t/T)^b))    (12)

where r is a random number generated from the uniform distribution on [0, 1], T is the maximum number of iterations, and b is a tunable parameter, set to 5 by Michalewicz.20 The pseudo-code for the proposed MPSO is given as follows:

Initialize the parameters, including swarm size, maximum number of iterations, w, c1, c2
While (termination condition, i.e., maximum iterations, not met) Do
t = 0
Initialize particles' positions and velocities stochastically;
Apply SPV rule;
Evaluate each particle's fitness using the objective function;
Initialize pbest position;


Initialize gbest position with the particle having the lowest fitness in the swarm;
t = t + 1;
Update the velocity of the particles by equation (8);
Update the position of the particles by equation (9);
Apply SPV rule;
Evaluate each particle's fitness using the objective function;
Perform mutation if t < (Tmax × PMUT), where Tmax is the maximum number of iterations and PMUT is the probability of mutation;
Find the new gbest and pbest values by comparison;
Update gbest of the swarm and pbest of each particle;
end do
end
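The MPSO loop above can be sketched compactly in Python. This is an illustrative, minimal implementation under stated assumptions: a makespan-only fitness via list scheduling, position bounds of [-4, 4], and small swarm/iteration counts with a gentler inertia decrement than the paper's settings (the paper uses w = 0.6, c1 = c2 = 1.04, β = 0.7 and 15,000 iterations in MATLAB); all names here are ours, not the authors':

```python
import random

def spv(positions):
    # SPV rule: job indices (0-based here) ordered by ascending position value
    return sorted(range(len(positions)), key=lambda j: positions[j])

def makespan(perm, p, m):
    # List-schedule the permutation on m identical machines
    loads = [0.0] * m
    for job in perm:
        k = loads.index(min(loads))  # machine that becomes free first
        loads[k] += p[job]
    return max(loads)

def mutate(x, t, T, lo=-4.0, hi=4.0, b=5):
    # Non-uniform mutation, equations (11) and (12)
    step = 1 - random.random() ** ((1 - t / T) ** b)
    if random.getrandbits(1):
        return x + (hi - x) * step
    return x - (x - lo) * step

def mpso(p, m, swarm=20, T=300, w=0.6, c1=1.04, c2=1.04,
         beta=0.99, pmut=0.1, seed=1):
    random.seed(seed)
    n = len(p)
    X = [[random.uniform(-4.0, 4.0) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pcost = [makespan(spv(x), p, m) for x in X]
    gcost = min(pcost)
    gbest = pbest[pcost.index(gcost)][:]
    for t in range(1, T + 1):
        for i in range(swarm):
            for j in range(n):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])  # cognitive pull
                           + c2 * r2 * (gbest[j] - X[i][j]))    # social pull
                X[i][j] += V[i][j]
            if t < T * pmut:  # mutate only in early iterations, as in the pseudo-code
                d = random.randrange(n)
                X[i][d] = mutate(X[i][d], t, T)
            cost = makespan(spv(X[i]), p, m)
            if cost < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost
                if cost < gcost:
                    gbest, gcost = X[i][:], cost
        w *= beta  # equation (10)
    return gcost, spv(gbest)

# Jobs of Table 2 on two machines; the partition {6, 8, 12} / {10, 9, 7} shows
# that a makespan of 26 is achievable
cost, perm = mpso([6, 8, 10, 9, 12, 7], m=2)
```

The decoded `perm` is always a valid permutation because SPV only sorts indices; the mutation and velocity updates act on the continuous positions, never on the permutation itself.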

The pseudo-code is further explained with the flow chart shown in Figure 1.

Figure 1. Flow chart for MPSO algorithm.

HSA

Geem et al.41 introduced an interesting meta-heuristic algorithm called HSA. It is population based and mimics the musical process of searching for a better state of harmony, such as jazz improvisation. It has advantages over traditional optimization techniques: great power in global search and simplicity in both concept and implementation. Owing to these advantages and its novelty, HSA has received increasing attention among researchers and has been vigorously applied to various engineering optimization problems. One of the finest qualities of HSA is the identification of high-performance regions of the solution
space at a reasonable time. However, when performing local search for numerical applications, it does not perform at its best. Hence, fine tuning the HSA characteristics becomes inevitable. In order to improve these fine tuning characteristics and the convergence rate, Mahdavi et al.21 have proposed an improved harmony search that adds the fine tuning features of mathematical techniques and has the potential to outperform traditional HSA. The algorithm uses the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR) for finding the solution vector in the search space. The optimization procedure of HSA is given in the following steps:

1. Initialization of the harmony memory (HM);
2. Improvisation of a new HM;
3. Updating of the HM;
4. Checking of the stopping criteria; otherwise, repeat steps 2 and 3.

Initialization of HM

A set of initial solutions of harmony memory size (HMS) is generated to build the HM, which is represented by a two-dimensional matrix in which the rows consist of a set of solutions xi (the population size), while the columns consist of the variables of each solution (jobs). Each solution xi is considered a one-dimensional array whose size is set by the maximum number of jobs considered in the problem instance. HM is represented as given in equation (13)

HM = [ x_1^1, x_2^1, ..., x_{K-1}^1, x_K^1;
       x_1^2, x_2^2, ..., x_{K-1}^2, x_K^2;
       ...;
       x_1^{HMS-1}, x_2^{HMS-1}, ..., x_{K-1}^{HMS-1}, x_K^{HMS-1};
       x_1^{HMS}, x_2^{HMS}, ..., x_{K-1}^{HMS}, x_K^{HMS} ]    (13)

Improvisation of a new HM

The divergence and convergence of the search in HSA are maintained during this process. HMCR and PAR are considered by Mahdavi et al.21 as the main parameters governing convergence or divergence of the search. Here, the new solutions are


constructed stochastically using one of the following three operators: (1) memory consideration (based on the HMCR), (2) random consideration (based on 1 - HMCR), and (3) pitch adjustment (based on the PAR). For the memory consideration, the value of the first decision variable (x'1) is taken from any of the values stored in the HM range (x'1 to x'1^HMS). The values of the other decision variables (x'2, x'3, ..., x'K) are taken in the same manner

x'_i ← { x'_i ∈ {x_i^1, x_i^2, ..., x_i^HMS}  with probability HMCR
       { x'_i ∈ X'_i                          with probability (1 - HMCR)    (14)

Each component obtained by memory consideration is examined to determine whether its pitch should be adjusted

Pitch adjusting decision for x'_i ← { Yes  with probability PAR
                                    { No   with probability (1 - PAR)    (15)

If the pitch adjustment decision for x'_i is Yes, then x'_i is replaced as follows

x'_i ← x'_i ± rand() × bw    (16)

where bw represents an arbitrary distance bandwidth and rand() is a random number between 0 and 1. By considering the HM, pitch adjustment is applied to each variable of the new HM vector. These improvisation steps are broadly similar to reproduction in GA, which uses crossover and mutation operators. The improvisation utilizes the full HM to construct solutions, whereas in GA new chromosomes are generated by crossover of two parents or by mutation; moreover, GA operators need to be carefully designed for highly constrained problems to reach optimal solutions.

Updating of HM

If the new harmony vector (x' = x'1, x'2, ..., x'K) is better than the worst harmony in the HM, judged in terms of the objective function value, the new harmony is included in the HM and the existing worst harmony is removed from it.

Check for stopping criteria

If the maximum number of improvisations (the stopping criterion) is reached, computation is terminated. Otherwise, steps 2 and 3 are repeated.

Proposed HSA

In the HM vector, each member of the population pool is converted to a job schedule using the SPV rule–based problem representation discussed in section ''Solution representation'' to obtain the initial sequence of jobs and the assignment of jobs to the machines for calculating the objective function. The traditional HSA uses fixed values for both PAR and bw: they are set in the initialization step and are not changed during new generations. This is considered the main drawback of traditional HSA, so the need arises to address this issue. By fine tuning the solution vectors through the parameters PAR and bw, the convergence rate is increased, leading toward the optimal solution. Still, there is a risk of conflict in assigning small and large values to PAR and bw, or vice versa; hence, the parameter values need to be selected judiciously. Fesanghary et al.42 have developed fine tuning rules for PAR and bw that reduce this conflict. It has been reported that, using the equations stated below for PAR and bw, these issues are resolved

PAR(gn) = PARmin + ((PARmax - PARmin) / NI) × gn    (17)

where PAR(gn) represents the pitch adjusting rate for each generation, PARmin the minimum pitch adjusting rate, PARmax the maximum pitch adjusting rate, NI the number of solution vector generations, and gn the generation number.

For the bandwidth, tuning is done using the following equation

bw(gn) = bwmax exp(c · gn),   c = ln(bwmin / bwmax) / NI    (18)

where bw(gn) denotes the bandwidth for each generation, bwmax the maximum bandwidth, and bwmin the minimum bandwidth.

Results and discussions

The computational study aims to analyze the performance of MPSO and HSA in minimizing makespan and total tardiness for IPP scheduling. The algorithms are coded in MATLAB R2010a and executed on an Intel® Core™ i5 CPU M430 at 2.27 GHz with 4 GB RAM. The benchmark problems have been taken from Tanaka and Araki8 and are also available at https://sites.google.com/site/shunjitanaka/pmtt. Fisher53 has proposed the standard method for generating these problems. The integer processing times pj (1 ≤ j ≤ n) of these problems have been generated from the uniform distribution on [1, 100]. The total processing time is computed as P = Σ_{j=1}^{n} pj, and the due dates dj (1 ≤ j ≤ n) have been generated from the uniform distribution [P(1 - t - R/2)/m, P(1 - t + R/2)/m], where n is the number of jobs, m is the number of machines, and t is the tardiness factor; the due date range is varied by n = 20, m = {2, 3, 4, 5, 6, 7, 8, 9, 10}, t = {0.2, 0.4, 0.6, 0.8, 1.0}, and R = {0.2, 0.4, 0.6,

0.8, 1.0}. For every combination of m, n, t, and R, five problems have been generated. Five characteristics are used to represent a problem: number of jobs, number of machines, tardiness factor, due date range, and position of the instance. For example, the notation 20_03_04_08_001 represents the first problem with three machines, 20 jobs, t = 0.4, and R = 0.8. Among these benchmark problems, 30 problems have been randomly selected for testing and analysis of the proposed algorithms. MPSO parameter settings are as follows: the inertia weight w is set as 0.6, both social and cognitive parameters are set as 1.04, and the decrement factor b is set as 0.7. The maximum number of iterations taken for MPSO is 15,000. HSA parameter settings are as follows: HMCR as 0.9, PARmin as 0.4, PARmax as 0.9, bwmin as 0.0001, and bwmax as 1.0. The maximum number of iterations taken for HSA is 5000. For GA, single-point crossover and swap mutation operators have been adopted; the crossover rate and the mutation rate are set as 0.85 and 0.1, respectively.

Figure 2. Convergence curve for makespan with MPSO for the problem 20_10_02_08_001.

The MPSO algorithm has been compared with GA and PSO. In all cases, the percentage deviation (PD) of MPSO is better than that of GA and PSO. HSA has been tested on the same benchmark problems, where the PD value of HSA is better than GA, PSO, and MPSO. The results are represented in terms of the PD of the solution from the LB reported by Gupta and Ruiz Torres.12 The LB has been calculated using the equation given below

LB = max( max_{1≤i≤n} p_i , ⌈P/m⌉ )    (19)

PD = (Best Makespan − LB)/LB × 100    (20)

The average percentage deviation (APD) of the proposed MPSO has been compared with GA and PSO on IPPMST. The APD can be calculated as follows

APD = ( Σ_{L=1}^{I} PD(L) ) / I    (21)

where I is the total number of problems and L stands for the index of the problem.

From Table 4, it is observed that the calculated APD for GA is 0.2686 and for PSO is 0.1385. The APD value obtained by MPSO is 0.0851. Based on APD comparisons, MPSO turns out to be superior to GA and PSO, whereas the performance of GA is the worst among all algorithms. The APD value for HSA is 0.0523, which clearly shows that HSA is superior to GA, PSO, and MPSO.

The improvement rate in APD using MPSO can be defined as follows

Improvement rate (%) = (APD_{GA,PSO} − APD_{MPSO}) / APD_{GA,PSO}    (22)

The improvement rate of MPSO with respect to GA is 68.3% and with respect to PSO is 38.5%. The convergence curve for makespan with MPSO is presented in Figure 2.

Similarly, the improvement rate for HSA can be defined as follows

Improvement rate (%) = (APD_{GA,PSO,MPSO} − APD_{HSA}) / APD_{GA,PSO,MPSO}    (23)

Figure 3. Convergence curve for makespan with HSA for the problem 20_10_02_08_001.

The improvement performance of HSA with respect to GA is 80.5%, with respect to PSO is 62.2%, and with respect to MPSO is 39.9%. The convergence curve for makespan with HSA is presented in Figure 3. The results of GA, PSO, MPSO, and HSA for makespan and their corresponding APD values are shown in Table 4.

The same benchmark problems have been tested for total tardiness, and the same instances of the problems

Table 4. APD values for makespan.

m  n  Benchmark problem  LB  GA  PSO  MPSO  HSA  PD of GA = (GA - LB)/LB  PD of PSO = (PSO - LB)/LB  PD of MPSO = (MPSO - LB)/LB  PD of HSA = (HSA - LB)/LB
3 20 20_03_04_08_001 397 433 415 411 402 0.0907 0.0453 0.0353 0.0126
4 20 20_04_02_02_001 298 327 307 304 299 0.0973 0.0302 0.0201 0.0034
4 20 20_04_02_02_002 235 272 251 242 236 0.1574 0.0681 0.0298 0.0043
4 20 20_04_02_02_003 321 335 324 323 322 0.0436 0.0093 0.0062 0.0031
5 20 20_05_02_02_001 238 272 251 246 239 0.1429 0.0546 0.0336 0.0042
6 20 20_06_02_02_001 199 224 205 204 200 0.1256 0.0302 0.0251 0.0050
7 20 20_07_02_02_001 170 191 178 174 172 0.1235 0.0471 0.0235 0.0118
8 20 20_08_02_02_001 149 171 159 156 152 0.1477 0.0671 0.0470 0.0201
9 20 20_09_02_02_001 132 157 141 137 134 0.1894 0.0682 0.0379 0.0152
10 20 20_10_02_02_001 119 145 132 131 125 0.2185 0.1092 0.1008 0.0504
10 20 20_10_02_02_002 95 121 110 106 102 0.2737 0.1579 0.1158 0.0737
10 20 20_10_02_02_003 128 170 151 148 141 0.3281 0.1797 0.1563 0.1016
10 20 20_10_02_02_004 99 134 119 112 111 0.3535 0.2020 0.1313 0.1212
10 20 20_10_02_02_005 100 142 116 110 106 0.4200 0.1600 0.1000 0.0600
10 20 20_10_02_04_001 119 160 142 131 125 0.3445 0.1933 0.1008 0.0504
10 20 20_10_02_04_002 95 132 117 102 99 0.3895 0.2316 0.0737 0.0421
10 20 20_10_02_04_003 128 169 151 148 141 0.3203 0.1797 0.1563 0.1016
10 20 20_10_02_04_004 99 132 119 112 110 0.3333 0.2020 0.1313 0.1111
10 20 20_10_02_04_005 100 130 116 109 106 0.3000 0.1600 0.0900 0.0600
10 20 20_10_02_06_001 119 154 142 131 125 0.2941 0.1933 0.1008 0.0504
10 20 20_10_02_06_002 95 140 118 110 101 0.4737 0.2421 0.1579 0.0632
10 20 20_10_02_06_003 128 159 140 133 130 0.2422 0.0938 0.0391 0.0156
10 20 20_10_02_06_004 99 130 119 112 110 0.3131 0.2020 0.1313 0.1111
10 20 20_10_02_06_005 100 142 116 109 106 0.4200 0.1600 0.0900 0.0600
10 20 20_10_02_08_001 119 142 131 127 125 0.1933 0.1008 0.0672 0.0504


10 20 20_10_02_08_002 95 137 117 102 99 0.4421 0.2316 0.0737 0.0421
10 20 20_10_02_08_003 128 173 151 148 141 0.3516 0.1797 0.1563 0.1016
10 20 20_10_02_08_004 99 132 119 112 110 0.3333 0.2020 0.1313 0.1111
10 20 20_10_02_08_005 100 126 116 109 106 0.2600 0.1600 0.0900 0.0600
10 20 20_10_02_10_001 119 159 142 131 125 0.3361 0.1933 0.1008 0.0504
Average percentage deviation 0.2686 0.1385 0.0851 0.0523

APD: average percentage deviation; LB: lower bound; GA: genetic algorithm; PSO: particle swarm optimization; MPSO: particle swarm optimization with mutation; HSA: harmony search algorithm; PD: percentage
deviation.
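As a sanity check, the PD and APD columns of Table 4 can be recomputed from equations (20) and (21). The short Python sketch below (illustrative only; it uses the LB and GA makespan values from three rows of the table) reproduces the printed figures. Note that the table reports PD as a fraction, that is, without the factor of 100 that appears in equation (20).

```python
# Recompute PD = (best makespan - LB) / LB for a few Table 4 rows and
# the APD over those rows (equations (20) and (21), PD as a fraction).
# Each row is (instance, LB, best makespan obtained by GA).
rows = [
    ("20_03_04_08_001", 397, 433),
    ("20_04_02_02_001", 298, 327),
    ("20_10_02_02_001", 119, 145),
]

def pd(lb: int, makespan: int) -> float:
    """Fractional percentage deviation of a makespan from the lower bound."""
    return (makespan - lb) / lb

pds = [round(pd(lb, ms), 4) for _, lb, ms in rows]
apd = sum(pd(lb, ms) for _, lb, ms in rows) / len(rows)
print(pds)  # [0.0907, 0.0973, 0.2185] -- matches the GA column of Table 4
```

The three recomputed values agree with the corresponding entries in the PD-of-GA column above.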

Table 5. Total tardiness value comparison.

m n Benchmark problem OPT CSPSO GA PSO MPSO HSA

3 20 20_03_04_08_001 240 – 460 448 432 250


4 20 20_04_02_02_001 144 – 175 158 148 144
4 20 20_04_02_02_002 105 105 122 109 108 105
4 20 20_04_02_02_003 151 – 210 192 161 157
5 20 20_05_02_02_001 149 – 181 163 156 149
6 20 20_06_02_02_001 161 – 193 178 175 168
7 20 20_07_02_02_001 164 – 201 188 178 167
8 20 20_08_02_02_001 177 – 210 192 181 177
9 20 20_09_02_02_001 181 – 205 188 182 181
10 20 20_10_02_02_001 195 – 224 206 201 196
10 20 20_10_02_02_002 135 136.6 152 144 139 136
10 20 20_10_02_02_003 202 – 233 220 206 203
10 20 20_10_02_02_004 180 180 202 195 182 180
10 20 20_10_02_02_005 157 – 200 184 174 160
10 20 20_10_02_04_001 165 – 191 176 167 167
10 20 20_10_02_04_002 85 86.5 129 115 87 85
10 20 20_10_02_04_003 151 – 193 178 162 152
10 20 20_10_02_04_004 162 165.8 186 171 165 162
10 20 20_10_02_04_005 108 – 158 138 135 118
10 20 20_10_02_06_001 161 – 189 174 164 163
10 20 20_10_02_06_002 35 36.7 69 58 39 36
10 20 20_10_02_06_003 122 – 165 147 127 125
10 20 20_10_02_06_004 157 159.4 188 172 159 157
10 20 20_10_02_06_005 83 – 133 125 118 98
10 20 20_10_02_08_001 170 – 223 204 177 173
10 20 20_10_02_08_002 24 24.3 71 63 36 24
10 20 20_10_02_08_003 120 – 156 138 128 122
10 20 20_10_02_08_004 164 169.6 220 199 178 172
10 20 20_10_02_08_005 78 – 128 102 88 81
10 20 20_10_02_10_001 177 – 241 216 204 184

OPT: optimal solution; PSO: particle swarm optimization; CSPSO: clonal selection particle swarm optimization; GA: genetic algorithm; HSA:
harmony search algorithm; MPSO: particle swarm optimization with mutation.
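For completeness, the two objectives reported in Tables 4 and 5 can be evaluated for any given job sequence as sketched below. This is a generic Python sketch, not the authors' MATLAB implementation: jobs are taken in sequence and each is assigned to the earliest-available machine, and the processing times and due dates used here are invented for illustration, not taken from the benchmark instances.

```python
# Evaluate makespan and total tardiness of a job sequence on m
# identical parallel machines: each job in the sequence is placed on
# the machine that becomes available first (list scheduling).
def evaluate(sequence, proc_times, due_dates, m):
    ready = [0] * m                       # next free time of each machine
    total_tardiness = 0
    for j in sequence:
        k = ready.index(min(ready))       # earliest-available machine
        completion = ready[k] + proc_times[j]
        ready[k] = completion
        total_tardiness += max(0, completion - due_dates[j])
    return max(ready), total_tardiness   # (makespan, total tardiness)

# Tiny illustrative instance: 4 jobs on 2 machines (made-up data).
p = [4, 3, 2, 5]   # processing times
d = [5, 5, 6, 8]   # due dates
print(evaluate([0, 1, 2, 3], p, d, m=2))  # -> (9, 1)
```

In the proposed algorithms, the sequence fed to such an evaluator is obtained by applying the SPV rule to a particle position (MPSO) or to a harmony vector (HSA).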

have been compared with the clonal selection PSO (CSPSO) reported by Niu et al.40 and with the optimal solutions (OPT) reported by Tanaka and Araki.8 The results for total tardiness are shown in Table 5. The comparison with the instances of Niu et al.40 indicates that the total tardiness value obtained by HSA either exactly matches or is better for eight out of nine instances. For instances 20_04_02_02_002 and 20_10_02_02_004, the total tardiness values by CSPSO are 105 and 180, respectively, and the same values are obtained by HSA. For the instance 20_10_02_02_002, the average total tardiness value by CSPSO is reported as 136.6, whereas the corresponding value by HSA is 136. For the instance 20_10_02_04_002, the total tardiness values by CSPSO and OPT are 86.5 and 85, respectively, whereas HSA results in a total tardiness of 85. For instances 20_10_02_04_004, 20_10_02_06_002, and 20_10_02_06_004, the total tardiness values by HSA are better than those of CSPSO. However, for the instance 20_10_02_08_004, CSPSO results in a better total tardiness value than HSA. It is to be noted that the total tardiness values produced by HSA are similar or superior to those of GA, PSO, and MPSO. Out of the 30 instances, CSPSO values are reported for 9 instances, and these are compared with the results produced by HSA. The convergence curves for total tardiness are shown in Figures 4 and 5.

Figure 4. Convergence curve for total tardiness using MPSO for the problem 20_10_02_08_001.

PSO, originally proposed by Kennedy and Eberhart,37 is mainly used in continuous optimization problems because it reaches solutions quickly. However, it has an inherent drawback of premature convergence at the initial stage of iterations. The limitations of PSO can be overcome by the use of mutation, a commonly used operator in GA, through the introduction of diversity in


the solution. The proposed algorithm is termed as MPSO. MPSO is found to perform better than GA and PSO in both continuous and combinatorial optimization problems. However, HSA, a more recently developed meta-heuristic, is simple in concept and easy to implement, and, like PSO, it needs only a few parameters to be tuned. Since HSA is a population-based meta-heuristic, multiple harmonic groups can be used in parallel, and this parallelism usually leads to better solutions with higher efficiency. Hence, a fine balance of intensification and diversification, as well as a good combination of parallelism with elitism, is the key to the success of HSA and, in fact, of any meta-heuristic algorithm. Thus, HSA performs better than GA, PSO, and MPSO.

Figure 5. Convergence curve of total tardiness using HSA for the problem 20_10_02_08_001.

Conclusion

In this article, two new meta-heuristic approaches have been presented for minimizing makespan and total tardiness in IPP scheduling problems. In the first approach, PSO has been modified using a mutation operator, and the job sequence is obtained using the SPV rule. In the second approach, HSA has been improved using the Mahdavi et al.21 parameter tuning of PAR and bw, and the sequence of jobs has been obtained through the SPV rule. The scheduling performance of both algorithms has been compared with GA and PSO. It is found that the performance of MPSO is superior to GA and PSO. However, it is to be noted that HSA outperforms GA, PSO, and MPSO. Both algorithms are novel in the field of scheduling. The study provides a framework for practicing engineers and managers to schedule large-scale IPP with less computational burden. However, tuning of algorithmic parameters is vital to obtain the best schedules. Both algorithms are extremely simple to implement. Although the proposed algorithms have been tested on IPPMST, the approach can be extended to flow shop, job shop, flexible flow shop, flexible job shop, FMS, and resource-constrained project scheduling problems. The mapping of continuous position values of PSO, or of the HM vector of HSA, to a schedule can be made by applying the SPV rule with little modification. In the case of continuous optimization problems, for example, process optimization, both proposed algorithms can be applied without the SPV rule. However, combinatorial problems require the generation of a job combination by use of the SPV rule. Applying the proposed algorithms to the combinatorial case makes problems more complex than their continuous counterparts, and preservation of solution diversity becomes an important issue. In this study, processing times are assumed to be deterministic and machines are perfectly reliable. In future, the algorithms may be modified to incorporate stochastic processing times and machine unreliability for solving IPPMST. Although the study applies the SPV rule for generation of job schedules, other mapping methods may be explored in conjunction with the proposed algorithms to study their performance on IPPMST.

Declaration of conflicting interests

The authors declare that there is no conflict of interest.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References

1. Pinedo M. Scheduling theory, algorithms and systems. 3rd ed. New York: Springer Science + Business Media, LLC, 2008.
2. Lawler EL. A ''pseudopolynomial'' algorithm for sequencing jobs to minimize total tardiness. Ann Discrete Math 1977; 1: 331–342.
3. Dogramaci A. Production scheduling of independent jobs on parallel identical machines. Int J Prod Res 1984; 16: 535–548.
4. Elmaghraby SE and Park SH. Scheduling jobs on a number of identical machines. AIIE T 1974; 6: 1–13.
5. Barnes JW and Brennan JJ. An improved algorithm for scheduling jobs on identical machines. AIIE T 1977; 9(1): 23–31.
6. Azigzoglu M and Kirca O. Tardiness minimization on parallel machines. Int J Prod Econ 1998; 55(2): 163–168.
7. Yalaoui F and Chu C. Parallel machine scheduling to minimize total tardiness. Int J Prod Econ 2002; 76(3): 265–279.
8. Tanaka S and Araki M. A branch-and-bound algorithm with Lagrangian relaxation to minimize total tardiness on identical parallel machines. Int J Prod Econ 2008; 113(1): 446–458.
9. Graham RL. Bounds on multiprocessor timing anomalies. SIAM J Appl Math 1969; 17: 416–429.
10. Coffman EG, Garey MR and Johnson DS. An application of bin-packing to multi-processor scheduling. SIAM J Comput 1978; 7: 1–17.


11. Lee CY and Massey JD. Multiprocessor scheduling combining LPT and MULTIFIT. Discrete Appl Math 1988; 20: 233–242.
12. Gupta JND and Ruiz Torres AJ. A LISTFIT heuristic for minimizing makespan on identical parallel machines. Prod Plan Control 2001; 12(1): 28–36.
13. Min L and Cheng W. A genetic algorithm for minimizing the makespan in the case of scheduling identical parallel machines. Artif Intell Eng 1999; 13: 399–403.
14. Abdekhodaee AH and Wirth A. Scheduling parallel machines with a single server: some solvable cases and heuristics. Comput Oper Res 2002; 29: 295–315.
15. Chang PY, Damodaran P and Melouk S. Minimizing makespan on parallel batch processing. Int J Prod Res 2004; 42(19): 4211–4220.
16. Yeh W-C, Lai P-J, Lee W-C, et al. Parallel machine scheduling to minimize makespan with fuzzy processing times and learning effects. Inform Sciences 2014; 269: 142–158.
17. Bathrinath S, Saravanasankar S, Ponnambalam SG, et al. Bi-objective optimization in identical parallel machine scheduling problem. In: BK Panigrahi, PN Suganthan, S Das, et al. (eds) Swarm, evolutionary, and memetic computing: 4th international conference, SEMCCO 2013, Chennai, India, December 19–21, 2013, proceedings, part I (Lecture Notes in Computer Science (LNCS), vol. 8297). Heidelberg: Springer, 2013, pp.377–388.
18. Tasgetiren MF, Liang YC, Sevkli M, et al. A particle swarm optimization algorithm for makespan and total flowtime minimization in permutation flowshop sequencing problem. Eur J Oper Res 2007; 177(3): 1930–1947.
19. Bean JC. Genetic algorithm and random keys for sequencing and optimization. ORSA J Comput 1994; 6(2): 154–160.
20. Michalewicz Z. Genetic algorithms + data structures = evolution programs. 3rd ed. London: Springer-Verlag, 1996.
21. Mahdavi M, Fesanghary M and Demangir E. An improved harmony search algorithm for solving optimization problems. Appl Math Comput 2007; 188: 1567–1579.
22. Wilkerson LJ and Irwin JD. An improved algorithm for scheduling independent tasks. AIIE T 1971; 3: 245–293.
23. Baker KR and Scudder GD. Sequencing with earliness and tardiness penalties: a review. Oper Res 1990; 38(1): 22–36.
24. Kim S-I, Choi H-S and Lee D-H. Scheduling algorithms for parallel machines with sequence-dependent set-up and distinct ready times: minimizing total tardiness. Proc IMechE, Part B: J Engineering Manufacture 2007; 221(6): 1087–1096.
25. Kumar R, Tiwari MK and Shankar R. Scheduling of flexible manufacturing systems: an ant colony optimization approach. Proc IMechE, Part B: J Engineering Manufacture 2003; 217(10): 1443–1453.
26. Dogramaci A and Surkis I. Evaluation of a heuristic for scheduling independent jobs on parallel identical processors. Manage Sci 1979; 25(12): 1208–1216.
27. Ho JC and Chang YL. Heuristics for minimizing mean tardiness for parallel machine. Nav Res Log 1991; 38(3): 367–381.
28. Simon D and Andrew W. Heuristic methods for the identical parallel machine flowtime problem with set-up times. Comput Oper Res 2005; 32(9): 2479–2491.
29. Patrizia B, Gianpaolo G, Antonio G, et al. Rolling horizon and fix-and-relax heuristics for the parallel machine lot sizing and scheduling problem with sequence-dependent set-up costs. Comput Oper Res 2008; 35(11): 3644–3656.
30. Rajakumar S, Arunachalam VP and Selladurai V. Workflow balancing in parallel machines through genetic algorithm. Int J Adv Manuf Tech 2007; 33(11–12): 1212–1221.
31. Chaudhry IA and Drake PR. Minimizing total tardiness for the machine scheduling and worker assignment problems in identical parallel machines using genetic algorithms. Int J Adv Manuf Tech 2009; 42(5–6): 581–594.
32. Koulamas C. Decomposition and hybrid simulated annealing heuristics for the parallel machine total tardiness problem. Nav Res Log 1997; 44(1): 109–125.
33. Kim CO and Shin HJ. Scheduling jobs on parallel machines: a restricted tabu search approach. Int J Adv Manuf Tech 2003; 22(3–4): 278–287.
34. Bilge Ü, Kiraç F, Kurtulan M, et al. A tabu search algorithm for parallel machine total tardiness problem. Comput Oper Res 2004; 31(3): 397–414.
35. Anghinolfi D and Paolucci M. Parallel machine total tardiness scheduling with a new hybrid metaheuristic approach. Comput Oper Res 2007; 34(11): 3471–3490.
36. Kurz ME and Askin RG. Scheduling flexible flow lines with sequence-dependent setup times. Eur J Oper Res 2004; 159(1): 66–82.
37. Kennedy J and Eberhart RC. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Piscataway, NJ, 27 November–1 December 1995, pp.1942–1948. New York: IEEE.
38. Liao CJ, Tseng CT and Luarn P. A discrete version of particle swarm optimization for flowshop scheduling problems. Comput Oper Res 2007; 34(10): 3099–3111.
39. Liu B, Wang L and Jin YH. An effective hybrid PSO-based algorithm for flow shop scheduling with limited buffers. Comput Oper Res 2008; 35(9): 2791–2806.
40. Niu Q, Zhou T and Wang L. A hybrid particle swarm optimization for parallel machine total tardiness scheduling. Int J Adv Manuf Tech 2010; 49: 723–739.
41. Geem ZW, Kim JH and Loganathan GV. Harmony search optimization: application to pipe network design. Int J Model Simulat 2002; 22(2): 125–133.
42. Fesanghary M, Mahdavi M, Minary-Jolandan M, et al. Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Comput Method Appl M 2008; 197: 3080–3091.
43. Kang SL and Geem ZW. A new structural optimization method based on the harmony search algorithm. Comput Struct 2004; 82(9–10): 781–798.
44. Sivasubramani S and Swarup KS. Environmental/economic dispatch using multi-objective harmony search algorithm. Electr Pow Syst Res 2011; 81: 1778–1785.
45. Zhang L, Gao L and Li X. A hybrid genetic algorithm and tabu search for a multi-objective dynamic job shop scheduling problem. Int J Prod Res 2013; 51(12): 3516–3531.
46. Renna P. Job shop scheduling by pheromone approach in a dynamic environment. Int J Comp Integ M 2010; 23(5): 412–424.
47. Rui Z, Shilong W, Zheqi Z, et al. An ant colony algorithm for job shop scheduling problem with tool flow. Proc IMechE, Part B: J Engineering Manufacture 2014; 228(8): 959–968.

48. Kim HW and Lee DH. Heuristic algorithms for re-entrant hybrid flow shop scheduling with unrelated parallel machines. Proc IMechE, Part B: J Engineering Manufacture 2009; 223(4): 433–442.
49. Marimuthu S, Ponnambalam SG and Jawahar N. Tabu search and simulated annealing algorithms for scheduling in flow shops with lot streaming. Proc IMechE, Part B: J Engineering Manufacture 2007; 221(2): 317–331.
50. Pan QK, Wang L and Qian B. A novel multi-objective particle swarm optimization algorithm for no-wait flow shop scheduling problems. Proc IMechE, Part B: J Engineering Manufacture 2008; 222(4): 519–539.
51. Rossi A and Dini G. An evolutionary approach to complex job-shop and flexible manufacturing system scheduling. Proc IMechE, Part B: J Engineering Manufacture 2001; 215(2): 233–245.
52. Wang W and Brunn P. An effective genetic algorithm for job shop scheduling. Proc IMechE, Part B: J Engineering Manufacture 2000; 214(4): 293–300.
53. Fisher ML. A dual algorithm for the one-machine scheduling problem. Math Program 1976; 11: 229–251.
