
Computers Ops Res. Vol. 25, No. 2, pp. 99-111, 1998
Pergamon
PII: S0305-0548(97)00046-4
© 1998 Elsevier Science Ltd
All rights reserved. Printed in Great Britain
0305-0548/98 $19.00 + 0.00

A HEURISTIC-BASED GENETIC ALGORITHM FOR WORKLOAD SMOOTHING IN ASSEMBLY LINES

Yong Ju Kim‡, Yeo Keun Kim†§ and Yongkyun Cho¶


1 Department of Industrial Engineering, Chonnam National University, Kwangju, 500-757, Republic of Korea
2 Department of Computer Science, Chodang University, Muan-Kun, Chollanamdo, 534-800, Republic of Korea

(Received 9 June 1997)

Scope and Purpose—This paper considers an assembly line balancing problem with the objective of distributing
the workload evenly to the workstations on an assembly line. Since assembly line balancing is an NP-hard
combinatorial optimization problem, heuristic methods are needed to solve large-scale problems. A
stochastic search technique known as the genetic algorithm has proven effective in many combinatorial
problems. This paper presents a new genetic algorithm for solving the problem. The genetic algorithm
incorporates problem-specific information and heuristics in order to cope with the hard constraints of the
problem and to facilitate the search process for high quality solutions. Efficient heuristic-based genetic operators
for the problem are developed. Extensive computational experiments are performed on various test-bed problems
to demonstrate the effects of problem-specific heuristic information on the performance of our genetic algorithm
and to verify the efficacy of our algorithm.

Abstract—Workload smoothing in assembly lines has many beneficial features: it establishes a sense of equity
among workers and, more importantly, contributes to increasing the output. Although assembly line balancing
has been studied extensively, workload smoothing as the objective has been relatively neglected in the literature.
This study presents a new heuristic procedure based on a genetic algorithm to balance an assembly line with the
objective of maximizing workload smoothness. To improve the capability of searching for good solutions, our
genetic algorithm puts emphasis on the utilization of problem-specific information and heuristics in the design
of the representation scheme and genetic operators. Extensive computational experiments are performed for the
algorithm. The advantages of incorporating problem-specific heuristic information into the algorithm are
demonstrated. The performance of our genetic algorithm is compared with three existing heuristics and with an
existing genetic algorithm. The experimental results show that our algorithm outperforms the existing
heuristics and the compared genetic algorithm. In many cases, our algorithm also improves cycle time. © 1998
Elsevier Science Ltd. All rights reserved

1. INTRODUCTION

This study presents a new heuristic procedure to solve an assembly line balancing problem with the
objective of distributing the workload as evenly as possible among the workstations. Although assembly
line balancing has been studied extensively, workload smoothing as the objective has been relatively
neglected in the literature. Our procedure differs from conventional ones in that it is based on a stochastic
search technique known as the genetic algorithm. Since genetic algorithms have been proven effective
in many combinatorial optimization problems, it seems natural to apply the approach to line balancing
problems. Genetic algorithms, in their original form, are known to have some difficulties when the
problem contains hard constraints, such as the precedence relations among the tasks in assembly line
balancing. Various techniques have been proposed to deal with such difficulties [1,2]. For this study, we
take the approach of utilizing problem-specific information and heuristics in order to improve the
capability of searching for good solutions. We performed extensive experiments to verify the efficacy of the
proposed algorithm. As will be shown in later sections, our algorithm outperforms existing heuristic

† To whom all correspondence should be addressed. Tel.: 00 82 62 520 7023; fax: 00 82 62 523 5506; e-mail:
kimyk@chonnam.chonnam.ac.kr.
‡ Yong Ju Kim received his B.S., M.S. and Ph.D. degrees in Industrial Engineering from Chonnam National University at Kwangju,
Korea. His research interest is combinatorial optimization including the application of genetic algorithms and tabu search
techniques.
§ Yeo Keun Kim is a professor in the Department of Industrial Engineering at Chonnam National University at Kwangju, Korea.
He received his doctoral degree in Operations Research from Seoul National University at Seoul, Korea. His current research
interests are in the area of heuristic combinatorial optimization including the applications of genetic algorithms, tabu search,
and simulated annealing to production scheduling, assembly line balancing, and sequencing problems.
¶ Yongkyun Cho received his B.S. in electronic engineering from Seoul National University. He attended the Ph.D. program in
computer science at the University of Southern California, Los Angeles. He is currently a full-time lecturer in the Department of
Computer Science at Chodang University, Muan-Kun, Chollanamdo, Korea. His research interests are genetic algorithms and
the application of logic in computer science.


procedures.
The remainder of this paper is organized as follows. In Section 2, we briefly overview the line
balancing problem and present a short introduction to genetic algorithms. The proposed algorithm is
discussed in detail in Section 3. Experimental results are reported in Section 4. Concluding remarks are
given in Section 5.

2. BACKGROUND
In this section, we present a brief survey of existing heuristic procedures for assembly line balancing
problems. It is followed by a description of the problem considered in this study. Then, we provide an
overview of genetic algorithms with an emphasis on the use of heuristic information.

2.1. Assembly line balancing problem for workload smoothing (ALBPS)


An assembly line is specified by a finite set of tasks, the processing time for each task, and precedence
relations which define the permissible orderings of the tasks [3]. An assembly line balancing problem
involves assigning the tasks to the workstations on a line while optimizing some objective(s) without
violating the precedence relations imposed on the problem. Since assembly line balancing is an NP-hard
combinatorial optimization problem [4], there have been extensive efforts to develop heuristic methods
for solving large-scale problems [3,5]. Most research so far has focused either on minimizing the
number of workstations for a given cycle time (called the Type-I problem) or on minimizing the cycle time
for a given number of workstations (called the Type-II problem).
Several works [6,7] pointed out that workload smoothing in assembly lines (i.e., assigning nearly
identical workload to workstations) has some beneficial features: it certainly establishes the sense of
equity among workers, and, more importantly, contributes to increasing the output. Most heuristic
procedures for Type-I problems are greedy in the sense that they assign as much work as possible to the
early workstations so that the number of workstations is reduced. Consequently, they tend to underload
later workstations. Since the procedures designed for Type-II problems usually solve the problems as a
series of Type-I problems, the same difficulty remains.
However, the heuristic proposed by Gehrlein and Patterson [8], which is a modification of the Type-I
procedure by Hoffmann [9], is shown to improve the workload smoothness for certain classes of line
balancing problems. Hoffmann's original procedure is an enumeration method: the first workstation
is loaded by a subset of available tasks which makes the workload maximum among all feasible
combinations of tasks; the procedure is repeated for the next workstations until all tasks are assigned.
Since the complete enumeration in each workstation assignment imposes heavy computational burden,
Gehrlein and Patterson [8] modified Hoffmann's procedure as follows: curtail the search for the set of tasks
to be assigned to a workstation so that the idle time at the workstation falls within an allowed value
determined as follows:

idle time ≤ cycle time − (total workload / theoretical minimum number of workstations).

It is reported that this modification contributes not only to reducing the computation time but also to
smoothing the workload among workstations.
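For illustration, the bound above can be computed as a short sketch. The task times are hypothetical, and the theoretical minimum number of workstations is assumed to be the usual lower bound ⌈total workload / cycle time⌉, which is not stated explicitly in the text:

```python
import math

def allowed_idle(times, cycle_time):
    """Allowed idle time per workstation in the Gehrlein-Patterson
    modification: cycle time minus the average workload an ideally
    balanced line would place on each station."""
    total = sum(times)
    # Assumed lower bound on the number of workstations.
    n_min = math.ceil(total / cycle_time)
    return cycle_time - total / n_min

# Hypothetical instance: total workload 48, cycle time 18 -> n_min = 3.
print(allowed_idle([4, 5, 6, 7, 3, 8, 5, 2, 8], 18))  # 2.0
```

Any workstation whose idle time exceeds this value is pruned from the enumeration, which both shortens the search and discourages severely underloaded stations.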
The only heuristic procedure known to us which explicitly has workload smoothing as its objective is
due to Rachamadugu and Talbot [6]. Given an assignment of tasks to workstations, the procedure tries
to improve the workload smoothness by iteratively reassigning the tasks across the workstations based
on a simple heuristic. The procedure is used as a postprocessor to the solutions to either Type-I or Type-II
problems.
In this paper, we consider an assembly line balancing problem to optimize the workload smoothness
when the number of workstations is given. We only address the case of single model assembly line with
deterministic task times. To evaluate workload smoothness, we use the same measure as in Rachamadugu
and Talbot [6], the mean absolute deviation (MAD) of workloads. More precisely,

MAD = (1/n) Σ_{j=1}^{n} |T_j − T̄|,

where
m: number of tasks,
n: number of workstations,
I: set of tasks (I = {1, 2, ..., i, ..., m}),
J: set of workstations (J = {1, 2, ..., j, ..., n}),
I_j: subset of tasks assigned to workstation j, j ∈ J,
t_i: processing time for task i, i ∈ I,
T_j = Σ_{i∈I_j} t_i (i.e., workload of workstation j, j ∈ J),
T̄ = (1/n) Σ_{i∈I} t_i (i.e., mean workload).
Other notation will be defined as needed.
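As a concrete illustration of the definitions above, MAD can be computed directly from a station assignment; the nine task times below are hypothetical, not data from a test problem:

```python
def mad(assignment, times, n):
    """Mean absolute deviation of workstation workloads.

    assignment[i] is the workstation (1..n) of task i+1; times[i] = t_{i+1}.
    """
    loads = [0.0] * n
    for station, t in zip(assignment, times):
        loads[station - 1] += t
    mean = sum(times) / n                       # mean workload T-bar
    return sum(abs(T - mean) for T in loads) / n

# Hypothetical 9-task instance with mean workload 48/3 = 16.
times = [4, 5, 6, 7, 3, 8, 5, 2, 8]
print(mad([2, 1, 3, 2, 1, 3, 2, 3, 3], times, 3))  # 16/3 ≈ 5.33
```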

2.2. Genetic algorithms


Genetic algorithms are stochastic procedures which imitate the biological evolutionary process of
genetic inheritance and the survival of the fittest [2,10]. A genetic algorithm is an iterative procedure.
During each iteration, a finite set of individuals, called a population, is maintained. Each individual
represents a potential solution to the problem. The fitness of each individual is measured according to an
evaluation function (evaluation step). Then, a new population is formed by selecting the more fit
individuals (selection step). Some members of the new population are altered by applying genetic
operators (recombination step). The above process is repeated until some termination criteria are met.
To design a genetic algorithm for a particular problem, the following components must be
determined:
(1) a genetic representation for potential solutions to the problem;
(2) an evaluation function to compute the fitness of each solution;
(3) a selection scheme to generate a new population;
(4) genetic operators to alter the individuals; and
(5) various genetic parameters to control the evolutionary process.
The representation is an encoding of potential solutions to the problem. The evaluation function reflects
the objective of the optimization. Two kinds of genetic operators are used in most genetic algorithms:
crossover operators which combine genetic material from two individuals, and mutation operators which
randomly alter some composition of an individual. These operators, in conjunction with the
representation schemes, greatly influence the performance of genetic algorithms. The genetic parameters
are experimentally determined to fine-tune the evolutionary process. A genetic algorithm should strike a
balance between the exploitation of good individuals to facilitate the search, and the exploration of the
search space to prevent a premature convergence to local optima.
Genetic algorithms, in their original form, do not utilize problem-specific information as in
conventional optimization techniques. For some combinatorial optimization problems, especially those
with hard constraints, genetic algorithms show poor performance [1,11]. This is attributed to the blindness
of the search process in genetic algorithms (i.e., their inability to utilize problem-specific heuristic
information), although that blindness makes genetic algorithms powerful in exploring the search
space and enhances their applicability. Extensive research has been in progress to find ways to modify
genetic algorithms in this direction.
For many combinatorial optimization problems, it has been noted that the choice of a representation
scheme natural to the problem and of genetic operators incorporating problem-specific heuristics is
critical to the performance [12-14]. In particular, when there exist non-trivial constraints on the solutions,
such heuristics are also used to adapt the solutions so as to maintain their feasibility and to improve
solution quality [1]. Suh and van Gucht [12] suggest a technique of blending a genetic algorithm with a
more standard optimization algorithm to combine the advantages of both.
An ALBPS can be viewed as a problem of partitioning a set of objects (tasks) into a given number of
groups (workstations). Many genetic algorithms have been proposed to solve various partitioning problems
[11,15-17]. Since the objective of ALBPS is to make the workload of each workstation as equal as
possible, ALBPS is similar to the equal piles problem [15,17]. However, there is an essential difference
between the two problems. In ALBPS, the workstations are given as a sequence and the tasks must be
assigned in accordance with the precedence relations, whereas the groups in the equal piles problem are
sets and there is no such constraint among objects. These hard constraints for ALBPS make it necessary

to use the problem-specific information extensively in our genetic algorithm. As will be seen in the next
section, such information is incorporated in the representation scheme as well as in the genetic
operators.
The application of genetic algorithms to assembly line balancing has not been widely studied so far.
Anderson and Ferris [18] proposed a genetic algorithm for Type-II problems, and Leu et al. [19] presented
a genetic algorithm to solve Type-I problems with multiple objectives. Both studies differ from ours not
only in the problem considered, but also in the use of problem-specific information. A genetic algorithm
for workload smoothing was presented by Kim et al. [20]. However, they employed a different
representation from ours in this study and did not take advantage of problem-specific heuristics.

3. A HEURISTIC-BASED GENETIC ALGORITHM FOR ALBPS
In this section, we describe the proposed genetic algorithm for ALBPS, called the heuristic-based genetic
algorithm (HGA). As previously mentioned, we place emphasis on the utilization of problem-specific
information in the design of the representation scheme and genetic operators. Major components of HGA are
explained in some detail, followed by a schematic description of the overall procedure of HGA.

3.1. Representation
In a genetic algorithm, each individual is an encoding of a potential solution. For ALBPS, it represents
an assignment of the tasks to a given number of workstations. In addition, it must satisfy the precedence
relations among the tasks. Several representation schemes have been proposed for the partitioning
problems [15]. However, not all of them are suitable for ALBPS. In the permutation encoding, for
example, an individual is a permutation of the tasks arranged in the order of the workstations. Under this
scheme, several distinct individuals may result in an identical assignment, since different orderings of the
tasks in a workstation have no significance in ALBPS. Moreover, the number of workstations and the
workload for each workstation can be determined only after some decoding step. Thus, it would be
difficult to find good genetic operators for ALBPS, in which the number of workstations is fixed and the
smoothness, a property of workstations, is the objective. A different encoding scheme is proposed by
Falkenauer [11,17] to eliminate the redundancy in the representation and to employ genetic operators
adapted to the partitioning problems.
For HGA, we use a straightforward encoding scheme, called group-number encoding [15]. An
individual is a string of length m (the number of tasks), each element of which is an integer between 1
and n (the number of workstations). If the ith element is j, this represents that task i is assigned to
workstation j.
Example. Assume that the number of tasks is 9, the number of workstations is 3, and the precedence
relations are given as in Fig. 1.
Then the individuals p1 = (2 1 3 2 1 3 2 3 3) and p2 = (1 1 1 1 2 2 3 3 3) represent the following
assignments: for individual p1, tasks {2, 5}, {1, 4, 7}, and {3, 6, 8, 9} are allotted to workstations 1, 2,
and 3, respectively; for individual p2, tasks {1, 2, 3, 4}, {5, 6}, and {7, 8, 9} are allotted to workstations
1, 2, and 3, respectively.
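Decoding this representation is direct. A minimal sketch, using the individuals of the example, recovers the task set of each workstation:

```python
def decode(individual, n):
    """Decode a group-number string into the task sets of the workstations.

    individual[i] is the workstation (1..n) of task i+1 (tasks numbered from 1).
    """
    stations = {j: [] for j in range(1, n + 1)}
    for task, station in enumerate(individual, start=1):
        stations[station].append(task)
    return stations

p1 = (2, 1, 3, 2, 1, 3, 2, 3, 3)
print(decode(p1, 3))  # {1: [2, 5], 2: [1, 4, 7], 3: [3, 6, 8, 9]}
```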
For the partitioning problems, the group-number encoding does not guarantee a unique representation
for a particular solution, since group numbers are used simply to distinguish two groups [11,15]. Also,
renumbering is necessary when some groups are created or deleted as the result of genetic operations.
These objections do not apply in the case of ALBPS, since a group number identifies a specific workstation
in a sequence and the number of workstations is fixed. However, we agree with Falkenauer [11] in that

Fig. 1. An example.

Fig. 2. Two assignments with the same MAD values.

genetic operations should be based on the information contained in the grouping. As will be explained
below, the units of genetic operations in HGA are groups (workstations), so that groups with desirable
qualities have a higher probability of being preserved.

3.2. Evaluation function


An evaluation function determines the measure of fitness for potential solutions. This measure is used
to select the more fit individuals for the next generation so that they have a chance to pass their good
characteristics on to offspring. In ALBPS, the objective is to distribute the workload as equally as
possible among workstations, as measured by the MAD defined in Section 2. However, as indicated by
Falkenauer [17], there is a subtle point to consider. See, for example, the two workload assignments in Fig.
2 with the same MAD value. Assignment (b) has one workstation (workstation 2) whose workload is
exactly equal to the mean workload, whereas assignment (a) has none. In HGA, the genetic operations
are performed on a workstation basis and the good characteristics are workstations with workloads close
to the average. Thus, workstation 2 in (b) is the sought-for good characteristic to preserve.
Therefore, assignment (b) is preferable to (a), and should be given higher fitness. We adopt the
following evaluation function for HGA:

eval = (1/n) Σ_{j=1}^{n} √|T_j − T̄|.

Note that eval faithfully reflects the objective function and, at the same time, has more resolving power.
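The resolving property can be checked numerically. The sketch below assumes a square-root (concave) transform of the station deviations, which is one choice with the behaviour argued above; any concave transform would illustrate the same point:

```python
import math

def eval_fitness(loads, mean):
    """A concave deviation measure (illustrative choice; lower is better).

    Unlike plain MAD, it favours assignments in which some stations hit
    the mean workload exactly, as argued in the text."""
    return sum(math.sqrt(abs(T - mean)) for T in loads) / len(loads)

# Fig. 2-style comparison: equal MAD, but (b) has a station exactly at the mean.
a = [15, 15, 17, 17]   # deviations (1, 1, 1, 1), MAD = 1
b = [14, 16, 17, 17]   # deviations (2, 0, 1, 1), MAD = 1
print(eval_fitness(a, 16) > eval_fitness(b, 16))  # True: (b) scores better
```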

3.3. Selection
A selection scheme should provide a balance between two factors: population diversity and selective
pressure. If selective pressure is too great, the population diversity decreases and this may result in a
premature convergence. Weak selective pressure makes the search ineffective. Among many selection
techniques, we use the tournament selection employed in messy genetic algorithms [21]. Given a
population, we first form a random permutation of the whole population. The permutation is divided into
blocks of k (called the tournament size) individuals. From each block, we select the best individual
(determined by fitness) as a candidate to be included in the next generation. If all the blocks are
exhausted, a new permutation is formed. The process continues until the desired population size is
obtained.
The selected candidates undergo the genetic operations explained later. After the genetic operators are
applied, the candidates are evaluated again. To finally determine the population for the next generation,
we use an additional elitist strategy: if the best individual in the current generation is better than all
candidates, it is included in the next generation, replacing the worst candidate. Thus we guarantee that the
best solution obtained during the iterations is preserved without being accidentally destroyed by the
genetic operations.
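The block-tournament scheme described above can be sketched as follows; the individuals and fitness function are stand-ins, and the elitist replacement is left to the main loop:

```python
import random

def tournament_select(population, fitness, k):
    """Block tournament selection: shuffle the population, split it into
    blocks of k, keep the best (lowest fitness) of each block, and repeat
    with fresh permutations until the population size is reached."""
    selected = []
    while len(selected) < len(population):
        perm = random.sample(population, len(population))
        for i in range(0, len(perm) - k + 1, k):
            block = perm[i:i + k]
            selected.append(min(block, key=fitness))
            if len(selected) == len(population):
                break
    return selected

pop = list(range(20))                       # stand-in individuals
best = tournament_select(pop, lambda x: x, 2)
print(len(best))  # 20
```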

3.4. Heuristic structural crossover (HSX) and heuristic structural mutation (HSM)
The most important role of crossover operators is to transmit good characteristics (i.e., workstations
with workloads close to the average) from parents to offspring. Therefore, the crossover operator for
ALBPS must effectively identify and transmit such workstations. A number of crossover operators for
partitioning problems have been proposed that treat the groups as the basic units of operation [11,16,17]. The
structural crossover (SX) proposed by von Laszewski [16], for example, randomly chooses a group from

one of the two parents and copies it to an offspring. The remaining groups are copied from the other
parent. Since this copying process usually violates the requirements of the problem, the offspring
undergoes an adaptation step to maintain feasibility. ALBPS can be viewed as a partitioning problem
with constraints, and SX uses the same representation scheme. Therefore, the crossover operator
developed here for HGA, called heuristic structural crossover (HSX), is similar to SX. However, there
are major differences:
(1) HSX chooses and copies multiple groups (workstations), instead of one group in SX;
(2) HSX uses problem-specific information in choosing the groups to be copied, whereas the choice
in SX is at random; and
(3) The adaptation step of HSX is, of course, different from that of SX, due to the problem-specific
constraints of ALBPS.
To explain HSX in detail, we need some notation:
T_j(p): workload of workstation j, j ∈ J, in assignment p,
D_j(p) = |T_j(p) − T̄|, j ∈ J (i.e., absolute workload deviation for workstation j in assignment p),
B_j(p1,p2) = D_j(p1) − D_j(p2), j ∈ J (i.e., difference of the absolute workload deviations for workstation
j in assignments p1 and p2).
Let p1 and p2 be the two parents selected for crossover. The procedure for HSX is as follows:
Step 1: Calculate B_j(p1,p2) for j = 1, ..., n. Sort J in the decreasing order of B_j(p1,p2).
Step 2: Form a subset Q(p1,p2) of workstations by choosing the first Nx (< n/2) workstations from J.
Step 3: Generate an offspring o1 by assigning a workstation to each task i, i = 1, ..., m, as follows:
(a) if task i was assigned in p2 to a workstation in Q(p1,p2), then assign it in o1 to the same
workstation as in p2;
(b) otherwise, assign it in o1 to the same workstation as in p1.
Step 4: Among the tasks in o1 whose assignments are copied from p1, mark
(a) the tasks assigned to the workstations in Q(p1,p2), and
(b) the tasks in violation of the precedence relations.
Step 5: Reassign the marked tasks in o1 using the adaptation step described in Section 3.5.
Another offspring o2 is produced by reversing the roles of p1 and p2. In Step 2, Q(p1,p2) consists of Nx
workstations for which assignment p2 is better than assignment p1, since their workloads are closer
to the average. Those workstations with good characteristics are preserved in the offspring o1 (Step 3(a)),
while the remaining workstations are inherited from p1 (Step 3(b)). Thus, the offspring o1 is similar to
parent p1, but incorporates some good characteristics from the other parent p2. HSX tries to improve the
current assignment by crossing it with another assignment, without disturbing the current one too much.
The number Nx is one of the control parameters for HGA (see Section 4), and determines how much
crossover should occur. Notice that we use problem-specific information (MAD values) in Steps 1 and
2 of the crossover operation.
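Steps 1-3 can be sketched as below. The task times are back-calculated from the numbers in the worked example of this section (the split t8 = 2, t9 = 3 is an assumption), and Steps 4-5 (adaptation) are omitted, so the offspring may still violate precedence:

```python
def hsx_steps123(p1, p2, times, n, nx):
    """HSX Steps 1-3: find the Nx workstations where p2's workload
    deviation beats p1's, copy those stations' tasks from p2, and
    inherit everything else from p1 (no adaptation)."""
    mean = sum(times) / n

    def dev(p, j):  # absolute workload deviation D_j(p) = |T_j(p) - mean|
        return abs(sum(t for g, t in zip(p, times) if g == j) - mean)

    # Step 1: sort stations by B_j(p1,p2) = D_j(p1) - D_j(p2), decreasing.
    order = sorted(range(1, n + 1),
                   key=lambda j: dev(p1, j) - dev(p2, j), reverse=True)
    q = set(order[:nx])                         # Step 2: Q(p1,p2)
    # Step 3: station from p2 if p2 put the task in Q, else from p1.
    return [b if b in q else a for a, b in zip(p1, p2)]

# Task times consistent with the worked example (t8 + t9 = 5, split assumed).
times = [4, 5, 5, 7, 10, 6, 6, 2, 3]
p1 = [2, 1, 3, 2, 1, 3, 2, 3, 3]
p2 = [1, 1, 1, 1, 2, 2, 3, 3, 3]
print(hsx_steps123(p1, p2, times, 3, 1))  # [2, 1, 3, 2, 2, 2, 2, 3, 3]
```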
The offspring generated in Step 3 do not necessarily satisfy the precedence relations imposed on
ALBPS. Moreover, by copying the assignment p1 in Step 3(b), some tasks may be assigned to a
workstation contained in Q(p1,p2). In this case, the workload for that workstation in o1 will be greater than
the workload in p2, and the goodness of the workstation may have been destroyed. To maintain
feasibility as a solution to ALBPS and to preserve the good quality inherited from the parents, each offspring
undergoes the adaptation in Steps 4 and 5. This step is described in Section 3.5.
Example (continued).
Let Nx = 1 and T̄ = 16.
Step 1:
T_1(p1) = 15, T_2(p1) = 17, and T_3(p1) = 16. D_1(p1) = 1, D_2(p1) = 1, and D_3(p1) = 0.
T_1(p2) = 21, T_2(p2) = 16, and T_3(p2) = 11. D_1(p2) = 5, D_2(p2) = 0, and D_3(p2) = 5.
B_1(p1,p2) = −4, B_2(p1,p2) = 1, and B_3(p1,p2) = −5. After sorting J in the decreasing order of B_j(p1,p2), we
have J = (2, 1, 3).
Step 2: Q(p1,p2) = {2}.
Step 3: Generate o1 as follows.

p1 = (2 1 3 2 1 3 2 3 3)
o1 = (2 1 3 2 2 2 2 3 3)
p2 = (1 1 1 1 2 2 3 3 3)

Step 4:
(a) The tasks 1, 4, and 7 in o1 are marked, because they are assigned to workstation 2, which is
contained in Q(p1,p2).
(b) The task 3 in o1 is marked, because the task is in violation of the precedence relations. Thus, we
have o1 = (* 1 * * 2 2 * 3 3).
Step 5: After the adaptation step (see the example in Section 3.5), we obtain
o1 = (1 1 2 1 2 2 3 3 3).
Another offspring o2 = (1 1 3 1 2 3 2 3 3) is produced by reversing the roles of p1 and p2.
A mutation operator acts on a single parent and produces an offspring by introducing small changes.
The main role of mutation in a genetic algorithm is to ensure the diversity of the potential solutions and
to prevent a premature convergence to local optima. In HGA, we use the heuristic structural mutation
(HSM). For each individual, HSM randomly chooses some tasks according to the mutation rate and marks
those tasks for reassignment. The reassignment is performed by the same adaptation procedure as used
in HSX.

3.5. Adaptation step


As mentioned above, the marked tasks in Step 4 of HSX and in HSM must be reassigned in order to
restore feasibility as a solution to ALBPS and to maintain the good characteristics introduced by the genetic
operations. For this, we employ a greedy heuristic procedure based on the best-fit-decreasing rule. A similar
procedure is used for equal piles problems [17]. The adaptation heuristic for HGA is based on the task
processing times and takes into account the precedence relations. This is another place where problem-
specific information is utilized in HGA. We need the following notation to explain the adaptation
procedure:
R: set of tasks to be reassigned,
NR_i: number of tasks in R immediately preceding task i,
E(i): set of tasks in I − R immediately preceding task i,
L(i): set of tasks in I − R following task i,
T_j': workload of workstation j for the tasks in I − R.
The heuristic procedure for adaptation in HSX and HSM is given as follows:
Step 1: Set R to contain all the marked tasks and calculate T_j', j = 1, 2, ..., n.
Step 2: Calculate NR_i, i ∈ R.
Step 3: Among the tasks in R, choose a task i* such that NR_{i*} = 0. If there is more than one, choose
the task with the largest processing time. If a tie still occurs, break the tie randomly.
Step 4: Calculate E(i*) and L(i*). Find the latest among the workstations to which the tasks in E(i*) are
assigned. If E(i*) is empty, pick the first workstation. Find the earliest among the workstations to which
the tasks in L(i*) are assigned. If L(i*) is empty, pick the last workstation. (These delimit the range of
workstations to which task i* can be assigned without violating the precedence relations.)
Step 5: Among the workstations in the range found in Step 4, determine j* as follows:
(a) if there exist workstations such that T_j' + t_{i*} ≤ T̄, then let j* be the one with the largest T_j';
(b) otherwise, let j* be the workstation with the smallest T_j'.
If a tie occurs either in (a) or in (b), break the tie randomly.
Step 6: Reassign task i* to workstation j*.
Step 7: Remove task i* from R. If R is empty, then stop; otherwise, let T_{j*}' ← T_{j*}' + t_{i*} and go to Step 2.
During the adaptation, a task with the largest processing time among the marked tasks is selected while
not violating the precedence relations (Step 3). The latest and the earliest workstations to which the
selected task can be allocated are found in accordance with the precedence relations (Step 4). To

determine the workstation, a simple heuristic is used so that the added workload fits best without
exceeding the average workload. If such a fit cannot be found, the workstation with the smallest workload
is chosen (Step 5). The task is reassigned to the selected workstation (Step 6). These steps are repeated
until all the marked tasks are reassigned (Step 7).
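The adaptation loop can be sketched as below. The four-task instance and its precedence relations are hypothetical (the data of Fig. 1 are not fully recoverable here), and as a simplification the feasible station range is derived from immediate predecessors and successors only:

```python
def adapt(assignment, marked, times, preds, n):
    """Best-fit-decreasing reassignment of marked tasks (Section 3.5 sketch).

    assignment: 1-based station of task i at index i-1 (marked entries are
    recomputed); preds[i]: immediate predecessors of task i.
    """
    m, mean = len(times), sum(times) / n
    succ = {i: [] for i in range(1, m + 1)}
    for j, ps in preds.items():
        for i in ps:
            succ[i].append(j)
    out = list(assignment)
    load = [0.0] * (n + 1)                 # load[j] = T_j' over unmarked tasks
    R = set(marked)
    for i in range(1, m + 1):
        if i not in R:
            load[out[i - 1]] += times[i - 1]
    while R:
        # Steps 2-3: a marked task with no marked predecessor, longest first.
        ready = [i for i in R if not set(preds.get(i, [])) & R]
        i_star = max(ready, key=lambda i: times[i - 1])
        # Step 4: station range from already-placed immediate neighbours.
        first = max([out[p - 1] for p in preds.get(i_star, []) if p not in R],
                    default=1)
        last = min([out[s - 1] for s in succ[i_star] if s not in R], default=n)
        # Step 5: best fit not exceeding the mean workload, else least loaded.
        fits = [j for j in range(first, last + 1)
                if load[j] + times[i_star - 1] <= mean]
        j_star = (max(fits, key=lambda j: load[j]) if fits else
                  min(range(first, last + 1), key=lambda j: load[j]))
        out[i_star - 1] = j_star           # Step 6
        load[j_star] += times[i_star - 1]  # Step 7
        R.remove(i_star)
    return out

# Hypothetical 4-task instance: 1 precedes 2 and 3; 2 and 3 precede 4.
print(adapt([1, 1, 1, 2], marked={3}, times=[4, 3, 2, 5],
            preds={2: [1], 3: [1], 4: [2, 3]}, n=2))  # [1, 1, 2, 2]
```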
Example (continued).
o1 = (* 1 * * 2 2 * 3 3).
Step 1: R = {1, 3, 4, 7}. T_1' = 5, T_2' = 16, and T_3' = 5.
Step 2: NR_1 = 0, NR_3 = 1, NR_4 = 1, and NR_7 = 1.
Step 3: Set i* = 1 since NR_1 = 0.
Step 4: E(1) = {} and L(1) = {6, 8, 9}. E(1) is empty, and tasks 6, 8, and 9 in L(1) are assigned to
workstations 2, 3, and 3, respectively. Therefore, task 1 can be assigned to a workstation ranging from
1 to 2 without violating the precedence relations.
Step 5: T_1' + t_1 = 5 + 4 = 9 and T_2' + t_1 = 16 + 4 = 20. Since T_1' + t_1 ≤ T̄ = 16, set j* = 1 from Step 5(a).
Step 6: Obtain o1 = (1 1 * * 2 2 * 3 3) by assigning task 1 to workstation 1.
Step 7: Remove task 1 from R. We have R = {3, 4, 7} and T_1' = 9.
Step 2: NR_3 = 0, NR_4 = 0, and NR_7 = 1.
Step 3: Since NR_3 = NR_4 = 0 and task 4 has a greater processing time than task 3, set i* = 4.
Step 4: E(4) = {1} and L(4) = {8, 9}. Task 1 in E(4) is assigned to workstation 1. Both tasks 8
and 9 in L(4) are assigned to workstation 3. Thus, task 4 can be assigned to the workstations from 1
to 3 without violating the precedence relations.
Step 5: T_1' + t_4 = 16, T_2' + t_4 = 23, and T_3' + t_4 = 12. Set j* = 1 since both T_1' + t_4 and T_3' + t_4 are
not greater than T̄ and T_1' > T_3'.
Step 6: Obtain o1 = (1 1 * 1 2 2 * 3 3) by assigning task 4 to workstation 1.
Step 7: Remove task 4 from R. We have R = {3, 7} and T_1' = 16.
This procedure is repeated until R is empty. Finally, o1 = (1 1 2 1 2 2 3 3 3) is produced.

3.6. The overall procedure of HGA


Having described the various components of HGA, we are now ready to present an overall schematic of
HGA. We need additional notation:
P(t): population in the tth generation (t = 0, 1, 2, ...),
Np: population size,
Pc: crossover rate,
Pm: mutation rate.
Step 1: (Initial population) Let t = 0. Form the initial population P(0) by randomly generating Np
individuals (i.e., assignments) satisfying the precedence relations.
Step 2: (Evaluation) Evaluate each individual in P(0) using eval in Section 3.2.
Step 3: (Selection) Select Np individuals from P(t) as in Section 3.3. Copy them to P(t+1).
Step 4: (Crossover) Randomly choose 0.5 × Pc × Np pairs of parents in P(t+1) for crossover.
Apply HSX to each pair of parents to obtain Pc × Np offspring. Replace the parents with the
offspring in P(t+1).
Step 5: (Mutation) For each individual in P(t+1), randomly choose Pm × Np × n elements (i.e., tasks)
for mutation. Apply HSM to the chosen elements.
Step 6: (Evaluation) Evaluate each individual in P(t+1) using eval.
Step 7: (Elitist strategy) If the best individual in P(t) is better than the best individual in P(t+1), replace
the worst individual in P(t+1) with the best in P(t). P(t+1) is the population for the next generation.
Step 8: Let t ← t + 1. If the stopping criteria are met, then stop. Otherwise, go to Step 3.
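Steps 1-8 can be sketched generically as below. The operators are deliberately naive stand-ins (random genes, one-point recombination, binary tournaments) chosen only to make the skeleton runnable; the toy demo also ignores precedence feasibility, unlike HGA proper:

```python
import random

def hga_loop(init, evaluate, select, crossover, mutate, np_, pc, max_stall=2000):
    """Schematic of Steps 1-8: evaluate, select, recombine, mutate, and
    apply the elitist replacement, stopping after max_stall individuals
    are produced without improvement (lower eval is better)."""
    pop = [init() for _ in range(np_)]                  # Step 1
    best, stall = min(pop, key=evaluate), 0             # Step 2
    while stall < max_stall:                            # Step 8
        pop = select(pop)                               # Step 3
        random.shuffle(pop)
        for i in range(0, int(pc * np_) // 2 * 2, 2):   # Step 4
            pop[i], pop[i + 1] = crossover(pop[i], pop[i + 1])
        pop = [mutate(p) for p in pop]                  # Step 5
        stall += np_
        gen_best = min(pop, key=evaluate)               # Step 6
        if evaluate(gen_best) < evaluate(best):
            best, stall = gen_best, 0
        else:                                           # Step 7: elitism
            worst = max(range(np_), key=lambda i: evaluate(pop[i]))
            pop[worst] = best
    return best

# Toy demo: 9 tasks, 3 stations, minimize the deviation sum (no precedence).
times, n = [4, 5, 6, 7, 3, 8, 5, 2, 8], 3
def ev(p):
    loads = [sum(t for g, t in zip(p, times) if g == j) for j in (1, 2, 3)]
    return sum(abs(T - 16) for T in loads)
sol = hga_loop(init=lambda: [random.randint(1, n) for _ in times],
               evaluate=ev,
               select=lambda pop: [min(random.sample(pop, 2), key=ev)
                                   for _ in pop],
               crossover=lambda a, b: (a[:4] + b[4:], b[:4] + a[4:]),
               mutate=lambda p: [random.randint(1, n) if random.random() < 0.05
                                 else g for g in p],
               np_=30, pc=0.6)
print(len(sol))  # 9
```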

4. EXPERIMENTS AND ANALYSIS


The experiments are carried out on five test-bed problems: the 45-task problem in Kilbridge and Wester
[22], the 70-task problem in Tonge [23], the 83- and 111-task problems in Arcus [24], and the 148-task
problem in Bartholdi [25]. The first four problems are known to be challenging among assembly line
balancing problems [26], and Bartholdi's problem has the largest number of tasks in the literature known
to us. We modify the processing times in some of the problems, since some values are much larger than
the other processing times and impose a limit on the

number of workstations: the processing time of task 18 in Tonge's problem is changed from 319 to 143,
and those of tasks 79 and 108 in Bartholdi's problem from 2.81 to 1.11 and from 3.83 to 0.43,
respectively. For Bartholdi's problem, the processing times are multiplied by 100 and the restrictions that
some tasks be assigned only to one side of the line are ignored, for comparison with the other problems.
The experiments for HGA are repeated 30 times for each problem setting, and the best value obtained
at each run is taken. At each experiment, HGA stops if no improvement is made during the consecutive
productions of 3000 individuals. HGA is coded in C++ and implemented on an IBM-PC with Pentium
CPU at 66 MHz. First, various genetic parameters are empirically determined, and the genetic operators
previously discussed are investigated. Then the performance of HGA is compared with three existing
heuristics and with an existing genetic algorithm. Although we perform extensive experiments,
only the representative results are presented.

4.1. Genetic parameter setting

In HGA, several control parameters, such as population size Np, tournament size k, crossover rate Pc,
mutation rate Pm, and the number of crossover workstations Nx, must be determined. Although many
studies have addressed proper values of these parameters [27], determining genetic parameters
still remains an art rather than a science [2]. Extensive preliminary experiments are carried out to
determine appropriate parameter values. We use a population size Np of 100. The tournament size k is
set to 2. We test crossover rate Pc and mutation rate Pm for the values 0.05, 0.10, 0.15, ..., 0.95. The
best combinations of crossover and mutation rates turn out to be in the ranges 0.70-0.90 and
0.10-0.20, respectively, although there are variations depending on the problems. The mutation rate is
set relatively high compared with that usually used in classical GAs. It should be high because the precedence
relations of ALBPS and the adaptation step in HGA make it difficult for a mutated gene to take a new position
within the individual, so that in many cases genes may not actually be mutated. The number of crossover
workstations Nx controls the amount of information inherited by the offspring from the parents during
crossover. For problems with a small number of tasks and workstations, changes in Nx do not have a
significant effect on the performance of HGA. However, for problems with a large number of tasks
and workstations, HGA becomes sensitive to the value of Nx. In general, values of Nx in the range from
n/4 to n/3 produce good solutions. For larger values of Nx, HGA converges prematurely. Table 1
summarizes the ranges of the control parameters which show good performance. The values in
parentheses are those actually used in the experiments reported here.

Table 1. Proper ranges of control parameters (values in parentheses are those actually used)

Problems               No. of tasks  No. of workstations  Crossover rate  Mutation rate  No. of crossover workstations
Kilbridge and Wester   45            3,4,6,7,10           0.5-0.9 (0.70)  0.20 (0.20)    1-2 (1)
Tonge                  70            7,8,9,10,11          0.6-0.8 (0.70)  0.20 (0.20)    2-3 (2)
                                     21,22                0.7-0.9 (0.80)  0.25 (0.25)    4-5 (5)
Arcus                  83            8,9,10,11            0.8-0.9 (0.85)  0.15 (0.15)    1-3 (2)
                                     12,14,16             0.8-0.9 (0.85)  0.15 (0.15)    2-4 (3)
Arcus                  111           9                    0.8-0.9 (0.85)  0.15 (0.15)    2-3 (2)
                                     15,16,18             0.8-0.9 (0.85)  0.15 (0.15)    4-6 (5)
                                     27,28                0.8-0.9 (0.85)  0.15 (0.15)    7-8 (8)
Bartholdi              148           10,15,20             0.8-0.9 (0.85)  0.10 (0.15)    2-4 (3)
                                     25,30                0.8-0.9 (0.85)  0.10 (0.15)    5-8 (6)

[Figure: MAD (vertical axis) plotted against the number of evaluations (×100, horizontal axis) for the four combinations Hx-Ha, Hx-Ra, Rx-Ha and Rx-Ra.]

(a) Arcus's 83-task problem (n=12)   (b) Arcus's 111-task problem (n=18)

Fig. 3. Effects of heuristic information on the performance of HGA.

4.2. Experiments on the use of heuristic information


As mentioned in previous sections, HGA relies heavily on problem-specific information and heuristic
procedures based on it. To see the efficacy of such an approach, we compare the performance of HGA when
randomness is used instead of heuristic information. First, Step 2 of HSX is modified to randomly choose
the workstations to be crossed over. We denote the original method as Hx and the random one as Rx.
Second, the adaptation step in Section 3.5 is modified to randomly select the workstations to which the
marked tasks are reassigned. We denote the original method as Ha and the random one as Ra. We
compare the MAD values for the four combinations and the typical results are shown in Fig. 3. It is
obvious that the use of heuristic information improves both search ability and solution quality. It is
interesting to note that the use of heuristic information in the adaptation step has far greater impact on
the performance of HGA than in Step 2 of HSX. This indicates that, in the presence of hard constraints,
selection scheme and genetic operators by themselves are not sufficient and that an effective and efficient
adaptation step is critical to the performance of a genetic algorithm.

4.3. Performance comparison

First, the performance of HGA is compared with that of three existing heuristics: the maximum duration
rule (MAXDUR) proposed by Moodie and Young [28], the modified Hoffmann procedure (Hoff-θ) by
Gehrlein and Patterson [8], and Rachamadugu and Talbot's [6] procedure (RT). MAXDUR is known to
have good performance for Type-I problems among single path decision rules. Hoff-θ is known as one
of the best heuristics for Type-I problems. Moreover, Hoff-θ is also reported to improve the workload
smoothness [5]. As mentioned above, MAXDUR and Hoff-θ are originally designed for Type-I problems
and, in ALBPS, the number of workstations is fixed to n. To use MAXDUR and Hoff-θ for ALBPS, we
iteratively solve Type-I problems, incrementing the cycle time by one time unit, until the number of
workstations equals n. For Hoff-θ, we set θ to 2.0, which showed the most improvement in
smoothness [6]. MAXDUR and Hoff-θ are also used to find the initial assignment for RT. A summary
of the results is shown in Table 2. For MAXDUR and Hoff-θ, the MAD values with and without
postprocessing by RT are shown. For HGA, the average, best and worst MAD values, and the
standard deviation over 30 runs, are shown. The improvement rate is computed as 100 × (the best MAD
of heuristics with RT - the average MAD of HGA)/(the best MAD of heuristics with RT). The average
MAD values of HGA are better than those of any of the heuristics except in two cases. The best MAD values of
HGA are the best in all cases. Even the worst MAD values of HGA are the best except in four cases. Overall,
HGA outperforms the three existing heuristics in various problem settings. The computation time for
MAXDUR and Hoff-θ, including the postprocessing by RT, is below 15 s, while HGA takes from 14 to
35 s depending on the problem size. However, the increase in computation time due to the problem size
is moderate.
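The iteration described above can be made concrete with a short sketch. The Python below is illustrative (our naming, integer task times assumed): a simplified single-pass version of the maximum-duration idea stands in for MAXDUR, and the outer loop raises the cycle time one unit at a time until the heuristic needs no more than n workstations.

```python
def maxdur_stations(times, preds, cycle):
    """One pass of a maximum-duration rule: fill workstations in order,
    always assigning the longest precedence-available task that still
    fits within the remaining cycle time."""
    assert cycle >= max(times.values())  # every task must fit an empty station
    unassigned, stations, load = set(times), [[]], 0
    while unassigned:
        avail = [t for t in unassigned
                 if not (set(preds.get(t, [])) & unassigned)  # predecessors done
                 and times[t] <= cycle - load]                # fits this station
        if avail:
            t = max(avail, key=lambda t: times[t])
            stations[-1].append(t)
            load += times[t]
            unassigned.discard(t)
        else:
            stations.append([])  # open the next workstation
            load = 0
    return stations

def min_cycle_for_n(times, preds, n):
    """Iterate the Type-I heuristic, incrementing the cycle time by one
    unit until no more than n workstations are opened."""
    c = max(max(times.values()), -(-sum(times.values()) // n))  # lower bound
    while len(maxdur_stations(times, preds, c)) > n:
        c += 1
    return c

# Hypothetical 5-task instance (preds lists immediate predecessors).
times = {1: 5, 2: 3, 3: 4, 4: 2, 5: 4}
preds = {3: [1], 4: [2], 5: [3, 4]}
print(min_cycle_for_n(times, preds, n=2))  # -> 9
```

Starting from the lower bound max(t_max, ⌈Σt/n⌉) keeps the search short, since no feasible balance can have a smaller cycle time.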

Second, the performance of HGA is also compared with the genetic algorithm proposed by Kim et al. [20].
An impartial comparison between genetic algorithms is difficult since the performance
depends on many genetic components such as representation, genetic operators, genetic parameters, etc.
In the compared genetic algorithm (c-GA), a permutation encoding and a specific decoding suitable for
ALBPS are employed. Partially mapped crossover and reciprocal exchange are used as the genetic

Table 2. Comparison of workload smoothness for HGA and other heuristics

Problems                 No. of         MAXDUR   HOFF-2.0  RT with   RT with    HGA                               Improvement
                         workstations                      MAXDUR    HOFF-2.0   Best     Worst    Mean     s.d.   rate (%)
Kilbridge and Wester's   3              1.67     0.00      1.67      0.00       0.00     0.00     0.00     0.00   0
45-tasks                 4              12.25    0.00      3.25      0.00       0.00     0.00     0.00     0.00   0
                         6              1.83     0.00      1.83      0.00       0.00     0.00     0.00     0.00   0
                         7              2.06     0.24      1.12      0.24       0.24     0.24     0.24     0.00   0
                         10             2.98     0.64      0.98      0.64       0.32     0.36     0.33     0.02   48
Tonge's 70-tasks         7              9.24     1.22      7.49      1.22       0.49     0.78     0.57     0.11   53
                         8              17.50    0.44      12.94     0.44       0.38     0.94     0.59     0.18   -34
                         9              8.33     1.56      6.78      1.56       0.44     1.78     1.56     0.48   0
                         10             6.10     2.00      4.90      2.00       0.20     1.40     0.69     0.37   66
                         11             18.70    2.08      13.10     2.08       0.17     1.02     0.47     0.28   77
                         21             9.29     9.90      6.21      5.39       1.41     2.45     2.00     0.24   63
                         22             6.86     2.40      4.19      2.40       1.83     1.83     1.96     0.10   18
Arcus' 83-tasks          8              718.31   152.38    461.38    152.38     57.16    57.16    57.16    0.00   62
                         9              493.35   88.81     410.11    88.81      25.80    34.05    27.43    2.15   69
                         10             544.46   26.04     506.40    26.04      16.30    23.18    17.83    1.60   32
                         11             568.79   120.51    480.59    120.51     69.49    104.48   88.82    6.36   26
                         12             642.03   382.46    399.47    203.11     130.35   130.35   130.35   0.00   36
                         14             362.57   44.94     362.57    44.94      28.55    66.98    52.02    11.69  -16
                         16             405.00   168.35    405.00    168.35     56.45    107.33   72.22    11.90  57
Arcus' 111-tasks         9              15.36    4.57      5.68      4.57       0.62     7.73     2.44     1.54   47
                         14             475.98   239.10    190.91    148.92     7.84     15.41    9.90     2.23   93
                         15             228.81   230.42    97.25     117.02     7.13     36.85    14.41    6.93   85
                         16             112.67   102.99    22.84     66.69      3.09     21.55    8.48     5.34   63
                         18             105.70   135.37    47.72     93.83      4.19     41.84    16.76    8.71   65
                         27             185.43   238.83    132.92    161.96     54.95    88.42    66.72    9.10   50
                         28             280.34   330.06    168.98    167.76     70.97    94.04    80.95    4.85   52
Bartholdi's 148-tasks    10             18.68    2.64      11.08     2.12       0.32     1.08     0.56     0.20   74
                         15             8.98     0.85      6.08      0.85       0.50     0.62     0.75     0.28   12
                         20             10.55    2.33      4.45      1.77       0.18     0.76     0.43     0.17   76
                         25             8.16     8.61      4.12      2.30       0.49     1.78     0.79     0.26   66
                         30             7.99     2.95      3.81      1.54       0.78     1.36     1.06     0.16   31

[Figure: MAD (vertical axis) plotted against the number of evaluations (×100, horizontal axis) for c-GA and HGA.]

(a) Arcus's 83-task problem (n=12)   (b) Arcus's 111-task problem (n=18)

Fig. 4. Performance comparison of HGA with a compared genetic algorithm (c-GA).



Table 3. Comparison of cycle time for HGA and other heuristics

Problems            No. of          MAXDUR   HOFF-2.0  RT with   RT with    HGA
                    workstations                       MAXDUR    HOFF-2.0   Best     Worst    Mean       s.d.
Tonge's 70-tasks    8               451      439       450       439*       439      444      440.67†    0.94
                    11              333      322       333       322†       320      324      320.43*    0.95
                    21              176      177       176*      177        170      178      176.37†    1.96
Arcus' 83-tasks     9               8632     8528      8383*     8528†      8528     8528     8528.00†   0.00
                    10              7835     7614      7835      7614†      7598     7634     7607.00*   8.79
                    14              5576     5467      5576†     5467*      5467     5761     5590.43    80.37
Arcus' 111-tasks    9               16736    16729     16736     16729*     16724    16785    16730.63†  11.00
                    16              9494     9572      9457†     9572       9424     9602     9449.53*   41.85
                    27              5752     5810      5752*     5810†      5754     6125     6105.67    67.23
                    28              5689     5689      5689*     5689*      5689     5689     5689.00*   0.00

* The best; † the second best.

operators. The operators may produce illegal offspring, so the offspring are repaired. The genetic
parameters used are properly set by preliminary experiments. For more details, refer to Kim et al. [20].
The typical results are shown in Fig. 4. HGA outperforms c-GA. The main reasons for this result may be
two-fold. One is the degree of redundancy (i.e., the number of distinct individuals representing
the identical solution of the original problem) [11], and the other is the utilization of problem-specific
information. In HGA, no redundancy exists and problem-specific information is incorporated, whereas in
c-GA redundancy exists to a significant degree and such information is not used.
Finally, it may be of interest to compare the cycle times of HGA with those of the existing heuristics.
Intuitively, if the smoothness is improved, the cycle time should also be improved. However, the two
objectives are not necessarily identical, since the cycle time is determined by the workstation with the
greatest workload while the smoothness is also affected by the workstations with below-average
workloads. In Table 3, some of the cycle times for HGA and the heuristics are shown. It is somewhat
surprising that the cycle times for HGA are the best in many cases, or at least comparable with those of
the existing heuristics. This suggests that workload smoothness should be considered an important
objective in assembly line balancing.
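A small numerical example makes the distinction concrete. This is an illustrative Python sketch (the load vectors are invented): mad is the mean absolute deviation of workstation loads from the average load, and the cycle time is simply the largest load.

```python
def mad(loads):
    """Mean absolute deviation of workstation loads from the average load."""
    avg = sum(loads) / len(loads)
    return sum(abs(x - avg) for x in loads) / len(loads)

def cycle_time(loads):
    return max(loads)  # the most heavily loaded workstation paces the line

# Two hypothetical 4-station assignments of the same 36 units of work:
a = [10, 10, 10, 6]   # cycle time 10, MAD = 1.5
b = [10, 9, 9, 8]     # cycle time 10, MAD = 0.5
print(cycle_time(a) == cycle_time(b), mad(a), mad(b))  # -> True 1.5 0.5
```

Both assignments have the same cycle time, yet the second is markedly smoother, which is exactly why optimizing smoothness can, but need not, reduce the cycle time.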

5. CONCLUSION
We have presented a heuristic-based genetic algorithm to solve an assembly line balancing problem
with workload smoothness as the objective. We have introduced various modifications in the genetic
representation, the evaluation function, and, in particular, the genetic operators. The main emphasis is to
utilize problem-specific information and heuristics, in order to cope with the hard constraints of the
problem and to facilitate the search process for high quality solutions. We have reported a summary of
extensive computational experiments for the algorithm. Incorporating problem-specific heuristic
information significantly improves the search capability for good solutions. In particular, the use of an
efficient heuristic in the adaptation step is crucial for the performance of the algorithm. We have
compared our algorithm with three existing heuristics and with an existing genetic algorithm. The
experimental results show that our algorithm outperforms the existing heuristics and the algorithm
compared in optimizing the workload smoothness. In many cases, our algorithm also improves cycle
time.

REFERENCES
1. Eiben, A. E., Raué, P.-E. and Ruttkay, Zs., How to apply genetic algorithms to constrained problems. In Practical Handbook of Genetic Algorithms, Vol. 1: Applications, ed. L. Chambers. CRC Press, Boca Raton, 1995, pp. 307-365.
2. Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed. Springer-Verlag, Berlin, 1994.
3. Ghosh, S. and Gagnon, R. J., A comprehensive literature review and analysis of the design, balancing and scheduling of assembly systems. International Journal of Production Research, 1989, 27, 637-670.
4. Gutjahr, A. L. and Nemhauser, G. L., An algorithm for the line balancing problem. Management Science, 1964, 11, 308-315.
5. Talbot, F. B., Patterson, J. H. and Gehrlein, W. V., A comparative evaluation of heuristic line balancing techniques. Management Science, 1986, 32, 430-454.
6. Rachamadugu, R. and Talbot, B., Improving the quality of workload assignments in assembly lines. International Journal of Production Research, 1991, 29, 619-633.
7. Smunt, T. L. and Perkins, W. C., Stochastic unpaced line design: Review and further experimental results. Journal of Operations Management, 1985, 5, 351-373.
8. Gehrlein, W. V. and Patterson, J. H., Balancing single model assembly lines: Comments on a paper by E. M. Dar-El (Mansoor). AIIE Transactions, 1978, 10, 109-112.
9. Hoffmann, T. R., Assembly line balancing with a precedence matrix. Management Science, 1963, 9, 551-562.
10. Goldberg, D. E., Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Reading, 1989.
11. Falkenauer, E., A new representation and operators for genetic algorithms applied to grouping problems. Evolutionary Computation, 1994, 2, 123-144.
12. Suh, J. Y. and van Gucht, D., Incorporating heuristic information into genetic search. In Proc. 2nd Int. Conf. Genetic Algorithms and Their Applications, ed. J. J. Grefenstette. Lawrence Erlbaum, 1987, pp. 100-107.
13. Grefenstette, J. J., Incorporating problem specific knowledge into genetic algorithms. In Genetic Algorithms and Simulated Annealing, ed. L. Davis. Pitman, London, 1987, pp. 42-60.
14. Chockalingam, T. and Arunkumar, S., Genetic algorithm based heuristics for the mapping problem. Computers and Operations Research, 1995, 22, 55-64.
15. Jones, D. R. and Beltramo, M. A., Solving partitioning problems with genetic algorithms. In Proc. 4th Int. Conf. Genetic Algorithms, ed. R. Belew and L. Booker. Morgan Kaufmann, 1991, pp. 442-449.
16. Von Laszewski, G., Intelligent structural operators for the k-way graph partitioning problem. In Proc. 4th Int. Conf. Genetic Algorithms, ed. R. Belew and L. Booker. Morgan Kaufmann, 1991, pp. 45-52.
17. Falkenauer, E., Solving equal piles with the grouping genetic algorithm. In Proc. 6th Int. Conf. Genetic Algorithms, ed. L. J. Eshelman. Morgan Kaufmann, 1995, pp. 492-497.
18. Anderson, E. J. and Ferris, M. C., Genetic algorithms for combinatorial optimization: the assembly line balancing problem. ORSA Journal on Computing, 1994, 6, 161-173.
19. Leu, Y. Y., Matheson, L. A. and Rees, L. P., Assembly line balancing using genetic algorithms with heuristic-generated initial populations and multiple evaluation criteria. Decision Sciences, 1994, 25, 581-606.
20. Kim, Y. K., Kim, Y. J. and Kim, Y., Genetic algorithms for assembly line balancing with various objectives. Computers and Industrial Engineering, 1996, 30, 397-409.
21. Goldberg, D. E., Korb, B. and Deb, K., Messy genetic algorithms: Motivation, analysis, and first results. Complex Systems, 1989, 3, 493-530.
22. Kilbridge, M. D. and Wester, L., A heuristic method of assembly line balancing. Journal of Industrial Engineering, 1961, 12, 292-298.
23. Tonge, F. M., A Heuristic Program of Assembly Line Balancing. Prentice-Hall, Englewood Cliffs, 1961.
24. Arcus, A. L., An analysis of a computer method of sequencing assembly line operations. Ph.D. dissertation, University of California, Berkeley, 1963.
25. Bartholdi, J. J., Balancing two-sided assembly lines: a case study. International Journal of Production Research, 1993, 31, 2447-2461.
26. Hoffmann, T. R., Assembly line balancing: A set of challenging problems. International Journal of Production Research, 1990, 28, 1807-1815.
27. Grefenstette, J. J., Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics, 1986, 16, 122-128.
28. Moodie, C. L. and Young, H. H., A heuristic method of assembly line balancing for assumptions of constant or variable work element times. Journal of Industrial Engineering, 1965, 16, 23-29.
