
A Novel Neural Network Model for the Performance Evaluation

of Flexible Manufacturing Systems


University of Catania - Faculty of Engineering

Institute of Informatics and Telecommunications
Viale A. Doria 6, 95125 Catania (ITALY), fax: +39 95 338280

Abstract - The paper deals with the problem of performance optimization of Flexible Manufacturing Systems. As widely documented in literature, this is a hard task on account of its computational complexity. For this reason a number of heuristic techniques are currently available, the best known of which are based on Event Graphs, a particular class of Petri Nets. The paper proposes a performance optimization technique which, although based on Event Graphs, applies different algorithms than the traditional heuristic ones. More specifically, a novel neural model is used to solve the optimization problem. The neural model was obtained by making significant changes to a network which is well known in literature: the Hopfield network. The modifications were made in order to meet the constraints typical of performance optimization of Flexible Manufacturing Systems. The aim of the paper is to present the new neural model and to show the performance optimization results that can be obtained by using it. The results presented highlight the quality of the proposed solution and its applicability in the factory automation environment.

I. INTRODUCTION

One of the main goals in the area of factory automation is to fully exploit the resources present in a Flexible Manufacturing System (FMS) [1], so as to optimize its productivity (or performance). This can be achieved in different ways such as, for example, scheduling the sequence of activities performed by each resource or fixing the number of part types processed simultaneously in each production cycle of the FMS. In this paper it will be assumed that the processing sequence for each resource is always fixed, while the number of parts present in the production cycle can be varied. Too low a number of parts processed underexploits the available resources, while too high a number causes conflicts which slow down the productivity of the system. The aim is to determine a number of parts to be processed such as to optimize the productivity of the FMS as a whole. This problem is generally characterized by the very high calculation time needed to obtain a feasible solution. For example, literature provides well-known techniques for performance optimization based on the use of a particular class of Petri Nets, called Event Graphs [2]. In this case the FMS performance optimization is a combinatorial task of non-polynomial dimension (i.e. an NP-hard problem).

There are numerous contributions in literature presenting heuristic algorithms for performance optimization. Although these algorithms do not always guarantee an optimal solution to the optimization problem, they have the evident advantage of low calculation times. One of the most important contributions in this field is provided by [3], in which two heuristic algorithms are proposed. Although they feature high percentages of optimal solutions, some remarks need to be made. One concerns the extremely vague definition of certain conditions surrounding the optimization problem to be solved. In addition, although the algorithms are heuristic and therefore reduce the set of solutions to be explored, the time required to reach a solution cannot be established a priori, and may range in an interval whose upper bound is the time required to calculate an exhaustive solution.

The aim of this paper is to present an alternative to these heuristic algorithms which can overcome the limits outlined above. The performance optimization technique proposed in the paper is again based on Event Graphs, but applies different algorithms than the traditional ones. In particular, a novel neural model is used to solve the optimization problem. The neural model was obtained by making significant changes to the well-known Hopfield network [4][5][6]. The modifications were made in order to meet the constraints typical of performance optimization of Flexible Manufacturing Systems.

The paper will present the new neural model and its use in FMS performance optimization. Then, in order to highlight the capacity of the neural approach to solve the problem, some examples of performance optimization of FMSs are shown.

II. EVENT GRAPHS-BASED FMS PERFORMANCE OPTIMIZATION METHODOLOGY

The class of Petri Nets denoted as Event Graphs is particularly suitable to represent Flexible Manufacturing Systems. An Event Graph is a Petri Net in which each place has one input transition and one output transition [2]. Each transition can have an associated firing time, in which case the Event Graph is called timed. The presence of a token in a place enables firing of the output transition for that place. Firing may occur in null time, if the transition is immediate, or in the time associated with the transition if it is timed. An Event Graph is said to be strongly connected if there is a path joining any pair of places. An elementary circuit in a strongly connected Event Graph is a directed path that goes

0-7803-2775-6/96 $4.00 © 1996 IEEE    1478


from one place back to the same place, while no other place is repeated.

Modelling an FMS by an Event Graph is quite simple and consists of representing each FMS resource (buffer, machine, robot, etc.) by a place, and the activity performed by the resource by a transition, to which the processing time of the activity is associated.

The Event Graph can easily be used for the performance evaluation and optimization of the FMS. The optimization problem of an FMS modelled by a strongly connected Event Graph can be formulated as minimizing the quantity:

    Σ_{i=1..n} u_i · x_i    (1)

where u_i are the p-invariants of the Event Graph [2] and x_i represents the number of tokens in place P_i, under the condition:

    M(γ) ≥ α · μ(γ)    (2)

where α is the performance required, M(γ) indicates the number of tokens in the elementary circuit γ of the strongly connected Event Graph, and μ(γ) is the sum of all the firing times of the transitions belonging to the elementary circuit γ.

Condition (2) may not be the only one. In many cases, one machine may carry out several operations sequentially. The corresponding Event Graph will feature particular elementary circuits, called command circuits, for each machine. Each command circuit specifies the sequencing of the jobs on the corresponding machine. Indicating as Γc the set of command circuits, the total number of tokens M(γc), ∀γc ∈ Γc, has to be equal to 1. More than one token in a command circuit would, in fact, correspond to the impossible situation in which the relative machine is performing two jobs at the same time. When there are command circuits in an Event Graph, (2) therefore becomes:

    M(γ*) ≥ α · μ(γ*)   ∀γ* ∈ Γ*    (2')
    M(γc) = 1   ∀γc ∈ Γc    (2")

where Γ* = Γ − Γc.

As can be seen from (1), (2') and (2"), optimization of the performance of a manufacturing system can be achieved by solving an integer linear problem where at least as many constraints as the elementary circuits in the graph must be considered. This represents the major drawback of such an approach, since it requires a great deal of time to find a solution. In [3] two heuristic algorithms to reduce this computational complexity are proposed. In the following sections the author presents an alternative strategy for FMS performance optimization, based on the use of a novel neural model.

III. THE NOVEL NEURAL MODEL

The aim of this section is to describe the novel neural model used to solve the FMS performance optimization problem. In order to gain a better understanding of the need for a new neural model, the following subsection presents the original Hopfield model [4][5][6] on which the new model is based. Then, the limits of the original Hopfield model and the capacity of the new model to overcome them will be pointed out.

A. The Hopfield Neural Network

The Hopfield neural model [4][5][6] is very suitable to solve optimization problems. This network was first used to solve the well-known Travelling Salesman Problem (TSP) [7], and then its use was extended to a large number of optimization problems.

The Hopfield-type model is based on a single-layer architecture of neurons, the outputs of which are fed back towards the inputs. Fig.1 shows a Hopfield network with n neurons.

Fig. 1 - Hopfield Neural Model

As can be seen, the i-th neuron (which is drawn as a circle) receives the outputs of the other neurons and an external bias current I_i, and produces an output O_i. Each feedback between the output of the j-th neuron and the input of the i-th neuron has an associated weight, w_ij, which determines the influence of the j-th neuron on the i-th neuron. In the i-th neuron, the weighted sum of the outputs O_j and the external bias current I_i produces a signal U_i given by:

    dU_i/dt = Σ_j w_ij · O_j + I_i − U_i/τ    (3)

where τ is a user-selected decay constant. The output of the i-th neuron, O_i, is linked to U_i by a sigmoidal monotonically increasing function. In the paper the following function is used:

    O_i = g_i(U_i) = (1/2) · (1 + tanh(U_i / u_0))


where the parameter u_0 controls the effective steepness of the function: the lower it is, the steeper the function.

Hopfield [6] showed that if the weights matrix W = [w_ij] is symmetrical and if the function g_i is a step-like curve (i.e. u_0 → 0), the dynamics of the neurons described by (3) follow a gradient descent of the quadratic energy function, also known as the Lyapunov function:

    E = −(1/2) · Σ_i Σ_j w_ij · O_i · O_j − Σ_i I_i · O_i    (4)

Under the same hypothesis, Hopfield [6] showed that the minima of the energy function (4) coincide with the corners of the hypercube defined by O_i ∈ {0,1}.

These theoretical results allow a solution to a particular optimization problem to be obtained from the stabilized outputs of the Hopfield network, by the following method. First the surrounding conditions of the optimization problem are identified and expressed in the form of the energy function given by (4). Each term of the energy function relating to a surrounding condition is multiplied by a real coefficient which weights the influence of the condition on the solution to the problem. By comparison with function (4) the weights and bias currents are expressed as a function of the coefficients by which the surrounding conditions are multiplied. If the weights matrix obtained is symmetrical and the function g_i is step-like, the stable output of the Hopfield model obtained by imposing the weights and biases previously calculated corresponds to a minimum of (4) and thus to a solution to the problem.

B. Description of the Novel Neural Model

As shown in the previous subsection, the Hopfield model features the presence of a constant bias current throughout the neural evolution. The bias current plays an essential role in the dynamics of the neural model since, as can be seen in (3), it determines the value of each neuron. A bias value close to zero causes the output of the i-th neuron to assume a value of 1 or 0, depending exclusively on the weighted sum Σ_j w_ij · O_j. A very positive bias current value determines an output value close to 1. By virtue of its role, the bias current is always linked to a surrounding condition, so that the output value of each neuron meets the condition.

The considerations made so far show that, in the version proposed in [5][6], the Hopfield neural model can only be applied to solve problems in which all the surrounding conditions feature a fixed target. In such cases an appropriate bias current value ensures the validity of the solution. If, however, a problem features at least one surrounding condition which does not rigidly constrain the neural solution but allows it to vary within a range of possible values, the neural model described above does not provide a valid solution. This kind of problem cannot be solved by recourse to constant bias currents, which drastically limit the variability of the neural solution.

A strategy by which such problems can be solved features the possibility of dynamically modifying the bias current values in order to meet all the surrounding conditions. A neural model which can achieve this aim is shown in Fig.2. Comparison between Figs. 1 and 2 highlights the modifications made to the model originally proposed in [5][6].

Fig.2 - Neural Model Proposed

Two different levels can be seen in the model: the Bias Current Level and the Neuron Level. Both levels receive the fed-back neural outputs. The bias current level processes the outputs received and on the basis of them modifies the bias currents according to the conditions surrounding the problem to be solved. The neuron level, on the other hand, is the same as the one in the Hopfield model presented in [5][6]. Processing at the two levels is in cascade, as it is first necessary to modify the bias currents for each neuron and then calculate the output.

During evolution of the neural network the bias currents have to be updated according to the output values. If the neural solution meets all the surrounding conditions at that iteration, the bias currents are left unaltered. If, on the other hand, one or more conditions are not met, the bias current for each neuron which does not meet the condition(s) is modified in such a way as to force the neural solution to meet the surrounding condition(s).

IV. USING THE NOVEL NEURAL NETWORK FOR FMS PERFORMANCE OPTIMIZATION

The aim of this section is to show how the novel neural network-based approach is used to optimize the performance of an FMS. The neural solution to the FMS performance optimization is achieved through two different steps: modelling the FMS by the novel neural network and linking the network to the surrounding conditions of the problem. The two following subsections will give a detailed description of these two steps. In these sections it will be shown that the Hopfield model presented in Section III.A is not capable of solving the performance optimization problem, a solution to which is, however, possible using the novel model presented in Section III.B.
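The two-level dynamics described in Section III.B can be sketched in Python on a toy instance. All numerical data here (four places, one circuit over places {0,1,2} required to hold at least two tokens, the coefficients, step sizes and the function names) are illustrative assumptions, not taken from the paper; the sketch only shows the cascade of a bias current level followed by a neuron level integrating eq. (3).

```python
import numpy as np

def g(u, u0=0.5):
    # Sigmoidal activation O = (1 + tanh(U/u0)) / 2; u0 controls the steepness.
    return 0.5 * (1.0 + np.tanh(u / u0))

def bias_level(I, O, circuits, delta=0.01):
    """Bias current level: if a circuit holds fewer active neurons than its
    minimum token count, raise the bias of its inactive neurons."""
    active = O > 0.5
    for places, m_min in circuits:
        if int(active[places].sum()) < m_min:
            for i in places:
                if not active[i]:
                    I[i] += delta
    return I

def neuron_level(U, O, I, w, dt=0.02, tau=1.0):
    # Euler step of eq. (3): dU_i/dt = sum_j w_ij*O_j + I_i - U_i/tau
    return U + dt * (w @ O + I - U / tau)

# Toy instance (assumed): 4 places, quality invariants u_i = 1,
# one circuit over places {0,1,2} that must hold at least 2 tokens.
n = 4
u = np.ones(n)
A = 1.0
w = -A * np.outer(u, u)          # quality term: w_ij = -A*u_i*u_j
np.fill_diagonal(w, 0.0)         # no self-feedback (assumed convention)
circuits = [(np.array([0, 1, 2]), 2)]

U = np.array([0.02, 0.01, 0.0, 0.0])   # small asymmetry breaks ties
I = np.zeros(n)
for _ in range(6000):
    O = g(U)
    I = bias_level(I, O, circuits)     # level 1: adapt the bias currents
    U = neuron_level(U, O, I, w)       # level 2: update the neuron states
O = g(U)
print(np.round(O, 2))
```

Because the bias currents only grow while a circuit is short of tokens, the network settles in a state where the circuit holds at least its minimum number of tokens, while the place outside every circuit (index 3) is switched off by the quality term.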
A. Modelling the FMS by the Novel Neural Network

The first step in the strategy proposed is representing the FMS using the novel neural network. This is achieved by mapping the Event Graph modelling the FMS onto the neural model, by making each place P_i in the Event Graph correspond to the neuron O_i in the novel neural network. The value (1 or 0) of the output of each neuron O_i models the presence or absence of a token in the place P_i modelled: if the output is 1 the place corresponding to the neuron contains a token, if it is 0 the place contains no tokens. It is clear that the proposed coding of the neural output is based on the necessary assumption that each place in the Event Graph contains at most one token. In general there is no limit in an Event Graph to the maximum number of tokens in a place. Such a limit can only be imposed by the particular features of the FMS modelled (e.g. if a place models a buffer, the maximum number of tokens in the place is equal to the real capacity of the buffer). In [3] it is demonstrated that it is always possible to modify a strongly connected graph into an equivalent one in which each place possesses at most one token. The proposed mapping between the Event Graph and the novel Hopfield-based Neural Network can occur under the hypothesis of the original Event Graph having been converted into an equivalent one in which each place contains at most one token.

B. FMS Performance Optimization by the Novel Neural Network

As already mentioned in Section III.A, solving any optimization problem by means of the Hopfield network is generally achieved through a number of steps. First, all the constraints of the optimization problem are expressed in terms of Lyapunov energy functions. Then, the weights and bias currents of the Hopfield network are obtained according to the surrounding conditions, so that it can provide a solution to the problem. These steps are examined in greater detail below, with reference to the surrounding conditions (1), (2') and (2") which, as mentioned previously, are always present in the problem of FMS performance optimization.

The expression of the energy function linked to the problem of FMS performance optimization comprises two terms respectively relating to the quality and validity of the solution. The term relating to quality has to be based on condition (1) while the term related to the validity has to take into account conditions (2') and (2"). The expressions of these two terms will be formulated below. Each term will be multiplied by a real coefficient (as will be seen, by the coefficients A, B and C) which weights the influence of each term on the others.

The term relating to the quality of the solution can be obtained by considering that condition (1) can also be expressed in the following form:

    minimize (Σ_{i=1..n} u_i·x_i)² = minimize (Σ_{i=1..n} Σ_{j=1..n} u_i·x_i·u_j·x_j)

This condition can be expressed in the following term of the Lyapunov energy function:

    (A/2) · (Σ_i Σ_j u_i·O_i·u_j·O_j)    (5)

based on the consideration that each x_i corresponds to O_i, according to the mapping between the Event Graph and the neural model, as stated in the previous section. By comparison of (5) with (4), it is possible to calculate the contribution of this term to the bias currents and weights:

    w_ij = −A·u_i·u_j,    I_i = 0

The energy function term relating to the validity of the solution has to impose a certain number of tokens in each elementary circuit of the Petri net. The corresponding term of the Lyapunov function has to be of the following kind:

    Constant · Σ_γ (Σ_{i: P_i∈γ} O_i − M(γ))²    (6)

This term is, in fact, minimized when the number of activated neurons in each circuit γ is equal to M(γ). The value of this number is given by conditions (2') and (2"). Condition (2') establishes that, in all the circuits γ* ∈ Γ*, this number has to be at least equal to ⌈α·μ(γ*)⌉, while condition (2") states that in all the circuits γc ∈ Γc, M(γc) is strictly 1. Below we will illustrate the expressions of the energy function terms relating to conditions (2') and (2").

As said above, (6) establishes that there are M(γ) tokens in each circuit. Therefore the following Lyapunov energy function term:

    (B/2) · Σ_{γ*∈Γ*} (Σ_{i: P_i∈γ*} O_i − ⌈α·μ(γ*)⌉)²    (7)

establishes that in each circuit γ* ∈ Γ* there is a number of tokens equal to ⌈α·μ(γ*)⌉. Comparing (7) with (4) we get the following weights and bias current values:

    w_ij = −B·n_{γ*,ij}    (8')
    I_i = B · Σ_{γ*: P_i∈γ*} ⌈α·μ(γ*)⌉    (8")

where n_{γ*,ij} represents the number of circuits in the set Γ* to which the places with index i and j simultaneously belong. As can be seen, the bias current assumes a constant value, fixing the number of tokens in each non-command elementary circuit as ⌈α·μ(γ*)⌉. For this reason, the bias current value given by (8") might make the solution non-valid. A valid solution may, in fact, feature the presence of some circuits in which the number of tokens is strictly greater than ⌈α·μ(γ*)⌉.

These considerations show that the original Hopfield model presented in Section III.A is not capable of solving the problem being dealt with here. The novel model presented in Section III.B, on the other hand, is capable of reaching this goal. It is, in fact, characterized by variability in the bias currents; they can be varied during the neural iteration in order to cause the number of tokens in each circuit γ* ∈ Γ* to be no less than ⌈α·μ(γ*)⌉. In this way the neural network is free to establish the number of tokens in each circuit γ* ∈ Γ*, thus respecting the validity of the solution. In other words, the number of tokens in each circuit is not determined a priori but is varied (always, of course, being greater than or equal to a minimum value) so as to guarantee the validity of the solution.

Taking into account (8') and (8"), the modification of the bias currents in the novel neural model has to be made according to the algorithm shown in Fig.3.

    contribution of (8') to the weights is calculated;
    contribution of (8") to the bias currents is set to zero;
    repeat
        for (each neural iteration) do
            for ∀γ*: P_i ∈ γ* do
                if (Σ_{j: P_j∈γ*} O_j < ⌈α·μ(γ*)⌉) then
                    increase the bias current of the i-th neuron;
            calculate the output of each neuron according to
            the new bias currents;
        end
    until (the neural output is stable);

Fig.3 - Algorithm proposed for the Updating of the Bias Currents

The algorithm can be explained as follows. At each neural iteration all the non-activated neurons are considered (i.e. the neurons for which O_i = 0 holds). For each of them all the circuits γ* to which the corresponding place belongs are considered (i.e. the condition P_i ∈ γ* holds). The number of tokens for each of these circuits is counted. If it is less than ⌈α·μ(γ*)⌉, i.e. condition (2') is not met for that circuit, the bias current of the neuron being considered is increased by δB · Σ_{γ*: P_i∈γ*} ⌈α·μ(γ*)⌉. The value δB represents a fraction of B. In this way, the increment of the bias current forces the network to satisfy condition (2').

Condition (2"), relating to imposing a single token in each of the command circuits, corresponds to the Lyapunov function term:

    (C/2) · Σ_{γc∈Γc} (Σ_{i: P_i∈γc} O_i − 1)²    (9)

where the first sum is extended to ∀γc ∈ Γc. For each γc ∈ Γc, the second sum is extended to all the O_i such that the corresponding place P_i in the Event Graph belongs to γc. As can be seen, this condition imposes a single token in each command circuit γc. Imposing a single token in each command circuit guarantees the validity of the solution. The contribution of (9) to the bias currents and weights becomes:

    w_ij = −C·n_{γc,ij},    I_i = C·n_{γc,i}

where n_{γc,ij} represents the number of circuits in the set Γc to which the places with index i and j simultaneously belong, and n_{γc,i} represents the number of circuits in the set Γc to which the place with index i belongs. In this case the contribution of (2") to the bias current is constant and does not have to be modified at each iteration.

V. RESULTS

The aim of this section is to give the results of FMS performance optimization obtained using the neural approach presented above. Two examples will be considered which are well known in literature and refer to a Job-Shop [3] and a deterministic Kanban system [8].

The Event Graph shown in Fig.4 models a job-shop composed of four machines M1, M2, M3 and M4, which can manufacture three types of parts denoted by R1, R2 and R3. The production mix is 25%, 25% and 50% for R1, R2 and R3, respectively. The production processes of the part-types are R1 = {M1(1), M2(1), M3(3), M4(3)}, R2 = {M1(1), M4(1), M3(2)}, R3 = {M1(1), M2(2), M4(1)}, where the number in brackets specifies the processing time on each machine. As can be seen, in the model there are four command circuits relating to the four machines: Γc = {γc1, γc2, γc3, γc4} where γc1 = (P13, P14, P15, P16), γc2 = (P17, P18, P19), γc3 = (P20, P21), and γc4 = (P22, P23, P24, P25).

Table I refers to the solution of the job-shop performance optimization problem provided by the neural strategy proposed in the previous sections, considering a value of α = 1/6 and the following values of u_i in (5): u_i = 1 ∀i ∈ [0,12] and u_i = 0 ∀i ∈ [13,25]. For each place of the Event Graph, the table shows the distribution of the tokens determined by the neural network. In particular only the places containing one token are shown. The places not

shown in Table I contain no token. As can be verified, this solution produces the best performance for the whole system.

Fig.4 - Event Graph Model of a Job-Shop.

TABLE I
OPTIMAL SOLUTION BY THE NEURAL APPROACH

P2 | P3 | P4 | P5 | P8 | P11 | P13 | P17 | P20 | P23

The Event Graph in Fig.5 shows a Kanban system composed of a production line with 3 machines, M1, M2 and M3, which can manufacture two part-types denoted as R1 and R2.

Fig.5 - Event Graph Model of a Kanban System with three Machines and 2 Part Types.

The manufacturing times of one part of R1-type on M1, M2 and M3 are 1, 2 and 1 respectively. The manufacturing times of one part of R2-type on M1, M2 and M3 are 1, 1 and 2 respectively. The parts enter the production line according to the sequence R1, R2, R1, R2, ... In the model shown in Fig.5, there are three command circuits for the three machines: Γc = {γc1, γc2, γc3} where γc1 = (P19, P20), γc2 = (P21, P22), and γc3 = (P23, P24).

As above, Table II gives the distribution of tokens corresponding to the best performance for the Kanban system. It was obtained using the neural approach and assuming α = 1/3 and values of u_i = 1 ∀i ∈ [0,17] and u_i = 0 ∀i ∈ [18,23].

VI. CONCLUSIONS

The paper has presented an original approach to FMS performance optimization. It is based on the modelling of an FMS by a novel neural model. The solution provided by the neural network determines the FMS configuration which maximizes performance. From tests carried out on a large number of examples, it was found that the quality of the neural solution is always high, as the solution reached is always optimal or close to optimal.

REFERENCES

[1] J.R.Pimentel, "Communication Networks for Manufacturing", Prentice-Hall International Editors, 1990.
[2] R.Zurawski, "Petri Nets and Industrial Applications: A Tutorial", IEEE Transactions on Industrial Electronics, vol.41, no.6, pp.567-583, December 1994.
[3] S.Laftit, J.M.Proth, X.L.Xie, "Optimization of Invariant Criteria for Event Graphs", IEEE Transactions on Automatic Control, vol.37, no.5, pp.547-555, May 1992.
[4] R.Hecht-Nielsen, "Neurocomputing", Reading, MA: Addison-Wesley, 1990.
[5] J.J.Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proceedings of the National Academy of Sciences, vol.79, pp.2554-2558, April 1982.
[6] J.J.Hopfield, "Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons", Proceedings of the National Academy of Sciences, vol.81, pp.3088-3092, May 1984.
[7] J.J.Hopfield, D.W.Tank, "Neural Computation of Decisions in Optimization Problems", Biological Cybernetics, vol.52, pp.141-152, 1985.
[8] M.Di Mascolo, Y.Frein, Y.Dallery, R.David, "A Unified Modelling of Kanban Systems Using Petri Nets", Technical Report no.89-06, LAG, Grenoble, France, September 1989.
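As a closing illustration, the way Section IV.B assembles the weights and bias currents from the quality term (5), the validity term (8'), and the command-circuit term can be sketched in Python. The small graph data, the coefficients A, B, C and the helper name build_network are hypothetical, not the authors' implementation; following Fig.3, the (8") contribution to the bias currents is left at zero and only the constant command-circuit contribution C·n_{γc,i} is preset, while the minimum token counts ⌈α·μ(γ*)⌉ are returned for use by the iterative bias update.

```python
import numpy as np
from math import ceil

def build_network(n, u, circuits_star, mu_star, circuits_c, alpha,
                  A=1.0, B=2.0, C=2.0):
    """Assemble weights and initial bias currents from the energy terms:
    quality (5): w_ij -= A*u_i*u_j; validity (8'): w_ij -= B*n_{gamma*,ij};
    command circuits: w_ij -= C*n_{gammac,ij}, I_i += C*n_{gammac,i}.
    The (8") bias contribution starts at zero, as in Fig.3."""
    w = -A * np.outer(u, u)
    I = np.zeros(n)
    for places in circuits_star:          # non-command circuits gamma*
        for i in places:
            for j in places:
                w[i, j] -= B              # counts n_{gamma*,ij} circuit by circuit
    for places in circuits_c:             # command circuits gammac
        for i in places:
            I[i] += C                     # I_i = C * n_{gammac,i}
            for j in places:
                w[i, j] -= C              # counts n_{gammac,ij}
    np.fill_diagonal(w, 0.0)              # no self-feedback (assumed convention)
    # Minimum token counts ceil(alpha*mu(gamma*)) used by the Fig.3 update rule
    targets = [ceil(alpha * mu) for mu in mu_star]
    return w, I, targets

# Hypothetical graph: 5 places, one non-command circuit {P0,P1,P2} with
# mu = 7, one command circuit {P3,P4}, required performance alpha = 1/3.
w, I, targets = build_network(
    n=5, u=np.array([1.0, 1.0, 1.0, 0.0, 0.0]),
    circuits_star=[[0, 1, 2]], mu_star=[7.0],
    circuits_c=[[3, 4]], alpha=1/3)
print(targets)
```

A bias-update loop in the style of Fig.3 would then raise I_i by a fraction δB of B whenever the circuit containing P_i holds fewer than the returned minimum number of tokens.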