International Journal of Production Research
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/tprs20

Safety stock determination based on parametric lead time and demand information
Alex J. Ruiz-Torres & Farzad Mahmoodi
Published online: 29 Apr 2009.

To cite this article: Alex J. Ruiz-Torres & Farzad Mahmoodi (2010) Safety stock determination based on parametric lead time and demand information, International Journal of Production Research, 48:10, 2841-2857, DOI: 10.1080/00207540902795299

To link to this article: http://dx.doi.org/10.1080/00207540902795299
International Journal of Production Research
Vol. 48, No. 10, 15 May 2010, 2841–2857

Safety stock determination based on parametric lead time and demand information
Alex J. Ruiz-Torres(a)* and Farzad Mahmoodi(b)

(a) Departmento de Gerencia, Facultad de Administración de Empresas, Universidad de Puerto Rico, Recinto de Río Piedras, San Juan, Puerto Rico, 00931-3332
(b) School of Business, Box 5790, Clarkson University, Potsdam, NY 13699-5790, USA

*Corresponding author. Email: alex.ruiztorres@uprrp.edu


(Received 1 July 2008; final version received 28 January 2009)

In many production environments where demand and lead times are variable,
significant levels of safety stock inventory are required to assure timely
production and delivery of the final product. Traditional models to determine
the appropriate safety stock level may result in more safety stocks at sub-assembly
and finished goods levels than necessary and thus lead to higher inventory
carrying costs than desired. Such models generally incorrectly assume that the
demand during the lead time follows a normal distribution. This paper revisits
and analyses a re-ordering point inventory model developed by Estes (1973) that
accounts for demand and lead time variability without making any particular
distributional assumptions. Instead, it focuses on historical data to determine the
possible outcomes of the replenishment cycle. We compare the proposed model
with the traditional model by conducting simulation analysis using three data sets
obtained from an electronics manufacturer. The results indicate that the proposed model yields service levels much closer to the target, and lower inventory carrying costs, than the traditional model, regardless of the data set used.
Keywords: inventory management and inventory systems; safety stock
determination

1. Introduction
In the great majority of dependent demand production environments where not only the
demand, but also procurement and manufacturing lead times are variable, safety stocks
are required to achieve reasonable service levels. In such environments, the master
production schedules are generally fixed for just a short period of time because market dynamics and competitive pressures often prevent them from being set for the entire procurement and manufacturing lead time duration. Thus, safety stocks are required at
subassembly levels to assure timely production and delivery of the final product, and in
general, the consideration of safety stocks in the supply chain is a highly relevant problem
(Villegas and Smith 2006, Chen et al. 2007) given it affects customer service performance
and inventory costs. As many studies have shown, the amount of safety stock should be
derived based on the average and variability of demand and supply, as well as the desired
service level (e.g. Chase et al. 2004, Krajewski and Ritzman 2005, Stevenson 2005).
Research in the development of inventory systems and policies that include safety stock
calculations is abundant and includes Chandra and Grabis (2008), Persona et al. (2007),
Jung et al. (2004), and Pan and Yang (2002).
In the traditional (r, Q) inventory system, the re-order point is a function of the average demand during the expected lead time, plus the required safety stock (r = L·d + ss), where d represents the average demand per unit time, L the average lead time, and ss the safety stock. The amount of safety stock is a function of five factors: average and standard deviation of lead time, average and standard deviation of demand, and the desired service level. The traditional model to calculate the safety stock is ss = Z·σ′, where σ′ = (L·σd² + d²·σL²)^(1/2) and Z is based on the desired service level. Note that σd and σL represent the standard deviation of demand and the standard deviation of lead time, respectively.
There are concerns regarding the above approach to determine the appropriate amount of safety stock, as the combined variability component σ′ often overestimates the 'joint' variability of demand and lead time. In other words, safety stock based on the above-mentioned calculation can actually result in higher service levels than desired and consequently higher inventory carrying costs. One of the main reasons for the overestimation is that the assumption of normality for the demand during the lead time may be incorrect, and therefore the calculated safety stock may result in a service level significantly different from what was desired. It is often the case that the normal
distribution is not the best representation of the demand during the lead time. However,
information about the form of the probability distribution of the lead time demand is
generally limited. Often, only an estimate of the mean and the variance is available. There
is a tendency to use the normal distribution under these conditions, although many studies
have demonstrated that the normal distribution does not offer the best shield against the
occurrences of other distributions with the same mean and variance (e.g. Moon and Choi
1997, 1998, Ouyang and Wu 1998, Lin 2008).
Several studies have applied the mini-max distribution free approach to a variety of
inventory problems since the late 1950s. For example, Scarf (1958), Shore (1986), Gallego
and Moon (1993), and Moon and Choi (1995) studied the newsvendor problem where only the mean and variance of demand are known, without any assumptions regarding the
form of the demand distribution. In addition, Gallego (1992) and Moon and Choi (1994)
applied the mini-max distribution free approach to a continuous review inventory model,
while Moon and Gallego (1994) applied the approach to a periodic review inventory
model. Furthermore, Moon and Choi (1997) applied the distribution free approach to
a single period model with stochastic demand under a two-echelon production system and
developed a procedure that provides optimal inventory levels considering the worst
distribution. They concluded that the distribution free approach is robust. More recently,
Pan et al. (2004) applied the mini-max distribution free approach to study integrated inventory systems with the objective of simultaneously optimising the order quantity, lead time, backordering and re-order point. Finally, Pan and Hsiao (2001) and Lin (2008) applied the mini-max distribution free procedure to optimise a continuous review inventory
model with backorder price discount in which the lead time and ordering cost reductions
are inter-dependent.
Another distribution free approach used to calculate the re-order point is based on the
bootstrapping approach. In this approach, the data is used to formulate an empirical
distribution of the lead times. This empirical distribution is then used to generate the
re-order point estimate (Bookbinder and Lordahl 1989, Wang and Rao 1992).
This approach was also used by Fricker and Robbins (2000) who addressed cases where
no information is available about the inventory position and empirical distributions are
determined for both the demand and the lead time.
This paper re-visits a re-order point model initially proposed by Estes (1973) that
accounts for demand and lead time variability without any assumptions regarding the
form of the lead time demand distribution. Instead it focuses on historical information
regarding the lead time and the demand in order to generate all the possible outcomes of
the replenishment cycle (and the probability of each outcome) to determine the expected
service level that matches the target service level. The rest of the paper is organised
as follows: Section 2 describes the traditional model for determining the re-order
point. Section 3 describes the Estes (1973) model and presents a numerical example of the
calculations required for this model. Section 4 presents a simulation analysis illustrating
how the traditional and proposed (Estes 1973) models compare by utilising three data sets
obtained from an electronics manufacturer. Finally, Section 5 discusses the results and provides directions for future research.

2. The traditional model


An important factor in the determination of safety stock is the definition of service level
(SL). The service level of an inventory system can be defined in several ways (Waters 2003),
one case being the percentage of times that demand during a cycle is fully served by the
available inventory (i.e. all demand filled service level). A second definition of service level
is based on the average percentage of demand that was filled from the available inventory
(i.e. fill rate service level), also presented as one minus the ratio of units not available
(stockout units) over the demand (Ballou 2004). The difference between these two service
level definitions is demonstrated by a simple example. Assume r = 100 and the demand during lead time for five re-ordering cycles is 96, 103, 99, 107, and 110. The resulting measure under the all demand filled service level definition for this r would be 40% (only in two out of the five cycles was the demand fully met), while based on the fill rate service level definition, the service level measure equals 96.3% (the average of 1, 100/103, 1, 100/107, and 100/110). Clearly, the resulting service level for a system given a fixed r based on the second definition will be at least as high as the measure based on the first definition. In this paper we are concerned with the fill rate service level (thus, unless otherwise noted, SL implies this definition and is defined as a percentage), as this measure is the most relevant and used in practice based on our observations.
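To make the two service level definitions concrete, the short sketch below recomputes the example in Python (the five lead time demands and r = 100 are taken from the text above); it is an illustration only, not part of the original paper.

```python
cycle_demands = [96, 103, 99, 107, 110]   # demand during lead time, five cycles
r = 100

# 'all demand filled' service level: fraction of cycles fully served from stock
all_filled_sl = sum(d <= r for d in cycle_demands) / len(cycle_demands)          # 0.40

# fill rate service level: average fraction of demand served per cycle
fill_rate_sl = sum(min(1.0, r / d) for d in cycle_demands) / len(cycle_demands)  # ~0.963
```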
As mentioned earlier, the traditional model to calculate the safety stock is ss = Z·σ′, where σ′ = (L·σd² + d²·σL²)^(1/2) and σd and σL represent the standard deviation of demand and standard deviation of lead time, respectively. Given our fill rate service level definition, the value of Z must be based on the expected number of units available during the cycle and the total expected demand per cycle. As described in Ballou (2004), the expected number of units demanded but not filled per cycle is σ′·E(z), where E(z) is the unit normal loss integral. To determine the Z closest to the desired SL, we estimate E(z) by (1 − SL)·L·d/σ′ and then find the corresponding Z value. Note that this formula assumes the replenishment quantity is equal to the expected demand during the replenishment cycle. The general formula is presented in Silver et al. (1998) and stated as (1 − SL)·Q/σ′, where Q is the replenishment (order) quantity.
For example, let us assume d = 6 units per day, σd = 1.768, L = 1.5 days, and σL = 0.51; therefore σ′ equals 3.75. For a SL of 80%, the expected number of units not filled in a cycle
is 1.8 units ((1 − SL)·L·d), and the value of E(z) equals 0.48. For an E(z) = 0.48 the related Z = −0.152, and therefore the desired ss = −0.57 units and r = 8.43 units. Assuming SL = 99%, then E(z) = 0.024, the corresponding Z = 1.59, and r = 14.95. As a comparison, if the service level assumed all demand filled at 80%, the value of Z = 0.842 and r = 12.15 (3.73 units more), and at the SL of 99%, Z = 2.33 and r = 17.72 (2.77 units more). Thus, as mentioned earlier, the all demand filled definition will lead to at least as high safety stock and re-order point levels as the fill rate service level definition for a similar percentage value.

3. A joint probability model (Estes model)


A joint probability approach to determine the re-order point was originally proposed by
Estes (1973). We refer to this model as the expected value re-order-point (EVR) method.
As Estes (1973) suggests, this model utilises information that is often available to most
companies (or could be ‘easily’ collected) that utilise fixed re-order point inventory
control systems. The first piece of information that the model requires is usage by time
period, without considering if the demand occurs before or during the re-order cycle.
The model assumes that there are x possible levels of demand per time period (i.e. a[1], a[2], a[3], . . . , a[x] represent the demand per unit time values). For example, if x = 3, a[1] = 4, a[2] = 6, and a[3] = 8, there is a probability pj that the demand per time unit is at a particular level j; thus, p1 is the probability of a daily demand equal to 4 units, p2 is the probability of a daily demand equal to 6 units, etc. Note that the
model assumes that the demand during each time unit is independent of the previous
time unit.
The second piece of information required by the Estes model is the lead time from re-order signal to delivery. There are y possible levels of lead time, represented by b[1], b[2], b[3], . . . , b[y], and Y = {b[1], b[2], b[3], . . . , b[y]}. We limit all b[w] to be integer values. For example, if y = 2, b[1] = 1 day, and b[2] = 2 days, there is a probability qj that the lead time for a cycle is at a particular level j. Thus, q1 is the probability of the lead time equal to 1 day and q2 is the probability of the lead time equal to 2 days. As in the case of the demand, the assumption is that the lead time of a cycle is independent of the previous cycle.
The EVR model utilises the decision tree approach to estimate the service level under
each of the possible outcomes. The first split in the tree is based on the number of lead time
levels; therefore the number of branches at the first level is y. In principle each branch will
then split into x branches and this will be repeated for the number of lead time units of that
branch. Figure 1 presents the decision tree for the previously discussed example. The first
split signifies a lead time of 1 day (top part of the figure), and a lead time of 2 days (bottom
part of the figure).
Note that the decision variable is the re-order point, thus the service level equation is
based on this unknown. The variable Df,G is used to represent the lead time demand for
a particular lead time and sequence of demands, where f represents the lead time branch
value, and G represents the demand values during the lead time. For example, Db[2],a[2]a[3] signifies the lead time demand when the lead time is b[2], and given b[2] = 2, the demand for the 2 lead time units (days) is based on the demand values of a[2] followed by a[3].
If the inventory value is positive, the ending inventory for that cycle represents the excess
inventory carried (resulting in inventory carrying costs), while if the inventory value is
negative, it represents the unsatisfied demand (and the service level is based on the fraction of the demand satisfied in that cycle). The boxes at the end of the branches in Figure 1 represent the lead time demand, and next to each is the corresponding probability.

Figure 1. Example decision tree. (The first split is on the lead time level, b[1] or b[2]; each terminal node lists the resulting lead time demand, e.g. Db[2],a[1]a[3] = a[1] + a[3], and its probability, e.g. q2·p1·p3.)

Using simple combinatorics, the number of branches to be analysed can be reduced significantly. Given that Db[2],a[2]a[3] = Db[2],a[3]a[2], what is relevant is the number of times each particular demand level occurs during the lead time, and not the particular sequence. Figure 2 presents an updated representation of the example's decision tree. Note that Db[2],a[2]:a[3] represents the lead time demand after one level 2 demand and one level 3 demand have occurred, regardless of the sequence of the particular demands.
As previously mentioned, the number of initial branches depends on the lead time levels. For each lead time level branch with b[w] time units, the number of branches at the second level (and therefore outcomes) is based on the number of demand levels x and the properties of Pascal's triangle. The number of outcomes for a branch with b[w] time units and x demand levels is 1/(x − 1)! × Π_{u=0…x−2} (b[w] + 1 + u). For example, if b[w] = 4 and x = 3, then there are 15 outcomes. Example outcomes are {4a[1], 4a[2], . . . , 2a[1]:2a[2], 3a[2]:a[3], . . . , a[1]:a[2]:2a[3], . . .}, with lead time demand values of {4a[1], 4a[2], . . . , 2a[1] + 2a[2], 3a[2] + a[3], . . . , a[1] + a[2] + 2a[3], . . .}. Let α[w] represent the coefficient (multiplicity) of a[w] in an outcome; thus for a branch with Db[w],a[1]:a[2]:2a[3], and therefore a lead time demand of a[1] + a[2] + 2a[3], the corresponding values are α[1] = 1, α[2] = 1, and α[3] = 2. The coefficient for the probability of a branch is then calculated as (Σ_u α[u])! / Π_w α[w]!; thus when α[1] = 1, α[2] = 1, and α[3] = 2, the probability of the branch is 12·q_b[w]·p1·p2·p3².
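Since each outcome is a multiset of demand levels drawn over b[w] periods, the outcome count equals the binomial coefficient C(b[w] + x − 1, x − 1), and the probability coefficient is the usual multinomial coefficient. A quick check in Python for the b[w] = 4, x = 3 example above (illustrative only):

```python
from math import comb, factorial, prod

x, periods = 3, 4                                   # demand levels and b[w] time units
outcomes = comb(periods + x - 1, x - 1)             # 15 distinct multisets of demands
alpha = [1, 1, 2]                                   # multiplicities for a[1]:a[2]:2a[3]
coefficient = factorial(sum(alpha)) // prod(factorial(k) for k in alpha)   # 12
```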
These characteristics of the problem allow for fewer calculations than the original full enumeration. As the value of max_w(b[w]) and the number of demand levels x increase, the number of outcomes increases exponentially, limiting the applicability of this approach. However, this limitation can be overcome by reducing the fidelity of the data model, for example by changing the base time units to '2 days' or '2 weeks', and by aggregating the demand levels to reduce the size of x.
Let Ω_b[w] denote the set of all possible outcomes when the lead time is b[w] time units and there are x demand levels. As mentioned earlier, the number of items in this set is 1/(x − 1)! × Π_{u=0…x−2} (b[w] + 1 + u). Let c be a member of set Ω_b[w], for example c = a[1]:a[2]:2a[3]. The service level for an outcome c is 100% if r ≥ Db[w],c, as the demand during the lead time was less than or equal to the amount of inventory at the point of the re-order signal (r). If r < Db[w],c, the demand during the lead time exceeded the re-order point, thus the service level is less than 100% and equal to r/Db[w],c. Let ω(c) represent the coefficient for the probability of this outcome, calculated as (Σ_u α[u])! / Π_w α[w]!; thus ω(a[1]:a[2]:2a[3]) = 12. Let p′c represent the probability of a branch c, with p′c = Π_w p_w^α[w] for that particular c; therefore p′_a[1]:a[2]:2a[3] = p1·p2·p3². The overall service level given an unknown r is then determined by
SL(r) = Σ_{b[w]∈Y} q_b[w] × ( Σ_{c∈Ω_b[w]} p′c · ω(c) · min(100%, r/Db[w],c) ).
The EVR method iteratively searches for the value of r that matches the desired service level based on this equation.
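A minimal Python sketch of the EVR calculation is given below, assuming integer lead times (in base time units) and using full enumeration of the demand multisets plus a bisection search for r; the names and structure are ours, and no attempt is made to reproduce the prototype tool described in Section 4.

```python
from itertools import combinations_with_replacement
from math import factorial, prod

def evr_service_level(r, a, p, b, q):
    """Expected fill-rate SL for re-order point r, given demand levels a with
    probabilities p and integer lead time levels b with probabilities q."""
    total = 0.0
    for lead_time, q_lt in zip(b, q):
        branch = 0.0
        # each outcome c is a multiset of demand-level indices over the lead time
        for combo in combinations_with_replacement(range(len(a)), lead_time):
            alpha = [combo.count(i) for i in range(len(a))]                     # multiplicities
            omega = factorial(lead_time) // prod(factorial(k) for k in alpha)   # omega(c)
            p_c = prod(p[i] ** alpha[i] for i in range(len(a)))                 # p'_c
            demand = sum(a[i] * alpha[i] for i in range(len(a)))                # D_b[w],c
            branch += omega * p_c * min(1.0, r / demand)
        total += q_lt * branch
    return total

def evr_reorder_point(target_sl, a, p, b, q, tol=1e-4):
    """Bisection search for the smallest r whose expected SL meets the target;
    r never needs to exceed the largest possible lead time demand."""
    lo, hi = 0.0, max(b) * max(a)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if evr_service_level(mid, a, p, b, q) < target_sl:
            lo = mid
        else:
            hi = mid
    return hi
```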

Figure 2. Example's updated decision tree. (Terminal nodes now group outcomes by the multiset of demand levels; e.g. Db[2],a[1]:a[2] = a[1] + a[2] with probability 2·q2·p1·p2, and Db[2],2a[3] = 2a[3] with probability q2·p3².)



3.1 Numerical example


This section provides an example of the calculations required to determine the re-order point using the EVR method. Assume q1 = q2 = 50%, p1 = 37.5%, p2 = 25%, and p3 = 37.5%. Let a[1] = 4, a[2] = 6, and a[3] = 8; and y = 2, with b[1] = 1 day and b[2] = 2 days. Note that these parameters result in the same d, σd, L, and σL described in Section 2 (σd = 1.768, L = 1.5 days, σL = 0.51, σ′ = 3.75). To obtain a service level of 80% the EVR method estimates r = 7.275, while for a service level of 99% the re-order point equals 13.9. Tables 1 and 2 present the service level calculations when r = 7.275 and r = 13.9 (as a reminder, the traditional method re-order points are 8.429 and 14.95, respectively). When r = 7.275 and the lead time is one day, two levels of demand fall below the re-order point and thus have a service level of 100%, while when the lead time is 2 days, all the combined demand values are larger than the re-order point, thus the service level is always less than 100%. While not shown in a table, when r = 8.429 (the r estimated by the traditional method for a service level of 80%), the EVR method calculations indicate a service level of 86.35%, while if r = 14.95, the EVR estimate for the service level is 99.54%. Finally, in the proposed method the r value should never exceed 16 (given this is the highest combination of demand and lead time), while the traditional method would consider such values (for a very high service level), demonstrating another weakness of the traditional method.
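Using the sketch from Section 3 (an illustration, not the authors' prototype), the example's parameters should reproduce the figures reported above and in Tables 1 and 2:

```python
a, p = [4, 6, 8], [0.375, 0.25, 0.375]      # demand levels and probabilities
b, q = [1, 2], [0.5, 0.5]                   # lead time levels (days) and probabilities

evr_reorder_point(0.80, a, p, b, q)         # ~7.275 (Table 1)
evr_reorder_point(0.99, a, p, b, q)         # ~13.9  (Table 2)
evr_service_level(8.429, a, p, b, q)        # ~0.8635, the SL at the traditional r for 80%
evr_service_level(14.95, a, p, b, q)        # ~0.9954, the SL at the traditional r for 99%
```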

4. Simulation analysis
In this section we compare the proposed EVR method with the traditional model by
conducting simulation analysis on three data sets obtained from an electronics
manufacturer. The three data sets were initially analysed in order to estimate d, σd, L, and σL assuming normality. Using Arena's Input Analyzer the data sets were also analysed
in order to obtain parameter estimates for seven additional distributions: Exponential,
Lognormal, Gamma, Beta, Weibull, Triangular, and Uniform (Altiok and Melamed
2007). The Input Analyzer software performs a Kolmogorov-Smirnov (K-S) test to
determine the fit of the distribution and parameters to each data set. This was used to
determine what distributions to test in the simulation analysis. Finally, each data set was grouped and analysed in order to estimate the parameters required for our model: a, p, b, and q.

Table 1. Expected carrying cost and service level calculations for r = 7.275.

b[w]   c          p′c     ω(c)   Db[w],c   min(100%, r/Db[w],c)
1      a[1]       37.5%   1      4         100.0%
       a[2]       25.0%   1      6         100.0%
       a[3]       37.5%   1      8         90.9%
       Σ_c p′c·ω(c)·min(100%, r/Db[w],c)             96.6%
2      2a[1]      14.1%   1      8         90.9%
       a[1]:a[2]  9.4%    2      10        72.8%
       a[1]:a[3]  14.1%   2      12        60.6%
       2a[2]      6.3%    1      12        60.6%
       a[2]:a[3]  9.4%    2      14        52.0%
       2a[3]      14.1%   1      16        45.5%
       Σ_c p′c·ω(c)·min(100%, r/Db[w],c)             63.4%
Expected service level: 50%·(96.6%) + 50%·(63.4%)    80.0%

Table 2. Expected carrying cost and service level calculations for r = 13.9.

b[w]   c          p′c     ω(c)   Db[w],c   min(100%, r/Db[w],c)
1      a[1]       37.5%   1      4         100.0%
       a[2]       25.0%   1      6         100.0%
       a[3]       37.5%   1      8         100.0%
       Σ_c p′c·ω(c)·min(100%, r/Db[w],c)             100%
2      2a[1]      14.1%   1      8         100.0%
       a[1]:a[2]  9.4%    2      10        100.0%
       a[1]:a[3]  14.1%   2      12        100.0%
       2a[2]      6.3%    1      12        100.0%
       a[2]:a[3]  9.4%    2      14        99.3%
       2a[3]      14.1%   1      16        86.9%
       Σ_c p′c·ω(c)·min(100%, r/Db[w],c)             98%
Expected service level: 50%·(100%) + 50%·(98%)       99%

Figure 3. Prototype tool to determine reorder point using the EVR method.

As part of this analysis we developed a software tool to estimate the re-order point
given a desired service level based on a maximum of 10 demand levels per time unit and
with lead times of up to 8 time units. An image of the interface is presented in Figure 3.
Note that this is a prototype tool; features such as those related to file management
and error prevention were not included. The limitations on the number of demand levels
and time unit ranges for the prototype are based on the computational time experienced
during development. Table 3 presents the computational times for a sample set of number
of demand levels (x) and lead time levels (y) combinations, including different values for the lead time ranges.

Table 3. Computational times for the prototype software.

x (demand levels)   y (lead time levels)   Range of lead times   CPU time (seconds)
4                   5                      1–5                   <1
7                   5                      1–5                   2
10                  5                      1–5                   4
4                   5                      4–8                   5
7                   5                      4–8                   306
10                  5                      4–8                   3533
4                   8                      1–8                   7
7                   8                      1–8                   324
10                  8                      1–8                   4142

The computer used for the experiments had an Intel Core Duo
CPU at 2.2 GHz. When there are five lead time levels (y = 5) and they are in the range of 1–5 time units, the required computational time is insignificant (less than 4 seconds), regardless of the number of demand levels. However, with five lead time levels (y = 5) but with higher values for the lead times (4–8 time units), the computational times increase exponentially as the number of demand levels increases. The same exponential growth in computational times was observed with y = 8 and the full range of lead times (1–8 time units). These results show the constraints on the number and range of demand and lead time levels, and in the case of an industrial application of the software, computing times would also limit the type of data that can be analysed. However, as described next, any range of data can be modified to meet these constraints by the creation of demand and time buckets.

4.1 Characteristics of the data sets used


The three data sets used represent items that display demand and lead time variability.
Specifically, item 1 displays medium variation in both demand and lead time, while item 2
displays high variation of demand but relatively low variation of lead time. Finally, item 3
displays low variation in demand as well as lead time. Also, note that item 3’s demand and
lead time patterns fit the normal distribution better than that of the other two items. These
three items were selected from a larger set provided by the company as they were
representative of the remaining data sets.
The first item analysed has a mean demand of 72.13 units with a standard deviation of
26.61, while its average lead time is 3.55 days with a standard deviation of 1.11. The lead
time values for item 2 were relatively large and given the constraints of the developed
software (maximum of eight time units), the data was transformed into a half week time
scale. The average half week demand was 259.85 units with a standard deviation of 222.23.
The lead time had an average of 6.2 half weeks and a standard deviation of 1.21. Note that
the assumption of demand normality for this data set will lead to poor results due to the
high probability of negative demand. Item 3 had lead time characteristics relatively similar
to that of item 2 (although the time scales are quite different: half weeks for item 2 and
days for item 3), with the normal distribution being one of the two best fitting
distributions. The average lead time was 4.2 days with a standard deviation 0.95, while the
average demand was 303.95 with a standard deviation of 25.97. Table 4 summarises the
‘normal distribution’ parameters for the three items.

Table 4. Data parameters.

        d        σd       L      σL
Item 1  72.13    26.61    3.55   1.110
Item 2  259.85   222.23   6.18   1.210
Item 3  303.95   25.97    4.18   0.949

Table 5. Best fitting distribution and parameters.

        Best fitting demand              Error     Best fitting lead time        Error
Item 1  34.5 + 77 × BETA(0.667, 0.713)   0.02340   1.5 + 4 × BETA(1.15, 1.09)    0.00637
Item 2  26 + WEIB(239, 1.06)             0.01136   NORMAL(6.18, 1.21)            0.03252
Item 3  NORMAL(304, 25.9)                0.03830   1.5 + 6 × BETA(5.12, 6.21)    0.02227

Table 5 provides the parameters and the square error for the best fitting distribution
for each item. Out of all the distributions tested, only the Beta, Normal, and Weibull
resulted in a ‘best fit’, although the Exponential, Gamma, and Uniform were ‘close’ for
some of the variables. As can be noted, the normal distribution was the best fit in two of
the six cases. In addition, BETA(5.12, 6.21) behaved similarly to a normal distribution.
The analysis indicated that the most ‘non-normal’ distributions were the demand and lead
times for item 1 and the demand for item 2.
To form item 1’s demand parameters for the EVR method (a parameter set), the range
for the data was determined (35 to 111 per day) and then nine groups were formed, each
with a range of 10 units; the first group was from 35 to 44, the next from 45 to 54, and
so on. Then, we determined the average values for each group. For example, for the range
55–64 the data had the following five values: 59, 62, 63, 63, and 64 for an average of 62.2
(a[i] ¼ 62.2). Given there were 62 data points, the probability for this demand level is
8.06% (b[i] ¼ 5/62 ¼ 8.06%). Forming the parameter set for the lead times was significantly
easier due to a smaller range of lead times and the use of integer values for lead times.
Figure 4 provides a graphical presentation of the parameters a and b for item 1 (note that
there was no demand points for the range 45–54). For item 2 the groupings for the demand
data were made using an approach similar to that used for item 1, grouping demand from
1–100, 101–200, etc. Figure 5 depicts the graphical representations of the parameters a and
b for item 2. For item 3 the groupings were performed in ranges of 10 units, resulting in
8 demand groups. Figure 6 presents the graphical representations of the parameters a and
b for item 3.
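A small sketch of this grouping step is given below (assuming equal-width buckets; the function and argument names are ours, not the paper's); it returns the demand levels and their probabilities plotted in Figures 4–6.

```python
import numpy as np

def bucket_demand(history, width=10):
    """Group a demand history into equal-width buckets and return the demand
    levels (bucket means) with their probabilities (bucket frequencies)."""
    history = np.asarray(history, dtype=float)
    edges = np.arange(history.min(), history.max() + width, width)
    idx = np.digitize(history, edges) - 1          # bucket index for each observation
    levels, probs = [], []
    for i in range(len(edges)):
        members = history[idx == i]
        if members.size:                           # skip empty buckets (e.g. 45-54 for item 1)
            levels.append(members.mean())
            probs.append(members.size / history.size)
    return levels, probs
```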

4.2 The simulation model


A simulation model was developed to compare the re-order points determined by the
traditional and the proposed methods. Besides estimating the service level, we also
calculated the holding costs based on the inventory left at the end of each cycle: units left at the end of the cycle × H′, where H′ represents the holding cost per unit per cycle.

Figure 4. Graphical representation of sets a and b for item 1 (demand levels per day: 39.4, 62.2, 67.8, 80.5, 85.0, 99.0, 107.9; lead times of 2–5 days).

Figure 5. Graphical representation of sets a and b for item 2 (demand levels per half week: 66.4, 141.1, 230.9, 343.5, 422.6, 550.1, 808.7; lead times of 3–8 half weeks).

Figure 6. Graphical representation of sets a and b for item 3 (demand levels per day: 212, 286, 295, 306, 314, 324, 332, 345; lead times of 2–7 days).

The simulation model is fairly simple and works in the following manner for each
simulated re-order cycle (note that it is assumed that the lead time is always larger than 0,
and cycles with lead times equal to or smaller than 0 are not simulated):
Step 1: Use the lead time distribution (best fit for the item) to generate a lead time value λ.
Step 2: If λ ≤ 0, then go to Step 1.
Step 3: Let P = λ − ⌊λ⌋, and τ = ⌊λ⌋.
Step 4: Let D = 0. Let W = 1.
Step 5: Use the demand per time unit distribution (best fit for the item) to generate a demand value δ. Let D = D + δ.
Step 6: Let W = W + 1. If W ≤ τ then go to Step 5.
Step 7: Use the demand per time unit distribution (best fit for the item) to generate a demand value δ. Let D = D + δ·P.
Step 8: The holding cost for the cycle is H′·max(0, r − D). The service level for the cycle is min(100%, r/D).
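A condensed Python sketch of these steps is shown below; the distribution samplers are passed in as plain functions (stand-ins for the best-fit distributions of Table 5), and the start-up cycles are discarded as described in the next paragraph. It is an illustration of the procedure, not the authors' simulation code.

```python
import random

def simulate_cycle(r, draw_lead_time, draw_demand, holding_cost=1.0):
    """One replenishment cycle following Steps 1-8."""
    lam = draw_lead_time()
    while lam <= 0:                        # Steps 1-2: reject non-positive lead times
        lam = draw_lead_time()
    tau, frac = int(lam), lam - int(lam)   # Step 3: whole periods and fractional part
    demand = 0.0                           # Step 4
    for _ in range(tau):                   # Steps 5-6: one demand draw per whole period
        demand += draw_demand()
    demand += draw_demand() * frac         # Step 7: pro-rated demand for the fraction
    # Step 8: holding cost and fill-rate service level for the cycle
    sl = 1.0 if demand == 0 else min(1.0, r / demand)
    return holding_cost * max(0.0, r - demand), sl

def simulate(r, draw_lead_time, draw_demand, cycles=1000, warmup=100, seed=1):
    """Average cycle holding cost and service level after discarding start-up cycles."""
    random.seed(seed)
    results = [simulate_cycle(r, draw_lead_time, draw_demand) for _ in range(cycles)]
    costs, sls = zip(*results[warmup:])
    return sum(costs) / len(costs), sum(sls) / len(sls)
```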
A total of 1000 replenishment cycles were simulated. For each re-ordering point and
distribution combination a total of 10 replications were conducted. The average values for
the inventory carrying cost and service level were tracked to determine the start-up period
using the graphical method. The analysis indicated that the steady state condition was
reached after 50–100 cycles. Thus, to ensure that the steady state condition is reached, the
data corresponding to the first 100 cycles were discarded from all the simulation models.
Simulation runs for each item were performed at target service levels of 50%, 60%, 70%,
80%, 90%, 95%, and 99%.
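As a usage illustration (not a reproduction of the study), the sketch above can be driven with item 2's best-fit samplers from Table 5 and the EVR re-order points later reported in Table 7; we assume Arena's WEIB(scale, shape) and NORM(mean, st. dev.) parameter ordering when mapping to Python's random module.

```python
import random

# item 2 best-fit samplers (Table 5), assuming WEIB(scale, shape) and NORM(mean, sd)
draw_demand = lambda: 26 + random.weibullvariate(239, 1.06)
draw_lead_time = lambda: random.normalvariate(6.18, 1.21)

# EVR re-order points for item 2 as reported in Table 7
for tsl, r in [(0.50, 695), (0.60, 857), (0.70, 1044), (0.80, 1280),
               (0.90, 1623), (0.95, 1921), (0.99, 2505)]:
    cost, sl = simulate(r, draw_lead_time, draw_demand)
    print(f"TSL {tsl:.0%}: r = {r}, simulated SL = {sl:.1%}")
```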

4.3 Item 1 analysis


Table 6. Simulation results for item 1.

       Estimated r        Trad. resulting SL              EVR resulting SL
TSL    Trad.   EVR        Min.    Ave.    Max.            Min.    Ave.    Max.    EVR cost savings
50%    132     112        57.0%   57.5%   58.0%           49.2%   49.6%   49.9%   133%
60%    161     138        67.0%   67.6%   68.1%           59.2%   59.7%   60.5%   121%
70%    194     168        76.2%   76.7%   77.3%           68.9%   69.6%   70.4%   80%
80%    232     206        84.5%   84.9%   85.5%           79.2%   79.7%   80.0%   54%
90%    280     261        92.1%   92.4%   92.7%           89.2%   89.7%   90.3%   27%
95%    322     306        96.1%   96.3%   96.5%           94.6%   95.0%   95.4%   18%
99%    397     380        99.2%   99.4%   99.5%           98.9%   99.0%   99.2%   12%

Table 6 presents the results of the analysis for item 1, indicating the resulting re-order points and service levels from the simulations (i.e. minimum, average, and maximum of
the 10 replications), as well as the cost savings due to using the EVR method. For all TSL
levels the traditional method resulted in higher re-order points than EVR, with the
percentage difference between the two, (rTrad − rEVR)/rEVR, decreasing as the TSL increased.
At the lower TSL levels (i.e. 50%–80%), the resulting average service level was higher than
the TSL by 5–7% for the traditional method, while the average for the EVR method was
less than 1% below TSL. At the higher TSL levels (i.e. 90%–99%), the difference between
the resulting SL and the TSL was smaller for the traditional method, while the EVR had
close to ‘perfect’ performance. The range of service levels (max–min resulting SLs) was
narrow (1% or less) for both methods. For the traditional method, the minimum resulting
SL was always above target, while for the EVR method, the minimum was always below
target, thus in one or more replications the TSL was not achieved (by at most 1.1%),
demonstrating the approach does not guarantee it will meet the TSL, and can result in
lower than expected service levels.
These results demonstrate that the traditional method could result in significant
overestimation of the re-order point, leading to higher SL values than the target, as well as
higher inventory carrying costs. However, as the TSL increased the opportunity for such
gaps decreased, thus at very high TSL the traditional method resulted in closer to the
target actual service levels. In addition, the EVR method determined a re-order point that
was within 1% of the target, in most cases resulting in service levels that were less than 1%
of the target. Finally, with regards to inventory carrying costs, the savings resulting from
lower re-order points (EVR based) were highly significant at the lower TSL levels, and
decreased in significance as the TSL increased (i.e. ranged from 133% to 12%).

4.4 Item 2 analysis


The results for item 2 (presented in Table 7) demonstrated the increased 'accuracy' of the EVR method when compared to the traditional method, which can be attributed to the high variation of demand and to its not behaving 'normally'. The r determined by the traditional method resulted in SLs about 7–9% above target at the lower TSL levels, while the EVR method had an average SL less than 1% above the TSL, resulting in significant cost savings. At the higher TSL levels, the traditional method resulted in higher re-order points than EVR, but, similar to item 1, the resulting SLs were closer to the TSL and the savings decreased as the TSL increased.
Table 7. Simulation results for item 2.

       Estimated r        Trad. resulting SL              EVR resulting SL
TSL    Trad.   EVR        Min.    Ave.    Max.            Min.    Ave.    Max.    EVR cost savings
50%    838     695        58.6%   59.0%   59.8%           49.7%   50.0%   50.8%   177%
60%    1025    857        68.6%   69.5%   70.1%           59.4%   60.2%   61.0%   136%
70%    1233    1044       77.2%   78.7%   79.3%           69.8%   70.3%   71.0%   104%
80%    1479    1280       85.9%   86.8%   87.2%           79.8%   80.5%   81.7%   77%
90%    1818    1623       93.2%   93.7%   94.1%           90.0%   90.3%   90.5%   45%
95%    2094    1921       96.4%   96.7%   96.9%           95.0%   95.2%   95.5%   28%
99%    2596    2505       99.0%   99.1%   99.2%           98.7%   98.9%   99.1%   8%

Table 8. Simulation results for item 3.

       Estimated r        Trad. resulting SL              EVR resulting SL
TSL    Trad.   EVR        Min.    Ave.    Max.            Min.    Ave.    Max.    EVR cost savings
50%    637     605        51.6%   52.0%   52.4%           49.1%   49.4%   49.8%   393%
60%    767     732        61.9%   62.5%   63.1%           59.3%   59.6%   60.0%   74%
70%    904     860        72.3%   73.0%   73.5%           69.5%   69.9%   70.2%   68%
80%    1056    1007       82.7%   83.0%   83.3%           79.8%   80.1%   80.8%   46%
90%    1251    1186       92.0%   92.2%   92.6%           89.2%   89.6%   90.1%   43%
95%    1400    1338       96.3%   96.5%   96.7%           94.9%   95.1%   95.4%   25%
99%    1659    1628       99.4%   99.5%   99.5%           99.3%   99.3%   99.4%   7%

Only at TSL = 99% did the EVR method result in an
average SL below target, and only by 0.1%. The minimum of the 10 replications for the
EVR method was at most 0.6% from target, and in most cases less than 0.3% from target,
so the accuracy of the EVR method was noticeably higher than that of the traditional method. Except at TSL = 80%, the maximum resulting SL was less than 1% above target. With regards to
inventory carrying costs, the savings resulting from lower re-order points were highly
significant at the lower TSL levels, and decreased in significance as the TSL increased
(i.e. ranged from 177% to 8%).

4.5 Item 3 analysis


Given that the lead time and demand for this item behaved 'normally', we hypothesised that the performance of the traditional method would be close to that of the EVR model, which, as shown in Table 8, proved to be correct, at least when compared to the results for items 1 and 2. The average resulting SLs for the traditional method were 1–3% above the TSL, while the resulting average SLs for the EVR method were less than 0.5% below the target, a result similar to that of item 1. However, the minimum resulting SL for the EVR method was off by at most 0.9%. While the cost savings resulting from using the EVR method were high (i.e. 393% to 7%), the resulting SLs were slightly below the target service levels. We point out that when the target service level is 50%, the EVR cost savings are very high (i.e. 393%).
This can be attributed to the fact that both the traditional method and the EVR method
have almost no safety stock (and therefore almost no holding costs). While the EVR
method resulted in an average holding cost of 0.0211, the traditional model resulted in an
average holding cost of 0.1041 (therefore the 393% savings). By comparison, when the
TSL was 99%, the holding costs were 386.8 and 361.2 for the traditional and the EVR
methods, respectively (therefore the 7% savings).

5. Conclusions
A significant amount of safety stock is required in production environments where both
demand and lead times are variable to assure timely production and delivery of the final product. Traditional models to determine the appropriate amount of safety stocks generally assume that the demand during the lead time follows a normal distribution.
However, information about the form of the probability distribution of the demand during
the lead time is by and large lacking, and the normal distribution is often not the best
representation of the demand during the lead time. In fact, several studies have
demonstrated that the normal distribution does not provide the best shield against the
occurrences of other distributions with the same mean and variance.
This paper presents an alternative re-ordering point model (i.e. EVR method) that
considers demand and lead time variability by focusing on historical data to determine the
possible outcomes of the replenishment cycle, without making any distributional
assumptions. We compare the proposed model with the traditional model by conducting
simulation analysis using three data sets obtained from an electronics manufacturer. The results indicate that the proposed model yields service levels much closer to the target, and lower inventory carrying costs, than the traditional model, regardless of the data set used.
This confirms the perception that the traditional method tends to overestimate the re-order
point, resulting in higher service levels than targeted, and consequently higher than desired
inventory carrying costs.
Future research of interest includes the calculation of re-order points considering the
lead time variability of diverse suppliers and the analysis of data sets from other industries.
In addition, future research of practical significance includes how to determine safety
stocks for different seasons of the year (e.g. when demand is highly volatile, or when
demand is stable) and to develop a method that can trigger the change in safety stock
levels. Finally, historical data from the company in question demonstrated that when the
demand was highly volatile, safety stocks were typically not sufficient, while during
months of stable demand, safety stocks were excessive. Proactive management of safety
stocks in such environments provides opportunities for future research.

Acknowledgements
The authors wish to thank the anonymous referees for their insightful and valuable comments and
suggestions that helped to significantly improve this paper.

References

Altiok, T. and Melamed, B., 2007. Simulation modeling and analysis with Arena. Burlington, MA:
Academic Press (Elsevier).

Ballou, R.H., 2004. Business logistics/supply chain management. Upper Saddle River, NJ: Pearson
Prentice Hall.
Bookbinder, J.H. and Lordahl, A.E., 1989. Estimation of inventory re-order levels using the
bootstrap statistical procedure. IIE Transactions, 21 (4), 302–312.
Chandra, C. and Grabis, J., 2008. Inventory management with variable lead-time dependent
procurement cost. Omega, 36 (5), 877–887.
Chase, R.B., Jacobs, F.R., and Aquilano, N.J., 2004. Operations management for competitive
advantage. 10th ed. New York, NY: McGraw-Hill/Irwin.
Chen, A., Hsu, C.-H., and Blue, J., 2007. Demand planning approaches to aggregating and
forecasting interrelated demands for safety stock and backup capacity planning. International
Journal of Production Research, 45 (10), 2269–2294.
Estes, R., 1973. The joint probability approach and reorder point determination. Journal of Production and Inventory Management, 14 (2), 50–56.
Fricker, R.D. and Robbins, M., 2000. Retooling for the logistics revolution – designing Marine Corps
inventories to support the warfighter. Santa Monica, CA: RAND Research Division.
Gallego, G., 1992. A minimax distribution free procedure for the (Q, r) inventory model. Operations
Research Letters, 11 (1), 55–60.
Gallego, G. and Moon, I., 1993. The distribution free newsboy problem: review and extensions.
Journal of Operational Research Society, 44, 825–834.
Jung, J.Y., et al., 2004. A simulation based optimisation approach to supply chain management
under demand uncertainty. Computers & Chemical Engineering, 28 (10), 2087–2106.
Krajewski, L.J. and Ritzman, L.P., 2005. Operations management: processes and value chains. 7th ed.
Upper Saddle River, NJ: Pearson Education, Inc.
Lin, Y., 2008. Minimax distribution free procedure with backorder price discount. International
Journal of Production Economics, 111, 118–128.
Moon, I. and Choi, S., 1998. A note on lead time and distributional assumptions in continuous
review inventory models. Computers and Operations Research, 25 (11), 1007–1012.
Moon, I. and Choi, S., 1997. Distribution free procedures for make-to-order (MTO),
make-in-advance (MIA), and composite policies. International Journal of Production
Economics, 48, 21–28.
Moon, I. and Choi, S., 1995. The distribution free newsboy problem with balking. Journal of
Operational Research Society, 46, 537–542.
Moon, I. and Choi, S., 1994. The distribution free continuous review inventory system with a service
level constraint. Computers and Industrial Engineering, 27, 209–212.
Moon, I. and Gallego, G., 1994. Distribution free procedures for some inventory models. Journal of
Operational Research Society, 45, 651–658.
Ouyang, L. and Wu, K., 1998. A minimax distribution free procedure for mixed inventory model
with variable lead time. International Journal of Production Economics, 56/57, 511–516.
Pan, C.H. and Hsiao, Y.C., 2001. Inventory models with back-order discounts and variable lead
time. International Journal of Systems Science, 32, 925–929.
Pan, C.H., Lo, M., and Hsiao, Y.C., 2004. Optimal reorder point inventory models with variable
lead time and backorder discount considerations. European Journal of Operational Research,
158, 488–505.
Pan, J.C.-H. and Yang, J.-S., 2002. A study of an integrated inventory with controllable lead time.
International Journal of Production Research, 40 (5), 1263–1273.
Persona, A., et al., 2007. Optimal safety stock levels of subassemblies and manufacturing
components. International Journal of Production Economics, 110 (1/2), 147–159.
Scarf, H., 1958. A min-max solution of an inventory problem, Studies in mathematical theory of
inventory and production. Palo Alto, CA: Stanford University Press, Chapter 12.
Shore, H., 1986. General approximate solutions for some common inventory models. Journal of
Operational Research Society, 37, 619–629.
Silver, E.A., Pyke, D.F., and Peterson, R., 1998. Inventory management and production planning and
scheduling. 3rd ed. New York, NY: John Wiley and Sons.
Stevenson, W.J., 2005. Operations management. 8th ed. New York, NY: McGraw-Hill/Irwin.
Villegas, F.A. and Smith, N.R., 2006. Supply chain dynamics: analysis of inventory vs. order
oscillations trade-off. International Journal of Production Research, 44 (6), 1037–1054.
Wang, M.-C. and Rao, S.S., 1992. Estimating reorder points and other management science
applications by bootstrap procedure. European Journal of Operational Research, 56 (3),
332–342.
Waters, C.D.J., 2003. Inventory control and management. 2nd ed. New York, NY: John Wiley &
Sons.