
Choosing a Time Series Method

Now that we have many choices, how do we decide which method and parameters to use?

MONITORING FORECASTS.

MONITORING FORECAST ERRORS

HOW IS IT USED AND APPLIED?

Forecasts almost always contain errors, so it is important to monitor forecast errors to ensure that the forecast is performing well. If the model is performing poorly by some criterion, the forecaster might reconsider the use of the existing model or switch to another forecasting model or technique. Forecast control can be accomplished by comparing forecast errors to predetermined values, or limits. Errors that fall within the limits are judged acceptable, while errors outside the limits signal that corrective action is needed (see Figure Monitoring Forecasts).


There are several ways to measure forecast error but, in the interest of time, I am going to cover the Cumulative Sum of Forecast Errors (CFE), the Mean Absolute Deviation (MAD), the Average Error, Tracking Signals, and Statistical Control Charts. CFE is simply the sum of the forecast errors, ΣE_t. MAD = Σ|E_t| / n. (Note: the Excel function =AVEDEV finds the MAD.) The benefit of using CFE and MAD together is that CFE is effective in measuring bias (the larger its magnitude, the more biased the forecast), while MAD is effective in measuring the overall accuracy of the forecasting method.
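As a minimal sketch of these two measures (the error values below are purely illustrative, not from the text):

```python
# Cumulative sum of forecast errors (CFE) and mean absolute deviation (MAD)
# for a small, hypothetical list of forecast errors E_t.
errors = [6, -16, 7, -2, 3]  # illustrative values only

cfe = sum(errors)                                # CFE = sum of E_t; measures bias
mad = sum(abs(e) for e in errors) / len(errors)  # MAD = mean of |E_t|; overall accuracy

print(cfe)  # -2
print(mad)  # 6.8
```

A small CFE with a large MAD would mean the forecast is roughly unbiased but inaccurate; a large CFE flags a systematic over- or under-forecast.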

AVERAGE ERROR

A measure closely related to cumulative error is the average error, or bias. It is computed by averaging the cumulative error over the number of time periods.

Formula: E̅ = Σe_t / n

A positive value indicates that the forecast tends to be low (under-forecasting), while a negative value indicates that the forecast tends to be high (over-forecasting). A value close to zero implies a lack of bias.
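A minimal computation of the average error, reusing the same illustrative error values as above:

```python
# Average error (bias): cumulative error divided by the number of periods.
errors = [6, -16, 7, -2, 3]          # hypothetical forecast errors, e_t = D_t - F_t
bias = sum(errors) / len(errors)     # near zero => little systematic bias

print(bias)  # -0.4
```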

Forecasts can also be monitored using either tracking signals or control charts.

Tracking Signals.

A tracking signal is based on the ratio of cumulative forecast error to the corresponding value of MAD.

Tracking signal = Σ(A - F) / MAD

The resulting tracking signal values are compared to predetermined limits. These are based on experience and judgment and often range from plus or minus 3 to plus or minus 8. Values within the limits suggest that the forecast is performing adequately; when the signal goes beyond this range, corrective action is needed.
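A sketch of computing and checking a tracking signal (the demand and forecast values, and the ±4 limit, are illustrative assumptions within the 3-to-8 range the text describes):

```python
# Tracking signal: cumulative forecast error divided by MAD,
# compared against judgment-based control limits.
actual   = [326, 310, 317, 315]   # hypothetical demand values
forecast = [320, 326, 310, 317]   # hypothetical forecasts

errors = [a - f for a, f in zip(actual, forecast)]
mad = sum(abs(e) for e in errors) / len(errors)
tracking_signal = sum(errors) / mad

LIMIT = 4  # assumed limit; in practice chosen by experience and judgment
in_control = abs(tracking_signal) <= LIMIT
```

If `in_control` is false, the signal has "tripped" the limit and the forecasting method should be re-examined.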


Example 2. Returning to Example 1, the deviations and the cumulative deviation have already been computed:

MAD = Σ |A - F| / n = 22 / 8 = 2.75

Tracking signal = Σ (A - F) / MAD = -2 / 2.75 = - 0.73

The tracking signal of -0.73 is well within the limits (which are ±3 at the tightest), so it would not suggest any corrective action at this time.
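Example 2's arithmetic can be verified directly. The individual errors from Example 1 are not listed here, so the sums (22 for the absolute deviations, -2 for the cumulative error) are taken as given:

```python
# Verify the MAD and tracking signal from Example 2.
mad = 22 / 8       # sum of |A - F| over 8 periods, as given in the text
ts = -2 / mad      # cumulative error of -2, as given in the text

print(round(mad, 2), round(ts, 2))  # 2.75 -0.73
```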

Note: After an initial value of MAD has been computed, the estimate of the MAD can be continually updated using exponential smoothing.

MAD_t = α|A_t - F_t| + (1 - α) MAD_t-1
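A minimal sketch of this smoothed update, starting from the MAD of 2.75 found in Example 2 (the new observation and the value α = 0.2 are assumptions for illustration):

```python
# Exponentially smoothed MAD: MAD_t = alpha*|A_t - F_t| + (1 - alpha)*MAD_{t-1}
def update_mad(prev_mad, actual, forecast, alpha=0.2):
    """Update the MAD estimate with one new period's absolute error."""
    return alpha * abs(actual - forecast) + (1 - alpha) * prev_mad

mad = 2.75                                        # initial MAD from Example 2
mad = update_mad(mad, actual=314, forecast=316)   # hypothetical new error of -2

print(round(mad, 2))  # 2.6
```

A larger α makes the MAD estimate react faster to recent errors; a smaller α makes it smoother.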

Control Charts.

The control chart approach involves setting upper and lower limits for individual forecast errors instead of cumulative errors. The limits are multiples of the estimated standard deviation of the forecast, S_f, which is the square root of the mean squared error (MSE). MSE is the average of the squared forecast errors and is sometimes used as a measure of forecast error in its own right. Frequently, control limits are set at 2 or 3 standard deviations:

±2 (or 3) S_f

Note: Plot the errors and check whether they all fall within the limits; this lets the forecaster visualize the process and determine whether the method being used is in control.

Example 3. For the sales data in Table 24.2, using the naive forecast, we will determine whether the forecast is in control. For illustrative purposes, we will use 2-sigma control limits.

TABLE 24.2 ERROR CALCULATIONS

Year   Sales   Forecast   Error   Error²
 1      320       —         —       —
 2      326      320         6      36
 3      310      326       -16     256
 4      317      310         7      49
 5      315      317        -2       4
 6      318      315         3       9
 7      310      318        -8      64
 8      316      310         6      36
 9      314      316        -2       4
10      317      314         3       9
Total                       -3     467

First, compute the standard deviation of the forecast errors: S_f = √(467/8) = 7.64. Two-sigma limits are then ±2(7.64), that is, -15.28 to +15.28.

Note that the forecast error for year 3 is below the lower bound, so the forecast is not in control (See Figure 24.2). The use of other methods such as moving average, exponential smoothing, or regression might produce a better forecast.
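The control-chart check of Example 3 can be reproduced directly from the Table 24.2 data. (Dividing the sum of squared errors by n - 1 = 8 is an assumption inferred from the S_f = 7.64 value in the text, since there are nine errors.)

```python
import math

sales = [320, 326, 310, 317, 315, 318, 310, 316, 314, 317]  # Table 24.2, years 1-10

# Naive forecast: each period's forecast is the previous period's sales.
forecasts = sales[:-1]
errors = [a - f for a, f in zip(sales[1:], forecasts)]  # errors for years 2..10

mse = sum(e ** 2 for e in errors) / (len(errors) - 1)   # 467 / 8
s_f = math.sqrt(mse)                                    # ~7.64
lower, upper = -2 * s_f, 2 * s_f                        # two-sigma control limits

out_of_control = [year for year, e in zip(range(2, 11), errors)
                  if not (lower <= e <= upper)]

print(round(s_f, 2), out_of_control)  # 7.64 [3]
```

Year 3's error of -16 falls below the lower limit of -15.28, matching the text's conclusion that the forecast is not in control.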

Note: A system for monitoring forecasts needs to be developed. The computer may be programmed to print a report showing the past history whenever the tracking signal "trips" a limit. For example, when a form of exponential smoothing is used, the system may try a different value of α (so the forecast will be more responsive) and then continue forecasting.

FIGURE 24.2 CONTROL CHART FOR FORECASTING ERRORS

Forecast Errors

Forecasts almost always contain errors, and these can be classified as either bias errors or random errors. Bias errors are the result of consistent mistakes: the forecast tends to be consistently high or consistently low.


The other type of error, random error, results from unpredictable factors that cause the forecast to deviate from the actual demand. It is best to minimize both types of errors by selecting appropriate forecasting models, but eliminating all forms of errors is impossible.

Before errors can be minimized, they must be quantified. A forecast error is simply the difference between actual demand and the forecast for a given time period.

It may be expressed mathematically as E_t = D_t - F_t, where D_t is the actual demand in period t and F_t is the forecast.