
QUANTITATIVE ANALYSIS

CHAPTER 9:
FORECASTING

JZEL MAGSUMBOL
TERESSE DACANAY

SUBMITTED TO:
DR. MAXIMO Y. MULDONG
I. BASIC CONCEPTS OF FORECASTING

Forecasting is a prediction, estimate, or determination of what will occur in the future based on a certain set of factors.

Examples of values being forecast: sales, interest rates, funds, gross national product (GNP), and technological status.

Major Categories of Forecasting Time Horizons

1. Short-term Forecast. It covers 1 day to 1 year (e.g., employment, purchasing, scheduling, sales, and production)
2. Intermediate-term Forecast. It covers 1 season to 1 or 2 years (e.g., production schedules, revenues, cash flow, and budget planning)
3. Long-term Forecast. It covers 2 to 5 years or more (e.g., market trends, technology, facilities expansion, and general policy)
Forecasting Techniques
1. Qualitative Techniques. Techniques based on qualitative data
2. Time Series Techniques. Statistical forecasting techniques based on historical data accumulated over a period of time
3. Causal Methods. Methods that model the relationships among independent and dependent variables in a system of related equations

Factors in Forecasting
1. Trend. The general movement or direction in the data
2. Seasonal Factors. Variations in a time series associated with a particular period of time
3. Cyclical Factors. Longer-term regular fluctuations which may take several years to complete
4. Random Factors. Events or effects that cannot be predicted with certainty but can affect the data

II. FORECASTING METHODS OR TIME SERIES METHODS


A. Simple Moving Average
B. Weighted Moving Average
C. Simple Exponential Smoothing
D. Adjusted Exponential Smoothing
E. Forecast Reliability
A. Simple Moving Average

Simple Moving Average is the unweighted average of a consecutive number of data points. It is a forecasting method that smooths out the effects of seasonal, cyclical, and erratic fluctuations by averaging historical data.

When to use the moving average?

If seasonality, trend, and cyclical factors are not critical in the variable being forecast, the moving average is an appropriate tool.

It is computed by using the formula:

F_t = \frac{S_{t-1} + S_{t-2} + \cdots + S_{t-N}}{N}

or

F_t = \frac{\sum_{i=1}^{N} S_{t-i}}{N}

Where: F_t = forecast for time period t
S_{t-i} = actual value for period t-i
N = number of time periods used in the averaging process
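
As an illustration, a minimal Python sketch of the simple moving average (the sales figures below are hypothetical):

```python
def simple_moving_average(data, n):
    """Forecast the next period as the unweighted mean of the last n values."""
    if len(data) < n:
        raise ValueError("need at least n data points")
    return sum(data[-n:]) / n

# Hypothetical monthly sales figures
sales = [120, 135, 128, 142, 150, 138]

# Forecast for the next month using a 3-period moving average
forecast = simple_moving_average(sales, n=3)
print(f"3-period moving average forecast: {forecast:.2f}")  # (142 + 150 + 138) / 3 = 143.33
```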

B. Weighted Moving Average

Weighted moving average is a time series forecasting method in which the most recent data are weighted more heavily than older data. It is often desirable to vary the weights given to historical data when forecasting future demand or sales.

A smoothing constant is a weighting factor used in the exponential smoothing forecasting technique.

Mathematically, the weighted moving average is computed as follows:

F_t = \frac{\sum_{i=1}^{N} W_{t-i} S_{t-i}}{\sum_{i=1}^{N} W_{t-i}}

Where: F_t = forecast for time period t
S_{t-i} = actual value for period t-i
N = number of time periods used in the averaging process
W_{t-i} = weight given to the (t-i)th period in the averaging process
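
A minimal sketch in Python, assuming hypothetical weights that sum to 1, with the heaviest weight on the most recent period:

```python
def weighted_moving_average(data, weights):
    """Forecast the next period as a weighted mean of the last len(weights) values.

    weights[0] applies to the most recent observation.
    """
    recent = data[-len(weights):][::-1]  # most recent observation first
    return sum(w * s for w, s in zip(weights, recent)) / sum(weights)

sales = [120, 135, 128, 142, 150, 138]

# Hypothetical weights: 0.5 on the latest period, 0.3 and 0.2 on the two before it
forecast = weighted_moving_average(sales, weights=[0.5, 0.3, 0.2])
print(f"Weighted moving average forecast: {forecast:.2f}")  # 0.5*138 + 0.3*150 + 0.2*142 = 142.40
```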

C. Simple Exponential Smoothing

Exponential smoothing refers to a family of forecasting models, very similar to the weighted moving average, that weight the most recent data more heavily than distant past data. The value of the smoothing constant \alpha is between 0.00 and 1.00. The value of \alpha determines the degree of smoothing that takes place and how responsive the model is to fluctuations in the variable being forecast. The setting of \alpha is typically not specified analytically and is usually done by trial and error.

The simplest exponential smoothing model takes the following form:

F_{t+1} = \alpha Y_t + (1 - \alpha) F_t

or, equivalently,

F_{t+1} = F_t + \alpha (Y_t - F_t)

Where: F_{t+1} = forecast for the next period
Y_t = actual data in the present period
F_t = previously determined forecast for the present period
\alpha = weighting factor, referred to as the smoothing constant
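
A minimal Python sketch of simple exponential smoothing, assuming hypothetical sales data and an \alpha of 0.3 chosen by trial and error; the first actual value seeds the initial forecast, a common convention:

```python
def exponential_smoothing(data, alpha):
    """Apply F_{t+1} = alpha*Y_t + (1 - alpha)*F_t and return the next-period forecast."""
    forecast = data[0]  # seed the initial forecast with the first actual value
    for y in data:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast

sales = [120, 135, 128, 142, 150, 138]
print(f"Exponential smoothing forecast: {exponential_smoothing(sales, alpha=0.3):.2f}")
```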

D. Adjusted Exponential Smoothing

Adjusted Exponential Smoothing is exponential smoothing adjusted for trend changes and seasonal patterns. It consists of the simple exponential smoothing forecast with a trend adjustment factor added to it. The value of the trend smoothing constant \beta also lies between 0.00 and 1.00, similar to \alpha. In addition, both \alpha and \beta are often determined subjectively based on the judgment of the forecaster.

AF_{t+1} = F_{t+1} + T_{t+1}

where

T_{t+1} = \beta (F_{t+1} - F_t) + (1 - \beta) T_t

Where: AF_{t+1} = adjusted exponential smoothing forecast for the next period
T_{t+1} = exponentially smoothed trend factor for the next period
T_t = the trend factor for the last period
\beta = smoothing constant for the trend
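
A sketch of the trend-adjusted forecast in Python, again with hypothetical data and subjectively chosen \alpha = 0.3 and \beta = 0.2; the trend factor is seeded at zero:

```python
def adjusted_exponential_smoothing(data, alpha, beta):
    """Return AF_{t+1} = F_{t+1} + T_{t+1} for the period after the data end."""
    f_prev = data[0]  # seed the unadjusted forecast with the first actual value
    trend = 0.0       # seed the trend factor at zero
    for y in data:
        f_next = alpha * y + (1 - alpha) * f_prev              # simple exponential smoothing
        trend = beta * (f_next - f_prev) + (1 - beta) * trend  # smoothed trend factor
        f_prev = f_next
    return f_prev + trend  # adjusted forecast for the next period

sales = [120, 135, 128, 142, 150, 138]
print(f"Adjusted forecast: {adjusted_exponential_smoothing(sales, 0.3, 0.2):.2f}")
```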

E. Forecast Reliability

Forecast Reliability measures how closely a forecast reflects reality. It is known that no forecasting method will produce consistently perfect forecasts. However, the primary objective of forecasting is to make it generally reliable, that is, to see whether it accurately portrays what actually occurs. One way of testing the reliability of a forecast is the mean absolute deviation (MAD).

Mean Absolute Deviation is the measure of the difference between a forecast and what actually occurs. Mathematically, the mean absolute deviation can be described as follows:

MAD = \frac{\sum |Y_t - F_t|}{n}
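
A short sketch computing MAD over matched periods of hypothetical actuals and forecasts:

```python
def mean_absolute_deviation(actuals, forecasts):
    """MAD = sum(|Y_t - F_t|) / n over the matched periods."""
    errors = [abs(y - f) for y, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

# Hypothetical actual values and the forecasts made for the same periods
actuals   = [135, 128, 142, 150, 138]
forecasts = [120.0, 124.5, 125.6, 130.5, 136.3]
print(f"MAD: {mean_absolute_deviation(actuals, forecasts):.2f}")
```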

III. SIMPLE LINEAR REGRESSION

Regression is a statistical measure that attempts to determine the strength of the relationship between one dependent variable (usually denoted by Y) and a series of other changing variables (known as independent variables).

Regression Analysis is a simple statistical tool used to model the dependence of a variable on one (or more) explanatory variables.

Simple Linear Regression is the least squares estimator of a linear regression model with a single predictor (one independent variable).

Least Square Model determines a regression equation by minimizing the sum of squares of the vertical distances between the actual Y values and the predicted values of Y.

Residual is the difference between an observed and a predicted value. The mean of the residuals is always zero.

Outliers are the points that fall outside the overall pattern of the other points.

Influential Scores are scores whose removal greatly changes the regression line in a
scatterplot.

ASSUMPTIONS OF THE LINEAR REGRESSION EQUATION

1. Linearity. The mean of each error component is zero.
2. Independence of Error Terms. The error terms are independent of each other.
3. Normally Distributed Error Terms. Each error component (random variable) follows an approximate normal distribution.
4. Homoscedasticity. The variance of the error components is the same for each value of the independent variable.

General Equation:

\hat{Y} = b_1 X + b_0

A. Estimating the Coefficient

\hat{Y} = b_1 X + b_0

b_1 = \frac{n \sum XY - (\sum X)(\sum Y)}{n \sum X^2 - (\sum X)^2}

Where: \hat{Y} = predicted or fitted value of Y
X = the value of any particular observation of the independent variable
Y = the value of any particular observation of the dependent variable
b_1 = slope of the regression line
b_0 = intercept of the simple linear regression

Step 1: Obtain the sums \sum X, \sum Y, \sum X^2, \sum Y^2, and \sum XY.

Step 2: Compute the slope of the simple linear regression:

b_1 = \frac{n \sum XY - (\sum X)(\sum Y)}{n \sum X^2 - (\sum X)^2}

Step 3: Compute the mean values of X and Y:

\bar{X} = \frac{\sum X}{n} \qquad \bar{Y} = \frac{\sum Y}{n}

Step 4: Compute the intercept of the simple linear regression:

b_0 = \bar{Y} - b_1 \bar{X}

Step 5: Substitute the slope and intercept into the general simple linear regression equation:

\hat{Y} = b_1 X + b_0

Step 6: Graph the least squares regression line.
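
These steps translate directly into Python; the (X, Y) observations below are hypothetical:

```python
def simple_linear_regression(x, y):
    """Least squares estimates following Steps 1-4: returns (b1, b0)."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)                  # Step 1: the required sums
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi ** 2 for xi in x)
    b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # Step 2: slope
    b0 = (sum_y / n) - b1 * (sum_x / n)                            # Steps 3-4: intercept
    return b1, b0

# Hypothetical observations
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b1, b0 = simple_linear_regression(x, y)
print(f"Y-hat = {b1:.2f}X + {b0:.2f}")  # Step 5: Y-hat = 1.99X + 0.05
```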

B. Sum of Squares for Error

The least squares method determines the coefficients that minimize the sum of the squared deviations between the points and the line defined by the coefficients. The total variation in Y decomposes as follows:

\sum (Y - \bar{Y})^2 = \sum (\hat{Y} - \bar{Y})^2 + \sum (Y - \hat{Y})^2

Or

SST = SSR + SSE

Where: \hat{Y} = predicted or fitted value of Y
Y = the value of any particular observation of the dependent variable
\bar{Y} = mean of the dependent variable
\sum (Y - \bar{Y})^2 = total variation in Y (total sum of squares, SST)
\sum (\hat{Y} - \bar{Y})^2 = explained variation in Y (regression sum of squares, SSR)
\sum (Y - \hat{Y})^2 = unexplained variation in Y (error sum of squares, SSE)
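
Continuing the hypothetical example above, the decomposition can be verified numerically:

```python
# Fitted line from the hypothetical example: Y-hat = 1.99X + 0.05
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b1, b0 = 1.99, 0.05

y_hat = [b1 * xi + b0 for xi in x]
y_bar = sum(y) / len(y)

sst = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)           # regression (explained) sum of squares
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # error (unexplained) sum of squares

print(f"SST = {sst:.3f}, SSR + SSE = {ssr + sse:.3f}")  # the two sides should match
```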

C. Standard Error of Estimate

Standard Error of Estimate is the standard deviation of the observed Y values about the predicted values.

s_e = \sqrt{\frac{\sum (Y - \hat{Y})^2}{n - 2}} = \sqrt{\frac{SSE}{n - 2}} = \sqrt{\frac{\sum Y^2 - b_0 \sum Y - b_1 \sum XY}{n - 2}}

Where: s_e = standard error of estimate
b_0 = intercept of the simple linear regression
b_1 = slope of the regression line
X = the value of any particular observation of the independent variable
Y = the value of any particular observation of the dependent variable
\hat{Y} = predicted or fitted value of Y
n = sample size
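
With the SSE from the hypothetical example above (SSE ≈ 0.107, n = 5), the standard error follows directly:

```python
import math

sse, n = 0.107, 5              # values carried over from the hypothetical example
se = math.sqrt(sse / (n - 2))  # n - 2 degrees of freedom
print(f"Standard error of estimate: {se:.4f}")  # about 0.19
```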

D. Coefficient of Determination

Coefficient of determination is the measure of the variation in the dependent variable that is explained by the regression line and the independent variable.

r^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = 1 - \frac{\sum (Y - \hat{Y})^2}{\sum (Y - \bar{Y})^2}

Where: r^2 = coefficient of determination
SSE = error sum of squares
SSR = regression sum of squares
SST = total sum of squares

On the other hand, the coefficient of non-determination is the proportion of variation in the dependent variable that is left unexplained by the independent variable, determined by 1 - r^2.
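
For the same hypothetical example, r^2 follows from the sums of squares already computed:

```python
ssr, sst = 39.601, 39.708  # values carried over from the hypothetical example
r_squared = ssr / sst
print(f"r^2 = {r_squared:.4f}")                                  # about 0.997
print(f"Coefficient of non-determination: {1 - r_squared:.4f}")  # 1 - r^2
```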

E. Confidence Interval and Prediction Interval

Confidence Interval and Prediction Interval each represent a closed interval where a certain percentage of the population is likely to lie.

CONFIDENCE INTERVAL = \hat{Y} \pm t_{\alpha/2}\, s_e \sqrt{\frac{1}{n} + \frac{(X - \bar{X})^2}{\sum X^2 - \frac{(\sum X)^2}{n}}}

A Prediction Interval is an estimate of an interval in which future observations will fall, with a certain probability, given what has already been observed.

PREDICTION INTERVAL = \hat{Y} \pm t_{\alpha/2}\, s_e \sqrt{1 + \frac{1}{n} + \frac{(X - \bar{X})^2}{\sum X^2 - \frac{(\sum X)^2}{n}}}

Where: s_e = standard error of estimate
t_{\alpha/2} = critical value of the t distribution with n - 2 degrees of freedom
X = the value of any particular observation of the independent variable
\hat{Y} = predicted or fitted value of Y
n = sample size
\bar{X} = mean of the independent variable
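
A sketch of a 95% prediction interval at a hypothetical point X = 3.5, using scipy for the t critical value and the fitted line and standard error from the running example:

```python
import math
from scipy import stats

# Carried over from the hypothetical example: n = 5, se ≈ 0.1889, Y-hat = 1.99X + 0.05
x = [1, 2, 3, 4, 5]
n, se, b1, b0 = 5, 0.1889, 1.99, 0.05

x0 = 3.5  # point at which to predict
y_hat = b1 * x0 + b0
x_bar = sum(x) / n
sum_x = sum(x)
sum_x2 = sum(xi ** 2 for xi in x)

t_crit = stats.t.ppf(0.975, df=n - 2)  # two-tailed 95% critical value, n - 2 df
margin = t_crit * se * math.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / (sum_x2 - sum_x ** 2 / n))
print(f"95% prediction interval: {y_hat - margin:.2f} to {y_hat + margin:.2f}")
```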
