
EDUCATION HOLE PRESENTS

QUANTITATIVE TECHNIQUES IN MANAGEMENT


Module I: Introduction

2013

WWW.EDHOLE.COM

Application of Statistics in Business
    Classification of Data
        Frequency polygon
        Frequency density table
        Cumulative frequency distribution

Measures of central tendency
    Introduction
    Mean (Arithmetic)
    When not to use the mean
    Median
    Mode

Measures of Dispersion and Skewness
    Range
    Percentiles and related characteristics
    Quartile Finder
    Variance and Standard Deviation
    Quantile based measures
    L-moments
    Cyhelský's skewness coefficient
    Distance skewness
    Groeneveld & Meeden's coefficient

Application of Statistics in Business


Business statistics is the science of making good decisions in the face of uncertainty. It is used
in many disciplines, such as financial analysis, econometrics, auditing, production and operations
(including service improvement), and marketing research.
Much of the data used in these disciplines is published regularly and repeatedly as series of
observations, which makes the topic of time series especially important for business statistics.
Business statistics is also a branch of statistics that works mostly with data collected as a
by-product of doing business or by government agencies. It provides the knowledge and skills
needed to interpret and use statistical techniques in a variety of business applications. A typical
business statistics course is intended for business majors and covers statistical study, descriptive
statistics (collection, description, analysis and summary of data), probability, the binomial and
normal distributions, tests of hypotheses and confidence intervals, linear regression, and
correlation.

Classification of Data
The process of arranging data into homogeneous groups or classes according to some common
characteristic present in the data is called classification.
For example: in the process of sorting letters in a post office, the letters are classified according
to the cities they are addressed to and further arranged according to streets.
Bases of Classification:
There are four important bases of classification:
(1) Qualitative Base (2) Quantitative Base (3) Geographical Base (4) Chronological or
Temporal Base
(1) Qualitative Base:
When the data are classified according to some quality or attribute, such as sex, religion,
literacy or intelligence.
(2) Quantitative Base:
When the data are classified by quantitative characteristics such as height, weight, age or
income.
(3) Geographical Base:
When the data are classified by geographical region or location, such as states, provinces,
cities or countries.
(4) Chronological or Temporal Base:
When the data are classified or arranged by their time of occurrence, such as years,
months, weeks or days. For example: time series data.
Types of Classification:
(1) One-way Classification:
If we classify observed data keeping in view a single characteristic, the classification is
known as one-way classification.
For example: the population of the world may be classified by religion as Muslim, Christian,
etc.
(2) Two-way Classification:
If we consider two characteristics at a time in order to classify the observed data, we are
doing a two-way classification.
For example: the population of the world may be classified by religion and sex.
(3) Multi-way Classification:
We may consider more than two characteristics at a time to classify given or observed
data; in this way we arrive at a multi-way classification.
For example: the population of the world may be classified by religion, sex and literacy.
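
A rough Python sketch (an addition for illustration, not part of the original notes) of how one-way and two-way classifications might be tabulated with pandas; the small data frame and its column names are hypothetical.

# One-way and two-way classification of a small, hypothetical data set.
import pandas as pd

people = pd.DataFrame({
    "religion": ["Muslim", "Christian", "Muslim", "Hindu", "Christian", "Muslim"],
    "sex":      ["F", "M", "M", "F", "F", "M"],
})

# One-way classification: counts by a single characteristic.
print(people["religion"].value_counts())

# Two-way classification: counts by two characteristics at a time.
print(pd.crosstab(people["religion"], people["sex"]))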

Interpretation of computer output of diagrammatic and graphical presentation of data
Frequency distribution

A table of grouped data

Clearer picture than individual data values

Usually presented in the form of a histogram

Also: pie charts, dot plots, stem-and-leaf plots, box plots, etc.

Histogram
A pictorial method for representing frequency data

Similar to a bar chart but:

Data must be measurable, e.g. lengths rather than colours.

Area of a block is proportional to the frequency.

For unequal intervals frequency density is plotted.


Method:
Construct a frequency table (from raw data)

Find range.

Decide class intervals; all same at this stage.

Construct frequency distribution table.

If data scarce at extreme ends, join classes.

If central frequencies very high, split classes.

If class intervals not all same, calculate frequency densities.

Draw the histogram

For a given frequency table, close any open intervals with sensible limits.

Label each axis carefully; frequency or frequency density usually shown vertically.

Horizontal axis shows value at interval centre for discrete data, at interval limits for
continuous data.

N.B. Areas are proportional to frequency, so if an interval's width is doubled, its height must be halved for the same frequency.

Frequency polygon
Join up the mid-points of the tops of the columns. At the extremes, join to the x-axis half an
interval width away. The polygon encloses the same area as the histogram.
Example
Mileages recorded for a sample of hired vehicles during a given week yielded the following
data:

138   146   168   146   161
164   158   126   183   145
150   140   138   105   135
132   109   186   108   142
144   136   163   135   150
125   148   109   153   156
149   152   154   140   145
157   144   165   135   128

Min. = 105
Max. = 186
Range = 186 - 105 = 81
Nine intervals of 10 miles width are reasonable; the extreme intervals may be made wider, and an
interval may be split if its block would be very high.
Next produce a frequency distribution table.
Frequency distribution table

Class Interval          Tally        Frequency
100 & less than 110     ||||         4
110 & less than 120
120 & less than 130
130 & less than 140
140 & less than 150
150 & less than 160
160 & less than 170
170 & less than 180
180 & less than 190
                        Total        40
Completed frequency distribution table

Class Interval          Tally             Frequency
100 & less than 110     ||||              4
110 & less than 120                       0
120 & less than 130     |||               3
130 & less than 140     ||||| ||          7
140 & less than 150     ||||| ||||| |     11   (modal interval)
150 & less than 160     ||||| |||         8
160 & less than 170     |||||             5
170 & less than 180                       0
180 & less than 190     ||                2
                        Total             40

Data are scarce at the extremes, so the presentation is improved by joining the extreme classes and using frequency density.

Frequency density table

Class Interval          Frequency    Freq. density (per 10 miles)
100 & less than 120     4            2.0
120 & less than 130     3            3.0
130 & less than 140     7            7.0
140 & less than 150     11           11.0
150 & less than 160     8            8.0
160 & less than 170     5            5.0
170 & less than 190     2            1.0
                        Total 40
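
The Python sketch below is an added illustration (not part of the original worked example): it reproduces the frequency counts and frequency densities above from the raw mileage data and draws the unequal-interval histogram. The plotting choices are one reasonable option, not the only one.

# Frequency counts and frequency densities for the 40 recorded mileages.
import numpy as np
import matplotlib.pyplot as plt

mileages = np.array([
    138, 146, 168, 146, 161, 164, 158, 126, 183, 145,
    150, 140, 138, 105, 135, 132, 109, 186, 108, 142,
    144, 136, 163, 135, 150, 125, 148, 109, 153, 156,
    149, 152, 154, 140, 145, 157, 144, 165, 135, 128,
])

# Equal 10-mile classes: 100-110, 110-120, ..., 180-190.
edges = np.arange(100, 200, 10)
freq, _ = np.histogram(mileages, bins=edges)
print(freq)                                  # 4 0 3 7 11 8 5 0 2

# Unequal classes with the sparse extremes joined: 100-120 and 170-190.
edges2 = np.array([100, 120, 130, 140, 150, 160, 170, 190])
freq2, _ = np.histogram(mileages, bins=edges2)
widths = np.diff(edges2)
density_per_10 = freq2 / (widths / 10.0)     # frequency density per 10 miles
print(density_per_10)                        # 2 3 7 11 8 5 1

# Histogram with unequal intervals: bar heights are densities, so areas stay
# proportional to frequency.
plt.bar(edges2[:-1], density_per_10, width=widths, align="edge", edgecolor="black")
plt.xlabel("Miles")
plt.ylabel("Frequency density (per 10 miles)")
plt.show()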

Cumulative frequency diagrams


(Ogives, cumulative frequency polygons)

A graphical method of representing the accumulated frequencies below and including a
particular value, i.e. a running total.
Percentage cumulative frequencies are often calculated.
Method is used for estimating Median and Quartile values.
Also used to estimate the percentage of the data above or below a certain value.
Interquartile or semi-interquartile range of the data used as measure of spread.

Method
Construct a frequency table as before.
Construct a cumulative frequency table.
Calculate the cumulative percentages.
Plot the cumulative percentage on the vertical axis against the end of the interval and join up
the points with straight lines.

Cumulative frequency distribution

Interval (less than)    Freq.    Cum. Freq.    % Cum. Freq.
100                     0        0             0.0
120                     4        4             10.0
130                     3        7             17.5
140                     7
150                     11
160                     8
170                     5
190                     2
                        Total 40
Completed cumulative frequency distribution

Interval (less than)    Freq.    Cum. Freq.    % Cum. Freq.
100                     0        0             0.0
120                     4        4             10.0
130                     3        7             17.5
140                     7        14            35.0
150                     11       25            62.5
160                     8        33            82.5
170                     5        38            95.0
190                     2        40            100.0
                        Total 40
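
A minimal Python sketch (added for illustration) of the cumulative and percentage cumulative frequency calculation for the grouped mileage data above:

# Cumulative and percentage cumulative frequencies for the grouped mileages.
freq = [0, 4, 3, 7, 11, 8, 5, 2]                 # frequencies for "less than" each upper limit
upper = [100, 120, 130, 140, 150, 160, 170, 190]

total = sum(freq)
cum = 0
for u, f in zip(upper, freq):
    cum += f
    print(f"less than {u}: cum. freq. = {cum}, % cum. freq. = {100 * cum / total:.1f}")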

Measures of central tendency


Introduction
A measure of central tendency is a single value that attempts to describe a set of data by
identifying the central position within that set of data. As such, measures of central tendency are
sometimes called measures of central location. They are also classed as summary statistics. The
mean (often called the average) is most likely the measure of central tendency that you are most
familiar with, but there are others, such as the median and the mode.
The mean, median and mode are all valid measures of central tendency, but under different
conditions, some measures of central tendency become more appropriate to use than others. In
the following sections, we will look at the mean, mode and median, and learn how to calculate
them and under what conditions they are most appropriate to be used.

Mean (Arithmetic)
The mean (or average) is the most popular and well known measure of central tendency. It can
be used with both discrete and continuous data, although its use is most often with continuous
data (see our Types of Variable guide for data types). The mean is equal to the sum of all the
values in the data set divided by the number of values in the data set. So, if we have n values in a
data set and they have values x_1, x_2, ..., x_n, the sample mean, usually denoted by x̄
(pronounced "x bar"), is:

x̄ = (x_1 + x_2 + ... + x_n) / n

This formula is usually written in a slightly different manner using the Greek capital letter Σ,
pronounced "sigma", which means "sum of ...":

x̄ = (Σ x_i) / n

You may have noticed that the above formula refers to the sample mean. So, why have we called
it a sample mean? This is because, in statistics, samples and populations have very different
meanings, and these differences are very important, even if, in the case of the mean, they are
calculated in the same way. To acknowledge that we are calculating the population mean and not
the sample mean, we use the Greek lower-case letter "mu", denoted by μ:

μ = (Σ x_i) / N

The mean is essentially a model of your data set: a single value chosen to summarise the whole
set. You will notice, however, that the mean is often not one of the actual values that you have
observed in your data set. However, one of its important properties is that it minimises error in
the prediction of any one value in your data set; that is, it is the value that produces the lowest
amount of error from all other values in the data set.
An important property of the mean is that it includes every value in your data set as part of the
calculation. In addition, the mean is the only measure of central tendency for which the sum of
the deviations of each value from the mean is always zero.
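
A quick check of the zero-sum-of-deviations property (an added sketch; the five values are made up for illustration):

# The deviations from the mean always sum to zero (up to rounding error).
data = [12, 15, 15, 16, 22]
mean = sum(data) / len(data)
deviations = [x - mean for x in data]
print(mean, sum(deviations))    # 16.0 0.0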

When not to use the mean


The mean has one main disadvantage: it is particularly susceptible to the influence of outliers.
These are values that are unusual compared to the rest of the data set by being especially small or
large in numerical value. For example, consider the wages of staff at a factory below:
Staff      1     2     3     4     5     6     7     8     9     10
Salary    15k   18k   16k   14k   15k   15k   12k   17k   90k   95k

The mean salary for these ten staff is $30.7k. However, inspecting the raw data suggests that this
mean value might not be the best way to accurately reflect the typical salary of a worker, as most
workers have salaries in the $12k to $18k range. The mean is being skewed by the two large
salaries. Therefore, in this situation, we would like to have a better measure of central tendency.
As we will find out later, taking the median would be a better measure of central tendency in this
situation.
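
A small Python sketch (added for illustration) comparing the mean and median for the salary data above, with the salaries expressed in thousands of dollars:

# Mean vs. median for the ten salaries (values in $1000s).
import statistics

salaries = [15, 18, 16, 14, 15, 15, 12, 17, 90, 95]
print(statistics.mean(salaries))     # 30.7 - pulled up by the two large salaries
print(statistics.median(salaries))   # 15.5 - closer to a "typical" salary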
Another time when we usually prefer the median over the mean (or mode) is when our data is
skewed (i.e., the frequency distribution for our data is skewed). If we consider the normal
distribution - as this is the most frequently assessed in statistics - when the data is perfectly
normal, the mean, median and mode are identical. Moreover, they all represent the most typical
value in the data set. However, as the data becomes skewed the mean loses its ability to provide
the best central location for the data because the skewed data is dragging it away from the typical
value. However, the median best retains this position and is not as strongly influenced by the

skewed values. This is explained in more detail in the skewed distribution section later in this
guide.

Median
The median is the middle score for a set of data that has been arranged in order of magnitude.
The median is less affected by outliers and skewed data. In order to calculate the median,
suppose we have the data below:
65   55   89   56   35   14   56   55   87   45   92

We first need to rearrange that data into order of magnitude (smallest first):
14   35   45   55   55   56   56   65   87   89   92

Our median mark is the middle mark - in this case, 56. It is the middle mark
because there are 5 scores before it and 5 scores after it. This works fine when you have an odd
number of scores, but what happens when you have an even number of scores? What if you had
only 10 scores? Well, you simply have to take the middle two scores and average the result. So,
if we look at the example below:
65   55   89   56   35   14   56   55   87   45

We again rearrange that data into order of magnitude (smallest first):


14   35   45   55   55   56   56   65   87   89

Only now we have to take the 5th and 6th score in our data set and average them to get a median
of 55.5.
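
The same calculations can be checked in Python (an added sketch using the scores from the two examples above):

# Median of an odd-sized and an even-sized data set.
import statistics

odd_scores = [65, 55, 89, 56, 35, 14, 56, 55, 87, 45, 92]
even_scores = [65, 55, 89, 56, 35, 14, 56, 55, 87, 45]

print(statistics.median(odd_scores))    # 56   (the middle score)
print(statistics.median(even_scores))   # 55.5 (average of the 5th and 6th sorted scores)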
Mode
The mode is the most frequent score in our data set. On a bar chart or histogram it is represented
by the highest bar. You can, therefore, sometimes consider the mode as being the most popular
option. An example of a mode is presented below:

Normally, the mode is used for categorical data where we wish to know which is the most
common category, as illustrated below:

We can see above that the most common form of transport, in this particular data set, is the bus.
However, one of the problems with the mode is that it is not unique, so it leaves us with
problems when we have two or more values that share the highest frequency, such as below:

We are now stuck as to which mode best describes the central tendency of the data. This is
particularly problematic when we have continuous data because we are more likely not to have
any one value that is more frequent than another. For example, consider measuring the weights
of 30 people (to the nearest 0.1 kg). How likely is it that we will find two or more people
with exactly the same weight (e.g., 67.4 kg)? The answer is: probably very unlikely. Many
people might be close, but with such a small sample (30 people) and a large range of possible
weights, you are unlikely to find two people with exactly the same weight, that is, to the nearest
0.1 kg. This is why the mode is very rarely used with continuous data.
Another problem with the mode is that it will not provide us with a very good measure of central
tendency when the most common mark is far away from the rest of the data in the data set, as
depicted in the diagram below:

In the above diagram the mode has a value of 2. We can clearly see, however, that the mode is
not representative of the data, which is mostly concentrated around the 20 to 30 value range. To
use the mode to describe the central tendency of this data set would be misleading.
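
For categorical data the mode is simply the most frequent category, which can be found by counting; the sketch below is an added illustration and its transport categories are hypothetical (the text only tells us that the bus is the most common form of transport).

# Mode of a categorical data set.
from collections import Counter

transport = ["bus", "car", "bus", "walk", "bus", "cycle", "car", "bus"]
print(Counter(transport).most_common(1))    # [('bus', 4)] -> the mode is "bus"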

Measures of Dispersion and Skewness


If everything were the same, we would have no need of statistics. But, people's heights, ages,
etc., do vary. We often need to measure the extent to which scores in a dataset differ from each
other. Such a measure is called the dispersion of a distribution. This tutorial presents various
measures of dispersion that describe how scores within the distribution differ from the
distribution's mean and median.

Range
The range is the simplest measure of dispersion. The range can be thought of in two ways.
1. As a quantity: the difference between the highest and lowest scores in a distribution.

"The range of scores on the exam was 32."


2. As an interval: the lowest and highest scores may be reported as the range.
"The range was 62 to 94," which would be written (62, 94).

The Range of a Distribution


Find the range in the following set of data:

NUMBER OF BROTHERS AND SISTERS

{ 2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2 }

An outlier is an extreme score, i.e., an infrequently occurring score at either tail of the
distribution. Range is determined by the furthest outliers at either end of the distribution. Range
is of limited use as a measure of dispersion, because it reflects information about extreme values

but not necessarily about "typical" values. Only when the range is "narrow" (meaning that there
are no outliers) does it tell us about typical values in the data.
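
An added sketch computing the range of the sibling-count data above, both as a quantity and as an interval:

# Range of the "number of brothers and sisters" data.
siblings = [2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
            1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
            4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2]

low, high = min(siblings), max(siblings)
print(high - low)     # the range as a quantity: 12 - 0 = 12
print((low, high))    # the range as an interval: (0, 12)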

Percentiles and related characteristics


Most students are familiar with the grading scale in which "C" is assigned to average scores, "B"
to above-average scores, and so forth. When grading exams "on a curve," instructors look to see
how a particular score compares to the other scores. The letter grade given to an exam score is
determined not by its relationship to just the high and low scores, but by its relative position
among all the scores.
Percentile describes the relative location of points anywhere along the range of a distribution. A
score that is at a certain percentile falls even with or above that percent of scores. The median
score of a distribution is at the 50th percentile: It is the score at which 50% of other scores are
below (or equal) and 50% are above. Commonly used percentile measures are named in terms of
how they divide distributions. Quartiles divide scores into fourths, so that a score falling in the
first quartile lies within the lowest 25% of scores, while a score in the fourth quartile is higher
than at least 75% of the scores.

Quartile Finder
The quartile scores of a distribution are found by dividing the ordered scores into four
equal-sized groups: the first quartile (25th percentile), the median (50th percentile) and the third
quartile (75th percentile) mark the division points. Two other percentile scores commonly used
to describe the dispersion in a distribution are decile and quintile scores, which divide cases into
equal-sized subsets of tenths (10%) and fifths (20%), respectively. In theory, percentile scores
divide a distribution into 100 equal-sized groups. In practice this may not be possible because
the number of cases may be under 100.
A box plot is an effective visual representation of both central tendency and dispersion. It
simultaneously shows the 25th, 50th (median), and 75th percentile scores, along with the
minimum and maximum scores. The "box" of the box plot shows the middle or "most typical"
50% of the values, while the "whiskers" of the box plot show the more extreme values. The
length of the whiskers indicates visually how extreme the outliers are.
In a box plot of a distribution that has been divided into quartiles, the boundaries of the box line
up with the first and third quartile scores. The box plot displays the median score and shows the
range of the distribution as well.
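
An added Python sketch of quartile scores and a box plot, reusing the sibling-count data from the Range section; numpy's default (linear interpolation) percentile method is assumed.

# Quartile scores and a box plot for the sibling-count data.
import numpy as np
import matplotlib.pyplot as plt

siblings = [2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
            1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
            4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2]

q1, median, q3 = np.percentile(siblings, [25, 50, 75])
print(q1, median, q3)                # 1.0 2.0 3.0 -> 25th, 50th and 75th percentile scores

plt.boxplot(siblings, vert=False)    # box = middle 50% of values, whiskers = more extreme values
plt.xlabel("Number of brothers and sisters")
plt.show()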

Variance and Standard Deviation


By far the most commonly used measures of dispersion in the social sciences
are variance and standard deviation. Variance is the average squared difference of scores from
the mean score of a distribution. Standard deviation is the square root of the variance.
In calculating the variance of data points, we square the difference between each point and the
mean because if we summed the differences directly, the result would always be zero. For
example, suppose three friends work on campus and earn $5.50, $7.50, and $8 per hour,
respectively. The mean of these values is $(5.50 + 7.50 + 8)/3 = $7 per hour. If we summed the
differences of each wage from the mean, we would get (5.50 - 7) + (7.50 - 7) + (8 - 7) = -1.50 +
0.50 + 1 = 0. Instead, we square the terms to obtain a sum of squared differences equal to 2.25 +
0.25 + 1 = 3.50; dividing this sum by the number of scores gives the variance. This figure is a
measure of dispersion in the set of scores.
The mean is the value that minimises the sum of squared differences: if we used any number
other than the mean as the value from which each score is subtracted, the resulting sum of
squared differences would be greater. (You can try it yourself - see if any number other than 7
can be plugged into the preceding calculation and yield a sum of squared differences less
than 3.50.)

The standard deviation is simply the square root of the variance. In some sense, taking the square
root of the variance "undoes" the squaring of the differences that we did when we calculated the
variance.
Variance and standard deviation of a population are designated by σ² and σ, respectively.
Variance and standard deviation of a sample are designated by s² and s, respectively.

                 Variance                          Standard deviation

Population       σ² = Σ(x_i - μ)² / N              σ = √( Σ(x_i - μ)² / N )

Sample           s² = Σ(x_i - x̄)² / (n - 1)        s = √( Σ(x_i - x̄)² / (n - 1) )

In these equations, μ is the population mean, x̄ is the sample mean, N is the total number of
scores in the population, and n is the number of scores in the sample.
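
An added sketch computing the variance and standard deviation of the three hourly wages from the example above, using Python's statistics module:

# Variance and standard deviation for the wages $5.50, $7.50 and $8.00.
import statistics

wages = [5.50, 7.50, 8.00]

# Treating the three wages as a whole population (divide by N):
print(statistics.pvariance(wages))   # 3.50 / 3 = 1.17 (approx.)
print(statistics.pstdev(wages))      # about 1.08

# Treating them as a sample (divide by n - 1):
print(statistics.variance(wages))    # 3.50 / 2 = 1.75
print(statistics.stdev(wages))       # about 1.32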

Quantile based measures


A skewness function

γ(u) = ( F⁻¹(u) + F⁻¹(1 - u) - 2 F⁻¹(1/2) ) / ( F⁻¹(u) - F⁻¹(1 - u) )

can be defined,[10][11] where F is the cumulative distribution function. This leads to a
corresponding overall measure of skewness[10] defined as the supremum of this over the range
1/2 ≤ u < 1. Another measure can be obtained by integrating the numerator and denominator of
this expression.[12] The function γ(u) satisfies -1 ≤ γ(u) ≤ 1 and is well defined without requiring
the existence of any moments of the distribution.[12]
Galton's measure of skewness[13] is γ(u) evaluated at u = 3/4. Other names for this same
quantity are the Bowley skewness,[14] the Yule-Kendall index[15] and the quartile skewness.
Kelley's measure of skewness uses u = 0.1.
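
An added sketch estimating the quartile (Bowley/Galton) skewness, i.e. γ(u) at u = 3/4, from sample quantiles; the sibling-count data from the Range section is reused and numpy's default percentile method is assumed.

# Quartile (Bowley) skewness estimated from sample quartiles.
import numpy as np

siblings = [2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
            1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
            4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2]

q1, q2, q3 = np.percentile(siblings, [25, 50, 75])
bowley = (q3 + q1 - 2 * q2) / (q3 - q1)
print(bowley)    # 0.0 here: the quartiles alone do not register this data's long right tail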

L-moments
Use of L-moments in place of moments provides a measure of skewness known as the L-skewness.[16]
Cyhelský's skewness coefficient
An alternative skewness coefficient may be derived from the sample mean and the individual
observations:[17]

a = (number of observations below the mean - number of observations above the mean) / total
number of observations

The distribution of the skewness coefficient a in large samples (of 45 or more observations)
approaches that of a normal distribution. If the variates have a normal or a uniform distribution,
the distribution of a is the same. The behaviour of a when the variates have other distributions is
currently unknown. Although this measure of skewness is very intuitive, an analytic approach to
its distribution has proven difficult.
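
An added sketch of Cyhelský's coefficient applied to the salary data used earlier (values in $1000s):

# Cyhelský's skewness coefficient a for the ten salaries.
salaries = [15, 18, 16, 14, 15, 15, 12, 17, 90, 95]
mean = sum(salaries) / len(salaries)              # 30.7
below = sum(1 for x in salaries if x < mean)      # 8 observations below the mean
above = sum(1 for x in salaries if x > mean)      # 2 observations above the mean
a = (below - above) / len(salaries)
print(a)                                          # 0.6 -> most observations lie below the mean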
Distance skewness
A value of skewness equal to zero does not imply that the probability distribution is symmetric.
Thus there is a need for another measure of asymmetry which has this property: such a measure
was introduced in 2000.[18] It is called distance skewness and denoted by dSkew. If X is a
random variable which takes values in d-dimensional Euclidean space, X has finite
expectation, X' is an independent identically distributed copy of X, and ||·|| denotes the norm in
the Euclidean space, then a simple measure of asymmetry is

dSkew(X) := 1 - E||X - X'|| / E||X + X'||   if X is not 0 with probability one,

and dSkew(X) := 0 for X = 0 (with probability 1). Distance skewness is always between 0 and 1,
equals 0 if and only if X is diagonally symmetric (X and -X have the same probability
distribution), and equals 1 if and only if X is a nonzero constant with probability one.[19] Thus
there is a simple consistent statistical test of diagonal symmetry based on the sample distance
skewness:

dSkew_n(X) := 1 - Σ_{i,j} ||x_i - x_j|| / Σ_{i,j} ||x_i + x_j||.
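
An added sketch of the sample distance skewness for a one-dimensional data set. Because the formula measures asymmetry about zero (diagonal symmetry), the salaries are first centred on their mean here; that centring is an assumption about how one might assess the symmetry of a sample about its centre.

# Sample distance skewness dSkew_n for mean-centred salary data.
import numpy as np

salaries = np.array([15, 18, 16, 14, 15, 15, 12, 17, 90, 95], dtype=float)
x = salaries - salaries.mean()                      # centre on the mean (an illustrative choice)

diff_sum = np.abs(x[:, None] - x[None, :]).sum()    # sum over i,j of |x_i - x_j|
plus_sum = np.abs(x[:, None] + x[None, :]).sum()    # sum over i,j of |x_i + x_j|
print(1.0 - diff_sum / plus_sum)                    # 0 for a diagonally symmetric sample, up to 1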
Groeneveld & Meedens coefficient
Groeneveld & Meeden have suggested, as an alternative measure of skewness,[12]

Where is the mean, is the median, || is the absolute value and E() is the expectation operato
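
An added sketch of the Groeneveld & Meeden coefficient for the salary data (values in $1000s):

# Groeneveld & Meeden coefficient: (mean - median) / E|X - median|.
import numpy as np

salaries = np.array([15, 18, 16, 14, 15, 15, 12, 17, 90, 95], dtype=float)
mean = salaries.mean()                      # the mean, 30.7
median = np.median(salaries)                # the median, 15.5
coef = (mean - median) / np.mean(np.abs(salaries - median))
print(coef)                                 # about 0.92: positive, reflecting the long right tail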
