Quantitative Techniques
For Managerial Decisions
Objectives:
After studying this lesson you should be able to understand:
• The importance of quantitative techniques
• The difference between statistics and operations research
• The advantages of quantitative methods
• The importance of computers in business applications
Structure:
1.1 Introduction
1.2 Meaning of quantitative techniques
1.3 Statistics and operations research
1.4 Classification of statistical methods
1.5 Models of operations research
1.6 Various statistical methods
1.7 Advantages of quantitative approach
1.8 Quantitative techniques in business and management
1.9 Uses of computers
1.10 Summary
1.11 Exercises
1.12 Reference Books
1.1 Introduction:
Quantitative studies of management have generally been considered to have originated
during the World War II period, when operations research teams were formed to deal with strategic
and tactical problems faced by the military. These teams, which often consisted of people with
diverse specialties (e.g., Engineers, Mathematicians and Behavioral Scientists) were joined together
to solve common problems through the utilization of scientific methods (Anderson, Sweeney and
Williams, 1994). After the war, many of these team members continued their research on
Centre For Distance Education 1.2 Acharya Nagarjuna University
quantitative approaches to decision-making, leading to the growth and use of management science
in nonmilitary applications such as manufacturing, health care, engineering projects, transportation
and traffic studies, communication, business and educational administration.
Concurrent with these methodological developments, systems analysis was developed. It
represents one approach to solving problems within a framework of systematic output followed
by feedback. Advances in computer technology, in turn, facilitated the growth of information systems. Numerous
software programs were written to develop variants of the post-World War II methodological
approaches, allowing for solutions to more complex and larger problems than those requiring only
intuitive, simpler solutions (Robbins & DeCenzo, 1995).
The contingency viewpoint or the situational approach is the most recent school of thought
about management. In essence, managers who follow this approach can use any other viewpoint,
depending on the current circumstances. Managers or administrators should consider the three
key contingency variables of environment, technology and people, before making a decision
(Hellriegel & Slocum, 1992).
Quantitative Techniques
These techniques require basic knowledge of certain topics in mathematics. The premise of the
quantitative approach to decision making is that if the factors that influence a decision can be
identified and quantified, then it becomes easier to resolve the complexity of the problem with the
tools of quantitative analysis. Quantitative analysis has now been extended to several areas of
business operations and represents probably the most effective approach to handling some types
of decision problems. The terms management science, operations research and quantitative
analysis can be used interchangeably.
Definition of Terms:
Quantitative Techniques: Quantitative techniques are managerial aids concerned with the
development and appropriate utilization of rational approaches to intervention in human affairs.
Quantitative Techniques: Quantitative techniques are mathematical and statistical models
describing a diverse array of relationships among variables, designed to assist administrators
with management problem solving and decision making.
Statistical Model:
A fact becomes knowledge when it is used in the successful completion of a decision process.
Once you have a massive amount of facts integrated as knowledge, your mind will be
superhuman in the same sense that mankind with writing is superhuman compared to mankind
before writing. The following figure illustrates the statistical thinking process based on data in
constructing statistical models for decision making under uncertainties.
[Figure: Data → Facts → Information → Knowledge; the vertical axis shows the level of exactness of the statistical model, the horizontal axis the level of improvement in decision making.]
The above figure depicts the fact that as the exactness of a statistical model increases, the
level of improvement in decision making increases. That is why we need statistical data analysis.
Statistical data analysis arose from the need to place knowledge on a systematic evidence base.
This required a study of the laws of probability, the development of measures of data properties
and relationships and so on.
(d) Analysis: The fourth step is to find the average literacy in different groups. This information will
help in understanding the phenomenon.
(e) Interpretation: This is the last step. One can draw conclusions based on the analysis,
which will aid in decision making, such as a policy decision for improvement of the existing
situation.
Characteristics of data:
The data must possess the following characteristics:
(i) They must be an aggregate of facts. For example, a single number cannot be used for study.
(ii) They should be affected to a marked extent by a multiplicity of causes. For example, in a
beauty contest the observations recorded are affected by a number of factors.
(iii) They must be enumerated or estimated according to a reasonable standard of accuracy.
For example, the composition of blood is estimated by certain tests on small samples drawn
from the human body.
(iv) They must have been collected in a systematic manner for a predetermined purpose.
Lack or misuse of data about the problem situation leads to wrong results.
(v) They must be placed in relation to each other. The data should be comparable; otherwise
they cannot be used to study relationships.
(vi) They must be numerically expressed. The data should be collected in numerical form;
otherwise it is not possible to analyse them.
Types of Statistical Data:
Primary and Secondary Data:
Primary data is data that you collect yourself using such methods as:
Direct Observation: Lets you focus on details of importance to you; lets you see a system in
real rather than theoretical use.
Surveys: Written surveys let you collect considerable quantities of detailed data. You have to
either trust the honesty of the people surveyed or build in self-verifying questions.
Interviews: Slow and expensive, and they take people away from their regular jobs, but they allow
in-depth questioning and follow-up questions. They also show non-verbal communication such as
face-pulling, fidgeting, shrugging, hand gestures and sarcastic expressions that add further meaning
to spoken words. e.g., "I think it's a GREAT system" could mean vastly different things depending
on whether the person was sneering at the time! A problem with interviews is that people might
say what they think the interviewer wants to hear; they might avoid being honestly critical in case
their jobs or reputation might suffer.
Logs: (e.g., fault logs, error logs, complaint logs, transaction logs). Good empirical, objective
data sources (usually, if they are used well). Can yield lots of valuable data about system
performance over time under different conditions.
Primary data can be relied on because you know where it came from and what was done to
it. It's like cooking something yourself: you know what went into it.
Secondary Data:
Secondary data analysis can be literally defined as second-hand analysis. It is the analysis
of data or information that was either gathered by someone else (e.g. researchers, institutions,
other NGOs, etc.) or for some other purpose than the one currently being considered, or often a
combination of the two.
If secondary research and data analysis is undertaken with care and diligence, it can provide
a cost-effective way of gaining a broader understanding of specific phenomena and/or conducting
preliminary needs assessments.
Secondary data are also helpful in designing subsequent primary research and as well, can
provide a baseline with which to compare your primary data collection results. Therefore, it is
always wise to begin any research activity with a review of the secondary data.
Operations Research:
The term operations research (O.R.) was coined during World War II, when British
military management brought together a group of scientists to apply a scientific approach to the
study of military operations in order to win the battle. The main objective was to allocate scarce
resources in an effective manner to various military operations and to the activities within each
operation. The effectiveness of operations research in the military spread interest in it to other
government departments and industry.
Due to the availability of faster and flexible computing facilities and the number of qualified
O.R. professionals, it is now widely used in military, business, industry, transportation, public health,
crime investigation etc.
O.R. can be regarded as the use of mathematical and quantitative techniques to substantiate the
decisions being taken. It also borrows tools from subjects such as mathematics, statistics,
engineering, economics and psychology, and uses them to assess the consequences of possible
alternative actions. Making decisions or taking actions is central to all operations.
describes a large number of discrete events. Or, consider the scourge of many students, the
Grade Point Average (GPA). This single number describes the general performance of a student
across a potentially wide range of course experiences.
Every time you try to describe a large set of observations with a single indicator you run the
risk of distorting the original data or losing important detail. The batting average doesn't tell you
whether the batter is hitting home runs or singles. It doesn't tell whether she's been in a slump or
on a streak. The GPA doesn't tell you whether the student was in difficult courses or easy ones, or
whether they were courses in their major field or in other disciplines. Even given these limitations,
descriptive statistics provide a powerful summary that may enable comparisons across people or
other units.
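This loss of detail can be illustrated in a few lines of Python. The two grade lists below are invented, but they make the point: identical averages can summarize very different records.

```python
from statistics import mean, stdev

# Two hypothetical students with the same GPA but very different records:
# a single summary number hides the spread of the underlying grades.
steady = [3.0, 3.0, 3.0, 3.0]    # consistent performance
erratic = [4.0, 2.0, 4.0, 2.0]   # alternating highs and lows

print(mean(steady), mean(erratic))    # both average 3.0
print(stdev(steady), stdev(erratic))  # 0.0 vs about 1.15
```

The mean alone cannot distinguish the two students; the standard deviation recovers some of the detail the summary discarded.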
Inferential Statistics:
With inferential statistics, you are trying to reach conclusions that extend beyond the
immediate data alone. For instance, we use inferential statistics to try to infer from the sample
data what the population might think. Or, we use inferential statistics to make judgements of the
probability that an observed difference between groups is a dependable one or one that might have
happened by chance in this study. Thus, we use inferential statistics to make inferences from our
data to more general conditions; we use descriptive statistics simply to describe what's going on
in our data.
Perhaps one of the simplest inferential tests is used when you want to compare the average
performance of two groups on a single measure to see if there is a difference. You might want to
know whether eighth - grade boys and girls differ in math test scores or whether a program group
differs on the outcome measure from a control group. Whenever you wish to compare the average
performance between two groups you should consider the t-test for differences between groups.
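As a sketch of the comparison just described, the pooled-variance t statistic can be computed directly; the test scores below are invented for illustration, and the formula is implemented by hand rather than taken from a statistics library.

```python
from statistics import mean, variance
from math import sqrt

def two_sample_t(a, b):
    """Student's t statistic for the difference between two group means
    (pooled-variance form, assuming roughly equal variances)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical math test scores for two groups of eighth graders:
boys = [72, 78, 81, 69, 75, 80]
girls = [74, 82, 85, 79, 88, 77]
print(round(two_sample_t(boys, girls), 3))  # → -1.747
```

Comparing the absolute value of t against a t distribution with na + nb - 2 degrees of freedom then tells you whether the observed difference is likely dependable or due to chance.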
Most of the major inferential statistics come from a general family of statistical models known
as the General Linear Model. This includes the t-test, Analysis of Variance (ANOVA), Analysis of
Covariance (ANCOVA), regression analysis and many of the multivariate methods like factor
analysis, multidimensional scaling, cluster analysis, discriminant function analysis and so on.
Given the importance of the General Linear Model, it's a good idea for any serious researcher to
become familiar with its workings.
Statistical Decision Theory:
Statistics is integrated with decision making in areas such as management, public policy,
engineering and clinical medicine. Decision theory states the case and, in a self-contained,
comprehensive way, shows how the approach is operational and relevant for real-world decision
making under uncertainty. It starts with an extensive account of the foundations of decision
theory and uses the concepts of subjective probability and utility.
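As a toy illustration of decision making under uncertainty with subjective probabilities and utilities, consider choosing the action with the highest expected utility; the actions, probabilities and payoffs below are all invented.

```python
# Subjective probabilities assigned to the possible states of the world.
states = {"demand_high": 0.6, "demand_low": 0.4}

# Utility of each hypothetical action in each state (invented payoffs).
utilities = {
    "expand":     {"demand_high": 100, "demand_low": -30},
    "stay_small": {"demand_high": 40,  "demand_low": 20},
}

def expected_utility(action):
    """Probability-weighted sum of the action's utilities across states."""
    return sum(p * utilities[action][s] for s, p in states.items())

best = max(utilities, key=expected_utility)
print(best, expected_utility(best))  # expand 48.0
```

Changing the subjective probabilities changes the recommended action, which is exactly why decision theory treats them as explicit inputs.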
OPERATIONS RESEARCH
(ii) Predictive Models: Such models can answer "what if" type of questions i.e., they can
make predictions regarding certain events.
(iii) Prescriptive (normative) Models: Finally, when a predictive model has been repeatedly
successful, it can be used to prescribe a course of action. For example, linear programming
is a prescriptive (or normative) model because it prescribes what managers ought to do.
Based on structure: These models are classified on the basis of their structure, as follows:
(i) Physical Models: These are models which are scaled in size (enlarged or reduced) from
the original in order to analyse the system and produce the best results.
(ii) Symbolic Models: A symbolic or mathematical model is one which employs a set of
mathematical symbols to represent the decision variables of the system.
mean from each and every number and dividing by the standard deviation. Once you have converted
your data into standard scores, you can then use probability tables that exist for estimating the
likelihood that a certain raw score will appear in the population. This is an example of using a
descriptive statistic (standard deviation) for inferential purposes.
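The standardization just described can be sketched with Python's standard library; the raw scores below are hypothetical, and `statistics.NormalDist` plays the role of the printed probability tables.

```python
from statistics import NormalDist, mean, stdev

scores = [55, 62, 68, 70, 71, 74, 78, 80, 85, 92]  # hypothetical raw scores
mu, sigma = mean(scores), stdev(scores)

# Convert a raw score to a standard (z) score, then use the normal
# distribution to estimate how likely a score at least that large is.
raw = 85
z = (raw - mu) / sigma
tail = 1 - NormalDist().cdf(z)
print(f"z = {z:.2f}, P(score >= {raw}) ~ {tail:.2f}")
```

Here a descriptive statistic (the standard deviation) is pressed into inferential service, exactly as the text describes.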
Correlation: The most commonly used relational statistic is correlation; it is a measure of the
strength of some relationship between two variables, not of causality. Interpretation of a correlation
coefficient does not allow even the slightest hint of causality. The most a researcher can say is
that the variables share something in common, that is, are related in some way. The more two
things have in common, the more strongly they are related. There can also be negative
relations, but the important quality of correlation coefficients is not their sign but their absolute
value. A correlation of -0.58 is stronger than a correlation of 0.43, even though with the former the
relationship is negative. The following table lists the interpretations for various correlation coefficients:
0.8 to 1.0 Very Strong
0.6 to 0.8 Strong
0.4 to 0.6 Moderate
0.2 to 0.4 Weak
0.0 to 0.2 Very Weak
If you square the Pearson correlation coefficient, you get the coefficient of determination,
symbolized by R². It is the amount of variance in one variable accounted for by the
other. R² can also be computed by using the statistical technique of regression, but in that
situation it is interpreted as the amount of variance explained.
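As an illustration, the Pearson coefficient and its square can be computed from first principles; the hours-studied and exam-score data below are invented.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5, 6]
marks = [52, 55, 61, 70, 74, 80]

r = pearson_r(hours, marks)
print(f"r = {r:.3f}, R^2 = {r*r:.3f}")  # R^2: share of variance accounted for
```

A strong positive r like this one says only that the two variables move together; as the text stresses, it says nothing about which one causes the other.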
Regression: Regression is the closest thing to estimating causality in data analysis, because it
predicts how well the numbers "fit" a projected straight line. There are also advanced
regression techniques for curvilinear estimation. The most common form of regression, however,
is linear regression, which uses the least squares method to find the equation that best fits a line
representing what is called the regression of y on x. Instead of finding the perfect number, one is
interested in finding the perfect line, such that there is one and only one line (represented by an
equation) that best represents, or fits, the data, regardless of how scattered the data points are.
The slope of the line provides information about predicted directionality, and the estimated
coefficient (or beta weight) of the independent variable x indicates the strength of its relationship
with the dependent variable y.
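A minimal sketch of the least squares method for the regression of y on x, using the usual closed-form formulas; the advertising-spend and sales figures below are invented.

```python
def least_squares(xs, ys):
    """Slope and intercept of the least-squares regression line of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # the line passes through (mx, my)

# Hypothetical data: advertising spend vs. sales.
spend = [10, 20, 30, 40, 50]
sales = [25, 41, 62, 75, 98]

b, a = least_squares(spend, sales)
print(f"sales ~ {a:.1f} + {b:.2f} * spend")
```

The positive slope reports the predicted directionality; its size, relative to the scatter of the points around the line, reports the strength of the relationship.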
Sampling is the process of selecting a small number of elements from a larger defined target
group of elements, such that the information gathered from the small group will allow judgments to
be made about the larger group.
Simple random sampling is a method of probability sampling in which every unit has an equal,
non-zero chance of being selected.
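A simple random sample can be drawn with Python's standard library; the population of 1000 numbered units below is hypothetical.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# A hypothetical target group of 1000 numbered units; each unit has the
# same chance of entering the sample of 50.
population = list(range(1, 1001))
sample = random.sample(population, 50)

print(len(sample), len(set(sample)))  # 50 distinct units, drawn without replacement
```

`random.sample` draws without replacement, so every unit can appear at most once, matching the equal, non-zero selection chance that defines simple random sampling.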
Probability:
• Simple random sampling
• Systematic random sampling
1.10 Summary:
One can study quantitative methods to improve one's analytical ability as well as literacy skills,
so as to present numerical data and information that requires analysis and interpretation
and take quick decisions not only in business but also in other fields. This lesson introduces
various quantitative approaches and shows how to apply them. With powerful computers and
these techniques you can explore new and more sophisticated methods of data analysis.
1.11 Exercises:
1. What is the meaning of quantitative techniques?
2. What is the difference between statistics and operations research?
3. Describe descriptive statistics and inferential statistics.
4. Explain the operations research models.
5. What are statistical methods?
6. Describe the advantages of quantitative methods.
7. Explain how quantitative techniques are used in business management.
Lesson Writer
Prof. K. CHANDAN