Research Design

It is simply a plan for a study


It can be called a blueprint for a study
Allocation of resources
Planned sequence of entire process
It is like a plan made by an architect to
build the house
This is used as a guide in collecting and
analyzing the data.
Research design is the arrangement of
conditions for collection and analysis
of data in a manner that aims to
combine relevance to the research
purpose with economy in procedure.

Research design constitutes the
blueprint for the collection,
measurement and analysis of data.

Research design is the planned
sequence of the entire process
From a research design we can
understand:
What is the study about?
For what purpose?
Type of data required?
Where the data can be found?
Areas of study
Period of time
Sample details
Style of report preparation etc.
Features of a good Research
design
It is a plan that denotes the sources
and types of information relevant to
the research problem.
It is an activity and time based plan
It is based on research question
Depends upon the purpose of the
research
Guide for selecting sources and
types of information needed.
A good design is often characterized
by flexibility, appropriateness,
efficiency and economy.
Generally, the design which
minimizes bias and maximizes the
reliability of the data collected and
analyzed is considered as a good
design
The design which gives the smallest
experimental error is supposed to be
the best design in many
investigations.
Concepts of research design
Dependent and independent
variables
Extraneous variables
Control
Confounded relationship
Research hypothesis
Experimental and non-experimental
hypothesis-testing research
Treatments
Experimental and control
groups
Experiment
Experimental unit
Stages or steps of research
design
1. Selection of a problem
2. Review of the existing literature
3. Sources of the information to be utilized
4. Nature of study
5. Objectives of study
6. Geographical area to be covered
7. Socio-cultural context of study
8. Period of study
9. Dimension of the study
10. Basis for selecting the data
11. Technique of study
12. The control of error
MERITS OF RESEARCH DESIGN
Helps to save the researcher's time, money
and energy
Helps to prepare and execute the various
activities
Helps the researcher to document the
research activities
It ensures a proper time schedule for the
implementation of the project
It provides confidence to the researcher for
completing the research work
It provides a sense of success at every stage
Types of research design

Exploratory, descriptive and causal
research are some of the major types.
Exploratory research – it is used to
seek insights into the general nature of
the problem
No previous knowledge is required,
and this research is more flexible,
qualitative and unstructured
The researcher in this method does
not know “what he will find”.
Characteristics of exploratory research
It is more flexible and very versatile
Experimentation is not a requirement
Cost incurred to conduct study is low
For data collection structured forms
are not used
Descriptive research
The name itself reveals that it is
essentially a research used to describe
something
It can describe the characteristics of a
group such as customers,
organizations, markets etc
This research cannot, however,
establish a cause-and-effect
relationship
This is its distinct disadvantage
Steps involved in descriptive studies
Formulation of the problem
of study
Define the population or
universe
Select the sample
Design the method of data
collection
Analyse data and results
Causal research

Descriptive research will suggest a
relationship but will not establish the
cause and effect relationship
Example – the data collected may show that
the number of people who own a car and
their incomes have risen over a period of
time
Despite this one cannot say that the
increase in number of cars is due to rise in
people’s income.
Perhaps improved road conditions or an
increase in the number of banks offering car
loans are responsible.
Choosing a basic research method

There are three basic research
methods
Survey
Observation
Experiment
SURVEY
Survey is a non experimental, descriptive
research method
Survey method is a method for collecting data
or factual information of certain desired
characteristics of universe.
It is used for collecting primary data based
on verbal or written communication
Under this method the respondents are asked
a number of questions regarding their
behavior, intentions, attitudes, motivations,
lifestyle etc.
Features of survey method

It is a field study
It seeks responses directly from the
respondents
It can cover a very large population
It is an extensive as well as intensive
study
It covers a definite geographical
area
Objectives

To provide information to government,
planners or business enterprises
It is used to explain a phenomenon
Surveys are designed to make
comparison of demographic groups
Surveys conducted to know cause
and effect relationship is useful for
making predictions
TYPES OF SURVEYS
Longitudinal study- these are the studies in
which an event or occurrence is measured
again and again over a period of time
 This is known as ‘Time series study’
 Through this study the researcher comes to
know how the market changes over time.
 Three main types are trend studies, cohort
studies and panel studies
Contd…..
 CROSS-SECTIONAL STUDY-
In these studies, variables of interest in a sample of
subjects are examined once and the relationships b/w
them are determined.
It is used to gather information on a population at a
single point of time.
 divided into two types:
 Field study- includes an in depth study. Test
marketing is an example for field study
 Field survey- large samples are a feature of the study
The biggest limitations of this survey are cost and
time
It requires good knowledge like constructing a
questionnaire, sampling techniques etc
Methods of survey

Census method
Sample method
Fax survey
 Internet or e-mail
survey
Merits of survey
Helps to collect a lot of information from
different types of individuals
It facilitates drawing generalizations about
large populations on the basis of study of
their representative sample
Various sampling methods can be adopted
to collect data
It is useful to verify theories
It is simple to administer
The data obtained are reliable
Coding, analysis and Interpretation of data is
relatively simple
Demerits

Feasibility depends upon the
willingness and cooperation of
the respondents
It is subject to sampling error
Chance of measurement errors
Respondents' responses may be
misleading.
Limited information
It is expensive
EXPERIMENT

An experiment is defined as
manipulating (changing
values/situations) one or more
independent variables to see how the
dependent variable(s) is/are
affected, while also controlling the
effects of additional extraneous
variables.
Independent variables: - those over
which the researcher has control and
wishes to manipulate, e.g. package
size, ad copy, price.
Dependent variables: - those over
which the researcher has little to no
direct control, but has a strong
interest in testing, e.g. sales, profit,
market share.
Extraneous variables: - those that
may affect a dependent variable but
are not independent variables.
Types of experiment

Laboratory experiment
Field experiment
Extraneous variables

Those that may affect a
dependent variable but are not
independent variables
So these are variables that the
researcher is not intentionally
studying, unlike the independent
and dependent variables
These are undesirable variables.
Types of Extraneous
variables
History
Maturation (e.g. aging, physical growth)
Testing
Instrumentation
Selection bias
Statistical regression
Attrition
Controlling of extraneous
variables
Simply hold extraneous variables
constant
Randomization of the effects of
extraneous variables across
treatments.
*randomization means random
assignment of test units to
experimental group by using random
numbers
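The randomization idea above can be sketched with Python's standard library; this is a minimal illustration, and the unit names and group labels are hypothetical:

```python
import random

def randomly_assign(units, groups=("experimental", "control"), seed=42):
    """Randomly assign test units to treatment groups (hypothetical helper)."""
    rng = random.Random(seed)          # seeded so the split is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)              # random order of test units
    assignment = {g: [] for g in groups}
    for i, unit in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(unit)
    return assignment

units = [f"respondent_{i}" for i in range(10)]
assignment = randomly_assign(units)
print({g: len(members) for g, members in assignment.items()})
```

Because every ordering of the shuffled list is equally likely, each unit has the same chance of landing in either group, which is what spreads extraneous effects evenly across treatments.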
Goal of experiment
Internal Validity – The degree to which
changes in the dependent variable are
affected by the independent variable.
Maintaining high internal validity means
controlling for all other independent
variables other than the one(s) being
studied
External Validity – The degree to which the
results of a study can be generalized to the
“real world” or a larger population. Factors
that negatively affect external validity also
negatively affect the generalizability of the
results.
Planning to conduct experimentation

Determine the hypotheses to be tested
Determine the variables
Operationalize the variables
Select the type of experimentation plan
Choose the setting [lab or field]
Make the experimental conditions as
near as possible to the expected real-life
conditions
Introduce a suitable method for controlling
extraneous variables
simulation
It imitates or simulates a real-life
situation.
It is mainly a sophisticated set of
mathematical formulae
It implies the presence of a replication
so well constructed that the product
can pass for the real thing.
When applied to the study of research
design, simulations can serve as a
suitable substitute for constructing and
understanding field research.
Major uses

Assessment of situation
Understanding a situation
Decision making in a
situation
Steps of simulation
Identify the process or system
Decide the purpose
Develop a mathematical model by
using available information
Collect several sets of input data
Determine the type of simulation
required
Operate the simulation with the
various sets of input data
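The steps above can be sketched as a toy Monte Carlo run; the demand model, its parameters, and the input sets are all hypothetical, chosen only to show steps 3–6 in miniature:

```python
import random

def simulate_average_demand(mean_demand, days, seed=0):
    """Toy simulation model: daily demand drawn from an exponential
    distribution with the given mean; returns the simulated average."""
    rng = random.Random(seed)
    total = sum(rng.expovariate(1 / mean_demand) for _ in range(days))
    return total / days

# step 6: operate the simulation with several sets of input data
results = {mean: simulate_average_demand(mean, days=2000) for mean in (10, 20, 40)}
for mean, avg in results.items():
    print(f"input mean = {mean:2d}  simulated average = {avg:.1f}")
```

With enough simulated days, each simulated average settles near its input mean, which is how such a model is checked before it is used for assessment or decision making.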
Types of simulation

Computer simulation
Man simulation
Man computer simulation
Types of experimental design
Pre experimental design
i. One-shot case studies
ii. One-group pretest-posttest design
iii. Static group comparison design
True experimental design
1. The posttest-only control group design
2. The pretest-posttest control group design
3. The Solomon four-group design
Quasi experimental design
a) Non equivalent control groups
design
b) Time series design
c) Counter balanced design
Factorial design
OBSERVATION

It is a method of scientific
enquiry
It is a systematic viewing of a
specific phenomenon in its
proper setting for the specific
purpose of gathering data for
a particular study
Features

Physical and mental activity
Selective
Purposive and not informal
Grasps events and
occurrences
It should be exact
types
Simple and systematic
Subjective and objective
Casual and scientific
Intra subjective and inter subjective
Factual and inferential
Direct and indirect
Participant and non-participant
Structured and unstructured
Controlled and non controlled
Components

Sensation
Attention
Perception
objectives
It studies collective behavior
and complex social situations
Following up of individual units
composing the situations
Understanding the whole and
parts in their interrelation
Getting the out-of-the-way
details of the situations
Success of observation

Problem should be
clearly and precisely
formulated
Must develop a free
mind
Facts are interrelated
advantages

Collect information
Actual actions and habits of
person are observed
Reduction or elimination are
possible
Can collect info from those
who are unable to communicate
It is a perfect and better method
Most reliable method
Disadvantages

Results depend on the observer
Opinions and attitudes cannot be
obtained
It may be a one-time observation
Expensive
Limited information
Demands on time and energy may lead
to fatigue
Sampling design
Sampling is concerned with the selection of a
subset of individuals from within a statistical
population to estimate characteristics of the
whole population.
Two advantages of sampling are that
the cost is lower and data collection is faster
than measuring the entire population.
A Sample design
is a definite plan for obtaining a sample from a
given population
Definition

According to Gerald Hursh, “a
sample design is the
theoretical basis and the
practical means by which we
infer the characteristics of
some population by
generalizing from the
characteristics of relatively few
of the units comprising the
population.”
Steps in Sampling Design

Define the population or universe
State the sampling frame
Identify the sampling unit
State sampling method
Determine the sample size
Spell out the sampling plan
Select the sample
Characteristics of Good Sample
Design
Representative.
Viable.
The selected sample design should
not cause more errors.
A good sample design should be able
to control systematic bias efficiently.
If the sample is well designed and
selected, decision makers can use
this information with confidence.
Types of Sampling
Design or methods of
sampling
probability sampling
A probability sampling is one in which
every unit in the population has a
chance (greater than zero) of being
selected in the sample, and this
probability can be accurately
determined.
The combination of these traits makes
it possible to produce unbiased
estimates of population totals, by
weighting sampled units according to
their probability of selection.
1. RANDOM SAMPLING
A. SIMPLE RANDOM SAMPLING
LOTTERY METHOD
RANDOM NUMBER METHOD
B. RESTRICTED RANDOM SAMPLING
STRATIFIED SAMPLING
CLUSTER SAMPLING
MULTI-STAGE CLUSTER
SAMPLING
SYSTEMATIC SAMPLING
RANDOM ROUTE SAMPLING
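Two of the methods listed above can be sketched with Python's standard library; this is a minimal illustration over a hypothetical frame of 100 numbered units:

```python
import random

population = list(range(1, 101))      # sampling frame: units numbered 1..100
rng = random.Random(7)

# simple random sampling (random-number method): every unit has equal chance
srs = rng.sample(population, 10)

# systematic sampling: a random start, then every k-th unit
k = len(population) // 10             # sampling interval
start = rng.randrange(k)
systematic = population[start::k]

print("simple random:", sorted(srs))
print("systematic:   ", systematic)
```

`random.sample` plays the role of the random-number table: it draws without replacement and gives every unit in the frame the same selection probability.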
NON PROBABILITY
SAMPLING
 Non Probability Sampling is any sampling method where
some elements of the population have no chance of
selection, or where the probability of selection can't be
accurately determined.
 It involves the selection of elements based on
assumptions regarding the population of interest, which
forms the criteria for selection.
 Hence, because the selection of elements is nonrandom,
non probability sampling does not allow the estimation
of sampling errors.
 These conditions give rise to exclusion bias, placing
limits on how much information a sample can provide
about the population.
TYPES OF NON PROBABILITY
SAMPLING
ACCIDENTAL SAMPLING
JUDGEMENT SAMPLING
CONVENIENCE SAMPLING
SNOWBALL SAMPLING
SELF SELECTION
Sample Size
Sample size is the number of items
to be selected from the universe.
It should be optimum. Formulas,
tables, and power function charts
are well known approaches to
determine sample size.
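One of the well-known formula approaches is Cochran's sample-size formula; the sketch below uses example inputs (95% confidence, maximum variability p = 0.5, 5% margin of error) and an optional finite-population correction:

```python
import math

def cochran_sample_size(z, p, e, N=None):
    """Cochran's formula: n0 = z^2 * p * (1 - p) / e^2,
    with the finite-population correction applied when N is given."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if N is not None:
        # correction shrinks the required sample for a small universe
        n0 = n0 / (1 + (n0 - 1) / N)
    return math.ceil(n0)

n_inf = cochran_sample_size(z=1.96, p=0.5, e=0.05)          # large population
n_fin = cochran_sample_size(z=1.96, p=0.5, e=0.05, N=2000)  # universe of 2000
print(n_inf, n_fin)  # → 385 323
```

Note how the degree of accuracy required (the margin e) dominates the result: halving e roughly quadruples the sample size.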
The Factors Considering While
Deciding The Size Of The Sample
 Nature of the population.
Complexity of tabulation.
 Problems related to the collection of data.
 Type of sampling.
 Basic information.
 Degree of accuracy required for the
study.
CRITERIA OF SELECTING A SAMPLING
PROCEDURE
 1. Nature of the problem.
 2. Goal of researchers.
 3. Geographical area covered by the
survey.
 4. Size of the population under study.
 5. Extent of facts available about the
population.
 6. Availability of funds
 7. Available time for study.
 8. Desired reliability of the result.
Sampling Bias
Sampling analysis involves two types of
cost, namely the cost of collecting the data
and the cost of an incorrect inference
resulting from the data. There are two
causes of incorrect inference resulting
from data. They are
i. Systematic bias
ii. Sampling errors
Causes of systematic bias
 Unsuitable sample frame or
source list.
 Faulty measuring device.
 Non-respondents.
 Indeterminacy principle.
 Usual bias in reporting data
Sampling errors
 The errors which arise due to the use of
sampling survey are known as sampling
errors. These are random variation in the
sample estimate around the true
population parameters.
Type of sampling errors
 Biased errors: These errors occur due to
faulty selection of the sampling method or
due to the prejudice of the researcher.
 Unbiased errors: This type of error occurs
due to chance differences between the
items included in the sample.
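The idea that sampling errors are random variation around the true parameter, shrinking as the sample grows, can be shown with a small simulation on synthetic data (all numbers here are made up for illustration):

```python
import random
import statistics

rng = random.Random(3)
# synthetic universe: 50,000 values centred on 50 with std. dev. 10
population = [rng.gauss(50, 10) for _ in range(50_000)]
true_mean = statistics.mean(population)

spreads = {}
for n in (10, 100, 1000):
    # repeat the survey 200 times and watch the estimate vary
    sample_means = [statistics.mean(rng.sample(population, n)) for _ in range(200)]
    spreads[n] = statistics.stdev(sample_means)   # typical sampling error
    print(f"n = {n:4d}  spread of sample means ≈ {spreads[n]:.2f}")
```

The spread falls roughly as 1/√n, the statistical face of the "law of the inertia of large numbers" cited later in these notes.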
Causes of bias
Bias may arise due to,
 Faulty process selection.
 Faulty work during the
collection of information.
 Faulty method of analysis.
Non-Sampling Error
Non-sampling errors are other errors which
can impact the final survey estimates,
caused by problems in data collection,
processing, or sample design. They
include:
 Over coverage: Inclusion of data from
outside of the population.
 Under coverage: Sampling frame does not
include elements in the population.
 Measurement error: e.g. when
respondents misunderstand a question, or
find it difficult to answer.
PRINCIPLES OF SAMPLING

LAW OF STATISTICAL
REGULARITY
LAW OF THE INERTIA OF
LARGE NUMBERS
simulations are useful for
1. improving student understanding of
basic research principles and
analytic techniques.
2. investigating the effects of
problems that arise in the
implementation of research; and
3. exploring the accuracy and utility
of novel analytic techniques
applied to problematic data
structures.
Measurement

 We use some yardstick to determine weight, height or
some other physical attribute
 We thus measure physical objects as well as abstract
concepts.
 But measurement is relatively complex when it
concerns qualitative or abstract phenomena
 Thus by measurement we mean the process of
assigning numbers to objects or observations.
Contd…………..

 Measuring things such as social conformity,
intelligence or marital adjustment requires much closer
attention than measuring physical weight, biological
age or a person’s financial assets.
 In other words, it is not easy to measure properties like
motivation to succeed, ability to stand stress and so
on.
 A researcher has to be quite alert about this aspect
while measuring properties of objects or of abstract
concepts.
Sources of error in
measurement
 Measurement should be precise and unambiguous in
an ideal research study
 As such the researcher must be aware about the
sources of error in measurement. The following are the
possible sources of error in measurement
 Respondent- transient factors like fatigue, boredom,
anxiety etc. may limit the ability of the respondent
to respond accurately and fully
Error in measurement

 Situation- situational factors may also come in the way.
 Any condition which places a strain can have serious effects
 Measurer – errors may creep in because of incorrect coding, faulty
tabulation or statistical calculations, particularly in the data-analysis
stage
 Instrument- errors may arise because of a defective measuring instrument
 The use of complex words, poor printing, inadequate space for replies,
response choice omissions etc. make a measuring instrument defective,
which results in errors.
How to overcome errors?

 Researcher must know that correct measurement
depends on successfully meeting all of the problems
listed above
 He must, to the extent possible, try to eliminate,
neutralize or otherwise deal with all possible sources of
error so that the final results may not be
contaminated.
Tests of sound measurement

 Sound measurement must meet the tests of validity,
reliability and practicality
 These three considerations should be used in
evaluating a measurement tool
 Tests of validity- it indicates the degree to which an
instrument measures what it is supposed to measure.
 Three types of validity:
 Content validity- is the extent to which a measuring
instrument provides adequate coverage of the topic
under study
Contd…..

 It can also be determined by using a panel of persons
who shall judge how well the measuring instrument
meets the standard, but there is no numerical way to
express it
 Criterion-related validity- relates to the ability to
predict some outcome or estimate the existence of
some current condition. This criterion must possess the
following qualities:
 Relevance, freedom from bias, reliability and
availability.
Construct validity

 It is the most complex and abstract
 A measure is said to possess construct validity when
we can associate a set of other propositions with the
results received from using our measurement instrument
 If measurements on our devised scale correlate in a
predicted way with the other propositions, we can
conclude that there is some construct validity
 Finally, if the above criteria and tests are met,
we may say that our measuring instrument is valid and
will result in correct measurement.
Tests of reliability
 It is another important test of sound measurement
 A measuring instrument is reliable if it provides consistent results.
 Reliability is not as valuable as validity but it is easier to assess reliability
in comparison to validity
 Two aspects of reliability i.e. stability and equivalence deserves special
mention
 Stability- securing consistent results with repeated measurements with
the same instrument
 Equivalence-considers how much error may get introduced by
different samples of the items being studied.
Tests of practicality

 The measuring instrument ought to be practical
 It should be economical, convenient and
interpretable
 Economy deals with the trade-off between the ideal
research project and the budget that can be afforded
 Convenience suggests that the measuring instrument
should be easy to administer.
Technique of measurement
tools
 It involves four stages
 Concept development
 Specification of concept dimensions
 Selection of indicators
 Formation of index.
Concept development

 Researcher should arrive at an understanding of the major concepts
pertaining to his study
 This is more apparent in theoretical studies than in pragmatic
research
Dimensions of concepts

 Requires the researcher to specify the dimensions of
the concepts that he developed in the first stage
 For instance, one may think of several dimensions such
as product reputation, customer treatment, corporate
leadership, sense of social responsibility and so forth
when one is thinking about the image of a certain
company
Selection of indicators

 Indicators are specific questions, scales or other
devices by which a respondent’s knowledge, opinion,
expectation etc. are measured
 The use of more than one indicator gives stability to
the scores and it also improves their validity
Formation of index

 The last step is combining the various indicators into an
index
 We may need to combine them into a single index
 It can be done by giving scale values to the responses
and then summing up the corresponding scores.
 Such an overall index would provide a better
measurement tool than a single indicator.
Scaling
 We should study some procedures which may enable us to measure
abstract concepts more accurately and this brings us to the study of
scaling techniques.
 Scaling describes the procedures of assigning numbers to various
degrees of opinion, attitude and other concepts
 This can be done in two ways:
 1) making a judgment about some characteristics of an individual
and then placing him directly on a scale
 2) constructing questionnaires in such a way that the score of
individual’s responses assigns him a place on a scale.
Contd…..

 A scale is a continuum, consisting of the highest point
and the lowest point along with intermediate points
between these two extremes
 The scale points are ordered: the first point is the
highest, the second indicates a higher degree than
the third
 The third point indicates a higher degree than the
fourth, and so on
Scale construction techniques
 Following are the five main techniques by which scales can be
developed.
 Arbitrary approach- an approach where the scale is developed on an
ad hoc basis
 Most widely used approach; the scales measure the concepts for
which they have been designed
 First the researcher collects a few statements which he believes
appropriate for a topic, and then people are asked to check them.
 It can be developed very easily, quickly and with relatively little
expense.
 But we do not have objective evidence
 Consensus approach or differential scales or Thurstone-type scales – a
panel of judges evaluates the items in terms of whether they are
relevant to the topic area
Scale construction techniques
 Item analysis approach or summated scales or Likert-type scales – Likert
scales are developed by evaluating those persons whose scores are
high and those whose scores are low
 Here the respondent is asked to respond to each of the statements in
terms of several degrees usually five (but at times 3 or 7 may also be
used)
 Advantages- relatively easy to construct as it can be performed
without panel of judges
 Disadvantages- we can only examine whether the respondents are
favorable or unfavorable but how much more or less cannot be
predicted.
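Summated (Likert-type) scoring can be sketched as below; the items and responses are hypothetical, and the reverse-scoring of negatively worded items is a standard practice with such scales rather than something stated in these notes:

```python
# (statement, reverse_scored) pairs; True marks a negatively worded item
items = [
    ("I enjoy using the product", False),
    ("The product is hard to use", True),
    ("I would recommend the product", False),
]
responses = [5, 2, 4]   # 1 = strongly disagree ... 5 = strongly agree

# on a five-point scale, reverse-scored items map r -> 6 - r
score = sum(6 - r if reverse else r
            for (_, reverse), r in zip(items, responses))
print(score)  # 5 + (6 - 2) + 4 = 13
```

As the notes say, the total tells us whether a respondent is favorable or unfavorable overall, but not how much more favorable one respondent is than another in any absolute sense.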
Guttman's scalogram analysis or
cumulative scales
 It also contains a series of statements to which a
respondent agrees or disagrees
 The special feature is that the statements are in a
cumulative series, i.e. related to one another
 Scalogram analysis refers to the procedure for
determining whether a set of items forms a
unidimensional scale
Example

 Let's say you came up with the following statements:


 I believe that this country should allow more immigrants in.
 I would be comfortable if a new immigrant moved next door to me.
 I would be comfortable with new immigrants moving into my
community.
 It would be fine with me if new immigrants moved onto my block.
Rate the items

 Next, we would want to have a group of judges rate
the statements or items in terms of how favorable they
are to the concept of immigration.
 They would give a Yes if the item was favorable
toward immigration and a No if it is not.
 Notice that we are not asking the judges whether they
personally agree with the statement.
 Instead, we're asking them to make a judgment about
how the statement is related to the construct of
interest.
Administering the Scale

 Once you've selected the final scale items, it's
relatively simple to administer the scale.
 You simply present the items and ask the respondent
to check items with which they agree.
 Each scale item has a scale value associated with it
(obtained from the scalogram analysis). To compute a
respondent's scale score we simply sum the scale
values of every item they agree with. In our example,
their final value should be an indication of their
attitude towards immigration.
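Scoring then reduces to summing the scale values of the items the respondent agrees with. The sketch below uses hypothetical scale values standing in for the output of a scalogram analysis; in a proper cumulative scale, agreeing with a "harder" item implies agreement with all the easier ones:

```python
# hypothetical scale values from a scalogram analysis
scale_values = {
    "allow more immigrants in": 4,      # hardest to agree with
    "immigrant next door": 3,
    "immigrants in my community": 2,
    "immigrants on my block": 1,        # easiest to agree with
}

# items one respondent checked agreement with
agreed = ["immigrants in my community", "immigrants on my block"]
score = sum(scale_values[item] for item in agreed)
print(score)  # → 3
```

The resulting number places the respondent on the cumulative continuum: a higher score indicates agreement further up the series of statements.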
Four popular scales

 Four popular scales in business research
are:
 Nominal scales
 Ordinal scales
 Interval scales
 Ratio scales
Measurement and Scaling (4)

 A nominal scale is the simplest of the four
scale types, one in which the numbers or
letters assigned to objects serve as labels
for identification or classification

 Example:

 Males = 1, Females = 2
 Sales Zone A = Islamabad, Sales Zone B =
Rawalpindi
 Drink A = Pepsi Cola, Drink B = 7-Up, Drink C =
Measurement and Scaling (5)

 An ordinal scale is one that arranges
objects or alternatives according to their
magnitude

 Examples:

 Career Opportunities = Moderate, Good,
Excellent
 Investment Climate = Bad, inadequate, fair,
good, very good
 Merit = A grade, B grade, C grade, D grade
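The difference between the two scale types can be shown in code: nominal codes are mere labels, while ordinal codes support rank comparison. The codings below echo the examples on these slides:

```python
# nominal: the numbers serve only as labels for identification
gender_code = {"male": 1, "female": 2}

# ordinal: codes carry rank order, so "greater than" is meaningful
merit_rank = {"D grade": 1, "C grade": 2, "B grade": 3, "A grade": 4}

print(merit_rank["A grade"] > merit_rank["C grade"])   # True: ranking is valid
print(gender_code["male"] == gender_code["female"])    # False: only equality
# arithmetic on nominal codes, e.g. gender_code["male"] + 1, is meaningless
```

Which operations a scale supports determines which statistics are legitimate on it: counts and modes for nominal data, medians and rank tests once the data are at least ordinal.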