
UNIT II

Research Design and Measurement

Research design: Definition, types of research design; exploratory and causal research design; descriptive and experimental design; different types of experimental design; validity of findings; internal and external validity; variables in research; measurement and scaling; different scales; construction of instrument; validity and reliability of instrument.

RESEARCH DESIGN

Meaning
A research project conducted scientifically has a specific framework of research, from problem identification to the presentation of the report. This framework of conducting research is known as research design.

Definition
According to Kerlinger, research design is "the plan, structure, and strategy of investigation conceived so as to obtain answers to research questions and to control variance."

According to Green and Tull, "A research design is the specification of methods and procedures for acquiring the information needed. It is the overall operational pattern or framework of the project that stipulates what information is to be collected, from which sources, by what procedures."

Types of research design
1. Exploratory Research Design
2. Descriptive Research Design
3. Experimental/Causal Research Design

EXPLORATORY RESEARCH DESIGNS

Exploratory means to explore hidden things, which are not clearly visible. Exploratory research is a type of research conducted for a problem that has not been clearly defined. Data are collected through observation and interviews.

For example: it is one thing to describe the crime rate in a country, to examine the trends over time, or to compare the crime rates in different countries. It is quite a different thing to develop explanations about why the crime rate is high, why some types of crime are increasing, or why the rate is higher in some countries than in others.

Exploratory research provides insights into and comprehension of an issue or situation.

Techniques of Exploratory Research Design

Literature Research
The quickest and most economical way is to find possible hypotheses from the available literature. Past research may be a suitable source of information for developing new hypotheses.

Depth Interview/Experience Survey
Experience survey means the survey of people who have had practical experience with the problem to be studied. The individuals can be top executives, sales managers, wholesalers, or retailers possessing valuable knowledge and information about the problem environment.

Ex: Henry Mintzberg interviewed managers to explore the nature of managerial work. Based on the analysis of his interview data, he formulated theories of managerial roles.

Case Study
It involves the comprehensive study of one or a few specific situations and lends itself to the study of complex situations. For example: the effective management of distributor relations, or what constitutes good marketing management.

Focus Group
A carefully selected, representative subset of the larger respondent group gathers to discuss together, in a short time frame, the subject/topic to be investigated. Ex: a company manufacturing cosmetics wants to obtain a thorough understanding of what arouses emotive appeal for the product and induces people to buy cosmetics.

DESCRIPTIVE RESEARCH DESIGNS

Descriptive studies are designed to describe something. Ex: a study of a class in terms of the percentage of members who are in their senior and junior years, gender composition, age groupings, and number of business courses taken.

Descriptive studies are undertaken in organizations to learn about and describe the characteristics of a group of employees, for example: age, educational level, and job status.

Example: a bank manager wants to have a profile of the individuals who have loan payments outstanding for 6 months or more. It would include details of their average age, earnings, nature of occupation, full-time/part-time employment status, etc.

HYPOTHESIS TESTING
Studies that engage in hypothesis testing usually explain the nature of certain relationships, establish the differences among groups, or establish the independence of two or more factors.

Example: A marketing manager wants to know if the sales of the company will increase if he doubles the advertising dollars. Here, the manager would like to know the nature of the relationship that can be established between advertising and sales by testing the hypothesis: "if advertising is increased, then sales will also go up."

Example: the testing of a hypothesis such as "more men than women are whistleblowers" establishes the difference between two groups, men and women, in regard to their whistle-blowing behaviour.

CAUSAL STUDY

A causal study is able to state that variable X causes variable Y, so when variable X is removed or altered in some way, problem Y is solved. The study in which the researcher wants to delineate the cause of one or more problems is called a causal study.

A causal study question: Does smoking cause cancer?
Smoking: independent variable
Cancer: dependent variable

The Time Dimension

Cross-sectional research designs:
1. The study is carried out at a single moment in time; therefore, its applicability is temporally specific.

Longitudinal studies: three criteria
1. The study involves selection of a representative group as a panel.
2. There are repeated measurements of the researched variable on this panel over fixed intervals of time.
3. Once selected, the panel composition needs to stay constant over the study period.

What is an Experiment?
The process of examining the truth of a statistical hypothesis relating to some research problem is known as an experiment.

Absolute experiment
Ex: examining the growth of children based on one health drink (Complan).

Comparative experiment
Ex: examining the growth of children based on two health drinks (Complan & Horlicks).

Important Concepts Used in Research Design

Variable: a concept which can take on different quantitative values is called a variable.
Ex: weight, height, income
Continuous variable: age
Non-continuous variable: number of children

1. Dependent variable:
If one variable depends upon or is a consequence of another variable, it is termed a dependent variable.
2. Independent variable:
If the variable is antecedent to the dependent variable, it is termed an independent variable.
For ex:
Height depends upon age.
Height depends on gender.

3. Extraneous variables: These are the variables, other than the independent variables, which influence the response of test units to treatments.
Examples: store size, government policies, temperature, food intake, geographical location, etc.

4. Experimental and control groups:
When a group is exposed to usual conditions, it is termed a control group. When a group is exposed to some novel (new) or special condition, it is termed an experimental group.

Group                        | Treatment       | Treatment effect (% increase in production over pre-piece-rate system)
Experimental Group 1         | $1.00 per piece | 10
Experimental Group 2         | $1.50 per piece | 15
Experimental Group 3         | $2.00 per piece | 20
Control group (no treatment) | Old hourly rate | -

5. Treatments:
The different conditions under which experimental and control groups are put are usually referred to as treatments.
Ex: selling cookies with a free gift and without a free gift.

Basic Principles of Experimental Design
1. Principle of Replication
2. Principle of Randomization
3. Principle of Local Control

1. Principle of Replication (Reproduction):
According to the principle of replication, the experiment should be repeated more than once. By doing so, the statistical accuracy of the experiment is increased.

For ex: suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts, grow one variety in one part and the other variety in the other part. We can then compare the yield of the two parts and draw conclusions on that basis.

But if we are to apply the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect data on the yield of the two varieties and draw conclusions by comparing them. The results so obtained will be more reliable in comparison to the conclusions we would draw without applying the principle of replication.

2. Principle of Randomization
The principle of randomization provides protection: it avoids bias in the experiment.

For ex: if we grow one variety of rice in the first half of the parts of a field and the other variety in the other half, then it is just possible that the soil fertility (productiveness) may be different in the first half in comparison to the other half. If this is so, our results would not be realistic.
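The random assignment described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions of my own (the function name `randomize_plots`, the plot count, and the seed are hypothetical, not from the text):

```python
import random

def randomize_plots(n_plots, treatments, seed=None):
    """Randomly assign each plot to one treatment, balanced across
    treatments. Assumes n_plots is a multiple of len(treatments)."""
    rng = random.Random(seed)
    # Build a balanced list of treatment labels, then shuffle it so a
    # soil-fertility gradient cannot line up with one variety.
    assignment = treatments * (n_plots // len(treatments))
    rng.shuffle(assignment)
    return assignment

plots = randomize_plots(8, ["Variety A", "Variety B"], seed=42)
print(plots)
```

Shuffling the whole balanced list (rather than assigning the first half to one variety) is precisely what prevents the systematic bias described in the rice example.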

3. Principle of Local Control
Through the principle of local control we can eliminate variability. According to this principle, we first divide the field into several homogeneous parts, known as blocks. Then each block is divided into parts equal to the number of treatments, and the treatments are randomly assigned to these parts of the block. Dividing the field into several homogeneous parts is known as blocking.
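The steps above (blocks, then random assignment within each block) can be sketched as follows; the function name and labels are hypothetical, chosen for illustration:

```python
import random

def randomized_block_assignment(blocks, treatments, seed=None):
    """For each homogeneous block, assign every treatment exactly once,
    in an independently shuffled order (the local-control principle)."""
    rng = random.Random(seed)
    layout = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)  # randomization *within* the block
        layout[block] = order
    return layout

layout = randomized_block_assignment(["Block 1", "Block 2"],
                                     ["T1", "T2", "T3"], seed=1)
for block, order in layout.items():
    print(block, order)
```

Because every block contains all treatments, differences between blocks (e.g., soil fertility) no longer contaminate the treatment comparison.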

TYPES OF EXPERIMENTAL DESIGN

1. QUASI-EXPERIMENTAL DESIGN
   a. Pretest and posttest with experimental group
   b. Posttest only with experimental and control group
2. TRUE EXPERIMENTAL DESIGN
   a. Pretest and posttest with experimental and control group
   b. Blind studies
   c. Ex post facto designs
3. STATISTICAL DESIGN
   i. Completely Randomized Design (C.R. Design)
   ii. Randomized Block Design (R.B. Design)
   iii. Latin Square Design (L.S. Design)
   iv. Factorial Designs

1. QUASI-EXPERIMENTAL DESIGNS

It does not measure the true cause-and-effect relationship. This is so because there is no comparison between groups. This experimental design is the weakest of all designs.

a. Pretest and Posttest with Experimental Group

An experimental group (without a control group) may be given a pretest, exposed to a treatment, and then given a posttest to measure the effects of the treatment.

Group              | Pretest score | Treatment introduced | Posttest score
Experimental Group | O1            | X                    | O2

Treatment Effect = (O2 - O1)

O = Observation or measurement
X = Exposure of a group to an experimental treatment
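The arithmetic of the effect formula is trivial but worth making concrete; the scores below are hypothetical, made up for illustration:

```python
def treatment_effect(pretest, posttest):
    """Quasi-experimental effect for a single group: O2 - O1."""
    return posttest - pretest

# Hypothetical mean scores before (O1) and after (O2) the treatment.
print(treatment_effect(pretest=62.0, posttest=70.5))  # -> 8.5
```

Note the design's weakness carries over: without a control group, this difference may also reflect maturation or testing effects, not just the treatment.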

b. Posttest Only with Experimental and Control Group

Here only the experimental group is exposed to the treatment, not the control group. The effects of the treatment are studied by assessing the difference in the outcomes, that is, the posttest scores of the experimental and control groups.

Group              | Treatment introduced | Outcome
Experimental Group | X                    | O1
Control Group      | -                    | O2

Treatment Effect = (O1 - O2)

2. TRUE EXPERIMENTAL DESIGN

Experimental designs which include both the treatment and control groups and record information both before and after the experimental group is exposed to the treatment are known as true experimental designs.

a. Pretest and Posttest with Experimental and Control Group

Two groups, one experimental and the other control, are both exposed to the pretest and the posttest, but only the experimental group receives the treatment.

Group              | Pretest | Treatment introduced | Posttest
Experimental Group | O1      | X                    | O2
Control Group      | O3      | -                    | O4

Treatment Effect = (O2 - O1) - (O4 - O3)
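The formula above can be worked through with hypothetical numbers (the scores are invented for illustration, not from the text):

```python
def true_experiment_effect(o1, o2, o3, o4):
    """Treatment effect = (O2 - O1) - (O4 - O3): the experimental
    group's gain minus the control group's gain, which nets out
    changes (maturation, testing) common to both groups."""
    return (o2 - o1) - (o4 - o3)

# Hypothetical mean scores for both groups.
print(true_experiment_effect(o1=50, o2=65, o3=51, o4=55))  # (15 - 4) -> 11
```

Subtracting the control group's gain is what makes this design stronger than the quasi-experimental ones: any improvement both groups share is removed from the estimate.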

b. Blind Studies:
Pharmaceutical companies experimenting with newly developed drugs in the prototype (trial) stage ensure that the subjects in the experimental and control groups are kept unaware of who is given the drug.

c. Ex Post Facto Designs:
Subjects who have already been exposed to a stimulus and those not so exposed are studied.

For ex: training programs might have been introduced in an organization 2 years earlier. Some employees might have already gone through the training while others might not. To study the effects of training on work performance, performance data might now be collected for both groups. Since the study does not immediately follow the training, but comes much later, it is an ex post facto design.

3. STATISTICAL DESIGN
a. Completely Randomized Design (C.R. Design)
b. Randomized Block Design (R.B. Design)
c. Latin Square Design (L.S. Design)
d. Factorial Designs

a. Completely Randomized Design (C.R. Design)

It involves only two principles, viz., the principle of replication and the principle of randomization. It is the simplest possible research design, and its procedure of analysis is also easier. The essential characteristic of the design is that subjects are randomly assigned to experimental treatments.

b. Randomized Block Design (R.B. Design)

It is an improvement over the C.R. Design. In the R.B. Design, the principle of local control can be applied along with the other two principles of experimental design.

c. Latin Square Design (L.S. Design):

The number of blocks will be equal to the number of treatments.

(Example layout, reconstructed from the original table: the field is cross-classified by fertility level (columns) and seed difference (rows), and the treatments X1-X5 are assigned so that each treatment appears exactly once in every row and every column.)
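One standard way to build such a layout is the cyclic construction, sketched below with the five treatments X1-X5 from the example (the function name is mine):

```python
def latin_square(treatments):
    """Cyclic construction of an n x n Latin square: every treatment
    appears exactly once in each row and in each column."""
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)]
            for row in range(n)]

for row in latin_square(["X1", "X2", "X3", "X4", "X5"]):
    print(row)
```

Each row is the previous one shifted by one position, which is what guarantees the once-per-row, once-per-column property.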

d. Factorial Design:
The factorial experiment design is used to test two or more variables at the same time. Factorial designs can be of two types:
i. Simple factorial design
ii. Complex factorial design

Validity in Experimentation

Validity
The researcher must make sure that any measuring instrument selected is valid, that is, it measures what it purports to measure.
Ex: a weighing machine.

Reliability
Reliability refers to stability and consistency through a series of measurements. The reliability of a measure is its capacity to yield the same results in repeated applications to the same events.

Internal validity:
It refers to the confidence we place in the cause-and-effect relationship. It addresses the question: to what extent does the research design permit us to say that the independent variable A causes a change in the dependent variable B? Internal validity tries to examine whether the observed effect on a dependent variable is actually caused by the treatments (independent variables) in question.

External validity:
External validity refers to the generalization of the results of an experiment. The concern is whether the result of an experiment can be generalized beyond the experimental situations.

Factors Affecting Internal Validity of the Experiment

Maturation
Ex: testing the impact of a new compensation program on sales productivity. If this program were tested over a year's time, some of the salespeople would probably mature as a result of more selling experience or gain increased knowledge. Their sales productivity might improve because of their knowledge and experience rather than the compensation program.

Testing
Testing effects only occur in a before-and-after study.

Instrumentation
A change in the wording of questions or a change in interviewers causes an instrumentation effect.

Selection bias
Sample bias that results from differential selection of respondents.

Mortality
Some subjects withdraw from the experiment before it is completed.

Factors Affecting External Validity

The environment at the time of the test may be different from the environment of the real world where the results are to be generalized. The population used for the test experimentation may not be similar to the population to which the results of the experiment are to be applied.

Environments of Conducting Experiments

Laboratory Environment - In a laboratory experiment, the researcher conducts the experiment in an artificial environment constructed exclusively for the experiment.

Field Environment - The field experiment is conducted in actual market conditions. There is no attempt to change the real-life nature of the environment.

Variables in Research

Variable
A variable is anything that can take on differing or varying values. The values can differ at various times for the same object or person.

Types of Variables
1. Dependent variable
2. Independent variable
3. Moderating variable
4. Extraneous variable
5. Intervening variable

1. Dependent variable (DV):
If one variable depends upon or is a consequence of another variable, it is termed a dependent variable.
2. Independent variable (IV):
If the variable is antecedent to the dependent variable, it is termed an independent variable.
Ex: Smoking causes cancer.

3. Moderating variable (MV):
A moderating variable is a second independent variable that is included because it is believed to have a significant contributory effect on the originally stated IV-DV relationship.

4. Extraneous variables (EV):
These are the variables, other than the independent variables, which influence the response of test units to treatments.

5. Intervening variable (IVV):
The intervening variable may be defined as that factor which theoretically affects the observed phenomenon but cannot be seen, measured, or manipulated.

Measurement and Scaling

Measurement:
The term measurement means assigning numbers or some other symbols to the characteristics of certain objects.
Ex: a teacher counts the number of students in a class and classifies them as male or female. How well we like a song or a painting is also a measurement.

Scaling:
Scaling is an extension of measurement. Scaling involves creating a continuum on which measurements of objects are located.

Types of Measurement Scale
1. Nominal scale
2. Ordinal scale
3. Interval scale
4. Ratio scale

NOMINAL SCALE:
In a nominal scale, numbers are used to identify or categorize objects or events. For example, the population of any town may be classified according to gender as males and females, or according to religion into Hindus, Muslims, and Christians.

Example (a dichotomous scale elicits a Yes or No answer):
Are you married? (a) Yes (b) No
A married person may be assigned the number 1; an unmarried person may be assigned the number 2.
Do you have a car? (a) Yes (b) No

The assigned numbers cannot be added, subtracted, multiplied, or divided. The only arithmetic operation that can be carried out is the count of each category. Therefore, a frequency distribution table can be prepared for nominal scale variables.
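The one legitimate operation on nominal data, counting categories, can be sketched in Python (the sample responses are hypothetical):

```python
from collections import Counter

# Nominal data: the labels only categorize, so the single valid
# arithmetic operation is counting each category's frequency.
responses = ["Married", "Unmarried", "Married", "Married", "Unmarried"]
freq = Counter(responses)
print(freq)  # Counter({'Married': 3, 'Unmarried': 2})
```

The resulting counts are exactly the frequency distribution table mentioned above; averaging the assigned codes 1 and 2 would be meaningless.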

ORDINAL SCALE:
This is the next higher level of measurement. The ordinal scale places events in order; rank orders represent ordinal scales. The use of an ordinal scale implies a statement of "greater than" or "less than" without stating how much greater or less.

Example: rank the following attributes while choosing a restaurant for dinner. The most important attribute may be ranked 1, the next most important may be assigned a rank of 2, and so on.

On the ordinal scale, the assigned ranks cannot be added, multiplied, subtracted, or divided. One can compute the median and percentiles of the distribution. The other major statistical analyses which can be carried out are the rank order correlation coefficient and the sign test.
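The rank order correlation coefficient mentioned above can be sketched using Spearman's formula for tie-free rankings; the two example rankings are hypothetical:

```python
def spearman_rho(ranks_x, ranks_y):
    """Rank-order correlation for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the two ranks of item i."""
    n = len(ranks_x)
    d_sq = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - (6 * d_sq) / (n * (n**2 - 1))

# Two respondents ranking five restaurant attributes (1 = most important).
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))  # -> 0.8
```

Because the formula uses only rank differences, it stays within the operations that ordinal data permit.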

INTERVAL SCALE:
The interval scale is the next higher level of measurement. It has all the characteristics of the ordinal scale; in addition, the units of measure, or intervals between successive positions, are equal.
Ex: Marks: 0-10, 11-20, 21-30, 31-40

RATIO SCALE:
This is the highest level of measurement. It possesses all the features of the nominal, ordinal, and interval scales; it has order, distance, and a true zero point.
Example: measures of weight, height, length, etc.
All mathematical and statistical operations can be carried out using ratio scale data.

SCALING
Scaling may be considered as an extension of measurement. It involves creating a continuum upon which measured objects are located.

Scaling Techniques (Classification of Attitude Scaling Techniques)

There are two main categories of attitudinal scales:

Rating scales:
- Graphic Rating Scale
- Itemized Rating Scale
- Guttman Scale/Scalogram
- Likert Scale
- Semantic Differential Scale
- Thurstone Scale
- Stapel Scale
- Multi-Dimensional Scaling

Ranking scales:
- Method of Paired Comparison
- Method of Rank Order
RATING SCALES
Rating scales have several response categories and are used to elicit responses with regard to the object, event, or person studied.

RANKING SCALES
Ranking scales make comparisons between or among objects, events, or persons and elicit the preferred choice and ranking among them.

RATING SCALES
1. GRAPHIC RATING SCALE
Respondents rate the objects by placing a mark at the appropriate position on a line that runs from one extreme of the criterion variable to another. This is a continuous scale, and the respondent is asked to tick his preference on a graph.

Example:
Please put a tick mark on the following line to indicate your preference for fast food.

Alternative presentation of the graphic rating scale:
Please indicate how much you like fast food by pointing to the face that best shows your attitude and taste. If you do not prefer it at all, you would point to face one. In case you prefer it the most, you would point to face seven.

2. ITEMIZED RATING SCALE
In the itemized rating scale, the respondents are provided with a scale that has a number of brief descriptions associated with each of the response categories.

i. Guttman Scale/Scalogram
It consists of statements to which a respondent expresses his agreement or disagreement. It is also known as a cumulative scale. Under this technique the respondents are asked to answer, in respect of each item, whether they agree or disagree with it.

Ex: customers' expectations of Reliance Fresh

Item No. | Expectation
(i)      | Would you expect price discounts in Reliance Fresh?
(ii)     | Do you need free door delivery service?
(iii)    | Would you expect the store's opening hours to be extended?
(iv)     | Would you anticipate a play area for children?

(Scalogram analysis tabulates each respondent's agreement with items (i)-(iv) and a total respondent score.)

A score of 4 means that the respondent is in agreement with all the statements of the items. A score of 3 means that the respondent is in agreement with three of the statements, and so on.
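The cumulative scoring just described is a simple count of agreements; a minimal sketch (the responses are hypothetical):

```python
def guttman_score(responses):
    """Cumulative (Guttman) score: the number of items agreed with.
    In a perfect scalogram, a score of k implies agreement with the
    k 'easiest' items and disagreement with the remaining ones."""
    return sum(1 for agreed in responses if agreed)

# One respondent's agreement (True/False) with items (i)-(iv).
print(guttman_score([True, True, True, False]))  # -> 3
```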

ii. Likert Scale
The respondents are given a certain number of items (statements) on which they are asked to express their degree of agreement/disagreement. This is also called a summated scale because the scores on individual items can be added together to produce a total score for the respondent. The scale is named after its inventor, psychologist Rensis Likert.
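The "summated" scoring can be sketched as below. The reverse-scoring of negatively worded items is a standard practice I have added for completeness; the item scores and the function name are hypothetical:

```python
def likert_total(item_scores, reverse_items=(), points=5):
    """Summated (Likert) score: add the item scores after flipping any
    negatively worded items (on a 5-point scale, 1<->5 and 2<->4)."""
    return sum((points + 1 - s) if i in reverse_items else s
               for i, s in enumerate(item_scores))

# Four items on a 5-point scale; the item at index 2 is negatively worded.
print(likert_total([4, 5, 2, 3], reverse_items={2}))  # 4 + 5 + 4 + 3 -> 16
```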

iii. Semantic Differential Scale
This scale is widely used to compare the images of competing brands, companies, or services. Here the respondent is required to rate each attitude or object on a number of five- or seven-point rating scales. The scale is bounded at each end by bipolar adjectives or phrases.

The difference between the Likert and semantic differential scales is that in a Likert scale a number of statements (items) are presented to the respondents to express their degree of agreement/disagreement, whereas in the semantic differential scale bipolar adjectives or phrases are used.

iv. Thurstone Scale
Thurstone and his colleagues constructed scales for the measurement of opinions and beliefs of human groups.

PROCEDURE:
- A large number of statements pertaining to the subject of enquiry are collected through experience, literature survey, and personal discussions with knowledgeable persons.
- Second is the selection of statements: statements should be brief, ambiguous statements should be avoided, and the statements must be related to the attitude.
- On the basis of the above procedure, the researcher selects some 20 to 30 statements.
- The scale values are equally spaced.
- The statements are embodied in the questionnaire.

v. Stapel Scale
Used as an alternative to the semantic differential scale. The scale measures how close to or distant from the adjective a given stimulus is perceived to be.

vi. Multi-Dimensional Scaling (MDS)
It consists of a group of analytical techniques which are used to study consumer attitudes related to perceptions and preferences. It is a computer-based technique.

RANKING SCALES
i. Method of Paired Comparison:
Under this method, the respondent can express his attitude by making a choice between two objects, for ex. between a flavour of soft drink A and another soft drink B.
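Tallying paired-comparison choices can be sketched as follows; the flavours, the always-prefer-earlier respondent, and the function name are hypothetical:

```python
from itertools import combinations
from collections import Counter

def paired_comparison_wins(objects, choose):
    """Present every pair of objects once; `choose(a, b)` returns the
    preferred object. Returns how often each object was chosen."""
    wins = Counter({obj: 0 for obj in objects})
    for a, b in combinations(objects, 2):
        wins[choose(a, b)] += 1
    return wins

# A hypothetical respondent who always prefers the flavour listed earlier.
order = ["Flavour A", "Flavour B", "Flavour C"]
choose = lambda a, b: a if order.index(a) < order.index(b) else b
print(paired_comparison_wins(order, choose))
```

With n objects there are n(n-1)/2 pairs, which is why paired comparison becomes burdensome for respondents as the number of objects grows.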

ii. Method of Rank Order:
Rank order scales are comparative scales. For ex: a respondent may be asked to rank three motorcycle brands on attributes such as cost, mileage, style, pick-up, and so on.

Construction of Instrument
The purpose of scale construction is to design a questionnaire that provides a quantitative measurement of an abstract theoretical variable.

Approaches by which scales can be developed:
- Arbitrary scales: developed on an ad hoc basis; they may or may not measure the concepts.
- Cumulative scales: Guttman's scalogram analysis.
- Consensus scales: Thurstone scaling.

Measurement Error
This occurs when the observed measurement on a construct or concept deviates from its true value.

Reasons:
- Mood, fatigue, and health of the respondent.
- Variations in the environment in which measurements are taken.
- A respondent may not understand the question being asked, and the interviewer may have to rephrase it. While rephrasing the question, the interviewer's bias may get into the responses.
- Some of the questions in the questionnaire may be ambiguous.
- Errors may be committed at the time of coding or entering data from the questionnaire into the spreadsheet.

Criteria for Good Measurement

Reliability
Reliability is concerned with the consistency, accuracy, and predictability of the scale.
Methods to measure reliability:
- Test-retest reliability
- Split-half reliability
- Cronbach's alpha
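Of the three methods listed, Cronbach's alpha is easy to compute directly from item scores; a minimal sketch using the standard formula (the ratings are hypothetical):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for item-score columns (one list per item, same
    respondents in the same order):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three items, four respondents (hypothetical 5-point ratings).
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))  # -> 0.818
```

Values closer to 1 indicate that the items vary together, i.e., that the scale is internally consistent.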

Validity
The validity of a scale refers to the question of whether we are measuring what we want to measure.
Different ways to measure validity:
- Content validity
- Concurrent validity
- Predictive validity

Sensitivity
Sensitivity refers to an instrument's ability to accurately measure the variability in a concept.
