
VALIDITY AND RELIABILITY

(2.5)
PREPARED BY:
DR. A. Lenin Jothi

CRITERIA FOR GOOD MEASUREMENT

Three criteria are commonly used to assess the quality of measurement scales in marketing research:

1. Reliability
2. Validity
3. Sensitivity

RELIABILITY
The degree to which a measure is free from
random error and therefore gives consistent
results.

An indicator of the measure's internal consistency.
Methods to measure reliability:
Test-retest reliability
Split-half reliability
Cronbach's alpha

ASSESSING REPEATABILITY

Stability: the extent to which results obtained with the measure can be reproduced.
1. Test-Retest Method
Administering the same scale or measure to the
same respondents at two separate points in time
to test for stability.
2. Test-Retest Reliability Problems
The pre-measure, or first measure, may sensitize
the respondents and subsequently influence the
results of the second measure.
Time effects that produce changes in attitude or
other maturation of the subjects.
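
The test-retest method above can be sketched numerically: administer the same scale to the same respondents twice and correlate the two sets of scores. The data below are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical 5-point scale scores from the same 8 respondents
# at two separate points in time (illustrative data only).
time1 = np.array([4, 3, 5, 2, 4, 3, 5, 1])
time2 = np.array([4, 3, 4, 2, 5, 3, 5, 2])

# Test-retest reliability is the correlation between the two
# administrations; values near 1 indicate a stable measure.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 3))
```

A low correlation here would signal instability, though, as the slide notes, sensitization from the first administration or maturation between waves can also depress or inflate it.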

ASSESSING INTERNAL CONSISTENCY

Internal Consistency: the degree of homogeneity among the items in a scale or measure.

1. Split-half Method
Assessing internal consistency by checking the results of one half of a set of scaled items against the results from the other half.

Coefficient alpha (α)

The most commonly applied estimate of a multiple-item scale's reliability.
Represents the average of all possible split-half reliabilities for a construct.
2. Equivalent forms
Assessing internal consistency by using two scales designed to
be as equivalent as possible
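
The split-half and coefficient-alpha estimates above can be sketched as follows. The respondent data are hypothetical, and the split-half function assumes an odd/even item split with the standard Spearman-Brown step-up (the slides do not specify a particular split).

```python
import numpy as np

# Hypothetical responses: 6 respondents rating 4 scale items
# (rows = respondents, columns = items); illustrative data only.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
], dtype=float)

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def split_half(items):
    """Split-half reliability: correlate odd-item and even-item half scores,
    then apply the Spearman-Brown correction to full scale length."""
    half1 = items[:, ::2].sum(axis=1)   # items 1, 3, ...
    half2 = items[:, 1::2].sum(axis=1)  # items 2, 4, ...
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

print(round(cronbach_alpha(scores), 3), round(split_half(scores), 3))
```

Because any single split is arbitrary, coefficient alpha (the average over all possible splits) is usually preferred in practice.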

VALIDITY
The accuracy of a measure or the extent to which a
score truthfully represents a concept.
The ability of a measure (scale) to measure what it is intended to measure.
Establishing validity involves answering the following:
Is there a consensus that the scale measures what it is
supposed to measure?
Does the measure correlate with other measures of
the same concept?
Does the behavior expected from the measure predict
actual observed behavior?

Cont'd

Validity
  Face or Content Validity
  Criterion Validity
    Concurrent
    Predictive
  Construct Validity

ASSESSING VALIDITY
1. Face or content validity: The subjective agreement
among professionals that a scale logically appears to
measure what it is intended to measure.
2. Criterion Validity: the degree of correlation of a
measure with other standard measures of the same
construct.
Concurrent Validity: the new measure/scale is taken at the same time as the criterion measure.
Predictive Validity: new measure is able to predict a
future event / measure (the criterion measure).
3. Construct Validity: degree to which a measure/scale
confirms a network of related hypotheses generated from
theory based on the concepts.
Convergent Validity.
Discriminant Validity.

RELATIONSHIP BETWEEN VALIDITY AND RELIABILITY

1. A measure that is not reliable cannot be valid, i.e., for a measure to be valid, it must be reliable. Thus, reliability is a necessary condition for validity.
2. A measure that is reliable is not necessarily valid; indeed, a measure can be reliable but not valid. Thus, reliability is not a sufficient condition for validity.

3. Therefore, reliability is a necessary but not sufficient condition for validity.

SENSITIVITY
The ability of a measure/scale to accurately measure
variability in stimuli or responses;

The ability of a measure/scale to make fine distinctions among respondents or objects with different levels of the attribute (construct).
Example - A typical bathroom scale is not sensitive
enough to be used to measure the weight of jewelry; it
cannot make fine distinctions among objects with
very small weights.
Composite measures allow for a greater range of possible scores; they are therefore more sensitive than single-item scales.
Sensitivity is generally increased by adding more
response points or adding scale items.
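
The point about response points can be illustrated with simulated data (a sketch, not real marketing data): a 7-point version of a scale tracks an underlying attribute more closely than a 2-point version of the same scale.

```python
import numpy as np

# Simulated latent attribute for 200 respondents, recorded on a
# coarse 2-point scale versus a finer 7-point scale.
rng = np.random.default_rng(0)
latent = rng.uniform(0, 1, size=200)  # true attribute levels

two_point = np.digitize(latent, np.linspace(0, 1, 3)[1:-1])    # 2 categories
seven_point = np.digitize(latent, np.linspace(0, 1, 8)[1:-1])  # 7 categories

# The more sensitive scale correlates more strongly with the
# underlying variability it is meant to capture.
r2 = np.corrcoef(latent, two_point)[0, 1]
r7 = np.corrcoef(latent, seven_point)[0, 1]
print(round(r2, 2), round(r7, 2))
```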

MEASUREMENT ERROR
This occurs when the observed measurement on a construct or concept
deviates from its true values.

Reasons
Mood, fatigue and health of the respondent
Variations in the environment in which measurements are taken

A respondent may not understand the question being asked, and the interviewer may have to rephrase it. While rephrasing the question, the interviewer's bias may enter the responses.
Some of the questions in the questionnaire may be ambiguous. Errors may also be committed at the time of coding or when entering data from the questionnaire into the spreadsheet.
