
Business Research Methods
William G. Zikmund

Chapter 13:
Measurement
Measurement
• Measurement is the process of describing some
property of a phenomenon of interest, usually by
assigning numbers in a reliable and valid way.
Concept
• A researcher has to know what to measure before knowing
how to measure something. The problem definition process
should suggest the concepts that must be measured.
• A generalized idea about a class of objects, attributes,
occurrences, or processes
• Concepts such as age, sex, education, and number of children
are relatively concrete properties. They present few problems
in either definition or measurement.

• Other concepts are more abstract. Concepts such as loyalty, personality, channel power, trust, corporate culture, customer satisfaction, and value are more difficult to both define and measure.
For example, loyalty has been measured as a
combination of customer share (the relative
proportion of a person’s purchases going to one
competing brand/store) and commitment (the degree
to which a customer will sacrifice to do business with
a brand/store). Thus, loyalty consists of two
components: the first is behavioral and the
second attitudinal.
• Researchers measure concepts through a process
known as operationalization.

• Operationalization specifies what the researcher must do to
measure the concept under investigation.

• This process involves identifying scales that correspond
to variance in the concept.
Scales
• A series of items arranged according to value for the
purpose of quantification
• A continuous spectrum
• Scales, just as a scale you may use to check your weight,
provide a range of values that correspond to different
values in the concept being measured.
• In other words, scales provide correspondence rules that
indicate that a certain value on a scale corresponds to
some true value of a concept.
• Example of a correspondence rule: “Assign the numbers 1
through 7 according to how much trust you have in
your sales representative.
• If the sales representative is perceived as completely
untrustworthy, assign the numeral 1;
• if the sales rep is completely trustworthy, assign a 7.”
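The correspondence rule above can be sketched in code. This is a minimal illustration with hypothetical respondent data; the `ratings` dictionary and the `is_valid_rating` helper are invented for the example.

```python
# Hypothetical trust ratings gathered under the 1-7 correspondence rule:
# 1 = completely untrustworthy ... 7 = completely trustworthy.
ratings = {"rep_A": 7, "rep_B": 1, "rep_C": 4}

def is_valid_rating(value):
    """A response obeys the correspondence rule only if it lies on the 1-7 scale."""
    return isinstance(value, int) and 1 <= value <= 7

all_valid = all(is_valid_rating(v) for v in ratings.values())
```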
Media Skepticism
conceptual definition
• Media skepticism - the degree to which individuals
are skeptical toward the reality presented in the
mass media. Media skepticism varies across
individuals, from those who are mildly skeptical and
accept most of what they see and hear in the
media to those who completely discount and
disbelieve the facts, values, and portrayal of reality
in the media.
Media Skepticism
Operational Definition
Please tell me how true each statement is about
the media. Is it very true, not very true, or not at all
true?
• 1. The program was not very accurate in its
portrayal of the problem.
• 2. Most of the story was staged for
entertainment purposes.
• 3. The presentation was slanted and unfair.
Nominal Scale
A nominal scale is the simplest of the four scale types; the
numbers or letters assigned to objects serve merely as
labels for identification or classification.

Example:

 Males = 1, Females = 2
 Sales Zone A = Islamabad, Sales Zone B = Rawalpindi
 Drink A = Pepsi Cola, Drink B = 7-Up, Drink C = Mirinda
Ordinal Scale:
 Ordinal measurements describe order, but not relative
size or degree of difference between the items
measured.
 In this scale type, the numbers assigned to objects or
events represent the rank order (1st, 2nd, 3rd, etc.) of the
entities assessed.
 A Likert scale is a type of ordinal scale and may also use
names with an order, such as:
❖ “Bad”, “medium” and “good”
❖ “very satisfied”, “satisfied”, “neutral”, “unsatisfied”,
“very unsatisfied”
Example of an ordinal scale:

• The result of a horse race, which says only which
horses arrived first, second, or third, but includes no
information about race times.

• Another example is military rank; ranks have an order,
but there is no well-defined numerical difference between
ranks.
 Examples of Ordinal:
 Career Opportunities = Moderate, Good, Excellent
 Investment Climate = Bad, Inadequate, Fair, Good, Very good
 Merit = A grade, B grade, C grade, D grade

A problem with ordinal scales is that the difference between categories
on the scale is hard to quantify; for example, excellent is better than
good, but how much better is excellent?
Interval Scale
 An interval scale is a scale that not only arranges objects or
alternatives according to their respective magnitudes, but also
distinguishes this ordered arrangement in units of equal intervals
(i.e. interval scales indicate order (as in ordinal scales) and also
the distance in the order).

 Examples:
 Consumer Price Index
 Temperature Scale in Fahrenheit

Interval scales allow comparisons of the differences of magnitude (e.g.
of attitudes) but do not allow determinations of the actual strength of
the magnitude.
Ratio Scale
A ratio scale is a scale that possesses absolute rather than
relative qualities and has an absolute zero.

Examples:
 Money
 Weight
 Distance
 Temperature on the Kelvin Scale

Ratio scales allow comparisons of the differences of magnitude
(e.g. of attitudes) as well as determinations of the actual strength of
the magnitude.
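The interval/ratio distinction can be made concrete with the two temperature scales named above. A minimal sketch (the conversion formula is standard; the temperature values are hypothetical):

```python
# Interval scale (Fahrenheit): differences are meaningful, ratios are not,
# because 0 degrees F is an arbitrary point rather than a true zero.
f_today, f_yesterday = 80.0, 40.0
difference_f = f_today - f_yesterday  # "40 degrees warmer" is meaningful

def fahrenheit_to_kelvin(f):
    """Convert Fahrenheit to Kelvin, a ratio scale with an absolute zero."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

# Ratio scale (Kelvin): the absolute zero makes ratios meaningful.
ratio = fahrenheit_to_kelvin(f_today) / fahrenheit_to_kelvin(f_yesterday)
# ratio is about 1.08 -- today is NOT "twice as hot" despite 80/40 = 2
```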
Scale Properties
• Uniquely classifies (nominal and all higher scales)
• Preserves order (ordinal and higher)
• Equal intervals (interval and higher)
• Natural zero (ratio only)
Index Measures
• ATTRIBUTE: a single characteristic or fundamental
feature that pertains to an object, person, or issue
• COMPOSITE MEASURE: a measure that combines
several variables or items to capture a single concept; a
multi-item instrument
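A composite measure simply combines the scores on several items into one number, typically a sum or a mean. A minimal sketch with hypothetical Likert responses:

```python
# Hypothetical 1-5 Likert responses of one respondent to a three-item scale.
item_scores = [4, 5, 3]

# Summated (composite) score and its per-item mean.
composite_sum = sum(item_scores)
composite_mean = composite_sum / len(item_scores)
```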
The Goal of Measurement: Validity

Validity
• The ability of a scale to measure what was intended to be measured.
Validity
• The ability of a measure (scale) to measure what it is
intended to measure.
• Establishing validity involves answers to the
following:
• Is there a consensus that the scale measures what it is
supposed to measure?
• Does the measure correlate with other measures of the
same concept?
• Does the behavior expected from the measure predict
actual observed behavior?
Validity

• Face or content validity
• Criterion validity
  – Concurrent validity
  – Predictive validity
• Construct validity
Face or Content validity
• Face or content validity: The subjective agreement
among professionals that a scale logically appears
to measure what it is intended to measure.
Criterion Validity:
• Criterion Validity: the ability of some measure to
correlate with other measures of the same
construct.
• “How well does my measure work in practice?”
• Because of this, criterion validity is sometimes
referred to as pragmatic validity.
• In other words, is my measure practical?
Criterion validity may be classified as:
Concurrent validity or predictive validity

Concurrent validity
• A type of criterion validity whereby a new measure
correlates with a criterion measure taken at the
same time.
Predictive validity
• A type of criterion validity whereby a measure predicts a
future event or correlates with a criterion measure
administered at a later time.
• If the new measure is taken at the same time as the
criterion measure and is shown to be valid, then it has
concurrent validity.

• Predictive validity is established when a new measure
predicts a future event.
• The two measures differ only on a time dimension; that is,
the criterion measure is separated in time from the
predictor measure.
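Both forms of criterion validity come down to correlating the new measure with a criterion measure; only the timing of the criterion differs. A sketch with hypothetical scores and a hand-rolled Pearson correlation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: a new aptitude scale scored at hiring time, and job
# performance ratings collected a year later (the criterion measure).
new_measure = [3, 5, 2, 4, 6]
later_criterion = [2, 5, 1, 4, 5]

r = pearson(new_measure, later_criterion)  # a high r supports predictive validity
```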
Construct Validity:
Construct Validity: degree to which a
measure/scale confirms a network of related
hypotheses generated from theory based on the
concepts.
Convergent Validity
• Concepts that should be related to one another are in
fact related; the measure converges with (correlates
highly with) other measures of the same concept.
Discriminant Validity
• The ability of a measure to have a low correlation with
measures of dissimilar concepts.
Reliability
• The degree to which measures are free from error
and therefore yield consistent results.
Reliability

• Stability
  – Test-retest
  – Equivalent forms
• Internal consistency
  – Splitting halves
Reliability
• Two dimensions underlie the concept of reliability:
repeatability and internal consistency.
Reliability
Test-retest method
• Administering the same scale or measure to the
same respondents at two separate points in time to
test for stability
Internal consistency
• Internal consistency represents a measure’s
homogeneity. An attempt to measure
trustworthiness, for example, may require asking several
similar but not identical questions.
• The set of items that make up a measure are
referred to as a battery of scale items.
• Internal consistency of a multiple-item measure can
be measured by correlating scores on subsets of
items making up a scale.
Split-half method
• The split-half method of checking reliability is
performed by taking half the items from a scale
(for example, odd-numbered items) and checking
them against the results from the other half (even-
numbered items).
• The two scale halves should produce similar scores
and correlate highly.
Coefficient alpha (α)
• This is the degree to which the items that make up
the scale are all measuring the same underlying
attribute.

• The extent to which the items hang together


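Coefficient alpha can be computed from the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / total-score variance). A sketch with hypothetical data:

```python
from statistics import pvariance

# Hypothetical 1-5 responses of five respondents to a four-item scale
# (rows = respondents, columns = items).
responses = [
    [4, 5, 4, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
]

k = len(responses[0])                 # number of items in the scale
item_columns = list(zip(*responses))  # transpose: one tuple per item
item_variances = [pvariance(col) for col in item_columns]
total_scores = [sum(row) for row in responses]

# Cronbach's coefficient alpha.
alpha = (k / (k - 1)) * (1 - sum(item_variances) / pvariance(total_scores))
# alpha near 1 means the items "hang together" (high internal consistency)
```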
Equivalent-forms method

• Two alternative instruments are designed to be as
equivalent as possible.

• Reliability is assessed by measuring the correlation
between the alternative instruments when both are
administered to the same group of subjects.
Sensitivity
• Sensitivity refers to an instrument’s ability to
accurately measure variability in a concept.
• A dichotomous response category, such as “agree
or disagree,” does not allow the recording of
subtle attitude changes.
• A more sensitive measure with numerous
categories on the scale may be needed. For
example, adding “strongly agree,” “mildly agree,”
“neither agree nor disagree,” “mildly disagree,”
and “strongly disagree” will increase the scale’s
sensitivity.
• The sensitivity of a scale based on a single question
or single item can also be increased by adding
questions or items.
• In other words, because composite measures allow
for a greater range of possible scores, they are
more sensitive than single-item scales.
• Thus, sensitivity is generally increased by adding
more response points or adding scale items.
Reliability and Validity on Target

[Figure: three rifle-shooting targets. Target A (old rifle): low reliability. Target B (new rifle): high reliability. Target C (new rifle, sun glare): reliable but not valid.]