
Validity
The validity of any measuring instrument depends upon the accuracy with which it measures what it is intended to measure, when compared with a standard criterion.
A test is valid when the performance it measures corresponds to the same performance as otherwise independently measured or objectively defined.

Validity
The validity of a test is defined as the measurement characteristic that shows whether the test measures what we intend it to measure, as well as the degree to which it measures it.
Cronbach (1949, as cited in Jordan, 1966) says: "A test is valid to the degree in which we know what it measures or predicts."
In fact, one could say that a test is valid insofar as it is a good measure of what it is intended to measure.
Validity is therefore the most important issue to be considered when evaluating a test, and it refers to the appropriateness, usefulness and significance of the conclusions drawn from the test results.
To ensure the validity of a test we must take into account both the structure of the test, which provides internal validity, and its correlation with some external criterion.

METHODS OF CALCULATING
VALIDITY

Content validity
Criterion based validity
Construct validity
Factorial validity

The APA (1992) identifies three main types of validity:
Content validity
Criterion based validity
Construct validity

Content validity

Content validity means determining whether the content of a test is a representative sample of the area of behavior being examined (using empirical methods such as the weighting of test items, the correlation of test items with the total test score, and the correlation of test tasks with external criterion variables).
There is no precise measure of content validity.
Content validity should reflect several characteristics of the test:
a) test coverage (the degree to which the test covers the relevant area),
b) test relevance (the degree to which the test items contain what is being studied in the program, or is relevant to the program),
c) program coverage (the extent to which the program covers the entire area to be tested).
Precisely because it is based on subjective assessment, content validity by itself is not a sufficient measure of the validity of a test.

Criterion validity

Criterion validity is achieved by comparing the test results with one or more variables (criteria): other measures or tests that are considered to measure the same factor.
It refers to the ability to forecast, on the basis of test results, the respondents' success in criterion activities.
It explores the comparison between the results obtained with the observed test and the outcome of some standard or external criterion. "The criterion is an independent measure of the trait that the test measures, or of the behavior that the test predicts ... correlating the criterion results of a group of subjects with their test results verifies the validity of the test" (Petz, 1992).
The criterion should be: a) relevant, b) impartial (objective), c) reliable, i.e. accurate, d) suitable for the measurement or achievement in question.
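The criterion correlation described above can be computed directly. A minimal sketch in Python; the test and criterion scores below are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: test scores and an external criterion (e.g. supervisor ratings)
test_scores = [12, 15, 11, 18, 14, 20, 9, 16]
criterion   = [3, 4, 2, 5, 4, 5, 2, 4]
print(round(pearson_r(test_scores, criterion), 2))
```

A high positive coefficient supports criterion validity, provided the criterion itself is relevant, objective and reliable, as listed above.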

Construct validity
Construct validity is the clear correlation between the test items and the presumed construct, i.e. the measurable quality or trait, shown by empirical data, logical analysis and discussion; for example, the degree to which certain constructs or concepts can explain performance on the test.
Construct validity is used when we are interested in the nature of the object of measurement itself. The construct validity of a test determines whether it validly measures its object of measurement. It refers to the connection of the test with the theoretical "construct" or trait that the test is assumed to measure. Establishing construct validity is a process of gathering evidence that either supports the construct or does not; this is achieved, in part, through predictable or systematic differences in test results that the construct is supposed to explain. Because psychological constructs are abstract, it is difficult to determine the construct validity of a test.
Construct validity can be checked by determining a positive correlation between the results of the test we are interested in and another test that measures the same construct (convergent validity); the other way is to show that there is no connection between the selected test and measures that have nothing to do with the theoretical construct under investigation (divergent validity).
It also includes the method of factor analysis, in which the relationships among all the tasks of the test whose validity is being determined are studied. "Factor analysis is a set of statistical procedures intended to determine the basic dimensions that are responsible for the relationships among a larger number of variables" (according to Kolesarić and Petz, 1999).

Content validity
Content validity becomes more of an issue
for tests of achievement or ability and less a
concern for tests of personality, where high
content validity may limit the overall
usefulness/applicability of the test. Further, it is useful for tests of cognitive skills that require an assessment of a broad range of skills in a given area.
The concept of content validity is employed
in the selection of items for a test.

Content validity
Less defensible than content validity is the judgement
process called face validity. A test is said to have face
validity when it appears to measure whatever the
author has in mind.
Content validity is often confused with face validity, that is, with what a scale appears to measure based on a reading of its various items. Rating scales for various hypothesized traits, neurotic inventories, attitude scales and even intelligence tests often claim face validity.
Judgment of face validity is very useful in helping an
author decide whether his test items are relevant to
some specific situation (that is, the industry) or to
specialized occupational experiences.

Criterion based Validity


This kind of validity deals with the ability of
test scores to predict human behaviour, either
with the help of other test scores, observable
behaviour or other accomplishments, such as
grade point averages.
Experimentally, the validity of a test is determined by finding the correlation between the test and some independent criterion, which may be an objective measure of performance, or a quantitative measure such as a judgment of character or excellence in work done.

Criterion based Validity


Successful prediction by the test is the best evidence of validity, provided that (a) the criterion was set up independently and (b) both the test and the criterion are reliable.
Criterion validity can be categorized into two types, that is,
concurrent and predictive. Concurrent validity involves
prediction of an alternative method of measuring the same
characteristics of interest, while predictive validity
attempts to show a relationship with future behaviour.
Both predictive and concurrent validity are established by deciding on an appropriate level of the validity coefficient, i.e. the correlation between a test score and some criterion variable. The appropriate acceptance level depends upon the intended use of the test.

Construct Validity
The construct validity approach is much more complex than the other forms of validity and is based on the accumulation of data over a long period of time. Construct validity requires the study of test scores in relation not only to the variables that the test is intended to assess, but also to variables that have no relationship to the domain underlying the instrument.

Factorial Validity
Another method to study construct
validity is with the help of factor
analysis. One may postulate a factorial
structure for a specific test, given one's
assumptions about both the
characteristics that are being assessed
and the theory from which they are
derived. A confirmatory factor analysis is
then performed to test the hypothesis.
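The intuition behind factorial validity can be shown on the smallest possible case. For two standardized items with correlation r, the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + |r| and 1 − |r|, so the share of variance explained by the first dimension (the dominant "factor") grows with the inter-item correlation. This is a toy sketch of the idea, not a confirmatory factor analysis:

```python
def first_factor_share(r):
    """Share of total variance captured by the dominant eigenvalue of the
    2x2 correlation matrix [[1, r], [r, 1]] (eigenvalues 1 + |r| and 1 - |r|,
    which sum to 2)."""
    return (1 + abs(r)) / 2

print(round(first_factor_share(0.8), 2))  # strongly related items -> 0.9
print(round(first_factor_share(0.1), 2))  # weakly related items -> 0.55
```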

FACTORS AFFECTING
VALIDITY
Group Differences
The characteristics of a group of people on
whom the test is validated affect the criterion
related validity. Differences among the group
of people on variables like sex, age and
personality traits may affect the correlation
coefficient between the test and the selected
criteria. As with the reliability coefficient, the magnitude of the validity coefficient depends on the degree of heterogeneity of the validation group on the test variable.

FACTORS AFFECTING
VALIDITY
Test Reliability
An unreliable test cannot be very valid: a test of low reliability also has low validity. There is a formula that can be employed to estimate what the validity coefficient would be if both the test and the criterion were perfectly reliable. This correction for attenuation formula is

r(corrected) = r_xy / √(r_xx · r_yy)

where r_xy is the obtained validity (test-criterion) coefficient, r_xx is the reliability of the test, and r_yy is the reliability of the criterion.
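The correction for attenuation is a one-line computation. A minimal sketch; the coefficients below are illustrative, not taken from the text:

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimated validity coefficient if both the test and the criterion
    were perfectly reliable.
    r_xy: obtained test-criterion correlation (validity coefficient)
    r_xx: reliability coefficient of the test
    r_yy: reliability coefficient of the criterion
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# Illustrative values: observed validity 0.42, reliabilities 0.80 and 0.70
print(round(correct_for_attenuation(0.42, 0.80, 0.70), 3))
```

With perfectly reliable measures (r_xx = r_yy = 1) the correction leaves the coefficient unchanged.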

FACTORS AFFECTING
VALIDITY
Criterion Contamination
The validity of a test is also
dependent upon the validity of the
criterion itself as a measure of the
particular cognitive or affective
characteristic of interest.
Sometimes, the criterion is
contaminated or rendered invalid
due to the method by which the
criterion scores are determined.

FACTORS AFFECTING
VALIDITY
Test Length
Like reliability, the validity coefficient varies directly with test length; that is, the longer a test, the greater its validity, and vice versa. Increasing a test's length affects the validity coefficient. This effect can be measured by the following formula:

Vn = (K · Vo) / √(K + K(K − 1)r)

where, Vn = the validity of the lengthened test
Vo = the validity of the original test
r = the reliability coefficient of the test
K = number of parallel forms of test X, or the number of times it is lengthened.
Let us explain with the help of an example.
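A worked example for the standard lengthening formula Vn = (K · Vo) / √(K + K(K − 1)r); the coefficients Vo = 0.40 and r = 0.50 are assumed purely for illustration:

```python
import math

def lengthened_validity(vo, r, k):
    """Validity of a test lengthened k times:
    Vn = (k * Vo) / sqrt(k + k * (k - 1) * r)
    vo: validity of the original test
    r:  reliability coefficient of the original test
    k:  number of parallel forms (times the test is lengthened)
    """
    return (k * vo) / math.sqrt(k + k * (k - 1) * r)

vo, r = 0.40, 0.50  # assumed coefficients
for k in (1, 2, 4):
    print(k, round(lengthened_validity(vo, r, k), 3))
```

Validity rises with length, but with diminishing returns, since the denominator also grows with K.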

THE RELATION BETWEEN
RELIABILITY AND VALIDITY
Reliability and validity refer to different aspects of essentially the same thing, namely, test efficiency. Reliability is concerned with the stability of test scores; it does not go beyond the test itself. Validity, on the other hand, implies evaluation in terms of an outside and independent criterion.
To be valid a test must be reliable, but the converse does not hold: a test may be highly reliable and yet show little or no correlation with anything else.
