Validity
The validity of any measuring instrument depends upon the accuracy with which it measures what it is intended to measure, as judged against a standard criterion.
A test is valid when the performance
which it measures corresponds to the
same performance as otherwise
independently measured or objectively
defined.
Validity
The validity of a test is defined as the measurement characteristic of the test that shows whether the test measures what we intend it to measure, as well as the degree to which it measures it.
Cronbach (1949, as cited in Jordan, 1966) says: "A test is valid to the degree in which we know what it measures or predicts."
In fact, one could say that a test is valid to the extent that it is a good measure of what it is intended to measure.
Validity is, therefore, the most important problem that
must be considered when evaluating a test, and it refers
to the appropriateness, usefulness and significance of
the conclusions drawn from the test results.
To ensure the validity of the test we must take into account both the structure of the test, which provides its internal validity, and its correlation with some external criterion.
METHODS OF CALCULATING
VALIDITY
Content validity
Criterion based validity
Construct validity
Factorial validity
Construct validity
Construct validity is a clear correlation between a particular test and its presumed construct, the quality or trait the test is assumed to measure, as shown by empirical data and logical analysis, for example the degree to which certain constructs or concepts can explain performance on the test.
Construct validity is used when we are interested in the nature of the object being measured. It determines whether the test is a valid measure of that object, and refers to the test's connection with the theoretical "construct" or trait that the test is assumed to measure. Establishing construct validity is a process of accumulating evidence for or against the construct, achieved in part by looking for predictable, systematic differences in the results of the test that is supposed to measure it. Because psychological constructs are abstract, it is difficult to determine whether a test has construct validity.
Construct validity can be checked in two ways: by demonstrating a positive correlation between the results of the test of interest and another test that measures the same construct (convergent validity), and by showing that there is no connection between the test and measures that have nothing to do with the theoretical construct under investigation (divergent validity).
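As a minimal numerical sketch of these two checks, Pearson correlations can be computed between score lists; the test names and all figures below are invented for illustration, not drawn from any real test battery:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of ten examinees
anxiety_test = [12, 15, 11, 18, 9, 14, 16, 10, 13, 17]
other_anxiety_test = [30, 35, 28, 40, 25, 33, 37, 26, 31, 38]  # same construct
shoe_size = [42, 38, 44, 41, 39, 43, 40, 42, 38, 41]           # unrelated measure

# Convergent validity: high correlation with a test of the same construct
print(round(pearson_r(anxiety_test, other_anxiety_test), 2))
# Divergent validity: near-zero correlation with an unrelated measure
print(round(pearson_r(anxiety_test, shoe_size), 2))
```

A high first value together with a near-zero second value would support the construct interpretation; real validation studies use much larger samples.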
It also includes the method of factor analysis, which studies the relationships among all the tasks of the test whose validity is being determined. "Factor analysis is a set of statistical procedures intended to determine the basic dimensions that are responsible for the relationships among a larger number of variables." (According to Kolesari and Petz, 1999).
Content validity
Content validity becomes more of an issue
for tests of achievement or ability and less a
concern for tests of personality, where high
content validity may limit the overall
usefulness/applicability of the test. Further, it is useful for tests of cognitive skills that require an assessment of a broad range of skills in a given area.
The concept of content validity is employed
in the selection of items for a test.
Content validity
Less defensible than content validity is the judgement
process called face validity. A test is said to have face
validity when it appears to measure whatever the
author has in mind.
Content validity is often confused with face validity, that is, with what a scale appears to measure based on a reading of its various items. Rating scales for various hypothesized traits, neurotic inventories, attitude scales and even intelligence tests often claim face validity.
Judgment of face validity is very useful in helping an
author decide whether his test items are relevant to
some specific situation (that is, the industry) or to
specialized occupational experiences.
Construct Validity
The construct validity approach is much more complex than other forms of validity and is based on the accumulation of data over a long period of time. Construct validity requires the study of test scores in relation not only to the variables that the test is intended to assess, but also to variables that have no relationship to the domain underlying the instrument.
Factorial Validity
Another method to study construct
validity is with the help of factor
analysis. One may postulate a factorial
structure for a specific test, given one's
assumptions about both the
characteristics that are being assessed
and the theory from which they are
derived. A confirmatory factor analysis is
then performed to test the hypothesis.
FACTORS AFFECTING
VALIDITY
Group Differences
The characteristics of the group of people on whom the test is validated affect its criterion-related validity. Differences among groups on variables like sex, age and personality traits may affect the correlation coefficient between the test and the selected criteria. As with the reliability coefficient, the magnitude of the validity coefficient depends on the degree of heterogeneity of the validation group on the test variable.
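One standard way to quantify this dependence is Thorndike's correction for range restriction, which estimates what the validity coefficient would be in a more heterogeneous group; the coefficients below are hypothetical:

```python
import math

def correct_for_range_restriction(r, u):
    """Thorndike's correction for range restriction.
    r -- validity coefficient observed in the restricted (homogeneous) group
    u -- ratio of the unrestricted to the restricted standard deviation
    """
    return r * u / math.sqrt(1 - r ** 2 + (r ** 2) * (u ** 2))

# Hypothetical: r = 0.33 in a homogeneous group whose standard deviation
# is two-thirds that of the full population (u = 1.5)
print(round(correct_for_range_restriction(0.33, 1.5), 2))  # prints 0.46
```

The corrected coefficient grows with u, matching the point above that a more heterogeneous validation group yields a larger validity coefficient.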
Test Reliability
An unreliable test cannot be very valid; a test of low reliability also has low validity. There is a formula that can be employed to estimate what the validity coefficient would be if both the test and the criterion were perfectly reliable. This correction for attenuation formula is

r_corrected = r_xy / √(r_xx × r_yy),

where r_xy is the observed validity coefficient and r_xx and r_yy are the reliability coefficients of the test and the criterion respectively.
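The correction divides the observed validity coefficient by the square root of the product of the two reliability coefficients; a minimal sketch, with hypothetical figures:

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimated validity if test and criterion were perfectly reliable.
    r_xy -- observed validity coefficient
    r_xx -- reliability of the test
    r_yy -- reliability of the criterion
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical: observed validity 0.56, test reliability 0.81,
# criterion reliability 0.64
print(round(correct_for_attenuation(0.56, 0.81, 0.64), 2))  # prints 0.78
```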
Criterion Contamination
The validity of a test is also
dependent upon the validity of the
criterion itself as a measure of the
particular cognitive or affective
characteristic of interest.
Sometimes, the criterion is
contaminated or rendered invalid
due to the method by which the
criterion scores are determined.
Test Length
Like reliability, the validity coefficient varies directly with test length; that is, the longer a test, the greater its validity, and vice versa. Increasing a test's length affects the validity coefficient. This effect can be estimated by the following formula

r_k = √k × r_xy / √(1 + (k − 1) × r_xx),

where r_xy and r_xx are the validity and reliability of the original test and k is the factor by which the test is lengthened.
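Assuming the standard formula for the validity of a test lengthened k times (comparable added items, criterion unchanged), the effect can be sketched as follows; all coefficients are hypothetical:

```python
import math

def validity_of_lengthened_test(r_xy, r_xx, k):
    """Estimated validity of a test lengthened k times.
    r_xy -- validity coefficient of the original test
    r_xx -- reliability coefficient of the original test
    k    -- factor by which the test is lengthened
    """
    return math.sqrt(k) * r_xy / math.sqrt(1 + (k - 1) * r_xx)

# Hypothetical: validity 0.40 and reliability 0.70; doubling the test length
print(round(validity_of_lengthened_test(0.40, 0.70, 2), 2))  # prints 0.43
```

Validity rises with k but is bounded above (by r_xy / √r_xx), so lengthening a test yields diminishing returns.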