ITEM ANALYSIS

1.) Item Validity – the degree to which a test or other measuring device truly measures what it
purports to measure.

• Content Validity – how well the test's topics and processes capture the essence of what is
being measured (are these the right types of items?)
• Criterion-Related Validity – how well a test corresponds with a particular criterion (see the
correlation sketch after this list)
(a) Concurrent Validity – test scores are correlated with a criterion measured at the same time;
(b) Predictive Validity – test scores are correlated with a criterion measured in the future (ex:
college entrance exams -> later academic performance)
• Construct Validity – an informed scientific idea developed or hypothesized to describe or
explain a behavior; something built by mental synthesis (does this test relate to other tests?)
- Constructs are made up, or constructed, by us in our attempts to organize and make
sense of behavior and other psychological processes.
(a) Convergent Validity – the test correlates well with other tests that measure the same
construct.
(b) Divergent/Discriminant Validity – a validity coefficient showing little or no relationship
between the newly created test and an existing test that measures a different construct.
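
All of the criterion-related and construct checks above come down to correlating two score
columns. A minimal sketch, assuming hypothetical numpy arrays of scores (the variable names
and numbers are illustrative only, not from these notes):

    import numpy as np

    # Hypothetical scores for 8 examinees (illustrative numbers only).
    entrance_exam  = np.array([520, 610, 480, 700, 560, 640, 500, 590])  # predictor
    first_year_gpa = np.array([2.8, 3.4, 2.5, 3.9, 3.0, 3.6, 2.6, 3.2])  # future criterion

    # Predictive validity: correlate test scores with a criterion collected later.
    # (Concurrent validity is the same computation with a criterion collected now.)
    predictive_r = np.corrcoef(entrance_exam, first_year_gpa)[0, 1]

    # Convergent vs. discriminant validity for a hypothetical new anxiety test:
    new_anxiety  = np.array([14, 22, 9, 30, 17, 25, 11, 20])
    old_anxiety  = np.array([15, 20, 11, 28, 18, 24, 10, 21])   # same construct
    math_ability = np.array([55, 40, 60, 35, 50, 42, 58, 45])   # unrelated construct

    convergent_r   = np.corrcoef(new_anxiety, old_anxiety)[0, 1]   # should be high
    discriminant_r = np.corrcoef(new_anxiety, math_ability)[0, 1]  # should be near zero

    print(f"predictive r = {predictive_r:.2f}")
    print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")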

2.) Item Reliability – indicates the consistency of scores from a test, survey, observation, or other
measuring device.
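
These notes do not spell it out, but the standard framing behind the error sources below is
classical test theory: every observed score X is a true score T plus random error E, and
reliability is the share of observed-score variance that is true-score variance:

    X = T + E        reliability r_xx = Var(T) / Var(X) = 1 - Var(E) / Var(X)

Each source of error below adds to Var(E) and therefore lowers r_xx.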

Sources of Error:

• Time Sampling Error – error due to testing occasions
o Test-retest Reliability – the same test is given to a group of subjects on at least two
separate occasions
• Domain Sampling Error – error due to test items
• Internal Consistency Error – error due to testing multiple traits (computation sketches for
the three methods below follow this list)
o Split-half method
o Kuder-Richardson formula
o Cronbach's alpha – a measure used to assess the internal consistency of a set of scale
or test items
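
Each internal-consistency method above is a short computation over an item-response matrix,
and test-retest reliability is just the correlation between scores from two occasions (the same
np.corrcoef call as in the validity sketch). A minimal sketch of split-half (with the
Spearman-Brown correction), KR-20, and Cronbach's alpha, assuming a hypothetical 0/1 matrix
`items` with one row per examinee and one column per item:

    import numpy as np

    # Hypothetical right/wrong responses: 6 examinees (rows) x 6 items (columns).
    items = np.array([
        [1, 1, 1, 0, 1, 1],
        [1, 0, 1, 1, 0, 1],
        [0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0, 1],
        [1, 1, 0, 1, 1, 0],
    ])
    n_items = items.shape[1]
    totals = items.sum(axis=1)

    # Split-half: correlate odd-item and even-item half scores, then step the
    # half-test correlation up to full test length with Spearman-Brown.
    odd_half  = items[:, 0::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd_half, even_half)[0, 1]
    split_half = 2 * r_half / (1 + r_half)

    # Kuder-Richardson formula 20 (dichotomous items only):
    # p = proportion passing each item, q = 1 - p.
    p = items.mean(axis=0)
    kr20 = (n_items / (n_items - 1)) * (1 - np.sum(p * (1 - p)) / totals.var(ddof=1))

    # Cronbach's alpha: generalizes KR-20 to items scored on any scale.
    item_vars = items.var(axis=0, ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / totals.var(ddof=1))

    print(f"split-half (Spearman-Brown) = {split_half:.2f}")
    print(f"KR-20 = {kr20:.2f}, Cronbach's alpha = {alpha:.2f}")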
