
Item Analysis

The systematic evaluation of the effectiveness of each item of a test.
Item analysis can tell us:

The difficulty of the item
The discriminating power of the item
The effectiveness of each alternative
Simplified Item-Analysis Procedure

There are a number of different item-analysis procedures that might be applied (Downie, 1967). For informal achievement tests used in teaching, only the simplest of procedures seems warranted. The following steps outline a simple but effective procedure. We shall use 32 test papers to illustrate the steps.

1. Arrange the test papers (all 32 papers) in order from the highest score to the lowest score.

2. Select approximately one-third of the papers with the highest scores and call this the upper group (10 papers). Select the same number of papers with the lowest scores and call this the lower group (10 papers). Set the middle group of papers aside (12 papers). Although these could be included in the analysis, using only the upper and lower groups simplifies the procedure.

3. For each item, count the number of students in the upper group who selected each alternative. Make the same count for the lower group.
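The counting in steps 1 to 3 is easy to automate. The sketch below is not part of the original handout; it assumes an invented list of (total score, chosen alternative) pairs for a single item, sorts the papers from highest to lowest score, takes the top and bottom groups, and tallies how often each alternative was chosen in each group.

from collections import Counter

def upper_lower_counts(papers, group_size):
    # papers: list of (total_score, chosen_alternative) pairs for one item.
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)   # step 1: highest to lowest
    upper = ranked[:group_size]                                 # step 2: upper group
    lower = ranked[-group_size:]                                #         lower group
    upper_counts = Counter(choice for _, choice in upper)       # step 3: tally each alternative
    lower_counts = Counter(choice for _, choice in lower)
    return upper_counts, lower_counts

# Invented six-paper example; with the 32 papers above, group_size would be 10.
demo = [(30, "B"), (28, "B"), (25, "C"), (20, "A"), (15, "D"), (10, "A")]
print(upper_lower_counts(demo, group_size=2))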

4. Estimate item discriminating power by comparing the number of students in the upper and lower groups who got the item right. Note in our sample item above that six students in the upper group and two students in the lower group selected the correct answer. This indicates positive discrimination, since the item differentiates between students in the same way that the total test score does. That is, students with high scores on the test (upper group) got the item right more frequently than students with low scores on the test (lower group).
Although analysis by inspection may be all that is necessary for most purposes, an index of discrimination can be easily computed. Simply subtract the number in the lower group who got the item right from the number in the upper group who got the item right, and divide by the number in each group. Thus, for our sample item, the computation would be as follows:

Index of discrimination = (6 - 2) / 10 = 0.40
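As a quick check of the same arithmetic, here is a minimal sketch (not from the original text) that computes the index from the counts quoted above: six correct in the upper group, two in the lower group, ten papers in each group.

def discrimination_index(upper_correct, lower_correct, group_size):
    # (upper-group correct - lower-group correct) divided by the number in each group
    return (upper_correct - lower_correct) / group_size

print(discrimination_index(6, 2, 10))   # 0.4 for the sample item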

5. Determine the effectiveness of the distracters by comparing the number of students in the upper and lower groups who selected each incorrect alternative. A good distracter will attract more students from the lower group than the upper group. Thus, for our illustrative item-analysis data in step 3, alternatives A and D are functioning effectively, alternative C is poor since it attracted more students from the upper group, and alternative E is completely ineffective since it attracted no one.
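The same rule of thumb can be written directly in code. This is an illustration only: the per-alternative counts below are invented to match the pattern just described (A and D drawing more lower-group than upper-group students, C the reverse, E drawing no one), and B is assumed to be the keyed answer.

def classify_distracter(upper, lower):
    # upper / lower: students in each group who chose this incorrect alternative
    if upper == 0 and lower == 0:
        return "ineffective: attracts no one"
    if lower > upper:
        return "functioning: attracts more of the lower group"
    return "poor: attracts as many or more of the upper group"

# Invented counts (upper, lower) consistent with the discussion above; B is the key.
counts = {"A": (1, 3), "C": (2, 1), "D": (1, 4), "E": (0, 0)}
for alternative, (upper, lower) in counts.items():
    print(alternative, "-", classify_distracter(upper, lower))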

An analysis such as this is useful in evaluating a test item, and, when combined with an inspection of the item itself, it provides helpful information for improving the item.
Simplified Methods of Treating Test Scores

Test scores are normally described by two measures:

1. Average score, or measure of central tendency
2. Spread of scores, or measure of variability

Three types of averages:

1. Mean
2. Median
3. Mode

Two ways of describing variability:

1. Range
2. Standard deviation
Determining the mean and standard deviation.
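As a brief illustration (not from the original slides), the sketch below computes the three averages and the two measures of variability listed above using Python's statistics module. The scores are invented, and the use of the population standard deviation (rather than the sample form) is an assumption, since the slides do not say which is intended.

import statistics

scores = [12, 15, 15, 18, 20, 22, 25]       # invented test scores

mean   = statistics.mean(scores)            # average score
median = statistics.median(scores)          # middle score when ranked
mode   = statistics.mode(scores)            # most frequent score
rng    = max(scores) - min(scores)          # range: highest minus lowest
stdev  = statistics.pstdev(scores)          # population standard deviation

print(mean, median, mode, rng, stdev)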
Validity and Reliability

Valid tests measure what they were actually designed to measure.

Tests of validity:
1. Content
2. Criterion-related
3. Construct

Reliable tests measure what they were designed to measure consistently.

Methods of determining reliability:
1. Test-retest method
2. Equivalent-forms method
3. Test-retest with equivalent forms
4. Internal-consistency method
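The slides list these methods without formulas. For the test-retest and equivalent-forms methods, the usual estimate is the correlation between the two sets of scores; the sketch below shows that step with invented scores and a small local helper, pearson_r. Internal-consistency estimates such as KR-20 or coefficient alpha need item-level responses and are not shown here.

def pearson_r(x, y):
    # Pearson correlation between two equally long lists of scores.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Invented scores from two administrations of the same test (test-retest method).
first  = [55, 60, 62, 70, 75, 80]
second = [58, 59, 65, 72, 74, 83]
print(pearson_r(first, second))   # values near 1.0 indicate consistent measurement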
REFERENCES

www.cte.cornell.edu

www.msu.edu/dept/soweb/writitem.html#stem

Hubbard, Evaluation in Education

Whitfield, R. C., Criteria of Quality for Multiple Choice Questions

Lindquist, Educational Management

ijcrb.webs.com

Interdisciplinary Journal of Contemporary Research, April 2013, Vol. 4, No. 12
