Iteman
Version 4.1
Contact Information
Assessment Systems Corporation
2233 University Avenue, Suite 200
St. Paul, Minnesota 55114
Voice: (651) 647-9220
Fax: (651) 647-0412
E-Mail: support@assess.com
www.assess.com
Bookmarks
To view PDF Bookmarks for this manual, select the Bookmark tab on the left side of the Acrobat
window. The bookmark entries are hyperlinks that will take you directly to any section of the
manual that you select.
License
Unless you have purchased multiple licenses for Iteman 4.1, your license is a single-user license.
You may install Iteman 4.1 on a single computer and one additional computer (e.g., a desktop
and a laptop), provided that there is no possibility that both copies will be in use simultaneously.
Instructions for transferring your license between computers are in Appendix E.
Technical Assistance
If you need technical assistance using Iteman 4.1, please visit the Support section of our Web
site, www.assess.com. If the answer to your question is not posted, please email us at
support@assess.com. Technical assistance for Iteman 4.1 is provided for as long as you maintain
the then current version. Please provide us with the invoice number for your license purchase
when you request technical assistance.
Citation
Thompson, N.A., & Guyer, R. (2010). User’s Manual for Iteman 4.1. St. Paul MN: Assessment
Systems Corporation.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means—electronic, mechanical, photocopying, recording, or otherwise—without
the prior written consent of the publisher.
Iteman has a friendly graphical user interface (GUI) that makes it easy to run the program, even
if you are not familiar with psychometrics. The GUI is organized into five tabs: Settings, Files,
Input Format, Scoring Options, and Output Options. These are discussed in detail in Chapter 3:
Running the Program.
1. The most important advantage is the addition of graphics. It is now possible to produce
an item quantile plot for each item. Moreover, you control the number of points in the
plot.
2. Iteman 4.1 is able to handle pretest (trial or unscored) items—items that are not included
in the final score but for which statistics are still desired.
3. More statistics are calculated, including the alpha (KR-20) reliability coefficient with
each item deleted, several split-half reliability coefficients (both with and without
Spearman-Brown correction) and subgroup P (proportion correct) statistics for up to
seven ordered groups.
4. Instead of simple ASCII text files, the output is now a rich text format (RTF) file prepared
as a formal report, and also a comma-separated values (CSV) file that can be
manipulated (sorted, highlighted, etc.) in spreadsheet software. The program additionally
produces a CSV file of examinee scores.
5. Scaled scores and subscores can be added to the output.
6. Scores can be classified into two groups at a specified cut score, and the two groups can
use your labels.
7. Items can be analyzed relative to an external score rather than the total score on a test.
8. The maximum number of items that can be analyzed has been increased to 10,000.
9. A “batch” capability, using a “Multiple Runs File,” has been added to allow you
to run multiple data sets without having to use the graphical user interface for each run.
Multiple Runs files can be created outside Iteman in a text editor or interactively within
Iteman.
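The Spearman-Brown correction mentioned in item 3 projects a split-half correlation up to full test length. A minimal sketch of the correction (an illustration, not Iteman's own code):

```python
def spearman_brown(split_half_r):
    """Project a half-test correlation to full test length (lengthening factor 2)."""
    return 2 * split_half_r / (1 + split_half_r)

# A random split-half of 0.537 (as in the test-level output later in this
# manual) corrects to the reported S-B value of 0.699
print(round(spearman_brown(0.537), 3))  # 0.699
```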
From the unlock screen you will need to send us the two blue Computer ID and Session ID
numbers (Figure 1.2). For your convenience we have provided a “Copy IDs to Clipboard”
button. This will copy both IDs to the Windows clipboard along with a brief message and the
email address to which to send your payment information. This can then be pasted into an email
message, filled in, and sent to sales@assess.com. If you have already paid for your Iteman 4
license, be sure to add your invoice number to this message.
When we receive these codes from you, we will respond with a single numeric Activation Code
(if you have purchased a permanent license) or two codes (if you have purchased an annual
subscription license) that you will need to enter into this same window from which you obtained the IDs.
Note that if you install Iteman 4.1 on a second computer, you will need to repeat this process for
that computer since the unlock codes are specific to a given computer.
• Iteman 4.1 is permanently unlocked for academic use, but is an annual subscription for
non-academic use. The license status box (see Figure 3.1) will display the current
license status, including the number of days remaining for your subscription. As the
subscription nears the end, the background color of the box will change to alert you to the
need to renew your subscription for another year (red if you have less than 30 days
remaining, yellow if 30-90 days, and green if more than 90 days).
Person1 4213323412
Person2 1213323410
Person3 3323123413
Person4 1223323414
Person5 2214323411
Additional columns can be ignored, so it is not necessary to delete any data if your data file has
information other than ID and responses. For example, your file might contain exam dates,
locations, education level, or sensitive personal data that you do not want included in the output.
An example of this is shown in Figure 2.2; you might want to include examinee ID numbers (the
first six columns) in your output but not names. Chapter 3: Running the Program describes
how to skip these columns.
There are six columns of information in the control file for each item; begin each item on a new
line.
An example of the control file is shown in Figure 2.3. There are ten items, with nine multiple
choice items and one partial credit item. The first five are in Domain 1, while the latter five are
in Domain 2. The first four items in each domain are scored, while the fifth item in each domain is
a pretest item. The keyed answers are either 1, 2, 3, or 4 for each multiple choice item since each
item has 4 alternatives. Keys can be alphabetical or numeric. Item 7 has two keyed responses (3
and 1). If an item is polytomously scored, the key should be “+” if positively scored and “-” if
negatively (reverse) scored. Item 10 is a positively scored (+) partial credit item with item
responses that begin at 0. For item 10, the item responses will be 0, 1, 2, 3, and 4, since the item
has five options.
The control file should have as many lines as there are items in the test. The program counts the
lines of information in the control file, and that serves as the total number of items in the test.
There is a maximum of 10,000 items (lines) in Iteman 4.1.
Item01 1 4 1 Y M
Item02 2 4 1 Y M
Item03 3 4 1 Y M
Item04 4 4 1 Y M
Item05 1 4 1 P M
Item06 2 4 2 Y M
Item07 31 4 2 Y M
Item08 4 4 2 Y M
Item09 1 4 2 Y M
Item10 + 5 2 P P
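The control-file layout above can be read mechanically. The following is a hypothetical helper, assuming the six whitespace-delimited columns of Figure 2.3 (ID, key, number of alternatives, domain, scoring status, item type); it is not part of Iteman itself:

```python
def parse_control_line(line):
    """One line of the item control file: ID, key, number of alternatives,
    domain, scoring status (Y/N/P), item type (M/P)."""
    item_id, key, n_alts, domain, status, itype = line.split()
    return {
        "id": item_id,
        # for multiple choice, "31" means two keyed responses (3 and 1);
        # for polytomous items the key is "+" or "-"
        "keys": list(key) if itype == "M" else key,
        "n_alternatives": int(n_alts),
        "domain": int(domain),
        "status": status,
        "type": itype,
    }

print(parse_control_line("Item07 31 4 2 Y M")["keys"])  # ['3', '1']
```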
Use the Browse button for each file. This will activate a standard dialog window to specify the path and name of each
file.
If the Data Matrix File has an Iteman 3 (ITAP) header, be sure to check this box:
The Item Control file box will be disabled when the Iteman 3 Header box is checked, as will the
options on the Input format Tab.
The output file must have an .rtf extension. The fourth box is used if you have a file containing
examinee scores that have been produced by some method other than number-correct that you
wish to use as the basis for your statistics (for example, a scaled score reported by your testing
vendor). The scores in this file, one line per examinee, must be in the same order as those in the
examinee data file. The fifth box allows you to use a previously saved options file. The
selected options file will override the current program defaults when opened. The last text
box allows you to provide a title for your report.
If you have a special character in your data representing omitted responses or not-administered
items, these are specified next. These responses will be treated separately, with frequencies
provided in the output. If all items were answered by all examinees, you can leave these
characters as the default value, and of course no examinees will be noted as having such
characters.
If your Data Matrix File includes an ITAP header, the options on this tab will be deactivated and the
following message will be displayed:
♦ If your testing program reports scaled scores based on raw number-correct scores, these can
be calculated directly. Scaled scores are computed using the scaling function (detailed
below) for the total number correct scores and/or the domain number-correct scores. Scaled
scoring is often used to mask details about the test, such as exact number of items or raw
cutoff score, or to express scores on a different scale than number correct.
o Linear scaling: The raw scores are first multiplied by the slope coefficient then the
intercept is added to the product. For example, if you want the scores to be reported on a
scale of 100 to 200 for a test of 50 items, the scaled score could be specified as SCALE =
RAW × 2 + 100.
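The linear scaling rule above reduces to a one-line function. In this sketch the default slope and intercept reproduce the manual's 100-to-200 example for a 50-item test:

```python
def linear_scale(raw, slope=2.0, intercept=100.0):
    """SCALE = RAW x slope + intercept."""
    return raw * slope + intercept

print(linear_scale(0), linear_scale(50))  # 100.0 200.0
```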
♦ If you want to perform dichotomous classification for the total number-correct scores, click
the box next to that statement. It is possible to classify based on either the total number-correct
or the scaled total number-correct scores.
o Cutpoint: The cutpoint is the value at which scores are classified as in the high group.
Scores below the cutpoint are classified as being in the low group.
o Low group label: Label used in the Scores output file for those in the low group.
o High group label: Label used in the Scores output file for those in the high group.
To create an MRF:
1. Select the folder where the files used for the analysis are stored. Click “Add Path” to add
the Path to the MRF. (You must complete steps 2, 3, and 4 to perform an analysis.)
2. Select the Options File:
a. If you saved the program options to an external file, open this file and select “Add
Options”. The Options file will be added to the MRF.
b. If you wish to use the program defaults, select “Use Defaults.” The Keyword
“DEFAULTS” will appear in the MRF text editor next to OPTS.
3. Select the item control file (the data file(s) must follow the item control keyword):
Note that if you enter a file name that does not exist in the selected folder, and select “Add”, the
program will not add the file to the MRF. It is important to note that the options, control, and
data files for a specific analysis must all reside within the same folder.
You may delete entries in the MRF text editor by clicking on the line and pressing “Delete” or
“Backspace”. However, the following file sequence must be observed for the MRF to work
correctly:
1. The first PATH keyword must be followed by the OPTS, CTRL, and DATA lines
2. If you wish to use a different OPTS file, that file must appear after the PATH statement.
3. The CTRL statement must be followed by the DATA line(s).
To save the text in the MRF editor box to an external file, select the “Save MRF” button. This
will allow you to save the MRF to a folder of your choosing.
To run the MRF, select the “RUN MRF” button. Note that the text in the MRF editor box will
automatically be saved to an external file when you run the MRF. The saved MRF text file will
have the word ‘MRF’ appended to the end of the filename of the last selected data file.
The following output files will be generated for each DATA file in the MRF:
1. DATA.rtf The main rich text output file that includes the graphics and tables
2. DATA.csv The comma-separated values output file
3. DATA Scores.csv The scores saved as a comma-separated values file
The following output files are optional and will be generated for each DATA file in the MRF if
requested in the Options File:
4. DATA Matrix.txt The scored data matrix file
5. DATA Control.txt The item control file if the original data matrix file used an Iteman 3
Header and a scored data matrix was requested
MRFs may also be created or edited in a text editor. They must, however, be saved as pure text (not
word processing) files.
The data files ‘Exam1.txt’, ‘Exam2.txt’, and ‘Exam3.txt’ all make use of the control file
‘Control.txt’. The data file ‘Exam4.txt’ uses an Iteman 3 Header, so the CTRL line with
‘ITEMAN 3’ precedes the DATA line. The new CTRL line overrides the previous CTRL file
‘Control.txt’ and the keyword ‘ITEMAN 3’ deactivates the input of the control file. A new
PATH statement at the end of this file would change the folder location of any following OPTS,
CTRL and DATA files to be analyzed. An MRF file can have any number of lines.
Figure 3.8 shows the multiple runs window following the successful completion of the multiple
runs analysis. The window above the “Add Path” button reports the following information:
To run the sample files, follow these steps, one step for each tab.
1. Specify your files. For the MC only sample files, the Data matrix file is Sample data file 1 (MC
only).txt and the Item control file is Sample control file 1 (MC only).txt. You can name your
output file whatever you like. Figure 3.10 shows what the Files tab should now look like.
2. The sample data file has 6 columns of ID information, beginning in column 1, while item
responses begin in column 7. These are determined by counting columns in the data file (advanced
text editors can count this for you, such as PSPad; www.pspad.com). Specify these numbers on the
Input Format Tab, as shown in Figure 3.11. There is no missing data in the sample file, so you do
not have to be concerned with the Omit or Not Administered characters.
Figure 3.11: The Input Format Tab for the Sample Files
3. Specify any Scoring Options and Output Options you wish. The program will run successfully if
you do not make any changes on the fourth and fifth tabs.
Once the program has successfully run, you will be shown the message in Figure 3.12 to tell you that
the run is complete, and where to find the output file. Clicking “Yes” will open the relevant
directory.
Figure 3.12: “The run is complete.”
The primary output, the RTF report, is presented as a formal report that can be provided to test
developers. It begins with a title page which is followed by summary information of the input
specifications. This is important for historical purposes; if the report is read in the future, it will
be evident how Iteman 4.1 was set up to produce the report. If more than 300 items are
analyzed, the item-level RTF report will be divided into separate files. The test-level output and
the item-level output for the first 300 items will be saved in the first file. The second file will be
comprised of the item-level output for items 301-600. Additional item-level RTF files will be
created for all k items with each RTF file containing the output for up to 300 items.
Label Explanation
Score which portion of the test that the row is describing
Items number of items in that portion of the test
Mean average number correct
SD standard deviation, a measure of dispersion (a range of ± two SDs
from the mean includes approximately 95% of the examinees, if
their number-correct scores are normally distributed)
Min score the minimum number of items an examinee answered correctly
Max score the maximum number of items an examinee answered correctly
Mean P average item difficulty statistic for that portion; also the average
proportion-correct score if there are no omitted responses (not
reported if there are no multiple choice items)
Item Mean average of the item means for polytomous items (not reported if
there are no polytomous items)
Mean R average item-total correlation for that portion of the test
The test-level summary table (Table 4.1) allows you to make important comparisons between
these various parts of the test. For example, are the new pretest items of comparable difficulty to
the current scored items? Are items in Domain 2 more difficult than Domain 1? Were the mean
and standard deviation (SD) of the raw scores what should be expected?
Score          Alpha  SEM    Split-Half  Split-Half    Split-Half  S-B     S-B         S-B
                             (Random)    (First-Last)  (Odd-Even)  Random  First-Last  Odd-Even
All items      0.765  2.561  0.537       0.473         0.707       0.699   0.643       0.829
Scored items   0.731  2.439  0.462       0.434         0.682       0.632   0.605       0.811
Pretest items  0.519  0.754  -           -             -           -       -           -
Domain 1       0.073  0.747  0.014       0.182         -0.008      0.028   0.308       -0.016
Domain 2       0.642  1.307  0.607       0.380         0.328       0.755   0.551       0.494
Domain 3       0.590  1.874  0.209       0.149         0.600       0.345   0.259       0.750
If a dichotomous classification was performed, and all the scored items are multiple choice, the
Livingston decision consistency index is computed at the cut-score (expressed as number-correct
scores). The equation for the Livingston index is provided in Appendix C.
After the histograms for the scored items, histograms for the item statistics are provided, each
followed by a table of numerical values corresponding to the histogram. If there were scored
multiple-choice items, histograms for the item P values and Rpbis correlations are provided.
If there were scored polytomous items, histograms for the item means and the Pearson r
correlations are provided.
Next, scatterplots are provided of the P value by Rpbis if there are scored multiple-choice items,
and of the item mean by Pearson’s r if there are scored polytomous items.
If dichotomous classification was performed, then the CSEM is reported at the cutscore
(expressed as number correct). If you used a scaled cutscore, this scaled cutscore is converted to
the raw number-correct scale for reporting.
Item-Level Output
After the test-level statistics, a detailed table of the statistics for each item is provided, one item
to a page. If the quantile plots option is selected, that is also provided on the same page, as
shown in Figure 4.3 for a dichotomously scored item and Figure 4.4 for a polytomous item.
These quantile plots can be pasted into the item record for test items that are stored in ASC’s
FastTEST item banker, FastTEST Pro, or FastTEST Web.
The quantile plot, as seen in Figure 4.3, can be difficult to interpret, but is arguably the best way
to graphically depict the performance of an item with classical test theory. It is constructed by
dividing the sample into X groups based on overall number-correct score, or an external score if
used, and then calculating the proportion of each group that selected each option. For a four-
option multiple-choice item with three score groups as in the example, there are 12 data points.
The 3 points for a given option are connected by a colored line. A good item will typically have
a positive slope on the line for the correct/keyed answer, while the slope for the incorrect options
should be negative.
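The construction described above can be sketched in a few lines; this is an illustration of the computation, not Iteman's implementation:

```python
from collections import Counter

def quantile_points(total_scores, responses, n_groups=3):
    """Proportion of each score group selecting each option -- the points
    of a quantile plot for one item."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    size = len(order) // n_groups          # equal-count groups
    points = []
    for g in range(n_groups):
        members = (order[g * size:] if g == n_groups - 1
                   else order[g * size:(g + 1) * size])  # remainder joins last group
        counts = Counter(responses[i] for i in members)
        points.append({opt: counts[opt] / len(members) for opt in counts})
    return points

# 6 examinees; low scorers pick distractors, high scorers pick the key "3",
# so the keyed line rises across the three score groups
pts = quantile_points([2, 3, 4, 7, 8, 9], ["1", "2", "1", "3", "3", "3"])
print(pts[2])  # {'3': 1.0}
```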
Note: Quantile plots might not be visible if the output is opened in WordPad.
Polytomous Items
Label Explanation
N Number of examinees that responded to the item
Mean Average score for the item
Domain r* Correlation of item (Pearson’s r) with domain score
Domain Eta*+ Coefficient eta from an ANOVA using item and domain scores
Total r Correlation of item (Pearson’s r ) with total score
Total Eta+ Coefficient eta from an ANOVA using item and total scores
Alpha w/o The coefficient alpha of the test if the item was removed
Flags Any flags, given the bounds provided
The following table provides explanations for option-level information in the third table seen in
Figures 4.3 and 4.5, “Option statistics.”
Label Explanation
Option Letter/Number of the option
Weight Scoring weight for polytomous items
N Number of examinees that selected the option
Prop. Proportion of examinees that selected the option
Rpbis Point-biserial correlation of option with total score
Rbis Biserial correlation of option with total score
Mean Average score of examinees that selected the option
Color Color of the option on the quantile plot
(key) The keyed answer will be denoted by **KEY** for multiple
choice items
Item Difficulty
• The P value (Multiple Choice)
The P value is the proportion of examinees that answered an item correctly (or in the keyed
direction). It ranges from 0.0 to 1.0. A high value means that the item is easy, and a low value
means that the item is difficult.
The minimum P value bound represents what you consider the cut point for an item being too
difficult. For a relatively easy test, you might specify 0.50 as a minimum, which means that
50% of the examinees have answered the item correctly. For a test where we expect examinees
to do poorly, the minimum might be lowered to 0.4 or even 0.3. The minimum should take into
account the possibility of guessing; if the item is multiple-choice with four options, there is a
25% chance of randomly guessing the answer, so the minimum should probably not be set as low as 0.20.
The maximum P value represents the cut point for what you consider to be an item that is too
easy. The primary consideration here is that if an item is so easy that nearly everyone gets it
correct, it is not providing much information about the examinees. In fact, items with a P of 0.95
or higher typically have very poor point-biserial correlations.
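The P value and the bound checks above amount to a short calculation. In this sketch the default bounds are illustrative values drawn from the discussion above, not Iteman defaults:

```python
def p_value(scored_responses):
    """Proportion of examinees answering the item correctly (0/1 item scores)."""
    return sum(scored_responses) / len(scored_responses)

def p_flag(p, low=0.30, high=0.95):
    """Flag a P value against illustrative difficulty bounds."""
    if p < low:
        return "too difficult"
    if p > high:
        return "too easy"
    return ""

print(p_value([1, 1, 1, 0]))   # 0.75
print(p_flag(0.97))            # too easy
```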
The minimum item mean bound represents what you consider the cut point for the item mean
being too low.
The maximum item mean bound represents what you consider the cut point for the item mean
being too high. The number of categories for the items must be considered when setting the
Item Correlations
• Multiple Choice Items
The item point-biserial (r-pbis) correlation. The Pearson point-biserial correlation (r-
pbis) is a measure of the discrimination, or differentiating strength, of the item. It ranges from
−1.0 to 1.0. A good item is able to differentiate between examinees of high and low ability, and
will have a higher point-biserial, but rarely above 0.50. A negative point-biserial is indicative of
a very poor item, because then the high-ability examinees are answering it incorrectly, while the
low-ability examinees are answering it correctly. A point-biserial of 0.0 provides no differentiation
between low-scoring and high-scoring examinees, essentially random “noise.”
The minimum item-total correlation bound represents the lowest discrimination you are willing
to accept. This is typically a small positive number, like 0.10 or 0.20. If your sample size is
small, it could possibly be reduced.
The maximum item-total correlation bound is almost always 1.0, because it is typically desired
that the r-pbis be as high as possible.
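The point-biserial can be computed directly; the sketch below uses the standard mean-difference form, which is equivalent to the Pearson correlation between the 0/1 item score and the total score (an illustration, not Iteman's code):

```python
from statistics import mean, pstdev

def point_biserial(item, total):
    """r-pbis of a dichotomously scored item (0/1) with the total score:
    (M1 - M0) / s_x * sqrt(p * q), using the population SD of totals."""
    m1 = mean(t for x, t in zip(item, total) if x == 1)
    m0 = mean(t for x, t in zip(item, total) if x == 0)
    p = mean(item)
    return (m1 - m0) * (p * (1 - p)) ** 0.5 / pstdev(total)

# High scorers get the item right -> strongly positive discrimination
print(round(point_biserial([1, 1, 0, 0], [4, 3, 2, 1]), 3))  # 0.894
```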
The item biserial (r-bis) correlation. The biserial correlation is also a measure of the
discrimination, or differentiating strength, of the item. It ranges from −1.0 to 1.0. The biserial
correlation is computed between the item and total score as if the item was a continuous measure
of the trait. Since the biserial is an estimate of Pearson’s r it will be larger in absolute magnitude
than the corresponding point-biserial. The biserial makes the strict assumption that the score
distribution is normal. The biserial correlation is not recommended for traits where the score
distribution is known to be non-normal (e.g., pathology).
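One common way to obtain the biserial is from the point-biserial via the normal-ordinate relation; the formula below is that standard conversion, assumed here since the manual does not spell out its computation:

```python
from statistics import NormalDist

def biserial(r_pbis, p):
    """Biserial from the point-biserial under the normality assumption:
    r_bis = r_pbis * sqrt(p * (1 - p)) / phi(z_p)."""
    nd = NormalDist()
    y = nd.pdf(nd.inv_cdf(p))  # ordinate of N(0,1) at the p-th quantile
    return r_pbis * (p * (1 - p)) ** 0.5 / y

# |r_bis| always exceeds |r_pbis|, as noted above
print(round(biserial(0.40, 0.5), 3))  # 0.501
```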
• Polytomous Items
The minimum item-total correlation bound represents the lowest discrimination you are willing
to accept. Since the typical r correlation (0.5) will be larger than the typical r-pbis (0.3)
correlation, you may wish to set the lower bound higher for a test with polytomous items (0.2 to
0.3). If your sample size is small, it could possibly be reduced.
The maximum item-total correlation bound is almost always 1.0, because it is typically desired
that the correlation be as high as possible.
Eta coefficient. The eta coefficient is computed using an analysis of variance with the
item response as the independent variable and total score as the dependent variable.
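The eta coefficient can be computed directly from the ANOVA sums of squares; a sketch of that computation (not Iteman's own code):

```python
from collections import defaultdict
from statistics import mean

def eta(item_responses, total_scores):
    """Correlation ratio from a one-way ANOVA: item response is the grouping
    (independent) variable, total score the dependent variable."""
    grand = mean(total_scores)
    groups = defaultdict(list)
    for r, t in zip(item_responses, total_scores):
        groups[r].append(t)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ss_total = sum((t - grand) ** 2 for t in total_scores)
    return (ss_between / ss_total) ** 0.5

# Responses perfectly partition the totals -> eta = 1.0
print(eta([0, 0, 1, 1], [1, 1, 3, 3]))
```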
Option statistics
Each option has a P value and an r-pbis. The values for the keyed response serve as the statistics
for the item as a whole, but it is the values for the incorrect options (the distractors) that provide
the opportunity to diagnose issues with the item. A high P for a distractor means that many
examinees are choosing that distractor; a high positive r-pbis means that many high-ability
examinees are choosing that distractor. Such a situation identifies a distractor that is too
attractive, and could possibly be argued as correct.
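The diagnostic logic above can be expressed as a simple flagging rule. The thresholds in this sketch are illustrative only, not Iteman defaults:

```python
def flag_distractors(option_table, key, p_max=0.30, rpbis_max=0.05):
    """Flag non-keyed options whose P or Rpbis is suspiciously high.
    option_table maps option -> (proportion, rpbis)."""
    flags = {}
    for opt, (prop, rpbis) in option_table.items():
        if opt == key:
            continue
        notes = []
        if prop > p_max:
            notes.append("very attractive distractor")
        if rpbis > rpbis_max:
            notes.append("attracts high scorers; key may be arguable")
        if notes:
            flags[opt] = "; ".join(notes)
    return flags

# Option "2" draws many examinees AND correlates positively with ability
print(flag_distractors({"1": (0.10, -0.20), "2": (0.35, 0.12), "3": (0.55, 0.45)}, key="3"))
```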
30 O N 5
143534243521132435241342351423 KEY
555555555555555555555555555555 NO. ALTERNATIVES
YYYYYYYYYYYYYYYYYYYYYYYYYYYYYY ITEMS TO INCLUDE
EX001543542143554321542345134332413 EXAMINEE #1
EX002143534244522133OO2542531342513 EXAMINEE #2
EX003143534223521132435244342351233 EXAMINEE #3
EX004143534243521132435241342352NNN EXAMINEE #4
EX005143534243412132435452132341323 EXAMINEE #5
A data file with an Iteman 3 control header consists of five primary components:
Comments may also be included in the data file. Each of these elements is described in the
following sections.
1. Number of items for which responses are recorded for each examinee
(maximum is 10,000)
2. One space or tab
3. Alphanumeric code for omitted responses
4. One space or tab
5. Alphanumeric code for items not reached by the examinee
6. One space or tab
7. Number of characters of identification data recorded for each examinee.
The scoring status on the inclusion line can be specified as follows: Y = scored, N = not scored,
and P = pretest. All scored items are assumed to belong to a single domain. If scoring for more
than one domain is desired, the header should be converted to an Item Control File which
permits domain scoring.
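Reading the first header line (e.g. "30 O N 5") follows directly from the field list above; a hypothetical sketch:

```python
def parse_header_line(line):
    """First line of an Iteman 3 header: item count, omit code,
    not-reached code, number of ID characters."""
    n_items, omit, not_reached, n_id = line.split()
    return int(n_items), omit, not_reached, int(n_id)

print(parse_header_line("30 O N 5"))  # (30, 'O', 'N', 5)
```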
1. The number of items in the Item Control File versus the Data Matrix File.
2. The column in the data matrix where item responses begin versus the value in the “Item
response begins in column” box.
3. Whether the Data Matrix File includes an Iteman 3 header. You should remove the
Iteman 3 header from the Data Matrix File if you are using an Item Control File. This is
because the four lines that make up the Iteman 3 header would be scored as the first four
examinees.
1. The number of items specified in the Iteman 3 header versus the Data Matrix File
2. The column in the data matrix where item responses begin versus the value found on the
first line of the Iteman 3 header.
1. If the “Data matrix file includes an Iteman 3 Header” box is checked and you are not
using an Iteman 3 header format. If so, make sure the box is not checked before
running the program again.
2. The Data Matrix File to see if the Iteman 3 header is included or formatted properly.
You need to create an item control file if you wish to analyze item responses that begin at 0. The
item control file provides additional flexibility and permits mixed-format tests with items that
begin at both 0 and 1.
Check the data matrix file, examinee XXX did not respond to all
XXX items
You will receive this error when Iteman 4.1 reaches the end of the line before all of the item
responses are read in for any examinee other than the first one. If you received this error you
should check the following:
It should be noted that the examinee number reported in the dialog box is only the last examinee
in the data matrix to have an incomplete record. It is possible that multiple examinees did not
have a complete record.
$$\mathrm{CSEM} = \sqrt{\frac{x(n - x)}{n - 1}} \qquad (1)$$

where: $x$ = number-correct score
       $n$ = number of items

$$K = \frac{n(n-1)\,s_P^2}{\bar{x}(n-\bar{x}) - s_x^2 - n\,s_P^2} \qquad (3)$$

where: $s_P^2$ = variance of the proportion correct
       $\bar{x}$ = mean of the number-correct scores
       $s_x^2$ = variance of the number-correct scores

$$L = \frac{s_x^2\,\alpha + (\bar{x} - np_c)^2}{s_x^2 + (\bar{x} - np_c)^2} \qquad (4)$$

where $np_c$ is the cutscore expressed on the number-correct scale.
Note: $L$ equals $\alpha$ when the cutscore is at the mean of the number-correct scores.
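Equation (4) can be checked numerically. In this sketch the cutscore argument is the cut expressed as a number-correct score:

```python
def livingston(alpha, score_var, score_mean, cutscore):
    """Livingston decision-consistency index per equation (4)."""
    d2 = (score_mean - cutscore) ** 2
    return (alpha * score_var + d2) / (score_var + d2)

# L equals alpha when the cutscore is at the mean ...
print(livingston(0.80, 25.0, 30.0, 30.0))  # 0.8
# ... and rises toward 1.0 as the cut moves away from the mean
print(livingston(0.80, 25.0, 30.0, 40.0))  # 0.96
```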
User Test 1
10 1 11 o n N
0.00 1.00
0.00 9.00
0.00 1.00
1.000 0.000 0.000 1.000
N N N N N
N N N 0.000
N
Y Y N N N 3 3
K LP HP LR HR LM HM
Low
High
The defaults file allows you to change the values for the components of Iteman 4.1 listed below.
The lines of the default file include the following information (this information is case sensitive):
All of the program options, except the flags, can be saved to the defaults file by making changes
to the options in the GUI and clicking “Save the current settings to the Defaults File”. You will
be notified that the defaults file is missing upon start-up of Iteman 4.1 if you move, rename, or
destroy the file. If the defaults file is missing you can easily save a new one by clicking on the
“Save current settings” button as described above.
Summary
License transferring is a 3-step process that takes a license from a licensed program on one
computer, and gives it to a program already installed in demo mode on another computer. The
original demo program (new computer) becomes a licensed program, and the original licensed
program (old computer) reverts to a demo. This process can transfer a license between PCs
running the same program on different versions of Windows such as XP and Vista.
This process starts with two computers, one that has an unlicensed program (original demo
computer), and one that has an already licensed program (original licensed computer). It starts on
the original demo computer, where the program creates a transfer file. This transfer file is taken
to the original licensed computer, where the program there puts its license in the transfer file.
The transfer file, now containing the license, is carried back to the original demo computer. The
program on the demo computer takes the license out of the transfer file, becoming licensed. The
program on the original license computer becomes a demo after it puts its license in the transfer
file.
Select “Start Transfer” and follow the prompts. Be sure to connect the appropriate drive for use
as the transfer drive when prompted, if it isn’t already connected (Figure E.3). Remember the
drive letter assignment for this drive.
Once OK is clicked, the drive dialog is displayed (Figure E.4). “Removable (A:)” will always be
the floppy drive. Internal hard drives are marked by their drive letter only. USB flash/thumb
drives and other externally connected drives will be marked as “Removable”.
Select the drive to carry the transfer file. Once the process is complete, if a USB flash/thumb
drive or external hard drive is used, carefully disconnect it. If there is a problem during this step,
an error message will be shown. Please note any error codes and report the error to Assessment
Systems at support@assess.com.
Run the program on the original licensed computer in Administrative mode, logging in as
administrator if necessary. Click on the License button to bring up the license window, and click
on the transfer license menu in the upper left again. Select the “Transfer This License” option
(Figure E.5).
The program will ask for confirmation, then prompt once again to connect the drive or diskette
carrying the transfer file (Figure E.6). If this has not been done already, please do so, and
remember which drive letter Windows assigns to it.
Follow the prompts to the drive dialog (Figure E.6), and select the appropriate drive, which
might have a different drive letter on the original licensed computer than on the original demo
computer. The program will transfer the license to the transfer file and will indicate that it is now
in demo/trial mode (Figure E.7).
Carefully disconnect the drive once this step is complete. If there have been any errors, please
note them along with any specific codes and report them to Assessment Systems at
support@assess.com.
Follow the prompts to connect the transfer drive if this hasn’t already been done, and to select
the drive. If the license transfer was successful, a message will appear.
If there have been any errors, please note them along with any specific codes and report them to
Assessment Systems at support@assess.com.