
User Guide 3.3
Introduction

About The Guide


The TAO User Guide is designed to aid in the use of TAO for the creation and delivery of
assessments.

The TAO User Guide focuses on the creation of tests and deliveries, as well as the components you need to deliver a test, such as test items and test-takers. The guide will walk you through everything from creating your first item to retrieving your results.

More advanced topics such as LTI Applications and Metadata are also included, along with features available in OAT's Premium Edition.
Introduction

What is TAO?
TAO is an Open Source e-Testing platform that empowers you to build, deliver, and share
innovative and engaging assessments online – in any language or subject matter.

TAO ("Computer-Based Testing" or Testing Assisté par Ordinateur in French), was created by the
University of Luxembourg and is now maintained primarily by Open Assessment Technologies
(OAT).

TAO is the first commercial-grade Open Source assessment development software on the market. It is QTI and LTI standards-based, and operates under audit-proof transparency. Developers can access the source code for their own test-creation or administration purposes, which opens up a wide range of potential customizations. Complete ownership of test design has never been this easy: without the restrictions and high costs of proprietary testing, all assessments can easily be displayed with the educational institution's signature details. Furthermore, TAO is fully compatible with just about all of your favorite commercial add-ons.
Introduction

Take a Tour
This section takes you on a short tour of TAO, giving you an overview of how to prepare and
organize your assessments using TAO.

Why use TAO?

TAO helps you set up and organize all types of assessments quickly and efficiently. TAO's simple
architecture allows for the easy navigation of resources which enables you to re-use existing tests
or parts of tests. You can also add new assessment material to previous assessments, including
those used by other teachers or with other groups.

Putting an Assessment together

An assessment in TAO consists of several different building blocks: Interactions, Items and Tests.

An interaction is the most basic unit in an assessment, and takes the form of a question (e.g. multiple choice) or other task type (e.g. fill-in-the-blank). An item is a set of interactions to be used together, along with any supporting material, and a test is a group of items, together with information on how they are ordered and presented to the test-taker.

Let's walk through the steps to create an assessment and manage your assessment resources.

1. Check what test items are already available.

Test items prepared by other users may be available to you, as well as items you have prepared
yourself for previous assessments.

In TAO's Assessment Builder Bar, select the Items icon and examine the test items that are
already available in the Library on the left.

If you do not have enough ready-to-go items, then you will need to create new ones utilizing TAO's
item authoring tool.
Item Library

2. Create items.

The Items page consists of three parts. On the left is the Library, where you can view the inventory of existing items. In the center is the Canvas, where you can provide a label for a new Item and then author it, or edit the label, author or preview an existing Item. Once you are authoring an item, the Properties Panel appears on the right, where you can select component settings for your items, interactions, and tests, such as your chosen scoring method. This three-part layout is a common feature of the TAO system.

To create a new Item, you would select the New Item icon on the bottom of the left panel. See
the Creating a new item section for more details.

3. Add interactions to your item.

Your new item will consist of interactions which are added by Authoring an item. Interactions
include the following types: Common, Inline, Graphic, and Custom Interactions. For further
information on these types, see the Interactions section.

For each type, the procedure to create interactions will vary. See detailed descriptions of these
procedures in each Interaction section.
Populating your Item with Interactions

4. Use your items in a test.

Once you have populated your item with interactions, you will need to build it into a test before you
can use it in an assessment. A test can include one or more items.

To do this, select the Tests icon on the assessment builder bar. You can add items to a test by
selecting them from the Test Library. See the Creating a new test section for more details.
Creating a new Test

5. Give your test a trial run.

You can try your test by setting up a test-taker account. A trial helps ensure everything will run as
expected during the actual student assessment. After checking the Test with a trial run, the next
step is to set up a Delivery.

6. Register Your Test-takers.

Students need to be registered as Test-takers in TAO before the first assessment. In most cases,
this is done by the instructor or course administrator using student rosters.

To do this, select the Test-takers icon in the Assessment Builder Bar. See the Creating test-
takers section for more details.
Creating new Test-takers

7. Assign test-takers to Groups.

After entering or uploading the Test-taker profiles of all your students in TAO, you will need to
organize them into groups depending on which students are taking which assessments. It may be
that an entire class of students is taking the same assessments, or it may be that you need to
create smaller groups of test-takers for certain types of assessment.

To do this, select the Groups icon in the assessment builder bar. See the Creating a new group
section for more details.
Groups

8. Publish and deliver your test

Before students can take the assessment you have prepared, the test needs to be assembled as a
Delivery.

Assembled deliveries only take a few moments to put together and govern when a test will be
taken, which selected individuals or groups will take the test, and how long the test will last.

To do this, you will need to select the Deliveries icon in the assessment builder bar. See the
Creating a new delivery section for more details.
Deliveries

9. View Your Results.

After the assessment is over, you will want to see how your test-takers did.

To do this, select Results in the assessment builder bar. See the Viewing results section for more
details.

Viewing the Results


Introduction

Installing TAO
To learn more about the TAO installation process, please refer to the following chapters in the TAO
Administrator Guide.

Prerequisites
CentOS, Red Hat and Fedora
macOS Mojave
Ubuntu and Debian
Windows
Web Installer
Items

What is an Item?
"An item is a set of interactions (possibly empty) collected together with any supporting material
and an optional set of rules for converting the candidate's response(s) into assessment outcomes."
- Question and Test Interoperability standard, published by IMS Global.

Items are the basic building blocks for Tests. They may contain a single Interaction (a simple item), or several closely related interactions, all of the same type or a mixture of types (a composite item). Note that items contain interactions, but are not interactions themselves. (See the Interactions section for a definition of interactions.)

Beyond the interactions contained within them, items may include titles, images, and text, which
help the Test-taker understand the expectations and context of the assessment material presented
within. Item complexity ranges from simple items with a single interaction to composite items with
multiple interactions.
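
Under the hood, every TAO item is stored as a QTI XML document matching this definition. The sketch below is a minimal, hand-written illustration (the identifiers, title and wording are invented for this example, not taken from a real TAO export): the responseDeclaration holds the rules for converting the candidate's response into outcomes, and the itemBody holds the supporting material and the interaction itself.

<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="sample-item" title="Sample item"
                adaptive="false" timeDependent="false">
  <!-- rules for turning the response into a score -->
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>choice_1</value></correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <!-- supporting material plus the interaction(s) shown to the test-taker -->
  <itemBody>
    <p>Supporting text, images or media go here.</p>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>A single question (here, a simple multiple-choice interaction).</prompt>
      <simpleChoice identifier="choice_1">First option</simpleChoice>
      <simpleChoice identifier="choice_2">Second option</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <responseProcessing template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>

You never need to write this XML by hand: TAO generates it as you author items, and it is what gets packaged when items are imported or exported in QTI format.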
Items

Items: An Overview
Items are the basic building blocks for Tests. For a full definition, see What is an Item?.

This section provides an overview of how to manage your items, including their creation, what to put in an item, viewing them, and their use once created.

1. Creating a new item

Items first need to be created, before they are then populated with the desired Interactions and
any further material required so they can be used in assessments. See Creating a new Item for
information on how to do this.

2. What to include in your item

Pictures and media such as audio or video files can be included in your Item. For more information
on how to do this, see Adding Media, for adding media to your items, and Media Interactions for
adding audio and video files.

For managing resources which are included in your items (which includes all of the resources
mentioned above), see the Media Manager section.

3. Making decisions about your item

There are some decisions you will need to take during the process of creating your item. These
involve the following:

How do you want your item to appear? For decisions on the style of your item, see the Style Editor.

Will the Test-taker receive feedback during the test? See Modal feedback.

How will the item be scored? There are various possibilities here: see Item scoring rules.

4. Previewing your items

Before finalizing your item, it is a good idea to preview it. For more information on how to do this,
see: Preview.

5. Importing and exporting your items

You can import and export your items to and from different storage devices. For more information
on how to do this, see Importing Items and Exporting Items.
Items

Creating a New Item


Together, questions and other types of Interaction form Items, which in turn make up Tests. Items are created and populated with interactions, and can be combined to assess Test-taker performance.
There are more than 17 types of interaction in TAO. Note that an item generally contains only one
interaction type.

The videos below will demonstrate how to easily create some of the most popular items using TAO.

How to create an item with a Choice (multiple-choice) Interaction using TAO:

Choice Interaction

How to create an item with an Inline Interaction using TAO:


Inline Interactions

How to create an item with an Associate Interaction using TAO:

Associate Interaction

Now, let's walk through the steps of creating an item.

1. Click on the Items icon in the Assessment Builder Bar.

This will take you to the Items page. The Library on the left-hand side will show existing items. The
last item to be edited (either by you or a previous user) will be highlighted in the library. In this
tour, however, you will create a new item.

2. Click on the New item icon in the button bank under the library.

This will create a new item in the selected folder.

Note: To create a new item in a different folder, click on that folder in the library, and then select
the New item icon in the button bank. To create a new folder (in TAO these represent classes),
click on New class in the button bank. Select a location within the library, and the new folder
(class) will be created there.

Creating a new Item

3. Label and save your item.

Creating a new item will bring up a new dialog box with the option to name (or label) your item.
After labeling your item, click Save. This produces an empty item, which you can now populate with
interactions.

4. Click on the Authoring icon in the Action Bar.

This will take you to the empty item you have created. You can now start to fill this with your
interactions. In the library on the left, you will see the Common Interactions catalog. You will find
the other types of interaction below this catalog: Inline Interactions, Graphic Interactions and
Custom Interactions. You can navigate these catalogs to choose the types of interaction you want
to use for your item. The Interactions section will tell you about the different types of interaction
which you can use in TAO.

You can create an item which contains more than one interaction. Once you have added and
prepared one interaction in your item, drag another interaction template from the Interactions
Library onto the Canvas below or beside the interaction you have just authored, and repeat the
authoring process for the new interaction.

5. Select the settings for your item.

Two settings can be chosen for your new item in the Item Properties to the right of the canvas.

Time dependent: Check the Time dependent box if you wish the length of time a test-taker takes
to complete the item to be recorded. This information will be used when the response is processed.

Language: Select the language of your item from the drop-down menu. This will be used for the
Text-to-Speech functionality. The default language is English.

Optional Extras
Duplicating an existing item

You can make a copy of an already existing item by clicking on the Duplicate icon in the
button bank under the library. A duplicate will then be created in the folder of the item you have
duplicated, with the same name but with "bis" on the end.

Copying or moving an existing item

You can make a copy of an already existing item by clicking on the Copy To icon in the button
bank under the library.

A dialog box will appear on the canvas. Select a destination folder, and click on Copy . A copy of the
item will then be created in the folder you have selected, with the same name but with "bis" on the
end.

Move To works in exactly the same fashion.


Items

Importing Items
Items and Interactions can be imported from one computer to another, using an operation called
Import.

The steps to import items are as follows.

1. Click on Items on the Assessment Builder Bar.

This will take you to the Item Library, which you will see on the left.

2. Click on the Item class in the library that you wish to import the new item into.

3. Click on Import in the button bank below the library.

This opens a dialog box which asks you to select the format of the item to be imported. The
supported input formats are: QTI (Question and Test Interoperability) packages or items, APIP
(Accessible Portable Item Protocol) packages, RDF (Resource Description Framework) or CSV
(Comma-Separated Values) files. Be sure that the Item to be imported is in this format, or the
import won't work.

Importing Items

4. Click the blue Browse button to find the file intended for import (alternatively, the file may be
dragged and dropped into the box below the button).

5. Once the item is selected, click on the blue Import button.


This will import the item into the Item library, after which it can be added to Tests, or modified.
Items

Exporting Items
Interactions can be put together into Items on almost any computer that has access to TAO.
However, there will be situations in which sharing Items will be useful. For instance, two teachers
who teach the same course may collaborate and share the responsibility of creating questions for
an upcoming Test.

This can be done in a few easy steps.

1. Click on the Items icon in the Assessment Builder Bar.

2. Click either on a Class or an Item symbol in the Item Library on the left-hand side to
select one or multiple items.

3. After the selection, click Export in the button bank below the library.

The dialog box will ask you to choose an export format from the list: QTI, APIP or RDF. If the Item is
to be exported as a Question and Test Interoperability (QTI) formatted document, it will save the
file(s) as a compressed .zip file.

You also need to confirm that the folder or file highlighted is the one that should be exported, by
checking the Items box.

Exporting Items
4. Click the blue Export button in the dialog box to continue with the export.

5. Select the location to which you want to export your item, and then click Save.

The item can then be transferred either to a data storage device or a computer network. The next
step in the transfer is to import the item onto the desired computer.
Items

Moving or copying items


Items can be moved or copied within the same library.

In any library inside TAO you can move elements to a different folder by dragging and dropping them onto the new destination. This applies not only to items but also to tests, test-takers, etc. You can copy elements by duplicating them and then moving the duplicate to a different directory. This is, however, limited to desktop environments.

The buttons Copy To and Move To allow you to do this in a different, platform-independent way.

1. Click on the Copy To or Move To icon in the button bank under the library.

This will show the dialog below:

Copying an item

2. Select the new directory and click on Copy or Move at the bottom of the dialog depending on
your original action.
Items

Adding Media
Items and interactions can contain media, e.g. images, videos or sound files. For instructions on how
to add Media (videos or sound files), refer to the Media interaction section. This section describes
how to add an image to a typical text block.

1. Whenever a text block is selected in your Interaction or Item (e.g. the choices in a choice interaction), a toolbar appears in the grey bar above the item.

You can insert either images or other media by selecting the appropriate icon. This tutorial focuses on inserting an image, but the workflow for other media is almost identical.

2. Click on the image icon.

This opens a window which provides access to the Resource Manager, and consists of three panels.
As with the main window, the left panel is a Library: the Resource Library. The middle panel shows
the list of pictures which are available within the highlighted Class in the resource library. The right
panel provides a preview and properties. If your picture is already in the resource library, click on
that picture and skip to Step 4. If it is not, then carry out Step 3.

3. To add a new image from your desktop, click the blue Add file(s) button. Then, click the blue
Browse... button.

You can now navigate your computer system to select the image you wish to upload into your TAO
system. Once selected, click on the green Upload button. When it has been uploaded, a preview
image of the graphic will appear. You can select any .jpg, .gif, or .png graphic file to upload and
display.
Adding Media to your Item

4. Click the green Select button.

This uploads your image into the text block. If you begin typing without hitting Return, the text will center vertically. If your text extends beyond the first line, however, it will wrap underneath the image. If the image is followed by a longer text, it is best to press Return on your keyboard at least once before starting to type.
Items

Item Scoring Rules


Item scores are determined by the student's performance in the various Interactions which make
up the Items of which the Test is composed. Interactions generate individual scores which count
towards the overall score of the Item. These individual scores can be tallied in different ways. This
chapter shows how to configure these.
For information on Test Scoring Rules, see Test Scoring Rules and Outcome Declarations.

After you have created your interaction, go to the Response window and follow the steps below to
set up your chosen scoring method.

1. In the Response Properties panel on the right, locate the Response Processing pull-down choice box offering the two options: Match Correct and Map Response.

Match Correct: With this processing option, either the question is answered correctly or incorrectly.
There is no partial credit, so if part of the response is incorrect, the whole answer is marked as
incorrect.

Map Response: With this processing option, it is possible to give partial credit for a response. This
is useful for questions where the answer is given in multiple parts. With the map response option,
some parts of the answer can also be weighted more heavily than other parts if you consider them
to be of higher importance.

Item Scoring Rules

2. Select the Response Processing type which is appropriate for the question.
This is going to depend to a large extent on preference, particularly with respect to partial credit.

If you select Match Correct, stop here - the scoring settings for this interaction are complete.

3. If you select Map Response, review the responses in your interaction, then determine and assign corresponding weights to each potential response.

Partial credit can be awarded here by assigning values in the weight boxes in order of importance
in your Interaction canvas.

Note: For most interaction types, score boxes are located to the right of the potential responses.

4. Set the values of the Score Range fields, located in the Response Properties panel.

This is where you can specify the minimum and maximum number of points awarded for this
interaction. Its use is optional.

The minimum score indicates the minimum number of responses the Test-taker is required to
select for it to be a valid answer. If the interaction involves selecting more than one response, the
maximum score should reflect the total of the weights for all correct responses, i.e. the maximum
score possible for your Interaction. If only one response is expected, the maximum should equal
that of the highest weight. Adjust these values as needed.

So if, for example, there are two correct responses in the question, and you have assigned a weight
of 1 to one of them, and a weight of 2 to the other, the Test-taker would get a score of 3 if they are
both correct (providing the maximum is set to 3 or above).

Other values in the Score Range field include Mapping Default, which contains the default value
given if no specific scores are assigned to a response, and the check box used to Define Correct
Response. The latter should be checked if there are specific correct responses, and left unchecked
if the correctness of the answer is dependent on the sum of weights accumulated by the Test-taker
in answering the interaction.
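
For those curious about how these settings are stored, the fragment below is an illustrative, hand-written QTI response declaration (the choice identifiers are hypothetical) matching the example above: two correct responses weighted 1 and 2, a Mapping Default of 0, and a score range of 0 to 3.

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>choice_1</value>
    <value>choice_2</value>
  </correctResponse>
  <!-- defaultValue is the Mapping Default; lowerBound/upperBound are the Score Range -->
  <mapping defaultValue="0" lowerBound="0" upperBound="3">
    <mapEntry mapKey="choice_1" mappedValue="1"/>
    <mapEntry mapKey="choice_2" mappedValue="2"/>
  </mapping>
</responseDeclaration>
<responseProcessing template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response"/>

With this template, the score is the sum of the mapped values of the responses given (unmapped responses contribute the default value), clamped to the lower and upper bounds.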

Lastly, it should be noted that TAO assigns a Response Identifier to each interaction response. It is
best not to change this.
Items

Modal Feedback
Modal feedback can be defined as a message presented to the Test-taker outside of the Item, when
the test-taker selects an answer. Feedback may be triggered by either a correct or an incorrect
answer, depending on the conditions set by the test author.

After you have created your Interaction, go to the Response window and follow the steps below if
you would like the test-takers to receive modal feedback during this interaction.

1. Many of the interaction types have the option of giving Modal Feedback. If this is available, click
on the Add a Modal Feedback button in the Response Properties Panel on the right.

Adding Modal Feedback to your Interaction

This opens a modal feedback panel in which you can insert the feedback and specify when it
should be given.

2. Insert your feedback and feedback conditions.

If the feedback is to be given when the test-taker gets the correct answer, then ensure that the if-
statement is set to correct. You can also set it to incorrect or any numeric comparative
relationship.

Then fill in the then-statement by first clicking on the blue Feedback button, and then entering the
desired text in the pop-up window. When complete, click the done button.
If alternative feedback for the opposite condition is required, click the Feedback button for the else-
statement, and follow the same procedure used to set up the initial feedback.
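
To illustrate how this is stored, each feedback message becomes a modalFeedback element in the item's QTI, and the if/then/else condition you configure determines which message's identifier the response processing selects at runtime. The sketch below is hand-written and uses hypothetical identifiers rather than the exact ones TAO generates:

<modalFeedback outcomeIdentifier="FEEDBACK_1" identifier="feedback_correct" showHide="show">
  Well done, that is the right answer.
</modalFeedback>
<modalFeedback outcomeIdentifier="FEEDBACK_1" identifier="feedback_incorrect" showHide="show">
  Not quite. Have another look at the question.
</modalFeedback>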

3. If additional modal feedback is required, then click the Add a Modal Feedback button below your
first Modal Feedback and repeat the above steps.

You can preview your modal feedback using the steps given in the Preview instructions.
Items

Preview
Completed Items or Interactions can be previewed to determine what they look like on various
screen sizes.

Previewing typically takes place after clicking Done for the completed Interaction. This will bring up
the completed interaction on the Canvas. The ability to preview is on the Action Bar above the
canvas where there are a series of buttons, including Save, Preview, and Print.

1. Click Preview.

A pop-up window will appear with which you can save your interaction (the window will appear
whether the interaction has been saved already or not, just to ensure that the latest version of the
Interaction is not lost during testing).

2. Click the blue Save button in the pop-up window.

This brings up the Interaction as it will appear to the Test-taker.

Previewing an Item

Answer the question correctly or incorrectly to see if the interaction performs as expected. Clicking
Submit will bring up a black screen below the demonstrated interaction which shows the score for
the answer you have given.

If the interaction uses Map Response scoring, it is a good idea to try out not only answers which are
either completely correct or completely incorrect, but also to test the various ways in which partial
credit may be awarded.

3. Once testing is completed, click the Close button at the top of the page.
This will take you back to the point where further changes to the interaction may be made (click on
the item to return to Authoring), or where the Interaction can be dismissed until the test is
assembled.
Items

Style Editor
White, black, grey, and blue, all done up in a sans-serif font, can get boring after a while. The Style
Editor can help you make your items look more appealing. The Style Editor is found in the TAO
interface above the Properties Panel on the right in an Item window. It should be noted that this
feature is meant only to adjust the appearance of a small number of items. If you are dealing with
larger item banks, you may want to get in touch with us to discuss the options for a customized
version of TAO.

1. To access the Style Editor, click on the Style Editor button in the blue Action Bar above the
Properties Panel.

This will turn the Properties Panel into a Style Editor panel. There are two parts to the editor, the
Style Sheet Manager at the top, and the Style Editor below this.

Style Editor

2. If you have a style sheet ready for upload, click on the Add Style Sheet button.

This will provide an interface similar to that of adding a graphic into a Graphic Interaction. You can
use an existing style sheet by clicking the Add file(s) button and uploading it.

If you wish to format the style for this item only, use the Style Editor below to enter the settings you
would like to use for this item. The style editor has three parts: (1) Color, (2) Font, and (3) Item
width.
3. Adjust the colors to your liking.

There are four color swatches that can be changed in accordance with your preferences, one for
each of: Background color, Text color, Border color, and Table headings.

Clicking on any of these (e.g. Background color) opens a color editor panel which consists of a
square surrounded by a color wheel (a swatch), and a text box below.

Select a color by moving the cross onto the desired hue on the color wheel. In the square you can then adjust the contrast (left and right) and brightness (up and down). You can use the text box to record a specific color setting as a hexadecimal RGB value (e.g. #3366cc), which expresses the color as proportions of red, green and blue.

The four swatches cover specific parts of any item and its Interactions. The Background color
provides the color backing of the entire item. The text color is used for all text within the item and
its related interactions. The border color governs that of the borders of interactions. Finally, the
table heading color swatch provides the color setting for interactions which use tables (such as
Match).

4. Adjust the fonts to your liking.

Create the desired font by adjusting the font family and font size.

Click on the Font Family box, and select a font family besides Default.

There are three types of fonts available: Sans Serif fonts, which lack extra strokes at the ends of letters; Serif fonts, with small projecting strokes at the ends of letters; and Monospace fonts, which resemble a typewriter, each letter being of equal width.

Using fonts which are not on this list requires setting up a style sheet.

To select a font size, click in the font size box and type in a number. There is no limit to the size of
font that can be selected, but of course if the font is set too large it won't be displayed properly.

5. The item width can be set in the Item width box.

The default item width is set to adapt to the width of the user's screen. It is highly recommended
that you do not change this setting.

Some institutions prefer students to take Tests only on specifically designated computers which
have a specific screen width. TAO offers the option to set the width for a given item. However, for
most schools, setting a width presents a significant disadvantage in that a set width setting that
doesn't adapt to screen size means different-sized computer screens may have problems
displaying Items. If it is unnecessary to specify the item width, it is best to use the default setting.

6. If you are not satisfied with any of the settings you've selected, click on the Eraser icon on the
right of any of the settings boxes, and the item will be restored to its default setting.

This is particularly useful if the settings selected for the item render an indecipherable result.
Simply restore the default settings with a click.
Items

Math Expressions
Math Expressions (i.e. mathematical operators), or formulae, can be employed in Interactions by
using the Formula Editor. The Formula Editor is a WYSIWYG editor based on MathQuill, which allows
you to use mathematical symbols to create LaTeX expressions containing mathematical operators.
It is found in the Custom Interactions section.

To access the Formula Editor follow the steps below.

1. Once you have created a new Item, click on the Custom Interactions Library on the left, and drag
the Math Entry icon onto the blank item and drop it onto the Canvas.

The MathQuill editor can also be accessed from any block by selecting Insert Math Expression
and then clicking on the WYSIWYG editor.

A list of mathematical symbols will appear, with an empty text field below.

Formula Editor

2. Click on the mathematical symbols you wish to use.

These will appear in the text field and can then be used in the writing of mathematical formulae,
such as questions on geometry.

Note: The Formula Editor only provides the possibility of drawing mathematical symbols, but does
not carry out any calculation.
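
Behind the scenes, the editor stores what you build as a LaTeX expression. For example, combining the fraction, plus-minus and square-root symbols to build the quadratic formula would produce LaTeX along the following lines (an illustrative expression, not output copied from TAO):

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}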
Interactions

What is an Interaction?
"Interactions allow the candidate to interact with the item. Through an interaction, the candidate
selects or constructs a response. The candidate's responses are stored in the response variables.
Each interaction is associated with (at least) one response variable." - Question and Test
Interoperability standard, published by IMS Global

Term Interaction

Interactions serve as the basic unit for Test-taker responses. Items may be made up of one or
several related interactions. As such, the term Interaction should not be considered
interchangeable with the term Item.

There are four categories of Interactions: common, inline, graphic, and custom or PCI. There are
currently 17 interactions recognized by the QTI standard.

In TAO, interactions include the mechanisms used to score the interaction itself.

For simple Items, correct answers add to the Test score, while incorrect answers do not. Scoring a composite item using the standard response templates (match correct, map response, or map response point) often involves a more complicated sum.
Interactions

Interactions: An Overview
Interactions are the basic building blocks for Items, which in turn are the basic building blocks for
Tests. For a full definition, see What is an Interaction?.

This section provides an overview of how to manage your interactions, including their creation,
what to put in an interaction, viewing them, and their use once created.

Using different kinds of interactions


Common Interactions cover many of the simple interactions that are often used in assessment.
In TAO, the following Common Interactions can be used in the creation of assessment items:

Choice Interaction
Order Interaction
Associate Interaction
Match Interaction
Hottext Interaction
Gap Match Interaction
Slider Interaction
Extended Text Interaction
File Upload Interaction

Common Interactions

Inline Interactions are interactions which contain text-based elements. In TAO, the following
Inline Interactions can be used in the creation of test items:

Inline Choice Interaction


Text Entry Interaction
End Attempt Interaction

Inline Interactions

Graphic Interactions are interactions based on graphical elements. In TAO, the following Graphic Interactions can be used in the creation of test items:

Hotspot Interaction
Graphic Order Interaction
Graphic Associate Interaction
Graphic Gap Interaction
Select Point Interaction
Graphic Interactions

Portable Custom Interactions (PCI) are interactions which are developed for a specific
scenario, mostly to fulfill a specific need of a customer.

Portable Custom Interactions

Creating New Interactions


New interactions are created as part of items in a test. With the exception of PCIs (which are
project-specific), each of the above interactions is described in detail in its own section of the User
Guide.
Interactions

Interaction Authoring Tools


A Text Editing Toolbar is available to the Item Author, containing various tools for creating new
Interactions and only appears when authoring a new Item.

The toolbar will appear below the Action Bar once you have created a blank item, clicked on
Authoring, and then dragged and dropped the interaction template of your choice from the
Interactions Library on the left onto the Canvas in the middle. Note: See the section Creating a new
Item for details on how to create an item.

The toolbar contains two types of aids:

Style Features:

The icons towards the left of the bar can be used to make the text of the item bold or italic, or to underline it. You can also format the text as a subscript or a superscript here.

Feature Insertion:

Using the icons towards the right, you can insert a special character, a shared stimulus, a math
expression or formula (using the formula editor), an image or other type of media, a table, a
tooltip, or a link into your question.

Most of these functions work in the same way as in a typical text editor. The tooltip can be used to
add explanations for the Test-taker to specific text fragments of the Test.
Interactions

Choice Interaction
Choice Interactions, or multiple-choice questions, present a test type that has been made popular by such time-honored exams as the SAT, ACT, PSAT/NMSQT, etc. Choice interactions are preferable to free-response test interactions in cases where a large number of test questions needs to be covered in a short exam period.

How to Create a Multiple-Choice Item Using TAO

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Choice Interaction:

1. From the Common Interactions Library on the left, drag the Choice icon onto the blank item
and drop it onto the Canvas.

This provides the answer choices for your choice interaction.

2. Enter the question in the question field at the top of the interaction where it says define prompt.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. A choice interaction has three default answer choices. Click on choice #1 to type the first
answer choice in this field. Repeat this step with the other choices to populate the other fields with
your answer choices.

You can add more choices by clicking the blue Add Choice field below the first three choices (keep
clicking until the desired number of choices appear in the item), and you can delete choices by
clicking the trash can icon to the right of the choice you wish to delete.

4. After defining all answer choices, set the minimum and maximum number of answer choices
that the Test-Taker will be asked to provide (before he can continue to the next question).

This can be done in the Allowed Choices boxes in the Interaction Properties Panel on the right.
Setting the minimum to "0" allows the Test-taker to skip the question.

By default, your choice interaction is made of checkboxes. Leaving the maximum on "0" allows
test-takers to select an unlimited number of choices.

To set up a radio button test interaction, select a maximum of 1. This means that your test-taker
will not be allowed to select more than one choice. You can see on your interaction that radio
buttons will be displayed.
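
For reference, these settings correspond to attributes of the underlying QTI choiceInteraction element. The fragment below is a hand-written sketch (the question text and identifiers are invented, not copied from a TAO export): minChoices and maxChoices correspond to the Allowed Choices boxes (with maxChoices="1" producing radio buttons), and shuffle corresponds to the Shuffle choices option described under Optional Extras below.

<choiceInteraction responseIdentifier="RESPONSE" shuffle="true" minChoices="0" maxChoices="1">
  <prompt>Which city is the capital of France?</prompt>
  <simpleChoice identifier="choice_paris">Paris</simpleChoice>
  <simpleChoice identifier="choice_lyon">Lyon</simpleChoice>
  <simpleChoice identifier="choice_berlin">Berlin</simpleChoice>
</choiceInteraction>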

Optional Extras when Creating a Task


The following options are available in the Interaction Properties panel on the right.

Presenting the answer choices in list format


To present the choices as a list, select one of the options in List style, which is located below the Allowed Choices.

Shuffling the choices


Check the Shuffle choices box. This will randomize the order in which answer choices appear for each test-taker. In this manner, guessing or copying strategies are rendered useless. Where the order of items is unimportant, this is recommended.

Note: Remember that if you use this option, avoid choosing an ordinal list style, e.g. A,B,C or 1,2,3.

Presenting the answer options horizontally


Answer choices can be presented either vertically or horizontally. This can be defined in the
Orientation option. The default is vertical.

5. Click Response on the right of the blue interaction header to define the correct answer(s).

This activates options for setting the correct answer.

6. Select the correct answer by clicking the box in front of it.

You can select more than one answer.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Choice interactions, the test-taker has to select all the correct choices in order for the answer to be
considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct choices. Or you may wish to give a
higher weight to some of the choices than to others.
You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight to each choice in each of the corresponding Interaction boxes.

See the Item Scoring Rules section for more details on how to use this scoring method, and how to set the values of the associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the Modal Feedback section.

7. Click the blue Done button. Your choice interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
Interactions

Order Interaction
The Order Interaction gives Test-takers the opportunity to demonstrate their knowledge of a
particular order of elements: chronological orders, priority orders, alphabetical or numerical orders,
orders of size, etc.

Order Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Order interaction:

1. From the Common Interactions Library on the left, drag the Order icon onto the blank item
and drop it onto the Canvas.

This creates a new Order Interaction window. There is a question field at the top, with two boxes
underneath.

2. Enter the question in the question field, where it says define prompt.

This will describe the task given to the test-taker ("Place the following in chronological order", etc.).

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Fill in the answer options in the box on the left.

There are three default options, but you can add more by clicking the blue Add choice field at the
bottom. You can delete options by clicking the trash can icon to the right of the option you wish to
delete.

Note: Drag-and-drop is enabled for this type of interaction.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Shuffling the choices


Check the Shuffle choices box. This means that the order in which the answer options are
presented will be randomized (recommended if the order of presentation is not important).

Specifying correct number of answers


Specify the minimum and maximum number of answer options that the test-taker will be asked to
provide (before he can continue to the next question) in the Allowed Choices boxes. By default
these are empty, which means the test-taker can include as many (or as few) of the answer options
as he likes. (Setting the minimum to 0 allows the test-taker to skip the question.)

Presenting the answer options in list format


The answer options can be presented as a list. To choose a List Style, select one of the options in
List style, located below the Allowed Choices in the Interaction Properties panel.

Present the answer options horizontally


Below the List style option is the Orientation option. Answer options can be presented either
vertically or horizontally using this option. The default is vertical.

4. Click Response on the right of the blue interaction header to define the correct answer(s).

Then click on each option in the left-hand box in the desired order. The options will be transferred
in this order to the right-hand box. If you are not satisfied with the order you have chosen, click in
the right-hand box and then on the option which is in the wrong place. You can then click on the up
or down arrow on the right to move it up or down respectively.

Optional Extras when Processing a Response


The following option is available in the Response Properties Panel on the right.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.

5. Click the blue Done button. Your order interaction is now complete.

After this step, you can preview your Interaction using the steps given in the Preview Instructions.
Interactions

Associate Interaction
The Associate Interaction assesses the Test-taker's ability to match associated words or phrases.

For a quick glimpse of how to create an Associate Interaction in TAO, please watch the following
video.

How to Create an Associate Item Using TAO

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Associate Interaction:

1. From the Common Interactions Library on the left, drag the Associate icon onto the blank
Item and drop it onto the Canvas.

This opens a new Associate Interaction window. There is a question field at the top, two answer tile
options below this, and then an example of linked boxes at the bottom.

2. Fill in the question field where it says define prompt, describing the associations (matches)
being sought.

This could be in the form of a question ("Which country goes with which capital city?") or
instructional ("Match the country with the capital city.").

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

Drag-and-drop is enabled for this type of interaction. Test-takers can also use click-and-click to
move objects; this is an accessibility feature for test-takers with trouble using drag-and-drop.

3. Fill in the answer tiles for the question.

You will need more than two, so select Add choice as many times as needed to provide all the
options to be made available to the test-taker.

Note: We recommend adding the appropriate matches (e.g. the correct countries and capitals) in separate tiles first, and then adding the incorrect (unmatched) options.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Shuffling the choices


Check the Shuffle choices box. This will help disguise the matched pairs you have entered for the
question. If this is not clicked, how you've entered the tiles will be how they appear to the test-
taker.

Limiting the use of a choice


If you want to limit the number of times a particular element is used, click on it. It will then appear
in the Identifier box in the right-hand panel, which gives you the option to set the Allowed number
of uses. Setting this to a maximum of 1, for example, will mean that the test-taker can only use
that element in one association.

Specifying the correct number of associations


You can specify the minimum and maximum number of associations the test-taker will be asked to
provide (before he can continue to the next question) in the Number of associations boxes in the
Interaction properties panel. By default these are empty, which means the test-taker can include
as many (or as few) associations as he likes. Setting the minimum to 0 allows the test-taker to skip
the question.

4. Click Response on the right of the blue interaction header to define the correct answer(s).

This will provide all the answer tiles created in the previous step, and a series of associate pair
boxes, which are to be filled in the next step.

5. Click on the first element to be associated, and then click on the first box. Click its match
(association), and then click on the second box.

This will provide the first set of correct responses. Continue with this procedure until all association
pairs have been linked in the association boxes, leaving the incorrect associations unmatched.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right:

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Associate interactions, the test-taker has to select all the correct associations in order for the
answer to be considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct associations. Or you may wish to give a
higher weight to some of the associations than to others.
You can do this using the map response option of Response processing, in the response properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each association in the boxes next to each one.

See the Item Scoring Rules section for more details on how to use this scoring method, and how to set the values of the associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the Modal Feedback section.

6. Click the blue Done button. Your associate interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
Interactions

Match Interaction
The Match Interaction provides Test-takers with a matrix upon which they can demonstrate their
knowledge by accurately matching, or associating, selections from two different sets of elements.
Matching is carried out by placing check marks in squares where matching rows and columns
intersect.

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Match interaction:

Match Interaction

1. From the Common Interactions Library on the left, drag the Match icon onto the blank item
and drop it onto the Canvas.

This creates a new Match Interaction window. There is a question field at the top, and a default 2-
row-by-2-column matrix beneath this.

2. Fill in the question field, where it says define prompt.

This should describe the match task expected of the test-taker.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Insert in the rows the first set of elements, and in the columns the second set of elements that
are to be matched with the first.
Using the Add new row and Add new column buttons, add as many rows and columns as will be
needed to cover all the matches. If desired, add some unmatched elements in either the rows or
the columns to provide an additional challenge for the Test-taker.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Shuffling the choices


Check the Shuffle choices box. The sequence of the row and column options will then be
randomized. This is recommended if the order of presentation of either set of elements is not
important.

Limiting the use of a choice


If you want to limit the number of times a particular row or column is used, click on it. It will then
appear in the Identifier box in the right-hand panel, which gives you the option to set the Allowed
number of uses. Setting this to a maximum of 1, for example, will mean that the Test-taker can
only use that element in one associated match.

Specifying the correct number of associations


You can specify the minimum and maximum number of associations the test-taker will be asked to
provide (before he can continue to the next question) in the Number of associations boxes in the
Interaction properties panel. By default these are empty, which means the test-taker can include
as many (or as few) matches, or associations, as he likes. Setting the minimum to 0 allows the test-
taker to skip the question.

4. Click Response on the right of the blue interaction header to define the correct answer(s).

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Match interactions, the test-taker has to select all the correct matches in order for the answer to
be considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct matches. Or you may wish to give a
higher weight to some of the matches than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each match in the boxes next to each one.

See the Item Scoring Rules section for more details on how to use this scoring method, and how to set the values of the associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

5. Click the blue Done button. Your match interaction is now complete.
After this step, you can preview your interaction using the steps given in the Preview Instructions.
Interactions

Hottext Interaction
The Hottext Interaction gives Test-takers the opportunity to demonstrate their knowledge by identifying, among several selections within a body of text, a specific type of word or phrase (e.g. a grammatically incorrect element, a misspelling, the main character in a story, a capital city).

Hottext Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Hottext interaction:

1. From the Common Interactions Library on the left, drag the Hottext icon onto the blank Item
and drop it onto the Canvas.

This creates a new Hottext Interaction window. There is a question field at the top, followed by a
space (containing a sample text) in which to place the text containing the phrases to be highlighted
as Hottext elements.

2. Fill in the question field, where it says define prompt.

This will describe the task given to the test-taker ("Find the mistakes", "Pick the capital city", etc.).

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Copy and paste, or type in, the text within which the Hottexts will be presented.

The test-taker will choose the best option from among the Hottext elements.


4. Select a word or phrase and highlight it. When the magic wand button comes up, click it to
confirm selection of the word or phrase as your Hottext Interaction.

This will create a Hottext element within the text. Typically, there will be one word or phrase that
matches the type being sought, and several additional words or phrases that might be similar to
the type being sought. There might be cases where more than one option is correct, or where none
of the options are correct. At the end of the text, you can add a final Hottext element which allows
the test-taker to state that there is no correct selection (e.g. "No error.")

Optional Extras when Creating a Task


The following option is available in the Interaction Properties Panel on the right.

Specifying the correct number of answers


You can specify the minimum and maximum number of Hottext choices the Test-taker will be
asked to provide (before he can continue to the next question) in the Allowed Choices boxes. By
default these are empty, which means the test-taker can include as many (or as few) answer
options as he likes. Setting the minimum to 0 allows the test-taker to skip the question.

5. Click Response on the right of the blue interaction header to define the correct answer(s).

This will produce the same window as before, but you now have the possibility of placing
checkmarks by the right answer(s). Check all that apply.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Hottext interactions, the test-taker has to select all the correct Hottext elements in order for the
answer to be considered correct.

You may want to modify the scoring system if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct Hottext elements. Or you may wish to
give a higher weight to some of the Hottext elements than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each Hottext element in the boxes next to each one.

See the Item Scoring Rules section for more details on how to use this scoring method, and how to set the values of the associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.

6. Click the blue Done button. Your Hottext interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
Interactions

Gap Match Interaction


The Gap Match Interaction gives Test-takers the opportunity to demonstrate their knowledge in a
manner similar to Match Interactions. A Gap Match, however, provides a set of match words, some
of which will fit into gaps within a selected text passage. In essence, this is a combination of a
match interaction and a "fill the gap" question.

Gap Match Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new gap match interaction:

1. From the Common Interactions Library on the left, drag the Gap Match icon onto the blank Item and drop it onto the Canvas.

This opens a new Gap Match Interaction window. There is a question field at the top, a middle field
for the words which are to be matched, and a lower field for the gapped text, which contains a
sample text.

2. Fill in the question field, where it says define prompt.

Typically this will be some variation of "Fill in the gaps from the following word set."

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Insert the text which will contain the gaps into the text field at the bottom.
The Gap Match elements will be extracted from this text.

4. Select the words or phrases you want to make into Gap Match elements within the text.

Click on the word or phrase in the text to highlight it. This will create a magic wand button.

Click on the magic wand to confirm your selected location for a Gap Match element. This creates a
gap in the text, and places the word/phrase into the match words field.

Repeat as many times as is needed to adequately assess the test-taker's knowledge of the
passage.

Note: Drag-and-drop is enabled for this type of interaction.

5. If desired, add extra options into the match words field by clicking the add choice button.

Placing additional words into the match word field may prevent test-takers from successfully using
"process of elimination" as a strategy.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Shuffling the choices


Check the Shuffle choices box. The sequence of the match word options will then be randomized.
This is recommended if the order of presentation of the match words is not important.

Limiting the use of a choice


If you want to limit the number of times a particular element is used, click on it. It will then appear
in the Identifier box in the right-hand panel, which gives you the option to set the Allowed number
of uses. Setting this to a maximum of 1, for example, will mean that the test-taker can only use
that element in one match.

Obliging the test-taker to give an answer


If you want to prevent test-takers from continuing to the next question without providing an
answer, check the required box for that gap match. This box appears after you have inserted the
gaps in the text.

6. Click Response on the right of the blue interaction header to define the correct answer(s).

To define the correct answers, drag and drop the correct match words from the match word field
onto the corresponding gaps in your text.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of Gap
Match interactions, the test-taker has to select all the correct matches in order for the answer to
be considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct matches. Or you may wish to give a
higher weight to some of the matches than to others.
You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each correct match in the Pair Scoring box below the text. To do this, click
on an appropriate match word and then on the space in which it should be placed. Now click the
Add button, and it will appear in the list of matches at the bottom of the window. Check the Correct
box, and assign the desired weight for matching this correctly in the Score box. Fill all the gaps
with their appropriate matches.

Click here for more details on how to use this scoring method, and how to set the values of the
associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

7. Click the blue Done button. Your gap match interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
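
If you export an item containing a Gap Match interaction, the underlying QTI 2.1 markup looks broadly like the simplified sketch below (identifiers and text are illustrative only, not TAO's exact output). Each match word is a gapText element, each gap in the passage is a gap element, and the correct answer is a set of directed pairs linking them:

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>word_1 gap_1</value>
  </correctResponse>
</responseDeclaration>

<gapMatchInteraction responseIdentifier="RESPONSE" shuffle="true">
  <prompt>Fill in the gaps from the following word set.</prompt>
  <gapText identifier="word_1" matchMax="1">winter</gapText>
  <!-- word_2 is an extra option that does not match any gap -->
  <gapText identifier="word_2" matchMax="1">summer</gapText>
  <p>Snow falls most often in <gap identifier="gap_1"/>.</p>
</gapMatchInteraction>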
Interactions

Slider Interaction
The Slider Interaction lets Test-takers demonstrate their knowledge of a numerical value, such as a
percentage, a total, etc. The answer is conveyed by sliding an indicator along a horizontal scale.

Slider Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Slider interaction:

1. From the Common Interactions Library on the left, drag the Slider icon onto the blank Item
and drop it onto the canvas.

This creates a new Slider interaction window. There is a question field at the top, followed by a
graphical control element (a 'slider') indicating the scale covered by the answers to the question.
Below the slider is the current value depicted by the slider ("0").

2. Fill in the question field, where it says define prompt.

This will describe the task given to the test-taker, typically a question involving numbers or a
fraction, etc.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Adjust the settings on the slider.


This can be done in the Interaction Properties Panel on the right.

First, set the upper and lower limits of the slider using the Upper Bound and Lower Bound boxes.

By default, the lower boundary is set to 0 and the upper to 100. These default values anticipate a
percentage answer, but can be adjusted as desired, so long as the lower boundary is less than the
upper.

Next, adjust the intervals on the slider in the Step box. By default, the Step value is set to 1. These
values should be customized to fit the question. (For example, in the question "What was the
population of the Icelandic city of Reykjavik in 2014?", the interaction properties for the answer,
120,000, might be set so that the lower value is 100,000, the upper value is 200,000, and the step
value is 10,000.)

4. Click Response on the right of the blue interaction header to define the correct answer(s).

This provides access to the actual slider, so that the answer can be set. You can do this by moving
the indicator to the correct value.

Optional Extras when Processing a Response


The following option is available in the Response Properties Panel on the right.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.

5. Click the blue Done button. Your slider interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
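
The Reykjavik example above corresponds, roughly, to the simplified QTI 2.1 fragment below (a hand-written sketch, not TAO's exact output). The lowerBound, upperBound and step attributes mirror the Lower Bound, Upper Bound and Step boxes in the Interaction Properties Panel:

<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="integer">
  <correctResponse>
    <value>120000</value>
  </correctResponse>
</responseDeclaration>

<!-- the slider runs from 100,000 to 200,000 in steps of 10,000 -->
<sliderInteraction responseIdentifier="RESPONSE" lowerBound="100000" upperBound="200000" step="10000">
  <prompt>What was the population of the Icelandic city of Reykjavik in 2014?</prompt>
</sliderInteraction>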
Interactions

Extended Text Interaction


The Extended Text Interaction provides the means of examining the Test-taker's ability to
reproduce a phrase, sentence, or text passage exactly. The answer must not deviate from the
original in any way.
Usually, these Items are scored manually by a human scorer. In TAO 3.3, authors can define
outcome variables at the item level for scoring rubrics, for example for grammar, spelling, and
content.

Extended Text Interaction

Once you have generated a new item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Extended Text Interaction:

1. From the Common Interactions Library on the left, drag the Extended Text icon onto the
blank Item and drop it onto the Canvas.

This opens a new Extended Text Interaction window. There is a question field at the top, with an
extended text field below it.

2. Fill in the question field, where it says define prompt.

The test-taker will be expected to remember the answer exactly, without any variation. Even an
extra space will result in the answer being marked as incorrect.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.
Optional Extras when Creating a Task
The following options are available in the Interaction Properties Panel on the right.

Defining a certain format


Plain text is the default format expected as input from the test-taker. If desired, you can
change the text format to either preformatted text or XHTML in the Format box.

Placing constraints on the answer


You can limit the length of text the test-taker enters in the answer field by setting a maximum
length or word count. Alternatively, you can specify a certain pattern in Constraints.

Giving hints about the text length


You can provide hints for the test-taker about the length of the text by filling in the
Recommendations fields. This provides the test-taker with an expected length, in either characters
or number of lines. (This can also be done after you have entered the expected answer in Step
3.)

3. Click Response on the right of the blue interaction header to define the correct answer(s).

Enter the expected answer in the answer field. Again, the test-taker is expected to answer exactly.
Any variation(s) will result in the answer being marked as incorrect.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.

4. Click the blue Done button. Your extended text interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
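
As a point of reference, an Extended Text interaction is stored as QTI 2.1 markup along the lines of the simplified sketch below (attribute values and the prompt are illustrative, not TAO's exact output). The format attribute corresponds to the Format box, and expectedLength to the length recommendation:

<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string"/>

<extendedTextInteraction responseIdentifier="RESPONSE" format="plain" expectedLength="200">
  <prompt>Copy out the opening sentence of the passage exactly as it appears.</prompt>
</extendedTextInteraction>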
Interactions

File Upload Interaction


The File Upload Interaction provides an interface in which Test-takers can upload a pre-written
essay, completed artwork, or other form of submission. Usually, these items are scored manually
by a human scorer.

File Upload Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new File Upload Interaction.

1. From the Common Interactions Library on the left, drag the File Upload icon onto the
blank item and drop it onto the Canvas.

This opens a new File Upload Interaction window. There is a question/prompt field at the top, and a
Browse box with which to upload the desired submission.

2. Fill in the question field, where it says define prompt.

Add an instruction for the test-taker to submit work, such as Upload document.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

3. Select the file type expected.

This can be done in the Interaction Properties Panel on the right by setting the Multipurpose
Internet Mail Extension (MIME) type desired for the submission, if applicable.
If a MIME type is selected, this will allow the candidate to submit files of that particular type (.pdf,
.doc, .jpg, etc.)

4. Click the blue Done button. Your file upload interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
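
Under the hood, a File Upload interaction corresponds to QTI 2.1 markup similar to the simplified sketch below (illustrative only). The type attribute carries the MIME type selected in Step 3, and the response is declared with the file base type:

<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="file"/>

<!-- restrict submissions to PDF files -->
<uploadInteraction responseIdentifier="RESPONSE" type="application/pdf">
  <prompt>Upload your completed essay as a PDF document.</prompt>
</uploadInteraction>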
Interactions

Media Interaction
The Media Interaction allows Test-takers to view a multimedia presentation (image slide show,
YouTube video, etc.), usually in connection with another interaction.

Media Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Media Interaction:

1. From the Common Interactions Library on the left, drag the Media icon onto the blank item and
drop it onto the Canvas.

2. Choose the desired media file.

A Resource Manager window will appear with which you can select a media file. You can re-use a
media file already in the resource manager, or you can upload a new one (note that size and file
type restrictions apply). To select one from the list of previously uploaded media, highlight the
appropriate one in the resource manager list and click the green Select button. To upload a new
one, click on the blue Add file(s) button to browse the files on your computer, and then upload one
to the resource manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.

Note: Alternatively, exit this resource window and enter the web address of an online video or
audio resource in the box entitled Media file path or YouTube video address in the Interaction
Properties Panel. See here for supported media formats.

A new authoring window will appear with the media shown in the center of the canvas. Above the
media file there is a question field.

3. Fill in the question field, where it says define prompt.

This will describe the task given to the test-taker ("View the following film", "Listen to the
inflections in the following sound recording", etc.).

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

4. Set the playback method of the media device.

These property options will affect how the test-taker views/hears the media object while it is
playing.

You can do this in the Interaction Properties panel on the right.

First, determine the size of the screen on which the video or audio will play in the Width and Height
boxes.

Then check autostart if the media device should begin playing when the interaction is opened.

Check loop if the media device should play over and over again. If this is checked, enter the
number of times you wish the loop to be repeated in the Max plays count box.

Check Pause if the test-taker is permitted to pause and restart the media device during the
interaction.

5. If desired, add a further interaction to the media interaction.

Usually, a Media interaction is used to present a film or sound clip, to which a series of questions
may be added. Drag the appropriate interaction type from the Common Interactions menu on the
left, and consult the relevant chapter of the User Guide for help on how to execute this interaction.

6. Click the blue Done button. Your media interaction is now complete.

You can now preview your Interaction using the steps given in the Preview Instructions.
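
For reference, the playback settings described in Step 4 map onto attributes of the QTI 2.1 mediaInteraction element, as in the simplified sketch below (the file name and dimensions are illustrative, not TAO's exact output). The response simply records how many times the media was played:

<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="integer"/>

<!-- starts automatically, does not loop, and may be played at most once -->
<mediaInteraction responseIdentifier="RESPONSE" autostart="true" loop="false" maxPlays="1">
  <prompt>Listen to the following sound recording.</prompt>
  <object type="audio/mpeg" data="media/recording.mp3" width="400" height="30"/>
</mediaInteraction>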
Interactions

Inline Choice Interaction


Inline Choice Interactions allow Test-takers to complete a "Fill in the Blank" question with one
choice taken from a selected list of answers. Like all inline interactions, this interaction needs
to be in a Text Block.

For a quick glimpse of how to create an Inline Choice Interaction in TAO, please watch the following
video.

How to Create an Inline Choice Interaction Using TAO

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Inline Choice interaction:

1. From the Inline Interactions Library below Common Interactions on the left, drag the Text Block
onto the blank Item and drop it onto the canvas.

This creates a field (containing a sample text) in which a text may be entered from a favorite
source (a Word document or website, for instance), or typed in.

To enter your text, click inside the text field.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

2. Once you have entered the text, drag the Inline Choice icon from the Inline Interactions
library to a space next to where the test-taker will be expected to fill in the blank.
This brings up a pop-up window with three default choices. Enter the test-taker's answer options by
highlighting the default entries (choice #1, etc.) and typing in each answer option. For fewer
choices, click the trash can next to each choice to delete it. To add another choice, click the blue
Add Choice field below the other choices to generate another field.

Repeat the above for each place in the text where you would like the test-taker to fill in the
blank.

Note: Remember to remove the actual words from the text that the Inline Choice blanks are
designed to replace.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Shuffling the choices


Check the Shuffle choices box. This will randomize the order in which answer choices appear for
each test-taker. In this manner, guessing or copying strategies are rendered useless.

Note: Remember that if you use this option, avoid choosing an ordinal list style, e.g. A,B,C or 1,2,3.

Obliging the test-taker to give an answer


If you want to prevent test-takers from continuing to the next question without providing an
answer, check the required box.

3. To select the right answer, click on each Inline Choice element, and in the header bar of the
resulting pop-up window, click Response.

This produces the same selection of options that the Test-taker will see. Simply select the correct
response to set the right answer.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Inline Choice interactions, the test-taker has to select all the correct choices in order for the answer
to be considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct choices. Or you may wish to give a
higher weight to some of the choices than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each choice in the boxes next to each one.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.
Limiting the duration of the test
Click anywhere outside of the Text Space. This will give you the option of setting the interaction as
Time dependent (to be completed within a certain interval), by checking the check box. This option
is covered in greater detail in Test Settings.

4. Click the blue Done button. Your inline choice interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
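
The drop-down you have just built is stored as an inlineChoiceInteraction nested inside the Text Block's paragraph. The simplified QTI 2.1 sketch below (identifiers and text are illustrative, not TAO's exact output) shows one blank with three options, choice_2 declared correct, shuffling enabled, and an answer required:

<responseDeclaration identifier="RESPONSE_1" cardinality="single" baseType="identifier">
  <correctResponse>
    <value>choice_2</value>
  </correctResponse>
</responseDeclaration>

<p>Water freezes at
  <inlineChoiceInteraction responseIdentifier="RESPONSE_1" shuffle="true" required="true">
    <inlineChoice identifier="choice_1">100</inlineChoice>
    <inlineChoice identifier="choice_2">0</inlineChoice>
    <inlineChoice identifier="choice_3">50</inlineChoice>
  </inlineChoiceInteraction>
  degrees Celsius.</p>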
Interactions

Text Entry Interaction


Text Entry Interactions allow Test-takers to complete a "Fill in the Blank" question with an exact
text answer. This interaction needs to be in a Text Block.

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Text Entry Interaction:

1. From the Inline Interactions Library below Common Interactions on the left, drag the Text Block
onto the blank Item and drop it onto the Canvas.

This creates a field (containing a sample text) in which a text may be entered from a favorite
source (a Word document or website, for instance), or typed in.

To enter your text, click inside the text field.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

Text Entry Interaction

2. Once you have entered the text, drag the Text Entry icon from the Inline Interactions library
to the space next to where the test-taker will be expected to fill in the blank.
This creates a pop-up window containing the blank, which is to be filled by the test-taker. No
changes can be made here: the window just confirms that you have created the blank. However, a
correct answer will need to be selected and this is done in the Response mode.

Note: Remember to remove the actual words from the text that the Text Entry blanks are designed
to replace.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Inserting a 'placeholder' text in the blanks


If you would like to put a text in the fields the test-taker is supposed to fill in, such as "Write your
answer here", enter it in the Placeholder Text field.

Placing constraints on the answer


You can specify a certain Pattern which should be used in the answer. This can be done in the
Pattern Mask box.

Giving hints about the text length


You can provide a hint for the test-taker about the length of the text by filling in the Expected
Length field. This tells the test-taker the number of words expected.

Changing the base of numerical values


The Base feature is used to set the number base for the interpretation of numerical values entered
by the test-taker. By default this is 10, i.e. its interpretation is based on the decimal system. If it
uses a different system, change this here.

3. Click Response in the pop-up window to define the correct answer(s).

This opens the response entry window, in which you can enter the correct answers. Remember
that you will need to produce an answer that the test-taker will be expected to match exactly,
character-for-character, including spaces.

Repeat steps 2 and 3 until all the desired Text Entry blanks have been inserted into the text.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of Text
Entry interactions, the test-taker has to select all the correct answers in order for the answer to be
considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct answers. Or you may wish to give a
higher weight to some of the answers than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each response in the boxes next to each one.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Inserting modal feedback


If you wish, you can insert modal feedback into this Interaction. For more information on Modal
Feedback, see the section on Modal Feedback.

Limiting the duration of the test


Click anywhere outside of the Text Space. This will give you the option of setting the interaction as
Time dependent (to be completed within a certain interval), by checking the check box. This option
is covered in greater detail in Test Settings.

4. Click the blue Done button. Your text entry interaction is now complete.

After this step, you can preview your interaction using the steps given in the Preview Instructions.
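
A Text Entry blank is stored as a textEntryInteraction placed inline in the Text Block, as in the simplified QTI 2.1 sketch below (identifiers and text are illustrative, not TAO's exact output). The placeholderText and expectedLength attributes correspond to the Placeholder Text and Expected Length properties described above:

<responseDeclaration identifier="RESPONSE_1" cardinality="single" baseType="string">
  <correctResponse>
    <value>Paris</value>
  </correctResponse>
</responseDeclaration>

<p>The capital of France is
  <textEntryInteraction responseIdentifier="RESPONSE_1" expectedLength="10"
      placeholderText="Write your answer here"/>.</p>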
Interactions

Hotspot Interaction
The Hotspot Interaction gives Test-takers the opportunity to demonstrate their knowledge by
selecting portions of an image (regions on a map, people in a line-up, etc.).
This interaction is one of a series of Graphic Interactions (the others are covered in their own
sections). All graphic interactions can be found in the Graphic Interactions Library.

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Hotspot interaction:

1. From the Graphic Interactions library near the bottom of the Interactions library on the left, drag
the Hotspot icon onto the blank Item and drop it onto the Canvas.

Hotspot Interaction

2. Choose the desired background graphic.

A Resource Manager window will appear with which you can select a background graphic. You can
re-use a background already in the resource manager, or you can upload a new one. To select one
from the list of previously uploaded graphics, highlight the appropriate background graphic in the
resource manager list and click the green Select button. To upload a new one, click on the blue
Add file(s) button to browse the files on your computer, and then upload one to the resource
manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.

A new authoring window will appear with the background graphic in the center of the canvas.
Above the graphic there is a question field. On the left there is an Associable Hotspot Panel for
inserting selected shapes that will represent Associable Hotspots into the background graphic
(these include four different shapes: rectangle, circle, ellipse, and polygon). Below the Hotspot
Panel there is a trash can icon, which allows you to delete a poorly-placed or misshapen Hotspot.

3. Fill in the question field, where it says define prompt.

This should cover such important information as what the background graphic represents, and
what the test-taker is supposed to do in this interaction.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

4. Insert the Associable Hotspots onto the background graphic.

To insert a rectangle, click on one corner and drag it across the intended area the Hotspot is
supposed to cover. To insert a circle or ellipse, select its center and drag outward or inward until
the Hotspot is the right size. To insert a polygon, begin at one corner, then click on each corner in
succession until the Hotspot is complete. You can make all the shapes bigger or smaller (or in the
case of polygons change the shape), but if necessary, click on the problem Hotspot, click the trash
can to delete it, and then try again.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Specifying the correct number of answers


Specify the minimum and maximum number of Hotspot choices that the test-taker will be asked to
provide (before he can continue to the next question) in the Allowed Choices boxes. By default,
these are empty, which means the test-taker can include as many (or as few) of the answer options
as he likes. (Setting the minimum to 0 allows the test-taker to skip the question.)

5. Click Response on the right of the blue interaction header to define the correct answer(s).

This will bring up the same screen, but you can now select the correct Hotspot(s).

By default, a test-taker receives one point per completely correct interaction, so in the case of
Hotspot interactions, the test-taker has to select all the correct Hotspots in order for the answer to
be considered correct.

See Optional Extras below for other scoring methods.

Optional Extras when Processing a Response


The following option is available in the Response Properties Panel on the right.

Modifying the scoring method


You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct Hotspots. Or you may wish to give a
higher weight to some of the Hotspots than to others.
You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each Hotspot in the boxes next to each Hotspot element. Click on each
Hotspot, and in the pop-up window that appears, set the weight to be awarded if the test-taker
selects it correctly.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

6. Click the blue Done button. Your hotspot interaction is now complete.

You can now preview your interaction using the steps given in the Preview Instructions.
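
For reference, a Hotspot interaction is stored as QTI 2.1 markup along the lines of the simplified sketch below (the file name, coordinates and identifiers are illustrative, not TAO's exact output). Each shape drawn on the background graphic becomes a hotspotChoice with its shape and pixel coordinates:

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>hotspot_1</value>
  </correctResponse>
</responseDeclaration>

<hotspotInteraction responseIdentifier="RESPONSE" maxChoices="1">
  <prompt>Click on the capital city on the map below.</prompt>
  <object type="image/png" data="images/map.png" width="600" height="400"/>
  <hotspotChoice identifier="hotspot_1" shape="circle" coords="310,145,20"/>
  <hotspotChoice identifier="hotspot_2" shape="rect" coords="50,60,150,110"/>
</hotspotInteraction>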
Interactions

Graphic Order Interaction


The Graphic Order Interaction gives Test-takers the opportunity to demonstrate their knowledge of
chronological order, orders of importance, etc. as seen on a graphic (map, photo, or other image).
This interaction is one of a series of Graphic Interactions (the others are covered in their own
sections). All graphic interactions can be found in the Graphic Interactions Library.

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Graphic Order interaction:

1. From the Graphic Interactions library near the bottom of the Interactions library on the left, drag
the Order icon onto the blank Item and drop it onto the Canvas.

Graphic Order Interaction

2. Choose the desired background graphic.

A Resource Manager window will appear with which you can select a background graphic. You can
re-use a background already in the resource manager, or you can upload a new one. To select one
from the list of previously uploaded graphics, highlight the appropriate background graphic in the
resource manager list and click the green Select button. To upload a new one, click on the blue
Add file(s) button to browse the files on your computer, and then upload one to the resource
manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.

A new authoring window will appear with the background graphic in the center of the canvas.
Above the graphic there is a question field. On the left there is an Associable Hotspot Panel for
inserting selected shapes that will represent Associable Hotspots into the background graphic
(these include four different shapes: rectangle, circle, ellipse, and polygon). Below the Hotspot
Panel there is a trash can icon, which allows you to delete a poorly-placed or misshapen Hotspot.

3. Fill in the question field, where it says define prompt.

This should cover such important information as what the background graphic represents, and
what the test-taker is supposed to do in this interaction.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

4. Insert the Associable Hotspots onto the background graphic.

To insert a rectangle, click on one corner and drag it across the intended area the Hotspot is
supposed to cover. To insert a circle or ellipse, select its center and drag outward or inward until
the Hotspot is the right size. To insert a polygon, begin at one corner, then click on each corner in
succession until the Hotspot is complete. You can make all the shapes bigger or smaller (or in the
case of polygons change the shape), but if necessary, click on the problem Hotspot, then click the
trash can to delete it, and then try again.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Specifying the correct number of answers


You can specify the minimum and maximum number of Hotspot choices that the test-taker will be
asked to provide (before he can continue to the next question) in the Allowed Choices boxes. By
default, these are empty, which means the test-taker can include as many (or as few) of the
answer options as he likes. (Setting the minimum to 0 allows the test-taker to skip the question.)

5. Click Response on the right of the blue interaction header to set the Hotspots in the order
required by the question.

This will bring up the same screen, but you can now numerically categorize the selected Hotspots.
To do this, click on the number, then the Hotspot. Repeat until all numbers are assigned to
Hotspots.

Optional Extras when Processing a Response


The following option is available in the Response Properties Panel on the right.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

6. Click the blue Done button. Your graphic order interaction is now complete.

You can now preview your interaction using the steps given in the Preview Instructions.
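
Behind the scenes, the numbering you assign in the Response view becomes an ordered list of hotspot identifiers, as in the simplified QTI 2.1 sketch below (coordinates, identifiers and the file name are illustrative, not TAO's exact output):

<responseDeclaration identifier="RESPONSE" cardinality="ordered" baseType="identifier">
  <correctResponse>
    <value>hotspot_1</value>
    <value>hotspot_2</value>
    <value>hotspot_3</value>
  </correctResponse>
</responseDeclaration>

<graphicOrderInteraction responseIdentifier="RESPONSE">
  <prompt>Number the cities in the order in which they were founded.</prompt>
  <object type="image/png" data="images/map.png" width="600" height="400"/>
  <hotspotChoice identifier="hotspot_1" shape="circle" coords="100,80,15"/>
  <hotspotChoice identifier="hotspot_2" shape="circle" coords="250,200,15"/>
  <hotspotChoice identifier="hotspot_3" shape="circle" coords="400,120,15"/>
</graphicOrderInteraction>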
Interactions

Graphic Associate Interaction


The Graphic Associate Interaction gives Test-takers the opportunity to demonstrate their
knowledge by depicting routes on a map or graphic in a prescribed way. This can be used for
drawing out historical military marches, route-planning exercises, connecting the dots to form a
missing piece of an image, etc.
This interaction is one of a series of Graphic Interactions (the others are covered in their own
sections). All graphic interactions can be found in the Graphic Interactions Library.

Graphic Associate Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Graphic Associate Interaction:

1. From the Graphic Interactions library near the bottom of the Interactions library on the left, drag
the Associate icon onto the blank item and drop it onto the Canvas.

2. Choose the desired background graphic.

A Resource Manager window will appear with which you can select a background graphic. You can
re-use a background already in the resource manager, or you can upload a new one. To select one
from the list of previously uploaded graphics, highlight the appropriate background graphic in the
resource manager list and click the green Select button. To upload a new one, click on the blue
Add file(s) button to browse the files on your computer, and then upload one to the resource
manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.
A new authoring window will appear with the background graphic in the center of the canvas.
Above the graphic there is a question field. On the left there is an Associable Hotspot Panel for
inserting selected shapes that will represent Associable Hotspots into the background graphic
(these include four different shapes: rectangle, circle, ellipse, and polygon). Below the Hotspot
Panel there is a trash can icon, which allows you to delete a poorly-placed or misshapen Hotspot.
Below the background graphic is a gap match field for entering the answers (in the form of graphic
elements).

3. Fill in the question field, where it says define prompt.

This should cover such important information as what the background graphic represents, and
what the Test-taker is supposed to do in this interaction.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

4. Insert the Associable Hotspots onto the background graphic.

To insert a rectangle, click on one corner and drag it across the intended area the Hotspot is
supposed to cover. To insert a circle or ellipse, select its center and drag outward or inward until
the Hotspot is the right size. To insert a polygon, begin at one corner, then click on each corner in
succession until the Hotspot is complete. You can make all the shapes bigger or smaller (or in the
case of polygons change the shape), but if necessary, click on the problem Hotspot, then click the
trash can to delete it, and then try again.

After the Hotspots are inserted, set the number of Hotspot matches that the test-taker will be
asked to provide (before he can continue to the next question), giving the minimum and the
maximum. This can be done in the Allowed number of matches boxes in the Interaction Properties
Panel on the right.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties panel on the right.

Limiting the use of a choice


You can limit the number of times a particular element is used by clicking on it. When it appears in
the Identifier box in the right-hand panel, set the Allowed number of matches.

Specifying the correct number of associations


You can specify the minimum and maximum number of associations that the test-taker will be
asked to provide (before he can continue to the next question) in the Number of Associations boxes
in the Interaction Properties panel. By default, these are empty, which means the test-taker can
include as many (or as few) associations as he likes. (Setting the minimum to 0 allows the test-
taker to skip the question.)

5. Click Response on the right of the blue interaction header to select the associations between
Hotspots (the answers).

By default, a test-taker receives one point per completely correct interaction, so in the case of
Graphic Associate interactions, the test-taker has to select all the correct Hotspot pairs in order for
the answer to be considered correct. Select the Hotspot pairs by clicking first on one Hotspot and
then the associated Hotspot, until the pairs are all correctly connected by lines.

See Optional Extras below for other scoring methods.


Optional Extras when Processing a Response
The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct Hotspot pairs. Or you may wish to give a
higher weight to some of the Hotspot pairs than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each Hotspot pair. In the Pair Scoring panel below the background graphic
in your Interaction, select pairs of Associable Hotspots by clicking on Add new pairs, and then
clicking on the blue Add button. Then assign a weight according to the value of the answer.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

6. Click the blue Done button. Your graphic associate interaction is now complete.

You can now preview your interaction using the steps given in the Preview Instructions.
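
For reference, the lines drawn between Hotspots in the Response view are stored as pairs of hotspot identifiers, as in the simplified QTI 2.1 sketch below (coordinates, identifiers and the file name are illustrative, not TAO's exact output):

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="pair">
  <correctResponse>
    <value>hotspot_1 hotspot_2</value>
  </correctResponse>
</responseDeclaration>

<graphicAssociateInteraction responseIdentifier="RESPONSE" maxAssociations="1">
  <prompt>Draw the route taken between the two cities.</prompt>
  <object type="image/png" data="images/map.png" width="600" height="400"/>
  <associableHotspot identifier="hotspot_1" shape="circle" coords="120,90,15" matchMax="1"/>
  <associableHotspot identifier="hotspot_2" shape="circle" coords="430,260,15" matchMax="1"/>
</graphicAssociateInteraction>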
Interactions

Graphic Gap Interaction


The Graphic Gap Interaction gives Test-takers the opportunity to demonstrate their knowledge
about geographical regions, identify facts about portions of images (people in a group photo, etc.),
or show other similar capabilities in picture matching.
This interaction is one of a series of Graphic Interactions (the others are covered in their own
sections). All graphic interactions can be found in the Graphic Interactions Library.

Graphic Gap Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Graphic Gap Interaction:

1. From the Graphic Interactions library near the bottom of the Interactions library on the left, drag
the Graphic Gap Match icon onto the blank Item and drop it onto the Canvas.

2. Choose the desired background graphic.

A Resource Manager window will appear with which you can select a background graphic. You can
re-use a background already in the resource manager, or you can upload a new one. To select one
from the list of previously uploaded graphics, highlight the appropriate background graphic in the
resource manager list and click the green Select button. To upload a new one, click on the blue
Add file(s) button to browse the files on your computer, and then upload one to the resource
manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.
A new authoring window will appear with the background graphic in the center of the canvas.
Above the graphic there is a question field. On the left there is an Associable Hotspot Panel for
inserting selected shapes that will represent Associable Hotspots into the background graphic
(these include four different shapes: rectangle, circle, ellipse, and polygon). Below the Hotspot
Panel there is a trash can icon, which allows you to delete a poorly-placed or misshapen Hotspot.
Below the graphic is a gap match field where the answers should be entered (in the form of graphic
elements).

3. Fill in the question field, where it says define prompt.

This should cover such important information as what the graphic represents, and what the test-
taker is supposed to do in this interaction.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

4. Insert the Associable Hotspots onto the graphic.

To insert a rectangle, click on one corner and drag it across the intended area the Hotspot is
supposed to cover. To insert a circle or ellipse, select its center and drag outward or inward until
the Hotspot is the right size. To insert a polygon, begin at one corner, then click on each corner in
succession until the Hotspot is complete. You can make all the shapes bigger or smaller (or in the
case of polygons change the shape), but if necessary, click on the problem Hotspot, then click the
trash can to delete it, and then try again.

After inserting the Hotspots, enter the answer graphics in the gap match field below the
background graphic.

Note: Drag-and-drop is enabled for this type of interaction.

5. Click the plus sign (+) within the gap match field as many times as is needed to create the
correct number of gap match slots.

Clicking the plus sign will bring up the resource manager window. As during placement of the
background graphic, the immediate choices will include all recently uploaded images. Simply select
one graphic for each slot, or upload the necessary graphics.

All answer graphics should be roughly the same size - ideally, with a width that is about one-sixth
the width of the background graphic. If the selected graphics are not this size, resize them before
uploading.

Once all the images are the correct size, click on the blue Add file(s) button at the top of the list of
available graphics to locate and upload all the desired image files by clicking the green Upload
button below the list.

After uploading the images, select the first answer graphic for the first slot, and repeat for all
subsequent slots.

Optional Extras when Creating a Task


The following options are available in the Interaction Properties Panel on the right.

Limiting the use of a choice


If you want to limit the number of times a particular answer graphic is used, click on it. It will then
appear in the Identifier box in the right-hand panel, which gives you the option to set the Max
number of matches.

6. Click Response on the right of the blue interaction header to define the correct answer(s).

You can now select the correct associations between the answer graphics and the Hotspots on the
graphic. To do this, drag and drop each answer graphic onto its corresponding Hotspot.

Optional Extras when Processing a Response


The following options are available in the Response Properties Panel on the right.

Modifying the scoring method


By default, a test-taker receives one point per completely correct interaction, so in the case of
Graphic Gap interactions, the test-taker has to select all the correct matches in order for the
answer to be considered correct.

You may want to modify the scoring method if, for example, you want the test-taker to receive
partial credit for selecting some, but not all, of the correct matches. Or you may wish to give a
higher weight to some of the matches than to others.

You can do this using the map response option of Response processing, in the Response Properties
panel on the right. When you choose this option, there are several settings you need to enter.

First, assign a weight for each correct match in the boxes next to each match in the Pair Scoring
panel below the graphic.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more details on how to do this,
see the section on Modal Feedback.

7. Click the blue Done button. Your graphic gap interaction is now complete.

You can now preview your interaction using the steps given in the Preview Instructions.
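
As a point of reference, a Graphic Gap interaction is stored as a graphicGapMatchInteraction in QTI 2.1; the simplified sketch below (file names, coordinates and identifiers are illustrative, not TAO's exact output) pairs one answer graphic with one Hotspot on the background image:

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>img_1 hotspot_1</value>
  </correctResponse>
</responseDeclaration>

<graphicGapMatchInteraction responseIdentifier="RESPONSE">
  <prompt>Drag each flag onto the country it belongs to.</prompt>
  <object type="image/png" data="images/world_map.png" width="600" height="400"/>
  <gapImg identifier="img_1" matchMax="1">
    <object type="image/png" data="images/flag_fr.png" width="60" height="40"/>
  </gapImg>
  <associableHotspot identifier="hotspot_1" shape="rect" coords="270,120,330,160" matchMax="1"/>
</graphicGapMatchInteraction>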
Interactions

Select Point Interaction


The Select Point Interaction gives Test-takers the opportunity to demonstrate their knowledge by
selecting an invisibly-defined portion of an image (region on a map, person in a line-up, etc.).
This interaction is one of a series of Graphic Interactions (the others are covered in their own
sections). All graphic interactions can be found in the Graphic Interactions Library on the left.

Select Point Interaction

Once you have generated a new Item, and clicked on Authoring in the Action Bar, follow the steps
below to create a new Select Point Interaction:

1. From the Graphic Interactions library near the bottom of the Interactions library on the left, drag
the Select Point icon onto the blank item and drop it onto the Canvas.

2. Choose the desired background graphic.

A Resource Manager window will appear with which you can select a background graphic. You can
re-use a background already in the resource manager, or you can upload a new one. To select one
from the list of previously uploaded graphics, highlight the appropriate background graphic in the
resource manager list and click the green Select button. To upload a new one, click on the blue
Add file(s) button to browse the files on your computer, and then upload one to the resource
manager by clicking the green Upload button.

Highlight the file you have chosen as your background by clicking on it, and it will appear on the
right in the preview panel. Click Select in the bottom right of the window to continue.

A new authoring window will appear with the background graphic in the center of the canvas.
Above the graphic there is a question field.

3. Fill in the question field, where it says define prompt.

This should cover such important information as what the background graphic represents, and
what the test-taker is expected to select in this interaction.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

Optional Extras when Creating a Task


The following option is available in the Interaction Properties Panel on the right.

Specifying the correct number of answers


You can specify the minimum and maximum number of points that the test-taker will be asked to
select (before he can continue to the next question) in the Allowed Choices boxes. By default,
these are empty, which means the test-taker can include as many (or as few) of the answer options
as he likes. (Setting the minimum to 0 allows the Test-taker to skip the question.)

4. Click Response on the right of the blue interaction header to define the correct answer(s).

This opens the graphic with an Associable Hotspot Panel on the left, used for inserting selected
shapes that will represent Associable Hotspots into the graphic (these include four different
shapes: rectangle, circle, ellipse, and polygon). Below the Hotspot Panel is a trash can icon, which
allows the user to delete poorly-placed or misshapen Hotspots.

To insert a rectangle, click on one corner and drag it across the intended area the Hotspot is
supposed to cover. To insert a circle or ellipse, select its center and drag outward or inward until
the Hotspot is the right size. To insert a polygon, begin at one corner, then click on each corner in
succession until the Hotspot is complete. You can make all the shapes bigger or smaller (or in the
case of polygons change the shape), but if necessary, click on the problem Hotspot, click the trash
can to delete it, and then try again.

5. Insert the Associable Hotspots onto the background graphic.

Test-takers will not see these Hotspots on the background graphic, but selecting a point within the
Hotspot will register it as a correct answer.

6. Set the weights to be awarded for each Hotspot.

In the scoring method normally used as a default, a test-taker receives one point per completely
correct interaction (so the test-taker has to select all the correct responses in order for the answer
to be considered correct).

In this type of interaction, however, each Hotspot is evaluated individually, and thus Map Response
is used as the Response processing method (see Response Properties Panel on the right).

By clicking on each Hotspot, a pop-up window appears next to it, which allows you to set the
weight to be awarded if the test-taker selects it correctly.

Click here for more details on how to use this scoring method, and how to set the values of the
other associated properties.

Optional Extras for Processing the Response


The following options are available in the Response Properties panel on the right.

Inserting modal feedback


If you wish, you can insert Modal Feedback into this Interaction. For more information on how to do
this, see the section on Modal Feedback.

Limiting the duration of the test


Click anywhere outside of the Text Space. This will give you the option of setting the Interaction as
Time dependent (to be completed within a certain interval), by checking the check box. This option
is covered in greater detail in Test Settings.

7. Click the blue Done button. Your Select Point interaction is now complete.

You can now preview your Interaction using the steps given in the Preview Instructions.
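
Because Select Point answers are evaluated with Map Response, the invisible Hotspots you drew are stored as an area mapping on the response rather than inside the interaction itself, as in the simplified QTI 2.1 sketch below (coordinates, scores and the file name are illustrative, not TAO's exact output). A click landing inside the circle scores one point:

<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="point">
  <!-- the invisible correct region: clicks outside it score the default value of 0 -->
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="310,145,25" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>

<selectPointInteraction responseIdentifier="RESPONSE" maxChoices="1">
  <prompt>Click on the location of the capital city.</prompt>
  <object type="image/png" data="images/map.png" width="600" height="400"/>
</selectPointInteraction>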
Interactions

Portable Custom Interactions


Portable Custom Interactions (PCIs) are Interactions which are developed for a specific scenario,
generally to fulfill a particular need of a customer, and are therefore not standard QTI interactions.
PCIs represent a best practice for defining and packaging custom interactions. Unlike classical
Custom Interactions, they interact with the test runner through standardized APIs, allowing them to
be ported from one system to another.

The following types of PCI are part of the standard TAO package: Audio, Likert, Liquid, Text Reader,
and Math Entry.

Audio: An Audio interaction enables the Test-taker to record a short spoken response,
typically to test his/her speaking ability. The test-taker is presented with an image of a tape
recorder, and can record speech by clicking on the record button, and check the recording
using the play-back button. The response format is an audio file. Audio interactions are only
compatible with Firefox and Chrome, not with Internet Explorer or Edge. Chrome requires a
connection via https, whereas at the present time Firefox allows both https and http
connections (this is, however, expected to change to only https in the future).

PCI: Audio Interaction

Likert: In a Likert interaction, a scale from 1-5 is used to represent people's attitudes to a
topic. This is commonly used in qualitative surveys. The scale is presented using 'thumbs-
down' and 'thumbs-up' images to represent a negative or positive response, and offers five
buttons from which to select a choice. The response is a simple integer representing the
selected choice. Likert interactions are not usually scored, as there is no right or wrong.
Likert-Interaction

Liquid: A Liquid interaction, developed to showcase the possibilities of PCIs, contains a simple
simulation of a liquid container. The container has a scale on the left-hand side. This type of
interaction can be used to ask questions about volume. To record the correct answer, click on
Response and then click inside the cube at the desired point. If the answer is "5 liters", for
example, click in the cube at the level of the 5 on the scale on the left-hand side, and the cube
will 'fill' to that level with a simulated blue liquid.
Liquid-Interaction

Math Entry: A Math Entry Interaction employs a Math Editor, which allows for the use of
mathematical symbols in the interaction. The editor provides a list of mathematical symbols
and an empty text field. Please note that the editor only provides the possibility of drawing
mathematical symbols, but does not carry out any calculation. See the Math Expressions
section for more information on how to use the Math Editor.
Math-Entry-Interaction

To create any of these types of interaction, once you have created a new Item, click on the Custom
Interactions library below Graphic Interactions on the left, and drag the appropriate interaction
type onto the blank Item, drop it onto the canvas, and then populate the Item.
Interactions

End Attempt
End Attempt offers Test-takers the possibility of exiting from a particular Item in a Test without
completing it.

Note: Strictly speaking, the End Attempt interaction is not an interaction, but is implemented as
such so that the test-taker is able to end his/her attempt at an item.

1. Including End Attempt in your test item.

The End attempt option can be added to a test item by means of an inline interaction inserted into
a text block.

After you have created a new Item, a Text Block is inserted by dragging a Text Block from the
Inline Interactions Library below Common Interactions on the left, onto the blank Item and dropping
it onto the Canvas. This creates a field (containing a sample text).

End Attempt

To insert the End Attempt button, drag the End Attempt icon from the Inline Interactions
library below Common Interactions on the left, onto the text field and drop it onto the canvas.

A blue button will appear in the text box, reading End Attempt.

A test-taker can click on this button during a test to indicate that he/she wants to give up on that
particular item. The test normally then moves on to the next item.
Interactions

Text Blocks
Text Blocks are used to create Inline Interactions. A text block is basically a paragraph, and forms a
framework into which one of the two available text-based inline interactions (Inline Choice, Text
Entry) is then inserted.

Text Blocks are represented by the Text Block icon in the Interactions Library on the left, under
Inline Interactions.

1. Using Text Blocks with Inline interactions

After you have created a new Item, drag a Text Block from the Inline Interactions Library
below Common Interactions on the left, onto the blank item and drop it onto the Canvas.

Text Block

This creates a field (containing a sample text) in which a text may be entered from a favorite
source (you can copy and paste text from any text editor, or website, for instance), or typed in.

To enter your text, click inside the text field.

Note: See the section on Interaction Authoring Tools for details on text editing options such as
using italics or bold text in your item, and inserting features such as shared stimuli or media, tables
or formulae.

For more detail on how to create the two inline interactions which use Text Blocks, see the sections
on Inline Choice Interaction and Text Entry Interaction.
Interactions

Shared Stimuli
A stimulus is a piece of information which sets the context for a question or a series of questions. A
Shared Stimulus is one that is shared between multiple Items.

Below is a template for a shared stimulus in the form of an empty XML file, which you can use to
author a new shared stimulus (outside of TAO):

<?xml version="1.0" encoding="UTF-8"?>

<div xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
     xmlns:xi="http://www.w3.org/2001/XInclude"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     class="stimulus_content"
     xsi:schemaLocation="http://www.imsglobal.org/xsd/imsqti_v2p1 http://www.imsglobal.org/question/qtiv2p1pd2/xsd/imsqti_v2p1.xsd">

  <h1>Title</h1>
  <p>Text here...</p>

</div>

To create a shared stimulus with a media file, reference the media file relatively and include it in a
zip file together with the XML file, then import the zip file.
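For example, a shared stimulus that includes an image might look like the following sketch. The file name water_cycle.png is only a placeholder: the actual image file must sit next to the XML file inside the zip package you import.

<?xml version="1.0" encoding="UTF-8"?>
<div xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1" class="stimulus_content">
  <!-- water_cycle.png is a placeholder name; ship the image in the same zip, next to this XML file -->
  <h1>The Water Cycle</h1>
  <p>Study the diagram below before answering the questions that follow.</p>
  <img src="water_cycle.png" alt="Diagram of the water cycle"/>
</div>

The zip file you import would then contain both the XML file and water_cycle.png at the same level.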
Tests

What is a Test?
Tests, or assessments, define the continuity between Items, how they are ordered, and how they
are presented to the Test-taker. They also define the constraints and settings, including those
related to time and navigation policies. Tests may be built from building blocks called Item
Sections that logically sort Items into groups, making configuring a Test easier.

Term Test

In TAO, assessments are assembled from individual items that are later delivered to test-takers
through an automated Delivery system. If there are enough items within a test, they can be sorted
and grouped into item sections according to any logical set of criteria.

Tests should be given an appropriate title which helps test-takers to accurately identify the
assessment if it appears in a list of other assessments which the test-taker must also take at the
same time.

Test designers must consider issues such as whether the test-taker will need to follow a linear path
(a specific question order with no option to revisit questions later) or if they can instead use a non-
linear route (where questions can be answered in any order and revisited if desired). Time
limitations and feedback are also important elements in successful test design.
Tests

Tests: An Overview
Tests, or assessments, are assembled from individual Items. Items are built from Interactions,
which are based on exercises such as multiple choice questions. Tests define the order of items, as
well as how and when they are presented to the Test-taker. For a full definition, see What is a
Test?.

This section provides an overview of how to manage your tests, including what you need to do to
construct them, what the result looks like, the choices you need to make along the way, and what
you can do with them once they have been created.

Tests in TAO

1. Creating a new test

A Test is a collection of items designed to assess the academic progress of a test-taker. To create a
new test, items first need to be created (see Creating a new Item), so that they can be used in
assessments. A new test can then be put together: see Creating a new Test for information on how
to do this.

2. Defining the settings for your new test

Once you have created a new test, you will need to assign certain properties to it, such as the
length of the test. There are four levels at which properties may be assigned, from the top test
level down to the individual item level. See Test Settings for information on how to assign these
properties.

3. Importing and exporting your tests

You can import and export your tests to and from different storage devices in order to be able to
use them in different locations. For more information on how to do this, see Importing a Test and
Exporting Tests.
Tests

Creating a New Test


TAO defines a Test as a collection of Items designed to assess the academic progress of a Test-
taker. This approach allows for the rapid assembly of tests administered across computer
networks. If all you have is 30 minutes to create a 10-question quiz for your 8th grade class, TAO
can help.

Now, let's walk through the steps of creating a test.

1. Click on the Tests icon in the Assessment Builder Bar.

This will take you to the Tests page, and will show the last test which you, or the user before you,
created. On the left-hand side of your screen you will see the Test Library of existing tests. The last
test which was created will be highlighted in the library. In this tour, however, you will create a new
test.

2. Click on the New test icon in the button bank under the library.

This will create a new test in the highlighted folder.

Note: To create a new test in a different folder, select the desired folder in the library, and then
click on the New test icon in the button bank. To create a new folder (in TAO these represent new
classes), click on New class in the button bank, and then give it a label. Highlight where to put the
new folder relating to the new class.

Creating a new Test

3. Label and save your test.

Once you have created a new test, this will bring up a new dialog box which gives you the option of
naming, or labeling your Test.

After labeling your test, click on Save. This produces an empty test, which you can now populate
with items.

Note: It is always a good idea to save test assembly work every ten minutes or so, to prevent losing
your work.

4. Click Authoring in the Action Bar to insert items into your test.

This will take you to the empty test you have created. You can now start to populate the Canvas in
the middle (the test assembly) with items.

Tests can be divided into a hierarchy of two levels: Test parts, and Sections. A Test part is the first
tier division of a test, while a section is the second tier division.

Test divisions, however, are optional, and they are added in reverse order. If a test has no
divisions, all items are simply added to Part 1 Section 1. If only one level of division is needed,
add sections; if both levels are required, add new test parts as well. To add new sections, click on
the blue New section button below the existing sections in the test. To add new test parts, click on
the blue New test part button below the existing test parts.

At the start of any section, a Rubric Block, or explanatory text, can be entered prior to the insertion
of items. To do this, click on the icon with the letter "A" on it, and then click on the blue New Rubric
Block button that appears. Add your text in the space provided.

5. Search for items for your test.

The Library on the left shows the items which can be used in your test.

There are two ways of searching for items:

Click on the Item drop-down menu above the library. The available item classes, or folders,
will be shown in the library below. Click on the folder you wish to open, and the items in it will
appear in the library. By selecting one of the two icons to the right of the Item drop-down
menu you can choose to view the available items either as a list or in a tree.

Click on the plus sign to the right of the Search box to carry out an advanced search. A dialog
box will appear. Clicking on Choose a value in the State box will open a drop-down menu,
which enables you to filter the available items according to their status. Items which have
passed through the review process and have been approved are marked final, and it is
recommended that you use only these items in your test. Ticking the box on the right will
clear your selection. Alternatively, if you know the name of the item you are looking for, you
can enter it in the Label box. Clicking on Apply will take you to that item. Click Reset to clear
your selection.

From the library, select the item you wish to add, and then click on the blue button that reads Add
selected item(s) here.

Note: It is easiest to add items in the order in which they are to appear in the delivered test. If an
item is added out of order, however, this can be corrected by clicking the upward or downward
arrow buttons which appear after the properties icon on the right of the item. Delete unwanted
items by clicking on the trash can icon.

See the Test Settings section for information on adjusting assessment settings such as the
time/attempts to be allowed.

Optional Extras
Duplicating an existing test
You can duplicate an already existing test by clicking on the Duplicate icon in the button bank
under the library. A copy will then be created in the folder of the test you have duplicated, with the
same name but with "bis" on the end.

Copying or moving an existing test

You can make a copy of an already existing test by clicking on the Copy To icon in the button
bank under the library.

A dialog box will appear on the canvas. Select a destination folder, and click on Copy. A copy of the
test will then be created in the folder you have selected, with the same name but with "bis" on the
end.

Move To works exactly in the same fashion.

Finding an existing test


You can use the Search Test button on the right of the Action Bar to look for existing tests. Enter a
name, and a list of tests matching that name will appear. Click on Open if you want to open one of them.

Publishing a test
If you would like to assemble a delivery once your test is complete, you can publish it directly from
the Tests page, rather than going to the Deliveries page to assemble it.

Click on the Publish icon in the button bank under the library.

A dialog box will appear on the canvas. Select a destination folder, and click on Publish. A Delivery
of your test will then be created in the folder you have selected.

Note: If the background tasks functionality is installed on your version of TAO, the publishing
process will be transferred to it (shown in a circle to the left of the Properties icon on the
assessment builder bar). Clicking on the circle opens the list of tasks, containing information about
each one. You can see here when the delivery has been created.

Open the delivery you have created to set the delivery properties in the same way you would when
you create a delivery on the Deliveries page.
Tests

Importing a Test
Prepared Tests can be taken from any computer and imported onto any other computer that also
has access to TAO. This is done using an operation called Import.

1. Click on the Tests icon on the Assessment Builder Bar.

This will take you to the Test Library, which you will see on the left.

2. Click on the Test class (folder) in the library in which you wish to import the new test.

3. Click on Import in the button bank below the library.

This opens a dialog box which asks you to select the format of the test to be imported. The
supported input formats are: QTI (Question and Test Interoperability), RDF (Resource Description
Framework) or CSV (Character-Separated Values).

Importing Tests

4. Click the blue Browse button to find the file intended for import (alternatively, the file may be
dragged and dropped into the box below the button).

5. Once the Test is selected, click on the blue Import button.

This will import the Test into the Test library, for later use in Deliveries.
Tests

Exporting Tests
Tests may be assembled on almost any computer that has access to TAO. However, the capability
to share tests will be useful in certain situations. For instance, a department may have standard
performance expectations for its most basic courses, and these may be determined by a single
test distributed to all the teachers of the department. Tests can be shared in a few easy steps.

1. Click on the Tests icon in the Assessment Builder Bar.

2. Select the class (Test folder) you want to export in the Test Library on the left-hand side, or click
on New class in the button bank below the library. This will create a new folder for the tests you
would like to export.

Creating a new class (i.e. a new folder) allows you to place tests in a distinct location in order to be
transferred from one computer to another. When doing this, the test class can be renamed in the
Edit test class dialog box in the field marked Label. Clicking the blue Save button will create the
class.

Note: Individual tests can be exported without creating a new class to transfer to. It may be helpful,
however, to organize the entire export from a single folder.

3. If you have created a new class for this purpose, move the tests you want to transfer to this new
class in the Test library.

This selects the tests which are to be exported.

4. After clicking on the class, click Export in the button bank below the Library.

The dialog box will ask you to choose an export format: either QTI or RDF. If the test is to be
exported as a Question and Test Interoperability (.qti) formatted document, it will save the files as
a compressed .zip file. Otherwise, the export will be in Resource Description Framework (.rdf)
format.
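
As a rough illustration only (the exact folder layout may differ between TAO versions), a QTI test package is a zip archive with an imsmanifest.xml file at its root, alongside the QTI XML files for the test and its items. The file and folder names below are placeholders, not the identifiers TAO actually generates:

imsmanifest.xml
tests/my-test/test.xml
items/my-item-1/qti.xml
items/my-item-2/qti.xml
(plus any media files referenced by the items)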
Exporting Tests

5. Click the blue Export button in the dialog box to continue with the export.

6. Select the location to which you want to export your test, and then click Save.

The test can then be transferred either to a data storage device or a computer network. The next
step in the transfer is to import the test onto the desired computer.
Tests

Test Settings
After creating a new Test, it will be necessary in most cases to set various properties for the
assessment and its individual parts. There are four levels at which properties may be assigned: the
Test level, the Test part level, the Section level, and the Item level. These properties will appear in
the Properties Panel on the right when you click on the appropriate properties icon (depicted as
three interacting gears).
The Test level properties icon can be found in the Test bar at the top of the canvas. Test part level
properties icons can be found in each grey Test part bar, while Section level properties icons will
be found on the same line as the section label. Item level properties icons can be found next to the
item's label.

Test Settings

1. Click on the Test level properties icon.

This brings up four panels on the right: general properties, Time Limits, Scoring, and Outcome
Declarations.

In the general properties, the Identifier box should normally be left as it is, though it is editable.
However, the test Title can be renamed to make it easier for the Test-taker to identify.

In the Time Limits panel you can set a time limit at either the item level or the test level. To limit
the amount of time the test-taker has to complete the test, enter the maximum duration (in hours,
minutes, and seconds). If late submissions are to be accepted, check the Late submission allowed
box. If the duration is to be strictly enforced (i.e. no late submissions are allowed), leave this
unchecked.
See the section on Scoring Rules for more information on Scoring and Outcome Declarations.

Test Settings

2. Click on the Test part level properties icon.

This brings up three panels on the right: a general properties panel, an Item Session Control panel,
and a Time Limits panel.

In the general properties panel, the Identifier box should be renamed as appropriate.

The Navigation box, or how the test-taker is allowed to answer questions, should be selected as
either linear (first question first, second question second, etc.) or non-linear (can be answered in
any order).

Select the Submission mode as either individual (submitted response by response) or simultaneous
(submitted on completion of the test part) by clicking in the appropriate box.
Settings for Test Parts

In the Item Session Control Panel, set the following properties:

Set Max Attempts to the number of attempts the test-taker may have (the default setting 0 permits
an unlimited number of attempts).

Check the Show Feedback box if the test-taker should see the modal feedback after completing
this test part.

Check the Allow Comment box if the test-taker may provide explanations for responses, or leave
feedback for the test.

Check the Allow Skipping box to allow the test-taker to pass on answering questions within the test
part.

Check the Validate responses box if only responses which are valid should be accepted. If this box
is checked, constraints governing the test-taker's response (such as if the minimum and maximum
choices specified for that question have been given) will be checked before the test-taker can
proceed to the next question.
Item Session Control

The time limits section is similar in nature to the time limits section for the test level properties,
except that the settings apply to the current test part only.

Time Limits for Test Parts

3. Click on the Section level properties icon.

This brings up eight panels on the right: general properties, Test Navigation, Navigation Warnings,
Test-taker Tools, Selection properties, Ordering, Item Session Control, and Time Limits. The last two
panels are the same as the last two in the test part properties, while the first six panels differ from
previous levels.

Note: In the Premium Edition of TAO, tests can be configured at section level to provide test-takers
with the option of hearing the test content as well as reading it. This text-to-speech functionality
allows the test-taker to hear the test questions, or parts of them, read aloud. See the chapter on
Enabling text-to-speech for more information.

The general properties include an identifier and title: the default name in the Identifier box should
generally be maintained, while the Title can be changed to suit the test. In general, the Visible box
should be checked (or else the test-taker is unable to see the section), and the Keep Together box
should also be checked if it is important that the entire section be completed before moving onto
the next section. Categories act as tag references, which may be displayed to the test-taker.

Settings for Test Sections

In the Test Navigation panel, if Enable Review Screen is checked, the review panel will appear on
the left when a test-taker is taking a test. If Enable Mark for Review is checked, the test-taker has
the option to flag items in the test. See the section on the Review Panel for more information on
how the review panel can be used. Check Informational Item Usage if the item has been included in
the test for informational purposes only. This prevents the item from being treated as a question. Check
Allow Section Skipping to allow the test-taker to pass on answering the questions in this section.

In the Navigation Warnings panel, check the boxes where you would like the test-taker to receive
(or not receive) the warning in question.

In the Test-taker Tools panel, set the tools which should be made available to the test-taker for this
section. See the section on Test-taker Tool Configuration for more information on the tools
available.

The Selection panel asks if the delivered test section should include only some of the items
assigned to it (Enable selection), and if so, how many (Select). If With Replacement is checked while
the selection mode is enabled, then questions may be repeated. Normally, they may only be used
once in a Test sitting.

Ordering contains only one property setting, Shuffle, which randomizes the question order.

4. Click on the Item level properties icon.

This brings up seven panels: general properties, Test Navigation, Navigation Warnings, Test-taker
Tools, Weights, Item Session Control, and Time Limits.

The last two panels are the same as the last two in both the test part and section level properties.
The three panels after the general properties (Test Navigation, Navigation Warnings and Test-taker
Tools) are the same as in section level properties.

Note: In the Premium Edition of TAO, the text-to-speech functionality (described above for section
level properties) can also be configured at item level.

The general properties include entries for: Identifier, Reference, and Categories. It also includes
check boxes to indicate if the item is Required and if it is Fixed.

Settings for Test Items

The Identifier and Reference boxes generally do not require modification. If the Required box is
checked, the item will always appear in the test, even if fewer than the total number of items
appears in a given test (in a section where selection is enabled). If the Fixed box is checked, the
item will keep its position, even if the section ordering calls for shuffling. Categories act as tag
references; they are not displayed to the test-taker, and are typically used to calculate aggregate
scores (e.g. on sub-domains). For more information, see the section on Scoring Rules.

In the Weights panel, the weight of that item can be adjusted. It is also possible to add further
weights if you wish to enable the item to be scored in different ways for different tests. (The weight
value to be used for a specific test should be selected in the Weight box of the Scoring panel in the
test properties section above.) The default weight value for each item is 1. The section on Scoring
Rules gives more information on weights.
Tests

Providing Test-level Instant Feedback


Test takers often want direct feedback about how well they have done in their assessment. It is
possible to configure a Test to offer them instant test-level feedback.

Follow the steps below to configure your test to provide test-level instant feedback:

1. Configure your test for Outcome Processing.

Before you can provide test-takers with instant feedback on whether they have passed or failed,
you need to configure the method of scoring for your test. To do this, click on Tests from the
Assessment Builder Bar, and select your test from the Library.

Then, click on Authoring in the Action Bar. Now you are in the test editor.

Next, click on the Settings icon to the right of test name at the top. This is depicted by three
interlocking cogs.

After this, open the Scoring panel, and select Cut score in the drop-down menu of the Outcome
processing box.

Then define a Cut score for your test.

Note: This needs to be a ratio (between 0.0 and 1.0), i.e. the score required to pass divided by the
maximum possible score. If Category score is checked, the same cut score will be applied to all
categories (which are set on individual items).
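
For instance (a simple illustration, assuming the interpretation above): if the maximum possible score
for the test is 20 points and test-takers need at least 12 points to pass, you would enter
12 / 20 = 0.6 as the Cut score.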

When a Cut score has been defined, expand Outcome declarations, then check that the Outcome
variables corresponding to the settings applied have been generated correctly.

Note: The full set of Outcome variables can be found in the section Test Scoring and Outcome
Declarations.

2: Display the pass/fail captions.

The generated outcome variables can be used to display the appropriate pass or fail caption.

To do this, add a new Test part at the end of the test.

Next, add an (informational) Item to the Section by clicking on the A icon to the right of the section
title. This adds a rubric block to the section.

Then, click inside the blue New Rubric Block and insert an appropriate 'pass' caption, such as 'Well
done!'.

Add another rubric block by clicking inside the blue New Rubric Block below the first, and insert a
'fail' caption, such as 'Hard luck'.

Finally, specify when each of the captions should be displayed. To do this, click on the Properties
icon on the right of each rubric block. The properties panel will open on the right.

If you would like the captions in a specific style, you can add a style sheet in the class box. This
can also be left blank, however.

Then, click on the Feedback Block panel and check the Activated box so that your caption can be
displayed as feedback.

Select the relevant outcome variable for your test (defined in the test-level properties) from the
drop-down menu in the Outcome box, and then a value in the Match Value box which indicates the
circumstances in which this caption should be displayed. For pass/fail captions, choose
PASS_ALL_RENDERING from the menu in the Outcome box, and enter passed in the Match value
box if the caption is for a positive result, and not_passed if the caption is for a negative result.

3. View the final outcome.

Save the test and create a new Delivery to view the final outcome.
Tests

Test-taker Tools
The Test-taker Tools comprise a set of tools designed to aid the Test-taker in various ways when
taking Tests. Many of them are accommodation tools which aim to improve accessibility. The test-
taker tools for a test can be found in the Properties Panel on the right.

To configure the test-taker tools for a specific test, follow the steps below.

1. Select the test for which the test-taker tools are to be configured from the Test Library on the
left, and click on Authoring.

Test-taker tools can be configured either for a whole section of a test, or on a per-item basis. To
configure the tools for a whole section, on the Canvas click on the cog wheels to the right of the
section in question. To configure the tools for an individual item, click on the cog wheels to the
right of the item in question.

In each case, the test-taker tools will appear in the properties panel on the right.

Test-taker Tool Configuration for a Section

2. Check the boxes next to each of the test-taker tools you wish to activate.

The Tools available are as follows:

Calculators
TAO provides three different calculators: a simple one, a BODMAS version, and a scientific version.

Note: If you select multiple calculators, only the most complex variety will be used!
Calculator

Answer Eliminator
The Answer Eliminator allows the test-taker to eliminate answers in Choice interactions. This is
useful if there is a long list of answer choices, and the test-taker has a learning disability.

Answer Eliminator

Answer Masking
Answer Masking allows the test-taker to mask and unmask answers in choice interactions.
Answer Masking

Area Masking
Area Masking allows the test-taker to mask parts of the item with a movable mask.

Area Masking

Highlighter
The Highlighter allows the test-taker to highlight parts of the text in an item.
Highlighter

Line Reader
The Line Reader allows the test-taker to visually isolate a line of text.

Line Reader

Magnifier
The Magnifier provides the test-taker with a movable magnifier tool.
Magnifier

Zoom Tool
The Zoom Tool allows the test-taker to zoom in on an area of an item.

Zoom Tool

Text-to-speech
The text-to-speech functionality allows the test-taker to hear the questions of a test being read
aloud. Note: This functionality is only available on the Premium Edition of TAO.

Configuration is now complete. Any test-takers taking the test will have access to the selected
tools, for the sections or items specified.
Tests

Test Scoring Rules and Outcome Declarations


Test scores are determined by a student's performance in the various Items of a Test. Individual
item scores can be tallied using different methods to produce the final test result. This chapter
shows how to configure the test scoring rules, and compute the Outcome Declarations.
For information on the scoring rules used for items, see Item Scoring Rules.

Follow the steps below to set up your chosen scoring method.

1. After you have created your test by selecting the items which are to go in it, hover over the cog
wheels just below the Action Bar, in the top right-hand corner of the Canvas, and you will see
the option Manage Test Properties. Click on this and the general properties of the test will be
displayed in the right-hand panel, including the properties for scoring.

2. Click on the button Scoring.

There are four methods for scoring a test, or "outcome processing" as this is called in QTI. Enter
your choice from the four options below in Outcome Processing:

None: If this option is selected, any existing scoring rules will be removed. In this case, no test
score will be generated. Scores for the individual items of the test can be extracted by clicking
on Results, or the QTI Results API. Use this option if you do not need aggregated test scores,
or want to do custom outcomes processing outside of TAO.

Custom: This option is only applicable when a test with custom outcome processing rules is
imported - thus the processing rules are defined outside of TAO. In this case - for example if
the assessment test XML is modified outside of TAO and imported back into the system - the
rules cannot be authored in TAO, so the existing rules are left untouched.

Total Score: If you select this as your chosen method, all scores from all parts of the test will
be added together, and outcome variables for the test-taker's score (SCORE_TOTAL), the
maximum score of all items (SCORE_TOTAL_MAX) and the ratio of correct responses
(SCORE_RATIO) will be generated.
Scoring: Total Score

Cut Score: If you choose this option, a cut-off point for passing the test will be fixed. Enter the
score which is to be used as the cut-off point in the Cut Score box. This is set as a ratio of the
total score. A PASS variable is then generated, indicating if the student has passed the test.
When "Category Score" is active (see below), PASS variables for all categories are generated,
all using the same cut-off point.

Scoring: Cut Score

If you choose either of the latter methods (Total Score or Cut Score), some additional information is
required.
A check-box for Category Score will appear. If Category Score is selected, the scores will be
calculated for each category separately. You will also need to set the Weight variable in the Weight
box, which is used to calculate the score, and is by default 'WEIGHT'. Different weights can be
assigned to different items of the test, based on how much they should contribute to the final test
score. For example, if a weight of 2 is entered, the score for that item is multiplied by 2 when
calculating the final test score (see section Item Scoring Rules for more information on weighting).
If, for any item of the test, there is no such weight definition at the Item level, the scoring engine
assumes the default value = 1.
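
As a rough illustration of the principle (assuming three items, each with a maximum raw score of 1):
if a test-taker scores 1, 0 and 1 on items weighted 1, 1 and 2 respectively, the weighted total is
1×1 + 0×1 + 1×2 = 3 out of a possible 1 + 1 + 2 = 4, i.e. a ratio of 0.75. The exact outcome
variables generated for weighted scores are listed under Outcome Declarations below.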

When item metadata is inserted, it is automatically transferred to the categories of the item.
Categories can also be managed manually.

3. Click on the button Outcome Declarations.

The Outcome Declarations are directly related to the scoring rules you have chosen above; the list
of generated outcome variables is as follows:

If you selected None as the outcome processing method, all outcome variables are removed

If you selected Total score as the outcome processing method, outcome variables are
generated to calculate total scores

If you selected Cut score, as the outcome processing method, outcome variables are
generated to calculate total scores and variables which are checked against the cut-score
which has been defined (typically pass/fail)

If a WEIGHT has been applied, additional outcome variables are generated to compute
weighted scores

Note: Outcomes are generated automatically, but in the few cases where this does not occur, the
values can be generated using the 'regenerate' button.

The configuration of the scoring rules is now complete.


Tests

Publishing a test
Publishing a test means creating a delivery for an assessment.

In TAO there are two ways to create a delivery: you can either follow the instructions in the Deliveries
chapter, or you can publish the test directly from within the test manager.

1. Click on the Publish icon in the button bank under the library.

This will show the dialog below:

Publishing a test

2. Select the directory in which you want to create the delivery and click on Publish at the bottom
of the dialog. Note that the library in this dialog refers to your deliveries, not your tests!

To configure the delivery you will still need to proceed as described in the Deliveries chapter.
Test-takers

Test-Takers: An Overview
Test-Takers are individuals who take the assessments, or Tests, assembled by TAO. These are
typically registered students in a given class.

This section provides an overview of how to manage your test-takers, including what you need to
do to register them in TAO, how to add new test-takers to groups of existing test-takers, and how
to re-use the same test-taker profiles in different locations.

Test-takers

1. Creating a new test-taker

Each test-taker needs a profile which contains information including personal details and affiliation
to groups. For more information on how to create these, see Creating Test-takers.

2. Importing and exporting test-takers

Test-taker profiles can be imported to and exported from different storage devices, to enable them
to be used in various test scenarios, including by different teachers. For more information on how
to do this, see Importing Test-takers and Exporting Test-takers.

3. Grouping test-takers together

Test-takers can be organized into Groups. These are collections of test-takers who take the same
assessments throughout the duration of a course of study. For information on how to manage
groups of test-takers, see the section Creating a new group.
Test-takers

Creating a Test-taker
Test-Takers are individuals who will take assessments assembled by TAO. These are typically
registered students in a given class, but test-takers can be place-holder profiles for an instructor or
test administrator to trial a Test.

Now, let's walk through the steps of creating a test-taker.

1. Click on the Test-takers icon in the Assessment Builder Bar.

This will take you to the Test-takers page, and will show the last test-taker profile which you, or the
user before you, created. On the left-hand side of your screen you will see the Test-taker Library of
existing test-takers. The last test-taker which was created will be highlighted in the library. In this
tour, however, you will create a new test-taker.

2. Click on the New test-taker icon in the button bank under the library to create a new test-taker
profile.

Creating new Test-takers

This brings up a dialog box entitled Edit Test-taker. Enter the personal details of the new test-taker,
give them a login (which must be unique) and a password, and select an Interface language from
the pull-down menu.

Enter a profile name for the test-taker in the Label field. This can be any form of useful identifier (it
can, but does not have to be, the test-taker's name). An address may be entered in the Mail field,
for communication with the test-taker.

When you have filled these fields, click the blue Save button.

Note: To create a new test-taker profile in a different folder, select the desired folder in the Library,
and then click on the New Test-Taker icon in the button bank. To create a new folder (in TAO these
represent new classes), click on New class in the button bank, and then give it a label. Highlight
where to put the new folder relating to the new class.

3. Place the test-taker in a Group

This can also be done from within the Groups section, but for convenience, it is possible to select
an existing group in the Add to group pane on the right, and associate the test-taker with that
group by checking the box next to the desired group. Click the blue Save button at the bottom of
the pane to complete the procedure.
Test-takers

Importing Test-takers
Test-taker metadata files can be taken from any computer and imported onto any other computer
which also has access to TAO. This is done using an operation called Import.

1. Click on the Test-takers icon on the Assessment Builder Bar.

This will take you to the Test-taker Library, which you will see on the left.

2. Click on the Test-taker class (folder) in the library in which you wish to import the new test-taker
profile.

3. Click on Import in the button bank below the library.

This opens a dialog box which asks you to select the format of the incoming test-taker metadata.
The supported formats are RDF (Resource Description Framework) or CSV (Comma-Separated
Values).
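
If you use CSV, check the import dialog for the expected delimiter and column handling. As a purely
illustrative sketch (the column names and values below are placeholders only), a semicolon-delimited
file might look like this:

label;login;password;First Name;Last Name;Mail
Student 01;jdoe;Secret123;John;Doe;jdoe@example.org
Student 02;asmith;Secret456;Anna;Smith;asmith@example.org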

Importing Test-takers

4. Click the blue Browse button to find the file intended for import (alternatively, the file may be
dragged and dropped into the box below the button).

5. Once the file is selected, click on the blue Import button.

This will import the test-taker's profile into the test-taker library, after which they can receive Tests.
Test-takers

Exporting Test-takers
Test-taker profiles may be assembled on almost any computer that has access to TAO. However,
there may be situations in which sharing test-taker profiles is useful. For instance, a test-taker who
has successfully passed a prerequisite course may now enroll in a more advanced study with a
different teacher. Test-taker profiles can be shared in a few easy steps.

1. Click on the Test-taker icon on the Assessment Builder Bar.

2. Click on either the Test-taker folder (class) you want to export in the Test-taker Library on the
left-hand side, OR click on New class in the button bank below the library to create a new folder for
the test-takers you would like to export.

Creating a new class (i.e. a new folder) allows you to place test-takers in a distinct location in order
to be transferred from one computer to another (for organized groups of students, review how to
export groups). When doing this, the test-taker class can be renamed in the Edit class test-taker
dialog box in the field marked Label. Clicking the blue Save button will create the class.

Note: An individual test-taker can be exported without creating a new class to transfer it to. It may
be helpful, however, to organize the entire export from a single folder.

3. If you have created a new class for this purpose, move the test-takers which you want to
transfer to this new class in the test-taker library.

This selects the test-takers which are to be exported.

4. Select the class, then click Export in the button bank below the library.

The folder or test-taker metadata file which is to be exported should be formatted as a Resource
Description Framework (RDF) file.
Exporting Test-takers

5. Click the blue Export button in the dialog box to continue with the export.

6. Select the location to which you want to export your test-taker file, and then click Save.

The test-taker metadata can then be transferred either to a data storage device or a computer
network. The next step in the transfer is to import the test-takers onto the desired computer.
Groups

Groups: An Overview
Groups are organized collections of Test-takers who take the same Tests (or assessments)
throughout the duration of a course of study.

This section provides an overview of how to manage your groups of test-takers, including what you
need to do to create groups, add new test-takers to groups, and how to re-use the same groups in
different locations.

Groups

1. Creating a new group.

Since a group is made up of individual test-takers, it is necessary to create a profile for each test-
taker (see the section on Creating a Test-taker) prior to assigning test-takers to groups. Once this
has been completed, groups of test-takers can be created. See Creating a new Group for
information on how to do this.

2. Extending a group.

The section Creating a new Group also contains information on how to add test-takers to existing
groups.

3. Importing and exporting groups

Groups of test-takers can be imported to and exported from different storage devices, to enable
them to be used in different test scenarios. For more information on how to do this, see Importing
Groups and Exporting Groups.
Groups

Creating a New Group


Groups are organized collections of Test-takers who take the same assessments throughout the
duration of a course of study. Examples of a group include Laboratory Sections, Discussion Groups,
or any cohort or subdivision of students assessed using the same examinations, test
administrators, and grading criteria as others within their group. Since a group is made up of
individual test-takers, it is necessary to enter the meta-data for each test-taker prior to assigning
test-takers to groups.

Now, let's walk through the steps of creating a group.

1. Click on the Groups icon in the Assessment Builder Bar.

This will take you to the Groups page, and will show the last group which you, or the user before
you, created. On the left-hand side of your screen you will see the Group Library of existing groups.
The last group which was created will be highlighted in the library. In this tour, however, you will
create a new group.

Creating a New Group

2. Click on the New Group icon in the button bank under the library to create a new group.

This brings up a dialog box entitled Edit group for the newly created group. If desired, rename the
group using the Label field, and then click the blue Save button.

Note: To create a new group in a different folder, select the desired folder in the library, and then
click on the New Group icon in the button bank. To create a new folder (in TAO these represent
new classes), click on New class in the button bank, and then give it a label. Highlight where to put
the new folder relating to the new class. (Do not confuse Class with Group - 'Class' in this instance
means a folder, while a 'Group' means a class, section, or any other cohort of students).

3. Populate the group.

To the right of the Edit group pane are two other panes: the Select group test takers pane on the
left, and the Deliveries pane on the right. To select test-takers for your group, check the boxes by
the relevant names in the left pane, and then click the blue Save button for that pane. If a delivery
has already been assembled for this group, it will appear in the list of deliveries in the right pane. It
should be selected before clicking the blue Save button for that pane.
Groups

Importing Groups
Group metadata files can be taken from any computer and imported onto any other computer that
also has access to TAO. This is done using an operation called Import.

1. Click on the Groups icon on the Assessment Builder Bar.

This will take you to the Group Library, which you will see on the left.

2. Click on the Class in the library in which you wish to import the new group.

3. Click on Import in the button bank below the library.

This opens a dialog box which asks you to select the format of the incoming group metadata. The
supported formats are RDF (Resource Description Framework) or CSV (Comma-Separated Values).

Importing Groups of Test-takers

4. Click the blue Browse button to find the file intended for import (alternatively, the file may be
dragged and dropped into the box below the button).

5. Once the file is selected, click on the blue Import button.

This will import the Test-taker group into the Group library, after which its members can receive
Tests.
Groups

Exporting Groups
Formal Groups of Test-takers, such as laboratory or lecture groups, may be assembled on almost
any computer that has access to TAO. However, there will be situations in which sharing the
metadata of formal test-taker groups will be useful. For instance, an instructor may need to
transfer metadata for an entire class to a substitute or replacement teacher during an absence.

Test-taker groups can be shared in a few easy steps.

1. Click on the Groups icon in the Assessment Builder Bar.

2. Click either on the Class or Group symbol in the Group Library on the left-hand side to
select one or multiple groups.

3. After the selection, click Export in the button bank below the library.

In the dialog box, confirm that the folder or group metadata file highlighted is the one that should
be exported as a Resource Description Framework (RDF) file.

Exporting Groups of Test-takers

4. Click the blue Export button in the dialog box to continue with the export.

5. Select the location to which you want to export your group file, and then click Save.

The group metadata can then be transferred either to a data storage device or a computer
network. The next step in the transfer is to import the group file onto the desired computer.
Deliveries

What is a Delivery?
A Delivery is a published Test, and as such is immutable. Just as with a physically published text
(such as a PDF file) and its source document (e.g. in Word), subsequent changes to the test do not
impact a delivery. An author would need to create a new delivery, i.e. re-publish the test, for the
changes made to a particular test to take effect for the Test-takers assigned to the new delivery.

In practice, a delivery is the assembly of all the information required to assign and send out tests
to selected test-takers. This includes such information as the test which is to be delivered, the
group of test-takers to receive the test, and the circumstances (in particular the time frame) in
which the test may be taken.

Deliveries effectively define the life cycle of an Item session (i.e. the time between test-takers
starting an item and finishing it). This cycle begins when the test-taker becomes eligible for a test
delivery, a condition that is set during the actual authoring of the delivery. In simple terms, the
cycle continues through the test-taker's completing of the tests selected, and ends when test
Results for all Interactions are determined. In some instances, more than one attempt to interact
may be allowed, and the delivery life cycle continues until no remaining attempts are permitted.
The cycle finishes when the results are recorded; these may or may not be displayed to the test-
taker, but they are made available to at least the test administrator after the life cycle is complete.
Deliveries

Deliveries: An Overview
Deliveries provide the means of publishing and administering Tests. These govern when a test is
taken, by whom, and how long it is.

This section provides an overview of how to manage your deliveries, including what you need to do
to construct them, and what to do with them afterwards.

Deliveries

1. Creating a new Delivery

A delivery is a published test. In practice, publishing a test - i.e. creating a delivery - involves
assembling all of the information required to assign and send out a particular test to selected Test-
takers. This includes such information as the test which is to be delivered, the group of test-takers
to receive the test, and the circumstances (in particular the time frame) in which the test may be
taken. See the Tests section for more details on what a test is before it is assembled as a delivery.

A delivery can only be assembled after Items have been created, populated, and compiled into a
test. Profiles of the Test-takers also need to be created and then gathered into Groups.

A new delivery can then be put together: see Creating a new Delivery for information on how to do
this.

2. Using your delivery with test-takers

Once your delivery is assembled, the groups of test-takers identified are able to sit the assessment
you have prepared within the defined scenario. The Results can then be collected for the whole
delivery. For details on how to view the results of your delivery, see the section on Viewing Results.
Deliveries

Create a New Delivery


Assembled Deliveries provide the means of publishing and administering Tests. These govern
when a test is taken by selected individuals or Groups of Test-takers, and how long tests will be. A
delivery can only be assembled after the creation of Interactions, the assembly of the test, the
creation of test-taker profiles, and the gathering of test-takers into formal groups.

Let's walk through the steps of creating a delivery.

1. Click on the Deliveries icon in the Assessment Builder Bar.

This will take you to the Deliveries page, and will show the last delivery that has been created. On
the left-hand side of your screen you will see the Delivery Library of existing deliveries. The last
delivery which was created will be highlighted in the library. In this tour, however, you will create a
new delivery.

2. Click on the New delivery icon in the button bank under the library.

This brings up a dialog box entitled Create a New Delivery, which asks for a test selection. From
the pull-down menu, select the test that is to be sent to test-takers in this delivery. Once selected,
click Publish.

New Delivery

A properties panel will come up.


Note: To create a new delivery in a different folder, select the desired folder in the library, and then
click on the New delivery icon in the button bank. To create a new folder (in TAO these represent
new classes), click on New class in the button bank, and then give it a label. Highlight where to put
the new folder relating to the new class.

3. Set the delivery properties.

Label: The default name of the delivery is 'Delivery of Test Name'. This can be changed as needed.

Title: Give your delivery a title.

Maximum Executions: If left empty, test-takers may take the delivered test an unlimited number
of times. Setting this number to any non-zero integer will limit the test-takers to that number of
attempts.

Start Date and End Date: These fields establish the earliest and latest date and time at which the
test can be taken. Clicking on either date field provides a graphical interface which
allows you to set the date and time (using a calendar, and slider controls for hours and minutes).
However, it is also possible to provide the date and time by typing them in manually in the
following order: year, month, day, and 24-hour time (YYYY-MM-DD HH:MM), e.g. 2021-06-15 09:30.

Display Order: This allows you to specify the order in which the deliveries are presented.

As a test-taker you are presented with a list of all the deliveries which are assigned to you. These
are normally unordered unless you set the display order to a numeric value. This numeric value defines
the position of the delivery in the Available list.

Access: If the Guest Access box is checked, people who are not registered as test-takers can view
the delivery. This enables a test author or an administrator to preview the test without having to
assign it to a Test Center to try it out.

Test Runner Features: Checking the Security plugins box restricts access to the delivery by
forcing full-screen mode and by detecting certain key presses, such as Print Screen (used to create
a screenshot). When combined with proctoring, the test will be halted upon detection of the
relevant key presses, or on loss of focus (if the test-taker uses alt-tab or exits full-screen mode).

4. Assign the test to a group of test-takers in the panes on the right of the Properties Panel.

The Assigned to pane contains all available groups of test-takers. Select a group, and click the blue
Save button below. If there are test-takers in the group who, for whatever reason, should not take
this exam (due to absence, remedial assignments, etc.), click the blue Excluded Test-takers button
at the bottom of the test-takers pane. To move a person from assigned status to excluded status,
simply click on that person's name in the Assigned column. Click Save to close the pop-up window.
Taking a Test

Accessibility Tools
The Accessibility Tools are a set of accommodation tools which are designed to aid the Test-taker
in various ways when taking tests.

The Accessibility Tools which have been activated for a Test can be found in the Properties Panel
on the right.

Note: Accessibility tools can be activated either for a whole Section of a test, or for each Item. Not
all Accessibility Tools are activated for every test - this depends on the test configuration.

The tools available are as follows:

Calculator
This option provides the test-taker with a basic calculator.

Calculator

Answer Eliminator
The Answer Eliminator allows the test-taker to eliminate answers in Choice interactions. This is
useful if there is a long list of answer choices, and the test-taker has a learning disability.
Answer Eliminator

Answer Masking
Answer Masking allows the test-taker to mask and unmask answers in choice interactions.

Answer Masking

Area Masking
Area Masking allows the test-taker to mask parts of the item with a movable mask.
Area Masking

Flag
Flagging an answer to a particular question allows the test-taker to review the answer at a later
stage. Flagged items are marked in the review panel on the left. The test-taker can return to
flagged items by clicking on the flag in the review panel.

Highlighter
The Highlighter allows the test-taker to highlight parts of the text in an item.

Highlighter
Line Reader
The Line Reader allows the test-taker to visually isolate a line of text.

Line Reader

Magnifier
The Magnifier provides the test-taker with a movable magnifier tool.

Magnifier

Zoom Tool
The Zoom Tool allows the test-taker to zoom in on an area of an item.

Zoom Tool

Text-to-speech
The text-to-speech functionality allows the test-taker to hear the questions of a test being read
aloud. Note: This functionality is only available on the Premium Edition of TAO.

A list of keyboard shortcuts for these tools can be found in the section Keyboard Shortcuts.
Taking a Test

Review Panel
A Review Panel can be made available to Test-takers, in which they can see their progress through
a particular Test.

When a test-taker moves to the first question of a test, the review panel, if activated by the Test
Author, will automatically appear on the left.

The review panel is divided into two parts: the first section shows information about the general
Test Status, and the second section gives more detailed information on the different Test-parts.

In Test Status the following general statistics are given about the test-taker's progress:

Viewed: In the Viewed box, test-takers can see how many screens form the test, and how many of
them they have already viewed.

Answered: Here, test-takers can see how many questions there are in the test, and how many
they have answered.

Unanswered: Test-takers can see here how many of the questions in the test they have not
answered.

Flagged: The number of questions which have been flagged for later review will be shown in this
box. See the section on Accessibility Tools for more information on flagging.

In the second section, the test content for each Test-part is broken down by Interaction, and
organized into types, so that test-takers can see a more detailed overview of their progress.

By selecting the corresponding icon at the top of the second section, test-takers can choose to
display all interactions, only the interactions they have not yet answered, or only the interactions
which have been flagged for review.
Review Panel
Taking a Test

Test Navigation
Test-takers navigate through a Test using the buttons which appear on the bottom right of the
screen once they have begun the test.

There are five navigation buttons, which appear when applicable:

Next: By selecting the Next button, the test-taker will be taken to the next question in the test.
The answer given to the question on the screen will be submitted for processing. You can also hit
the letter J on your keyboard instead.

Skip: By selecting the Skip button, the test-taker will be taken to the next question in the test. Any
answer given to the question on the screen will be disregarded, and therefore not submitted for
processing.

Previous: By selecting the Previous button, the test-taker will be taken back to the preceding
question. You can also hit the letter K on your keyboard instead.

End test: This button appears on the last page of the test only, after the last question. It will take
the test-taker back to the list of available tests.

Skip and End test: This button appears as the skip option when the last question is on the last
page of the test.
Taking a Test

Keyboard Navigation
Keyboard shortcuts for the Accessibility Tools are available to the Test-taker.

The set of keyboard shortcuts provided is as follows:

Tool              Action           Shortcut

                  Next Item        J
                  Previous Item    K
Answer Masking    Toggle           D
Area Masking      Toggle           Y
Calculator        Toggle           C
Highlighter       Toggle           Shift + U
Line Reader       Toggle           G
Magnifier         Toggle           L
Magnifier         In               Shift + I
Magnifier         Out              Shift + O
Magnifier         Close            Esc
Zoom              In               I
Zoom              Out              O

A description of the Accessibility Tools to which the keyboard shortcuts apply can be found in the
section Accessibility Tools.
Taking a Test

Text-to-Speech
Test-takers can elect to use the text-to-speech functionality for a Test.

The text-to-speech functionality allows the test-taker to hear the test questions, or parts of them,
read aloud.

Note: This functionality is only available in the Premium Edition of TAO.

To activate this functionality, test-takers need to carry out the steps below:

Click on the headphones icon which appears at the bottom of the screen after beginning the test.

Four icons, representing four options, will appear:

Hand over button: By clicking on this and then on a specific question in the test, that question
will be read aloud. The current word being read out will be highlighted.

Play button: The questions on the current page of the test will be read aloud. The current word
being read out will be highlighted.

Stop button: The recording will be halted.

Settings (interlocking cogs): The speed of the speech the test-taker hears can be controlled
here.
Taking a Test

Hide Time Limits


A timer shows the amount of time which a Test-taker has left on a Test. This can be hidden if
required.

If a time limit has been set for a test, the time a test-taker has left will be shown in the middle of
the bar across the top of the screen once the test has been started.

Test-takers can choose to hide the time remaining if they wish, by clicking on the timer icon next to the time display.
Taking a Test

Connection Indicator
The connection indicator shows connectivity to the internet.

When a Test-taker begins a Test, a connection indicator in the form of a connectivity icon will
appear on the left-hand side of the blue bar across the top.

Test-takers can check connectivity by hovering over it: if their computer is connected to the
internet, the message Connected to Server will appear.
Results

Results: An Overview
The Results of Tests are collected for each test Delivery.

This section provides an overview of how to manage the results of your tests, including how to view
the results and how to export them to a different device.

Viewing the Results

1. Viewing results

The Results Tables associated with each delivery are stored in the library and the results for any
Test-taker can be viewed. For information on how to do this, see the section on Viewing Results.

2. Exporting results

There will be situations in which posting results on another system will be useful. Exporting results
tables can be done in a few easy steps. For information on how to do this, see the section on
Exporting Results.
Results

Viewing Results
The Results of a Test are collected for each Delivery, and can be viewed under the Results icon.

Viewing the Results

1. Click on the Results icon in the Assessment Builder Bar.

This will show the last Results Table which you, or the user before you, opened. On the left-hand
side of your screen you will see the Library of existing results for different deliveries.

2. Click on the desired delivery in the library.

This will bring up the test results (the Results Table) associated with the given delivery. Typically,
this should be done only after the submission deadline has passed, so that all the results can be
compiled in one table.

3. Click View on the right of the Test-taker whose results you want to see.

A results table will appear. The first table is entitled Test taker, and contains the information of the
student who took the test, e.g. name, login, email.

On the right of this table there is a filtering drop-down menu, and a blue Filter button.

Below the Test Taker table is the Test Variables table. Beneath the exit codes (the exit code
information can be ignored), you will see the "LtiOutcome" variable and its value. This reflects the
total test score for the test-taker in question: if all of the questions were answered correctly, this
will be "1"; if some of them were incorrect, it will be below 1.
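
As an illustration of how this scaled value behaves, the sketch below computes a normalized outcome from a set of item scores. This is only an illustrative assumption about how the scaled score relates to the item scores; the variable and item names used here are hypothetical and TAO computes the value internally.

```python
# Illustrative sketch only: it assumes the scaled outcome is the total raw score
# divided by the maximum achievable score, giving a value between 0 and 1.
item_scores = {"item-1": 1.0, "item-2": 0.0, "item-3": 1.0}  # hypothetical raw scores
max_scores = {"item-1": 1.0, "item-2": 1.0, "item-3": 1.0}   # hypothetical maximum scores

lti_outcome = sum(item_scores.values()) / sum(max_scores.values())
print(f"LtiOutcome: {lti_outcome:.2f}")  # prints 0.67 because one answer was incorrect
```
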
Below the Test Variables table is a detailed analysis of the Items in the test. This item table is
divided into three parts: Responses, Grades, and Traces.

Responses: includes information on the answer chosen by the test-taker.

Grades: includes information on the score given, and whether the item was completed (or skipped).

Traces: records the start and end times for the test, as well as the time zone.

As mentioned above, these three outputs can be filtered using the drop-down menu and the Filter button.

If you would like to see any of the test items pertaining to the results you are viewing, you can click
on the Review button next to the relevant item. The assessment question will then be shown. Click
on the blue close button in the right-hand corner of the screen to return to the results.
Results

Exporting Results
Results Tables can be accessed from almost any computer that has access to TAO. However, there will
be situations in which posting Results on another system will be useful. Transferring results tables
to another device can be carried out in a few easy steps.

1. Click on the Results icon in the Assessment Builder Bar.

This will show the last results table which you, or the user before you, opened. On the left-hand
side of your screen you will see the Library of existing results for specific Deliveries.

2. Click on the desired delivery in the library.

This will bring up the Test results (the results table) associated with the given delivery. Typically,
this should be done only after the submission deadline has passed, so that all the Results can be
compiled in one table.

Note: You can view the results by clicking on the blue View button to the right of a particular Test-
taker. For more information on the way the results are presented, see Viewing Results.

3. Click on the blue button marked Export Table on the Action Bar at the top of the results table.

This brings up the names of all the test-takers who are associated with the results table.

Above the names are three buttons which you can toggle between, depending on the information
you wish to include in the file to be exported.

Add Names/Anonymise: This controls the display of names associated with the results table
(Anonymise appears here in red, as the names are added by default; by clicking on Anonymise the
names will be taken out).

Add All Grades/Remove All Grades: This controls the display of scores associated with the results
table (This is blue, so by default the scores are not included; by clicking on Add All Grades they will
be added).

Add All Responses/Remove All Responses : This controls the display of responses associated with
the results table (This is blue, so by default the responses are not included; by clicking on Add All
Responses they will be added).

A filter control sits between these buttons and the table, and can be used to display the results
based on the order in which the settings above (i.e. the variables relating to test-takers' names,
scores and responses) were received. For example, you may wish to display all the variables on
your screen, and then select a subset of the data to be exported using the filter.
Exporting the Results

4. Click the Export CSV File button below the results table.

This creates a comma-separated values (CSV) file which can be opened in Microsoft Excel and
processed for score information. Once the button is clicked, a window appears asking where you would like
to save the file.

5. Select the location to which to export your results, and then Save.

The results table, easily opened in Excel, can then be transferred either to a data storage device or
a computer network.
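
If you prefer to process the exported file programmatically rather than in Excel, a standard CSV parser can read it. The sketch below is a minimal example: the file name "results.csv" and the column names it looks up are assumptions for illustration, since the actual columns depend on the variables you chose to include in the export.

```python
import csv

# Minimal sketch: read an exported TAO results file with Python's standard csv module.
# "results.csv", "Test Taker" and "LtiOutcome" are assumed names, not a TAO specification.
with open("results.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        # Each row corresponds to one test-taker in the exported table.
        print(row.get("Test Taker"), row.get("LtiOutcome"))
```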

6. Exporting the results for an entire delivery.

It is possible to export the results for a whole delivery, or more than one delivery - in other words,
the entire tree structure of results for that delivery. To do this, click on the Class in the results
library for the delivery you wish to export.

Next, click on the Export CSV icon in the button bank under the library. If the background tasks
functionality is installed on your version of TAO, the export process will be transferred to it (shown
in a circle to the left of the Properties icon on the assessment builder bar). Clicking on the circle
opens the list of tasks, containing information about each one. Click on the download icon to the
right of your results export to download the CSV file to your computer.

If the background tasks functionality isn't installed in your TAO environment, the download will
begin as soon as you select Export CSV, and will appear in the downloads area of your computer.
Premium Edition

Customization in TAO
You can make the environment your own by customising your TAO installation and adding your
own branding to the TAO product, so that it is recognizable to your Test-takers and other users.

Follow the steps below to customize your TAO environment and add your own branding:

Hover over the Settings icon (depicted by three cogs) located on the right-hand side of the
Assessment Builder Bar, and click on the Look and Feel tab.

1. Select a look and feel for your environment.

The color scheme options for the background to your TAO environment will appear on the screen.

In the default theme for the TAO environment, the assessment builder bar is black, the Action Bar
is blue, and the background is white, but there are nine other color scheme options to choose
from.

Click on the screen of your choice to set the desired look and feel of TAO.

2. Upload your company's logo.

Having your company or organization's own logo on your TAO environment makes the environment
your own.

Select your logo file using the blue Browse button, or drag and drop the file from your hard drive.

Next, add a title, and the link which you would like to be activated. When a user clicks on your logo,
the linked URL will open in a new window.

Now add the information Operated by to the page footer. You can add your organization or
company's name, and an email address, in the boxes.
Adding Look & Feel, and your own Logo

If you are happy with the information you have entered, click on Apply changes to save it. If not,
click on Discard changes.

Your institution or company may also want to provide direct access to the TAO environment from
its own website. As a subscriber to the Premium Edition of TAO, your organization can set up a
shortcut to the TAO environment via a URL which you own, to refer users directly to TAO.
Premium Edition

Enabling Text-to-Speech
In the Premium edition of TAO, Tests can be configured to provide Test-takers with the option of
hearing the test content as well as reading it.

The text-to-speech functionality allows the test-taker to hear the test questions, or parts of them,
read aloud. It is configured on the Section and Item levels of a test.

Carry out the steps below to activate this functionality for your test:

After creating a new Test, configure the Properties for each section or item for which you wish to
activate the text-to-speech functionality.

Click on the appropriate properties icon (depicted as three interlocking cogs) for the relevant
section/item. The Section level properties icon can be found on the right of each section. The Item
level properties icon can be found on the right of each item.

The properties will appear in the Properties Panel on the right.

Open the drop-down menu Test-taker Tools and check the Text to Speech box at the bottom.

The text-to-speech functionality is now enabled, and will be available to test-takers for the relevant
sections or items of your test.

Text to Speech
Premium Edition

Test Centers: An Overview


Test Centers deliver the assessments assembled in TAO to Test-takers. They are typically
institutions in the education sector.
This section provides an overview of the role test centers play in the TAO environment, as well as
how to manage your test centers.

1. The role of test centers

Test centers assign Deliveries to Groups of test-takers. Before a test center can assign a specific
delivery to test-takers, the delivery first needs to be released to that test center.

See the section Licensing a Delivery to a Test Center for details on this.

2. Managing test centers

Each test center needs to be registered in TAO. Typically, the Tenant Administrator would set up
the test center structure, and create the users with the appropriate roles.

For more information on how to add a test center, and which users and their associated roles are
needed, see the section Creating a new Test Center.

For information on how to remove an existing test center from your test center library, see
Removing a Test Center.
Premium Edition

Creating a new Test Center


Test Centers deliver Tests to Test-takers.
This section tells you how test centers are created, and which users and their associated roles need
to be created for a TAO-registered test center to function.

1. Create a new test center.

To create a new test center in TAO, first click on the Test Centers icon in the Assessment Builder
Bar. This opens the Test Centers page, with the Library of Test Centers on the left.

The last test center to be edited (either by you or a previous user) will be highlighted in the library.

Next, click on the New Test Center icon in the button bank under the library. This will define a new
test center in the selected folder.

Then, label your test center in the space provided on the canvas, and click Save.

2. Assign the roles which your new test center will need.

There are three roles which need to be assigned within a test center:

Administrator: The Test Center Administrator manages the Proctors at the test center, in other
words can create and remove proctors, as well as authorize them to proctor specific deliveries.
Users with this role can also act as proctors themselves.

Proctor: The proctor role is assigned to the person who will oversee the execution of a particular
Delivery. See the section on Proctoring for more information on what proctoring entails.

Sub-center: Using the sub-center role, test centers can be linked to each other within a hierarchy.

See the section Licensing a Delivery to a Test Center for details on how a test center can deliver
tests to test-takers.
Creating a new Test Center
Premium Edition

Importing a Test Center


Test Center metadata files can be taken from any computer and imported onto any other computer
which also has access to TAO. This is done using an operation called Import.

Follow the steps below to import a test center:

1. Choose a location for your test center.

Click on Test Centers on the Assessment Builder Bar. This will take you to the Test Center Library,
which you will see on the left.

Then click on the Test center class (folder) in the library in which you wish to import the new test
center.

2. Import the selected test center.

Click on Import in the button bank below the library. This opens a dialog box. The supported format
for the file to be imported is CSV (Comma-Separated Values). Field delimiters and field enclosers
can be defined in the appropriate boxes.
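
As an illustration of how the delimiter and field encloser settings relate to the file contents, the sketch below prepares a small CSV file of the kind that could be imported, using ";" as the field delimiter and double quotes as the field encloser. The column names and values used here are hypothetical, not a TAO specification; check your own installation for the fields it expects.

```python
import csv

# Hypothetical example: write a test center CSV with ";" as the field delimiter
# and double quotes as the field encloser. Column names are illustrative only.
rows = [
    {"label": "North Campus Test Center", "comment": "Main exam hall"},
    {"label": "South Campus Test Center", "comment": "Room B12"},
]

with open("test-centers.csv", "w", newline="", encoding="utf-8") as handle:
    writer = csv.DictWriter(
        handle,
        fieldnames=["label", "comment"],
        delimiter=";",
        quoting=csv.QUOTE_ALL,
    )
    writer.writeheader()
    writer.writerows(rows)
```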

Next, click the blue Browse button to find the file intended for import (alternatively, the file may be
dragged and dropped into the box below the button).

Once the file is selected, click on Next. This will take you to a new screen.

To import the chosen test center, click on the blue Import button. This will import the test center's
profile into the Test Center library. The imported test center can then deliver TAO assessments to
Test-takers.
Premium Edition

Licensing a Delivery to a Test Center


In order to deliver a Test - in other words, to enable Test-takers to sit it - the Test is usually referred
to a Test Center in the form of a Delivery.
It is the task of the Tenant Administrator to refer deliveries to the appropriate test centers.

Follow the steps below to enable a test center to assign a delivery:

1. Select the test center which is to assign the delivery.

Click on the Test Centers icon in the Assessment Builder Bar. This opens the Test Centers page,
with the Library of test centers on the left.

The last test center to be edited (either by you or a previous user) will be highlighted in the library.

Next, select from the library the test center which is to assign the delivery to a group of test-takers.

2. Add your delivery to their list of eligible deliveries.

At the bottom of the canvas which appears is a list of Eligible Deliveries. In the list are the
deliveries which a test center is licensed to deliver.

Now, click on Add to add a delivery from your delivery library to their Eligible Deliveries, or click on
Import to import a delivery from another computer.

3. Adjust the settings for your delivery.

Under Actions, to the right of the delivery you have just added, are the settings for that delivery.

Edit: Here, you can edit the list of test-takers eligible to take this test. Only the tenant
administrator is authorized to do this.

Clicking on Edit will bring up a list of all test-takers registered in the relevant TAO installation. The
test-takers eligible for this test will have a tick in the box next to their name. This list can be
edited.

Proctor/Un-Proctor: You can toggle between these two settings. The default setting is Proctor.
Click on Un-Proctor to disable proctoring, or Proctor to enable the delivery to be proctored.

Proctoring can also be set when a new delivery is assembled: see the section Create a new
Delivery for details.

See the section on Proctoring for more information on the role of proctors, and what a proctor is
authorized to do.

Remove: Clicking on Remove will remove this delivery from the list of eligible deliveries of that
test center.
Licensing a Delivery to a Test Center
Premium Edition

Removing a Test Center


Test Centers deliver Tests to Test-takers.
This section tells you how test centers are removed from the TAO Test Center Library.

1. Remove a test center.

To remove a test center from the test center library, click the Test Centers icon in the Assessment
Builder Bar to open the Test Centers page, with the test center library on the left.

Select the test center in the library which you wish to delete, and then click on the Delete icon in
the button bank under the library.
Advanced Features

User Settings
The TAO Administrator Guide describes how you can change your password and your settings.

User Settings
Advanced Features

Proctoring
The Proctoring extension allows the administrators of deliveries to monitor a Delivery. By assuming
the role of proctor, a teacher or administrator is able to follow the progress of Test-takers sitting a
Test, submit a report on the status of the delivery and also intervene if necessary. There are two
roles involved: a proctor and one or more test-takers.

Follow the steps below to assign a proctor to a delivery.

1. Create a user with the role proctor

See the User Management section for details on how to do this.

Adding a Proctor as User

2. Create a delivery which requires proctoring

See the Create a new Delivery section for details on how to create a delivery.

The properties pertaining to your new delivery will appear on the Canvas in the middle of the
screen. Check the box Require proctoring at the bottom of the properties list, and save your
delivery.
Configuring a Delivery

3. Log in as proctor

A list of all the deliveries which require a proctor will appear.

Deliveries requiring a Proctor

4. Click on monitor for the delivery you wish to proctor.

A list of all the test-takers who are logged in for this session will appear. You can filter the test-
takers either by date or by status (started, terminated, etc):
Filtering test-takers according to date

Note: You will need to click on 'refresh' to see test-takers who have logged in since you logged in
as proctor. If none are logged in, and you wish to continue, proceed to step 5.

You can now carry out the following actions for this delivery:

Possible Actions when Proctoring

Authorize: authorize a test-taker or group of test-takers to start the session

Pause: Pause a session (e.g. if there is an interruption)


Report: Report an irregularity

Terminate: Stop a session (e.g. if there is a power cut)

History: Check the history of the session

These actions can be carried out for individual test-takers or for all test-takers.

Note: the above steps are only possible if one or more test-takers have been assigned to the
delivery, and are logged in to a session. If no test-takers are logged in to the session, it is not
possible to monitor the delivery.

No test-takers logged in

Carry out Step 5 below if you wish to simulate a test situation in order to experiment with the
proctoring option.

5. Testing the proctoring functionality

To try out the proctor functionality you will need to simulate a test scenario. To do this, log in as
both proctor and test-taker. If this is to be done on one computer, you will need either two
different web browsers, or one browser with one window open in regular mode and another in
private mode.

Log in as a test-taker on one of the browsers, and as a proctor on the other. Continue the
procedure above from step 4.
Appendix

Glossary
The following are specialized terms which users of TAO may frequently run into while using the
program to author tests, administrate them, or generate assessment reports.

Action Bar
The menu, in the form of a blue bar, situated under the black Assessment Builder Bar, which
appears across the top of all TAO web pages.

Adaptive Test
A Test which changes the presentation of Items based on Test-taker response. Generally achieved
through the use of pre-conditions and branching. Advanced.

Assessment Builder Bar


The menu, in the form of a black bar, across the top of all TAO web pages.

Assessment Variable
Technical term for the final score assigned to a Test-taker's performance in an Item Session or a
Test Session.

Associable Hotspot Panel


The panel on the left of the Hotspot Interaction which is being edited, from which the shapes
needed for the hotspots can be chosen.

Attempt
A single candidate interaction with an Item that possibly assigns values to or updates an
associated Response Variable.

Author
A person who authors material for use in TAO assessments, typically a teacher. See Item Author
and Test Author.

Authoring System
A system used by authors to create and edit Items and Tests. TAO is an example of an authoring
system.

Back office
The administrative interface of TAO, which manages items, tests, test-takers and so on.

Base-type
A predefined data type used to define a value set from which Item Variables are drawn.

Basic Item
An Item which contains one and only one Interaction.
Candidate Session
The time during which the Candidate or Test-taker is interacting with an Assessment Item as part
of an Attempt. An Attempt may extend across more than one Candidate Session (such as when a
Test-taker terminates one Candidate Session in order to answer another question first, and then
starts another Candidate Session to return to the original question).

Canvas
The main area in the middle of the screen for the user to define the contents of Items, Tests, etc.

Class
A group of related Items, Tests, etc. Folders in the Library represent Classes.

Common Interactions
An Interaction type which covers many of the simple interactions that are commonly used in
testing.

Common Interactions Library


The first section of the Interactions Library which appears on the left when editing an item. It
contains the Common Interactions used in TAO.

Composite Item
An Item which contains more than one Interaction.

Container
An aggregate data type that can contain multiple values of unmodified Base-types, or even be
empty.

Custom Interactions
An Interaction type which provides library space for miscellaneous Interactions developed by the
user.

Delivery
An assembly of all information required to assign and send out Tests to selected Test-Takers.

Delivery Library
The Library of existing Deliveries, situated in the panel on the left.

Delivery Library Panel


The panel on the left, where existing Deliveries are shown.

Delivery System
A system which administers and delivers assessments to Test-takers through the use of a delivery
engine, or a process that coordinates both Item Delivery and response evaluation (scoring) and
Feedback.

End Attempt
The possibility to exit a test item before it is complete.

Event Log
The Event Log in TAO logs all events, from a user logging in to the system, to creating a new test.
The log creates an event ID, provides information on the user and their role, as well as the type
and date of the event.

Extension
The components, or building blocks, of which TAO is made. Every extension adds a new set of
features.

Extensions Manager
Architecture used to view and manage installed and available extensions.

Feedback
Any material presented to the candidate as a result of an outcome variable meeting or exceeding
particular conditions. This can include integrated, modal, and Test Feedback.

Formula Editor
An editor containing mathematical symbols, used to insert Math Expressions in tests.

Front Office
The test-taker screen.

Global Manager
The global manager has access to the entire platform apart from the system components such as
the extension manager.

Graphic
A picture or other image upon which Graphic Interactions are based. They often form the
background on which the Interaction is then placed.

Graphic Interactions
An Interaction type which covers the Graphical Interactions commonly used in testing.

Graphic Interactions Library


The third section of the Interactions Library which appears on the left when editing an Item. It
contains the Graphic Interactions used in TAO.

Group
Organized collections of Test-Takers who take the same assessments throughout the duration of a
course of study.

Group Library
The Library of existing Groups, which appears in the panel on the left when you select Groups from
the Assessment Builder Bar.

Inline Interactions
An Interaction type which covers the text-based interactions commonly used in testing.

Inline Interactions Library


The second section of the Interactions Library which appears on the left when editing an item. It
contains the Inline Interactions used in TAO.

Interaction
The part of an Item which allows the candidate to interact with an assessment, selecting or
constructing a response.

Interactions Library
The panel which appears on the left when editing an item contains the Interactions Library. It
contains all the available Interactions used in TAO.

Interaction Properties Panel


The panel which appears on the right when an Interaction is being edited, where it is possible to
define certain properties pertaining to this Interaction.

Integrated Feedback
Feedback which is integrated into an Item. Unlike with Modal Feedback, Test-takers may update
their responses while viewing Integrated Feedback.

Item
The smallest exchangeable object in an assessment. An Item is more than a 'Question' in that it
also contains the contextual instructions, the processing to be applied to the Test-taker's
response(s), and any Feedback (including hints and solutions). May also be called Assessment
Item.

Item Author
A person who authors and manages items and media to be used in TAO, typically a teacher.

Item Properties
Item Properties define attributes, or characteristics, pertaining to Items.

Item Scoring Rules


See Response Processing.

Item Session
The accumulation of all Test-taker Attempts at a particular Item.

Item Session Control Panel


The Item Session Control Panel contains settings for items in a particular test such as the time
allowed for that item. See Test Settings for the properties which can be set in this panel.

Item Variable
A variable which records Test-taker responses and any outcomes assigned during response
processing during an Item Session. As a special kind of Assessment Variable, Item Variables are
also used to define Item Templates.

Items Library
The Library of existing Items, which appears in the panel on the left when you select Items from
the Assessment Builder Bar.

Library
The panel on the left represents the various Libraries. There are Libraries for existing Items, Tests,
Deliveries, etc, as well as for possible Interaction types.

Library Panel
The panel on the left, where existing Items, Tests, Deliveries, etc, as well as possible Interaction
types, are shown.

List Style
The possible answers will be presented to the Test-taker in the form of a list.

Material
All static text, image, or media objects which are intended for the Test-taker rather than for being
interpreted by a processing system. Interactions are not considered to be material.

Math Expression
Math expressions can be entered using MathML or LaTeX.

Media Formats
For maximum compatibility across browsers, mp4 or mpeg formats should be used for video (AAC
codec for audio + H.264 codec for video), and mp3 or ogg should be used for audio.

Metadata
Information described in the properties of Interactions, Items, Tests, or Deliveries, including
everything except their content.

Modal Feedback
Feedback which is not integrated into an Item's body during presentation to the Test-taker.

Multiple Response
A Response Variable that serves as a container for multiple values taken from a value set defined
by a base-type. These are processed as an unordered list, and may be empty.

Ordered Response
A Response Variable which is a Container for multiple values taken from a value set defined by a
Base-type. These are processed as an ordered list (sequence) of values, but may be empty.

Outcome
The result of an Assessment Test or Item. These are represented by one or more Outcome
Variables.

Outcome Processing
The process which adds up the values of Item Outcomes (or Responses) in order to produce Test
Outcomes.

Outcome Variable
Variables taken from outcome declarations. Values are set either from a default given within the
declaration or by a response rule encountered during Response Processing (for Item outcomes) or
Outcome Processing (for Test Outcomes).

Pattern
Patterns can be set using regular expressions in the QTI creator. If the Test-taker's Response does
not match the Pattern, an error is shown. The Response cannot be submitted until the input is
corrected in line with the pattern.

Pre-formatted Text
Indicates that the text to be entered by the Test-taker is pre-formatted and should be rendered in a
way consistent with the definition of pre in XHTML.

Proctor
A teacher or administrator given permission to follow the progress of test-takers sitting a test. A
proctor can stop a test.

Proctor Administrator
A teacher or administrator given permission to follow the progress of test-takers sitting a test. A
proctor administrator can stop and restart a test.

Proctoring Screen
The screen from which the proctor administrates tests.

Properties
Properties define attributes, or characteristics, pertaining to Interactions, Items, Tests, or
Deliveries.

Properties Panel
The panel in which it is possible to define certain properties pertaining to the Interaction, Item,
Test, Delivery etc, which is being edited. Usually this is on the right.

Radio Button
A test type whereby the Test-taker is only permitted to select one answer. If an additional answer
is clicked, the selection will move to this choice, leaving the first one unselected.

Resource Manager
A method by which to manage and select media files for use in Interactions.

Response
Data provided by the Test-taker through interaction with an Item or Item Part. Associated values
are represented as Response Variables.

Response Processing
The process by which Response Variable values are scored and Item Outcome values are assigned.

Response Properties Panel


The panel on the right of the Interaction being edited, where it is possible to define the properties
pertaining to the Response to this Interaction.

Response Variable
Variables taken from Response declarations and bound to Interactions in the Item body, they
record the candidate's Responses.

Results
Results in TAO are the output from a Delivery, and contain the test details for each Test-taker
sitting a particular assessment.

Results Table
The Results are displayed in a Results Table containing the test results associated with a given
delivery, including information on each student who took the test.

Scoring Engine
The part of the assessment system that processes Test-taker Responses and scores them based on
Response Processing rules.

Section
A section of a Test, which can be managed independently. Settings for item selection and ordering
are defined at section level. Item session control and tools (by categories) defined here propagate
to the item level.

Single Response
A Response Variable which takes a single value from the value set defined by a Base-type.

Shared Stimulus
A stimulus is a piece of information which sets the context for a question or a series of questions. A
Shared Stimulus is one that is shared between multiple Items.

Tenant Administrator
This functional role allows full access to a TAO tenant, on the basis of a single account provided to
clients. The Tenant Administrator can create other accounts from this single account, but does not
have access to Role and Permissions management.

Test (or Assessment)


An organized collection of Items which are used to measure performance of a Candidate with
respect to that person's level of mastery of a given subject. Assessments contain all instructions
required for navigation through a sequence of Items. They also calculate the final score earned by
the Test-taker.

Test Author
A person who authors and manages tests in TAO, typically a teacher. This is an additional role
which needs to be allocated in combination with the Item Author role in order to select items and
assemble tests.

Test Feedback
Feedback presented to a Test-taker, based on final score values.

Test Library
The Library of existing Tests, which appears in the panel on the left when you select Tests from the
Assessment Builder Bar.

Test Library Panel


See Test Library.

Test Part
Part of a Test, which can be managed independently. The navigation mode (linear or non-linear) is
defined at the test part level.

Test Session
The Interaction of a Candidate with a Test and the Items it contains.

Test Center
An educational institution registered to deliver TAO assessments.

Test Center Administrator


This functional role manages the proctors at a test center - in other words can create, remove and
authorize proctors for deliveries, as well as act as a proctor.

Test Center Library


The Library of existing Test Centers which appears in the panel on the left when you select Test
Centers from the Assessment Builder Bar.

Test-taker (or Candidate)


A person who participates in a Test, assessment or exam by answering questions.

Test-taker Library
The Library of existing Test-takers, which appears in the panel on the left when you select Test-
takers from the Assessment Builder Bar.

Test-Taker Tools
A set of accommodation/accessibility tools designed to aid the Test-taker in various ways when
taking tests.

Text Block
A framework with which a block-related interaction can be created.

Text Editing Toolbar


The bar which appears above an item during authoring. It enables the item author to insert italics
or underline text, or add a math expression or table, for example. See Interaction Authoring Tools
for the tools available.

Text-to-speech
The text-to-speech functionality allows test-takers to hear the test questions, or parts of them, read
aloud.

Time Dependent Item


An Item which records the accumulated elapsed time for a Candidate Session in a Response
Variable, used during Response Processing.

Time Independent Item


An Item which does not use the accumulated elapsed time during Response Processing.

Tooltip
An option in the Text Editing Toolbar where an item author can add a hint, for example, for a
particular interaction.
Appendix

Contributing to TAO
When contributing to the TAO project, please first discuss the change you wish to make via an issue.

Contributions to the TAO codebase are made using the fork & pull model. This contribution model
has contributors maintaining their own copy of the forked codebase (which can easily be synced
with the main copy). The forked repository is then used to submit a request to the base repository
to “pull” a set of changes. For more information on pull requests, please refer to GitHub Help.

The TAO development team will review all issues and contributions submitted by the community of
developers in first-in, first-out order. During the review, we might require clarifications from the
contributor. If there is no response from the contributor within two weeks, the pull request will be
closed.

Contribution process
If you are a new GitHub user, we recommend that you create your own free GitHub account. This
will allow you to collaborate with the TAO development team, fork the TAO project and send pull
requests.

1. Check the open and closed issues for similar proposals before starting work on a new
   contribution.
2. Create and test your work.
3. Fork the repository of the TAO extension you wish to contribute to.
4. Create a branch that follows the GitFlow branching model.
5. Once development is done, create a pull request that targets the development branch of the
   extension you are contributing to.
6. If your code depends on changes in another extension, create a draft pull request, until all
required pull requests are created.
7. Once your contribution is received, the TAO development team will review it and collaborate
   with you as needed.

Code of Conduct
Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and
maintainers pledge to make participation in our project and our community a harassment-free
experience for everyone, regardless of any differences between us.

Our Standards
Examples of behavior that contributes to creating a positive environment include:

Using welcoming and inclusive language


Being respectful of differing viewpoints and experiences
Gracefully accepting constructive criticism
Focusing on what is best for the community
Showing empathy towards other community members
Examples of unacceptable behavior by participants include:

The use of sexualized language or imagery and unwelcome sexual attention or advances
Trolling, insulting/derogatory comments, and personal or political attacks
Public or private harassment
Publishing others' private information, such as a physical or electronic address, without
explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting

Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are
expected to take appropriate and fair corrective action in response to any instances of
unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits,
code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to
ban temporarily or permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is
representing the project or its community. Examples of representing a project or community
include using an official project e-mail address, posting via an official social media account or
acting as an appointed representative at an online or offline event. Representation of a project
may be further defined and clarified by project maintainers.

Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by
contacting the project team at community@taotesting.com. All complaints will be reviewed and
investigated and will result in a response that is deemed necessary and appropriate to the
circumstances. The project team is obligated to maintain confidentiality concerning the reporter of
an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face
temporary or permanent repercussions as determined by other members of the project's
leadership.

Attribution
This Code of Conduct is adapted from the Contributor Covenant homepage, version 1.4, available
at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html.

Templates
Contribution Template
### Subject of the issue{#appendix-subject-of-the-issue}

Describe your issue here.

### Your environment{#appendix-your-environment}

* Which browser and version are you using?


* Which PHP version are you using?
* Which Database engine and version are you using?
* Which Web server are you using?
* Which extensions are installed, and what version are they?

### Steps to reproduce{#appendix-steps-to-reproduce}

Tell us how to reproduce this issue.

### Expected behaviour{#appendix-expected-behaviour}

Tell us what should happen

### Actual behaviour{#appendix-actual-behaviour}

Tell us what happens instead

Pull Request Template

_Before you submit a pull request, please make sure of the following:_

- [ ] The title of this pull request offers a good description of what is changed (as it is used in release notes).
- [ ] Your branch follows the [GitFlow](https://datasift.github.io/gitflow/IntroducingGitFlow.html) branching model.
- [ ] The code follows the [best practices (to be defined)](#).
- [ ] The functionality has been manually tested (if applicable).
- [ ] The update script has been run, and causes no issues.
- [ ] The functionality has been tested after a clean install.
- [ ] A new unit test has been created, or the existing test has been updated.
- [ ] All new and existing tests passed.
- [ ] The module version has been bumped in both the manifest.php, and Updater.php files.

---
**Depends on**
- [ ] List other pull requests that depend on this pull request
- [ ] Also list pull requests that require this pull request
---

Describe the changes you made in your pull request here

**Testing the changes**

Please provide a description of how to test the changes made in this pull request.
Appendix

GNU GENERAL PUBLIC LICENSE


Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.


51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.

Preamble
The licenses for most software are designed to take away your freedom to share and change it. By
contrast, the GNU General Public License is intended to guarantee your freedom to share and
change free software -- to make sure the software is free for all its users. This General Public
License applies to most of the Free Software Foundation's software and to any other program
whose authors commit to using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public
Licenses are designed to make sure that you have the freedom to distribute copies of free software
(and charge for this service if you wish), that you receive source code or can get it if you want it,
that you can change the software or use pieces of it in new free programs; and that you know you
can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or
to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if
you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give
the recipients all the rights that you have. You must make sure that they, too, receive or can get
the source code. And you must show them these terms so they know their rights.

We protect your rights with two steps:

1. copyright the software, and


2. offer you this license which gives you legal permission to copy, distribute and/or modify the
software.

Also, for each author's protection and ours, we want to make certain that everyone understands
that there is no warranty for this free software. If the software is modified by someone else and
passed on, we want its recipients to know that what they have is not the original, so that any
problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the
danger that redistributors of a free program will individually obtain patent licenses, in effect
making the program proprietary. To prevent this, we have made it clear that any patent must be
licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION


0. This License applies to any program or other work which contains a notice placed by the
copyright holder saying it may be distributed under the terms of this General Public License. The
"Program", below, refers to any such program or work, and a "work based on the Program" means
either the Program or any derivative work under copyright law: that is to say, a work containing the
Program or a portion of it, either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in the term "modification".) Each
licensee is addressed as "you".

Activities other than copying, distribution and modification are not covered by this License; they
are outside its scope. The act of running the Program is not restricted, and the output from the
Program is covered only if its contents constitute a work based on the Program (independent of
having been made by running the Program). Whether that is true depends on what the Program
does.

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in
any medium, provided that you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to
this License and to the absence of any warranty; and give any other recipients of the Program a
copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may at your option offer
warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion of it, thus forming a work
based on the Program, and copy and distribute such modifications or work under the terms of
Section 1 above, provided that you also meet all of these conditions:

a) You must cause the modified files to carry prominent notices stating that you changed the files
and the date of any change.

b) You must cause any work that you distribute or publish, that in whole or in part contains or is
derived from the Program or any part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.

c) If the modified program normally reads commands interactively when run, you must cause it,
when started running for such interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a notice that there is no warranty (or
else, saying that you provide a warranty) and that users may redistribute the program under these
conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself
is interactive but does not normally print such an announcement, your work based on the Program
is not required to print an announcement.)

These requirements apply to the modified work as a whole. If identifiable sections of that work are
not derived from the Program, and can be reasonably considered independent and separate works
in themselves, then this License, and its terms, do not apply to those sections when you distribute
them as separate works. But when you distribute the same sections as part of a whole which is a
work based on the Program, the distribution of the whole must be on the terms of this License,
whose permissions for other licensees extend to the entire whole, and thus to each and every part
regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely
by you; rather, the intent is to exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program with the Program (or with
a work based on the Program) on a volume of a storage or distribution medium does not bring the
other work under the scope of this License.

3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code
or executable form under the terms of Sections 1 and 2 above provided that you also do one of the
following:

a) Accompany it with the complete corresponding machine-readable source code, which must be
distributed under the terms of Sections 1 and 2 above on a medium customarily used for software
interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a
charge no more than your cost of physically performing source distribution, a complete machine-
readable copy of the corresponding source code, to be distributed under the terms of Sections 1
and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding
source code. (This alternative is allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such an offer, in accord with
Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it.
For an executable work, complete source code means all the source code for all modules it
contains, plus any associated interface definition files, plus the scripts used to control compilation
and installation of the executable. However, as a special exception, the source code distributed
need not include anything that is normally distributed (in either source or binary form) with the
major components (compiler, kernel, and so on) of the operating system on which the executable
runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated
place, then offering equivalent access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not compelled to copy the source
along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided
under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License. However, parties who have
received copies, or rights, from you under this License will not have their licenses terminated so
long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else
grants you permission to modify or distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by modifying or distributing the
Program (or any work based on the Program), you indicate your acceptance of this License to do
so, and all its terms and conditions for copying, distributing or modifying the Program or works
based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient
automatically receives a license from the original licensor to copy, distribute or modify the Program
subject to these terms and conditions. You may not impose any further restrictions on the
recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance
by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other
reason (not limited to patent issues), conditions are imposed on you (whether by court order,
agreement or otherwise) that contradict the conditions of this License, they do not excuse you from
the conditions of this License. If you cannot distribute so as to satisfy simultaneously your
obligations under this License and any other pertinent obligations, then as a consequence you may
not distribute the Program at all. For example, if a patent license would not permit royalty-free
redistribution of the Program by all those who receive copies directly or indirectly through you,
then the only way you could satisfy both it and this License would be to refrain entirely from
distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the
balance of the section is intended to apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right
claims or to contest validity of any such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is implemented by public license practices.
Many people have made generous contributions to the wide range of software distributed through
that system in reliance on consistent application of that system; it is up to the author/donor to
decide if he or she is willing to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest
of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or
by copyrighted interfaces, the original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding those countries, so that
distribution is permitted only in or among countries not thus excluded. In such case, this License
incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public
License from time to time. Such new versions will be similar in spirit to the present version, but
may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of
this License which applies to it and "any later version", you have the option of following the terms
and conditions either of that version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of this License, you may choose any
version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution
conditions are different, write to the author to ask for permission. For software which is copyrighted
by the Free Software Foundation, write to the Free Software Foundation; we sometimes make
exceptions for this. Our decision will be guided by the two goals of preserving the free status of all
derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE
PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN
WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY
SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE
PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL,
SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO
USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
