
International Journal of Software Engineering and Its Applications

Vol.8, No.1 (2014), pp.159-166


http://dx.doi.org/10.14257/ijseia.2014.8.1.14

Assessment Management System based on IMS QTI 2.1


Youngseok Lee
HYU Institute for Embedded Software, Hanyang University, 17 Haengdang-dong,
Sungdong-gu, Seoul, 133-791, South Korea
yslee38@hanyang.ac.kr
Abstract
Traditionally, teachers have used exams to evaluate learners during the educational process, either as a course requirement or to measure how well the concepts presented in a lecture or course have been assimilated. Conventional e-learning testing systems have done little to improve the level of learning because they do not reflect a learner's personal characteristics. In recent years, to address this problem, online learners have been assessed through tests offered by learning sites that provide lectures, by schools, or by government-recognized bodies, based on their learning results. Online test management for learners covers item management techniques, analysis of learners' assessment results, results reporting, and the feedback technology on which these heavily rely. Individual evaluation results are then used for the next stage of education and for promotion. Therefore, in this paper, we design and develop a prototype assessment system that conforms to the international standard for test-item interoperability. Focusing on an item bank management system, we built an online testing and evaluation system and developed a question authoring tool for creating items in a simple form. In addition, import and export functions were designed through a management module so that assessment items based on the international standard can be shared between different evaluation systems.
Keywords: QTI, Question and Test Interoperability, Assessment System, Online
Assessment, E-Learning

1. Introduction
With the development of IT, there is a growing trend in the field of education toward using e-learning content, a growing user base, and an expanding e-learning market. The typical online assessment system follows the same process as a paper-based test: it measures and evaluates the learner's ability by presenting paper-and-pencil-style test questions via the computer. An online assessment system can also present a variety of question types that cannot be expressed on paper, which affects the validity of the test. It has the advantage that scoring results are available immediately after the test, and questions can exploit various types of multimedia [1, 2].
The importance of online assessment is increasing, not only because of its wider spread but also because it is used in critical tasks such as job promotion and professional certification. After a brief review of assessment capabilities, this paper addresses issues such as reusability, interoperability, and standardization of assessment content, in an attempt to identify present trends in the evolution of online assessment [3-5].
Assessment content creation, conformance to standards, and the development of content repositories are issues of concern. Consensus on the value of reuse, interoperability, and standardization does not by itself ensure adoption: practical considerations, authors' needs and willingness, institutional interests, economics, and several other factors may act as driving forces or as impediments. This article evaluates the current state of standards adoption and attempts to identify the driving forces behind it. Only free software implementations are considered, and among them a selection of those most widely deployed [6, 7].
In existing online systems, assessment items and results are produced in system-specific formats. As a result, sharing this information is difficult, time-consuming, and costly. An evaluation system should therefore be established that complies with international standards, which raises the question of how items are to be shared and distributed. There is a need to build an assessment management system based on international standards together with a system for sharing assessment items.

2. Literature Review
2.1. Computerized Adaptive Test
A computerized adaptive test improves the efficiency and accuracy of ability testing by analyzing learners' responses and selecting appropriate items for each individual. Because the next items are selected according to the answers given to previous items, examinees' ability can be estimated more accurately. A computerized adaptive test can evaluate an examinee's ability with fewer items than a conventional paper-and-pencil test, and results can be confirmed immediately after the test is completed, so it is cost-effective as well [8].
To conduct a computerized adaptive test, an item bank must be established in which many items are systematically stored after being analyzed for difficulty, discrimination, and guessing. As for the test theory used for item analysis, many theories have been suggested since classical test theory was developed in the 1920s. Recently, many testing systems that use the principles of item response theory (IRT) and the computational power of computers to conduct tests appropriate to each examinee's level have been under development [9].
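As a concrete illustration of this selection step, the sketch below picks the next item by maximizing Fisher information at the current ability estimate under the three-parameter logistic (3PL) IRT model. This is a generic textbook rule rather than the algorithm of any particular system cited here, and the Item record, its fields, and the item pool are hypothetical names of ours.

// Adaptive item selection under the 3PL IRT model (sketch).
using System;
using System.Collections.Generic;
using System.Linq;

// An item's IRT parameters: discrimination (A), difficulty (B), guessing (C).
record Item(string Id, double A, double B, double C);

static class Cat
{
    // Probability of a correct response at ability theta under 3PL.
    static double P(Item i, double theta) =>
        i.C + (1 - i.C) / (1 + Math.Exp(-i.A * (theta - i.B)));

    // Fisher information of an item at theta; larger means the item
    // measures more precisely near this ability level.
    static double Info(Item i, double theta)
    {
        double p = P(i, theta), q = 1 - p;
        return i.A * i.A * (q / p) * Math.Pow((p - i.C) / (1 - i.C), 2);
    }

    // Core CAT rule: administer the not-yet-used item that is most
    // informative at the current ability estimate thetaHat.
    public static Item NextItem(IEnumerable<Item> unused, double thetaHat) =>
        unused.OrderByDescending(i => Info(i, thetaHat)).First();
}

A full CAT would re-estimate theta after each response (for example by maximum likelihood) and stop once the estimate's standard error falls below a threshold.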
Computer-based assessment comprises all assessment processes in which ICT (information and communications technology) is used to present assessment material and record responses [1]. Assessment is commonly offered within a learning platform or LCMS (learning content management system), a web- or computer-based application designed to manage educational and assessment content [10]. Most LCMSs provide several types of question: multiple choice; true or false; short answer (a word or simple phrase from a list); matching (a two-column concept-matching question); and so on. Questions can be presented in a different random order each time, time limits can be set, and the number of attempts and the final results can be recorded [11, 12].
2.2. IMS QTI
The IMS Global Consortium, an industry and academic consortium, produced the
IMS QTI (IMS Question & Test Interoperability Specification) to enable the exchange
of question and test data, as well as their corresponding results reports [13]. It was
designed both for interoperability and innovation, by the provision of extension points
where specialized or proprietary data can be wrapped consistently.
IMS QTI is an IMS specification that enables the exchange of data related to questions, tests, and the results of the assessment process between heterogeneous IMS-compliant systems and tools [14]. This interchange is usually performed using a binding of the IMS QTI abstract models to XML (eXtensible Markup Language) [15]. In this section we summarize the concepts of QTI that are relevant to the present work.
In QTI terms, questions are called items, while tests/exams are called assessments. QTI also introduces an intermediate structuring construct called a section, which is used to group individual items or other simpler sections together. In addition, IMS QTI defines the concept of an object bank. Object banks are used to package a set of individual items or sections to be exchanged between LMSs or question repositories in order to create question pools, so that teachers can reuse these pools when creating new exams [15].
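To make this vocabulary concrete, the sketch below reads a single QTI 2.1 item with LINQ to XML and extracts its identifier and declared correct response. The element names and namespace follow the published QTI 2.1 schema; the file name is a placeholder of ours.

// Reading the identifier and correct response of a QTI 2.1 item (sketch).
using System;
using System.Linq;
using System.Xml.Linq;

class QtiItemReader
{
    static readonly XNamespace Qti = "http://www.imsglobal.org/xsd/imsqti_v2p1";

    static void Main()
    {
        XDocument doc = XDocument.Load("item001.xml");       // placeholder file
        XElement item = doc.Element(Qti + "assessmentItem");  // root of a QTI item

        string id = (string)item.Attribute("identifier");

        // responseDeclaration/correctResponse/value holds the item's key(s).
        var correct = item.Element(Qti + "responseDeclaration")
                          .Element(Qti + "correctResponse")
                          .Elements(Qti + "value")
                          .Select(v => v.Value);

        Console.WriteLine($"{id}: correct = {string.Join(", ", correct)}");
    }
}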
QTI also enables the declarative description of many relevant dynamic features of the assessment process. To each structural element it is possible to attach a set of rules used to process learner responses. These rules can, for instance, trigger the presentation of some feedback. Their formulation can be based on a set of control switches, which are also included in the description of the corresponding structural element or its neighbors [16].
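For instance, QTI defines a standard response-processing template, match_correct, which sets the SCORE outcome to 1 when the candidate's response matches the declared correct response and to 0 otherwise. A minimal sketch of that semantics, with names of our choosing:

// Semantics of the QTI "match_correct" response-processing template (sketch).
using System.Collections.Generic;

static class ResponseProcessing
{
    // SCORE = 1.0 when the response set equals the correct set, else 0.0.
    public static double MatchCorrect(ISet<string> response, ISet<string> correct) =>
        response.SetEquals(correct) ? 1.0 : 0.0;
}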
The present version is QTI 2.1; the final specification release completes the update from 1.x to 2.x and addresses issues raised with the QTI 2.1 draft, with resolution of the remaining issues expected to be included in the final release. QTI's specific goals are to provide a well-documented content format for storing items independent of the authoring tool used to create them, to support their deployment in different types of item banks and delivery systems, to support the deployment of items and item banks from diverse sources into a single system, and to enable systems to report test results consistently [13].
2.3. Related Works
A variety of tools are already available for creating electronic assessment content for both formative and summative purposes. Further discussions revealed that, among some of the new clients, the interest had never truly been in interoperability, but far more in the opportunity the project offered to shape a new assessment authoring tool according to their own needs. They wanted assessment resources that were quick and easy to construct yet still resulted in innovative, rich content for both formative and summative purposes, something they felt their existing tools did not provide [17, 18].
To develop a prototype of assessment items in accordance with the international standard for test-item interoperability, the development of a simple question-form authoring tool is vital. An evaluation system has been established around the item bank management system [19, 20]. A management module was developed so that assessment items based on the international standard can be both imported and exported, allowing items to be shared from one computer to another.
Rendering technology has been developed for creating items for individuals or groups in accordance with learner characteristics, and basic online assessment and evaluation systems have been developed. We surveyed and analyzed current domestic evaluation system products, such as the TEAMS evaluation system, the NEOTEST rating system, and the KSES evaluation system. These evaluation systems do not follow the international QTI standard; each has introduced XML-based data management concepts in whole or in part, and these were analyzed. It is deemed impractical to wait for each developer's own efforts, such as Korean K-QTI standards, to enable the mutual sharing of assessment items between different types of systems. Commercial systems and research efforts abroad take the form of technology consortia [21-23].

Questionmark's assessment system implements the IMS QTI specification but limits the types of questions that can be implemented. The JISC project JELFAD is identified as an example of a learning system that applies IMS QTI. An XML conversion module and an importer are required; to establish a system for the management and sharing of assessment items, systems currently in development were analyzed with this in mind [5].

3. System Design
The system architecture is shown in Figure 1.

Figure 1. The System Architecture


In this paper, the proposed system was designed to have the following components:
- Management tools for assessment items that follow the international standard IMS Question and Test Interoperability (QTI) Version 2.1 Final Specification.
- Rendering technology for generating questions individualized for learners or groups according to learner characteristics.
- A platform-independent module for sharing and distributing a wide range of assessment items suited to the domestic situation.
- An online evaluation system in which multiple users can take online tests matched to their individual ability.
- An assessment management system that gives appropriate feedback based on each learner's evaluation results.
- Management tools for test items conforming to the international standard IMS Question and Test Interoperability (QTI) Version 2.1 Public Draft (revision 2).
First, by analyzing the QTI standard, a prototype of assessment items was developed covering the five most frequently used forms: single choice, multiple choice, matching (match), inline choice, and text entry (a sketch mapping these forms onto QTI interaction elements follows Figure 2). We developed a question authoring tool for producing test items and configured the system to build an item bank for the developed assessment items. The schematic diagram of the assessment management system is shown in the following Figure 2.

Figure 2. The Schematic Diagram of the Assessment Management System
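The five item forms supported by the prototype correspond to interaction elements defined in QTI 2.1. The mapping below is a sketch: the element names come from the specification, while the enum and class names are ours.

// The prototype's five item forms and their QTI 2.1 interaction elements.
using System.Collections.Generic;

enum ItemForm { Choice, MultipleChoice, Match, InlineChoice, TextEntry }

static class QtiInteractions
{
    // Which QTI 2.1 interaction element renders each item form.
    public static readonly IReadOnlyDictionary<ItemForm, string> Element =
        new Dictionary<ItemForm, string>
        {
            [ItemForm.Choice]         = "choiceInteraction",       // maxChoices="1"
            [ItemForm.MultipleChoice] = "choiceInteraction",       // maxChoices="0" (unlimited)
            [ItemForm.Match]          = "matchInteraction",
            [ItemForm.InlineChoice]   = "inlineChoiceInteraction",
            [ItemForm.TextEntry]      = "textEntryInteraction",
        };
}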


Question rendering technology was developed to generate evaluations for individuals or groups in accordance with learner characteristics. The system determines the learner's level and individual characteristics, and provides the ability to create and manage group-level evaluations. When questions are placed, the multimedia-based evaluation module is initiated by loading the necessary rendering technology. Teachers used the tool to check the generated test papers and questions.
In addition, during the development of an online test, one can observe the assessment process as it is generated, and the current state of each learner can be followed through session management, which was developed to record the various reactions of learners. In this way, a platform-independent module was produced for sharing and distributing the variety of assessment items that meet the domestic situation.
For assessment management systems that do not conform to international standards, a feature was developed to extract assessment items into the QTI format. If a question has been produced to meet the QTI specification, it can be inserted into the evaluation system through the developed import module.
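A minimal sketch of that import-side check, assuming items arrive as standalone XML files: an item whose root element is a QTI 2.1 assessmentItem can be inserted directly, while anything else must first pass through the extraction step described above. Class and method names are hypothetical.

// Accept an item file only if its root is a QTI 2.1 assessmentItem (sketch).
using System.Xml.Linq;

static class ItemImporter
{
    static readonly XNamespace Qti = "http://www.imsglobal.org/xsd/imsqti_v2p1";

    public static bool IsQtiConformant(string path)
    {
        XElement root = XDocument.Load(path).Root;
        return root != null && root.Name == Qti + "assessmentItem";
    }
}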

4. System Implementation
4.1. Implementation Environment
The implementation environment of the test bed for the evaluation system based on interoperable assessment items is shown in Table 1.


Table 1. Implementation Environment


        Test System Server               PC Client
H/W     Intel Xeon 3.2 GHz dual CPU      Intel Pentium II
        3072 MB RAM                      128 MB RAM (minimum)
S/W     Windows 2003 Server with IIS     Windows 98 or later
        MS SQL Server 2000               Internet Explorer 6.0 or later
        ASP.NET, C#
        NHibernate, Active Record

4.2. Implementation Results


The question authoring tool provides the user interface for creating and rendering questions. The basic information for each item is entered according to the QTI specification to generate the question. The database structure of the proposed system is shown in Figure 3.

Figure 3. The Database Schema of the Assessment Management System
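As an illustration of the implementation stack in Table 1 (NHibernate with Castle ActiveRecord), the sketch below maps a hypothetical item-bank entity. The table and column names are assumptions of ours; the actual schema is the one shown in Figure 3.

// A hypothetical item-bank row mapped with Castle ActiveRecord (sketch).
using Castle.ActiveRecord;

[ActiveRecord("Items")]
public class ItemRecord : ActiveRecordBase<ItemRecord>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Identifier { get; set; }   // QTI item identifier

    [Property]
    public string ItemForm { get; set; }     // choice, match, textEntry, ...

    [Property(ColumnType = "StringClob")]
    public string QtiXml { get; set; }       // the item's QTI 2.1 source
}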


The screen that renders the questions of a temporary test, after each question has been created, is shown in the following Figure 4. The status of each question can also be viewed by question ID.
The question management tool supports modification and deletion of each item and a variety of rendering management functions. Items are exported in the QTI format for use in other evaluation systems, and questions are generated according to the rules of the authoring tool.

Figure 4. The Question Rendering Screen

5. Conclusion
E-learning evaluation systems that do not reflect learners' personal characteristics do little to improve the level of learning. To solve this problem, and to save time and cost, learners can be assessed through lectures and study sites, their respective schools, or government bodies that recognize test results.
In addition, technologies for managing test items and learners, such as building an item bank database of assessment questions, administering actual online exams, reporting and analyzing results, evaluating learners, and providing feedback, have been used to assess an individual's competence and learning outcomes for entrance examinations, promotion, and similar purposes.
Therefore, in this paper, we developed a prototype of assessment items in accordance with the international standard for test-item interoperability. A question authoring tool was developed to create these items in a simple form. An assessment management system centered on the item bank was built, producing both printed test booklets and online tests. A management module was developed to enable the import and export of items based on the international standard, so that assessment items can be shared between different systems.
If an evaluation system based on interoperable test items is introduced in the educational field and includes all the features of an online evaluation system, it can serve as the basis for a system that conforms to international standards. In addition, from the perspective of question management and classification management, e-learning assessment systems can exchange mutually compatible test papers, or share them, through item interoperability. We expect that international standards compliance and assessment-item management systems will improve online evaluation systems.

References
[1] A. Zeileis, N. Umlauf and F. Leisch, "Flexible Generation of E-Learning Exams in R: Moodle Quizzes, OLAT Assessments, and Beyond", no. 2012-27, (2012).
[2] H.-J. Kim, "U-Learning Scheme: A New Web-based Educational Technology", Journal of the Korea Academia-Industrial Cooperation Society, vol. 12, no. 12, (2011), pp. 5486-5492.
[3] TOIA, http://www.toia.ac.uk/ (last visited April 2013).
[4] Blackboard, http://www.blackboard.com/ (last visited April 2013).
[5] Questionmark, https://www.questionmark.com/ (last visited April 2013).
[6] J. Lee and Y.-J. Lee, "Development and Application of E-Learning Content for Advertising Education", IJAST, vol. 47, (2012), pp. 1-12.
[7] N. Arman, "E-learning Materials Development: Applying and Implementing Software Reuse Principles and Granularity Levels in the Small", IJUNESST, vol. 3, no. 2, (2010), pp. 31-42.
[8] G. Barrera-Sanabria, D. Arenas-Seleey, J. C. Garcia-Ojeda and F. Mendez-Ortiz, "Designing Adaptive Educational Web Sites: General Framework", Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT'04), (2004) September, pp. 973-977.
[9] T. Murase and Y. Isomoto, "e-Learning System to Provide Optimum Questions Based on Item Response Theory", Intelligent Tutoring Systems, (2006), pp. 695-697.
[10] JISC/QCA (Joint Information Systems Committee, Qualifications and Curriculum Authority), "Effective Assessment in a Digital Age", http://www.jisc.ac.uk/publications/programmerelated/2010/digiassess.aspx (last visited April 2013), (2013).
[11] Moodle, "Using Moodle", http://docs.moodle.org/23/en/Main_page (last visited April 2013), (2013).
[12] B. Kanti Das and S. Pal, "A Framework of Intelligent Tutorial System to Incorporate Adaptive Learning and Assess the Relative Performance of Adaptive Learning System over General Classroom Learning", IJMUE, vol. 6, no. 1, (2011), pp. 43-54.
[13] IMS Global Learning Consortium, "IMS Question & Test Interoperability Specification", http://www.imsglobal.org/question/ (last visited April 2013), (2013).
[14] T. Bray, J. Paoli, C. M. Sperberg-McQueen and E. Maler, "Extensible Markup Language (XML) 1.0 (Third Edition)", W3C Recommendation, (2004).
[15] W. Liu, Q. Li and R. W. H. Lau (Eds.), ICWL 2006, LNCS 4181, (2006), pp. 134-145.
[16] V. Gonzalez-Barbone and M. Llamas-Nistal, "eAssessment: Trends in Content Reuse and Standardization", Proceedings of the 37th ASEE/IEEE Frontiers in Education Conference, Milwaukee, WI, (2007) October 10-13.
[17] Y.-B. Park and H.-S. Yang, "XML-based Retrieval System for E-Learning Contents Using Mobile Device PDA", Journal of the Korea Academia-Industrial Cooperation Society, vol. 10, no. 4, (2009), pp. 818-823.
[18] A. Balla, "Designing Pedagogical Learning Environment", IJAST, vol. 6, (2009), pp. 1-14.
[19] J.-S. Sung, "U-Learning Model Design Based on Ubiquitous Environment", IJAST, vol. 13, (2009), pp. 77-88.
[20] H.-W. Park, K.-D. Lee, S. Lee and S.-H. Kim, "The Mobile Quiz System Using SMS", Journal of the Korea Academia-Industrial Cooperation Society, vol. 11, no. 2, (2010), pp. 525-531.
[21] Daulsoft, http://www.daulsoft.com/ (last visited April 2013).
[22] IOSYS, http://www.iosys.co.kr/ (last visited April 2013).
[23] KERIS, http://www.keris.or.kr/ (last visited April 2013).

Author
Youngseok Lee (Ph.D. '09) became a Member (M) of the Korea Academia-Industrial Cooperation Society in 2006. He graduated from Hanyang University and is currently with the HYU Institute for Embedded Software. His research interests include smart learning, intelligent tutoring systems, and web-based learning systems.
