
A Comparative Analysis of Web-Based Testing and Evaluation Systems

Elizabeth J. Gibson (email: ejgibson@unity.ncsu.edu), Patrick W. Brewer, Ajay Dholakia, Mladen A. Vouk, Donald L. Bitzer. Dept. of Computer Science, North Carolina State University, Raleigh, NC 27695-8206.

Abstract
One of the most important characteristics of an advanced learning environment is its ability to evaluate the knowledge acquisition and retention rate of its user, and to adapt to student needs. Therefore, it is not surprising that an increasing number of student knowledge testing and evaluation systems that can be attached to World Wide Web courseware and tutorials are becoming available. In this paper, we discuss some basic features that such a system needs to have, and in that light evaluate a selection of the Web-based testing and evaluation systems.

Keywords
WWW, Education, Computer-Based Teaching, Computer-Based Training, Computer-Based Learning, Testing, Evaluation

1. Introduction
The World Wide Web [WWW] technology holds a lot of promise as an educational tool. However, the current generation of WWW tools and servers was designed for browsing and information retrieval, not as components of an active learning system. Therefore, they currently lack a number of features that an advanced educational environment requires. Despite that, an ever increasing amount of "courseware" is appearing on the Web [Campbell95]. Web-based educational systems, like other computer-based education (CBE) systems, must provide certain basic instructional functionalities [Overbaugh94]. There are at least nine events that such a system needs to support [Gagne79], [Aronson83]. Three of these events are:

1. evaluation of the student's understanding of each concept,

2. provision to the student of feedback concerning his/her performance during the evaluation, and

3. assessment of the student's complete understanding of each concept.

Presenting the student with questions is the only way to ensure that learning is occurring. The questions within a lesson provide the instructor with feedback about how a student, or a group of students, is performing, and with information about the effectiveness of the lesson itself [Steinberg91]. Courseware on the Web must follow well-established theories of learning, and any lesson on the Web should incorporate some form of knowledge testing. Unfortunately, a facility for easy, active tracking and evaluation of user knowledge, and for interactive feedback, is precisely what is lacking in the Web interfaces that are readily available today. This has motivated a number of researchers to discuss and develop systems that support the development of Web-based courseware and/or testing packages, as opposed to Web pages. Invariably, such systems support the inclusion of knowledge testing in the courseware.

In developing the criteria for the testing systems, we have drawn upon our experiences with NovaNET. NovaNET [NovaNET] is a well-established and successful low-overhead, high-yield, network-based multimedia educational system that originates from the University of Illinois at Urbana-Champaign (UIUC) and serves thousands of users on a daily basis. It is not Web-based, but does allow Web access. The principal attraction of the system is that it was designed from the start as a network-based educational (NBE) system. Therefore, it supports many different forms of testing, has very sophisticated knowledge evaluation capabilities, and excellent security. NovaNET has one of the most complete NBE profiles we are aware of, and its developers resolved years ago many of the problems that Web educators are just beginning to face [Steinberg91].
In this paper we limit our attention to questions related to knowledge testing and response tracking. We do not discuss how the collected information is judged to assess the actual knowledge a student has gained, nor do we deal with the issue of how to adapt the system output and responses based on that information. Our prime motivation for this work is the desire to support development of the testing subsystem in an advanced learning environment called "Lesson Objects on Parallel Systems" (LOOPS). LOOPS supports both Web-based and non-Web-based educational technology. In Section 2, we briefly describe the Web-based testing and evaluation systems we have investigated. In Section 3, we discuss the criteria we used in our comparative analysis. The results of the analysis are given in Section 4. Summary and conclusions are in Section 5.

2. Testing Systems
The Web-based testing and evaluation systems that we have examined are Mklesson, Eval, Tutorial Gateway, and the Open Learning Agency of Australia's (OLAA) system. While none of the systems in the selection is perfect, each has some unique and interesting features.

Mklesson: Mklesson [Mklesson] is a public domain tutorial-generating program developed by David A. Wheeler at the Institute for Defense Analyses. Mklesson takes a file in the so-called ".les" format and produces a tutorial. A ".les" file consists of text, standard html control statements, and extended html statements. The Mklesson program uses these html extensions to break the monolithic ".les" file into a group of html files. After Mklesson processes the file, the user has a tutorial that consists of a number of lessons, each of which consists of a number of sections. Mklesson allows one question to be attached at the end of each section.

Eval: Eval [Eval] is a test-taking facility developed at Calvin College by Joel C. Adams and Aaron A. Armstrong. Eval allows an instructor to develop a question database, and then provides students with either a predetermined test or a randomly generated test whose questions come from that database. The addition and modification of questions in the instructor's database is performed on-line. Test security is an additional functionality provided by the Eval system. Currently, Eval can be viewed by following the directions on the Eval homepage. Although Eval can be copied, it is not packaged for easy transfer and installation.

Tutorial Gateway: The Tutorial Gateway [TutorialGateway] is a public domain package that aids in the development of tutorial-style questions. The package was developed at Carleton University by Neal Holtz. To use the Tutorial Gateway, one creates html files containing standard html plus a few extensions to it. Using the Tutorial Gateway, the instructor can develop questions to present to students via the World Wide Web using a CGI-compliant server, such as NCSA's Version 1.4 server [Servers] [Byrnes95].

Open Learning Agency of Australia: OLAA [OLAA] is a package built on top of the Tutorial Gateway. OLAA allows an instructor to develop a question database, and then provides students with a test, drawn from that database, that meets the instructor's preset parameters. Questions can be added to or changed in the database on-line or as a batch process. Two of the parameters an instructor can preset are the number of questions in the test and the criteria (topic, section, keyword, etc.) that the questions on the test must meet. Currently, OLAA is not in the public domain, but its public release is planned.

3. Criteria for Comparison


There are six principal issues that need to be addressed: testing, tracking, grading, tutorial building, implementation issues, and security issues. Each is discussed in more detail below.

3.1 Testing

There is no question that testing is an extremely important part of a CBE environment. If the instructor keeps students notified of their progress and mastery of the lesson material, the students are more likely to continue learning [Overbaugh94]. When comparing testing capabilities, we were concerned with the types of questions supported, the feedback, help and hints, retries, and the use of multimedia.

3.1.1 Types of Questions
Types of questions refers to the categories of questions that the system supports. Some examples are multiple-choice, true-false, simple numeric, and simulations. Steinberg [Steinberg91] notes that the types of questions supported by a particular system are "guided by the ease of entering a response and the ability of the computer to judge the response adequately" (p. 107).

3.1.2 Feedback
Feedback is the information that the instructor provides to the student once the student has entered his/her answer to the given question. Sometimes this information is as simple as "yes, that is the correct answer"; at other times it may be as complex as presenting the topic in a whole new way in order to help the student understand the material. According to Aronson and Briggs [Aronson83], feedback is a vital and indispensable instructional activity.

3.1.3 Help and Hints
Test help provides the student with the directions, and other necessary information, on how to complete a test. Test hints, on the other hand, are the automatic content-related aid the student gets in answering the test questions.

3.1.4 Retries
Retries refers to a computer-based lesson allowing a student multiple attempts at a test question. According to Clariana [Clariana93], allowing a student to retry a given question can be beneficial, depending on the student's prior knowledge. Clariana states that multiple tries are most effective with students having high prior knowledge, while for students with low prior knowledge a single attempt with correct-answer feedback is most effective. Hence, we are interested in whether the system automatically provides retries, or has some mechanism for providing retries only when deemed necessary.

3.1.5 Use of Multimedia
Interactive multimedia-based testing can be a very powerful approach to knowledge evaluation. Hence, it is as important to provide an instructor with the capability to ask questions using video clips, sound clips, or some other medium as it is to provide the ability to pose typical text-based questions. The main drawback to the use of multimedia in test items is the diversity in the hardware, software and network characteristics of the platforms on which the students may be taking the test. If sound is used, the local host must be capable of replaying it. If video is used, quality-of-service guarantees must provide for adequate throughput, delay and jitter control. Fallback schemes must be built into the test (e.g., if the test platform has an inadequate sound profile, the system must revert to a closed-caption mode), and so on. All this can be quite expensive, and portability and interoperability issues can be considerable.
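The interplay of feedback, hints, and retries discussed above can be sketched as a simple judging loop. This is purely our own illustration, not the design of any of the surveyed systems; the class name, field names, and the fixed retry limit are all invented for the example.

```python
# Illustrative sketch only: a minimal judging loop combining the
# feedback, hint, and retry ideas discussed in Section 3.1. The
# names and the retry policy are our own inventions.

class Question:
    def __init__(self, prompt, correct, feedback, hints, max_tries=3):
        self.prompt = prompt          # text shown to the student
        self.correct = correct        # the accepted answer
        self.feedback = feedback      # answer-specific feedback messages
        self.hints = hints            # incremental hints, shown on request
        self.max_tries = max_tries    # retry limit (cf. Clariana's caveat)

def judge(question, answers):
    """Judge a sequence of student answers; return (passed, transcript)."""
    transcript = []
    for attempt, answer in enumerate(answers, start=1):
        if answer == question.correct:
            transcript.append("correct")
            return True, transcript
        # wrong answer: give answer-specific feedback if the author wrote any
        transcript.append(question.feedback.get(answer, "incorrect"))
        if attempt >= question.max_tries:
            break
    return False, transcript

q = Question("2 + 2 = ?", "4",
             feedback={"5": "Too high; re-check your addition."},
             hints=["It is an even number."])
passed, log = judge(q, ["5", "4"])   # first try wrong, second correct
```

A real system would also have to decide, per student, whether retries are enabled at all, which is exactly the policy question raised in 3.1.4.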

3.2 Tracking
It is essential that a testing and evaluation system provide tracking capabilities. Tracking entails remembering where the student has traveled within the lesson and recording the student's performance on test questions and answers. Tracking needs to be performed for two reasons. First, it allows the instructor (student) to monitor each student's (his/her own) progress and performance [Campbell95]. Second, by tracking the student's progress, the lesson can provide the student with dynamic guidance on how best to proceed through the lesson. When a lesson has dynamic guidance capabilities, the chance of reaching a wider range of learners is greater [Overbaugh94].

3.3 Grading Capabilities


In addition to the computation of student grades based on some criteria, the grades must be given as feedback. There are three levels of grading feedback: to the course coordinator, to the instructor, and to the student. Each of these three types of users requires a somewhat different amount and class of information. For instance, while a student should be given her/his own individual and cumulative grades by problem, and for the test as a whole, the privacy of other students' grades must be strictly protected. At the same time, a student should be told where she/he stands with respect to other students (e.g., with respect to the class average) without revealing the individual grades of other students. An excellent solution to this problem is found in the NovaNET "gradebook" program, where students are shown a graph of the overall grade distribution for the class, and their individual position on that graph [NovaNET]. Furthermore, an instructor should have access to all student grades for his/her class, as well as to a privacy-protecting comparison with other classes. Finally, the course coordinator should have free access to all grades.
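A privacy-preserving distribution of the kind just described can be sketched as follows. This is a hypothetical illustration in the spirit of the NovaNET gradebook, not its actual implementation; the bin width and text rendering are our own choices.

```python
# Sketch of privacy-preserving grade feedback: the student sees the
# class distribution and his/her own bin, but no other individual
# grades. Bin width and output format are illustrative assumptions.

def grade_histogram(scores, own_score, bin_width=10):
    """Return (bins, own_bin): counts per score bin plus the student's bin."""
    n_bins = 100 // bin_width + 1          # covers scores 0-100 inclusive
    bins = [0] * n_bins
    for s in scores:
        bins[min(s // bin_width, n_bins - 1)] += 1
    return bins, min(own_score // bin_width, n_bins - 1)

def render(bins, own_bin, bin_width=10):
    """Text rendering: one row per bin, with the student's own bin marked."""
    rows = []
    for i, count in enumerate(bins):
        marker = " <- you" if i == own_bin else ""
        rows.append(f"{i * bin_width:3d}+: {'#' * count}{marker}")
    return "\n".join(rows)

class_scores = [55, 62, 68, 71, 74, 78, 81, 85, 92]
bins, mine = grade_histogram(class_scores, own_score=78)
chart = render(bins, mine)
```

Note that the student's score contributes only to an aggregate count, so no individual grade of another student can be recovered from the chart.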

3.4 Tutorial Building


Tutorial building refers to whether the testing and evaluation system supports automatic tutorial inclusion.

3.5 Implementation Issues


We will limit our attention to two implementation-related issues: ease of use and platform issues.

3.5.1 Ease of Use
Ease of use focuses on how easy it is for authors of the courseware and instructors to learn how to use the testing system to construct courses. It is assumed that an instructor using the WWW as a computer-based education tool already knows the Hypertext Markup Language (html) used in Web-based documents.

3.5.2 Platform Issues
There are many platform issues. They include server functionality, availability of viewers, and the ability of the hardware to support video, sound, adequate networking, etc.

3.6 Security
As any instructor knows, providing students with an on-line test involves various security considerations. These include security for the test material, security for the student tracking information, security for the html source code, and security for ensuring that only registered students take the test, as well as the grading-related security issues mentioned earlier.

4. Results
We used the six main criteria (testing, tracking, grading, tutorial building, implementation, and security) to analyze each of the Web-based testing and evaluation systems. In this section, we discuss the results of this analysis.

4.1 Mklesson

Testing: Mklesson supports one multiple-choice question for each section. These multiple-choice questions can be made into true-false questions by making the answer choices true and false. If a lesson developer wants more than one question per section, she/he can create sections which contain only questions [Wheeler95]. From the student's perspective, this will appear as a section containing different questions. Mklesson allows a lesson developer to provide appropriate feedback for each question. Mklesson also allows a student to choose between retrying a question and proceeding to the next section. However, Mklesson does not support help or hints during its testing. Mklesson does support the use of multimedia in the questions and answers.

Tracking: Mklesson does not support any form of tracking.

Grading: Mklesson does not have any form of grading capabilities.

Tutorial Building: Quite possibly, Mklesson's greatest asset is its capability to automatically build tutorials. A lesson developer uses Mklesson's template html file (with a few easy-to-learn extensions to standard html) to fill in the various parts of the tutorial. The lesson developer then runs Mklesson on this file, and the complete tutorial is generated as a set of standard html files.

Implementation: Mklesson is relatively easy to use. The lesson developer has to learn a few simple extensions to standard html, and to master the process of using the Mklesson program itself (Mklesson has a thorough User Guide). Mklesson requires Perl.

Security: Mklesson does not provide security for the lesson material itself, and it does not ensure that only appropriate students use the lesson. However, Mklesson does provide security for the html source code by masking the URLs that contain answers. Since Mklesson does not support any form of tracking, security for student tracking information is not an issue.
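The core of what a generator like Mklesson does, splitting one monolithic lesson file into a set of per-section html files, can be sketched roughly as below. The marker syntax "<<section Title>>" is invented for this illustration; the real ".les" directives differ (see the Mklesson User Guide).

```python
# Rough sketch of a Mklesson-style generator: split one monolithic
# lesson file into per-section html files. The "<<section ...>>"
# marker syntax is a hypothetical stand-in for the real directives.
import re

def split_lesson(text):
    """Split lesson text at section markers; return {filename: html}."""
    pages = {}
    # re.split with a capturing group interleaves titles and bodies:
    # parts = [preamble, title1, body1, title2, body2, ...]
    parts = re.split(r"<<section (.+?)>>\n", text)
    for i in range(1, len(parts), 2):
        title, body = parts[i], parts[i + 1]
        fname = title.lower().replace(" ", "_") + ".html"
        pages[fname] = (f"<html><head><title>{title}</title></head>\n"
                        f"<body>{body}</body></html>")
    return pages

lesson = "<<section Intro>>\n<p>Welcome.</p>\n<<section Quiz>>\n<p>Q1?</p>\n"
pages = split_lesson(lesson)   # produces intro.html and quiz.html
```

In the real program a question directive at the end of a section would additionally emit the answer-handling pages; that machinery is omitted here.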

4.2 Eval
Testing: Eval supports multiple-choice and true-false questions. In addition, Eval supports feedback by allowing the test developer to supply whatever feedback is deemed necessary for each selected answer. However, Eval does not support test help or test hints. Eval also does not support retries: once a student has submitted an answer, Eval provides the student with feedback on the correctness of the answer, the points awarded for the chosen answer, and the correct answer. Eval does support the incorporation of multimedia into the test questions and answers.

Tracking: Eval does not support any form of tracking.

Grading: Eval's grading capabilities include individual and group information. After completing a test, each student is presented with the total points awarded, the maximum possible points, and the percentage of correct points. After all the students have completed the test, the instructor can view the class average, each individual student's name and score, the maximum and minimum percentages, and a histogram of the students' percentages.

Tutorial Building: Eval is not an automatic tutorial building system. As a result, an instructor wanting to use Eval as a testing facility for a Web-based lesson will have to manually link the students between the Eval testing system and the learning materials.

Implementation: Eval appears to be extremely easy to use. All of the functionality of Eval is provided automatically through the use of Web-based forms. Since Eval uses forms to enter information into its question database, a test developer using Eval needs a computer that has a forms-capable WWW viewer and the Eval program itself. In addition, since Eval uses forms for entering test answers, a student using Eval needs a computer with a forms-compatible WWW viewer.

Security: Eval automatically provides security for the question database and student id checking. The first measure is a password-protected test question database: after setting the password for the question database, the instructor has to enter that password every time he/she modifies the database. The second security measure is an id checking capability, which can be used to require the students who are taking the test to enter their ids upon starting the test. Eval also provides additional security by ensuring that the student cannot discover the correct answer through the html source code. Eval's final security measure relates to the collected student information: Eval uses passwords to protect each student's name and score from being read by other students.

4.3 Tutorial Gateway

Testing: The Tutorial Gateway supports multiple-choice, true-false, single numeric, and single algebraic expression questions. Feedback and hints are also supported: the test developer simply provides whatever feedback he/she wants the Tutorial Gateway to display to the student, based on the student's submitted answer, and supplies the hints that will be incrementally displayed at the student's request. However, help is not supported by the Tutorial Gateway. Retries, and the use of multimedia within questions and answers, are both supported.

Tracking: The Tutorial Gateway does not support any form of tracking.

Grading: The Tutorial Gateway does not support any form of grading.

Tutorial Building: Despite its name, the Tutorial Gateway is not an automatic tutorial building system. As a result, a test developer utilizing the Tutorial Gateway will have to build the lesson and questions manually by creating the needed html files.

Implementation: The Tutorial Gateway appears to be relatively easy to use. The test developer only has to learn some simple extensions to standard html. However, the main implementation issue is the platform on which the system is executed. The Tutorial Gateway requires a CGI-compliant http server, such as NCSA's Version 1.4 server, and a computer with a WWW viewer [Byrnes95]. The server can be located on a machine different from either the student's or the developer's. In addition, the test developer is required to have a copy of the Tutorial Gateway system itself on his/her computer.

Security: By not displaying the URL for the correct answer, the Tutorial Gateway provides security for the html source code. However, the Tutorial Gateway does not provide any security measures for the test items themselves, nor for checking the identification of the student.
Since tracking is not supported by the Tutorial Gateway, security for student tracking information is not an issue.
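Judging a "single numeric" answer of the kind the Tutorial Gateway accepts can be sketched as below. The relative-tolerance rule and its 1% default are our own assumptions for illustration; the actual judging rules are defined by the package itself.

```python
# Sketch of single-numeric answer judging. The tolerance policy is an
# illustrative assumption, not the Tutorial Gateway's actual rule.

def judge_numeric(answer_text, correct, rel_tol=0.01):
    """Accept the answer if it parses as a number within rel_tol of correct."""
    try:
        value = float(answer_text)
    except ValueError:
        return False                      # not a number at all
    if correct == 0:
        return abs(value) <= rel_tol      # avoid dividing by zero
    return abs(value - correct) / abs(correct) <= rel_tol

# e.g. a question whose expected answer is 9.81
ok = judge_numeric("9.8", 9.81)       # within 1% of the expected value
too_far = judge_numeric("9.1", 9.81)  # outside the tolerance
garbage = judge_numeric("nine", 9.81) # rejected: unparsable
```

Judging a "single algebraic expression" answer is harder, since symbolically different but equivalent expressions must be accepted; one common approach is to compare the expressions numerically at several sample points.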

4.4 Open Learning Agency of Australia


Testing: Since the OLAA testing facility is built on top of the Tutorial Gateway, it has the same testing capabilities as the Tutorial Gateway. OLAA supports multiple-choice, true-false, single numeric, and single algebraic expression questions. In addition, OLAA supports appropriate feedback, incremental hints, retries, and the use of multimedia within questions and answers. However, help is not supported by the OLAA testing facility.

Tracking: The OLAA system does not support any form of tracking.

Grading: The grading support provided by the OLAA testing system is related to the given question. For each question given by the OLAA testing system, an instructor is provided with the number of students who attempted the question, and the distribution of the answers.

Tutorial Building: While OLAA is not an automatic tutorial building system, it does provide a mechanism for a smooth transition between learning materials and testing materials. By passing the current URL when a student enters a test, the student can easily return to the same learning materials at the completion of the test.

Implementation: The OLAA system appears to be very easy to use, because all of the test development is completely automated. A test developer can either use the on-line, forms-based Interactive Question Editor, which steps one through the complete development of a test question, or use the Question Loader to add questions to the database in batch format. Since the OLAA system uses forms for entering information into its question database, a test developer using the OLAA system needs a computer that has the OLAA system itself and a forms-capable WWW viewer installed. A student using the OLAA system needs a computer with a WWW viewer. When using the OLAA system, the test itself must reside on a computer with a CGI-compliant server, such as NCSA's Version 1.4 server [Byrnes95].

Security: By utilizing cgi-bin scripts in the development of test questions, OLAA does not allow access to the correct answer in the html source code, so the OLAA system ensures security in that area. However, neither security for the test items, nor the checking of student ids, is supported. Since tracking is not supported by the OLAA system, no security measures are required for student tracking information.

5. Summary
We have discussed criteria for the evaluation of a knowledge assessment and testing system. The criteria were developed using NovaNET as a benchmark. There are six main areas of concern: testing functionality, tracking capabilities, grading capabilities, automatic tutorial building capabilities, implementation issues, and security issues. We have used these criteria to examine four Web-based testing and evaluation systems: Mklesson, Eval, Tutorial Gateway, and OLAA. These systems are continually being updated and improved. Currently, we are developing a testing and evaluation system that meets all the criteria discussed in this paper.

Acknowledgments
This work was supported in part through the NSF MRA award ACS-9418960. We would like to thank Larry Lee and Ken Flurchick of MCNC-North Carolina Supercomputing Center for their participation in the discussions related to this work.

References
[Aronson83] D. T. Aronson and L. J. Briggs, "Contributions of Gagne and Briggs to a Prescriptive Model of Instruction", Instructional-Design Theories and Models: An Overview of Their Current Status, C. M. Reigeluth (ed.), Hillsdale, NJ: Lawrence Erlbaum Associates, 1983, p. 75-100.

[Byrnes95] R. Byrnes, personal communication with E. J. Gibson, July 1995.

[Campbell95] J. K. Campbell, S. Hurley, S. B. Jones and N. M. Stephens, "Constructing educational courseware using NCSA Mosaic and the World-Wide Web", Proceedings of the Third International World-Wide Web Conference, Darmstadt, Germany, April 10-14, 1995 (http://www.igd.fhg.de/www/www95/papers/52/www3.html).

[Clariana93] R. B. Clariana, "A Review of Multiple-Try Feedback in Traditional and Computer-Based Instruction", Journal of Computer-Based Instruction, Vol. 20, No. 3, Summer 1993, p. 67-74.

[Eval] J. C. Adams and A. A. Armstrong, Eval: Taking Tests via the Internet, journal paper in progress. More information can be found on Eval's homepage, "Eval: Educational verification and learning" (http://usurp.calvin.edu/html/Eval.home.html).

[Gagne79] R. M. Gagne and L. J. Briggs, Principles of Instructional Design, Second Edition, New York: Holt, Rinehart and Winston, 1979.

[Mklesson] The Mklesson User Guide can be found at http://lglwww.epfl.ch/Ada/Tutorials/Lovelace/userg.html.

[NovaNET] NovaNET was developed at the University of Illinois and is now owned and operated by University Communications Inc. More information can be found at NovaNET's Web page (http://www.nn.com/).

[OLAA] R. Byrnes, R. Debreceny and P. Gilmour, "The Development of a Multiple-Choice and True-False Testing Environment on the Web", Proceedings of the First Australian World Wide Web Conference, Ballina, Australia, April 30 - May 2, 1995 (http://www.scu.edu.au/ausweb95/papers/education3/byrnes/).

[Overbaugh94] R. C. Overbaugh, "Research-Based Guidelines for Computer-Based Instruction Development", Journal of Research on Computing in Education, Vol. 27, No. 1, Fall 1994, p. 29-47.

[Servers] Information about CGI-compliant servers can be found at http://hoohoo.ncsa.uiuc.edu/cgi/overview.html. Information about NCSA's Version 1.4 server can be found at http://hoohoo.ncsa.uiuc.edu/docs/Overview.html.

[Steinberg91] E. R. Steinberg, Computer-Assisted Instruction: A Synthesis of Theory, Practice, and Technology, Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[TutorialGateway] Information about the Tutorial Gateway can be found at http://www.civeng.carleton.ca/~nholtz/tut/doc/doc.html.

[Wheeler95] D. Wheeler, personal communication with E. J. Gibson, May-June 1995.

[WWW] Information about the World Wide Web can be found at http://www.w3.org/hypertext/WWW.TheProject.html.
