The challenges of addressing usability issues are long-standing, and the evolution of mobile
applications as a class of information systems has posed immense challenges to the field of
Human-Computer Interaction. Terho (2002) added that, in developing mobile applications, a
series of challenges must be met: narrow bandwidth, gaps in network coverage, devices with
small memories and screens that cannot display large amounts of data, and the diversity of
user needs that a single device is expected to satisfy. This trend has been under way since the
1980s, and usability design and testing have since become central concerns in the
development of mobile applications (Gafni, 2008; Shneiderman & Plaisant, 2010). Growing
importance is placed on the usability evaluation of mobile applications, as evidenced by the
Comparative Usability Evaluation (CUE) studies, the most recent of which is CUE-4
(Shneiderman & Plaisant, 2010).
Taking a broader view, Shneiderman and Plaisant (2010) asserted that current concerns about
usability values and the expected characteristics of information systems should extend beyond
the interface to the mobile device as a whole; this requires attention to the availability of
spare batteries and chargers and to signal strength. This holistic view is suggested to
enhance the ease of use of mobile applications.
______________________________
* Akanmu Semiu A. Tel.: +6-017-515-092-5; E-mail address: ayobami.sm@gmail.com.
First International Conference on Behavioural and Social Science Research (ICBSSR 2012)
Universiti Tunku Abdul Rahman, Kampar, Perak, Malaysia, 2 November 2012
In the course of testing, the metrics provided in Figure III above are suggested for use in
measurement and in defining the quality level of the information system, with the possibility
of using different approaches such as expert review, IS managers' assessment and users'
evaluation (Gafni, 2009; Shneiderman & Plaisant, 2010).
In summary, the most important issues in the development of information systems, and of
mobile applications in particular, currently centre on enhancing the mobile interface and its
embedded user-friendly applications so as to meet the usability requirements without
compromising others, especially sustainable battery life and a screen size that allows better
display and navigation.
Nelson (2001) identified usability as one of the standard concepts in designing and
developing information systems, fundamentally defining it as ease of use (as cited in
Alshamari & Mayhew, 2009). This definition forms the basis, but it was extended by the ISO,
which defines usability in terms of the specific goals (effectiveness, efficiency and user
satisfaction) that users of an information system intend to achieve.
Usability testing, otherwise called usability evaluation, comprises standard procedures and
methods used to test and ensure that developed software meets the standard usability goals.
Examples of these evaluation methods are heuristic evaluation, guideline review, consistency
inspection, cognitive walkthrough, metaphors of human thinking (MOT), and formal usability
inspection (Shneiderman & Plaisant, 2010).
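As a concrete illustration (not drawn from the paper, which only names the inspection methods), the output of a heuristic evaluation is typically a list of findings, each tied to a violated heuristic and a severity rating. A minimal sketch, in which the heuristic names and the 0-4 severity scale are illustrative assumptions:

```python
# Hypothetical sketch: tallying findings from a heuristic evaluation.
# The heuristic names and the 0-4 severity scale are assumptions for
# illustration; they are not specified in the paper.
from collections import Counter

findings = [
    # (violated heuristic, severity 0-4)
    ("visibility of system status", 3),
    ("error prevention", 4),
    ("visibility of system status", 2),
]

def summarise(findings, severe_at=3):
    """Count findings per heuristic and pick out the severe ones."""
    per_heuristic = Counter(h for h, _ in findings)
    severe = [f for f in findings if f[1] >= severe_at]
    return per_heuristic, severe
```

Such a summary lets evaluators prioritise: heuristics violated repeatedly, or with high severity, are fixed first.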
As posited by Alshamari and Mayhew (2009), the current issues in usability testing are the
factors affecting the testing and its results. Examples of these issues are usability
measures, the evaluator's role, users, tasks, usability problem reports, and the test
environment. These issues are explained further below.
Usability measures and problem analysis: Before any usability test is conducted, the experts
involved must be aware of the measures and tests to be used, especially so that they accord
with the three major ISO criteria of efficiency, effectiveness and user satisfaction.
Hornbaek (2006) asserted that the difficulty of choosing a method for measuring a system's
usability, the elements to measure, and the appropriateness of the chosen method has been
responsible for recorded weaknesses in measuring usability, and went on to suggest the
dimensions of metrics to be used. It is also noteworthy that a usability problem must be
identified before it can be judged, the clue being that any issue that prevents users from
completing a task is a usability issue (Alshamari & Mayhew, 2009).
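To make the three ISO criteria concrete, they can be operationalised from raw test-session data. The following sketch is illustrative only: the record fields, the 1-5 satisfaction scale, and the particular efficiency formula (goals achieved per unit time) are assumptions, not prescriptions from the paper.

```python
# Hypothetical sketch: computing effectiveness, efficiency and satisfaction
# from raw usability-test sessions. Field names, the 1-5 satisfaction scale
# and the efficiency formula are assumptions for illustration.

sessions = [
    # one record per user: task completed?, time taken (s), rating 1-5
    {"completed": True,  "seconds": 42.0, "satisfaction": 4},
    {"completed": True,  "seconds": 55.0, "satisfaction": 5},
    {"completed": False, "seconds": 90.0, "satisfaction": 2},
]

def usability_measures(sessions):
    n = len(sessions)
    # Effectiveness: proportion of users who completed the task.
    effectiveness = sum(s["completed"] for s in sessions) / n
    # Efficiency (one common operationalisation): goals achieved per
    # second, averaged over all users; failed attempts contribute zero.
    efficiency = sum(1.0 / s["seconds"] for s in sessions if s["completed"]) / n
    # Satisfaction: mean questionnaire rating on the assumed 1-5 scale.
    satisfaction = sum(s["satisfaction"] for s in sessions) / n
    return effectiveness, efficiency, satisfaction
```

Separating the three numbers matters because they can disagree: a system may score high on effectiveness yet low on satisfaction, which is precisely the kind of weakness in measurement that Hornbaek (2006) warns about.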
Evaluator's role: This is a sensitive issue in usability testing, because the experts employed
in an evaluating role tend to differ in their detection of usability problems, and may even
prove inefficient in the problem-detection exercise.
Users: When a users'-assessment approach is used as the usability testing method, the number
of users to involve in the evaluation becomes an issue. Alshamari and Mayhew (2009),
referring to many previous studies, showed variation in the suggested number of users: five,
three and nine have all been proposed, with the emphasis, however, that the choice of users
must depend on their level of system experience.
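The "how many users" debate is often framed with a simple problem-discovery model (not presented in the paper itself): if each user independently uncovers any given usability problem with probability p, then n users are expected to find a 1 - (1 - p)^n share of the problems. A minimal sketch, with p = 0.31 used purely as an assumed illustrative value:

```python
# Hypothetical sketch of the problem-discovery model commonly used to
# justify small test groups. p = 0.31 is an assumed illustrative value,
# not a figure taken from the paper.

def proportion_found(n, p=0.31):
    """Expected share of usability problems found by n independent users."""
    return 1.0 - (1.0 - p) ** n

for n in (3, 5, 9):
    print(n, round(proportion_found(n), 2))
```

The curve flattens quickly, which is why small groups are often argued to suffice; but the model assumes a uniform detection probability, and the source's emphasis on users' system experience is exactly a reason that assumption can fail.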
Tasks: The tasks used in usability testing must be relevant and representative, since the
choice of tasks influences the outcome of the usability evaluation.
Test environment: The inconsistency between the controlled test laboratory and real-life use
is also an issue in usability testing. Cost, and inherent doubts about generalising such
experimental results, are among the reasons why laboratory testing is not supported by some
experts in HCI.
Conclusion
References
Alshamari, M., & Mayhew, P. (2009). Current Issues of Usability Testing. IETE Technical Review,
26, 402-406.
Gafni, R. (2008). Quality of PDA-Based M-Learning Information Systems. Proceedings of the Chais
Conference on Instructional Technologies, Israel. Retrieved from
http://telem-pub.openu.ac.il/users/chais/2009/morning/3_1.pdf
Gafni, R. (2009). Usability Issues in Mobile-Wireless Information Systems. Issues in Informing
Science and Information Technology, 6, 755-769.
Hornbaek, K., & Stage, J. (2006). The Interplay Between Usability Evaluation and User Interface
Design. International Journal of Human-Computer Interaction, 21, 117-123.
Hornbaek, K. (2006). Current practice in measuring usability: Challenges to usability studies and
research. International Journal of Human-Computer Studies. 64, 79- 102.
Ivan, I., & Zamfiroiu, A. (2011). Quality Analysis of Mobile Applications. Informatica Economica.
15(3). Working paper of Academy of Economic Studies, Romania. Retrieved from:
http://revistaie.ase.ro/content/59/12%20-%20Ivan,%20Zamfiroiu.pdf
Parsons, D., & Ryu, H. (2006). A Framework for Assessing the Quality of Mobile Learning. Institute
of Information and Mathematical Sciences, Massey University, Auckland, New Zealand.
Retrieved from: citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108
Popa, M. (2010). Audit Process during Projects for Development of New Mobile IT Applications.
Informatica Economica, 14(3). Working paper of Academy of Economic Studies, Romania.
Retrieved from: http://revistaie.ase.ro/content/55/1003%20-%20Marius%20Popa.pdf
Shneiderman, B., & Plaisant, C. (2010). Designing the User Interface: Strategies for Effective
Human-Computer Interaction (5th ed.). Pearson.
Terho, M. (2002). Mobile web services and software quality. Proceedings of ESCQ, Berlin:
Springer-Verlag, 2-6.
Wac, K., Hilario, M., Beijnum, B.J., Bults, R., & Konstantas, D. (2002). Quality of Service
Predictions Service: QoS Support for Proactive Mobile Applications and Services. Retrieved
from: http://asg.unige.ch/publications/TR09/02QoS1.pdf
Wentzel, P., Lammeren, R., Molendijk, M., Bruin, S., & Wagtendonk, A. (2005). Using Mobile
Technology to Enhance Students' Educational Experiences. A working paper of Educause
Center for Applied Research, Colorado. Retrieved from:
http://net.educause.edu/ir/library/pdf/ers0502/cs/ecs0502.pdf