
WMU Journal of Maritime Affairs

https://doi.org/10.1007/s13437-018-0157-0
IAMU SECTION ARTICLE

Maritime simulator training across Europe: a comparative study

Salman Nazir 1 & Sophie Jungefeldt 1 & Amit Sharma 1

Received: 29 May 2018 / Accepted: 11 October 2018

© World Maritime University 2018

Abstract
Simulator-based training has received considerable attention in recent years for the
training of operators in the maritime domain and is employed by the majority of nautical
training institutes. The IMO has published the model course 6.10 Train the simulator
trainer and assessor, which acts as a guideline and aims to promote uniformity in
simulator-based training for maritime operations. Nevertheless, the model course
should not be "implemented blindly," since one has to acknowledge the institution's
own resources and apply them in an appropriate way. Consequently, simulator training
can vary between institutions. This study aims to discover how such variations exist in
European full mission simulator training institutions. Semi-structured interviews were
conducted to clarify each participating institution's simulator training design. The
interviews comprised relevant performance indicators—e.g., identical elements, training
time, teaching of general principles, students' needs, feedback, training needs analysis,
assessment, and instructors' qualifications—selected after a detailed literature review.
The findings present variations and similarities observed in the European simulator
training facilities in relation to the designated performance indicators. The study
demonstrated that although some simulator training in Europe appears to be performed
uniformly, due to comparable proceedings and understanding of the model course 6.10,
the implementation of these aspects on the basis of interpretation and available
infrastructure can create dissimilarities.

Keywords EU maritime institutions . Maritime education and training . Performance indicators . Comparative study

Salman Nazir and Sophie Jungefeldt have made equal contributions to this manuscript and are co-first authors.

* Salman Nazir
Salman.Nazir@usn.no

1 Training and Assessment Research Group (TARG), Department of Maritime Operations, University of South-Eastern Norway, Postboks 4, 3199 Borre, Norway

1 Introduction

The main goal of education is understood as to prepare students for the practical
application of the knowledge and skills they gain during training in the real world and
their future work (Liu et al. 2009). In the context of maritime education, this implies
preparing the trainees for their career as a bridge officer or engine officer in charge of
operations on a merchant vessel (Sharma et al. 2019). For the operators, training in such
high-risk domains as maritime, simulator training is recognized as the key element to the
education and the most effective way of training them and introducing them to their
future line of work and responsibilities (Moroney and Lilienthal 2009).
Simulator training is widely used by maritime training institutions across the
European Union (EU) for nautical training programs. To meet maritime educational
requirements globally, the International Maritime Organization (IMO) ensures corresponding
training requirements for all maritime educational institutions through the Standards
of Training, Certification and Watchkeeping (STCW) regulations (IMO 2011). The
requirements regarding obligatory simulator training in the STCW code only concern
Automatic Radar Plotting Aid (ARPA) and Electronic Chart Display and Information
System (ECDIS) training. Other training requirements in the STCW code are suggested
to be "and/or" part of the simulator training, though it is not obligatory for them to take
place in a simulator (IMO 2011). When interpreting the STCW requirements, each
learning institution may develop its own understanding of how to fulfill such requirements
(Crossan et al. 1999). Consequently, each institution may have its own framework
for how to comply with the requirements, thus causing variability in the nautical
programs offered in different parts of the world (Nazir et al. 2015).
The present study aims to compare the simulator training used in different nautical
bachelor education programs in various countries within the EU. A comparative study of
nautical bachelor educations in different countries could help to find deviations
between the institutions and subsequently uncover how different institutions'
simulator training differs. The mandatory training requirements laid down in the
nautical bachelor programs are defined in the STCW code, tables A-II/1 and A-II/2.
For the purpose of this study, simulator instructors from six nautical institutions
within the EU were interviewed. The study aims not to evaluate the simulator training in
relation to specific STCW requirements, but instead to measure variations in the
simulator training based on performance indicators derived from a relevant literature
review (Nazir and Jungefeldt 2017).
In the subsequent sections, we present a review of the simulator training literature,
followed by a list of selected performance indicators. Next, we describe the method of
collecting data from different maritime training institutions. The results and discussion
section highlights the observed differences across these institutions for the identified
performance indicators. We conclude by discussing the implications of these variations
and directions for further research.

2 Literature review

A thorough literature review started with the simulator training guide published by the
IMO: IMO model course 6.10 Train the simulator trainer and assessor (IMO 2012).
Afterwards, keywords were established, such as transfer of knowledge, training
design, transfer of training, performance indicators, and simulator-based training.
These were searched separately and in combination on different research platforms.
For instance, Science Direct© and Google Scholar© were utilized to search for
scientific articles, while Oria© was utilized to search for relevant literature on the
library platform. The aim was to find additional theories to supplement the IMO model
course 6.10 and thus gain a deeper understanding of simulator training, to be able to
distinguish relevant performance indicators.
Two prominent sources for relevant theories were selected after consulting the IMO
model course 6.10 thoroughly: (1) "Transfer of training: a review and directions for
future research" by Baldwin and Ford (1988) and (2) The handbook of simulator-based
training by Farmer et al. (1999). These sources were selected due to their prominent use
in pedagogy and simulator training across a number of institutions.

1. "Transfer of training: a review and directions for future research" by Baldwin and
Ford (1988) was selected since it was found to be the originating theory regarding
transfer of training. However, newer emerging theories regarding transfer of training
have also been utilized.

2. "The handbook of simulator-based training" by Farmer et al. (1999) was mentioned
in the IMO model course 6.10 (IMO 2012). Additionally, during the literature review,
this handbook was found to incorporate and advance most of the topics mentioned in
the IMO model course 6.10.

Since the IMO model course 6.10 is merely a guideline (IMO 2012), the theories from
Farmer et al. (1999) and Baldwin and Ford (1988) aim to act as a supplement to
increase the theoretical base and elaborate on intricate aspects of simulator training. We
now elaborate on the chosen literature in the following sub-sections.

2.1 IMO model course 6.10

The IMO has published the model course 6.10 Train the simulator trainer and assessor.
The model course is merely a guideline and aims to promote uniformity in maritime
simulator training across the globe. However, simulator training might vary between
institutions depending on several factors, for instance, cultural backgrounds and
interpretations of this model. The model course should therefore not be "implemented
blindly," since one has to acknowledge the institution's own resources and how to
apply them in an appropriate way. Consequently, this could lead to a variation in
simulator training programs offered by different countries.
The objective of this course is to develop an ideal instructor who should possess
training awareness, training skills, and managerial skills and aptitudes, which are the
selected components presented in Fig. 1. The main purpose is to emphasize transfer of
knowledge from the instructor to the students. "The future maritime instructors should
be able to contribute in formulating a training policy both at the macro, as well as the
micro level" (IMO 2012, p. 4). Hence, the instructor and how s/he designs the simulator
training can be seen as one of the most important parts of the simulator training.

2.1.1 Training awareness

Identify training needs According to the model course, training awareness involves the
identification of the students' needs. The model course describes the importance of
collecting as much information as possible regarding the students, mostly concerning
their previous knowledge and beliefs.

Understand the end result Training awareness concerns the general understanding of
the outcome of training.

"The main focus of the simulation exercise is not the acquisition of technical
knowledge but the ability to apply it in real time context" (IMO 2012, p. 89).

Objectives and real-world scenarios Another important awareness mentioned in the
model course is the ability to provide scenarios with clear objectives. The objectives
are meant to help the students understand the benefits and importance of the simulator
training. Furthermore, the instructor must choose a proper duration for the sessions to
accomplish the specific training. Each exercise is suggested to present concise
objectives, given to the students in advance. Additionally, "documentation of what the
trainees are to be trained to do" should be provided (IMO 2012, p. 66). At the same
time, the instructor must make sure to introduce the students to a scenario which is as
close to reality as possible.

Fig. 1 IMO model course 6.10; selected components


2.1.2 Training skills

Psychology of learning and teaching techniques The instructor must acquire different
training skills, such as different techniques for transferring their own knowledge to
the students. One example concerns the instructor's intervention in the simulator. "The
trainer must know when to intervene and when to leave the students alone, so as to
encourage as much as possible experiential learning for the students" (IMO 2012,
p. 52). This argument can be supported by the theory of Dewey (2004), who argued
that people "learn by doing," and the importance of acknowledging this in education.
Furthermore, an important training skill mentioned is the use of feedback.
The model course clearly suggests the use of briefing and debriefing during the
simulator training. A simulator session is suggested to include four main steps:
briefing, planning, simulator exercise, and debriefing. The briefing should introduce
the students to the exercise to be executed, the objectives, and how they are covered in
the exercise. After the exercise, the students meet with the instructor for a debriefing
session, preferably all together in a classroom, where the instructor has the possibility
to play back the session. The debriefing allows the students to receive feedback about
their performance, to review their own actions, and to learn from each other.
Additionally, an instructor should possess pedagogical skills to monitor and guide
the students' understanding of the simulator training (Bransford et al. 2000).
Subsequently, an instructor should have a "background or experience in teaching or
instructional techniques" (IMO 2012, p. 54).

Course and scenario design The instructor must relate the simulator training to a relevant
syllabus and create relevant training scenarios. The simulator course must include
several scenarios of different complexity, where the first scenarios introduce the
students to the most basic activities. "Step by step they should proceed towards more
complex activities" (IMO 2012, p. 65).

Assessment Simulator training can, according to the model course, be referred to as
competence-based training. The main purpose of an assessment is to make sure that the
students can perform an activity in a specific role, according to specific standards. The
use of "criteria tables" with different weightings, where the student receives scores
depending on which level s/he is at, is recommended. However, of main importance for
an assessment is that "the assessment criteria shall be relevant, valid, reliable,
consistent and realistic" (IMO 2012, p. 97).
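
As a concrete illustration of such a criteria table, the following sketch computes one weighted score. The criteria, weights, and levels are hypothetical examples of our own; the model course recommends criteria tables but does not prescribe this particular rubric.

# Hypothetical weighted criteria table (illustrative only; the criteria,
# weights, and levels below are invented for this sketch).
criteria_weights = {
    "collision avoidance": 0.4,
    "position fixing": 0.3,
    "communication": 0.2,
    "bridge teamwork": 0.1,
}
# Level achieved by one student on each criterion (1 = lowest, 4 = highest).
student_levels = {
    "collision avoidance": 3,
    "position fixing": 4,
    "communication": 2,
    "bridge teamwork": 4,
}
weighted_score = sum(w * student_levels[c] for c, w in criteria_weights.items())
print(f"Weighted score: {weighted_score:.2f} out of 4.00")  # prints 3.20

In such a scheme, the weights encode the relative importance of each criterion, so the same set of levels can yield different overall scores under different weightings.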

2.1.3 Managerial skills and aptitudes

Identifying and providing resources According to the model course, the simulator
training should include a minimum of two experienced instructors. The instructors
should have a relevant maritime background, to be able to identify and provide
significant resources and knowledge for the students.

2.2 Handbook of simulator-based training

The Handbook of simulator-based training originates from the military, where
simulators are essential for the effectiveness of training (Farmer et al. 1999).
Several European countries participated in the creation of the handbook to form a
foundation for simulator training, and the aim of the book is to provide a key resource
to utilize during the establishment and implementation of simulator training. Figure 2
presents the key components selected from the Handbook of simulator-based training,
described in the following sub-chapters.

2.2.1 Training needs analysis

When deciding to use a simulator as a training tool, it is important to perform a
training needs analysis (TNA) before purchasing the simulator and, consequently, to
develop the training program. A TNA consists of four main analyses: mission, task,
trainee, and training analysis. Such analyses assist in establishing the training goals
and subsequently the training objectives for the simulator training. When the training
objectives are finalized, the user can choose the suitable simulator and successively
validate the simulator training and its consistency with the training goals.

2.2.2 Types of simulators

During the first step of a TNA, the mission analysis, the main focus is to find
the purpose and mission of the training. After such analysis, one can establish
the required features of the simulator and subsequently, which simulator to
purchase.

Fig. 2 Handbook of simulator-based training; selected components


2.2.3 Training objectives and sequencing

The next part of the TNA, the task analysis, aims to cover the actual operation,
its conditions, and its goals. The conclusions of a task analysis help frame the
training objectives. The training objectives can be quite extensive, and sequencing
can be utilized to keep the workload at a normal level. It is of great importance to
make sure all students keep up with a constant progression. Sequencing can be
explained as dividing the tasks and increasing the complexity slowly. One can divide
sequencing into four categories (Farmer et al. 1999, p. 98), illustrated in the sketch
after this list:

1. Team-based sequencing, which is "related to the dependencies between the tasks of
different team members."
2. Task-based sequencing, which is "related to the dependencies between the tasks
assigned to an individual operator."
3. Part-task sequencing, which is "related to dependencies between task components."
4. Part-skill sequencing, which is "related to dependencies between constituent
skills."
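
The sketch below illustrates the general idea behind sequencing with a hypothetical exercise plan whose complexity increases slowly; the exercises and complexity levels are our own examples, not taken from Farmer et al. (1999).

# Illustrative sequencing sketch: exercises ordered by slowly increasing
# complexity (the exercises and levels are hypothetical).
from dataclasses import dataclass

@dataclass
class Exercise:
    name: str
    complexity: int  # 1 = most basic, higher = more complex

plan = [
    Exercise("Daylight coastal transit, full equipment", 1),
    Exercise("Night-time transit using navigation lights", 2),
    Exercise("Reduced visibility, radar-assisted navigation", 3),
    Exercise("Dense traffic with GPS signal removed", 4),
]

# Students proceed step by step towards more complex activities.
for exercise in sorted(plan, key=lambda e: e.complexity):
    print(f"Level {exercise.complexity}: {exercise.name}")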

2.2.4 Required vs. available skills

The third stage of a TNA, the trainee analysis, assesses the students' previous
knowledge; the subsequent stage, known as the training analysis, considers the
required versus the available skills of the students. The importance of identifying
these skills lies in defining the actual training need, which emerges from the
"discrepancies between required and available skills" (Farmer et al. 1999, p. 49).
Such gaps between the required and available skills could be addressed through an
effective group division. The students can be divided into groups depending on their
state of knowledge. A group of students with different knowledge and experience could
be beneficial for the students' learning outcome (Littleton and Mercer 2013).
Experienced students can assist less experienced students and thereby increase the
knowledge of the whole group (Postholm 2011). Nevertheless, it is important to make
sure all students are active, and not only the more experienced ones (Postholm 2011).
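
As a minimal formalization of this idea (our own notation, not taken from Farmer et al. 1999), the training need can be written as the set of required skills not yet among the available ones:

\[ \text{Training need} = S_{\text{required}} \setminus S_{\text{available}} \]

where \(S_{\text{required}}\) and \(S_{\text{available}}\) denote the skill sets identified in the trainee and training analyses.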

2.2.5 Training time

The training time is also a factor emerging from the fourth stage of the TNA, the
training analysis. Even if training in a simulator is found to be more efficient than
training in the actual live setting, the amount of training time is of great importance
(Bransford et al. 2000; Farmer et al. 1999). The training time should be decided in
relation to the training objectives. For example, if the training objectives exceed the
actual job requirements, the training time must be prolonged as well. Conversely,
setting the standards of the training objectives too low in combination with a reduced
training time could reduce the transfer of training.

2.2.6 Train like you fight

During the simulator training, all activities and training exercises should be as similar to
reality as possible. "The way in which people are trained and the way in which training
is organized should correspond as closely as possible to operational practice" (Farmer
et al. 1999, p. 77).

2.2.7 Instructional design

During the simulator session, the instructional design should be divided into three main
parts according to Farmer et al. (1999): briefing (prior), tutoring (during), and
debriefing (after), where briefing and debriefing are performed as described above for
the IMO model course 6.10. Tutoring is a direct corrective response in real time. "The
major distinction between tutoring and briefing/debriefing is that tutoring is a process
that proceeds in real-time, i.e., in parallel and in interaction with the training process,
whereas briefing/debriefing is a process that proceeds off-line" (Farmer et al. 1999,
p. 131). Tutoring encompasses two main forms: a proactive one called guidance and a
reactive one called feedback.

2.2.8 Guidance and feedback

Providing guidance and feedback is an important part of simulator training. Even if
tutoring can improve performance, students can also come to rely too much on these
tutoring tools. If such a scenario were to occur, it could reduce the understanding,
instead of increasing the knowledge, of the students. Such scenarios can be prevented
by the instructor as long as s/he is aware of when and how much guidance and feedback
to give. This is also called "fading," where the complexity slowly increases by
"gradually withdrawing tutoring support from the training process" (Farmer et al.
1999, p. 132).

2.2.9 Assessment

According to Farmer et al. (1999), there are two main methods of measuring the
performance of a student, called objective and subjective measures. The main factor in
a subjective performance measurement is the instructor's opinion about the student's
performance. The instructor can use a rating scale and at the same time use previously
collected knowledge about the student's experience and capabilities. Consequently,
such an assessment can contain biases based on, for example, preconceptions about the
students. An objective assessment is much harder to perform, since it is strictly based
on deviations from the task in the student's performance during the assessment, and by
the nature of human perception, subjective bias is difficult to overcome.

2.3 Transfer of training: a review and directions for future research

According to Liu et al. (2009), the main goal of transfer of training is "the ability to
transfer what was learned in training to the actual real-world setting" (Liu et al. 2009,
p. 50), also called positive transfer. Negative transfer refers to the opposite, a
discrepancy between the training and the real-world setting. Many research efforts have
addressed transfer of training, and one of the most cited articles on this topic is
"Transfer of training: a review and directions for future research" by Baldwin and
Ford (1988). They created a framework containing the main components of transfer of
training. This framework is seen as a process comprising training inputs, training
outputs, and conditions of transfer, where each part of the process includes its own
factors.
As shown in Fig. 3, the transfer process includes what is embodied in the training
context, the training inputs. The outcome of the training, the training outputs, can be
measured with, for example, assessments; finally, the conditions of transfer describe
how the training is maintained and generalized in the actual work setting.

2.3.1 The main input in simulator training

When looking at this process, one could state that the simulator training offered in a
nautical bachelor education is mainly affected by the training inputs, since these
factors create the actual training context. All factors within the training inputs are of
course important for the actual simulator training and its outcome. However, of main
importance for the simulator training, according to the IMO model course 6.10, are
the instructor and how s/he designs the simulator training.

2.3.2 Training design

In the study carried out by Kluge et al. (2008), a vast variation between simulator
training environments in the process industries was discovered. One of the main
factors described was that there was little or almost no theory and competence
framework regarding the training design of the simulators. Baldwin and Ford (1988)
claim that training design is an important factor for the transfer of training, mainly
because a superior training design can increase the students' understanding of the
executed activity and hence the transfer of training (Baldwin and Ford 1988;
Bransford et al. 2000).

Fig. 3 Adapted from "A Model of the Transfer Process" by Baldwin and Ford (1988)

Training design consists of four main components: identical elements, teaching of
general principles, stimulus variability, and various conditions of practice (Baldwin
and Ford 1988). These main components, incorporated in the training design, support
students in understanding and mastering the multiple conditions which they will
encounter in their future work setting (Grossman and Salas 2011).

2.3.3 Identical elements

The theory of identical elements emerged from Thorndike and Woodworth (1901).
They argued that transfer of training occurs more easily if the first activities in a
training scenario have components and tasks identical to the subsequent activity
performed in real life. Therefore, simulator training with elements as identical to
reality as possible could increase the transfer of training (Baldwin and Ford 1988).
When establishing identical elements for maritime simulator-based training, one
must therefore look at the activity performed in real life (Thorndike and Woodworth
1901). During the bridge watch, a proper look-out shall be kept at all times (IMO
1972). The officer on watch can operate the bridge alone, except during "reduced
visibility, coastal navigation, increased traffic or other special conditions" (NFD 1999).
A total of two to three persons usually operate the bridge. Thus, an identical
training scenario in the simulator should include two to three students on each bridge
(Baldwin and Ford 1988; NFD 1999; IMO 1972; Thorndike and Woodworth 1901).

2.3.4 Teaching of general principles

During training, it is important for the students to understand the underlying conditions
and theories of the exercise. When the students have a full understanding of the
theoretical rules, they are more likely to solve different problems. Additionally, this
can increase the possibility for transfer of training (Bransford et al. 2000). According to
Vygotsky et al. (1978), people build new knowledge on the foundation of their existing
knowledge. It is therefore of great importance for the instructor to facilitate this process
and make sure that all students have the same understanding.

2.3.5 Stimulus variability

The training should include a great variety of stimuli for positive transfer to occur
(Liu et al. 2009). A meta-analysis regarding flight simulations was conducted by Hays
et al. (1992). In their research, they found variation of stimulus and response during
training to be of great importance for increasing the students' learning (Baldwin
and Ford 1988; Hiebert et al. 2014). Consequently, an increase in experience and
understanding could increase the students' ability "to see the applicability of a
concept in a new situation" (Baldwin and Ford 1988, p. 67).

2.3.6 Various conditions of practice

Different conditions of practice include numerous designs of training (Baldwin and
Ford 1988). Sequencing and feedback are two distinctive conditions of practice
specified by Baldwin and Ford (1988). The students' understanding of a concept
increases when the progression follows their pace and knowledge (Hays et al.
1992; White 1984). The progression of a training scenario with the use of stimulus
variety can be seen as a form of sequencing where the complexity, hence the
stimulus introduced to the students, increases gradually (Collins and Kapur
2014). During these training sessions, feedback can be given to the students.
Feedback is found to be an extremely powerful tool for influencing learning
(Hattie and Timperley 2007). The provided feedback should fill the gaps for the
students regarding where s/he is going, how s/he is going, and where to next
(Black and Wiliam 1998; Hattie and Timperley 2007).

3 Performance indicators

To distinguish which factors to compare in the context of maritime simulator training,
performance indicators were established. Since performance indicators can be seen as
factors related to the main objectives of, for example, an enterprise (Maté et al. 2017),
the performance indicators were developed from the objectives of the IMO model
course 6.10 described in Sect. 2.1, combined with the supplements given in Sect. 2.2,
"Handbook of simulator-based training," and Sect. 2.3, "Transfer of training."
Subsequently, all indicators are taken from the literature elaborated above and will
help to answer the research objective. The performance indicators are presented in
Table 1 and shaped the formation of the tool for the collection of data, i.e., the
interview guide.

3.1 Identical elements

The instructor must know when to intervene in the training to encourage experiential
learning (IMO 2012). The number of instructors in the simulator training could indicate
how independent the students are (Farmer et al. 1999). Subsequently, the number of
instructors in each institution's simulator training will be covered. Additionally, the
number of students each simulator bridge holds could indicate how similar the training
scenarios are to reality (Baldwin and Ford 1988; Thorndike and Woodworth 1901).
Another factor affecting the number of students each bridge holds is the number of
existing bridges. Consequently, how many instructors and bridges each institution has,
in combination with how many students each bridge holds, was examined.

Table 1 Selected performance indicators

Performance indicators          References

Identical elements              (Baldwin and Ford 1988; Farmer et al. 1999; NFD 1999; IMO 1972; IMO 2012; Thorndike and Woodworth 1901)
Training time                   (Bransford et al. 2000; Farmer et al. 1999; IMO 2012)
Teaching of general principles  (Baldwin and Ford 1988; Bransford et al. 2000; Vygotsky et al. 1978)
Students' needs                 (Baldwin and Ford 1988; Farmer et al. 1999; IMO 2012; Postholm 2011)
Feedback                        (Black and Wiliam 1998; Farmer et al. 1999; Hattie and Timperley 2007; IMO 2012)
Training needs analysis         (Farmer et al. 1999; IMO 2012)
Assessment                      (Farmer et al. 1999; IMO 2012)
Instructors' qualifications     (Bransford et al. 2000; IMO 2012)

3.2 Training time

The training time is found to be of great importance to make sure all training objectives
are achieved (Bransford et al. 2000; Farmer et al. 1999; IMO 2012). Therefore, data on
the different institutions' simulator training time were gathered and compared.

3.3 Teaching of general principles

It is important for the students to understand the theories and concepts of the training
sessions (Bransford et al. 2000). The specific semester in which a simulator course
starts will be included in the research, mainly since such information could indicate
how much theory the students can build their understanding on (Vygotsky et al. 1978).

3.4 Students’ needs

The students' needs and the gap between the available and required skills are
important factors to uncover during simulator training (Farmer et al. 1999; IMO 2012).
One way to fill these gaps could be through group division (Postholm 2011). A group
of students with different knowledge and experience could be beneficial for the
students' learning outcome (Littleton and Mercer 2013). Consequently, how each
institution divides the groups in the simulator training will be investigated in relation
to relevant theory. In addition, the students' needs can be addressed through a
progression adequate to the students' knowledge. Therefore, scenarios with a progression
of complexity, and stimulus variability, are important factors to include in simulator
training (Baldwin and Ford 1988; Farmer et al. 1999; Hays et al. 1992; IMO 2012;
White 1984). Subsequently, the utilization of sequencing will be compared in relation
to the literature review.

3.5 Feedback

All theories above discuss feedback in the form of briefing and debriefing, where the
execution of the simulator session can include the different stages mentioned above
(Baldwin and Ford 1988; Black and Wiliam 1998; Farmer et al. 1999; Hattie and
Timperley 2007; IMO 2012). Consequently, how the different institutions perform a
simulator session in relation to such feedback was investigated.

3.6 Training needs analysis

The importance of concise training objectives is distinctly stated in the IMO model
course 6.10. These are suggested to be finalized before choosing the proper simulator
and should be given to the students before each exercise. Subsequently, this research
will cover whether a proper training program development was performed before or
after purchasing the simulator, in combination with clear training objectives given to
the students (Farmer et al. 1999; IMO 2012).

3.7 Assessment

The simulator assessment is recommended to be realistic and valid with the utilization
of a rating scale (Farmer et al. 1999; IMO 2012). Consequently, how the assessments
are performed in each institution’s simulator training will be investigated.

3.8 Instructor qualifications

The instructors must have a clear understanding of the desired outcome of the simulator
training. An instructor must also be able to guide the students and should therefore
possess pedagogical skills (Bransford et al. 2000; IMO 2012). Subsequently, the
qualifications and background of the instructor(s) will be explored in relation to the
literature review.

4 Method

The data were collected through semi-structured interviews based on an interview guide
(Flick 2009). The interview guide, presented in Table 2, includes all relevant topics
related to the designated performance indicators selected in the literature review.
Furthermore, the topic of full mission simulators was clarified before the interviews
were conducted. One of the authors was responsible for the data collection process and
conducted interviews with representative informants of the selected institutes, as
elaborated in the next sub-chapter.

4.1 Sample

The samples were selected based on specific parameters. All samples had to be located
in different countries within Europe and be approved in accordance with the STCW
Convention, and subsequently presented in MSC.1/Circ.1163/Rev.9. Additionally, they
had to facilitate classes of approximately 30 students, increasing the comparability.
Six institutions were selected, with eight informants constituting a stratified sample
of key informants. The main criterion for selecting them as key informants was the fact
that they are all simulator instructors. The selection of all informants was non-random
and purposive, based on the contacts of the authors. This approach was in alignment
with the objectives of this research, which aimed at detecting variation between
institutions rather than absolute conformity with the IMO model course 6.10.
Three interviews were conducted as personal face-to-face interviews, three by Skype,
and two by phone. Each institution is called a phenomenological case, where an outline
of each case, the number of informants, and the utilized channels of communication
are provided in Table 3. The standard ethical considerations laid down by the authors'
affiliated institution regarding data confidentiality and informed consent for the
informants were followed.

Table 2 Semi-structured interview guide used for interviews

Interview guide

Part 1. Unstructured
1. Could you explain and describe the simulators in your institution?
2. How was the simulator training program designed?
3. Could you explain how your simulator training is conducted?
4. Could you explain the instructor's role in the simulator training?
5. Could you explain how the assessment of the simulator training is carried out?
Part 2. Structured
6. How many bridges does your simulator have?
7. Did you develop the objectives and goals for the simulator training before, or after, you
bought the simulator?
8. Do the students have access to the simulator training objectives and goals?
9. During which semester(s) is the full mission simulator training offered?
10. For how long does the simulator exercise normally last?
11. How many instructors are there usually for each session?
12. How many students are there usually in one simulator bridge?
13. How is the progression of the training usually implemented in the simulator course?
14. How do you manage the "group division" of the students during the simulator training?
15. What experience and qualifications do the simulator instructors possess? And what skills
do you find important for a simulator instructor?
16. How are the assessments performed in the simulator training?
17. Is there anything else you would like to add, regarding your simulator training, which you
feel is not covered in this interview?

4.2 Data analysis

The analysis process started with fully transcribing all the interviews. This study utilizes
a top-down perspective where theoretical concepts define the performance indicators
and subsequently the coding (Berg and Lune 2012; Flick 2007; Frankfort-Nachmias
and Nachmias 2008). Since the codes are predefined, the coding is deductive
(Frankfort-Nachmias and Nachmias 2008; Miles et al. 2014). The coding presented
in Table 4 therefore relates to each performance indicator presented in the literature
review and is analyzed in compliance with the relevant theory (Flick 2007; Frankfort-
Nachmias and Nachmias 2008).

Table 3 Informants and channels of communication

Institutions  Number of informants  Channels of communication

Case 1        1                     Face-to-face
Case 2        2                     Face-to-face
Case 3        2                     Skype
Case 4        1                     Phone
Case 5        1                     Skype
Case 6        1                     Phone
Table 4 Coding for data analysis

Main coding                     Sub-coding                                          References

Identical elements              Nr of bridges                                       (Baldwin and Ford 1988; Farmer et al. 1999; NFD 1999; IMO 1972; IMO 2012; Thorndike and Woodworth 1901)
                                Nr of students/bridge
                                Nr of instructors/session
Training time                   Duration of sailing/session                         (Bransford et al. 2000; Farmer et al. 1999; IMO 2012)
Teaching of general principles  Introduction of full mission training               (Baldwin and Ford 1988; Bransford et al. 2000; Vygotsky et al. 1978)
Students' needs                 Systematic group division                           (Baldwin and Ford 1988; Farmer et al. 1999; IMO 2012; Postholm 2011)
                                Sequencing
Feedback                        Briefing                                            (Black and Wiliam 1998; Farmer et al. 1999; Hattie and Timperley 2007; IMO 2012)
                                Debriefing
Training needs analysis         Training program development, training objectives   (Farmer et al. 1999; IMO 2012)
Assessment                      Assessment procedure                                (Farmer et al. 1999; IMO 2012)
Instructor qualifications       Trainers' skills                                    (Bransford et al. 2000; IMO 2012)
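
To make the deductive procedure concrete, the predefined coding scheme of Table 4 can be viewed as a mapping from each main coding to its sub-codings, against which transcript segments are tagged. The sketch below is purely illustrative (the study's coding was performed manually on the transcripts); the function and the quoted segment are our own inventions.

# Illustrative representation of the deductive coding scheme in Table 4.
CODING_SCHEME = {
    "Identical elements": ["Nr of bridges", "Nr of students/bridge",
                           "Nr of instructors/session"],
    "Training time": ["Duration of sailing/session"],
    "Teaching of general principles": ["Introduction of full mission training"],
    "Students' needs": ["Systematic group division", "Sequencing"],
    "Feedback": ["Briefing", "Debriefing"],
    "Training needs analysis": ["Training program development",
                                "Training objectives"],
    "Assessment": ["Assessment procedure"],
    "Instructor qualifications": ["Trainers' skills"],
}

def tag_segment(segment: str, main_coding: str, sub_coding: str) -> dict:
    """Attach a predefined (deductive) code to one transcript segment."""
    assert sub_coding in CODING_SCHEME[main_coding], "unknown sub-coding"
    return {"segment": segment, "main": main_coding, "sub": sub_coding}

# Hypothetical usage with an invented transcript quote:
tagged = tag_segment("We normally place three students on each bridge.",
                     "Identical elements", "Nr of students/bridge")
print(tagged)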
Table 5 Summary of the findings

Main codings                    Sub-codings                             Case 1    Case 2    Case 3          Case 4    Case 5    Case 6

Identical elements              Nr of full mission bridges              5         5         2               4         3         1
                                Nr of students/bridge                   3         2         2–3             8         3         5
                                Nr of instructors/session               2         1         2               4         1         1
Training time                   Duration of sailing/session             4 h       4 h       1:15–6:15 h(a)  4 h       6–8 h(a)  4 h
Teaching of general principles  Introduction of full mission training   3rd sem.  7th sem.  7th sem.        6th sem.  8th sem.  6th sem.
Students' needs                 Systematic group division               No        Yes       No              No        Yes       No
                                Sequencing                              Yes       Yes       Yes             Yes       Yes       Yes
Feedback                        Briefing                                Yes       Yes       No              Yes       No        Yes
                                Debriefing                              Yes       Yes       Yes             Yes       No        Yes
Training needs analysis         Training program development            No        No        Yes             Yes       Yes       No
                                Training objectives                     Yes       Yes       Yes             Yes       Yes       Yes
Assessment                      Assessment procedure                    Yes       Yes       Yes             Yes       Yes       No
Instructor qualifications       Trainers' skills(b)                     Licenses held across the cases include Captain, Chief Officer,
                                                                        2nd Officer, and 3rd Officer (varying by case); extra
                                                                        qualifications include a pedagogy course, a simulator
                                                                        certificate, and teacher education

(a) Variations in the duration of sailing/session were found and will be discussed in Sect. 5.2
(b) The licenses in the sub-coding "Trainers' skills" are structured after rank, with blank areas inserted to make the table more comprehensible
5 Results and discussion

Table 5 above presents a summary of the findings. Each coding related to the performance
indicators will be discussed in relation to the literature review and presented with
the use of a figure. The six participating institutions are referred to as "Cases."

5.1 Identical elements

5.1.1 Number of full mission bridges

The number of full mission bridges each case has in the simulator training facility was
collected to see whether it correlated with the number of students each case could place
in every bridge; that is, whether a case with many bridges would place a low number of
students in each bridge, to construct the simulator training as similarly to reality as
possible (Baldwin and Ford 1988; Farmer et al. 1999; IMO 2012).
As we can see in Fig. 4, a variation was observed regarding how many students each
case placed in their simulator bridges. However, this did not relate to the number of
bridges each institution had. Cases 1 and 2 have the ability to utilize five bridges each,
with three and two students in every bridge, respectively. This is similar to cases 3 and
5, which only utilize two and three bridges respectively, with two to three and three
students in each bridge. In conclusion, they have approximately the same number of
students in each bridge, with a varied number of bridges to utilize.
Cases 4 and 6 differ clearly in the number of bridges, four and one, respectively.
Nevertheless, both cases have the highest number of students in each bridge, with
eight and five students in every bridge, respectively. Consequently, a correlation
between the number of bridges and the number of students/bridge could not be found
among the different cases.

5.1.2 Number of students/bridge

As we can see in Fig. 4, 66.66% of the cases (cases 1, 2, 3, and 5) facilitated two
to three students/bridge, in comparison to 33.33% (cases 4 and 6), which facilitated
five to eight students/bridge.
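
For reference, the percentages reported throughout the results are simply case counts over the six participating institutions:

\[ \frac{4}{6} \approx 66.66\%, \quad \frac{2}{6} \approx 33.33\%, \quad \frac{3}{6} = 50\%, \quad \frac{5}{6} \approx 83.33\%, \quad \frac{1}{6} \approx 16.66\% \]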

Fig. 4 Identical elements (bar chart: number of bridges, students/bridge, and instructors/session for Cases 1–6)

The IMO model course 6.10 clearly states that the students must be introduced to
scenarios which are as close to reality as possible. Cases 1, 2, 3, and 5 utilize
approximately the same number of students in each bridge as there would be in a
real-world scenario out at sea. This indicates that the trainings are organized to present
the students with situations and scenarios as identical as possible to the activities
performed in real life (Farmer et al. 1999; IMO 2012; Thorndike and Woodworth
1901).
Furthermore, cases 2 and 3 explained that they organize some full mission courses as
in real life, with two students on each bridge: one student acting as the officer on
watch or captain, and one student assisting the one in charge (NFD 1999; IMO 1972).
Consequently, this suggests that these cases organize the training to "correspond as
closely as possible" with reality (Farmer et al. 1999, p. 77). Additionally, due to the
facts stated above, one could argue that these two cases could have a higher positive
transfer of training in relation to the other two cases, 1 and 5 (Baldwin and Ford 1988;
Liu et al. 2009).
Cases 4 and 6 perform their training with five to eight students/bridge. Since a vessel
is normally operated with two to three persons located on the bridge, this indicates a
negative transfer of training, with a lack of identical elements in relation to reality.
Additionally, case 6 specifically mentioned that there were too many students in one
full mission bridge for some specific scenarios, indicating that the additional number of
students in one bridge is not beneficial for the students' learning outcome. This implies
that in this specific case, there is a discrepancy between the training and the real-world
setting.
Furthermore, such a large number of students in one bridge could lead to high
activity by more experienced students, leaving some students less active (Postholm
2011). Moreover, it may be hard to discover the students' needs in such large groups.

5.1.3 Number of instructors/session

The IMO model course 6.10 recommends that the simulator training should include a
minimum of two experienced instructors. The number of instructors varied from one to
four across the cases, where only 33.33% (cases 1 and 3) utilized two instructors during
the simulator training, in accordance with the IMO model course 6.10.
Half of the cases (cases 2, 5, and 6) utilized one instructor during their simulator
trainings, which according to the IMO model course 6.10 is below the recommended
minimum. However, 16.66% (case 4) utilized four instructors, which exceeds the
minimum requirement and could therefore be seen as beneficial for the training
facility.
Nevertheless, if we look at the instructors' intervening, it was mentioned that the
instructors in case 4 barely leave the students alone in the bridges, "maybe 5-10
minutes to see what happens." This indicates that the students might come to rely too
much on the instructors, since they are always present in the training, and that the
students in case 4 get little or no opportunity to practice experiential learning. The
remaining 66.66% (cases 1, 2, 3, and 5) all mentioned a dynamic intervening where the
students were left alone on the bridges, indicating an encouragement of experiential
learning. In addition, instructors in each of these cases tried to guide the students,
make them aware of situations, and described how they could discuss different
solutions together with the students.
Experiential learning in cases 3 and 5 is suggested since both mentioned "learning by
doing" and "self-learning" as important factors to be aware of when interacting with
the students (Dewey 2004; Farmer et al. 1999). Case 1 described how the second
instructor enters the bridges, for example acting as a pilot or supporting the students
as an instructor, indicating the utilization of guidance. Independence for the students
in case 1 is also suggested, since only one instructor intervenes across five bridges,
enabling experiential learning.
Furthermore, case 2 described how the instructor tries to reduce the interaction
further on in the training and subsequently tries not to intervene at all. This could
reduce the risk of the students relying on the instructors to guide them and successively
improve the students' performance and experiential learning.
Case 6 described the interaction as occurring via the intercom or VHF, indicating low
personal interaction with the students in the bridges. Subsequently, this indicates a
high degree of experiential learning for the students. On the other hand, the absence of
personal interaction might occur due to the fact that only one instructor is utilized
during the simulator training. Guidance and feedback were also described as an
important part of the simulator training (Farmer et al. 1999). Consequently, due to the
absence of personal interaction, the students in case 6 may have received less guidance
during the simulator training, which could impact their learning gain and understanding.

5.2 Training time

As we can see in Fig. 5, 66.66% of the cases (cases 1, 2, 4, and 6) have the same amount
of training time, 4 h, inside the full mission simulator bridges. However, two institutions
differ from the other four. Case 3 described their training time as varying between 1 h
15 min and 6 h 15 min, depending on the exercises performed in the simulator. This
indicates that case 3's simulator training time varies depending on the exercise and its
objectives.

Fig. 5 Training time (bar chart: hours of sailing/session for Cases 1–6; variations marked with an asterisk)



Case 5 also deviates from the other cases, with a training time of 6–8 h/session. In
addition, the students were described as using 4–6 h to prepare for the exercises: "for
the preparation they also need at least 4–6 h." This indicates that the training objectives
in case 5 exceed the other cases' objectives, and subsequently, the training time had to
be prolonged (Farmer et al. 1999; IMO 2012).
However, since 66.66% of the cases utilize a training time of 4 h, this could be seen
as an optimal number of hours for the training. Consequently, 4 h might be enough
time to eliminate the possibility of insufficient training time, preventing a reduction in
the transfer of training. In addition, 4 h might keep the training time at a level where
the exercises performed do not exceed the actual job requirements.

5.3 Teaching of general principles

As we can see in Fig. 6, 83.33% of the cases (cases 2, 3, 4, 5, and 6) introduce the full
mission training quite late in the education, in comparison to case 1. This indicates that
students entering the full mission training in case 1 have had less theoretical knowledge
than in the other cases. Consequently, the understanding of the full mission training
might be lower in case 1 and could lead to a reduced transfer of training (Bransford
et al. 2000). Moreover, a thorough understanding of the underlying theory cannot have
developed so early in the education, leading the students to build knowledge on what
they observe, instead of on the actual facts taught in the relevant curriculum (Vygotsky
et al. 1978).
However, Dewey (2004) suggests that people "learn by doing," indicating that an
early introduction to the full mission training could in fact increase the students'
understanding.

5.4 Students' needs

5.4.1 Systematic group division

As we can see in Table 6, only 33.33% of the cases (cases 2 and 5) were found to have a
system regarding the group division during the simulator training. When choosing the
groups, the instructor must take the students' knowledge and needs into account
(Farmer et al. 1999; IMO 2012; Postholm 2011). Both cases 2 and 5 were found to have
systems for the group division which took the students' needs into account. However,
these systems are quite different from each other.

Fig. 6 Introduction of full mission training (bar chart: semester of introduction for Cases 1–6)


Table 6 Variations in cases for performance indicators—students' needs, feedback, training needs analysis, and assessment

Main coding              Sub-coding                    Case(s): Yes    Case(s): No

Students' needs          Systematic group division     2, 5            1, 3, 4, 6
Feedback                 Briefing                      1, 2, 4, 6      3, 5
                         Debriefing                    1, 2, 3, 4, 6   5
Training needs analysis  Training program development  3, 4, 5         1, 2, 6
Assessment               Assessment procedure          1, 2, 3, 4, 5   6

Case 2 mentioned that they noticed some students who participated less in the
training, possibly because they were weaker than the other students. For this reason,
the instructors started to divide the groups by taking this factor into account. This
indicates that case 2 tries to acknowledge the students' available skills in relation to
the required skills, to make sure all students participate equally, thereby increasing
the learning outcome.
Case 5 described how they utilize a professional system to build the bridge teams.
This system was described as focusing on human interaction and team management.
Furthermore, a 3-day BRM course was described as taking place before the simulator
training, to increase the selected groups' teamwork, indicating that the group
members learn how to interact with each other and subsequently share their
knowledge to improve their performance (Littleton and Mercer 2013). In addition,
the first exercise was described as helping the teams discover what must be
improved in the management process before the actual full mission training starts. This
implies that all students must be active in the exercises, to improve their management
and fulfill their tasks (Postholm 2011).
The remaining 66.66% (cases 1, 3, 4, and 6) were not found to utilize a system to
cover the students' needs or fill the gap between the required and available skills
(Farmer et al. 1999; IMO 2012; Postholm 2011). This was concluded since two of the
cases, 1 and 6, used the class list to divide the groups, which indicates that the students'
earlier experience and the required skills are not taken into account. In addition, case 1
mentioned how they try to balance the groups' gender. However, this does not indicate
or impose a group division where the students' needs are considered.
Case 3 mentioned that the students could decide their own groups, indicating that the
students' knowledge and needs were not considered. However, it was described that
the instructors could change the groups in the ship-handling course if an insufficient
group environment was discovered. This indicates some consideration of the students'
needs; nevertheless, no system regarding group division was found to be utilized.
Case 4's group division was conducted by the instructor; however, it was explained
that the focus was to make the groups equal in numbers. Nevertheless, it was mentioned
that the groups were altered during every session, to teach the students to cooperate
with different people. Cooperation is of course an important consideration to take into
account. However, the group division was not found to address the students' actual
needs or knowledge, which could be seen as a necessary aspect to facilitate an
increased understanding within the group.

5.4.2 Sequencing

All cases utilized sequencing in their simulator training. Nevertheless, the way each
case introduced sequencing varied from case to case. Cases 1 and 6 utilized
part-skill sequencing to introduce the students to the different skills needed in
different visibilities. The students must learn the basic skills to navigate in daylight
before they are introduced to night-time and reduced visibility, where other skills are
necessary, for example, knowledge regarding navigation lights and how to use the
radar.
Case 1 also utilized task-based sequencing, and a progression of the implemented
equipment was described. In addition, the removal of a GPS signal was described as
increasing the complexity, indicating the use of part-task sequencing, since the removal
of a GPS signal will affect other components on the bridge.
Furthermore, cases 1 and 3 mentioned the importance of "balancing" the exercise
and taking the students' knowledge into account, indicating that all students have the
chance to keep up with a constant progression and that the progression follows the
students' understanding. Subsequently, this suggests that the students' knowledge
might increase due to such balancing (Hays et al. 1992; White 1984).
Cases 2, 3, and 4 mainly described the full mission simulator training as containing a
progression with increased complexity towards the end, while case 5 described their
first exercise as quite easy, followed by a high-load exercise during the second
session. However, it was described that the complexity was reduced after this session,
with an increased complexity towards the end of the training, indicating that proper
sequencing in accordance with the literature is conducted (Collins and Kapur 2014;
Farmer et al. 1999).

5.5 Feedback

5.5.1 Briefing

The briefing was found to be conducted by 66.66% of the cases (cases 1, 2, 4, and 6),
as we can see in Table 6. The reason for this conclusion stems from the IMO model
course 6.10, where a simulator session is suggested to include briefing, planning,
simulator exercise, and debriefing. Consequently, a simulator session is suggested to
start with a briefing, with which four of the designated institutions actually complied.
If we look at cases 3 and 5, it was described that case 3 has a briefing on the first day
of the training but then continues the sessions with exercise and debriefing. Case 5 has
a designated briefing room but chooses to start with the planning, which is the second
step in a simulator session, instead of performing an actual briefing. Subsequently,
these two cases were not found to comply with the literature review, as neither of the
cases utilized briefing regularly during their sessions.
5.5.2 Debriefing

The simulator exercise was followed by a debriefing in five out of six cases, 83.33%,
as shown in Table 6, i.e., cases 1, 2, 3, 4, and 6. Case 5 was described as having a
debriefing done by the students at the beginning of the next simulator session.
Consequently, this way of debriefing was questioned, since debriefing is seen as
a training skill performed by the instructor at the end of the session (Farmer et al. 1999;
IMO 2012).
Nevertheless, it was mentioned that the debriefing had been performed by the
instructor in previous years; however, it was argued by the informant that the outcome
of this was close to zero. This could indicate that the previous debriefings performed by
the instructor were conducted in an insufficient way, primarily since feedback is found
to be an important tool for influencing learning and an important part of a debriefing
(Hattie and Timperley 2007; IMO 2012). However, this also suggests that the
debriefings performed by the students may in fact enhance their learning outcome,
due to an increased responsibility and hence interaction when performing the
debriefings (Littleton and Mercer 2013).
Debriefing is a tool for students to reflect on their own actions and learn
from each other after the exercise. Postponing the students' reflections until the
next week, as in case 5, could, however, reduce the learning outcome. This is
suggested since the memory of their performance could have faded and, as a
consequence, may not be as specific as if the debriefing had been performed directly
after the exercise.
If we look at the other cases, which were found to perform the debriefing in
compliance with the literature, it was observed that there is still great variety in
how the debriefings are performed and how the feedback is given. Case 1 described
its debriefing as containing two distinct segments: an internal debriefing, in which
each group gets to reflect on its own performance, followed by a debriefing with
all the students, which suggests that all students can learn from each other, in
combination with feedback received from the instructors.
Case 2 performs a debriefing in the classroom together with all the students. It was
described that a strategy of encouraging students who might have made a mistake was
utilized during the debriefing, since everyone can learn from such mistakes. Case 3
explained that the students must reflect on three questions after leaving the bridge,
i.e., what did I do well, what could I have done differently, and what have I learned.
These questions can be related to the three questions mentioned in the literature
review, i.e., where am I going, how am I going, and where to next (Black and Wiliam
1998; Hattie and Timperley 2007). By asking these three questions, the students must
reflect on their performance, thereby enhancing their understanding. The simulator
sessions are replayed, and feedback is given to the students; at the same time, they
must share their answers to the three questions with each other. This indicates that the
students learn from each other by sharing their reflections regarding their own
performance, which suggests that case 3 is performing a debriefing entirely in
accordance with the literature review.
Cases 4 and 6 were described as performing their debriefings similarly to each other,
with the use of replayed exercises, where the students receive feedback regarding their
performance. Consequently, these were also found to comply with the literature review.

5.6 Training needs analysis

5.6.1 Training program development

Before choosing a suitable simulator, one must establish the training goals and
objectives and, consequently, develop the training program. Half of the cases, i.e.,
cases 1, 2, and 6, were found to have developed their training program after they
bought the simulators. Nevertheless, case 2 specified that they lacked information
regarding the development of the first simulator training program. However, since the
decision making and exercise development were described as events which transpired
after the new simulator was installed, this indicates that the training program
development occurred after the simulator was selected.

5.6.2 Training objectives

All cases were found to provide the students with the training objectives for the
simulator training and the exercises. However, cases 1, 2, 3, 5, and 6 were found to
provide the exercises and the learning objectives on the institutions' virtual platforms
designated for the students.
In addition, case 3 provides each student with a logbook during their watch-keeping
course, stating all the exercises and their learning objectives. Case 4 presents the
exercises and the training objectives at the actual briefing before the exercise.
Since cases 1, 2, 3, 5, and 6 all present the exercises and their objectives before the
sessions start, the students in these cases have the opportunity to prepare more for
each exercise. Consequently, they may have the opportunity to study the underlying
conditions for each specific exercise before entering the training and therefore gain an
increased understanding (Bransford et al. 2000), in comparison to the students in
case 4, who do not get the opportunity to see the exercises beforehand.

5.7 Assessment

5.7.1 Assessment procedure

As shown in Table 6, five of the six cases (83.33%), i.e., cases 1, 2, 3, 4, and 5,
performed the assessment in accordance with the literature review. Assessment in a
simulator training exercise is performed to determine whether the students can perform
an activity in a specific role, according to specific standards. The assessment can be
subjective or objective, depending on whether pre-conceptions regarding the students
are taken into account during the assessment.
The findings were analyzed to comprehend whether the cases performed their
assessments in accordance with the literature review. Four cases, i.e., cases 2, 3, 4,
and 5, were found to perform their assessment with the use of rating scales or a
specific assessment form (Farmer et al. 1999; IMO 2012). However, even though
case 1 did not describe the use of a specific form or criteria table, as suggested in the
literature review, the main objective of the assessment was explained as the
performance of a "safe voyage without endangering the vessel or others." In addition,
case 1 described the complexity of the simulator assessment as increasing each
semester. Consequently, this suggests that case 1 is performing an assessment which is
still "relevant, valid, consistent and realistic" (IMO 2012, p. 97), even if a criteria table
is not utilized.
Case 6 described the assessment as a fast process, containing only brief questions
regarding the ECDIS and ARPA. This suggests that the assessment is not very relevant
or realistic in comparison to activities performed in real life (Baldwin and Ford 1988;
Farmer et al. 1999; IMO 2012). Subsequently, since cases 2, 3, 4, and 5 utilize rating
scales, and case 1 performed relevant and realistic tasks, these were found to perform
the assessment in compliance with the literature review. Case 6's assessment, on the
other hand, was found to lack similarity with real-life operations and, consequently,
was not found to comply with the literature review.
However, the cases found to comply with the literature review still contained some
dissimilarities between each other. While most were found to perform a final
examination at the end of the course, case 3 described its assessment as an ongoing
process during the simulator course. The students were described as being given
scores after every session, which were discussed as part of the debriefing. At the end
of the course, these scores determined whether the students had passed the course or
not. This could be seen as an advantage for the students, since they know how they
are doing and what to improve in relation to their scores after each session.
Subsequently, this could be seen as a way for the instructors to discover each student's
specific needs, thereby enhancing their understanding (Black and Wiliam 1998; Farmer
et al. 1999; Hattie and Timperley 2007).
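To make the ongoing-assessment logic concrete, the sketch below aggregates per-session scores into a pass/fail outcome. The 0–100 scale, the averaging rule, and the pass mark are assumptions introduced here for illustration only; the informants did not specify them:

    def pass_course(session_scores, pass_mark=60):
        # Average the scores given after every session (assumed 0-100 scale)
        # and compare the result against an assumed pass mark.
        mean_score = sum(session_scores) / len(session_scores)
        return mean_score >= pass_mark

    print(pass_course([55, 62, 70, 78]))  # True: the average, 66.25, clears 60
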
Case 5 explained that an external assessor from the government attended the final
assessment. Due to the external assessor, this indicates that both an objective and a
subjective assessment are performed in case 5. The objective assessment is suggested,
since one can assume that the external assessor does not have any previous knowledge
regarding the students' abilities and therefore cannot take their previous performances
into account. The instructor, on the other hand, who also acts as an assessor in the final
assessment, can take them into account, subsequently resulting in both an objective
and a subjective assessment.

Fig. 7 Trainer skills
Case 4 explained that two assessors gave the students their scores. These scores were
then multiplied and divided to create the final score. This indicates that the bias of the
instructors acting as assessors is reduced, giving the students a more realistic score.
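The informants did not state the exact formula behind "multiplied and divided"; one reading consistent with combining two ratings into a single score less dependent on either assessor alone is a geometric mean, sketched here purely as a hypothetical illustration:

    from math import sqrt

    def combined_score(score_a, score_b):
        # Hypothetical combination of two assessors' ratings: multiplying the
        # scores and taking the square root yields their geometric mean.
        return sqrt(score_a * score_b)

    print(round(combined_score(70, 80), 1))  # 74.8, between the two ratings
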

5.8 Instructor qualifications

5.8.1 Trainer skills

The IMO model course 6.10 states that the instructors must have a relevant background
to be able to provide significant knowledge to the students. As we can see in Fig. 7, all
cases had instructors with an officer license and, consequently, relevant experience.
Apart from the maritime background, three of the six cases, i.e., cases 2, 4, and 5,
distinguished themselves from the others. The reason is that the instructors in case 2
have taken a pedagogy course and the instructors in case 4 have simulator certificates
for their specific simulator, while the instructor in case 5 has a teacher education.
Since the IMO model course 6.10 clearly states that the instructor should have a
"background or experience in teaching or instructional techniques" (IMO 2012, p. 54),
this indicates that these three cases have a superior understanding of how to construct
simulator training in a pedagogical way and thereby increase the transfer of training
(Bransford et al. 2000; IMO 2012). In the remaining three cases, the mentioned
qualifications mainly concerned the maritime background.

5.9 Limitations

Although this study was conducted thoroughly, some limitations were inevitable and
are fully recognized by the authors. Primarily, due to the informants' different nation-
alities, the interviews were not conducted in their native languages. Subsequently,
misunderstandings due to language barriers, resulting in inaccurate data, cannot be
excluded in this study. Furthermore, due to time limitations and other economic factors,
this study did not utilize observation as a method, which could have been beneficial for
understanding each institution's simulator training in real life, reducing the possibility
of imprecise data collected from the informants.

6 Conclusion

The objective of this study was to explore different institutions' full mission simulator
training variability as a phenomenon. Since there are limited regulations regarding
simulator training, an existing variation between different institutions' simulator training
was implied. The selected performance indicators—identical elements, training time,
teaching of general principles, students' needs, feedback, training needs analysis,
assessment, and instructors' qualifications—were found to deviate depending on how
the different cases had implemented these indicators in the training.
Some indicators were found to exist only in a few of the designated cases' simulator
training, while other indicators were found to be utilized by all cases. However, some
of the indicators utilized by all cases were discovered to deviate due to a variation in
their executions. In conclusion, even though some of the simulator training practices in
Europe appear to be performed similarly due to comparable proceedings, the
implementations of these proceedings can create dissimilarities. The study could
facilitate identifying the opportunities and hurdles arising from the uniform
implementation of IMO regulations across countries.
This study has created an opportunity for further research regarding maritime
simulator training. We would like to point out the following avenues for future
research: (1) The researched phenomenon could be studied with the complementary
use of observations, increasing the perspective on and understanding of simulator-based
training. (2) Non-European countries should be included, with each interview
conducted in the informant's native language, thereby eliminating possible misunder-
standings due to language barriers. (3) Each established performance indicator derived
from this study should be investigated on a deeper and more well-defined level, thereby
creating an even broader understanding of maritime simulator training facilities.

Acknowledgements The authors would like to thank the participants in the study for their valuable
contributions and anonymous reviewers for their feedback.

References

Baldwin TT, Ford JK (1988) Transfer of training: a review and directions for future research. Pers Psychol 41:
63–105
Berg BL, Lune H (2012) Qualitative research methods for the social sciences, 8th edn. Pearson, Boston
Black P, Wiliam D (1998) Inside the black box: raising standards through classroom assessment. Phi Delta
Kappan 2:139–148
Bransford JD, Brown A, Cocking RR (2000) How people learn: brain, mind, experience, and school:
expanded edition. The National Academies Press, Washington, DC. https://doi.org/10.17226/9853
Collins A, Kapur M (2014) Cognitive apprenticeship. In: Cambridge handbook of learning science, 2nd edn.
Cambridge University Press, New York, pp 47–60
Crossan MM, Lane HW, White RE (1999) An organizational learning framework: from intuition to institution.
Acad Manag Rev 24:522–537
Dewey J (2004) Democracy and education. Courier Corporation, New York
Farmer E, van Rooij J, Riemersma J, Jorna P, Moraal J (1999) Handbook of simulator-based training. Ashgate
Publishing Company, London
Flick U (2007) Designing qualitative research vol 1. Sage, London
Flick U (2009) An introduction to qualitative research. Qualitative Forschung, 4th edn. Sage, Los Angeles
Frankfort-Nachmias C, Nachmias D (2008) Research methods in the social sciences, 7th edn. Worth
Publishers, New York
Grossman R, Salas E (2011) The transfer of training: what really matters. Int J Train Dev 15:103–120.
https://doi.org/10.1111/j.1468-2419.2011.00373.x
Hattie J, Timperley H (2007) The power of feedback. Rev Educ Res 77:81–112. https://doi.org/10.3102
/003465430298487
Hays RT, Jacobs JW, Prince C, Salas E (1992) Flight simulator training effectiveness: a meta-analysis. Mil
Psychol (Taylor & Francis Ltd) 4:63–73
Hiebert NM, Vo A, Hampshire A, Owen AM, Seergobin KN, MacDonald PA (2014) Striatum in stimulus–
response learning via feedback and in decision making. NeuroImage 101:448–457
International Maritime Organization (1972) Convention on the international regulations for preventing
collisions at sea. IMO, London

International Maritime Organization (2011) International convention on standards of training, certification and
watchkeeping for seafarers (STCW): including 2010 Manila amendments; STCW convention and
STCW code, 3rd consolidated edn. IMO, London
International Maritime Organization (2012) Model Course 6.10 Train the simulator trainer and assessor.
International Maritime Organization, London
Kluge A, Sauer J, Schuler K, Burkolter D (2008) Designing training for process control simulators: a review of
empirical findings and current practices. Theor Issues Ergon Sci 10:489–509
Littleton K, Mercer N (2013) Interthinking: putting talk to work. Routledge, London
Liu D, Blickensderfer EL, Macchiarella ND, Vincenzi DA (2009) Transfer of training. In: Vincenzi DA, Wise
JA, Mouloua M, Hancock PA (eds) Human factors in simulation and training. CRC Press, New York, pp
49–60
Maté A, Trujillo J, Mylopoulos J (2017) Specification and derivation of key performance indicators for
business analytics: a semantic approach. Data Knowl Eng 108:30–49. https://doi.org/10.1016/j.
datak.2016.12.004
Miles MB, Huberman AM, Saldaña J (2014) Qualitative data analysis: a methods sourcebook, 3rd edn. Sage,
Los Angeles
Moroney WF, Lilienthal MG (2009) Human factors in simulation and training: an overview. In: Vincenzi DA,
Wise JA, Mouloua M, Hancock PA (eds) Human factors in simulation and training. CRC Press, New
York, pp 4–35
Nærings- og fiskeridepartementet (NFD) [Ministry of Trade, Industry and Fisheries] (1999) Forskrift om
vakthold på passasjer- og lasteskip [Regulations on watchkeeping on passenger and cargo ships]
Nazir S, Jungefeldt S (2017) Simulator based maritime training—a comparative study. In: Proceedings of the
International Association of Maritime Universities conference, 11–15 October 2017, Varna, pp 82–91
Nazir S, Øvergård KI, Yang Z (2015) Towards effective training for process and maritime industries. Procedia
Manuf 3:1519–1526
Postholm MB (2011) Organisering og ledelse av læringsaktivitet [Organization and management of learning
activities]. In: Lærerarbeid for elevenes læring 5–10. Høyskoleforl., Kristiansand
Sharma A, Nazir S, Wiig AC, Sellberg C, Imset M, Mallam S (2019) Computer supported collaborative
learning as an intervention for maritime education and training. In: Nazir S, Teperi AM, Polak-Sopińska A
(eds) Advances in human factors in training, education, and learning sciences. AHFE 2018. Advances in
intelligent systems and computing, vol 785. Springer, pp 3–12
Thorndike EL, Woodworth RS (1901) The influence of improvement in one mental function upon the
efficiency of other functions: II. Functions involving attention, observation and discrimination. Psychol
Rev 8:553–564. https://doi.org/10.1037/h0071363
Vygotsky LS, Cole M, John-Steiner V, Scribner S, Souberman E (1978) Mind in society: the development of
higher psychological processes. Harvard University Press, Cambridge
White BY (1984) Designing computer games to help physics students understand Newton’s Laws of motion.
Cogn Instr 1:69–108. https://doi.org/10.1207/s1532690xci0101_4
