
MINERVA-5: A MULTIFUNCTIONAL DYNAMIC EXPERT SYSTEM

BY
VADIM V. BULITKO
Dipl., Odessa State University, 1995

THESIS
Submitted in partial fulfillment of the requirements
for the degree of Master of Science in Computer Science
in the Graduate College of the
University of Illinois at Urbana-Champaign, 1998

Urbana, Illinois
MINERVA-5: A MULTIFUNCTIONAL DYNAMIC EXPERT SYSTEM

Vadim V. Bulitko
Department of Computer Science
University of Illinois at Urbana-Champaign, 1998
David C. Wilkins, Advisor

This thesis describes a research project on building a versatile expert system shell
and its applications. The system presented, Minerva-5, is an attempt to use blackboard
architectures for dynamic control, advising, and critiquing.
The concepts of the blackboard and the deliberate-schedule-execute cycle of operation
are well known and have been exploited in different areas ([18], [11], [20]). Our work
extends this approach by refining the scheduling stage into qualitative prediction and
state evaluation substages. This refinement turns out to improve all Minerva-5 functions
(control, advising, and critiquing).
In order to support the extended framework, we found it efficient and convenient to
employ various AI paradigms (such as rule-based reasoning, Petri Nets, and artificial
neural networks) as knowledge sources of an integrating blackboard architecture. Such a
setup naturally provides for multiple levels of abstraction and opportunistic reasoning
([21]) and thus efficiently addresses different reasoning subtasks (such as classification,
predictive simulation, and scheduling). It also facilitates explanatory and critiquing
facilities, which normally come at a high expense, at low cost.
As a mathematical analysis shows, the Minerva-5 deliberation process is computationally
feasible, while Minerva is a universal computational device (i.e., equivalent to the
Turing Machine).
The framework has been tested in the domain of Damage Control aboard Navy battleships
and achieved expert-level performance in the DCTrain simulated environment ([28]).

To My Parents

Acknowledgements

First of all, I would like to give great thanks to my advisor, Dr. David C. Wilkins, for
his continuous and versatile support that made this project possible. KBS members, and
especially Surya Ramachandran and Ole Mengshoel, have provided me with numerous
valuable ideas and suggestions. Arthur Menaker, Tamar Shinar, Adam Boyko, Sebastian
Magda, and Scott Borton have done a wonderful programming job. Finally, my deepest
debt is to my parents, who not only supported me morally but also provided me with a
lot of interesting scientific ideas.
The research has been supported in part by ONR Grant N00014-95-1-0749, ARL
Grant DAAL01-96-2-0003, NRL Grant N00014-97-C-2061, and the International Soros
Educational Program (ISSEP) through Grant GSU051239.

Preface

It seems to me that Artificial Intelligence as a discipline was founded by a number of
brilliant dreamers. They had radically different backgrounds, different approaches,
and different ideas, but there was one driving dream common to all of them: to build an
intelligent (or thinking) machine.
Now, after decades of work, many victories, and maybe even more losses, many AI
researchers still aim at building Artificial Intelligence in the true sense of the word.
I believe that expert systems, and especially expert systems for dynamic real-time
control, critiquing, and advising, are on the way to the original dream, and I hope that
my humble work makes a little step towards it.

Table of Contents
Chapter
1 Introduction .............................................................. 1
2 The Tasks of Dynamic Problem-Solving, Advising, and Critiquing ............ 2
   2.1 Static Diagnosis Domains ............................................. 2
       2.1.1 Tasks: Diagnosis ............................................... 2
       2.1.2 Challenges ..................................................... 4
   2.2 Dynamic Domains ...................................................... 5
       2.2.1 Tasks: Problem-Solving/Control ................................. 5
       2.2.2 Challenges ..................................................... 6
       2.2.3 Tasks: Advising and Critiquing ................................. 7
       2.2.4 Challenges ..................................................... 9
3 Proposed Approach ........................................................ 11
   3.1 Introduction ........................................................ 11
       3.1.1 Objectives .................................................... 11
       3.1.2 Philosophy of the Approach .................................... 12
       3.1.3 Blackboard as an Integrating Framework ........................ 13
   3.2 Overall Design ...................................................... 14
       3.2.1 Deliberation Module: Domain Knowledge Sources ................. 14
       3.2.2 Deliberation Module: Strategy Knowledge Sources ............... 18
       3.2.3 Scheduling Module: Design Ideas ............................... 21
       3.2.4 Scheduling Module: Predictor .................................. 21
       3.2.5 Scheduling Module: Evaluator .................................. 23
       3.2.6 Scheduling Module: Computing Utilities ........................ 24
   3.3 Operation ........................................................... 24
       3.3.1 Main Loop ..................................................... 24
       3.3.2 Problem-Solving ............................................... 27
       3.3.3 Advising ...................................................... 27
       3.3.4 Critiquing and Measuring Performance .......................... 30
   3.4 Mathematical Formalization .......................................... 31
       3.4.1 Notation ...................................................... 31
       3.4.2 Domain Layer .................................................. 31
       3.4.3 Strategy Layer ................................................ 34
       3.4.4 Scheduling Layer: Extended Petri Nets Prediction Module ....... 36
       3.4.5 Scheduling Layer: Computing Utilities ......................... 39
       3.4.6 Minerva Main Loop ............................................. 41
       3.4.7 Computing Window Sizes for Critiquing and Performance Measure . 41
       3.4.8 Computing Performance Measure ................................. 43
   3.5 Complexity Analysis ................................................. 45
       3.5.1 Domain Level Bounds ........................................... 45
       3.5.2 Blackboard Bounds ............................................. 46
   3.6 Equivalence to Turing Machine ....................................... 53
   3.7 Implementation ...................................................... 55
   3.8 Application to the Navy Damage Control Domain ....................... 58
       3.8.1 Domain Background ............................................. 58
       3.8.2 Minerva as an instructor aid in DCTrain environment ........... 60
       3.8.3 Minerva as a DCA Decision Aid in DC-ARM ....................... 63
   3.9 Evaluation .......................................................... 66
       3.9.1 Theoretical Evaluation of the Proposed Approach ............... 66
       3.9.2 Practical Evaluation of the Proposed Approach ................. 66
4 Related Work ............................................................. 73
   4.1 Medical Diagnosis Expert Systems .................................... 73
       4.1.1 MYCIN ......................................................... 73
       4.1.2 NEOMYCIN ...................................................... 74
   4.2 Blackboard Expert Systems ........................................... 74
       4.2.1 Guardian ...................................................... 74
       4.2.2 Minerva Family ................................................ 76
       4.2.3 HASP .......................................................... 78
5 Thesis Contributions and Conclusions ..................................... 81
   5.1 Contributions ....................................................... 81
       5.1.1 Theoretical Contributions ..................................... 81
       5.1.2 Practical Contributions ....................................... 82
   5.2 Conclusions ......................................................... 83
       5.2.1 Thesis Summary ................................................ 83
       5.2.2 Future Research Directions .................................... 84
Appendix ................................................................... 86
A DCA Doctrines ............................................................ 86
B Minerva-DCA knowledge layers ............................................. 89
   B.1 Domain Layer ........................................................ 89
       B.1.1 Domain Facts .................................................. 89
       B.1.2 Domain Rules .................................................. 91
       B.1.3 Domain Graph .................................................. 96
   B.2 Strategy Layer ...................................................... 96
   B.3 Extended Petri Nets Predictor ...................................... 105
   B.4 State Evaluator .................................................... 118
       B.4.1 Design ....................................................... 118
       B.4.2 Inductive Learning Setup ..................................... 119
       B.4.3 Multilayer Perceptrons with Backpropagation Learning ......... 121
       B.4.4 Kohonen Maps ................................................. 121
       B.4.5 Decision Trees ............................................... 135
       B.4.6 Comparisons and Comments ..................................... 146
   B.5 Rule-based Scheduling Layer of Minerva-4 ........................... 148
   B.6 Critiquing and Problem-Solving Knowledge ........................... 151
C Minerva Graphical User Interfaces (GUIs) ................................ 153
   C.1 Explanatory GUI .................................................... 153
   C.2 Advisory GUI ....................................................... 155
   C.3 Critiquing GUI ..................................................... 156
D Experimental Data ....................................................... 159
   D.1 Blackboard Statistics .............................................. 159
       D.1.1 Minerva-3 .................................................... 160
       D.1.2 Minerva-4 .................................................... 160
       D.1.3 Minerva-5 .................................................... 160
       D.1.4 Comparative Chart ............................................ 160
   D.2 Damage Control Scenarios ........................................... 160

Bibliography .............................................................. 176
Vita ...................................................................... 180

List of Figures
3.1  Minerva-5 Overall Design ............................................... 15
3.2  A Minerva-5 Domain Rule ................................................ 16
3.3  Minerva-5 Domain Rule Format ........................................... 17
3.4  Minerva-DCA Strategy Chain ............................................. 20
3.5  Minerva-5 Strategy Rule ................................................ 20
3.6  Minerva-5 Main Loop .................................................... 25
3.7  Minerva-5 Advisory GUI short-form NL explanation generation ............ 28
3.8  Minerva-5 Advisory GUI long-form NL explanation generation ............. 29
3.9  EPN firing mechanism ................................................... 38
3.10 Illustrating performance measure ....................................... 43
3.11 Possible Strategy Network Schema ....................................... 47
3.12 Strategy Network Scheme Starting with process-finding (step 1) ......... 49
3.13 Strategy Network Scheme Starting with process-finding (step 2) ......... 49
3.14 Minerva rules for MT simulation ........................................ 56
3.15 MT accepting 1s and changing them to x ................................. 57
3.16 DDG-51 Arleigh Burke Destroyer ......................................... 59
3.17 Main Repair Station Locations .......................................... 59
3.18 Minerva-5 in DCTrain ................................................... 62
3.19 Minerva-5 in DC-Aware .................................................. 64
3.20 Minerva-4/5 vs. SWOS graduates ......................................... 68
3.21 Minerva-4/5 Average Cycle Time (Graph 1) ............................... 70
3.22 Minerva-4/5 Average Cycle Time (Graph 2) ............................... 70
3.23 Minerva-4/5 Average Cycle Time (Graph 3) ............................... 71
4.1  Minerva history ........................................................ 77
A.1  DCA's responsibilities on setting GQ ................................... 86
A.2  DCA's responsibilities on investigation and setting FBs ................ 87
A.3  DCA's responsibilities on handling fire progress ....................... 87
A.4  DCA's responsibilities on managing pressure drop on fire main .......... 88
B.1  Domain Subgraph (part 1) ............................................... 97
B.2  Domain Subgraph (part 2) ............................................... 98
B.3  EPN dealing with fire ................................................. 116
B.4  EPN dealing with firemain ............................................. 117
B.5  ANN 479-2-60 Results .................................................. 124
B.6  ANN 479-100-60 Results ................................................ 126
B.7  ANN 479-200-60 Results ................................................ 128
B.8  ANN 479-500-60 Results ................................................ 130
B.9  ANN 479-100-60 Results ................................................ 132
B.10 ANN 479-100-60 Results ................................................ 134
B.11 Experiments with Kohonen Maps ......................................... 137
B.12 Experiments with C5.0 ................................................. 138
B.13 ANNs vs. K-maps vs. C5.0 .............................................. 147
C.1  Minerva-5 Explanatory GUI ............................................. 154
C.2  Minerva-5 Advisory GUI ................................................ 157
C.3  Minerva-5 Critiquing GUI .............................................. 158

List of Tables
2.1  Static vs. Dynamic Domains .............................................. 5
3.1  Minerva-4/5 vs. SWOS graduates ......................................... 68
3.2  Minerva-4/5 Average Cycle Time ......................................... 69
B.1  Compartment Status Encoding Scheme .................................... 119
B.2  Multi-layer Perceptrons as Board Evaluator ............................ 122
B.3  ANN 479-2-60 Results .................................................. 123
B.4  ANN 479-100-60 Results ................................................ 125
B.5  ANN 479-200-60 Results ................................................ 127
B.6  ANN 479-500-60 Results ................................................ 129
B.7  ANN 479-100-60 Results ................................................ 131
B.8  ANN 479-100-60 Results ................................................ 133
B.9  Experiments with Kohonen Maps ......................................... 136
B.10 Experiments with C5.0 ................................................. 139
B.11 Degree of Closeness Function .......................................... 152
D.1  Minerva-3 Blackboard Statistics (Part 1) .............................. 161
D.2  Minerva-3 Blackboard Statistics (Part 2) .............................. 162
D.3  Minerva-4 Blackboard Statistics ....................................... 163
D.4  Minerva-3,4,5 Blackboard Statistics Comparison ........................ 163
D.5  DC Scenarios run with Minerva-4 (Part 1) .............................. 165
D.6  DC Scenarios run with Minerva-4 (Part 2) .............................. 166
D.7  DC Scenarios run with Minerva-4 (Part 3) .............................. 167
D.8  DC Scenarios run with Minerva-4 (Part 4) .............................. 168
D.9  DC Scenarios run with Minerva-5 (Part 1) .............................. 169
D.10 DC Scenarios run with Minerva-5 (Part 2) .............................. 170
D.11 DC Scenarios run with Minerva-5 (Part 3) .............................. 171
D.12 DC Scenarios run with Minerva-5 (Part 4) .............................. 172
D.13 DC Scenarios run by SWOS Students (Part 1) ............................ 173
D.14 DC Scenarios run by SWOS Students (Part 2) ............................ 174
D.15 DC Scenarios run by SWOS Students (Part 3) ............................ 175

Chapter 1
Introduction
This thesis is organized as follows:
1. Chapter 2, p.2 will present a formalization of the dynamic problem-solving, advising,
and critiquing tasks. We will also consider some challenges related to those tasks.
2. Chapter 3, p.11 will go into the philosophy, design, analysis, and implementation of
our approach. We will also introduce the test-bed domain of Damage Control aboard
naval vessels.
3. Chapter 4, p.73 will present some widely known past projects and discuss their
strengths and weaknesses.
4. Chapter 5, p.81 will summarize the theoretical and application sides of this project's
contributions, conclude the main part of the thesis, and present some future research
directions.
5. Appendix A, p.86 will provide some additional information on the Navy Damage
Control domain. Finally, Appendices B, p.89; C, p.153; and D, p.159 contain certain
details of the implementation as well as the experimental data collected.

Chapter 2
The Tasks of Dynamic
Problem-Solving, Advising, and
Critiquing
This chapter will introduce the tasks of dynamic control, advising, and critiquing. To
get a better handle on them, we will start with "classical" static diagnosis domains and
then move on to dynamic domains. This transition makes it easier to appreciate the
complexity of the latter domains.

2.1 Static Diagnosis Domains


Historically, medical diagnosis domains have served as a testbed for expert systems.

2.1.1 Tasks: Diagnosis


The task of medical diagnosis can be summarized as producing an accurate diagnosis most
efficiently. In other words, given a patient X specified by a set of all possible
parameters P(X) = {P_1(X), ..., P_n(X)}¹, we would like the expert system to produce a
set of m plausible diagnoses D(X) = {D_1(X), ..., D_m(X)}. The patient's age, the
results of an x-ray, and gender are examples of parameters from P. During a consultation
C(X) an expert system can request the values of k parameters
P_r(C(X), X) = {P_{C(X)_1}(X), ..., P_{C(X)_k}(X)} ⊆ P. Every parameter P ∈ P(X) has a
cost γ(P, X) associated with it. For example, performing a blood test is more time and
resource consuming than asking the patient's age. The diagnoses produced can be
evaluated by a panel of domain experts and assigned degrees of quality ρ(D_i(X), X).
Given the notation, we can formalize the task of diagnosing patient X as maximizing
the diagnosis quality²:

    A(D(X), X) = max_{d ∈ D(X)} ρ(d, X)

while minimizing the total consultation cost:

    Γ(C(X), X) = Σ_{P ∈ P_r(C(X), X)} γ(P, X).

The score of a diagnosis expert system on scenario X could be defined as a combination
of A(D(X), X) and Γ(C(X), X). One possible formalization is:

    σ(X) = A(D(X), X) / (1 + Γ(C(X), X)).

To define the overall score (quality) of a diagnosis expert system, we can run the
system on a control set of cases C and then somehow combine (e.g., add together) the
scores:

    σ̄ = Σ_{X ∈ C} σ(X).

¹ Some parameters might not be defined for some patients (e.g., the pregnancy duration
parameter doesn't apply to a male patient).
² Here we obviously consider the quality of the best produced diagnosis. Alternatively,
we can consider the overall quality of the produced diagnoses, i.e.,
A(D(X), X) = Σ_{d ∈ D(X)} ρ(d, X).
2.1.2 Challenges
The following aspects of the medical diagnosis domain make the development of a good
expert system challenging:

1. High-quality diagnoses are often hard to produce, so in reality the overall score is
likely to be bounded.
2. The knowledge is often hard to formalize. Human expertise involves intuition that
usually takes years of experience to emerge.
3. The knowledge is also hard to acquire. Human experts often have a hard time
articulating why they decide to consider one or another disease aspect. Therefore
the system's ability to learn (in an unsupervised or a supervised fashion) is a great
asset.
4. The knowledge has to be easily extendable when a new disease or a new piece of
diagnosis expertise comes in.
5. The system has to be able to provide an explanation/justification of its diagnoses
or even of the queries it issues. This is especially important for collaborative
human-computer medical diagnosis expert systems.
6. Lastly, if we want to be able to use the system for full-scale student training, the
system has to be able to critique a student performing a scenario. The critiquing
capability brings up a whole set of challenges such as natural language generation,
student plan recognition, etc.

The challenge set described above is by no means complete, but even those challenges
have had a severe impact on medical diagnosis expert system development, as we will
see below on the examples of some well-known systems (see sections 4.1.1, p.73 and
4.1.2, p.74).

Static Diagnosis Domains                      | Dynamic Problem-Solving Domains
----------------------------------------------+------------------------------------------------
The state of the world doesn't change         | The state of the world changes significantly
considerably during the problem-solving       | in real time
process                                       |
New data come only when the expert requests   | New data come at arbitrary moments of time
them                                          |
Once a finding is asserted, it holds till     | Findings are subject to changes, noise, and
the end of the consultation                   | errors
There is little or no need to deal with       | Temporal constraints play a major role
temporal knowledge                            |
A problem-solving session (consultation) has  | There is no a priori set limit on the
a clearly defined beginning and an end        | problem-solving session duration

Table 2.1: Static vs. Dynamic Domains
2.2 Dynamic Domains
2.2.1 Tasks: Problem-Solving/Control
The task of dynamic problem-solving (or control) is often more involved than diagnosis
for a number of reasons. Some of them are caused by the presence of the temporal factor
and everything related to it. Table 2.1, p.5 shows some of those differences.
Other challenges result from the fact that problem-solving often includes diagnosis as
a subcomponent. In other words, sophisticated control often involves diagnosing the
situation and then figuring out the appropriate response (the therapy). Given a scenario
X, the state of the world can be specified by P(X, t) = {P_1(X, t), ..., P_n(X, t)}³,
where t is the time. The control expert system queries parameters {P_{C(X)}(X, t)}. In
addition to issuing queries, it can also order some actions {A(X, t)} to be taken. Given
a scenario X, querying a parameter P at time t has a cost of γ(P, X, t), while
performing an action A has a cost of γ(A, X, t)⁴. Let a real function Q(X, t) evaluate
the state of the world at time t and be the more negative the worse the situation is and
the more positive the better the situation is. In the domain of airplane autopilots, Q
might give us, say, −1000 if the plane is just about to crash and +1000 if everything is
perfect. Given a scenario X beginning at time t₁ and ending at time t₂, we can define
the cost of the scenario:

    Γ(X) = Σ_{t₁ ≤ t ≤ t₂} ( γ(A, X, t) + γ(P, X, t) )

and the performance level of the system:

    A(X) = Σ_{t=t₁}^{t₂} Q(X, t).

Analogously, the score of a control expert system on scenario X could be defined as a
combination of A(X) and Γ(X). One possible formalization is:

    σ(X) = A(X) / (1 + Γ(X)).

To define the overall score (quality) of a control expert system, we can run the system
on a control set of cases C and then somehow combine (e.g., add together) the scores:

    σ̄ = Σ_{X ∈ C} σ(X).

³ Again, some of the parameters might not always be defined on all X.
⁴ The split of the costs is rather a convention, since in reality some parameters have
associated actions to take, and that's where most of the cost comes from.
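The dynamic scoring scheme above can be sketched analogously; the per-tick state evaluation Q plays the role the diagnosis quality played in the static case. All names and numbers below are illustrative assumptions, not from the thesis.

```python
# Sketch of the dynamic problem-solving scoring scheme (section 2.2.1).
# events: queries issued and actions ordered during the scenario, each with its cost;
# Q: state evaluation per tick (negative = bad, positive = good). Names are hypothetical.

def scenario_score(events, Q):
    """events: list of (t, kind, cost) tuples; Q: dict mapping tick t to Q(X, t)."""
    total_cost = sum(cost for _, _, cost in events)  # Gamma(X): summed query/action costs
    performance = sum(Q[t] for t in Q)               # A(X): sum of Q(X, t) over t1..t2
    return performance / (1 + total_cost)            # sigma(X)

# Hypothetical 3-tick scenario: one query, one action, state recovering over time.
events = [(0, "query", 1.0), (1, "action", 3.0)]
Q = {0: -100.0, 1: 0.0, 2: 50.0}
print(scenario_score(events, Q))  # (-100 + 0 + 50) / (1 + 4) = -10.0
```

A negative score here simply reflects that the state evaluation Q stayed net-negative over the scenario, which the static formalization could not express.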

2.2.2 Challenges
In a realistic real-time control domain most challenges seem to be caused by the following:
1. Multiple concurrent interacting events to address
2. Limited time available for reaction (time pressure)
3. High information rate
4. Limited resources
5. Subtask interaction
To instantiate these challenges, we will consider a concrete dynamic control domain:
Damage Control on Navy battleships (the domain of Minerva-DCA, described in
section 3.8, p.58). In a combat situation many things can go wrong on a ship: a missile
hit is likely to cause multiple and simultaneous fires, ruptures, equipment failures,
etc. Most of those crises have to be addressed promptly. For example, fire boundaries
have to be set quickly, or otherwise the fire will spread. Thus the Damage Control
Assistant (DCA) works under time pressure and has only very limited crisis management
resources at his/her disposal. And lastly, the DCA's actions interact. A good example
here is that the DCA has to be quite careful about shutting firemain cut-off valves,
since doing so might prevent water flow to the firefighters and therefore render their
efforts ineffective. The challenges mentioned here are significant even for well-trained
Navy personnel with years of experience. Thus an expert system which could handle the
crises on its own (an artificial DCA), or at least be a decision-making aid, would be of
great help to the Navy ([9], [8]).

2.2.3 Tasks: Advising and Critiquing


The tasks of advising/critiquing, and in particular dynamic advising/critiquing, could
be formulated and formalized in many ways. One possible formalization is given below.
Assume we have a subject H (usually a human) who is participating in a dynamic
real-time problem-solving session C(X) given a scenario X (see section 2.2.1, p.5 for
the applicable formalism).
Then advice at time t could be represented as:

    S(t, P, P_X, A) = ⟨P^{+|→}, P^{−|→}, A^{+|→}, A^{−|→}⟩

where the inputs are:

1. P = {P_i(t)}, the set of all parameters available so far;
2. P_X = {P_{C(X)}(X, t)}, the set of all parameters requested by H so far;
3. A = {A(X, t)}, the set of all actions already taken by H;

and the outputs are:

1. P^{+|→} = {P_j(t)}, the set of parameters to request in the future;
2. P^{−|→} = {P_k(t)}, the set of parameters not to request in the future;
3. A^{+|→} = {A_m(t)}, the set of actions to take in the future;
4. A^{−|→} = {A_q(t)}, the set of actions not to take in the future.

In essence, the advice function S takes all parameters, the parameters requested by the
subject, and the actions taken by the subject, and returns the parameters and actions
to request/take and not to request/take in the future.
The critiquing function could be defined similarly, with the difference being the time
scope. While advice addresses the future (which we can still change), a critique
addresses the past, which is already history. Formally, a critique C at time t could be
represented as:

    C(t, P, P_X, A) = ⟨P^{+→|}, P^{−→|}, A^{+→|}, A^{−→|}⟩

where the inputs are:

1. P = {P_i(t)}, the set of all parameters available so far;
2. P_X = {P_{C(X)}(X, t)}, the set of all parameters requested by H so far;
3. A = {A(X, t)}, the set of all actions already taken by H;

and the outputs are:

1. P^{+→|} = {P_j(t)}, the set of parameters that should have been requested but weren't;
2. P^{−→|} = {P_k(t)}, the set of parameters that should not have been requested but were;
3. A^{+→|} = {A_m(t)}, the set of actions that should have been taken but weren't;
4. A^{−→|} = {A_q(t)}, the set of actions that should not have been taken but were.

So, in essence, the critiquing function C takes all parameters, the parameters
requested by the subject, and the actions taken by the subject, and returns the
difference between the "optimal" set of parameters and actions and the actual one.
The severity of a critique could be characterized by the total cost (as γ defines it)
of the deviation. Formally:

    C_Γ(t, P, P_X, A) = Σ_{P ∈ P^{+→|}} γ(P) + Σ_{P ∈ P^{−→|}} γ(P)
                      + Σ_{A ∈ A^{+→|}} γ(A) + Σ_{A ∈ A^{−→|}} γ(A).

The quality of a critique could be defined as the difference between the corrected and
the actual performance levels (as A defines them).
The quality of advice could be defined as the increase in the performance level should
the suggestions be implemented.
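The critique formalization above amounts to set differences between an "optimal" trace and the subject's actual trace, with severity as the gamma-cost of the deviation. A minimal sketch, with all names and sample values being illustrative assumptions rather than thesis code:

```python
# Sketch of the critique outputs and severity (section 2.2.3).
# A critique is the four-way set difference between optimal and actual
# parameter requests / actions; severity sums gamma over the deviation.

def critique(optimal_params, actual_params, optimal_actions, actual_actions):
    """Return the four output sets (P+, P-, A+, A-) of the critique function."""
    return (optimal_params - actual_params,    # should have been requested but weren't
            actual_params - optimal_params,    # shouldn't have been requested but were
            optimal_actions - actual_actions,  # should have been taken but weren't
            actual_actions - optimal_actions)  # shouldn't have been taken but were

def severity(crit, gamma):
    """Total gamma-cost of the deviation, summed over all four output sets."""
    return sum(gamma[x] for part in crit for x in part)

# Hypothetical Damage Control step: the subject closed a valve instead of
# checking a smoke report and setting a fire boundary.
gamma = {"smoke_report": 1.0, "set_fire_boundary": 2.0, "close_valve": 2.0}
crit = critique({"smoke_report"}, set(), {"set_fire_boundary"}, {"close_valve"})
print(severity(crit, gamma))  # 1.0 + 2.0 + 2.0 = 5.0
```

The advice function would be computed the same way, with the "optimal" sets drawn from the remaining time window rather than from the elapsed one.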

2.2.4 Challenges
A number of issues make a direct implementation of the introduced definitions
complicated. The issues and the corresponding complications include:

1. Often there are many parallel solution paths. So, while a critique could be severe,
its quality could be low.
2. The definitions introduced assume that we know the "optimal" solution. This is often
infeasible to find, especially in real time.
3. The definitions assume that the subject is somewhere reasonably close to the
"optimal" solution path. Otherwise the critique/advice would be totally inapplicable.
4. In practice we would probably consider a certain time window for both the future and
the past. The choice of the window width is an open issue.

The list above is by no means complete, but even those challenges have often forced the
development of dedicated expert systems for the critiquing and advising tasks. Sections
3.3.3, p.27 and 3.3.4, p.30 will show how the proposed architecture of Minerva can
handle both at a relatively low extra cost.

Chapter 3
Proposed Approach
3.1 Introduction
3.1.1 Objectives
The previous sections were intended to specify the domains we are looking at and to
give the reader an idea of their complexity. This project's objectives are to develop a
single architecture that would be:
1. applicable in both static diagnosis and dynamic problem-solving domains;
2. domain-independent otherwise;
3. capable of problem-solving;
4. capable of explaining its actions;
5. capable of evaluating another subject's problem-solving performance;
6. capable of critiquing another subject doing problem-solving.
The architecture presented is called Minerva-5. We will use Minerva-DCA to refer to
Minerva-5 applied in the Navy DC domain (see section 3.8, p.58).

3.1.2 Philosophy of the Approach
A system aimed at such goals would have to deal with di erent kinds of knowledge. In
particular the following kinds of knowledge are of a primary concern:
1. Domain Knowledge to reason about the domain events;
2. Strategy Knowledge to guide the reasoning about the domain events;
3. Scheduling Knowledge to schedule the system's activities eciently.
Those three types of knowledge are well known and have been used in a number of
systems (e.g. [18], [11], [14], [20]). We are extending this concept by employing pre-
dictive qualitative simulation and state evaluation knowledge as parts of the scheduling
knowledge. This approach is not a quick hack to boost the performance but rather an
extension based on certain philosophical ideas as the next paragraph describes. Also we
would like to note that re ning a single knowledge layer has been traditional in Minerva
family development (see section 4.2.2, p.76 for the Minerva family history).
It has been observed that the combinatorial computational power of humans is in-
ferior to that of modern computers, yet human experts exhibit superior performance
in many real-world domains, including the ones we have discussed. In our opinion, the
impressive performance of human problem-solving comes from the collaboration of the
analytical ("conscious") and the non-analytical ("subconscious") sides of the human mind.
The analytical side appears somewhat similar to the algorithmic methods we use
in Artificial Intelligence problem-solvers (e.g. theorem proving, rule-based classification,
minimax-style search, etc.). Unfortunately, the other, "subconscious" side is still
poorly understood and thus leaves plenty of room for hypotheses. Our second hypothesis
is that humans possess certain "subconscious" mechanisms that allow them to predict
the environment at a qualitative level very rapidly and thus facilitate intuition, or the
"gut feeling". Hypothesizing further, we note that the "subconscious" prediction mod-
ule might be implemented as some kind of specialized brain "hardware" capturing the

key properties of the environment. Under those assumptions, human reasoning becomes
similar to computation with an oracle, familiar from the recursion-theory community ([4, 1]).
Naturally, anything computed on a Turing machine has to be algorithmic, and
thus we are looking for algorithmic approximations of human "subconsciousness". Our
proposal is to use Extended Petri Nets for rapid qualitative prediction and a classifier
for state evaluation. Various classifiers could be used (e.g. artificial neural networks,
decision trees, etc.), as we discuss in section 3.2.5, p.23.

3.1.3 Blackboard as an Integrating Framework


The "conscious" and "subconscious" parts have to be interfaced to facilitate their
cooperation. We propose to use the concept of the blackboard ([21]) as an integrating
framework. In a blackboard architecture we typically have a common knowledge
repository (the blackboard) and a number of agents (knowledge sources, or KSs) accessing it.
Reasoning with a blackboard resembles a group of experts working on a problem, with
the blackboard containing the initial data and the solution being built. While simple
in principle, a blackboard setup brings a number of useful properties:
1. Different Reasoning Paradigms can coexist, since knowledge sources can be
treated as black boxes following the appropriate blackboard interface protocol.
2. Opportunistic Reasoning obviates the need to precompute/preplan the entire
problem-solving path. The solution is built dynamically by the opportunistically
participating knowledge sources. This often makes the system more flexible and
thus widens its range of applicability.
3. Multiple Levels of Abstraction allow for more efficient knowledge representation and
reasoning (e.g. see section 4.2.3, p.78).
4. Parallel and Distributed Processing is possible since, just as in a group of human
experts, the knowledge sources can often work in parallel.

5. Multiple Solution Paths can be generated by having knowledge sources of different
natures and biases. This is helpful for critiquing and search-space exploration.
6. Uncertainty Reasoning and Voting can be supported by combining the confidence
factors of a datum produced by different knowledge sources.
The paradigm has been tested and proven useful by a number of researchers ([16],
[23], [18], [11]).
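As a minimal sketch of this pattern (with hypothetical names and a made-up numeric confidence scale, not Minerva's actual interfaces), a blackboard cycle over pluggable knowledge sources might look like:

```python
# Minimal blackboard sketch: a shared store plus knowledge sources that
# opportunistically post new data when their trigger condition matches.
class Blackboard:
    def __init__(self):
        self.data = {}                    # datum -> confidence factor

    def post(self, datum, cf):
        # keep the highest confidence seen for a datum
        self.data[datum] = max(cf, self.data.get(datum, 0))

class KnowledgeSource:
    def __init__(self, trigger, conclusion, cf):
        self.trigger, self.conclusion, self.cf = trigger, conclusion, cf

    def run(self, bb):
        # fire opportunistically: post the conclusion if the trigger is known
        if self.trigger in bb.data and self.conclusion not in bb.data:
            bb.post(self.conclusion, self.cf)
            return True
        return False

bb = Blackboard()
bb.post("alarm", 800)
sources = [KnowledgeSource("alarm", "fire", 666),
           KnowledgeSource("fire", "dispatch_team", 700)]
# cycle until quiescence, letting any source contribute in any order
while any(ks.run(bb) for ks in sources):
    pass
```

Note how the solution (`dispatch_team`) emerges from independent sources reacting to each other's postings rather than from a precomputed plan.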

3.2 Overall Design


Figure 3.1, p.15 presents the overall design at a high level.
The design is centered on a blackboard which reflects the state of the world and the
system's internal problem-solving state. The blackboard is accessed by two of Minerva's
modules (deliberation and scheduling) and an interface to the environment. The interface
brings new domain-level findings to the blackboard (e.g. "fire in a compartment",
"engineering casualties", etc.) and takes Minerva's domain-level actions for execution
(e.g. "order firefighters to fight the fire", "order permission to flood a compartment",
etc.). The deliberation module (a descendant of the Minerva-4 deliberation module
([20])) facilitates reasoning at the domain and strategy levels by opportunistically
invoking rule-based domain and strategy knowledge sources. The scheduling module
applies the predictor and evaluator submodules to rank the suggested actions and come
up with a good schedule of action applications.

3.2.1 Deliberation Module: Domain Knowledge Sources


Rule-based domain knowledge sources address the areas of the domain with a strong
domain theory. Each KS contains several rules related to a particular topic (e.g.
handling small fires). Minerva-5 uses Horn-clause-style rules with MYCIN-style confidence
factors. In other words, each rule has one or more preconditions and a single conclusion.
Each precondition is specified either as a datum (a finding or a hypothesis) with

[Figure: block diagram linking the Deliberation Module (rule-based domain KSs with
Domain Knowledge; rule-based strategy KSs with Strategy Knowledge), the Blackboard
(findings, domain hypotheses, goals, suggested actions, predicted world states, and
evaluations of predicted world states), the Scheduling Module (Petri Nets Prediction
Module, State Evaluation Module, Scheduling Knowledge), and the Environment.]

Figure 3.1: Minerva-5 Overall Design

a required confidence factor or as a general-form predicate to be satisfied. The conclusion
is also a datum (usually a hypothesis) with its own confidence factor. Both the
preconditions and the conclusion might have parameters only partially specified in the rule.
In such a case Minerva-5 tries to instantiate the unspecified parameters from the specified ones.
For example (figure 3.2, p.16): finding alarm can have a parameter Where. Then we can
have a rule stating that if finding [alarm,fire,Where] is known, then assert hypothesis
[fire,Where]. Given finding [alarm,fire,3-370-0-E], the parameter Where in the
hypothesis will be unified with the parameter Where in the finding and thus instantiated to
3-370-0-E. The hypothesis will then read [fire,3-370-0-E].

ccf(r1012,1,1,[alarm,fire,Where,Time],800,[fire,Where,FireClass,discovered,Time],666,[]).
ccb(r1012,1,1,[alarm,fire,Where,TimeAlarm],800,[fire,Where,FireClass,Status,Time],666).

Figure 3.2: A Minerva-5 Domain Rule
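The parameter instantiation just described can be sketched as a small unification step. The Python below is purely illustrative (Minerva's rules are Prolog, where unification is built in); it borrows the Prolog convention that capitalized names are variables:

```python
def unify(pattern, fact, bindings=None):
    # Unify a rule pattern like ['alarm', 'fire', 'Where'] with a ground
    # fact like ['alarm', 'fire', '3-370-0-E'].  Capitalized atoms are
    # variables (Prolog convention); returns a bindings dict or None.
    bindings = dict(bindings or {})
    if len(pattern) != len(fact):
        return None
    for p, f in zip(pattern, fact):
        if p[:1].isupper():                 # a variable
            if p in bindings and bindings[p] != f:
                return None
            bindings[p] = f
        elif p != f:                        # atoms must match exactly
            return None
    return bindings

def instantiate(pattern, bindings):
    # fill a partially specified datum with the bindings found so far
    return [bindings.get(p, p) for p in pattern]

b = unify(["alarm", "fire", "Where"], ["alarm", "fire", "3-370-0-E"])
hypothesis = instantiate(["fire", "Where"], b)
```

Here `hypothesis` becomes `['fire', '3-370-0-E']`, mirroring the example in the text.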

In general, all rules could be used opportunistically for both forward and backward chaining.
However, certain rules should be used exclusively for forward or backward chaining. The
purpose of domain rules is to traverse the domain graph (section B.1.3, p.96). In other
words, a forward application of a rule moves us from a certain set of data (the rule's
preconditions) to a different datum (the rule's conclusion). Likewise, a backward application
of a rule moves us from some preconditions or the conclusion to other preconditions of
the rule. A forward application is called rule firing (section 3.4.2, p.31) and results in
asserting the rule's conclusion (via an action-level strategy goal conclude(Hypothesis)).
For efficiency purposes (see section 3.3.1.1, p.24), it is beneficial to keep the domain
blackboard size down and remove obsolete/outdated findings and hypotheses. Minerva-5
has a number of removal rules that serve this purpose.
Figure 3.3, p.17 presents the format used. A mathematical formalization of
Minerva's domain layer can be found in section 3.4.2, p.31, while the actual code is listed
in section B.1, p.89.

For forward chaining:
ccf(RuleID,Clause1,ReqCF,Conclusion,ConclusionCF,UnifList) :- p1(X1,...,XM)
...
ccf(RuleID,ClauseN,ReqCF,Conclusion,ConclusionCF,UnifList) :- pN(X1,...,XM)

For backward chaining:


ccb(RuleID,Clause1,ReqCF,Conclusion,ConclusionCF)
...
ccb(RuleID,ClauseN,ReqCF,Conclusion,ConclusionCF)

For data removal:


ccm(RuleID,Clause1,ReqCF,Conclusion,remove,UnifList) :- p1(X1,...,XM)
...
ccm(RuleID,ClauseN,ReqCF,Conclusion,remove,UnifList) :- pN(X1,...,XM)

The rules are written in Prolog format, with each predicate representing one clause. There are
three types of rules: ccf rules are used for forward chaining, ccb rules are used for backward
chaining, and ccm rules are used to remove obsolete data from the blackboard to keep its
size down. The arguments of the rules are as follows:
Clause is a datum which has to be known with a confidence factor of at least ReqCF in order
for this precondition to be satisfied.
Conclusion is a datum which will be concluded with ConclusionCF should the rule fire.
UnifList is a list of variables to unify across Clause1,...,ClauseN, and Conclusion.
pI(X1,...,XM) is a general-form predicate to satisfy.

Figure 3.3: Minerva-5 Domain Rule Format
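As an illustration of the firing condition (not the actual Prolog engine; the data and confidence values below are made up), a forward-chaining pass over ccf-style rules might be sketched as:

```python
# Sketch of the ccf firing condition: a rule fires when every premise is
# on the blackboard with at least the required confidence factor.
def fire_forward(rules, blackboard):
    """One pass; returns the list of (conclusion, cf) newly asserted."""
    fired = []
    for premises, conclusion, concl_cf in rules:
        if conclusion in blackboard:
            continue                      # already known
        if all(blackboard.get(d, 0) >= req for d, req in premises):
            blackboard[conclusion] = concl_cf
            fired.append((conclusion, concl_cf))
    return fired

bb = {"alarm(fire,3-370-0-E)": 800}
rules = [
    # an alarm known with CF >= 800 concludes a discovered fire with CF 666
    ([("alarm(fire,3-370-0-E)", 800)], "fire(3-370-0-E,discovered)", 666),
    ([("fire(3-370-0-E,discovered)", 600)], "goal(fight_fire)", 900),
]
while fire_forward(rules, bb):
    pass                                  # chain to quiescence
```

The second rule fires only because the first rule's conclusion (CF 666) meets its premise threshold (600), showing how confidence factors gate the chaining.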

3.2.2 Deliberation Module: Strategy Knowledge Sources
One of the distinctive features of Minerva-5 is its declaratively and domain-independently
represented strategy knowledge. The strategy layer of Minerva-5's knowledge is used to
drive the deliberation process. Depending on the new data and other blackboard contents,
Minerva can perform either backward or forward chaining. This makes Minerva-5 more
flexible in its reasoning process than a fixed-reasoning-strategy system
(such as MYCIN). As is well known ([6]), a fixed reasoning strategy is not beneficial in
domains where it is unclear what kind of situation we will be dealing with. Specifically,
backward chaining is inefficient in a fan-in situation, while forward chaining
doesn't deal well with fan-out.
While the domain knowledge sources reason over the lexicon of domain findings,
hypotheses, and actions, the rule-based strategy knowledge sources reason over a lexicon
of domain-independent goals. The goals range from the top-level goals:
1. process_hypothesis(Hypothesis)

2. process_finding(Finding)

3. explore_hypothesis(Hypothesis)

4. remove_datum(Datum)

to intermediate-level goals:
1. applyrule_backward(Rule)

2. applyrule_forward(Rule)

3. findout(Datum)

4. pursue_hypothesis(Hypothesis)

5. test_hypothesis(Hypothesis)

and, finally, to the bottom-level goals:

1. perform(Action)

2. lookup(Finding)

3. remove(Datum)

4. conclude(Hypothesis)

The goals are used in building the strategy networks. A strategy network is a directed
acyclic graph consisting of strategy chains. Each chain starts with a top-level goal, goes
through intermediate-level goals, and ends with a bottom-level goal that
carries a domain-level action to take.
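A strategy chain of this kind can be pictured as a path of goals connected by operators. The sketch below is hypothetical Python (Minerva represents chains as Prolog terms); it follows a single chain from a top-level goal down to a bottom-level goal:

```python
# A strategy chain: top-level goal -> intermediate goals -> bottom-level
# action, built by repeatedly applying strategy operators (the edges).
operators = {  # operator id -> (higher-level goal, lower-level goal)
    "pf1": ("process_finding(fire-alarm)", "applyrule_forward(r1012)"),
    "pf5": ("applyrule_forward(r1012)", "conclude(fire)"),
}

def build_chain(top_goal, operators):
    """Follow operators depth-first from a top-level goal to an action."""
    chain, goal = [top_goal], top_goal
    while True:
        nxt = [lo for (hi, lo) in operators.values() if hi == goal]
        if not nxt:
            return chain                  # bottom-level goal reached
        goal = nxt[0]                     # depth-first: take one branch
        chain.append(goal)

chain = build_chain("process_finding(fire-alarm)", operators)
```

The operator ids and goals are invented for the example; in the real network several chains branch and share nodes, forming the DAG described above.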
During the deliberation process, the strategy knowledge source rules are triggered by
the important findings and active hypotheses on the blackboard. The triggering data is
considered within the context of top-level goals. For example, a fire alarm
finding(fire-alarm) would lead to the top-level goal process_finding(fire-alarm). In
turn, this goal will trigger other strategy rules and thus entail intermediate-level goals.
This process eventually results in the strategy chain network (an actual strategy chain
is shown in figure 3.4, p.20). Each edge in the network is labeled with the corresponding
strategy operator identifier (e.g. pf5). Figure 3.5, p.20 shows an example of an actual
strategy rule. Strategy operators are also implemented as Prolog clauses. The first
argument of mr is the strategy operator identifier (in this case pf1). The second argument
is the higher-level goal the operator applies to. Finally, the third argument is the lower-
level goal. So, in a way, a strategy operator is nothing but a transition from a higher-level
to a lower-level goal (see section 3.4.3, p.34 for details). The preconditions of mr are various
predicates that have to hold in order for the strategy operator to fire. In the example
shown in figure 3.5, p.20, we should apply Rule in a forward manner if:
1. we are trying to process finding F;
2. F is important (a "red-flag");
3. there is a forward (ccf) domain rule Rule such that F is one of its preconditions (with
a required confidence factor of CF) and Hyp is its conclusion;

4. and, finally, F is known to hold with a confidence factor of at least CF.

Meaning: "Order Repair-3 to investigate compartment 01-300-2 since there is a
suspected fire (domain rule r1010)."

Figure 3.4: Minerva-DCA Strategy Chain

A mathematical formalization of the strategy level is given in section 3.4.3, p.34, with
some complexity-analysis results in section 3.5, p.45. The actual strategy-layer code is
listed in section B.2, p.96.

mr(pf1,process_finding(F),applyrule_forward(Rule,Hyp)) :-
finding(F,_),
red-flag(F),
ccf(Rule,N,M,F,CF,Hyp,CFC,UL),
satisfied(F,CF).

Figure 3.5: Minerva-5 Strategy Rule

3.2.3 Scheduling Module: Design Ideas
It is worth mentioning that scheduling in a dynamic domain is one of the major issues
to consider (sections 2.2.1, p.5 and 2.2.2, p.6). Therefore, an efficient scheduler is the key
to successful problem-solving, critiquing, and advising in dynamic domains such as the
damage control domain we are looking at.
At every cycle, Minerva generates a number of feasible actions. Given limited
resources, task interaction, and potentially contradictory actions, we need to schedule the
actions efficiently. We tackle this task by computing the actions' utilities and executing
the top-ranked actions first. The problem of scheduling thus becomes the
problem of utility assignment.
Roughly speaking (see section 3.2.6, p.24 for details), we assign an action's utility by
predicting how much better or worse the environment will become should we take the
action. Thus, our scheduler consists of an extended Petri net prediction module
(section 3.2.4, p.21), a static board evaluator (section 3.2.5, p.23), and the utility assignment
module (section 3.2.6, p.24).

3.2.4 Scheduling Module: Predictor


There are many ways to predict the consequences of taking an action. One can run
a numerical simulation, use rule-based heuristics, etc. To choose the most appropriate
method we should look into the requirements and preferences of the task:
High speed. In a dynamic control domain, many actions can be generated at every cycle
by the deliberation module. To be maximally effective they have to be scheduled
promptly. So it is likely that Minerva's predictor will need to make on the order of
hundreds of predictions per second.
Sublinear performance. It would be beneficial if the predictor's running time were
sublinear (ideally, nearly constant) with respect to the prediction time interval. In
other words, spending less than twice as much running time to predict twice as
far into the future is an asset.

Qualitative data. In our scheduling approach, vast quantitative data would put an
unnecessary burden on the board evaluator. Thus qualitative (as opposed to
quantitative) predicted data is more beneficial for us.
Considering agents' actions. Not only should we predict the development of the physical
systems but, even more so, we should predict the effects of the action being scheduled
and of other agents' actions.
Modularity. To facilitate easy upgrades, refinement, and debugging, the predictor should
be modular with respect to the modeled subsystems/agents of the environment. This
way, introducing a new agent wouldn't require rebuilding everything from scratch.
Variables/Parameters. Given the typically vast number of similar agents/subsystems,
we need to be able to model them with a single predicting submodule. Thus, some
kind of variable/parameter mechanism would be highly efficient.
Temporal data. Temporal information is obviously very important. Thus we should
be able to predict not only what is going to happen but also when it is going to
happen.
With those ideas in mind we rejected numerical simulation (slow, no sublinearity,
hard to take agents' actions into account), belief networks (hard to implement variables
and support temporal data, less modular), and decision trees (for similar reasons).
Our novel solution is to use classical Petri Nets ([5]) extended with variable support and
temporal information, as section 3.4.4, p.36 describes.
Petri Nets have a number of attractive features including the following:
1. solid mathematical backing;
2. rapid and mostly sublinear performance with qualitative data;
3. temporal data support;
4. different levels of abstraction, variable/parameter support;

5. high modularity;
6. taking agents' actions into account;
7. concurrent processing;
8. ease of representation and reading.
Section 3.4.4, p.36 formally describes our Extended Petri Nets (EPN) predictor,
while section B.3, p.105 lists the actual networks we have used.
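To make the mechanism concrete before the formal treatment in section 3.4.4, here is a toy sketch (an assumed representation, not the EPN formalism itself) of a timed transition moving a token between places and widening its time interval by the transition's delay interval:

```python
# Toy timed Petri-net step: firing a transition moves a token from its
# input place to its output place and adds the transition's delay
# interval [d_min, d_max] to the token's time interval [t_b, t_e].
def fire(marking, transition):
    src, dst, d_min, d_max = transition
    if not marking.get(src):
        return False                      # not enabled: no token in src
    t_b, t_e = marking[src].pop(0)
    marking.setdefault(dst, []).append((t_b + d_min, t_e + d_max))
    return True

# fire discovered between minutes 5 and 10; spreading takes 3-7 minutes
marking = {"fire_discovered": [(5, 10)]}
spread = ("fire_discovered", "fire_spread", 3, 7)
fire(marking, spread)
```

After firing, `fire_spread` holds a token with interval (8, 17): the prediction "the fire spreads sometime between minutes 8 and 17", obtained in constant time per transition.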

3.2.5 Scheduling Module: Evaluator


The predictions generated by the prediction module are useless for scheduling until we
evaluate them in terms of how good or bad the predicted situation is. There could
be many kinds of metrics/score functions useful for this task. The following are some
parameters on which they can differ:
1. the input being a single state or a sequence of states (static vs. dynamic);
2. the output being a single value or a vector of values (single- vs. multi-value score);
3. the evaluator being hand-coded or learned.
Given the complexity of real-world domains, we decided to take the advantages offered by
machine-learning methods and learn our evaluator. To facilitate easier learning and
simpler utility computations, we used a static input and a single-value score.
The implementation specifics of the board evaluator are domain-dependent. In our
application domain (see section 3.8, p.58), we have used both decision-tree and neural-net
evaluators. The state score was based on the expected time until a major disaster: the
closer the given state is to a disaster, the lower its score. A very convenient
side of this approach is that it is relatively easy to learn the evaluator by collecting
traces of time-stamped environment states and then annotating them with the time until a
disaster. Generalizing the samples with an inductive learning method would give us the

concept of the evaluator. We have utilized various inductive learning methods, including
Multilayer Perceptrons, Kohonen Maps, and Decision Trees. Please refer to section B.4,
p.118 for details.
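The trace-annotation step can be sketched as follows. The code is illustrative: the trivial nearest-neighbour "learner" here is a stand-in for the decision-tree and neural-net evaluators actually used, and the trace data is invented:

```python
# Sketch of building evaluator training data from a scenario trace:
# annotate each time-stamped state with the time remaining until the
# disaster, so an inductive learner can generalize state -> score.
def annotate(trace, disaster_time):
    """trace: list of (timestamp, state_features); returns labeled pairs."""
    return [(features, disaster_time - t) for t, features in trace
            if t <= disaster_time]

# toy trace: (minutes elapsed, [number of fires, flooded compartments])
trace = [(0, [0, 0]), (5, [1, 0]), (9, [2, 1])]
samples = annotate(trace, disaster_time=12)

def evaluate(state, samples):
    # trivial learned evaluator: score of the nearest training state
    return min(samples,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(s[0], state)))[1]

score = evaluate([2, 1], samples)   # close to disaster -> low score
```

A state resembling the late-trace state gets a low score (little time left before the disaster), which is exactly the ordering the scheduler needs.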

3.2.6 Scheduling Module: Computing Utilities


Having predicted the state of the ship W and evaluated it as s(W), we can now combine
this information to assess the utility (i.e. "quality") of a suggested action a. To do that,
we introduce an operational utility, reflecting how much a's direct effects are worth, and
a goal utility, telling us how important a's goal is.
Both the formalization and examples are given in section 3.4.5, p.39.
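As a toy illustration (the weights, the combination rule, and the domain numbers below are invented for the example; the actual definition is the one in section 3.4.5), a utility could blend the predicted improvement with the importance of the action's goal:

```python
# Toy utility assignment: predict the world with and without the action,
# score both predicted states, and blend the improvement (operational
# utility) with the importance of the action's goal (goal utility).
def utility(action, world, predict, score, goal_importance,
            w_op=0.7, w_goal=0.3):
    # operational utility: how much better the predicted world becomes
    operational = score(predict(world, action)) - score(predict(world, None))
    return w_op * operational + w_goal * goal_importance[action]

# made-up toy domain: world state is a severity number, score = -severity,
# and fighting the fire lowers severity while filing a report does nothing
predict = lambda world, action: world - 4 if action == "fight_fire" else world
score = lambda severity: -severity
importance = {"fight_fire": 10, "file_report": 1}

u_fight = utility("fight_fire", 8, predict, score, importance)
u_report = utility("file_report", 8, predict, score, importance)
```

Ranking by these utilities puts `fight_fire` first, which is the behavior the scheduler's "execute the top-ranked action" policy relies on.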

3.3 Operation
3.3.1 Main Loop
Minerva-5 works in cycles, as shown in figure 3.6, p.25.
Each cycle consists of three stages:
1. Deliberation;
2. Scheduling (qualitative prediction, state evaluation, utility computation);
3. Execution.
A mathematical formalization of this process is given in section 3.4.6, p.41. The
following sections go into the details of each stage.
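The cycle can be summarized as a skeleton (the stage bodies below are stand-in stubs with invented data; the real stages are described in the following sections):

```python
# Skeleton of the deliberate-schedule-execute cycle (stage names from
# figure 3.6; each stage is passed in as a function so the stubs below
# can stand in for the real modules).
def minerva_cycle(blackboard, deliberate, schedule, execute):
    chains = deliberate(blackboard)           # build the strategy network
    ranked = schedule(blackboard, chains)     # predict, evaluate, rank
    execute(ranked[:1])                       # top-ranked action only
    return ranked

log = []
ranked = minerva_cycle(
    blackboard={"fire": 666},
    deliberate=lambda bb: ["process_finding(fire)"],
    schedule=lambda bb, ch: sorted([("fight_fire", 5.8), ("report", 0.3)],
                                   key=lambda a: -a[1]),
    execute=log.append,
)
```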

3.3.1.1 Deliberation
During the deliberation stage, the important data from the blackboard is used to trigger
domain rules and build a strategy network. Each of the strategy chains starts with a
top-level goal (e.g. process_hypothesis(fire)) and ends with a feasible action (e.g.
[Figure: the main loop as a data-flow diagram. Findings flow from the Environment to
the Blackboard; Deliberation produces strategy chains; Scheduling runs EPN prediction
(predicted ship states), state evaluation (ship state scores), and utilities computation
(action utilities); Execution sends the resulting orders back to the Environment.]

Figure 3.6: Minerva-5 Main Loop

perform(fight_fire)). Different networks can share nodes. Sections 3.2.1,
p.14; 3.2.2, p.18; 3.4.2, p.31; D.1, p.159; and 3.4.3, p.34 give the intuition, details,
formalization, and examples of this process.
Given the Minerva-5 strategy layer, there are two important points to this process:
1. The size of the strategy network is, at most, linear in the number of important
(red-flag) findings and hypotheses. Since the network is built in a depth-first manner,
the deliberation time is linear in the total number of findings as well. See section 3.5,
p.45 for more details and the proof.
2. While being computationally efficient (i.e. linear in the number of data), the
deliberation mechanism is a universal computing device and can emulate any Turing
machine. Section 3.6, p.53 provides the necessary details and the proof.

3.3.1.2 Scheduling: prediction, evaluation, and utility computation


As mentioned above, each strategy chain ends with a feasible action, either internal
to Minerva (e.g. conclude(hypothesis)) or domain-level (e.g. perform(action)).
While most internal actions can be executed at once, domain-level actions require
scheduling due to limited resources, different priorities, and possible inconsistencies.
Section 3.2.3, p.21 provides the overall ideas of the scheduling.
The scheduling stage has three substages:
Qualitative prediction: models the environment to predict the effects of a particular
action or the consequences of a particular finding (see section 3.2.4, p.21 for details).
State evaluation: evaluates the predicted states and assesses their severity levels (see
section 3.2.5, p.23 for details).
Utility computation: combines the outputs of both stages above and ranks the
actions by assessing their utility (see section 3.2.6, p.24 for details).

3.3.2 Problem-Solving
In problem-solving mode, at the end of each cycle, all internal and the highest-ranked
domain-level actions are executed. Executing internal actions involves asserting hypotheses,
removing obsolete data from the blackboard, etc. Executing domain-level actions involves
passing the appropriate message to the environment's actuators (either simulated or real).
It is worth noticing that domain-level actions can be inconsistent with each other (one
of the reasons being the existence of multiple solution paths). To resolve potential
conflicts, Minerva-5 executes the single top-ranked domain-level action at every cycle.

3.3.3 Advising
In advising mode, Minerva-5's external actions are not passed to the domain actuators
but are instead used for advising. Specifically, we present the $n_{do}$ top-ranked actions and
the $n_{don't}$ bottom-ranked actions to the student through the Minerva Graphical User
Interfaces (GUIs) (see section C, p.153). For each action, the following information is
available upon request:
The reasoning behind the action can be shown by displaying the appropriate strategy
chain(s) in both graphical and textual forms. The textual form involves generating
natural-language output for the action and the top nodes of the chain (the short
form) or possibly all nodes of the chain (the long form). Figures 3.7, p.28 and 3.8,
p.29 show screenshots of the actual GUI displaying a particular strategy chain
together with the corresponding short-form and long-form explanations.
The reasoning behind the action's rank can be shown by displaying the environment
states predicted by the EPN prediction module and their scores computed by the
state evaluator. Further details can be provided by tracing the evaluator's work
if it allows for that (e.g. in the case of decision trees).

Figure 3.7: Minerva-5 Advisory GUI displaying a strategy chain and corresponding short-form NL explanation
Figure 3.8: Minerva-5 Advisory GUI displaying a strategy chain and corresponding long-form NL explanation
3.3.4 Critiquing and Measuring Performance
One of Minerva-5's distinctive features is its built-in facilities for evaluating a subject's
problem-solving performance as well as critiquing the subject. In both cases the setup is as follows:
1. The subject (typically a student) solves a scenario using some kind of problem-
solving environment (either natural or simulated). In our domain we used the
DCTrain immersive multimedia environment (section 3.8.2, p.60).
2. Minerva solves the same scenario simultaneously with the subject.
3. As a result, we are presented with two streams of time-stamped actions: the
subject's and Minerva's. Critiquing and performance measurement are based on
window-matching these two streams as follows.
(a) At every Minerva cycle (at, say, time t) the critiquing module determines the
window sizes as described in section 3.4.7, p.41. The actions in those two windows
form two sets: $A_{subject}$ and $A_{Minerva-5}$.
(b) The sets are matched and the performance measure at time t is calculated as
described in section 3.4.8, p.43.
(c) Every high-ranked action that Minerva suggested but the subject didn't take¹
is an error of omission that the subject can be critiqued for.
(d) For every action a that the subject took (i.e. $a \in A_{subject}$), one of three
cases holds:
Action a is generated by Minerva and ranked high. In this case, no critique
is issued since the subject is considered to be doing well.
Action a is generated by Minerva and ranked low. In this case, we critique
the subject by explaining why a is bad (see section 3.3.3, p.27 for
the details). This case constitutes an error of commission.
¹That corresponds to the high-ranked part of $A_{Minerva-5} \setminus A_{subject}$.

Action a is not generated by Minerva. In this case, we feed it into the
Minerva scheduler and calculate a's utility u(a). If u(a) is low then we
critique the subject and provide him/her with the scheduler's reasoning². If
u(a) is high, the subject is considered to be doing fine.
A mathematical formalization of this process is given in sections 3.4.7, p.41 and 3.4.8,
p.43, while the actual Minerva-DCA implementation is given in section B.6, p.151.
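The window-matching step can be sketched as follows (simplified: a fixed toy window size and rank threshold stand in for the definitions of sections 3.4.7 and 3.4.8, and the action streams are invented):

```python
# Sketch of critiquing by window-matching two time-stamped action
# streams around time t.
def critique(subject, minerva, t, window=5.0, high_rank=0.5):
    # actions falling inside the window [t - window, t] of each stream
    A_subj = {a for ts, a in subject if t - window <= ts <= t}
    A_min = {a: r for ts, a, r in minerva if t - window <= ts <= t}
    # errors of omission: high-ranked Minerva actions the subject skipped
    omissions = [a for a, r in A_min.items()
                 if r >= high_rank and a not in A_subj]
    # errors of commission: subject actions Minerva generated but ranked low
    commissions = [a for a in A_subj
                   if a in A_min and A_min[a] < high_rank]
    return omissions, commissions

subject = [(1.0, "file_report")]                      # (time, action)
minerva = [(1.5, "fight_fire", 0.9),                  # (time, action, rank)
           (2.0, "file_report", 0.1)]
om, com = critique(subject, minerva, t=3.0)
```

Here the subject skipped the high-ranked `fight_fire` (an omission) and took the low-ranked `file_report` (a commission); the third case, a subject action Minerva never generated, would go to the scheduler for a utility check as described above.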

3.4 Mathematical Formalization


3.4.1 Notation
Notation 3.4.1 R is the set of real numbers. N is the set of natural numbers, including 0.

3.4.2 Domain Layer


3.4.2.1 Data
Definition 3.4.1 D is the set of all possible domain data; $D = F \cup H \cup A$, where F
is the set of all findings, A is the set of all domain-level actions, and H is the set of
all possible hypotheses. All three sets are disjoint. Remark: often, on a blackboard
or in a rule, a finding, action, or hypothesis is not specified completely. In other words,
the specification given will leave some parameters open. In the mathematical sense this
corresponds to specifying a subset of F, A, or H correspondingly.

Definition 3.4.2 $D_{rf} \subseteq D$ is the set of red-flag (i.e. unconditionally important) data.
$H_{rf} = D_{rf} \cap H$ and $F_{rf} = D_{rf} \cap F$.
²There is a more primitive approach to this case: instead of computing u(a) we can simply assume
that a is bad since it was not generated by Minerva.

3.4.2.2 Rules
Definition 3.4.3 $R_f$ and $R_b$ are the sets of domain rules of the following form:
$\langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, \omega \rangle$. Here $d_i \subseteq D$ are the premises³;
$\chi_i \in R$ are their required confidence factors; $C \subseteq D$ is the conclusion⁴ and $\omega \in R$ is its
confidence factor; $P_j \subseteq D^n$ are some predicates; $n \geq 1$, $m \geq 0$. A domain rule r
fires iff all its premises are asserted with at least the required confidence factors and all
the predicates hold. In other words:
$r = \langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, \omega \rangle$ fires $\iff$
$\forall (1 \leq i \leq n)\, \exists \tilde{d}_i\, [asserted(\tilde{d}_i, \tilde{\chi}_i) \;\&\; \tilde{d}_i \in d_i \;\&\; \tilde{\chi}_i \geq \chi_i]
\;\&\; \forall (1 \leq j \leq m)\, P_j(\tilde{d}_1, \ldots, \tilde{d}_n)$.
Here and below $asserted(\tilde{d}, \tilde{\chi})$ denotes that finding/hypothesis $\tilde{d}$ is asserted
on the blackboard with a confidence factor of $\tilde{\chi}$.

Definition 3.4.4 $R_{rm}$ is the set of removal rules of the following form:
$\langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, remove \rangle$. Here $d_i \subseteq D$ are the premises;
$\chi_i \in R$ are their required confidence factors; $C \subseteq D$ is the datum to remove⁵;
$P_j$ are some predicates defined on the $d_i$; $n \geq 1$, $m \geq 0$.

Definition 3.4.5 The removal rule firing mechanism is identical to that of the domain rules.

3.4.2.3 Source Relation


Definition 3.4.6 $S_{or}$ is the set of source relations of the following form:
$\langle d_1, d_2, P_1, \ldots, P_m \rangle$. Here $d_1, d_2 \subseteq D$; the $P_i$ are some predicates defined on $D^2$.

³$d_i \subseteq D$ as opposed to $d_i \in D$ because rules might have some of the data partially specified (see the
remark in definition 3.4.1, p.31).
⁴Strictly speaking, $C$ is a mapping from $(2^D)^n$ to $2^D$ because the possibly incompletely specified conclusion
$C \subseteq D$ depends on the possibly incompletely specified premises $d_i \subseteq D$. However, for the sake of readability, we
will write $C$ instead of $C(d_1, \ldots, d_n)$.
⁵Again, in reality $C$ is a function of $d_1, \ldots, d_n$.

3.4.2.4 Domain Graph
Definition 3.4.7 The domain graph is a directed graph $(V, E)$ where
$V = \{d \mid \exists (r \in R_f \cup R_b \cup R_{rm} \cup S_{or})\,[d \text{ is in } r \text{ as } C \text{ or as a } d_i]\}$
(i.e. all data subsets mentioned in domain rules and source relations) and
$E = E_f \cup E_b \cup E_s \cup E_{rm}$, as follows:

1. $E_f = \{(v_1, v_2) \mid (\exists \langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, \omega \rangle \in R_f)(\exists i \in \{1, \ldots, n\})\,[d_i \cap v_1 \neq \emptyset \;\&\; C \cap v_2 \neq \emptyset]\}$ (forward rule edges)⁶;
2. $E_b = \{(v_1, v_2) \mid (\exists \langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, \omega \rangle \in R_b)(\exists i \in \{1, \ldots, n\})\,[d_i \cap v_1 \neq \emptyset \;\&\; C \cap v_2 \neq \emptyset]\}$ (backward rule edges);
3. $E_s = \{(v_1, v_2) \mid (\exists \langle d_1, d_2, P_1, \ldots, P_m \rangle \in S_{or})\,[d_1 \cap v_1 \neq \emptyset \;\&\; d_2 \cap v_2 \neq \emptyset]\}$ (source edges);
4. $E_{rm} = \{(v_1, v_2) \mid (\exists \langle d_1, \chi_1, \ldots, d_n, \chi_n, P_1, \ldots, P_m, C, remove \rangle \in R_{rm})(\exists i \in \{1, \ldots, n\})\,[d_i \cap v_1 \neq \emptyset \;\&\; C \cap v_2 \neq \emptyset]\}$ (removal rule edges).

Definition 3.4.8 The following are the degrees of the domain graph vertices ($v \in D$):
1. $\delta^B_f(v) = |\{v_1 \mid (v, v_1) \in E_f\}|$;
2. $\delta^B_b(v) = |\{v_1 \mid (v, v_1) \in E_b\}|$;
3. $\delta^B_s(v) = |\{v_1 \mid (v, v_1) \in E_s\}|$;
4. $\delta^B_{rm}(v) = |\{v_1 \mid (v, v_1) \in E_{rm}\}|$;
5. $\delta^C_f(v) = |\{v_1 \mid (v_1, v) \in E_f\}|$;
6. $\delta^C_b(v) = |\{v_1 \mid (v_1, v) \in E_b\}|$;
7. $\delta^C_s(v) = |\{v_1 \mid (v_1, v) \in E_s\}|$;
8. $\delta^C_{rm}(v) = |\{v_1 \mid (v_1, v) \in E_{rm}\}|$.

⁶Loosely speaking, this means that $d_i$ is unifiable with $v_1$ and $C$ is unifiable with $v_2$.

The superscript B refers to outgoing edges while C refers to incoming edges. The
$\delta$'s without arguments are the maximums of the $\delta$'s with arguments over all vertices
(e.g. $\delta^B_f = \max_{v \in D} \delta^B_f(v)$). $\Delta = \max\{\delta^B_f, \delta^C_f, \delta^B_b, \delta^C_b\}$.

3.4.3 Strategy Layer


3.4.3.1 Goals
Definition 3.4.9 G is the set of all possible goals Minerva-5 can pursue. $G_\Phi$ is the
subset of action-level goals; $G_\tau$ is the subset of top-level goals; and $G_I$ is the subset of
internal (or intermediate-level) goals. $G_I, G_\Phi, G_\tau \subseteq G$, $G_\Phi \cap G_\tau = \emptyset$.

Definition 3.4.10 At every cycle i we define the blackboard $B_i = \langle F_i, H_i \rangle$ where $F_i \subseteq F$
(current findings) and $H_i \subseteq H$ (asserted hypotheses). $B_0 = \langle \emptyset, \emptyset \rangle$. $B_i$ for $i \geq 1$ is defined as
described in sections 3.3.1, p.24 and 3.4.6, p.41. $\mathcal{B} = 2^F \times 2^H$ is the set of all possible
blackboards.

Definition 3.4.11 The functions $\Lambda, \Phi, \tau: \mathcal{B} \to 2^G$ are the goal functions, with
$\Phi: \mathcal{B} \to 2^{G_\Phi}$ and $\tau: \mathcal{B} \to 2^{G_\tau}$. Given a blackboard B, $\Lambda(B)$ defines the set of goals to
pursue, $\Phi(B)$ defines the subset of feasible actions, and $\tau(B)$ defines the subset of top-level goals.

Definition 3.4.12 Feasible actions are of two types: external and internal. External
actions are passed to the environment actuators, while internal actions affect $B_{m+1}$
directly and aren't passed to the environment. Thus $\Phi(B_m) = \Phi_e(B_m) \cup \Phi_i(B_m)$. The sets
$\Phi_e(B_m)$ (external actions) and $\Phi_i(B_m)$ (internal actions) are always disjoint.

3.4.3.2 Strategy Rules


Definition 3.4.13 The set S of strategy rules (or strategy operators) is a subset of
$\{s \mid s: G \times G \to \{true, false\}\}$.

34
3.4.3.3 Action-Goal Chains
Definition 3.4.14 An action-goal (or strategy) chain for cycle i is an n-tuple $\langle g_1, \ldots, g_n \rangle$
such that:

1. $\{g_1, \ldots, g_n\} \subseteq \Lambda(B_i)$;
2. $|\{g_1, \ldots, g_n\}| = n$ (linearity condition);
3. $g_1 \in \tau(B_i)$ (top-level goal);
4. $g_n \in \Phi(B_i)$ (action);
5. $g_m \notin \Phi(B_i) \cup \tau(B_i)$ for $1 < m < n$;
6. $(\forall i = 1, \ldots, n-1)(\exists s \in S)\,[s(g_i, g_{i+1})]$.

Definition 3.4.15 Ch is the set of all possible chains. $Ch_i$ is the set of strategy chains
(or the strategy network) generated during Minerva cycle i.

Definition 3.4.16 (Building strategy networks) At every cycle i, the strategy network
$Ch_i$ is generated in a depth-first manner (e.g. see [6]) by applying the strategy rules S to
the blackboard data $F_i$ and $H_i$. The following are the properties of $Ch_i$ ensured by the algorithm:
1. Chains $c_1, c_2 \in Ch_i$ might share some nodes, but the resulting directed graph
$C_{ch_i} = (\{g_m \mid \exists c_j \in Ch_i\,[g_m \in c_j]\}, \{(g_p, g_q) \mid \exists c_j \in Ch_i\, \exists k\,[g_p = g_k \;\&\; g_q = g_{k+1} \;\&\; c_j = \langle \ldots, g_k, g_{k+1}, \ldots \rangle]\})$ has no cycles.
2. All properly formed strategy chains⁷ are generated, given the cycle data $F_i$, $H_i$ and
Minerva's strategy rules S.

⁷Except the ones that would form cycles in $C_{ch_i}$.

3.4.4 Scheduling Layer: Extended Petri Nets Prediction Module
Following the formalism from [5] we will denote a Petri net C as a tuple C = (P, T, I, O)
where P is a set of places, T is a set of transitions, I is a set of transition inputs, and O
is a set of transition outputs. A marking of Petri net C is denoted by μ. A marked Petri
net is referred to as M = (P, T, I, O, μ). Marking μ is an n-vector of numbers, n = |P|.
To extend this formalism we introduce the following:
1. The tokens now have identifiers assigned to them. Each token will be identified
using a label l ∈ 2^{O_t} × 2^{O_v} × T², where O_t, O_v represent the set O = O_t × O_v of
objects (represented as pairs (type, value)) we would like to model. In the domain
of firefighting⁸ O might include compartments C and damage control stations S.
Each token can have a list of domain-object pairs, so l is defined on 2^{O_t} × 2^{O_v}. Each
token will also have a time interval [t_b, t_e] associated with it (t_b, t_e ∈ T). The time
interval will be changed as the token proceeds through the net. Given a token x, its
identifier is given by i_O(x) (the object component) and i_T(x) (the time interval).
Intuitively, places correspond to different states of the modeled subsystem, the domain
identifiers of the tokens represent a specific subsystem, and the time intervals rep-
resent the interval when the subsystem got into the state.

Example 3.4.1 Suppose we have a place P_f representing a fire. Then the token
⟨(compartment, 3-370-0-E), [5, 10]⟩ sitting in P_f says: "With a high level
of confidence we know that compartment 3-370-0-E caught fire sometime between
minutes 5 and 10 of the scenario time".

2. The marking function μ has to be extended accordingly. Now μ : P → (O × T²)*.


3. Transitions now have a delay interval associated with them. Specifically, a transition
t ∈ T has a delay interval [Δ_min, Δ_max] where Δ_min, Δ_max ∈ T.
⁸ Below we will refer to firefighting as FF for short.

Intuitively, the EPN transitions represent the transitions between different states of
the modeled system. Transition delays reflect the time it typically takes to change
the state.
4. In addition to the classical Petri Net edges between the places and the transitions,
we now have the following new types of edges:
(a) a negation edge from place p to transition t specifies that the transition t
should not fire if there is an appropriate token in p;
(b) a double-ended edge from place p to transition t specifies that if the transition
t fires, all the appropriate tokens in p should remain there.
5. Suppose places {A_1, …, A_n} are connected to transition T via regular or double-
ended edges {(A_i, T)} and places {B_1, …, B_m} are connected to T via negation edges
{(B_j, T)} (see figure 3.9, p.38). Suppose {[a_b^j, a_e^j]} are the time intervals associated
with all tokens in places {A_i} and {[b_b^j, b_e^j]} are the time intervals associated with
all tokens in places {B_j}. Then the firing mechanism can be described as follows:
(a) Define (compute) [max{a_b^j}, max{a_e^j}] = [a_b, a_e] as the common time interval
for all {[a_b^j, a_e^j]}.
(b) Define (compute) [min{b_b^j}, max{b_e^j}] = [b_b, b_e] as the common time interval
for all {[b_b^j, b_e^j]}.
(c) Compute the new time interval [a_b, min{a_e, b_b}] = [c′_b, c′_e] representing the
domain time when the transition is allowed to fire.
(d) If c′_b > c′_e then the transition does not fire. Otherwise we add the delays
of T and obtain [c′_b + Δ_min, c′_e + Δ_max] = [c_b, c_e] as the time interval for the
propagated tokens.
(e) Merge the object components of all the enabling token labels together to get
the object component of the identifier for the token(s) in the output places of the
transition.
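Steps (a)-(d) above amount to simple interval arithmetic. A minimal Python sketch (the function name and data layout are assumptions, not Minerva's implementation):

```python
def firing_interval(enabling, inhibiting, delay):
    """Compute the token time interval produced by an EPN transition.

    enabling   -- [(t_b, t_e), ...] intervals of tokens in input places A_i
    inhibiting -- [(t_b, t_e), ...] intervals of tokens in negation places B_j
    delay      -- (d_min, d_max) delay interval of the transition
    Returns (c_b, c_e), or None if the transition cannot fire.
    """
    # (a) common interval over the enabling tokens
    a_b = max(t for t, _ in enabling)
    a_e = max(t for _, t in enabling)
    # (b) earliest arrival of an inhibiting token (none -> +infinity)
    b_b = min((t for t, _ in inhibiting), default=float("inf"))
    # (c) the transition may fire from a_b until the earliest inhibitor arrives
    c_b, c_e = a_b, min(a_e, b_b)
    if c_b > c_e:                      # (d) empty interval: no firing
        return None
    d_min, d_max = delay               # (d) add the transition delays
    return (c_b + d_min, c_e + d_max)
```

For instance, enabling tokens stamped [5, 10] and [3, 8] with delay [1, 2] and no inhibitors yield the propagated interval [6, 12].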

Figure 3.9: The EPN firing mechanism (places A_1, …, A_n with intervals [a_b^j, a_e^j] and negation places B_1, …, B_m with intervals [b_b^j, b_e^j] feed a transition with delay [Δ_min, Δ_max], which emits tokens stamped [c_b, c_e])

Intuitively, this time-handling mechanism is related to the multi-processing nature
of EPNs and to the fact that the transition-firing-order flow of time doesn't have to
correspond to the modeled flow of time. The modeled flow of time is supported
solely through the tokens' time-stamps, and therefore all the manipulations have
to be done explicitly by using the delays. The equations above describe the situa-
tion when all the necessary tokens meet at the transition not only place-wise but also
modeled-time-wise.
6. Finally, some of the edges (between a transition and a place) might have operators
assigned to them. The operators are to process the identifiers of the tokens produced
by the transition. The idea here is that the identifiers of the enabling tokens are all
merged together when the transition fires. However, there is a need to "unmerge"
them later on. Formally, each edge (t, p) from a transition t to a place p has an
operator π_tp : 2^O → 2^O.

Example 3.4.2 Suppose we had a token with the identifier (compartment,
3-370-0-E) ∈ C ⊆ O (representing a compartment on fire) and a token with
the identifier (station, R5) ∈ S ⊆ O as the two enabling tokens for some tran-
sition. When the transition fires, the resulting token(s) will have the identifier
{(compartment, 3-370-0-E), (station, R5)}. Later on, we will need to sepa-
rate this combined identifier into two identifiers. This can be done with the operator
π_C(X) = X ∩ C and the operator π_S(X) = X ∩ S.

3.4.5 Scheduling Layer: Computing Utilities


Definition 3.4.17 Let W_t({a_1, …, a_n}, {f_1, …, f_m}) be the predicted state of the world at
time t given findings⁹ {f_1, …, f_m} and proposed actions {a_1, …, a_n}. This prediction is
done by the EPN modeling layer.

Definition 3.4.18 Let s : W → [s_min, s_max] be the scoring function. This value is
provided by the static state evaluator. W is the set of all world states; s_min and s_max
correspond to the worst and the best world states from the DCA standpoint.

Definition 3.4.19 Let a be a suggested action, ω be its CF, and sgn be the sign function¹⁰.
Let F_hc be the set of high-confidence findings¹¹ currently present on the blackboard. Then
we define the operational utility of a as:

u_o(a, ω) = sgn(ω) · [ Σ_{t=t_a}^{t_a+Π(a)} s(W_t({a}, F_hc)) − Σ_{t=t_a}^{t_a+Π(a)} s(W_t(∅, F_hc)) ].

Here t_a is when action a would be taken and Π(a) defines the prediction interval duration.
Intuitively, if the CF ω is positive then a's operational utility is a measure of how much
the world gets better if we take a. Otherwise (ω < 0), u_o(a, ω) is a measure of how much
the world gets better if we do not take the action. If the posted CF is equal to 0 then the
operational utility is equal to 0 as well.
⁹ Only high-confidence findings are fed into the EPN.
¹⁰ I.e., returns +1 if the argument is positive, −1 if negative, and 0 if the argument is 0.
¹¹ Such as confirmed fire, flood, low pressure, etc.

Example 3.4.3 Fighting a fire in a compartment will most likely have a positive opera-
tional utility since the first sum above will be larger than the second one (as the state of
the world is likely to be better with firefighting than without).

Definition 3.4.20 Let a be an action. Let h be the hypothesis or finding that a is trying
to address (i.e. to clarify, to process, etc.). This corresponds to the argument of the
top-level goal leading to a. Let F_hc be the set of high-confidence findings currently present
on the blackboard. Then we define the goal utility of a as:

u_g(a) = Σ_{t=t_h}^{t_h+Π(h)} s(W_t(∅, F_hc \ {h})) − Σ_{t=t_h}^{t_h+Π(h)} s(W_t(∅, F_hc ∪ {h})).

Here t_h is the time when h was first considered and Π(h) defines the prediction interval
duration. Intuitively, the goal utility of a strategy chain (from h to a) is the amount by
which the world would be better if h didn't hold.

Example 3.4.4 Investigating a compartment with an active high-temperature alarm will get
a positive goal utility since this action is trying to confirm the hypothesis "fire" (i.e. h =
"fire"). Fire results in low-score world states, and therefore the first sum (which ignores
h) will be higher than the second sum.
Definition 3.4.21 The total utility (or simply utility) of an action is defined as:

u(a) = Σ_{[a,ω_i]} ω_i · [u_o(a, ω_i) + u_g(a)].

The sum runs over all instances of the suggested action (every single instance is repre-
sented as [a, ω_i]). This formula combines the confidence factors ω_i with the utilities. We
can also consider normalizing the result:

u(a) = g( Σ_{[a,ω_i]} ω_i · [u_o(a, ω_i) + u_g(a)] )

where g(x) = (e^x − e^{−x}) / (e^x + e^{−x}), or just tanh(x).
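Definition 3.4.21 can be sketched in a few lines. The signature below is an assumption: the operational and goal utilities are taken as given (in Minerva they come from the EPN predictions of Definitions 3.4.19 and 3.4.20):

```python
import math

def total_utility(instances, u_o, u_g, normalize=False):
    """Total utility of an action (sketch of Definition 3.4.21).

    instances -- the confidence factors [w1, w2, ...] of the posted instances
    u_o(w)    -- operational utility of the action given CF w
    u_g       -- goal utility of the action
    """
    u = sum(w * (u_o(w) + u_g) for w in instances)
    # optional normalization g(x) = (e^x - e^-x)/(e^x + e^-x) = tanh(x)
    return math.tanh(u) if normalize else u
```

With two instances of CF 0.5 and 0.25, a constant operational utility of 2 and a goal utility of 1, this yields 0.5·3 + 0.25·3 = 2.25 (or tanh(2.25) when normalized).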

Example 3.4.5 Consider the following:
1. The utility of firefighting is most likely to be high due to the operational utility component.
However, it will be even higher if the fire is in or near a magazine, as the goal utility
will contribute a lot too.
2. Compartment investigation will have zero or negative operational utility since the world
is no better, and actually a little worse (since investigation takes up some resources),
with investigation than without. However, the goal utility should make the total
utility positive since a fire can cause serious trouble and therefore addressing this
"dangerous" hypothesis ("fire") is important and worthwhile.

3.4.6 Minerva Main Loop


Definition 3.4.22 Minerva works in cycles. On cycle j the following happens:
1. New findings are formed: F_j = F′_{j−1} ∪ F_new, the new findings coming from the environment.
2. The strategy network Ch_j is built as defined in 3.4.16, p.35.
3. For each action a ∈ α_e(B_j) the total utility u(a) is computed as defined in 3.4.21, p.40.
4. arg max{u(a) | a ∈ α_e(B_j)} is passed to the environment actuators for execution.
5. Actions from α_i(B_j) are used to form H_{j+1} and F′_j (section 3.3.2, p.27).
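The five steps of the cycle can be sketched as follows. Everything here is schematic: the callbacks stand in for the strategy-network builder, the utility computation, and the actuators, and the `ext:`/`int:` tagging of actions is an assumption made for illustration:

```python
def minerva_cycle(state, new_findings, build_network, utility, execute):
    """One deliberate-schedule-execute cycle (illustrative sketch).

    state         -- dict with 'findings' (set) and 'hypotheses' (set)
    new_findings  -- findings arriving from the environment this cycle
    build_network -- returns the action-goal chains for the current state
    utility       -- total utility u(a) of an action
    execute       -- passes the chosen external action to the actuators
    """
    # 1. new findings are merged onto the blackboard
    state["findings"] |= set(new_findings)
    # 2. build the strategy network
    chains = build_network(state)
    # 3-4. the external action with the highest total utility is executed
    external = [c[-1] for c in chains if c[-1].startswith("ext:")]
    if external:
        execute(max(external, key=utility))
    # 5. internal actions modify the blackboard directly
    for c in chains:
        if c[-1].startswith("int:"):
            state["hypotheses"].add(c[-1][4:])
    return state
```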

3.4.7 Computing Window Sizes for Critiquing and the Performance Measure

Definition 3.4.23 At time t the time windows [t_b^X, t_e^X] (for the subject's actions) and [t_b^M, t_e^M]
(for Minerva's actions) are defined as:

[t_b^X, t_e^X] = [t − Δ_t^window(X), t]

[t_b^M, t_e^M] = [t − Δ_t^window(M) − Δ_t^lag, t − Δ_t^lag]

where
1. Δ_t^window(X) is the width of the subject's window (definition 3.4.24, p.42);
2. Δ_t^window(M) is the width of Minerva's window (definition 3.4.24, p.42);
3. Δ_t^lag is the difference between the subject's and Minerva's windows (definition 3.4.24,
p.42). It is necessary since in DCTrain the subject gets reports with a certain delay. In
fact, the more involved the scenario is, the longer the delay becomes.
Intuitively, the subject's actions are considered from time t − Δ_t^window(X) to t while Minerva's
actions are taken from a window of the same width but pushed back Δ_t^lag time units.
This is illustrated in figure 3.10, p.43.

Definition 3.4.24 The functions Δ_t^window(X), Δ_t^window(M), and Δ_t^lag are defined as follows:

Δ_t^window(X) = Σ_{i=1}^{N_a} t̄_a(X, t, a_i^X)

Δ_t^lag = Σ_{i=1}^{N_a} d̄_a(t, a_i^X)

Δ_t^window(M) = Σ_{i=1}^{N_a} t̄_a(M, t − Δ_t^lag, a_i^X)

where
1. N_a is a constant showing how many of the subject's actions we will track back from time t;
2. a_i^X, a_i^M are the subject's and Minerva's consecutive actions, with the last one (a_{N_a}^X or
a_{N_a}^M correspondingly) being the last action ordered before a certain time (see the next
bullet);
3. t̄_a(S, t, a_i^S) is S's average time to come up with action a_i^S. Actions {a_1^S, …, a_{N_a}^S}
have t as the reference (ending) time;
4. d̄_a(t, a_i^X) is the average delay time for the multimedia interface to display the con-
firmation videoclip and the finding videoclip for action a_i^X; t is the reference time.

Intuitively, the subject's window width is the subject's total average time to come up with the last
N_a subject actions. Minerva's window width is Minerva's average total time to come
up with the last N_a subject actions. And lastly, the time lag is the total average time for
the interface to play the video clips corresponding to the last N_a subject-action confirmation
messages and the findings leading to those actions. See figure 3.10, p.43.
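The window computation can be sketched directly from these definitions. This Python sketch simplifies Definition 3.4.24 slightly by giving Minerva's window the same width as the subject's, which is what the intuition above describes; the averaging callbacks are assumptions:

```python
def time_windows(t, subject_actions, avg_time, avg_delay):
    """Compute the critiquing windows (sketch of Definitions 3.4.23-3.4.24).

    t               -- current scenario time
    subject_actions -- the last N_a subject actions, most recent last
    avg_time(a)     -- average time to come up with action a
    avg_delay(a)    -- average interface delay (videoclips) for action a
    Returns ((tXb, tXe), (tMb, tMe)).
    """
    width = sum(avg_time(a) for a in subject_actions)   # window width
    lag = sum(avg_delay(a) for a in subject_actions)    # report delay
    subject_window = (t - width, t)
    # Minerva's window has the same width but is pushed back by the lag
    minerva_window = (t - width - lag, t - lag)
    return subject_window, minerva_window
```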

Figure 3.10: Illustrating the performance measure (the scenario runs from t_b to t_e; the subject's window of width Δ_t^window(X) ends at the current time t and covers actions a_1, …, a_{N_a}; Minerva's window of width Δ_t^window(M) covers actions a′_1, …, a′_{N_a} and is pushed back by Δ_t^lag)

3.4.8 Computing the Performance Measure

Definition 3.4.25 The overall performance measure P is defined as follows:

P(A(X, C)) = Σ_{t=t_b}^{t_e} P(A(X, C), t)

where
1. t_b and t_e are the beginning and ending times of the scenario;
2. t runs from t_b to t_e with a step of Δ_t^step;
3. X is the subject, C is the scenario session we are looking at, and A(X, C) is X's
action stream while solving C;
4. P(A(X, C)) is the overall performance of X on C;
5. P(A(X, C), t) is X's instantaneous performance at time t (definition 3.4.26, p.44).
Intuitively, the overall performance measure is a sum of "instantaneous" performance
measures with a certain time step.

Definition 3.4.26 The performance measure at time t (a.k.a. the instantaneous performance
measure) is defined as follows:

P(A(X, C), t) = M(A(X, C)|_{t_b^X}^{t_e^X}, A(M, C)|_{t_b^M}^{t_e^M})

where
1. M is a matching function (definitions 3.4.27, p.44 and 3.4.28, p.45);
2. A(M, C) and A(X, C) are Minerva's and the subject's actions on scenario C. The vertical
line denotes the scope; in other words A(subject, C)|_{t_1}^{t_2} is the set of the subject's actions
which were ordered between t_1 and t_2;
3. [t_b^X, t_e^X] and [t_b^M, t_e^M] are X's and Minerva's time windows where we look for actions.

Intuitively, the instantaneous performance measure is the quality of the match between
the set of Minerva's actions and the subject's actions, each taken in generally different but
corresponding time windows.

Definition 3.4.27 A matching function (version 1) could be defined as:

M_1(A_1, A_2) = ‖A_1 ∩ A_2‖

where A_1 and A_2 are sets of actions.
Intuitively, this simple matching function returns the number of actions which belong
to both sets. Notice that this could be called a "miss/don't miss" matching function
since it requires a perfect match between the actions in the sets.

Definition 3.4.28 A matching function (version 2) could be defined as:

M_2(A_1, A_2) = Σ_{a ∈ A_1 ∪ A_2} m(a, A_1, A_2)

where m(a, A_1, A_2) is the matching quality (definition 3.4.29, p.45) of action a (either the
subject's or Minerva's).

Definition 3.4.29 The matching quality of action a is defined as follows:

m(a, A_1, A_2) = c(a, arg max_{a′ ∈ A_2} c(a, a′))  if a ∈ A_1,
m(a, A_1, A_2) = c(a, arg max_{a′ ∈ A_1} c(a, a′))  if a ∈ A_2,

where c(a, a′) is the degree of closeness of actions a and a′ (definition 3.4.30, p.45).
Intuitively, the matching quality m(a, A_1, A_2) is the degree of closeness of an action from
one set to the closest action from the other set.

Definition 3.4.30 The degree of closeness c(a, a′) could be defined as:

c(a, a′) = a numerical/look-up-table function based on a and a′.

Intuitively, c(a, a′) returns some degree of closeness depending on the arguments of actions
a and a′.
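Definitions 3.4.28-3.4.30 can be sketched together. Since the thesis leaves c(a, a′) as a domain-specific look-up, the closeness function below is a toy assumption (actions are encoded as "operation:argument" strings and score 1.0 on an exact match, 0.5 on a matching operation):

```python
def closeness(a, b):
    """Toy c(a, a'): a stand-in for the domain look-up table."""
    if a == b:
        return 1.0
    return 0.5 if a.split(":")[0] == b.split(":")[0] else 0.0

def match_quality(a, A1, A2, c=closeness):
    """m(a, A1, A2): closeness of a to the closest action in the other set."""
    other = A2 if a in A1 else A1
    return max((c(a, b) for b in other), default=0.0)

def match_score(A1, A2, c=closeness):
    """M2(A1, A2): summed matching quality over all actions in either set."""
    return sum(match_quality(a, A1, A2, c) for a in set(A1) | set(A2))
```

Note that taking c(a, arg max_{a′} c(a, a′)) is the same as taking max_{a′} c(a, a′), which is what the sketch computes.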

3.5 Complexity Analysis

3.5.1 Domain Level Bounds

Definition 3.5.1 Let d_g be the number of vertices in the domain graph.

3.5.2 Blackboard Bounds

The following results rely on Minerva-5's strategy knowledge. They have been proven
with the strategy knowledge presented in section B.2, p.96 and partially in the following
fact 3.5.1.

Fact 3.5.1 Minerva-5 has the following goal sets¹²:

1. G_τ = process-finding(F_rf) ∪ process-hypothesis(H_rf) ∪ explore-hypothesis(H)
∪ remove-datum(F ∪ H).
2. G_α = conclude(H) ∪ perform(A) ∪ remove(F ∪ H).
3. G_I = applyrule-forward(R_f, H) ∪ applyrule-backward(R_b, H) ∪
pursue-hypothesis(H) ∪ test-hypothesis(H) ∪ findout(D) ∪ findout(F).

Definition 3.5.2 Each goal of Minerva-5 is of the form operation(args). Let n_g be the
number of different operations. For example, in fact 3.5.1 we have n_g = 12 of them.

Theorem 3.5.1 For any blackboard the following holds:

1. Any strategy chain starting with process-finding(h) has O(3^{n_g d_g}) edges.
2. Any strategy chain starting with process-hypothesis(h) has O(3^{n_g d_g}) edges.
3. Any strategy chain starting with explore-hypothesis(h) has O(3^{n_g d_g}) edges.
4. Any strategy chain starting with remove-datum(d) has precisely 1 edge.
Proof. For this proof it is convenient to come up with some kind of simple graphi-
cal representation of the strategy networks. We will use a strategy network scheme, which
is a directed (possibly cyclic) graph representing a skeleton of the strategy network.
¹² Below f(S) = {f(s) | s ∈ S}.

Figure 3.11: Possible strategy network schemata (the top-level goals process-finding, process-hypothesis, explore-hypothesis, and remove-datum expand through applyrule-forward, applyrule-backward, pursue-hypothesis, test-hypothesis, and findout into the actions perform, conclude, and remove)


Specifically: for a strategy network we will draw a strategy network scheme with the
edges being the strategy operators S and the vertices being the network vertices com-
bined by ignoring the arguments. For example: suppose finding f leads to n conclu-
sions: |{(f, C_i) ∈ E_f}| = n. Then the top-level goal process-finding(f) would lead to
n intermediate-level goals applyrule-forward(r_i, C_i). However, for the purposes of
our analysis, we abstract from the specific r_i and C_i and represent it as a single edge
(process-finding, applyrule-forward) on our strategy network scheme. Now it be-
comes clear why a strategy network scheme could be cyclic: it would happen if the
original strategy chain had two non-adjacent nodes different in their arguments only.
However, for visualization purposes, we will sometimes duplicate certain strategy net-
work scheme vertices. It is important to keep in mind that the actual strategy chains
are never cyclic, as ensured by the strategy network building algorithm (definition 3.4.16,
p.35).
Given the specific strategy knowledge Minerva-5 has (see section B.2, p.96), the strategy
chain schemata look as shown in Figure 3.11, p.47.
Let's prove the first statement of the theorem. The other statements can be proved
in a similar way. Figure 3.12, p.49 shows the scheme for strategy chains starting with
the process-finding goal. The strategy network scheme edges are labeled with the upper
bounds on the edge number of the actual strategy chain. We also augmented each ver-
tex (i.e. a strategy goal) with its arguments to make the edge bounds more obvious.
For example: there could be no more than f_B edges going from process-finding to
applyrule-forward since there are no more than f_B domain-level rules from R_f con-
necting any finding with any hypothesis.

The tree has a recursive subpart but its height is limited (see later in the proof).
Figure 3.13, p.49 shows the same scheme but with relaxed bounds and the recursive
subtree shown explicitly. Let's find out the size of this subtree. Suppose the size of the
subtree is S(n) edges where n is its height. Keeping in mind the definition of strategy
Figure 3.12: Strategy network scheme starting with process-finding (step 1); the edges carry bounds such as δ_f = |{(d, C_2) ∈ E_f}| and δ_b = |{(d, C_1) ∈ E_b}|

Figure 3.13: Strategy network scheme starting with process-finding (step 2), with relaxed bounds δ and the recursive subtrees S(n) and S(n − 1) shown explicitly

network schema and looking at the diagram 3.13, p.49 we notice that:

(3.1) S(n) = 5δ + 2δ² + 2S(n − 1)

with the boundary condition of

(3.2) S(2) = 5δ + 2δ².

The whole tree size is clearly:

S_whole(n + 1) = 3δ + (1 + S(n)).

We will upper-bound S(n) by using the "guess method" ([34]). Specifically, we guess
that

S(n) = O(3^n).

Thus our inductive hypothesis is that

(3.3) S(n − 1) ≤ c·3^{n−1}

holds for some c. We now need to show that

(3.4) S(n) ≤ c·3^n

holds for the same c. Plugging 3.3 into 3.1 we get:

S(n) ≤ 5δ + 2δ² + 2c·3^{n−1}
     = 5δ + 2δ² + (2c/3)·3^n
(3.5) ≤ c·3^n.

It is easy to check that if c = 5δ + 2δ² and δ ≥ 2 then both the inductive step and the
boundary condition hold for all n ≥ 2. The case δ = 1 is trivial since then
S(n) = O(n). Thus we just proved S(n) = O(3^n) and S_whole(n) = O(3^n).
Strategy chains are acyclic by definition. Therefore in any particular strategy chain
all the nodes are distinct. Taking into account definitions 3.5.2, p.46 and 3.5.1, p.45 we
conclude that any strategy chain has no more than n_g·d_g vertices¹³. Thus, the height n
of the entire strategy network tree is limited by n_g·d_g.
Combining everything together, we conclude that the total size (number of edges) of
all strategy chains starting with process-finding(h) is O(3^{n_g d_g}) edges.

Corollary 3.5.1 All strategy chains are O(3^{n_g d_g}) edges long.


Definition 3.5.3 Let B_i = ⟨F_i, H_i⟩ be a blackboard. Then:
1. F_i^rf = F_i ∩ F_rf is the set of red-flag findings; f_i^rf is its size.
2. H_i^rf = H_i ∩ H_rf is the set of red-flag hypotheses; h_i^rf is its size.
3. d_i = |F_i| + |H_i| = f_i + h_i = |B_i|_D.

Theorem 3.5.2 The following are upper bounds on the total number of the strategy
chains generated from a blackboard B_i = ⟨F_i, H_i⟩:

1. There are no more than h_i^rf·(f_B + b_B) chains starting with process-hypothesis(h)
where h ∈ H_i^rf.
2. There are no more than h_i chains starting with explore-hypothesis(h) where
h ∈ H_i.
3. There are no more than f_i^rf·(f_B + b_B) chains starting with process-finding(f)
where f ∈ F_i^rf.
4. There are no more than d_i·r_m^B chains starting with remove-datum(d) where d ∈
F_i ∪ H_i.

Proof follows straight from the definitions.
¹³ This holds since the domain-level arguments of the strategy goals are carried throughout the strategy
chain unmodified.


Corollary 3.5.2 The total size (the number of edges) of the strategy network Ch_i for a black-
board B_i = ⟨F_i, H_i⟩ is upper-bounded by:

|B_i|_S = h_i^rf·(f_B + b_B)·O(3^{n_g d_g}) +
         h_i·O(3^{n_g d_g}) +
         f_i^rf·(f_B + b_B)·O(3^{n_g d_g}) +
         d_i·r_m^B = O(|B_i|_D · 3^{n_g d_g}).

That means that the total size of the strategy network is at most linear in the total number
of findings and hypotheses on the blackboard and exponential in the size of the domain
graph.
Proof follows directly from theorems 3.5.1, p.46 and 3.5.2, p.51.
Theorem 3.5.3 If all the free-form predicates in the domain rules (section 3.4.2, p.31)
take O(1) time to compute then the deliberation time is linear in the size of the strategy
network (|B_i|_S).
Proof. The strategy network is built in a depth-first fashion as definition 3.4.16, p.35
indicates. At each frontier node of the strategy network being built, the algorithm tries
to apply all possible strategy rules, instantiating them with the appropriate domain-level
arguments. If a strategy rule's preconditions can be met then the rule is applied and a
new edge is added to the strategy network (section 3.4.3, p.34). Obviously at most |S|
operators are tried. We assume that the free-form predicates in the domain rules take
O(1) time and therefore conclude that the algorithm spends O(1) time at each strategy
network node.

Corollary 3.5.3 If all the free-form predicates in the domain rules take O(1) time to
compute then the deliberation time is linear in the total number of findings and hypotheses
currently posted on the blackboard.
Proof follows directly from corollary 3.5.2, p.52 and theorem 3.5.3, p.52.

3.6 Equivalence to Turing Machine


It is fairly obvious that the computation power of Minerva doesn't exceed that of a uni-
versal computing device ([4]). We will show that Minerva is in fact a universal computing
device. To do this we will prove that any Turing Machine can be encoded in Minerva.

Definition 3.6.1 (Turing Machine (from [1])). A Turing machine M is defined
by a tuple (Q, Σ, Γ, δ, q_0, B, F) where:
1. Q is the finite set of states;
2. Γ is the finite set of allowable tape symbols, Q ∩ Γ = ∅;
3. B ∈ Γ is the blank symbol;
4. Σ ⊆ Γ, B ∉ Σ, is the set of input symbols;
5. δ : Q × Γ → Q × Γ × {L, R} is the next-move function. δ could be undefined on
some inputs;
6. q_0 ∈ Q is the start state;
7. F ⊆ Q is the set of final states.
At every moment of time t the machine configuration can be described by:
1. the machine state q_t ∈ Q;
2. the head position i_t;
3. the symbol σ_{i_t} ∈ Γ which the machine head is observing.

Given the next-move function value δ(q_t, σ_{i_t}) = (q′, σ′, move), the next configuration at time t + 1 is then
defined as:
1. q_{t+1} = q′;
2. the observed cell is overwritten: σ_{i_t} := σ′;
3. i_{t+1} = i_t + 1 if move = R;
4. i_{t+1} = i_t − 1 if move = L.
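The transition rules above translate directly into a small simulator (an illustrative Python sketch; Minerva's actual encoding is the Prolog rule set of Figure 3.14). The usage below runs a machine, in the spirit of Example 3.6.1, that rewrites 1's to x's and halts in an accepting state:

```python
def run_tm(delta, q0, accept, tape, blank="b"):
    """Simulate a Turing machine per Definition 3.6.1 (illustrative sketch).

    delta  -- dict (state, symbol) -> (state', symbol', 'l' or 'r')
    tape   -- dict position -> symbol; missing cells read as the blank
    Returns (final_state, tape) once an accepting state is reached or
    delta is undefined on the current configuration.
    """
    q, i = q0, 0
    while q not in accept:
        x = tape.get(i, blank)
        if (q, x) not in delta:        # delta undefined: the machine halts
            break
        q, y, move = delta[(q, x)]
        tape[i] = y                    # overwrite the observed cell
        i += 1 if move == "r" else -1  # move the head
    return q, tape
```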

Theorem 3.6.1 The Minerva shell is a universal computing device.

Proof. We will show how to encode an arbitrary Turing Machine (Q, Σ, Γ, δ, q_0, B, F) in
Minerva. In fact, the encoding is fairly straightforward:

1. The findings will be:

(a) is_finding([accept,Q]) describes F (one fact per accepting state);
(b) is_finding([init_head,I,Q]) describes the initial head position I and ini-
tial state Q (to match the definition above precisely we need only one
fact: is_finding([init_head,0,q0]));
(c) is_finding([init_tape,I,X]) describes the initial tape state: each fact spec-
ifies an input symbol X ∈ Σ at tape position I;
(d) is_finding([move,Q,X,P,Y,MoveLR]) represents the move δ(Q, X) = (P, Y,
MoveLR) (one fact for each rule).
2. Working states and the tape are represented as hypotheses:
(a) is_hypothesis([tape,I,X]) indicates that tape cell I currently contains symbol X.
If there is no hypothesis for cell J we assume that it contains
the blank symbol b;
(b) is_hypothesis([head,I,Q]) represents the current head location I and head
state Q;
(c) is_hypothesis([next_head,I1,P]) and is_hypothesis([next_tape,I,Y])
represent the same type of information but for the next time tick.
3. We have only one action: is_action([halt]), which signals reaching an ac-
cepting state.
4. The rules are used to simulate the Turing Machine operation. Figure 3.14, p.56 shows
the rules.
the rules.


Remark. It is interesting to notice that this proof uses Minerva's deliberation mechanism
only. No scheduler is needed since all the actions except [halt] are internal.

Example 3.6.1 We will show an encoding for a Turing Machine which accepts any string
of the form 1* and converts all 1's to x's. The mathematical and Minerva-style programs are
given in figure 3.15, p.57.

3.7 Implementation
Minerva-5 is written in Prolog. It communicates with the other modules through an
ODBC interface by posting and reading messages to/from a Microsoft Access table.
See section 3.8.2, p.60 for additional details. We have been using different versions of
LPA Prolog (e.g. 3.4, 3.5, etc.) to run it on a PC under MS Windows 95 or MS Windows
NT. Minerva can run as a task concurrently with other tasks, but the best performance is
achieved when it is given a whole networked machine (see section D.2, p.160 for more
details).
Large fragments of the Minerva-5 Prolog code are given in Appendix B, p.89.

%% Moving Head (#1) -- right move, create new head state
ccf(r_hm1,1,3,[head,I,Q],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- add1(I,I1).
ccf(r_hm1,2,3,[tape,I,X],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- add1(I,I1).
ccf(r_hm1,3,3,[move,Q,X,P,Y,r],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- add1(I,I1).
ccb(r_hm1,1,3,[head,I,Q],818,[next_head,I1,P],818).
ccb(r_hm1,2,3,[tape,I,X],818,[next_head,I1,P],818).

add1(A,B) :-
nonvar(A),!,
TMP is A+1,
B = TMP,!.
add1(_,_).

%% Moving Head (#2) -- left move, create new head state

ccf(r_hm2,1,3,[head,I,Q],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- sub1(I,I1).
ccf(r_hm2,2,3,[tape,I,X],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- sub1(I,I1).
ccf(r_hm2,3,3,[move,Q,X,P,Y,l],818,[next_head,I1,P],818,[I,P,Q,X,I1]) :- sub1(I,I1).
ccb(r_hm2,1,3,[head,I,Q],818,[next_head,I1,P],818).
ccb(r_hm2,2,3,[tape,I,X],818,[next_head,I1,P],818).

sub1(A,B) :-
nonvar(A),!,
TMP is A-1,
B = TMP,!.
sub1(_,_).

%% Moving Head (#3) -- create new tape state


ccf(r_hm3,1,3,[head,I,Q],818,[next_tape,I,Y],818,[I,Y,Q,X]) .
ccf(r_hm3,2,3,[tape,I,X],818,[next_tape,I,Y],818,[I,Y,Q,X]) .
ccf(r_hm3,3,3,[move,Q,X,P,Y,_],818,[next_tape,I,Y],818,[I,Y,Q,X]) .
ccb(r_hm3,1,3,[head,I,Q],818,[next_tape,I,Y],818).
ccb(r_hm3,2,3,[tape,I,X],818,[next_tape,I,Y],818).

%% Remove current state and tape


ccm(rem_hm1,1,3,[head,I,Q],818,[head,I,Q],remove,[I,P,Q,X]).
ccm(rem_hm1,2,3,[tape,I,X],818,[head,I,Q],remove,[I,P,Q,X]).
ccm(rem_hm1,3,3,[move,Q,X,P,Y,Move],818,[head,I,Q],remove,[I,P,Q,X]).
ccm(rem_hm3,1,3,[head,I,Q],818,[tape,I,X],remove,[I,P,Q,X]) .
ccm(rem_hm3,2,3,[tape,I,X],818,[tape,I,X],remove,[I,P,Q,X]) .
ccm(rem_hm3,3,3,[move,Q,X,P,Y,Move],818,[tape,I,X],remove,[I,P,Q,X]) .

%% Transferring state and head (2 rules)


ccf(r_ts1,1,1,[next_tape,I,Y],818,[tape,I,Y],818,[]).
ccf(r_th1,1,1,[next_head,I1,P],818,[head,I1,P],818,[]).

%% Removing next_state and next_tape


ccm(rem_nt,1,1,[next_tape,I,Y],818,[next_tape,I,Y],remove,[]).
ccm(rem_ns,1,1,[next_head,I1,P],818,[next_head,I1,P],remove,[]).

%% Handling blank [default] cells on the tape


ccf(r_bc,1,1,[tape,I,_],no,[tape,I,b],818,[]).
ccb(r_bc,1,1,[tape,I,_],no,[tape,I,b],818).

%% Halting in an accepting state


ccf(r_ha,1,2,[head,I,Q],818,[halt],818,[Q]).
ccf(r_ha,2,2,[accept,Q],818,[halt],818,[Q]).

%% Transferring initial data


ccf(r_tid1,1,1,[init_head,I,Q],818,[head,I,Q],818,[]).
ccf(r_tid2,1,1,[init_tape,I,X],818,[tape,I,X],818,[]).

% Removing Initial Data


ccm(rem_id1,1,1,[init_head,I,Q],818,[init_head,I,Q],remove,[]).
ccm(rem_id2,1,1,[init_tape,I,X],818,[init_tape,I,X],remove,[]).

Figure 3.14: Minerva rules for TM simulation


Mathematical encoding:
1. Q = {q0, q1};
2. Γ = {B, 1, x};
3. Σ = {1};
4. δ is defined as {δ(q0, 1) = (q0, x, R), δ(q0, B) = (q1, B, R)};
5. q0 is the start state;
6. F = {q1}.

Minerva encoding:

% Initial head state

finding([init_head,0,q0],818).

% Tape: 111bbbbb.....

finding([init_tape,0,1],818).
finding([init_tape,1,1],818).
finding([init_tape,2,1],818).

% Moves

finding([move,q0,1,q0,x,r],818).
finding([move,q0,b,q1,b,r],818).

%% Accepting states: q1

finding([accept,q1],818).

Figure 3.15: A TM accepting 1* and changing the 1's to x's

3.8 Application to the Navy Damage Control Domain
3.8.1 Domain Background
This section provides a brief introduction to the domain. Further information can
be found in [2], [3].
We are concerned with the task of the DCA (Damage Control Assistant). The DCA
is an officer aboard a Navy ship. Many of the DCA's tasks are similar across various
platforms; however, we will focus on Arleigh Burke class destroyers. A representative of
the class (DDG-51) is shown in figure 3.16, p.59.
Briefly speaking, the DCA is concerned with maintaining ship readiness in a com-
bat situation (i.e. when crises happen). Typically the crises (fire, flood, rupture,
equipment failures, personnel casualties, etc.) are caused by primary damage events (e.g.
missile/torpedo/mine hits; internal explosions, ignitions, etc.) and secondary damage
events (e.g. fire/flood/smoke propagation). The following main tasks are related to crisis
management:
1. maintaining situation awareness;
2. containing/extinguishing fires;
3. dealing with floods;
4. dealing with smoke;
5. maintaining damage control subsystems (e.g. firemain);
6. maintaining other vital systems (e.g. chillwater).
The DCA is physically located in Damage Control Central (DCC) and communi-
cates with other stations (Repair-2,3,5,8; CSMC; CIC; CHENG; EOOW; Aft/Fwd BDS;
CO; etc.) through phone talkers. Figure 3.17, p.59 shows the locations of the four major
repair stations (Repair-2,3,5,8) and their areas of jurisdiction.

Figure 3.16: DDG-51 Arleigh Burke Destroyer

Figure 3.17: Main Repair Station Locations

The communications include reports to the DCA from the different stations and the
DCA's orders. The repair stations are capable of performing the following major tasks:
1. investigation to establish the status of a specific space/system on the ship;
2. setting up and maintaining fire boundaries to prevent fire spread;
3. firefighting to put out a fire;
4. setting up and maintaining smoke boundaries to prevent smoke spread;
5. securing flooded spaces (setting flood boundaries);
6. dewatering spaces;
7. manual operation of various equipment (e.g. shutting valves);
8. repairing different kinds of damage.
Appendix A, p.86 presents flow charts for a subset of the DCA's actions at a very
high level. The charts probably don't even cover 5% of the variety of the DCA's tasks, thus
clearly indicating the vast complexity of the control domain.
All of the tasks are subject to concurrency, time pressure, task/crisis interaction, and
shortage of resources. This makes the job very challenging even for well-trained Navy
personnel with years of experience ([9]).

3.8.2 Minerva as an instructor aid in the DCTrain environment


The DCTrain multimedia environment ([28]) was created to enhance naval officer training
with respect to the damage control domain. As figure 3.18, p.62 shows, the environment
consists of:
1. Multimedia student interface providing the DCA student with reports and
taking his/her commands.
2. Instructor interface allowing the instructor to specify training scenarios.

3. Scenario generator: an auxiliary module helping the instructor to build a useful
scenario.
4. State of the world (ship) database containing time-ordered world snapshots
and also serving as a common knowledge repository to provide the modules with
the domain data and to interface them together.
5. Physical simulator and intelligent agents simulating physical processes (combustion,
flooding, etc.) and the effects of human/mechanical agents (investigation,
fire fighting, etc.). DCTrain has an agent for each ship station. The agents use the
common knowledge base to communicate with each other.
6. Minerva-5 doing problem-solving, advising, and student critiquing.
Typically the system is used as follows:
1. Training objectives (e.g. handling small fires) are specified through the instructor
interface.
2. The scenario generation module takes them as an input and produces appropriate
primary damage specifications.
3. The simulator and intelligent agents simulate the crisis development, which is presented
to the student through the immersive multimedia interface.
4. The student uses the interface to order different damage control stations (simulated
by the intelligent agents) to handle the crisis.
5. Minerva-5 solves the same scenario on its own, but instead of operating the
simulator it uses its generated and ranked actions for advising and critiquing the
student.

(Figure 3.18 diagram: the Simulator, Intelligent Agents, and Scenario Generator connect
through the State-of-the-World Database to the Student Multimedia Interface, the
Instructor Interface, and Minerva-5.)

Figure 3.18: Minerva-5 in DCTrain


3.8.3 Minerva as a DCA Decision Aid in DC-ARM
The Damage Control Automation for Reduced Manning (DC-ARM) project involves
creating an entirely new approach to damage control on future vessels with reduced
manning ([9]). This large project will result in new damage control actuators
(e.g. fire suppressors, containment aids, powered hatches, valves, etc.) and a new
generation of damage control sensors (e.g. chemical, infrared, temperature, etc.). However,
the heart of it is a computer system monitoring the sensors and responding to a potential
crisis. This situation awareness and casualty response system, called DC-Aware, is being
created at the KBS group ([31], [32]). The Minerva-5 framework will become the central
and integrating part of the system, as figure 3.19, p.64 shows.
The overall setup has the following modules:
1. Smart sensors monitor a number of parameters throughout the vessel and output
a massive stream of data. They are called "smart" due to sophisticated fault-
tolerance, noise-suppression, and pre-processing algorithms built into them.
2. Smart actuators perform various domain actions. The term "smart" comes from
a certain low-level intelligence built into them.
3. Supervisory interface presents the state of the vessel, conclusions, reasoning, and
suggestions of the expert system to a supervisory officer. This keeps a human in the
loop and thus increases overall robustness and decreases overall risk.
4. State of the world database contains time-ordered world snapshots and also
serves as a common knowledge repository providing the modules with the domain
data and interfacing them together.
5. Physical simulator and intelligent agents simulate physical processes (combustion,
flooding, etc.) and the effects of human/mechanical agents (investigation,
fire fighting, etc.). DC-Aware has an agent for each vessel station. The agents use the
common knowledge base to communicate with each other. This module is used for
predictive validation, lookahead simulation, and the like.

(Figure 3.19 diagram: Smart Sensors, Smart Actuators, the Simulator, and Intelligent
Agents/Objects connect through the State-of-the-World Database to Minerva-5
(problem-solving, explanation, learning) and the Supervisory Interface.)

Figure 3.19: Minerva-5 in DC-Aware


6. Minerva-5 doing problem-solving, advising, and auto-learning.
The system operates as follows:
1. Ship parameters measured by different sensors get preprocessed (noise-filtered, etc.)
by the low-level preprocessing system built into the sensors.
2. The preprocessed information is posted on Minerva's blackboard and gets processed
in the standard way.
3. Should a crisis be suspected, Minerva can run numerical simulation and intelligent
agent modules to conduct a predictive validation.
4. Once a crisis is verified, Minerva will come up with feasible damage control actions.
5. The findings, Minerva's reasoning, and Minerva's suggested actions will be presented
to the supervisory officer via the Supervisory GUI.
6. Once the critical actions are approved, they will be passed to the smart actuators
for execution.
This setup also allows for on-line and off-line learning. Specifically, different layers of
Minerva knowledge (e.g. the EPN prediction layer) could be refined dynamically by using
traces of actual or simulated scenarios.
Minerva-DCA, the damage control instantiation of Minerva-5, is currently capable
of expert-level problem-solving in the DCTrain environment simulating damage control
as it is done today in the United States Navy fleet. Naturally, running in the real-world
environment (as opposed to DCTrain) and having new actuators and sensors at
its disposal will call for Minerva-5 refinements and extensions. However, the beauty of
the Minerva-5 blackboard architecture (see section 3.1.3, p.13) is that refinements and
extensions are easy to make thanks to its modular design and opportunistic problem-solving.
This makes us believe that an extended and refined Minerva-5 framework will be
adequate for the highly demanding problem-solving task of DC-ARM.

3.9 Evaluation
3.9.1 Theoretical Evaluation of the Proposed Approach
The expert shell constructed appears to have a sound foundation in the following respects:
1. The deliberation cycle is computationally feasible, since it takes time linear in the
number of findings and hypotheses (section 3.5, p.45).
2. The deliberation mechanism is a universal computing device (section 3.6, p.53).
3. The board evaluation method is appropriate for the domain, as it achieves high
cross-validation accuracy (section B.4, p.118).

3.9.2 Practical Evaluation of the Proposed Approach


The following subsections describe Minerva-5 performance in the Navy Damage Control
domain.

3.9.2.1 Problem-solving Performance


Evaluation Goals. Our evaluation of Minerva-5 problem-solving performance targets
the following issues:
1. Minerva-5 framework applicability in the DC domain. We are interested in comparing
Minerva-5 performance to the performance of the SWOS (Surface Warfare
Officer's School) graduates who specialize in the DCA area;
2. Minerva-5 scheduling layer performance. One of the main contributions of the
Minerva-5 project is refining the scheduling layer with a Petri Nets predictor and an
ANN/C5.0 board evaluator. To investigate this aspect of Minerva-5 performance we
compare Minerva-5 and Minerva-4, since the only significant difference between
them is the scheduling layer.

Evaluation Setup. To address the two issues mentioned above we tested SWOS
students, Minerva-4, and Minerva-5 on 160 damage control scenarios simulated in the
DCTrain immersive multimedia environment. For each scenario the following parameters
were recorded:
1. Complete primary damage specifications were logged as descriptions of n blasts. Each
blast was described by its compartment, a 3-value parameter vector (as DCTrain
requires), and the blast time.
2. Outcome of the scenario was defined as follows:
(a) "dead" if a kill-point was reached within the first 25 minutes of the scenario;
(b) "survived" if no kill-point was reached but some fires didn't get extinguished
within the first 25 minutes of the scenario;
(c) "victory" if all the fires were extinguished and no kill-point was reached within
the first 25 minutes.
3. Average cycle time was recorded for Minerva-4 and Minerva-5 as the time spent
per cycle averaged over all cycles, starting from the first damage report and ending
at the last damage-related message.
The detailed transcripts for all 480 runs are presented in section D.2, p.160.
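The outcome and cycle-time definitions above can be restated as a small decision procedure. The following sketch is purely illustrative; the function and parameter names are ours, not DCTrain's:

```python
def scenario_outcome(kill_point_reached, fires_remaining):
    """Classify a 25-minute DCTrain run into the three outcome categories.

    kill_point_reached -- True if any kill-point was hit within the window
    fires_remaining    -- number of fires still burning at the end of the window
    """
    if kill_point_reached:
        return "dead"          # (a) a kill-point was reached
    if fires_remaining > 0:
        return "survived"      # (b) no kill-point, but some fires remain
    return "victory"           # (c) all fires out, no kill-point

def average_cycle_time(cycle_times):
    """Mean time per deliberation cycle, taken over all cycles between the
    first damage report and the last damage-related message."""
    return sum(cycle_times) / len(cycle_times)
```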

Minerva-4/5 vs. SWOS students. To address the first target of our evaluation
we collected statistics on the scenario outcomes. The numbers of scenarios where
SWOS students and Minerva-4/5 died, survived, or won are presented in table 3.1, p.68
and figure 3.20, p.68. The fact that Minerva-4 and Minerva-5 outperform the students
is statistically significant.

Minerva-4 vs. Minerva-5. To address the second target of our evaluation (Minerva-
5 scheduler performance) we compare Minerva-4 and Minerva-5 performance on the 160

Outcome      SWOS Students   Minerva-4   Minerva-5
Dead                    39          28          21
Survived                93          22          22
Victorious              28         110         117
Total                  160         160         160

Table 3.1: Minerva-4/5 vs. SWOS graduates
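To illustrate the significance claim, a standard two-proportion z-test can be run on the victory counts in Table 3.1. This is our computation for illustration only; the thesis does not specify which test was used:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                         # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Victory counts from Table 3.1: Minerva-4 won 110/160, SWOS students 28/160.
z = two_proportion_z(110, 160, 28, 160)
```

A |z| above 1.96 rejects equal victory rates at the 5% level; here z comes out above 9, so the difference is significant by a very wide margin.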

Figure 3.20: Minerva-4/5 vs. SWOS graduates

scenarios. It is important to note that at the time Minerva-5's Extended Petri Nets
predictor was just implemented, barely debugged, and not optimally tuned.
However, as table 3.1, p.68 and figure 3.20, p.68 show, even with the beta version
of the Extended Petri Nets predictor Minerva-5 slightly outperforms Minerva-4. We also
investigated the possibility that this performance increase is caused by more frequent
cycling. That would certainly decrease Minerva-5's response time and help in involved
scenarios. However, as table 3.2, p.69 and figures 3.21, p.70, 3.22, p.70, and 3.23, p.71
show, Minerva-5's average cycle time is actually greater than Minerva-4's. Therefore
the performance increase is not caused by faster cycling. Since the only significant
difference between Minerva-4 and Minerva-5 is the scheduling layer, we conclude that
the performance increase is likely due to the more intelligent scheduling.

Minerva-4
Parameter            Overall     Dead   Survived   Victorious
Min Avg Cycle Time     0.054    0.061      0.162        0.054
Max Avg Cycle Time     4.853    4.833      3.986        4.853
Avg Avg Cycle Time     1.627    1.560      1.642        1.614

Minerva-5
Parameter            Overall     Dead   Survived   Victorious
Min Avg Cycle Time     0.051    0.057      0.129        0.051
Max Avg Cycle Time    11.778   11.778      3.962       11.740
Avg Avg Cycle Time     2.711    3.535      2.328        2.604

Table 3.2: Minerva-4/5 Average Cycle Time
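Each row of Table 3.2 summarizes the per-scenario average cycle times of one outcome group. That aggregation can be sketched as follows (illustrative only; the names are ours):

```python
def cycle_time_summary(avg_cycle_times):
    """Min/Max/Avg of per-scenario average cycle times, as in Table 3.2.

    avg_cycle_times -- one average cycle time (in seconds) per scenario
    """
    return {
        "min": min(avg_cycle_times),
        "max": max(avg_cycle_times),
        "avg": sum(avg_cycle_times) / len(avg_cycle_times),
    }
```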

3.9.2.2 Advising Performance


As of now, the Minerva-5 advising facility is still under development. It is currently capable
of producing long-form and short-form natural language explanations as shown in
figures 3.7, p.28 and 3.8, p.29. However, the recommended and not-recommended lists have
not been implemented yet. Figure C.2, p.157 presents our projection of how the final
interface will look.

Figure 3.21: Minerva-4/5 Average Cycle Time (Graph 1)

Figure 3.22: Minerva-4/5 Average Cycle Time (Graph 2)

Figure 3.23: Minerva-4/5 Average Cycle Time (Graph 3)

3.9.2.3 Critiquing Performance


As of now, the Minerva-5 critiquing facility is still under development. The window matching
part (section 3.4.7, p.41) and the scoring functions (section 3.4.8, p.43) are already
implemented, but other submodules are still under development. Figure C.3, p.158 presents
our projection of how the final interface will look.
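The window-matching idea can be sketched as follows: an expert action counts as matched if the student issued the same action within some time window. This is a simplified illustration, not the implementation of section 3.4.7; all names are ours:

```python
def window_match_score(student_actions, expert_actions, window):
    """Fraction of expert actions matched by an identical student action
    issued within +/- window seconds.

    Each action is a (name, timestamp) pair.
    """
    matched = 0
    for act, t in expert_actions:
        # An expert action is matched if the student issued the same action
        # at a time no more than `window` seconds away.
        if any(a == act and abs(ts - t) <= window for a, ts in student_actions):
            matched += 1
    return matched / len(expert_actions)
```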

3.9.2.4 Summary
The beta implementation of Minerva-DCA shows impressive performance in the Damage
Control domain. In particular, Minerva-5 greatly outperforms Navy SWOS graduates
(73% vs. 18% victorious scenarios, respectively) in the DCTrain simulated
environment. The theoretical extensions of the Minerva-4 blackboard architecture turn out
to improve problem-solving performance through better scheduling. As a result, the
beta version of Minerva-5 outperforms Minerva-4 despite its slower cycle operation.

The explanatory facility implemented to date features a natural language output
and a graphical user interface. It will serve as a foundation for the upcoming advising
and critiquing add-on modules.

Chapter 4
Related Work
4.1 Medical Diagnosis Expert Systems
4.1.1 MYCIN
A "grandfather" of many modern expert systems, including Minerva, MYCIN is an expert
system for diagnosing bacterial blood infections and prescribing treatment [13].
Technically speaking, MYCIN's diagnostic module is mostly a backward chainer with
some meta-level knowledge and a preview mechanism introduced mainly for optimization
purposes. MYCIN performs a dialog with the user to collect the evidence that would
allow it to diagnose a patient's disease. MYCIN has been implemented in LISP
and reasons over a purely rule-based database. A certainty factor (CF) mechanism is
used to handle uncertainty issues. A similar mechanism is used in Minerva.
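For reference, the classical MYCIN-style combination of two certainty factors supporting the same hypothesis can be written as below. This shows only the positive-evidence case; MYCIN's full scheme also handles negative and mixed evidence:

```python
def cf_combine(cf1, cf2):
    """Combine two positive certainty factors (0..1) for the same hypothesis.

    The result grows toward 1 as independent supporting evidence accumulates,
    and it does not depend on the order in which the evidence arrives.
    """
    return cf1 + cf2 * (1 - cf1)
```

For example, two rules firing with CFs 0.4 and 0.5 yield a combined CF of 0.7.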
There has been a lot of research showing a strong need for an antibiotic drug
prescription aid. The basic idea is that lab tests to precisely identify the infection take
too long. This leaves a physician with two choices: (a) to use wide-coverage drugs, which
are typically less effective, or (b) to use disease-specific drugs given some relatively
easy-to-obtain evidence. MYCIN's goal is to help the physician along the second direction.
MYCIN has proven to be very effective in its domain, though some people have
remarked that the domain is quite narrow and has a combinatorial nature ([13]).

Backward chaining easily allows for explanation, and MYCIN is equipped with a
natural language generator for this purpose.
An attempt to use MYCIN for tutoring resulted in creating GUIDON, a MYCIN-based
tutoring system. While successful, the system left considerable room for
development. Further research resulted in creating many other systems such as
NEOMYCIN, Minerva, etc. Refer to the Minerva history in section 4.2.2, p.76 for further
details.

4.1.2 NEOMYCIN
NEOMYCIN was one of the MYCIN extensions [14]. One of the major changes was
creating an explicit strategy layer. Parts of this layer were extracted from MYCIN's
domain knowledge, where they were represented implicitly (e.g. encoded in the order of
premises). Other parts were created from scratch.
Control knowledge in NEOMYCIN is based on the concept of a task. The tasks are
organized hierarchically, with strategy (a.k.a. meta) rules representing transitions between
them.
This framework, while effective for medical diagnosis problem-solving and tutoring,
turned out to be not so effective for critiquing and apprenticeship learning ([11]).
This shortcoming of NEOMYCIN eventually resulted in the creation of Minerva.

4.2 Blackboard Expert Systems


4.2.1 Guardian
Guardian is a blackboard-oriented real-time control expert system for the domain of
intensive-care unit monitoring and casualty response [15]. At a high level the system
consists of:
1. Perceptual preprocessor that acquires information from the sensors and performs
certain preprocessing, filtering, etc. to off-load the reasoning system.

2. Reasoning system constitutes the largest part of Guardian. It is centered around
the blackboard containing knowledge, reasoning results, and the cognitive state.
The reasoning process has two types of reactions:
(a) fast reflex reactions that don't really require reasoning but rather execute
certain actions in response to critical findings;
(b) slow cognitive reactions that actually involve opportunistic reasoning.
Guardian's knowledge contains domain knowledge and reasoning or strategy
knowledge. Reasoning results include intermediate and final products of the reasoning
tasks: findings, hypotheses, diagnoses, predictions, plans, etc. Finally, the cognitive
state consists of an event buffer containing asynchronous incoming findings
and cognitive events produced by reasoning; an agenda holding executable reasoning
operations; and the next operation. The system works in cycles involving the
concept of a control plan.
3. Action systems control the execution of the actions. Guardian is capable of manip-
ulating ventilation settings, recommending other interventions, etc.
The system has been tested in a simulated environment with a sophisticated physical
patient model and has shown impressive performance. Specifically, it outperformed ICU
nurses and physicians in prompt diagnosis and proper response on numerous realistically
simulated scenarios ([16]).
However, despite impressive problem-solving performance, the system currently lacks
some important features including:
1. natural language explanations of its behavior;
2. critiquing facility;
3. non-apprenticeship type tutoring.

4.2.2 Minerva Family
Minerva-2 ([18]) and Minerva-3 ([11]) were developed at the Knowledge-Based
Research Group at the University of Illinois at about the same time. Both of them are
based on ProHC, which in turn was based on Proneo. Proneo was mainly a Prolog
reimplementation of NEOMYCIN. The Minerva-2 and Minerva-3 projects, however, pursued
slightly different research directions. Minerva-2 research focused on scheduling issues as
well as a recursive heuristic classification approach ([10]). The research on Minerva-3, on
the other hand, focused on the strategy layer and its reusability for critiquing.
Just as the transition from MYCIN to NEOMYCIN resulted in creating an additional
knowledge layer (strategy knowledge), the transition from NEOMYCIN to Minerva-2,3
resulted in creating one more explicit knowledge layer: scheduling knowledge (see
figure 4.1, p.77). This additional layer has allowed for critiquing and apprenticeship
learning.
Minerva-4 ([20]) was developed on the basis of Minerva-3. With real-time control
domains in mind, all Minerva-3 knowledge layers were completely rewritten and
significantly extended, while the inference engine was considerably modified and
streamlined. This allowed Minerva-4 to be the first expert system of this family to achieve
real-time performance.
However, the rule-based and hand-coded scheduling layer of Minerva-4 didn't allow for
taking full advantage of the blackboard architecture and hindered its performance in
complex scenarios. Minerva-5 was developed on the basis of Minerva-4 by moving away from
domain-independent rule-based scheduling knowledge and substituting it with qualitative
prediction and state evaluation knowledge (figure 4.1, p.77). This refinement improved
dynamic scheduling and thus boosted all of Minerva's features: problem-solving, advising,
and critiquing.

(Figure 4.1 diagram: the evolution of knowledge layers from MYCIN (1969-1979)
through NEOMYCIN (1979-1988), MINERVA-2,3 (1988-1995), and MINERVA-4 (1997)
to MINERVA-5 (1997-1998). Each system couples an inference engine with domain
knowledge; NEOMYCIN adds explicit strategy knowledge, MINERVA-2,3 add
scheduling knowledge, which moves from static to dynamic in MINERVA-4, and
MINERVA-5 replaces it with qualitative prediction and state evaluation knowledge.)

Figure 4.1: Minerva history


4.2.3 HASP
The HASP project ([23]) began in 1972 and was terminated in 1975. However, it was
reinstated in 1976 under the name SIAP. The two major objectives were to demonstrate that AI
techniques can make a significant contribution to the surveillance problem and that the
task is tractable. The domain of HASP/SIAP was multi-sensor detection and recognition
of naval platforms (such as friendly and hostile Navy surface ships and submarines, as
well as civilian fleets). The task of the system was to develop and maintain a situation
board reflecting platform types and movements in a region under surveillance. The inputs
to the system included digitized acoustic data coming from massive hydrophone arrays
and intelligence reports with different degrees of confidence.
The first stage of development was influenced by the plan-generate-test paradigm
of the DENDRAL expert system ([35]). However, the approach was soon abandoned for
the following reasons:
1. The data comes in a continuous stream.
2. The analysis has to be tracked and maintained over time. History plays an important
role.
3. Numerous types of information are relevant but also remote to the process.
The second stage was originally influenced by the seemingly similar domain of the
HEARSAY-II speech recognition blackboard system ([36]). However, the following important
distinctions were soon discovered:
1. The semantics and syntax are ill-defined due to unknown/partially-known enemy
platform specifications.
2. Thus there is no "legal move generator" for the solution space.
3. The system must rely heavily on analytical and heuristic data.
4. Different platforms present different degrees of interest.

Despite those severe difficulties, HASP/SIAP proved to be a successful blackboard
system. At a very high level the architecture consisted of the following modules:
Blackboard represented the current best hypothesis (CBH). CBH was partitioned into
multiple levels, from raw hydrophone spectra to information about whole fleets.
Technically, CBH was organized into an AND-tree with local alternatives. Its nodes
were called "hypothesis elements".
Knowledge sources operated on the information in the blackboard at various levels.
The knowledge sources were rule-based implementations of the model-driven
approach.
Control data was used to direct the problem-solving process and consisted of:
Event list recording changes to the blackboard. One of the changes was selected
and represented the "attention focus".
Expectation list carrying some expected data (e.g. acoustic signatures of platforms
anticipated from intelligence reports). Every once in a while, the list
was searched to see if the data had arrived.
Problem list representing currently open problems as well as missing and desired
information. Example: a knowledge source could post a currently unavailable
piece of information that would increase the confidence in some hypothesis if
it were available.
Clock-event list scheduling execution of certain rules.
History list logging system activity; it was used for explanation and debugging
purposes.
Control modules were rule-based and consisted of:
Strategy KS deciding which category of control knowledge to focus on next.
Control managers (one for each of the control knowledge categories) selecting
which datum in the category to focus on. Focusing on a control knowledge
datum led to executing domain-level knowledge sources and thus updating the
blackboard.
While both are blackboard-based systems operating in a dynamic real-world domain,
Minerva-5 and HASP have a number of distinctions, including:
1. HASP is primarily concerned with the task of situation awareness and dynamic
classification. Minerva's functions are much wider and, in addition, include casualty
response, NL advising of a human problem-solver, NL critiquing, and performance
measurement.
2. Minerva actively uses non-rule-based knowledge such as Artificial Neural
Networks and Petri Nets.
3. Minerva's environment often changes very rapidly, and a datum often comes only
once. In HASP the environment changes are relatively slow and the data readings
are quite repetitive.
4. HASP control knowledge was the first attempt to separate control and domain-level
knowledge. It was still kept rule-based and was processed in a special way. Minerva-5
employs a sophisticated qualitative simulation and state evaluation scheduler.

Chapter 5
Thesis Contributions and
Conclusions
5.1 Contributions
This section summarizes the theoretical and practical contributions of the author
described in this thesis. In that way it differs from section 5.2, p.83, which provides
a comprehensive project summary.

5.1.1 Theoretical Contributions


This thesis presents the following novel contributions to the area of Artificial Intelligence,
made while designing the Minerva-5 framework:

1. Extending classical blackboard system cycle operation (deliberate-schedule-execute)
([18, 11, 20]) by refining the rule-based domain-independent scheduling
knowledge layer. The refinement involves employing different AI qualitative prediction
and classification methods (Petri Nets, Artificial Neural Networks, Decision Trees)
to reach expert-level performance in challenging real-world and real-time domains.
Refer to section 3.1.2, p.12 for further details.

2. Extending classical Petri Nets ([5]), typically used for modeling parallel systems'
behavior at a logical level. The extended formalism (Extended Petri Nets
(EPNs)) allows for beyond-logical-level high-quality qualitative prediction while
still being computationally efficient (section 3.2.4, p.21).
3. Devising a static state evaluator to work together with the predictor as a
part of the new scheduling layer. The design is simple in concept, computationally
efficient, and easily inducible from a handful of scenarios in an unsupervised manner
(section 3.2.5, p.23).
4. Devising performance measuring and critiquing mechanisms to extend
Minerva functionality (section 3.3.4, p.30).
5. Analyzing the complexity of the deliberation module. The analysis shows
the deliberation scheme to be linear in the size of the domain blackboard and
thus computationally feasible (section 3.5, p.45).
6. Showing equivalence to the Turing Machine, which proves that the Minerva
framework and its deliberation mechanism are indeed a universal computational device
(section 3.6, p.53).

5.1.2 Practical Contributions


The project resulted in creating a working expert system, Minerva-DCA, and thus made a
number of practical contributions, including:
1. Redesigning Minerva-3 code to support dynamic domains and real-time
performance. This process resulted in creating Minerva-4 ([20]), which reached
expert-level performance in the DCTrain environment ([28]).
2. Directing the efforts of the Intelligent Reasoning, Critiquing, and Learning KBS
subgroup on Minerva-5 and Minerva-DCA implementation, including the following

(a) domain knowledge layer;
(b) strategy knowledge layer;
(c) EPN predictor layer;
(d) state evaluator layer (inductive learning setup, trace collection, classifier (ANNs,
K-maps, C5.0) implementation);
(e) utility computation module;
(f) ODBC-based interface to the DCTrain environment;
(g) explanatory GUI;
(h) advising GUI;
(i) critiquing GUI;
(j) performance measure utility.
3. Setting up Minerva-DCA as an instructor aid in the DCTrain training
environment (section 3.8.2, p.60).
4. Setting up Minerva-5 as a foundation of the Situation Awareness and
Casualty Response system in the DC-ARM project (section 3.8.3, p.63).

5.2 Conclusions
5.2.1 Thesis Summary
This thesis describes a research project on building a versatile expert system shell and
its applications. The system presented, Minerva-5, is an attempt to use blackboard
architectures for dynamic control, advising, and critiquing.
The concepts of blackboard and deliberate-schedule-execute cycle operation have been
well-known and exploited in different areas ([18], [11], [20]). Our work extends this
approach by refining the scheduling stage into qualitative prediction and state evaluation
substages. This refinement turns out to improve all Minerva-5 functions (control, advising,
and critiquing).
In order to support the extended framework, we found it efficient and convenient to
employ various AI paradigms (such as rule-based reasoning, Petri Nets, artificial neural
networks) as knowledge sources of an integrating blackboard architecture. Such a setup
naturally provides for multiple levels of abstraction and opportunistic reasoning ([21])
and thus efficiently addresses different reasoning subtasks (such as classification, predictive
simulation, and scheduling). It also facilitates low-cost explanatory and critiquing
facilities which normally come at a high expense.
As a mathematical analysis shows, the Minerva-5 deliberation process is computationally
feasible while Minerva is a universal computational device (i.e. equivalent to
the Turing Machine).
The framework has been tested in the domain of Damage Control on the Navy
battleships and achieved expert-level performance in the simulated environment DCTrain
([28]).

5.2.2 Future Research Directions


Minerva-5, while being a fairly successful system, opens an exciting area for further
research, including the following(1):
1. Context-based attention focus would prioritize incoming data according to the
current situation, thus allowing us to process the most vital findings first. That would
help in intensive scenarios where the data flow is too high to process in real time.
2. Pattern recognizer would go through the flow of incoming data and extract
important temporal patterns such as "temperature is rising", "firemain pressure
is dropping", etc. By extending the finding vocabulary with those higher-order
findings we should be able to increase Minerva performance, for it will "see" more.
(1) Some of those directions have been researched in [16].

3. Certain findings might indicate vital events that require immediate response.
Minerva's reasoning mechanism might not have enough time for inference. Thus it
might be advisable to have an additional "fast-reflex" mechanism with hardwired
reactions.
4. Importance/utility of a finding or a hypothesis is generally not constant but changes
over time. In our approach we have used an Extended Petri Nets predictor and a
board evaluator to assess the importance of the findings and hypotheses. An alternative
approach would be to use a partially hardwired module for that task. Such a module,
called TRM (Temporal Resource Manager), has been developed at KBS ([24]). A
comparative performance analysis of those two approaches would be of interest.
5. Currently Minerva-5 critiquing and performance measuring is based on action
matching (section 3.3.4, p.30). This approach is computationally efficient, but
rather limited to scenarios with a narrow set of solution paths. In other words, the
quality of the critique delivered drops if there are radically different but equally good
solution paths. Alternative approaches include:
(a) Evaluating the subject's actions using the EPN predictor and the state evaluator
as described in section 2.2.3, p.7. Although more graceful with multiple
solution paths, this approach is still subject to the same problem (section 2.2.4,
p.9).
(b) Inferring the subject's goals and intentions and using them to critique. This
approach seems to be closer to what human instructors use; however, it is also
subject to a number of problems and complications. Some aspects of
this approach for the Navy Damage Control domain have been researched in
[33].
(c) Perhaps better critiquing performance could be achieved by a hybrid
approach that uses action-matching-based critique when applicable and tries to
infer the subject's goals when possible.
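The pattern recognizer proposed in direction 2 above could, for example, be a simple sliding-window trend detector over recent sensor readings. A toy sketch follows; all names and the window size are ours, not part of the Minerva-5 design:

```python
def detect_trend(readings, k=3):
    """Label the last k readings as 'rising', 'dropping', or 'steady'.

    readings -- chronological list of numeric sensor values
    """
    window = readings[-k:]
    diffs = [b - a for a, b in zip(window, window[1:])]
    if diffs and all(d > 0 for d in diffs):
        return "rising"        # e.g. "temperature is rising"
    if diffs and all(d < 0 for d in diffs):
        return "dropping"      # e.g. "firemain pressure is dropping"
    return "steady"
```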

Appendix A
DCA Doctrines
The following illustrations reflect a subset of DCA responsibilities at a very high level.
They are reproduced here to allow the reader to appreciate the complexity of the domain.
We appreciate S. Ramachandran's help in bringing them to KBS.

Figure A.1: DCA's responsibilities on setting GQ

Figure A.2: DCA's responsibilities on investigation and setting FBs

Figure A.3: DCA's responsibilities on handling fire progress

Figure A.4: DCA's responsibilities on managing pressure drop on fire main

Appendix B
Minerva-DCA knowledge layers
The following are the Minerva-DCA knowledge layers.

B.1 Domain Layer


B.1.1 Domain Facts
These facts describe the domain lexicon of Minerva.

%% FINDINGS

is_finding([alarm,AlarmType,Where,Time]).
is_finding([fire_report,From,Where,FireClass,Time]).
is_finding([fbs_report,From,SA,PA,PF,SF,Pbelow,Pabove,Time]).
is_finding([ff_progress,From,Where,Time]).
is_finding([fire_out,From,Where,Time]).
is_finding([permission_flood_granted,Station,Space,Time]).
is_finding([no_pers_avail,Station,Time]).
is_finding([mrzp,Station,Time]).
is_finding([mrzs,Station,Time]).
is_finding([request_mrz,Time]).
is_finding([granted_start_pump, Station, Time]).

%% REDFLAG (IMPORTANT) FINDINGS

redflag([alarm,_,_,_]).
redflag([fire_report,_,_,_,_]).

redflag([no_pers_avail,_,_]).
redflag([request_mrz,_]).
redflag([mrzs,dcco,_]).
redflag([fire_out,_,_,_]).

%% HYPOTHESES

is_hypothesis([fire,Where,FireClass,Status,Time]).
is_hypothesis([mrzs,Time]).

%% REDFLAG (IMPORTANT) HYPOTHESES

redflag([fire,_,_,discovered,_]).

%% ACTIONS

is_action([invest_f,Station,Where]).
is_action([fight_fire,Station,Where]).
is_action([setfb,Station,Where,SA,PA,PF,SF,Pbelow,Pabove]).
is_action([flood, Station, Space]).
is_action([request_permission_flood, Station, Space]).
is_action([report_mrzp]).
is_action([report_mrzs]).
is_action([request_mrz_status,Station]).
is_action([thmrz]).
is_action([request_start_fp, Station]).
is_action([start_fp, Station, FP]).
is_action([close_valve, Station, Sys, Valve]).
is_action([open_valve, Station, Sys, Valve]).

%% FINDINGS SOURCES

source([fire_report,StationR,Where,FireClass,Time],[invest_f,Station,Where]) :-
ja(Where,Station).

%% source([fbs_report,StationR,SA,PA,PF,SF,PBelow,PAbove,Time],
%% [setfb,Station,SA,PA,PF,SF,PBelow,PAbove]) :-
%% fb(Where,SA,PA,PF,SF,PBelow,PAbove),
%% ja(Where,Station).

source([permission_flood_granted,Station,Space,Time],
[request_permission_flood,co,Space]).

%% bother the stations with status only if the CO asks for it a 2nd time

source([mrzs,Station,Time], [request_mrz_status,Station]) :-
satisfied([request_mrz,Time1],800),
satisfied([request_mrz,Time2],800),
Time1 \= Time2.

source([granted_start_pump, eoow,Time],[request_start_fp, eoow]).
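As an illustrative query over the lexicon above (a sketch, assuming the B.1.1 facts are loaded; the predicate redflag_finding_templates/1 is not part of the shipped knowledge base):

```prolog
%% Sketch: collect every finding template declared above that is
%% also a red flag. Assumes the facts of section B.1.1 are loaded.
redflag_finding_templates(Templates) :-
    findall(F, (is_finding(F), redflag(F)), Templates).
```

Such queries are how the strategy layer decides which findings deserve immediate forward and backward chaining.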

B.1.2 Domain Rules


Domain rules encode the ground-level Navy doctrines.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Normal Rules: populate the BB with new hypotheses
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% R1010: fire report gives 818 CF to the fire discovered hypothesis

ccf(r1010,1,1,[fire_report,From,Where,FireClass,Time],800,
[fire,Where,FireClass,discovered,Time],818,[]).
ccb(r1010,1,1,[fire_report,From,Where,FireClassRep,TimeRep],800,
[fire,Where,FireClass,Status,Time],818).

%% R1012: fire alarm gives 666 CF to the fire discovered hypothesis

ccf(r1012,1,1,[alarm,fire,Where,Time],800,[fire,Where,FireClass,
discovered,Time],666,[]).
ccb(r1012,1,1,[alarm,fire,Where,TimeAlarm],800,[fire,Where,FireClass,
Status,Time],666).

%% R1020: fight fire as soon as the fire is discovered

ccf(r1020,1,1,[fire,Where,FireClass,discovered,TimeD],800,
[fight_fire,Station,Where],818,[]) :-
\+ vital_space(Where),
ja(Where,Station).

%% R1021: setfb if a fire is discovered

ccf(r1021,1,1,[fire,Where,FireClass,discovered,TimeD],800,
[setfb,Station,Where,SA,PA,PF,SF,PBelow,PAbove],818,[]) :-
fb(Where,SA,PA,PF,SF,PBelow,PAbove),

\+ vital_space(Where),
ja(Where,Station).

%% R1022: When fire is discovered and an ff_progress report
%% comes, post the "fire fought" hypothesis

ccf(r1022,1,2,[ff_progress,Station,Where,Time],800,
[fire,Where,FireClass,fought,Time],818,[Where,FireClass,Time]).
ccf(r1022,2,2,[fire,Where,FireClass,discovered,TimeD],800,
[fire,Where,FireClass,fought,Time],818,[Where,FireClass,Time]).

%% R1030: When the fire is being tackled and a "fire out" report
%% comes, we assert the "fire out" hypothesis

ccf(r1030,1,2,[fire_out,Station,Where,Time],800,
[fire,Where,FireClass,out,Time],818,[Where,FireClass,Time]).
ccf(r1030,2,2,[fire,Where,FireClass,out,TimeFF],no,
[fire,Where,FireClass,out,Time],818,[Where,FireClass,Time]).

%% R1040: no personnel available message
%% results in trying another Repair Locker

ccf(r1040,1,1,[no_pers_avail,Station,_],800,[fight_fire,StationNew,Where],
818,[]) :-
ordered([fight_fire,Station,Where],_),
repair_locker(StationNew),
\+ satisfied([no_pers_avail,StationNew,Time1],800),
!.

%% R1050: Instead of fighting the fire, flood the space if it is vital and on fire

ccf(r1050,1,2,[fire,Space,FireClass,discovered,TimeD],800,
[flood,Station,Space],818,[Space,Station]) :-
vital_space(Space),
ja(Space,Station).

ccf(r1050,2,2,[permission_flood_granted,StationG,Space,Time],
800,[flood,Station,Space],818,[Space,Station]).

ccb(r1050,1,2,[fire,Space,FireClass,discovered,TimeD],800,
[flood,Station,Space],818) :-
vital_space(Space),
ja(Space,Station).

ccb(r1050,2,2,[permission_flood_granted,StationG,Space,Time],800,
[flood,Station,Space],818) :-

vital_space(Space),
ja(Space,Station).

%% R1052: request permission to flood as soon
%% as a vital space is suspected of a fire

ccf(r1052,1,1,[fire,Space,FireClass,discovered,TimeD],600,
[request_permission_flood,co,Space],818,[]) :-
vital_space(Space).

%% R1060: If MRZ is set throughout the ship and the CO wants it, give it to him

ccf(r1060,1,2,[request_mrz,Time],800,[report_mrzs],818,[]).
ccf(r1060,2,2,[mrzs,Time1],800,[report_mrzs],818,[]).

ccb(r1060,1,2,[request_mrz,Time],800,[report_mrzs],818).
ccb(r1060,2,2,[mrzs,Time1],800,[report_mrzs],818).

%% R1062: otherwise report MRZ in progress

ccf(r1062,1,2,[request_mrz,Time],800,[report_mrzp],818,[]).
ccf(r1062,2,2,[mrzs,Time1],no,[report_mrzp],818,[]).

%% R1064: [mrzs,_] holds when all the stations have checked in

ccf(r1064,1,9,[mrzs,r2,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,2,9,[mrzs,r3,TimeR3],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,3,9,[mrzs,r5,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,4,9,[mrzs,aftbds,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,5,9,[mrzs,fwdbds,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,6,9,[mrzs,bridge,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,7,9,[mrzs,eng,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,8,9,[mrzs,dcco,_],800,[mrzs,TimeR3],818,[TimeR3]).
ccf(r1064,9,9,[mrzs,csmc,_],800,[mrzs,TimeR3],818,[TimeR3]).

ccb(r1064,1,9,[mrzs,r2,_],800,[mrzs,TimeR3],818).
ccb(r1064,2,9,[mrzs,r3,_],800,[mrzs,TimeR3],818).
ccb(r1064,3,9,[mrzs,r5,_],800,[mrzs,TimeR3],818).
ccb(r1064,4,9,[mrzs,aftbds,_],800,[mrzs,TimeR3],818).
ccb(r1064,5,9,[mrzs,fwdbds,_],800,[mrzs,TimeR3],818).
ccb(r1064,6,9,[mrzs,bridge,_],800,[mrzs,TimeR3],818).
ccb(r1064,7,9,[mrzs,eng,_],800,[mrzs,TimeR3],818).
ccb(r1064,8,9,[mrzs,dcco,_],800,[mrzs,TimeR3],818).
ccb(r1064,9,9,[mrzs,csmc,_],800,[mrzs,TimeR3],818).

%% If MRZ was already given to the captain but he
%% keeps bugging the DCA tell him to take a hike

ccf(r1066,1,1,[request_mrz,Time],800,[thmrz],818,[]) :-
ordered([report_mrzs],TimeR),
earlier(TimeR,Time).

%% R1070: Start the appropriate FM FP when there is an FM
%% low pressure alarm and permission granted to start one

ccf(r1070,1,2,[alarm,fm_low_pressure,Where,Time],800,
[start_fp, dcco, FP],818,[Where,FP]) :-
choose_fp(Where,FP).
ccf(r1070,2,2,[granted_start_pump, eoow, Time1],800,
[start_fp, dcco, FP],818,[Where,FP]).

ccb(r1070,1,2,[alarm,fm_low_pressure,Where,Time],800,
[start_fp, dcco, FP],818) :-
choose_fp(Where,FP).
ccb(r1070,2,2,[granted_start_pump, eoow, Time1],800,
[start_fp, dcco, FP],818).

%% R1080: Close open zebra valves when DCCo reports zebra set on FM

ccf(r1080,1,1,[mrzs,dcco,Time],800,[close_valve, Station,
firemain, Valve],818,[]) :-
zebra_fm_valve(Valve),
fm_valve(Valve,Space,open),
ja(Space,Station).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Removal Rules -- KEEP THE BB CLEAN
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% REM1011: fire report and fire discovered hypothesis
%% remove fire report

ccm(rem1011,1,2,[fire_report,From,Where,FireClass,TR],800,
[fire_report,From,Where,FireClass,TR],remove,[From,Where,FireClass]).
ccm(rem1011,2,2,[fire,Where,FireClass,discovered,_],800,
[fire_report,From,Where,FireClass,TR],remove,[From,Where,FireClass]).

%% REM1014: fire alarm and fire discovered hypothesis
%% remove fire alarm

ccm(rem1014,1,2,[alarm,fire,Where,TA],800,
[alarm,fire,Where,TA],remove,[Where,TA]).
ccm(rem1014,2,2,[fire,Where,FireClass,discovered,_],800,

[alarm,fire,Where,TA],remove,[Where,TA]).

%% REM1013: "fire discovered" hypothesis with 800 CF
%% removes "fire discovered" hypothesis with
%% less than 800 CF

ccm(rem1013,1,1,[fire,Where,FC,discovered,_],800,
[fire,Where,FireClass,discovered,Time],remove,[]) :-
hypothesis([fire,Where,FireClass,discovered,Time],CF),
CF < 800.

%% REM1024: When fire is discovered and ff_progress
%% report comes remove "fire discovered" hypothesis

ccm(rem1024,1,2,[fire,Where,FireClass,fought,Time],800,
[fire,Where,FireClass,discovered,TimeD],remove,[TimeD,Where,FireClass]).
ccm(rem1024,2,2,[fire,Where,FireClass,discovered,TimeD],800,
[fire,Where,FireClass,discovered,TimeD],remove,[TimeD,Where,FireClass]).

%% REM1032: "fire out" and "fire fought" hypotheses


%% cause removal of "fire fought" hypothesis

ccm(rem1032,1,2,[fire,Where,FireClass,out,Time],800,
[fire,Where,FireClass,fought,TimeFF],remove,[Where,FireClass,TimeFF]).
ccm(rem1032,2,2,[fire,Where,FireClass,fought,TimeFF],800,
[fire,Where,FireClass,fought,TimeFF],remove,[Where,FireClass,TimeFF]).

%% REM1034: "fire out" hypothesis removes "FBs set" report

ccm(rem1034,1,2,[fire,Where,FireClass,out,Time1],800,
[fbs_report,Station,SA,PA,PF,SF,Time],remove,[Where,FireClass,Time,Station]).
ccm(rem1034,2,2,[fbs_report,Station,SA,PA,PF,SF,Time],800,
[fbs_report,Station,SA,PA,PF,SF,Time],remove,[Where,FireClass,Time,Station]).

%% REM1036: "fire out" hypothesis removes "FF in progress" report

ccm(rem1036,1,2,[fire,Where,FireClass,out,Time1],800,
[ff_progress,Station,Where,Time],remove,[Where,FireClass,Time,Station]).
ccm(rem1036,2,2,[ff_progress,Station,Where,Time],800,
[ff_progress,Station,Where,Time],remove,[Where,FireClass,Time,Station]).

%% REM1038: "FF in progress" report removes "fight fire" order

ccm(rem1038,2,2,[ff_progress,Station1,Where,Time],800,
[fight_fire,Station,Where],remove,[Station,Where]).

%% REM1040: "fire out" hypothesis removes "fire out" report

ccm(rem1040,1,2,[fire,Where,FireClass,out,Time1],800,
[fire_out,Station,Where,Time],remove,[Where,Time,Station]).
ccm(rem1040,2,2,[fire_out,Station,Where,Time],800,
[fire_out,Station,Where,Time],remove,[Where,Time,Station]).

%% REM1056/REM1058: report_mrzs hypothesis removes all mrzs,
%% mrzp, request_mrz findings as well as report_mrzp hypothesis

ccm(rem1056,1,2,[report_mrzs],800,
[mrzs,Station,Time],remove,[Time,Station]).
ccm(rem1056,2,2,[mrzs,Station,Time],800,
[mrzs,Station,Time],remove,[Time,Station]).

ccm(rem1058,1,2,[report_mrzs],800,
[mrzp,Station,Time],remove,[Time,Station]).
ccm(rem1058,2,2,[mrzp,Station,Time],800,
[mrzp,Station,Time],remove,[Time,Station]).

%% Remove 'permission granted to start a FP' when one is started

ccm(rem1070,1,2,[granted_start_pump, eoow, Time1],800,
[granted_start_pump, eoow, Time1],remove,[FP,Time1,Time2]).
ccm(rem1070,2,2,[start_fp, dcco, FP],800,
[granted_start_pump, eoow, Time1],remove,[FP,Time1,Time2]) :-
ordered([start_fp, dcco, FP],Time2),
earlier(Time1,Time2).

B.1.3 Domain Graph


Domain rules and facts can also be represented graphically in the form of the domain
graph (see section 3.4.2.4, p.33 for the definition). The domain graph for Minerva-5 is
shown in figures B.1, p.97, and B.2, p.98.

B.2 Strategy Layer


The following are the strategy metarules which guide the application of domain rules
in building strategy chain networks.

[Figure B.1 survives here only as graph-label residue. The recoverable content: findings
(fire_report, alarm,fire, ff_progress, fire_out, no_pers_avail, permission_to_flood_granted),
hypotheses (fire discovered, fought, out), and actions (invest_f, fight_fire, setfb, flood,
request_permission_to_flood), connected by rules r1010 through r1050. The legend
distinguishes hypothesis, action, and datum nodes as well as source, forward, and
forward-and-backward edges.]

Figure B.1: Domain Subgraph (part 1)

[Figure B.2 survives here only as graph-label residue. The recoverable content: findings
(fm_low_pressure alarm, granted_start_pump, request_mrz, and per-station mrzs for r2, r3,
r5, aftbds, fwdbds, bridge, eng, dcco, csmc), the mrzs hypothesis, and actions (report_mrzp,
report_mrzs, request_start_fp, start_fp, close_valve(zebra), and the request_mrz_status
actions), connected by rules r1060 through r1080. The legend is as in figure B.1.]

Figure B.2: Domain Subgraph (part 2)

:- dynamic rule_fired/3.
:- dynamic finding/2.
:- dynamic ordered/2.
:- dynamic hypothesis/2.

%% ------------------------- PROCESS FINDINGS -----------------------------

always_expand(process_finding(X)).

%% for every posted redflag finding do backward chaining

mr(pf1,process_finding(F),applyrule_backward(Rule,Hyp)) :-
finding(F,_),
redflag(F),
ccb(Rule,N,M,F,CF,Hyp,CFC),
satisfied(F,CF).

%% for every posted redflag finding do forward chaining

mr(pf2,process_finding(F),applyrule_forward(Rule,Hyp)) :-
finding(F,_),
redflag(F),
ccf(Rule,N,M,F,CF,Hyp,CFC,UL),
satisfied(F,CF).

%% ---------------------- PROCESS HYPOTHESIS ---------------------

always_expand(process_hypothesis(X)).

%% for every posted redflag hypothesis do backward chaining

mr(prh1,process_hypothesis(H),applyrule_backward(Rule,H1)) :-
hypothesis(H,_),
redflag(H),
ccb(Rule,N,M,H,CF,H1,CFC),
satisfied(H,CF).

%% for every posted redflag hypothesis also do forward chaining

mr(prh2,process_hypothesis(H),applyrule_forward(Rule,H1)) :-
hypothesis(H,_),
redflag(H),
ccf(Rule,N,M,H,CF,H1,CFC,UL),
satisfied(H,CF).

%% ----------------------- EXPLORE HYPOTHESIS --------------------

always_expand(explore_hypothesis(H)).

%% Pursue focus differential hypotheses

mr(eh1,explore_hypothesis(Hyp),pursue_hypothesis(Hyp)) :-
focus_differential(Hyp),
value(Hyp,unknown).

%% --------------------------- APPLY RULE BACKWARD ---------------------

mr(ab1,applyrule_backward(Rule,Hyp),findout(Finding)) :-
ccb(Rule,_,_,Finding,_,Hyp,_),
\+ concluded(Finding).

mr(ab2,applyrule_backward(Rule,Hyp),applyrule_forward(Rule,Hyp)) :-
ccf(Rule,N1,M1,F,CF,Hyp,CFC,UL),
\+ rule_applied(Rule,Hyp).

%% ----------------------- APPLY RULE FORWARD ----------------------------

%% AF1: Conclude rule-fired if all the premises are satisfied

mr(af1,applyrule_forward(Rule,Hyp),conclude(rule_fired(Rule,Hyp,CFC))) :-
satisfied_rp(Rule,Hyp,CFC),
is_hypothesis(Hyp),
\+ rule_applied(Rule,Hyp).

%% AF2: Post an action if the rule implies it and all the premises are met

mr(af2,applyrule_forward(Rule,Action),perform(Action,CFC)) :-
satisfied_rp(Rule,Action,CFC),
is_action(Action),
\+ ordered(Action,_).

%% ------------------ REMOVAL STRATEGY OPERATORS ---------------------

always_expand(remove_datum(R,D)).

mr(rd1,remove_datum(Rule,Datum),remove(Datum)) :-
finding(Datum,_),
satisfied_rp_rem(Rule,Datum,remove).

mr(rd2,remove_datum(Rule,Datum),remove(Datum)) :-
hypothesis(Datum,_),
satisfied_rp_rem(Rule,Datum,remove).

%% ---------------- ADJUSTING HYPOTHESIS CONFIDENCE FACTORS -------------

adjust_cf(Hyp,Delta) :-
hypothesis(Hyp,Old),!,
New is Old + Delta,
retractall(hypothesis(Hyp,_)),
assert(hypothesis(Hyp,New)).

adjust_cf(Hyp,Delta) :-
assert(hypothesis(Hyp,Delta)).

rule_applied(Rule,Hyp) :-
rule_fired(Rule,Hyp,_).

%% ------------------- FINDOUT -----------------------

%% FO1: to find out a hypothesis test it

mr(fo1,findout(H),test_hypothesis(H)) :-
is_hypothesis(H),
value(H,unknown).

%% FO2: if we want to find out a finding and there is a
%% source for it, go ahead and execute the
%% information collection action

mr(fo2,findout(F1),perform(A,800)) :-
is_finding(F1),
source(F1,A),!,
\+ ordered(A,_).

%% FO3: if we want to find out a finding and there is no
%% source for it, then look if it is already posted

mr(fo3,findout(Param),lookup(Param)) :-
is_finding(Param),
\+ source(Param,A),!,
fstatus(Param,_,unknown).

%% ---------------------- PURSUE HYPOTHESIS -----------------------

mr(ph1,pursue_hypothesis(Hyp),test_hypothesis(Hyp)).

%% ----------------------- TEST HYPOTHESIS ---------------------------

mr(th1,test_hypothesis(Hyp),applyrule_backward(Rule,Hyp)) :-
ccb(Rule,N,M,F,CF,Hyp,CFC),
value(Hyp,unknown),
\+ rule_applied(Rule,Hyp).

%% ---------------------------------------------------------

%% Checks if a parameter (finding/hypothesis) is
%% satisfied given the CF required

satisfied(Param,CF) :-
number(CF),
hypothesis(Param,CF1),
CF1 >= CF.

satisfied(Param,CF) :-
number(CF),
finding(Param,CF1),
CF1 >= CF.

satisfied(Param,no) :-
\+ hypothesis(Param,_).

satisfied(Param,no) :-
hypothesis(Param,CF),
CF =< -800.

satisfied(Param,no) :-
\+ finding(Param,_).

satisfied(Param,no) :-
finding(Param,CF),
CF =< -800.
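As a worked illustration of the clauses above (the blackboard contents here are hypothetical, chosen only to exercise each case):

```prolog
%% Hypothetical blackboard contents:
%%   finding([alarm,fire,space21,100], 820).
%%   hypothesis([fire,space21,alpha,discovered,100], 500).
%%
%% Then:
%%   satisfied([alarm,fire,space21,100], 800)            succeeds (820 >= 800)
%%   satisfied([fire,space21,alpha,discovered,100], 800) fails    (500 < 800)
%%   satisfied([flooded,space21,100], no)                succeeds (never asserted)
```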

%% Checks if all premises have been satisfied. Instantiates C and CFC.


%% The Unification List (UnifList) MUST contain the following
%% 1) all variables in premises that have to be unified
%% 2) all variables in the conclusion that also occur in the premises

%% Single clause rules (N=M=1) could have UnifList=[]

:- dynamic ub/1.

satisfied_rp(Rule,C,CFC) :-
ccf(Rule,_,_,_,_,C,CFC,UnifList),
retractall(ub(_)),
assert(ub(UnifList)),
forall(
ccf(Rule,_,_,Finding,CF,C,CFC,UnifList),
(
satisfied(Finding,CF),
ub(UnifList),
retract(ub(_)),
assert(ub(UnifList))
)
),
ub(UnifList),
ccf(Rule,_,_,F,CF,C,CFC,UnifList), %% Those two lines are added
satisfied(F,CF), %% solely to support empty
%% Unifying List for a single clause rule
retract(ub(_)).

%% Checks if all premises have been satisfied. Instantiates C and CFC.


%% The Unification List (UnifList) MUST contain the following
%% 1) all variables in premises that have to be unified
%% 2) all variables in the conclusion that also occur in the premises
%% Single clause rules (N=M=1) could have UnifList=[]

satisfied_rp_rem(Rule,C,CFC) :-
ccm(Rule,_,_,_,_,C,CFC,UnifList),
retractall(ub(_)),
assert(ub(UnifList)),
forall(
ccm(Rule,_,_,Finding,CF,C,CFC,UnifList),
(
satisfied(Finding,CF),
ub(UnifList),
retract(ub(_)),
assert(ub(UnifList))
)
),
ub(UnifList),
ccm(Rule,_,_,F,CF,C,CFC,UnifList), %% Those two lines are added
satisfied(F,CF), %% solely to support empty
%% Unifying List for a single clause rule
retract(ub(_)).

differential(Hyp) :-
hypothesis(Hyp,CF),
CF >= 200 .

focus_differential(Hyp) :-
hypothesis(Hyp,CF),
CF >= 500 .

concluded(X) :-
finding(X,CF).

concluded(X) :-
hypothesis(X,CF).

value(X,yes) :-
hypothesis(X,CF),
CF >= 800.

value(X,no) :-
hypothesis(X,CF),
CF =< -800.

value(X,unknown) :-
\+ hypothesis(X,_).

value(X,unknown) :-
hypothesis(X,CF),
CF < 800,
CF > -800.

%% Finding F has been asserted

fstatus(F,A,known) :-
finding(F,CF).

fstatus(F,A,unknown) :-
\+ fstatus(F,A,known).

%% Finding F has not been asserted but is expected
%% to come since action A (which is a source of F)
%% has been ordered

fstatus(F,A,expected) :-
ordered(A),
source(F,A),
\+ fstatus(F,A1,known).

B.3 Extended Petri Nets Predictor
Our Prolog EPN predictor code is given below.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%epn.pl
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Some definitions:
%
%PLACE:
%place(SubnetID,PlaceID,WordDescription)
%
%MARKING:
%marking(SubnetID,PlaceID,ListOfTokens)
%
%TOKEN:
%Token is a list of the form: [TimeB,TimeE,[[LabelType1,
% Label1],...]]
%
%TRANSITION:
%transition(SubnetID,TransitionID,TimeMin,TimeMax,
% WordDescription,EnablingPlacesList,
% PropagationPlacesList)
%
%where:
% 'EnablingPlacesList' is defined as:
% [[TokenLabelTypeToMatch1, [EdgeType1,PlaceID1],...],
% [TokenLabelTypeToMatch2,..]]
%
%and 'PropagationPlacesList' is just:
% [[EdgeType1,PlaceID1],...]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
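To make the token encoding concrete, here is a hypothetical marking and its reading (the subnet, place, and space names are assumptions for illustration only):

```prolog
%% One token in place 'fire' of subnet 'fire': a fire is predicted
%% to hold in space21 within the time window [150,300] seconds;
%% the token carries a single label of type 'space'.
%% marking(fire, fire, [[150, 300, [[space, space21]]]]).
```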

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%initializes any constants
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
epn_initialize :-
% spy(edge_spread_fire/2),
%vital spaces
retractall(vital_spaces_marking(_)),
assert(vital_spaces_marking([])),

forall(
vital_space(Space),
(
vital_spaces_marking(List),
retractall(vital_spaces_marking(_)),
append([[0,0,[[space,Space]]]],List,New_List),
assert(vital_spaces_marking(New_List))
)
),

%explosive compartments
retractall(explosive_comp_marking(_)),
assert(explosive_comp_marking([])),
forall(
flammable(Space),
(
explosive_comp_marking(List),
retractall(explosive_comp_marking(_)),
append([[0,0,[[space,Space]]]],List,New_List),
assert(explosive_comp_marking(New_List))
)
),

%repair lockers (should be obtained dynamically once
%Minerva keeps track of personnel)
retractall(repair_lockers_marking(_)),
assert(repair_lockers_marking([])),
forall(
repair_locker(Station),
(
repair_lockers_marking(List),
retractall(repair_lockers_marking(_)),
append([[0,0,[[station,Station]]]],List,New_List),
assert(repair_lockers_marking(New_List))
)
).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% FOR DEBUGGING PURPOSES ONLY:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

test_prediction(Time) :-
clauses(marking(_,_,_),Initial_markings),
nl,write('INITIAL MARKINGS:'),nl,
print_list(Initial_markings),
predict(Time),
clauses(marking(_,_,_),Resulting_markings),
nl,write('RESULTING MARKINGS:'),nl,
print_list(Resulting_markings),
!.

print_list([Element|[]]) :-
write(Element),nl,!.

print_list([Element|More]) :-
write(Element),nl,
print_list(More).

%%%%%%%%%%%%%%%%%%%%%%%
% Step by Step testing
%%%%%%%%%%%%%%%%%%%%%%%
test_fire_transition(Time) :-
transition(Sub,ID,TimeMin,TimeMax,_,EnablingPlacesList,
PropagationPlacesList),
clauses(marking(_,_,_),Initial_markings),
enable_transition(EnablingPlacesList,[],0,100000000,MaxB,
MinE,Labels,Time - TimeMin),
NewTimeB is TimeMin + MaxB,
NewTimeE is TimeMax + MinE,
propagate(PropagationPlacesList,[[NewTimeB,NewTimeE,
Labels]]),
nl,write('INITIAL MARKINGS:'),nl,
print_list(Initial_markings),
nl,write('FIRING TRANSITION '''),write(ID),write(''''),nl,
write('Enabling places list: '),
write(EnablingPlacesList),nl,
write('Resulting labels: '),write(Labels),nl,
write('Propagating places list: '),
write(PropagationPlacesList),nl,
clauses(marking(_,_,_),Resulting_markings),
nl,write('RESULTING MARKINGS:'),nl,
print_list(Resulting_markings),!.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%
%The top level EPN routine
%%%%%%%%%%%%%%%%%%%%%%%%%%%
predict(Time) :-
retractall(spread(_)),
repeat,
\+ fire_transition(Time), !.

%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Fires a single transition

%%%%%%%%%%%%%%%%%%%%%%%%%%%
fire_transition(Time) :-
transition(_,N,TimeMin,TimeMax,_,EnablingPlacesList,
PropagationPlacesList),
% nl,write('Transition '),write(N),
([[_,PropPlace1]|_] = PropagationPlacesList,
marking(_,PropPlace1,Tokens) -> true;Tokens = []),
enable_transition(EnablingPlacesList,Tokens,0,
100000000,MaxB,MinE,Labels,Time - TimeMin),
NewTimeB is TimeMin + MaxB,
NewTimeE is TimeMax + MinE,
propagate(PropagationPlacesList,[[NewTimeB,NewTimeE,
Labels]]).
% write(': success!'),nl.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% checks if transition is allowed to proceed, then creates
% a new label list and
% gets the min and max values for all tokens' time stamps
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
enable_transition([],_,B,E,B,E,[],Time) :-
!,
(B < Time ->
true;
% write(' skipping transition; MaxB = '),
% write(B),write(' > Time = '),write(Time),nl,
false),!.

enable_transition([EnablingGroup1|MoreGroups],PropTokens,B,E,
MaxB,MinE,Labels,Time) :-
verify_markings(EnablingGroup1,_,_,CurrMaxBr,CurrMaxEr,
CurrMinBn,CurrMaxEn,Tokens),!,
NewB is max(CurrMaxBr,B),
NewE is min(min(CurrMaxEr,CurrMinBn),E),
(NewE < NewB -> fail;
enable_transition(MoreGroups,PropTokens,
NewB,NewE,MaxB,MinE,MoreLabels,Time),
extract_labels(Tokens,MoreLabels,Labels),
(member([_,_,Labels],PropTokens) ->
% write(' breaking loop...'),
fail
;
get_tokens(EnablingGroup1,Tokens))
).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%verifies that the given list of places matches the specified

%label type
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

verify_markings([MatchingType,[Edge,PlaceID]|MorePlaces],
Tokens,Label,MaxBr,MaxEr,MinBn,MaxEn,
[MatchedToken|MoreTokens]) :-
Edge(MatchingType,PlaceID,Tokens,MatchedTokens),
verify_markings([MatchingType|MorePlaces],MatchedTokens,
Label,PrevMaxBr,PrevMaxEr,PrevMinBn,
PrevMaxEn,MoreTokens),

((Edge = edge_negate) ->


MaxBr is PrevMaxBr,
MaxEr is PrevMaxEr,
MatchedToken = [0,0,[[not_a_token]]],
(marking(_,PlaceID,AvailableTokens),
match_token(MatchingType,AvailableTokens,
CurrentB,CurrentE,_,Label) ->
MinBn is min(CurrentB,PrevMinBn),
MaxEn is max(CurrentE,PrevMaxEn)
;
MinBn is PrevMinBn,
MaxEn is PrevMaxEn)
;
match_token(MatchingType,MatchedTokens,CurrentB,
CurrentE,MatchedToken,Label),
MinBn is PrevMinBn,
MaxEn is PrevMaxEn,
MaxBr is max(CurrentB,PrevMaxBr),
MaxEr is max(CurrentE,PrevMaxEr)).

verify_markings([_|[]],_,_,0,0,1000000000,0,[]).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%verifies that a given marking contains the specified label
% type
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
match_token(_,[],_,_,_,_) :- !, fail.

match_token(MatchingType,[[TimeB,TimeE,Labels]|MoreTokens],
MaxB,MinE,MatchedToken,
Label) :-
member([MatchingType,Label],Labels) ->
(MatchedToken = [TimeB,TimeE,Labels],
MaxB is TimeB,
MinE is TimeE);
match_token(MatchingType,MoreTokens,MaxB,MinE,
MatchedToken,Label).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%gets all tokens that will be propagated
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
get_tokens(_,[]) :- !.

get_tokens([MatchingType,[Edge,PlaceID]|MorePlaces],
[Token|MoreTokens]) :-
Edge(PlaceID,Token),
get_tokens([MatchingType|MorePlaces],MoreTokens),!.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%extracts all labels from a list of tokens and
% removes any repetitions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
extract_labels([],Tokens,Tokens) :- !.

extract_labels([[_,_,Labels]|MoreTokens],Tokens,Result) :-
extract_labels(MoreTokens,Tokens,NewList),
((Labels == [[not_a_token]]) -> Result = NewList;
add_labels(Labels,NewList,Result)).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%concatenates labels to a list, if they are not
% already there
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
add_labels([],List,List) :- !.

add_labels([Label|MoreLabels],List,Result) :-
add_labels(MoreLabels,List,NewResult),
(member(Label,NewResult) -> Result = NewResult;
Result = [Label|NewResult]).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Propagates the new token to all propagation places
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
propagate([],_) :- !.

propagate([[_,PlaceID]|MorePlaces],NewTokens) :-
marking(Subnet,PlaceID,Tokens) ->
(retract(marking(Subnet,PlaceID,Tokens)),
append(NewTokens,Tokens,NewList),
assert(marking(Subnet,PlaceID,NewList)),
propagate(MorePlaces,NewTokens));
place(Subnet,PlaceID,_),
assert(marking(Subnet,PlaceID,NewTokens)),
propagate(MorePlaces,NewTokens).

get_intersection(MatchingType,[Token|MoreTokens],
AvailableTokens,MatchedTokens) :-
get_intersection(MatchingType,MoreTokens,
AvailableTokens,OtherMatchedTokens),
(match_token(MatchingType,[Token],_,_,_,Label),
match_token(MatchingType,AvailableTokens,_,_,
MatchedToken,Label) ->
append([MatchedToken],OtherMatchedTokens,
MatchedTokens);
MatchedTokens = OtherMatchedTokens).

get_intersection(_,[],_,[]).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Below are definitions of all types of edges.
% Add new, if needed, but they must be
%in two versions:
%
% 1. Transition verification (checks if transition is
% allowed to proceed):
%
% edge_direct(MatchingType,PlaceID,MaxB,MinE,MatchedToken,Label)
%
% 2. Transition effects on enabling place (removes
% token on regular transition):
%
% edge_Type(PlaceID,Token).
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

edge_direct(MatchingType,PlaceID,Tokens,MatchedTokens) :-
marking(_,PlaceID,AvailableTokens),
(var(Tokens) ->
MatchedTokens = AvailableTokens
;
get_intersection(MatchingType,Tokens,AvailableTokens,
MatchedTokens),
(MatchedTokens = [] -> fail;true)).

edge_direct(PlaceID,Token) :-
marking(Subnet,PlaceID,[Token|[]]) ->
retract(marking(Subnet,PlaceID,[Token|[]]));
marking(Subnet,PlaceID,Tokens),
retract(marking(Subnet,PlaceID,Tokens)),
remove(Token,Tokens,NewTokens),
assert(marking(Subnet,PlaceID,NewTokens)),!.

edge_double(MatchingType,PlaceID,Tokens,MatchedTokens) :-
marking(_,PlaceID,AvailableTokens),
(var(Tokens) ->
MatchedTokens = AvailableTokens
;
get_intersection(MatchingType,Tokens,
AvailableTokens,MatchedTokens),
(MatchedTokens = [] -> fail;true)).

edge_double(_,_) :- !.

edge_negate(MatchingType,PlaceID,Tokens,MatchedTokens) :-
var(Tokens) ->
(nl,
write('EPN ERROR: ''edge_negate'' cannot be first in an enabling places list'),
abort)
;
MatchedTokens = Tokens.

% marking(_,PlaceID,AvailableTokens),
% get_intersection(MatchingType,Tokens,AvailableTokens,MatchedTokens),
% (MatchedTokens = [] -> fail;true).

edge_negate(_,_) :- !.

%SPECIFIC TO MINERVA ONLY:

edge_spread_fire(MatchingType,PlaceID,Tokens,MatchedTokens) :-
marking(_,PlaceID,AvailableTokens),
(var(Tokens) ->
MatchedTokens = AvailableTokens
;
get_intersection(MatchingType,Tokens,
AvailableTokens,MatchedTokens),
(MatchedTokens = [] -> fail;true)).

% match_token(MatchingType,Tokens,MaxB,MinE,MatchedToken,Label),!,
% repeat,
% neighbors(Label,Neighbor,_),
% (member([_,_,[[space,Neighbor]]],Tokens) -> false;true).

edge_spread_fire(_,[TimeB,TimeE,[[_,Space]]]) :-
transition(Subnet,fire_spread,Tmin,Tmax,_,_,_),
NewTB is TimeB + Tmin,

NewTE is TimeE + Tmax,
(spread(Space) -> fail;
assert(spread(Space)),
forall(neighbors(Space,Neighbor,_),
(
(marking(_,fire,Tokens) ->
(member([_,_,[[space,Neighbor]]],Tokens) ->
true
;
retractall(marking(_,fire,Tokens)),
append([[NewTB,NewTE,[[space,
Neighbor]]]],Tokens,Token_list),
assert(marking(Subnet,fire,Token_list))
)
;
assert(marking(Subnet,fire,[[NewTB,NewTE,
[[space,Neighbor]]]]))
)
))
).

We have used the EPNs presented in figures B.3, p.116, and B.4, p.117. Their Prolog
encoding is given below:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%epn.pl
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Some definitions:
%
%PLACE:
%place(SubnetID,PlaceID,WordDescription)
%%
%TRANSITION:
%transition(SubnetID,TransitionID,TimeMin,TimeMax,
%WordDescription,
% EnablingPlacesList,
% PropagationPlacesList)
%
%where:
% 'EnablingPlacesList' is defined as:
% [[TokenLabelTypeToMatch1, [EdgeType1,PlaceID1],...],
%[TokenLabelTypeToMatch2,..]]
%
%and 'PropagationPlacesList' is just:
% [[EdgeType1,PlaceID1],...]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%fire

place(fire,ht_alarm,"High temp alarm").


place(fire,fire,"Fire").
place(fire,fbs,"Fire boundaries set").
place(fire,granted_flood,"Granted permission to flood").
place(fire,ff_in_progress,"Firefighting in progress").
place(fire,pers_avail,"Available personnel").
place(fire,explosive_comp,"Explosive compartments").
place(fire,invest_f,"Investigate").
place(fire,fight_fire,"Fight fire").
place(fire,request_flood,"Request permission to flood").
place(fire,flood,"Flood").
place(fire,sfb,"Set fire boundaries").
place(fire,vital_comp,"Vital compartments").
place(fire,invest_complete,"Investigation complete").
place(fire,fire_out,"Fire_out").
place(fire,flooded,"Flooded").
place(fire,destroyed,"Destroyed").
place(fire,exploded,"Exploded").
place(fire,vital_space_lost,"Vital space lost").
place(fire,pers_occup,"Occupied personnel").
place(fire,engulfed,"Engulfed").

%transition(fire,ignition,0,60,"ignition",
% [ [space,[edge_direct,ht_alarm],[edge_negate,fire]] ],
% [ [edge_direct,fire] ]).
transition(fire,start_ff,120,300,"start firefighting",
[ [station,[edge_direct,pers_avail],[edge_direct,fight_fire]] ],
[ [edge_direct,ff_in_progress],[edge_direct,pers_occup] ]).
transition(fire,extinguishing,600,1200,"extinguishing",
[ [space,[edge_direct,fire],[edge_direct,ff_in_progress],
[edge_negate,low_pressure]] ],
[ [edge_direct,fire_out] ]).
transition(fire,flooding,120,420,"flooding",
[ [space,[edge_direct,granted_flood],[edge_direct,flood]] ],
[ [edge_direct,flooded] ]).
transition(fire,time_to_sfb,180,300,"time to set fire boundaries",
[ [station,[edge_direct,pers_avail],[edge_direct,sfb]] ],
[ [edge_direct,fbs],[edge_direct,pers_occup] ]).
transition(fire,walls_heat_up,300,600,"walls heat up",
[ [space,[edge_direct,fire]] ],
[ [edge_direct,engulfed] ]).
transition(fire,fire_spread,150,300,"fire spreads",
[ [space,[edge_spread_fire,engulfed],[edge_negate,fbs]] ],
[ ]).
transition(fire,destruction,1200,1800,"destruction",
[ [space,[edge_double,fire],[edge_negate,granted_flood],
[edge_negate,ff_in_progress],[edge_negate,explosive_comp]] ],

[ [edge_direct,destroyed] ]).
transition(fire,explosion,120,420,"explosion",
[ [space,[edge_double,fire],[edge_negate,granted_flood],
[edge_negate,ff_in_progress],[edge_double,explosive_comp]] ],
[ [edge_direct,destroyed],[edge_direct,exploded] ]).
transition(fire,investigation,180,420,"investigation",
[ [station,[edge_direct,invest_f],[edge_direct,pers_avail]] ],
[ [edge_direct,invest_complete],[edge_direct,pers_occup] ]).
transition(fire,flooding_fire,0,0,"flooding fire",
[ [space,[edge_double,flooded],[edge_direct,fire]] ],
[ [edge_direct,fire_out] ]).
transition(fire,losing_vital_space,0,0,"losing vital space",
[ [space,[edge_double,destroyed],[edge_double,vital_comp]] ],
[ [edge_direct,vital_space_lost] ]).
transition(fire,ttr_after_invest,120,300,"time to report after investigation",
[ [station,[edge_double,invest_complete],
[edge_direct,pers_occup]] ],
[ [edge_direct,pers_avail] ]).
transition(fire,ttr_after_fbs,120,300,"time to report after fbs recall",
[ [space,[edge_direct,fbs],[edge_double,fire_out]],
[station,[edge_direct,fbs],[edge_direct,pers_occup]] ],
[ [edge_direct,pers_avail] ]).
transition(fire,ttr_after_ff,120,300,"time to report after ff",
[ [station,[edge_double,fire_out],[edge_direct,pers_occup]] ],
[ [edge_direct,pers_avail] ]).

%firemain

place(firemain,low_pressure,"Low pressure").
place(firemain,granted_start_fp,"Permission granted to start the pump").
place(firemain,start_fp,"Start fire pump").
place(firemain,fp_running,"Fire pump running").
place(firemain,high_pressure,"High pressure").
place(firemain,fp_lost,"Fire pump lost").
place(firemain,rupture,"Rupture").

transition(firemain,start_fp,0,60,"start fire pump",
[ [pump,[edge_direct,granted_start_fp],
[edge_negate,fp_lost]] ],
[ [edge_direct,fp_running] ]).
transition(firemain,pressure_buildup,60,120,"pressure buildup",
[ [system,[edge_direct,low_pressure],
[edge_double,fp_running]] ],
[ [edge_direct,high_pressure] ]).

[Figure B.3 diagram omitted: EPN for the fire subnet linking findings (high temp alarm, fire, FBS, granted permission to flood, FF in progress, available personnel, explosive compartments, vital compartments, low pressure), actions (investigate, fight fire, flood, set fire boundaries, request permission to flood), and results (investigation complete, fire out, vital space lost, flooded, destroyed, exploded, occupied personnel) through timed transitions with [min,max] time bounds; the legend distinguishes external inputs, connecting nodes, and nodes belonging to this or another subnet.]

Figure B.3: EPN dealing with fire


[Figure B.4 diagram omitted: EPN for the firemain subnet linking findings (low pressure, fire pump lost, bow/stern zebra valves closed, pipe rupture, combustion), actions (start fire pump, request start fire pump, close bow/stern zebra valves), and results (fire pump running, high water pressure) through timed transitions with [min,max] time bounds; the legend distinguishes external inputs, connecting nodes, and nodes belonging to this or another subnet.]

Figure B.4: EPN dealing with firemain


B.4 State Evaluator
B.4.1 Design
The idea of the state evaluator is to assess the severity of the environment state based on the information we have and assign a numerical score to it. In our design the input is a single state, the output is a single real number, and the evaluator itself is learned by inductive learning methods (section 3.2.5, p.23 gives the reasoning behind those choices).
Specifically for our domain, we represent the state of the environment (i.e. the battleship) W as a vector of 479 numbers representing compartment statuses (one number per compartment). The output of the evaluator s(W) is the estimated time till a kill-point, ttkp(W)^1. Thus our evaluator is actually a predictor: it predicts the time till a kill-point given a ship state under the standard level of crisis-management efforts.
It is important to choose the compartment status representation appropriately to make the input W rich enough for good evaluation (i.e. prediction). The information included has to be sufficient to assess the crisis level, yet abstract enough to minimize the influence of noise. We have used the encoding shown in table B.1, p.119. An important idea here (suggested by S. Magda) is that the length of time a compartment has been in a particular state should be encoded as well, since it is important for assessing the "danger level"^2.
Here:
1. A kill-point is a point of a scenario where a fatal event occurs, causing the scenario to terminate. Examples: explosion in a missile magazine, torpedo hit, missile hit, loss of all radars, etc. It is possible to normalize the time till a kill-point and keep the score within a certain range. Example: given ship state W, the score could be s(W) = tanh(ttkp(W)).
2. Technically there are many ways to map the time interval to the code interval. For example, t in [0, infinity) could be mapped to Code in [0.1, 0.5) through Code(t) = 0.5 - 0.4/(1 + t).

Code       Status
-0.5       Flooded
0.0        Intact
[0.1,0.5)  High-temp alarm that went off [0, infinity) minutes ago
[0.5,1.0)  Fire that was discovered [0, infinity) minutes ago
1.0        Destroyed

Table B.1: Compartment Status Encoding Scheme
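The encoding of table B.1 can be sketched in Python. The alarm branch uses Code(t) = 0.5 - 0.4/(1+t) from footnote 2; the fire branch uses an analogous map onto [0.5, 1.0), which is our assumption since the thesis only fixes the alarm mapping explicitly:

```python
def compartment_code(status, minutes_ago=0.0):
    """Encode one compartment status as a single number (table B.1).

    The time a compartment has spent in a state is squashed into the
    code interval; the fire-branch formula is an illustrative assumption.
    """
    t = float(minutes_ago)
    if status == "flooded":
        return -0.5
    if status == "intact":
        return 0.0
    if status == "alarm":        # high-temp alarm -> [0.1, 0.5)
        return 0.5 - 0.4 / (1.0 + t)
    if status == "fire":         # fire discovered -> [0.5, 1.0), assumed map
        return 1.0 - 0.5 / (1.0 + t)
    if status == "destroyed":
        return 1.0
    raise ValueError("unknown status: %s" % status)
```

A ship state W is then just the vector of these codes over all 479 compartments.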

B.4.2 Inductive Learning Setup


Inductive learning methods work by generalizing a set of examples into a concept (sometimes called a hypothesis). While they are limited in their ability to learn all concepts frequently approximately correctly ([27]), they have been shown to perform very well on data containing certain patterns ([25]).
We start out by collecting a number of annotated training samples. Each training sample consists of:
1. the ship state W encoded as a vector <c_1, ..., c_479> with c_i being compartment status codes (see table B.1, p.119);
2. the annotation ttkp(W) represented as a vector <d_1, ..., d_60> with all d_i being -0.5 except d_ttkp(W) = +0.5 [ttkp is given in minutes].
This allows us to represent ttkp(W) with values up to an hour at an accuracy of 1 minute. These values seem reasonable in our domain since they allow us to abstract from the noise yet retain enough information to work with. Also, we have chosen the distributed representation of ttkp to improve the performance of connectionist inductive learning methods ([25]).
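The annotation layout above can be sketched as follows (indexing d_1..d_60 into a 0-based Python list is our convention):

```python
def annotate(ttkp_minutes):
    """Encode ttkp(W) as a 60-element distributed target vector:
    every entry is -0.5 except the slot for the (1-minute) ttkp value,
    which is +0.5.  ttkp_minutes ranges over 1..60."""
    d = [-0.5] * 60
    d[ttkp_minutes - 1] = 0.5
    return d
```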
We have collected the training samples by making Minerva-4 ([20]) solve realistic damage control scenarios in the DCTrain simulated environment ([28]). We decided to use this setup for collecting training samples since Minerva-4 has been shown to reach expert-level performance in that environment ([20]). During a scenario, at every Minerva-4 cycle (i.e. every 3-10 seconds) the entire state of the ship was time-stamped and stored. Once a scenario was over, we went back and annotated all the states with their times till kill-point. If no kill-point occurred or the real value of ttkp(W) was over an hour, we set our annotation to 60 minutes.
Having run around 10 scenarios of various crisis levels, we collected 1353 training samples. Some of them, however, represented an intact ship (i.e. c_i = 0 for all i). We pruned those off since an intact ship exemplar can precede any crisis situation and thus its ttkp is not defined. Having done the pruning, we obtained 854 training exemplars whose sequence we randomly shuffled. In the experiments below we typically used the first 60% of the sequence (512 exemplars) as the training set for the inductive learning algorithms and the remaining 40% (342 exemplars) as the cross-validation set.
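The prune/shuffle/split procedure just described can be sketched in Python (the fixed seed is only illustrative; the thesis does not specify one):

```python
import random

def make_splits(samples, train_frac=0.6, seed=0):
    """Drop intact-ship exemplars (all compartment codes zero),
    shuffle the rest, and split into training and cross-validation
    sets by the given fraction."""
    kept = [(w, d) for (w, d) in samples if any(c != 0 for c in w)]
    rng = random.Random(seed)
    rng.shuffle(kept)
    cut = int(len(kept) * train_frac)
    return kept[:cut], kept[cut:]
```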
To get a better understanding of the performance of inductive learning algorithms, we also conducted additional experiments with the two concepts introduced below.

Definition B.4.1 A no-kill-point exemplar W is an exemplar with all d_i being -0.5.

Definition B.4.2 Time t is a cut-off time (COT) iff during an experiment every exemplar W with ttkp(W) > t is considered (modified) to be a no-kill-point exemplar.

These two concepts allow us to limit the prediction range. Intuitively, the further from a kill-point an exemplar is, the less correlation there is between W and ttkp(W). Our experiments with COT support this intuition.

Definition B.4.3 The allowed deviation (AD) is the maximum difference between the ttkp output by our evaluator and the actual ttkp recorded with the exemplar such that we still consider the evaluator to predict ttkp correctly.

Experimenting with AD gives us some insight into how far off a learned evaluator typically is. The following sections present the different inductive learning methods we have utilized.
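Definitions B.4.2 and B.4.3 can be sketched directly (using None for the no-kill-point annotation is our convention):

```python
NO_KILL_POINT = None  # stands in for the all-(-0.5) annotation

def apply_cot(ttkp, cot):
    """Definition B.4.2: an exemplar further than the cut-off time from
    a kill-point is treated as a no-kill-point exemplar."""
    if ttkp is NO_KILL_POINT or ttkp > cot:
        return NO_KILL_POINT
    return ttkp

def correct(predicted, actual, ad):
    """Definition B.4.3: a prediction counts as correct when it is
    within the allowed deviation of the recorded ttkp."""
    if predicted is NO_KILL_POINT or actual is NO_KILL_POINT:
        return predicted is actual
    return abs(predicted - actual) <= ad
```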

B.4.3 Multilayer Perceptrons with Backpropagation Learning
Multilayer perceptron networks with backpropagation learning are among the most popular connectionist methods ([22]). They have also been shown to reach high accuracy in many domains ([25]). We have tested various 2-layer networks (i.e. one hidden and one output layer). All of them had 479 inputs and 60 outputs. The parameters varied include:
1. number of hidden nodes;
2. cut-off time (COT) [see section B.4.2, p.119];
3. allowed deviation (AD) [see section B.4.2, p.119];
4. output interpretation scheme (see table B.2, p.122);
5. learning rates.
The results are summarized in table B.2, p.122.

B.4.4 Kohonen Maps


Kohonen maps are generally known for their ability to perform unsupervised learning ([22]). In other words, during the training stage these self-organizing feature maps don't need any annotation (such as ttkp) of their input. Every ship state can be considered as a point in a 479-dimensional space. Given our compartment status encoding scheme (table B.1, p.119), certain points of the space would correspond to serious crisis situations, and ttkp could be estimated as the distance from the point representing the current ship state to the closest crisis point.
This reasoning encouraged us to implement K-maps as a board evaluator and experiment with them. The parameters varied include:
1. cut-off time (COT) [AD was set to 0];
2. Kohonen map size (LSxLS);
3. number of training epochs;

Topology    Annout Scheme        Data is in                            Score
479-2-60    eps = 0.0, mat = 1   table B.3, p.123, graph B.5, p.124    75.9
479-100-60  eps = 0.0, mat = 1   table B.4, p.125, graph B.6, p.126    85.0
479-200-60  eps = 0.0, mat = 1   table B.5, p.127, graph B.7, p.128    75.2
479-500-60  eps = 0.0, mat = 1   table B.6, p.129, graph B.8, p.130    74.3
479-100-60  eps = 0.3, mat = 3   table B.7, p.131, graph B.9, p.132    79.4
479-100-60  eps = 0.0, maximum   table B.8, p.133, graph B.10, p.134   81.2
Notes:
1. Topology is given in the form N-M-L where N is the number of inputs, M is the number of hidden units, and L is the number of outputs.
2. The Annout Scheme column specifies how network outputs were interpreted to compute the cross-validation percentage. Suppose <y_1, ..., y_L> is the actual output of the network and ttkp_y is our interpretation of the time till a kill-point the network predicts. Then the two schemes we used can be described as follows:
(a) ttkp_y = no-kill-point if y_i <= eps for all i; round(avg(AT)) where AT = {i | y_i > eps} and |AT| <= mat; undefined otherwise (i.e. when more than mat outputs are above eps).
(b) ttkp_y = no-kill-point if y_i <= eps for all i; max(AT) where AT = {i | y_i > eps} otherwise.
Given this interpretation of the ANN outputs, the cross-validation percentage can be computed over the set of testing exemplars TEE as:
CV = 100 * |{e in TEE | ttkp(e) = ttkp_y(e)}| / |TEE| %.
3. Score is a single number summarizing the network performance over the cross-validation set. It is defined as the average of CV(AD, COT) over AD = 0 and COT = 5, 10, 15, ..., 60. In other words, it is the cross-validation percentage averaged with an allowed deviation of 0 and the cut-off time ranging from 5 to 60 minutes with a step of 5 minutes.

Table B.2: Multi-layer Perceptrons as Board Evaluator
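The two output-interpretation schemes described in the notes of table B.2 can be sketched in Python (AT holds 1-based minute indices; returning None for "undefined" is our convention):

```python
def interpret_outputs(y, eps=0.0, mat=1, scheme="avg"):
    """Interpret ANN outputs <y_1..y_L> as ttkp_y.

    scheme="avg" (scheme (a)): round(avg(AT)) if at most `mat` outputs
    exceed eps, undefined (None) otherwise.
    scheme="max" (scheme (b)): max(AT).
    If no output exceeds eps, the answer is no-kill-point."""
    at = [i + 1 for i, yi in enumerate(y) if yi > eps]
    if not at:
        return "no-kill-point"
    if scheme == "max":
        return max(at)
    if len(at) <= mat:
        return round(sum(at) / len(at))
    return None  # undefined
```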

100 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-2-60 0.2 0.4 94.2
0 10 479-2-60 0.2 0.4 87.2
0 15 479-2-60 0.2 0.4 80.2
0 20 479-2-60 0.2 0.4 75.6
0 25 479-2-60 0.2 0.4 74.4
0 30 479-2-60 0.2 0.4 75.6
0 35 479-2-60 0.2 0.4 70.9
0 40 479-2-60 0.2 0.4 69.8
0 45 479-2-60 0.2 0.4 70.9
0 50 479-2-60 0.2 0.4 70.9
0 55 479-2-60 0.2 0.4 70.9
0 60 479-2-60 0.2 0.4 69.8
1 5 479-2-60 0.2 0.4 94.2
1 10 479-2-60 0.2 0.4 87.2
1 15 479-2-60 0.2 0.4 81.4
1 20 479-2-60 0.2 0.4 76.7
1 25 479-2-60 0.2 0.4 74.4
1 30 479-2-60 0.2 0.4 75.6
1 35 479-2-60 0.2 0.4 70.9
1 40 479-2-60 0.2 0.4 69.8
1 45 479-2-60 0.2 0.4 70.9
1 50 479-2-60 0.2 0.4 72.1
1 55 479-2-60 0.2 0.4 72.1
1 60 479-2-60 0.2 0.4 69.8
2 5 479-2-60 0.2 0.4 94.2
2 10 479-2-60 0.2 0.4 87.2
2 15 479-2-60 0.2 0.4 81.4
2 20 479-2-60 0.2 0.4 76.7
2 25 479-2-60 0.2 0.4 74.4
2 30 479-2-60 0.2 0.4 75.6
2 35 479-2-60 0.2 0.4 70.9
2 40 479-2-60 0.2 0.4 69.8
2 45 479-2-60 0.2 0.4 70.9
2 50 479-2-60 0.2 0.4 72.1
2 55 479-2-60 0.2 0.4 72.1
2 60 479-2-60 0.2 0.4 69.8
3 5 479-2-60 0.2 0.4 94.2
3 10 479-2-60 0.2 0.4 87.2
3 15 479-2-60 0.2 0.4 82.6
3 20 479-2-60 0.2 0.4 76.7
3 25 479-2-60 0.2 0.4 74.4
3 30 479-2-60 0.2 0.4 75.6
3 35 479-2-60 0.2 0.4 70.9
3 40 479-2-60 0.2 0.4 69.8
3 45 479-2-60 0.2 0.4 70.9
3 50 479-2-60 0.2 0.4 72.1
3 55 479-2-60 0.2 0.4 72.1
3 60 479-2-60 0.2 0.4 69.8

Table B.3: ANN 479-2-60 Results

[Figure B.5 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.3.]

Figure B.5: ANN 479-2-60 Results

1000 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-100-60 0.2 0.4 95.3
0 10 479-100-60 0.2 0.4 90.1
0 15 479-100-60 0.2 0.4 86.3
0 20 479-100-60 0.2 0.4 85.4
0 25 479-100-60 0.2 0.4 84.2
0 30 479-100-60 0.2 0.4 83.9
0 35 479-100-60 0.2 0.4 81.6
0 40 479-100-60 0.2 0.4 81.3
0 45 479-100-60 0.2 0.4 82.7
0 50 479-100-60 0.2 0.4 82.7
0 55 479-100-60 0.2 0.4 82.7
0 60 479-100-60 0.2 0.4 83.9
1 5 479-100-60 0.2 0.4 97.4
1 10 479-100-60 0.2 0.4 97.1
1 15 479-100-60 0.2 0.4 93.6
1 20 479-100-60 0.2 0.4 91.5
1 25 479-100-60 0.2 0.4 92.7
1 30 479-100-60 0.2 0.4 91.5
1 35 479-100-60 0.2 0.4 88.6
1 40 479-100-60 0.2 0.4 90.1
1 45 479-100-60 0.2 0.4 90.9
1 50 479-100-60 0.2 0.4 89.5
1 55 479-100-60 0.2 0.4 89.2
1 60 479-100-60 0.2 0.4 90.1
2 5 479-100-60 0.2 0.4 97.4
2 10 479-100-60 0.2 0.4 97.4
2 15 479-100-60 0.2 0.4 94.2
2 20 479-100-60 0.2 0.4 92.4
2 25 479-100-60 0.2 0.4 93.6
2 30 479-100-60 0.2 0.4 92.7
2 35 479-100-60 0.2 0.4 89.8
2 40 479-100-60 0.2 0.4 91.2
2 45 479-100-60 0.2 0.4 92.4
2 50 479-100-60 0.2 0.4 90.6
2 55 479-100-60 0.2 0.4 90.4
2 60 479-100-60 0.2 0.4 91.5
3 5 479-100-60 0.2 0.4 97.4
3 10 479-100-60 0.2 0.4 97.4
3 15 479-100-60 0.2 0.4 94.2
3 20 479-100-60 0.2 0.4 92.4
3 25 479-100-60 0.2 0.4 93.6
3 30 479-100-60 0.2 0.4 92.7
3 35 479-100-60 0.2 0.4 90.1
3 40 479-100-60 0.2 0.4 91.5
3 45 479-100-60 0.2 0.4 92.7
3 50 479-100-60 0.2 0.4 90.9
3 55 479-100-60 0.2 0.4 90.6
3 60 479-100-60 0.2 0.4 91.8

Table B.4: ANN 479-100-60 Results

[Figure B.6 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.4.]

Figure B.6: ANN 479-100-60 Results

1000 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-200-60 0.2 0.4 95.3
0 10 479-200-60 0.2 0.4 86.8
0 15 479-200-60 0.2 0.4 78.9
0 20 479-200-60 0.2 0.4 72.2
0 25 479-200-60 0.2 0.4 72.2
0 30 479-200-60 0.2 0.4 72.2
0 35 479-200-60 0.2 0.4 69.6
0 40 479-200-60 0.2 0.4 73.4
0 45 479-200-60 0.2 0.4 69.3
0 50 479-200-60 0.2 0.4 74.0
0 55 479-200-60 0.2 0.4 69.3
0 60 479-200-60 0.2 0.4 69.3
1 5 479-200-60 0.2 0.4 95.3
1 10 479-200-60 0.2 0.4 90.9
1 15 479-200-60 0.2 0.4 84.5
1 20 479-200-60 0.2 0.4 72.2
1 25 479-200-60 0.2 0.4 72.2
1 30 479-200-60 0.2 0.4 72.2
1 35 479-200-60 0.2 0.4 70.5
1 40 479-200-60 0.2 0.4 78.1
1 45 479-200-60 0.2 0.4 69.3
1 50 479-200-60 0.2 0.4 77.8
1 55 479-200-60 0.2 0.4 69.3
1 60 479-200-60 0.2 0.4 69.3
2 5 479-200-60 0.2 0.4 95.3
2 10 479-200-60 0.2 0.4 91.8
2 15 479-200-60 0.2 0.4 87.4
2 20 479-200-60 0.2 0.4 72.2
2 25 479-200-60 0.2 0.4 72.2
2 30 479-200-60 0.2 0.4 72.2
2 35 479-200-60 0.2 0.4 70.8
2 40 479-200-60 0.2 0.4 78.4
2 45 479-200-60 0.2 0.4 69.3
2 50 479-200-60 0.2 0.4 79.5
2 55 479-200-60 0.2 0.4 69.3
2 60 479-200-60 0.2 0.4 69.3
3 5 479-200-60 0.2 0.4 95.3
3 10 479-200-60 0.2 0.4 92.1
3 15 479-200-60 0.2 0.4 87.7
3 20 479-200-60 0.2 0.4 72.2
3 25 479-200-60 0.2 0.4 72.2
3 30 479-200-60 0.2 0.4 72.2
3 35 479-200-60 0.2 0.4 70.8
3 40 479-200-60 0.2 0.4 78.7
3 45 479-200-60 0.2 0.4 69.3
3 50 479-200-60 0.2 0.4 80.1
3 55 479-200-60 0.2 0.4 69.3
3 60 479-200-60 0.2 0.4 69.3

Table B.5: ANN 479-200-60 Results

[Figure B.7 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.5.]

Figure B.7: ANN 479-200-60 Results

1000 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-500-60 0.2 0.4 95.3
0 10 479-500-60 0.2 0.4 86.0
0 15 479-500-60 0.2 0.4 75.7
0 20 479-500-60 0.2 0.4 72.2
0 25 479-500-60 0.2 0.4 73.1
0 30 479-500-60 0.2 0.4 72.2
0 35 479-500-60 0.2 0.4 69.9
0 40 479-500-60 0.2 0.4 69.3
0 45 479-500-60 0.2 0.4 69.9
0 50 479-500-60 0.2 0.4 69.3
0 55 479-500-60 0.2 0.4 69.3
0 60 479-500-60 0.2 0.4 69.3
1 5 479-500-60 0.2 0.4 95.3
1 10 479-500-60 0.2 0.4 86.5
1 15 479-500-60 0.2 0.4 75.7
1 20 479-500-60 0.2 0.4 72.2
1 25 479-500-60 0.2 0.4 73.1
1 30 479-500-60 0.2 0.4 72.2
1 35 479-500-60 0.2 0.4 70.2
1 40 479-500-60 0.2 0.4 69.3
1 45 479-500-60 0.2 0.4 70.8
1 50 479-500-60 0.2 0.4 69.3
1 55 479-500-60 0.2 0.4 69.3
1 60 479-500-60 0.2 0.4 69.3
2 5 479-500-60 0.2 0.4 95.3
2 10 479-500-60 0.2 0.4 86.5
2 15 479-500-60 0.2 0.4 75.7
2 20 479-500-60 0.2 0.4 72.2
2 25 479-500-60 0.2 0.4 73.1
2 30 479-500-60 0.2 0.4 72.2
2 35 479-500-60 0.2 0.4 70.8
2 40 479-500-60 0.2 0.4 69.3
2 45 479-500-60 0.2 0.4 71.3
2 50 479-500-60 0.2 0.4 69.3
2 55 479-500-60 0.2 0.4 69.3
2 60 479-500-60 0.2 0.4 69.3
3 5 479-500-60 0.2 0.4 95.3
3 10 479-500-60 0.2 0.4 86.5
3 15 479-500-60 0.2 0.4 75.7
3 20 479-500-60 0.2 0.4 72.2
3 25 479-500-60 0.2 0.4 73.1
3 30 479-500-60 0.2 0.4 72.2
3 35 479-500-60 0.2 0.4 71.6
3 40 479-500-60 0.2 0.4 69.3
3 45 479-500-60 0.2 0.4 71.3
3 50 479-500-60 0.2 0.4 69.3
3 55 479-500-60 0.2 0.4 69.3
3 60 479-500-60 0.2 0.4 69.3

Table B.6: ANN 479-500-60 Results

[Figure B.8 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.6.]

Figure B.8: ANN 479-500-60 Results

1000 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-100-60 0.2 0.4 96.5
0 10 479-100-60 0.2 0.4 86.0
0 15 479-100-60 0.2 0.4 81.3
0 20 479-100-60 0.2 0.4 79.8
0 25 479-100-60 0.2 0.4 75.1
0 30 479-100-60 0.2 0.4 80.1
0 35 479-100-60 0.2 0.4 77.5
0 40 479-100-60 0.2 0.4 76.6
0 45 479-100-60 0.2 0.4 76.9
0 50 479-100-60 0.2 0.4 76.6
0 55 479-100-60 0.2 0.4 70.5
0 60 479-100-60 0.2 0.4 76.9
1 5 479-100-60 0.2 0.4 97.7
1 10 479-100-60 0.2 0.4 87.4
1 15 479-100-60 0.2 0.4 84.2
1 20 479-100-60 0.2 0.4 84.2
1 25 479-100-60 0.2 0.4 75.7
1 30 479-100-60 0.2 0.4 83.6
1 35 479-100-60 0.2 0.4 79.8
1 40 479-100-60 0.2 0.4 80.1
1 45 479-100-60 0.2 0.4 83.6
1 50 479-100-60 0.2 0.4 79.8
1 55 479-100-60 0.2 0.4 71.9
1 60 479-100-60 0.2 0.4 80.7
2 5 479-100-60 0.2 0.4 97.7
2 10 479-100-60 0.2 0.4 88.0
2 15 479-100-60 0.2 0.4 86.5
2 20 479-100-60 0.2 0.4 84.8
2 25 479-100-60 0.2 0.4 76.3
2 30 479-100-60 0.2 0.4 86.5
2 35 479-100-60 0.2 0.4 80.7
2 40 479-100-60 0.2 0.4 81.3
2 45 479-100-60 0.2 0.4 85.4
2 50 479-100-60 0.2 0.4 80.4
2 55 479-100-60 0.2 0.4 73.1
2 60 479-100-60 0.2 0.4 83.3
3 5 479-100-60 0.2 0.4 97.7
3 10 479-100-60 0.2 0.4 88.9
3 15 479-100-60 0.2 0.4 87.4
3 20 479-100-60 0.2 0.4 85.1
3 25 479-100-60 0.2 0.4 76.6
3 30 479-100-60 0.2 0.4 86.8
3 35 479-100-60 0.2 0.4 81.0
3 40 479-100-60 0.2 0.4 81.3
3 45 479-100-60 0.2 0.4 85.7
3 50 479-100-60 0.2 0.4 81.0
3 55 479-100-60 0.2 0.4 73.1
3 60 479-100-60 0.2 0.4 83.6

Table B.7: ANN 479-100-60 Results

[Figure B.9 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.7.]

Figure B.9: ANN 479-100-60 Results

1000 epochs
Allowed Deviation CutOffTime ANN lp_out lp_hidden CV %
0 5 479-100-60 0.2 0.4 96.8
0 10 479-100-60 0.2 0.4 86.5
0 15 479-100-60 0.2 0.4 82.7
0 20 479-100-60 0.2 0.4 80.7
0 25 479-100-60 0.2 0.4 77.5
0 30 479-100-60 0.2 0.4 81.6
0 35 479-100-60 0.2 0.4 78.7
0 40 479-100-60 0.2 0.4 79.2
0 45 479-100-60 0.2 0.4 78.4
0 50 479-100-60 0.2 0.4 78.9
0 55 479-100-60 0.2 0.4 73.7
0 60 479-100-60 0.2 0.4 79.8
1 5 479-100-60 0.2 0.4 99.1
1 10 479-100-60 0.2 0.4 89.5
1 15 479-100-60 0.2 0.4 90.6
1 20 479-100-60 0.2 0.4 88.6
1 25 479-100-60 0.2 0.4 84.2
1 30 479-100-60 0.2 0.4 88.3
1 35 479-100-60 0.2 0.4 86.0
1 40 479-100-60 0.2 0.4 84.2
1 45 479-100-60 0.2 0.4 85.1
1 50 479-100-60 0.2 0.4 86.8
1 55 479-100-60 0.2 0.4 78.9
1 60 479-100-60 0.2 0.4 86.3
2 5 479-100-60 0.2 0.4 99.1
2 10 479-100-60 0.2 0.4 91.2
2 15 479-100-60 0.2 0.4 92.4
2 20 479-100-60 0.2 0.4 91.5
2 25 479-100-60 0.2 0.4 86.3
2 30 479-100-60 0.2 0.4 90.4
2 35 479-100-60 0.2 0.4 88.3
2 40 479-100-60 0.2 0.4 85.7
2 45 479-100-60 0.2 0.4 88.0
2 50 479-100-60 0.2 0.4 88.6
2 55 479-100-60 0.2 0.4 80.7
2 60 479-100-60 0.2 0.4 88.6
3 5 479-100-60 0.2 0.4 99.1
3 10 479-100-60 0.2 0.4 91.8
3 15 479-100-60 0.2 0.4 93.0
3 20 479-100-60 0.2 0.4 92.1
3 25 479-100-60 0.2 0.4 86.5
3 30 479-100-60 0.2 0.4 90.4
3 35 479-100-60 0.2 0.4 88.6
3 40 479-100-60 0.2 0.4 86.3
3 45 479-100-60 0.2 0.4 88.3
3 50 479-100-60 0.2 0.4 88.9
3 55 479-100-60 0.2 0.4 81.3
3 60 479-100-60 0.2 0.4 90.1

Table B.8: ANN 479-100-60 Results

[Figure B.10 chart omitted: CV accuracy (%) vs. cut-off time (min) for AD = 0, 1, 2, and 3 min; data as in table B.8.]

Figure B.10: ANN 479-100-60 Results

4. number of LVQ epochs;
5. pull-in and push-away coefficients for the LVQ stage.
The results are presented in table B.9, p.136 and figure B.11, p.137. The scores are
given under each table.
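The pull-in/push-away LVQ stage referred to above can be sketched with a standard LVQ1-style update; the thesis does not spell out its exact update rule, so this single-step sketch and its names are our assumption:

```python
def lvq_step(weights, labels, x, x_label, pull_in=0.05, push_away=0.0001):
    """One LVQ1-style update on a trained map: the best-matching
    prototype is pulled toward x when its label agrees with x_label
    and pushed away otherwise.  Returns the winning prototype index."""
    # nearest prototype by squared Euclidean distance
    best = min(range(len(weights)),
               key=lambda k: sum((w - v) ** 2 for w, v in zip(weights[k], x)))
    rate = pull_in if labels[best] == x_label else -push_away
    weights[best] = [w + rate * (v - w) for w, v in zip(weights[best], x)]
    return best
```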

B.4.5 Decision Trees


Another well-known inductive learning method is decision tree learning ([26]). It has resulted in industry-strength learning systems such as ID3, C4.5, and C5.0 ([29]). Learning decision trees and rules typically has a number of attractive features: it is fast, it produces human-readable output, it often allows for a good explanation of its reasoning, etc.
For our experiments, we secured a commercial version of C5.0 ([29]) and conducted a number of experiments varying the following parameters:
1. cut-off time COT (AD was set to 0);
2. 10-pass boosting on/off;
3. 10-fold cross-validation on the training set.
The results are presented in table B.10, p.139 and figure B.12, p.138. The scores
are given under each table. An example of a learned decision tree and the extracted rules for
COT=60 are given below. cNNN represents the status of compartment number NNN.
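Each extracted rule below is simply a conjunction of threshold tests on compartment status codes. Applying such a rule to an exemplar can be sketched in Python (the list-of-tuples rule format is our own; C5.0 itself applies rules internally):

```python
def rule_fires(exemplar, conditions):
    """Check one extracted C5.0-style rule against an exemplar.

    exemplar   -- mapping from attribute name (e.g. "c080") to its code
    conditions -- conjunction of (attribute, op, threshold) tests,
                  e.g. Rule 1 below is [("c080", ">", 0.2)]."""
    ops = {">": lambda a, b: a > b, "<=": lambda a, b: a <= b}
    return all(ops[op](exemplar[attr], thr) for attr, op, thr in conditions)
```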

C5.0 INDUCTION SYSTEM [Release 1.07] Thu Mar 12 11:27:38 1998


------------------------------------

Options:
File stem <dcfull\eval60>
Convert trees to rules

Class specified by attribute result


Read 512 cases (480 attributes) from dcfull\eval60.data

Decision tree:

Before LVQ After LVQ
Cut-off Time LS Train. Ep. LVQ Ep. Pull-in Push-away Training % CV % Training % CV % Map Quality
5 3 5 3 0.05 0.0001 89.4 72.4 89.4 72.4 8.130
10 3 5 3 0.05 0.0001 80.2 64.9 80.2 64.9 7.329
15 3 5 3 0.05 0.0001 70.6 57.6 70.6 57.6 6.518
20 3 5 3 0.05 0.0001 68.2 55.6 68.2 55.6 6.266
25 3 5 3 0.05 0.0001 68.2 55.6 68.2 55.6 6.266
30 3 5 3 0.05 0.0001 68.0 55.4 68.0 55.4 6.247
35 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
40 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
45 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
50 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
55 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
60 3 5 3 0.05 0.0001 66.6 54.1 67.8 54.1 6.109
Score: 57.2 (before LVQ) / 57.2 (after LVQ)

Before LVQ After LVQ


Cut-off Time LS Train. Ep. LVQ Ep. Pull-in Push-away Training % CV % Training % CV % Map Quality
5 20 5 3 0.05 0.0001 97.0 71.5 96.8 70.9 122.647619
10 20 5 3 0.05 0.0001 94.6 68.7 93.8 69.1 120.154762
15 20 5 3 0.05 0.0001 92.2 66.4 91.4 66.7 118.597619
20 20 5 3 0.05 0.0001 91.4 64.9 89.4 63.4 115.080952
25 20 5 3 0.05 0.0001 92.4 65.8 91.4 65.6 120.214286
30 20 5 3 0.05 0.0001 91.0 65.6 90.4 65.1 116.330952
35 20 5 3 0.05 0.0001 90.8 64.2 90.0 64.7 116.797619
40 20 5 3 0.05 0.0001 90.6 65.1 89.0 64.7 121.285714
45 20 5 3 0.05 0.0001 90.4 64.2 89.8 63.6 114.580952
50 20 5 3 0.05 0.0001 90.6 63.8 89.8 62.9 112.002778
55 20 5 3 0.05 0.0001 90.4 63.8 89.6 63.4 117.207937
60 20 5 3 0.05 0.0001 91.0 65.3 90.8 65.3 116.855128
Score: 65.8 (before LVQ) / 65.5 (after LVQ)

Before LVQ After LVQ


Cut-off Time LS Train. Ep. LVQ Ep. Pull-in Push-away Training % CV % Training % CV % Map Quality
5 20 3 1 0.1 0.001 97.0 71.5 96.6 71.1 122.6476
10 20 3 1 0.1 0.001 94.6 68.2 93.8 70.4 119.1548
15 20 3 1 0.1 0.001 92.2 66.4 91.8 66.7 117.7643
20 20 3 1 0.1 0.001 91.4 64.5 88.8 62.0 115.1333
25 20 3 1 0.1 0.001 92.4 66.2 91.0 66.0 118.2143
30 20 3 1 0.1 0.001 91.0 65.3 90.2 65.8 116.7310
35 20 3 1 0.1 0.001 90.8 64.2 90.4 65.6 116.7976
40 20 3 1 0.1 0.001 90.6 65.1 89.8 65.3 121.2857
45 20 3 1 0.1 0.001 90.4 64.0 89.4 64.5 114.5810
50 20 3 1 0.1 0.001 90.6 64.5 88.8 63.8 112.0028
55 20 3 1 0.1 0.001 90.6 64.0 89.8 64.2 117.2524
60 20 3 1 0.1 0.001 91.0 65.8 90.6 66.0 116.8551
Score: 65.8 (before LVQ) / 66.0 (after LVQ)

Before LVQ After LVQ


Cut-off Time LS Train. Ep. LVQ Ep. Pull-in Push-away Training % CV % Training % CV % Map Quality
5 50 3 1 0.1 0.001 98.2 67.1 97.0 69.5 185.0000
10 50 3 1 0.1 0.001 96.8 65.6 96.0 67.5 187.1000
15 50 3 1 0.1 0.001 95.0 62.3 92.8 66.4 174.2167
20 50 3 1 0.1 0.001 96.2 64.0 93.6 66.2 183.5000
25 50 3 1 0.1 0.001 96.0 63.6 93.6 65.1 178.5000
30 50 3 1 0.1 0.001 95.2 63.1 94.4 65.3 178.9667
35 50 3 1 0.1 0.001 94.8 61.6 91.2 64.0 177.2500
40 50 3 1 0.1 0.001 95.4 61.6 92.8 62.9 182.8167
45 50 3 1 0.1 0.001 95.8 62.0 91.0 63.1 184.3167
50 50 3 1 0.1 0.001 95.4 60.9 93.0 63.6 184.1167
55 50 3 1 0.1 0.001 94.8 61.1 92.6 64.5 180.8167
60 50 3 1 0.1 0.001 95.2 61.1 91.8 64.2 183.2833
Score: 62.8 (before LVQ) / 65.2 (after LVQ)

Table B.9: Experiments with Kohonen Maps

[Figure B.11 chart omitted: 3D plot of CV accuracy (%) vs. cut-off time (min) for each K-map configuration (3x3, 20x20, and 50x50, before and after LVQ); data as in table B.9.]

Figure B.11: Experiments with Kohonen Maps
[Figure B.12 chart omitted: CV accuracy (%) vs. cut-off time (min) for C5.0 single, C5.0 10-fold CV mean, and C5.0 boosted runs; data as in table B.10.]

Figure B.12: Experiments with C5.0

C5.0 full

Cut-Off Time  # of rules  Single  10-fold CV  Boost
5 13 97.1 96.9 97.1
10 29 93.9 92.4 94.2
15 41 89.2 89.1 91.5
20 45 88.6 87.5 89.8
25 45 88.6 88.7 89.8
30 45 88.6 87.5 89.8
35 48 87.1 86.3 88.0
40 48 87.1 85.5 88.0
45 48 87.1 87.1 88.0
50 48 87.1 86.5 88.0
55 48 87.1 86.3 88.0
60 48 87.1 86.7 88.0
Score/Avg 89.1 88.4 90.0

Table B.10: Experiments with C5.0

c339 > 0:
:...c339 > 0.8: 32 (5.0/2.0)
: c339 <= 0.8:
: :...c341 <= 0.6: 34 (9.0)
: c341 > 0.6: 33 (5.0/2.0)
c339 <= 0:
:...c175 > 0:
:...c080 <= 0.2: 2 (4.0/1.0)
: c080 > 0.2: 1 (4.0)
c175 <= 0:
:...c380 > 0:
:...c380 <= 0.6: 4 (4.0)
: c380 > 0.6:
: :...c380 <= 0.7: 3 (5.0)
: c380 > 0.7: 2 (2.0)
c380 <= 0:
:...c291 > 0:
:...c411 <= 0.8:
: :...c291 <= 0.8: 7 (3.0/1.0)
: : c291 > 0.8: 4 (4.0)
: c411 > 0.8:
: :...c291 <= 0.8: 6 (6.0)
: c291 > 0.8: 5 (8.0/2.0)
c291 <= 0:
:...c381 > 0:
:...c431 <= 0.7: 3 (2.0/1.0)

: c431 > 0.7: 4 (3.0)
c381 <= 0:
:...c382 > 0:
:...c382 <= 0.1: 7 (3.0)
: c382 > 0.1: 6 (4.0/1.0)
c382 <= 0:
:...c100 > 0:
:...c212 <= 0.8: 7 (4.0/1.0)
: c212 > 0.8:
: :...c045 <= 0: 6 (2.0)
: c045 > 0: 5 (3.0/1.0)
c100 <= 0:
:...c214 > 0:
:...c106 <= 0.8: 17 (9.0)
: c106 > 0.8:
: :...c103 <= 0:
: :...c099 <= 0.8: 16 (10.0)
: : c099 > 0.8: 15 (9.0/3.0)
: c103 > 0:
: :...c219 <= 0: 13 (2.0)
: c219 > 0: 12 (3.0)
c214 <= 0:
:...c387 > 0: 1 (3.0)
c387 <= 0:
:...c467 > 0:
:...c434 > 0: 14 (3.0)
: c434 <= 0:
: :...c436 <= 0: 16 (4.0)
: c436 > 0: 15 (3.0)
c467 <= 0:
:...c350 > 0:
:...c292 > 0:
: :...c207 <= 0: 13 (3.0)
: : c207 > 0: 12 (2.0)
: c292 <= 0:
: :...c351 > 0.6: 14 (3.0)
: c351 <= 0.6:
: :...c295 <= 0: 16 (2.0)
: c295 > 0: 15 (3.0)
c350 <= 0:
:...c435 > 0:
:...c434 <= 0.7: 12 (5.0/2.0)
: c434 > 0.7: 11 (2.0)
c435 <= 0:
:...c213 > 0: 11 (4.0/1.0)
c213 <= 0:
:...c353 > 0:
:...c207 <= 0.7: 11 (2.0)
: c207 > 0.7: 10 (4.0/1.0)
c353 <= 0:
:...c103 > 0:
:...c103 <= 0.8: 10 (2.0)

: c103 > 0.8: 9 (5.0/1.0)
c103 <= 0:
:...c080 > 0: 2 (2.0/1.0)
c080 <= 0:[S1]

SubTree [S1]

c478 > 0: 9 (3.0/1.0)


c478 <= 0:
:...c413 > 0:
:...c418 <= 0: 8 (4.0/1.0)
: c418 > 0: 17 (2.0/1.0)
c413 <= 0:
:...c318 > 0: 8 (3.0)
c318 <= 0:
:...c411 > 0: 8 (4.0/1.0)
c411 <= 0:
:...c156 <= 0: 60 (328.0/1.0)
c156 > 0: 8 (3.0/1.0)

Extracted rules:

Rule 1: (cover 4)
c080 > 0.2
-> class 1 [0.833]

Rule 2: (cover 3)
c387 > 0
-> class 1 [0.800]

Rule 3: (cover 2)
c380 > 0.7
-> class 2 [0.750]

Rule 4: (cover 4)
c080 <= 0.2
c175 > 0
-> class 2 [0.667]

Rule 5: (cover 2)
c080 > 0
c175 <= 0
-> class 2 [0.500]

Rule 6: (cover 5)
c380 > 0.6
c380 <= 0.7
-> class 3 [0.857]

Rule 7: (cover 2)
c381 > 0
c431 <= 0.7

-> class 3 [0.500]

Rule 8: (cover 4)
c291 > 0.8
c411 <= 0.8
-> class 4 [0.833]

Rule 9: (cover 4)
c380 > 0
c380 <= 0.6
-> class 4 [0.833]

Rule 10: (cover 3)
c381 > 0
c431 > 0.7
-> class 4 [0.800]

Rule 11: (cover 8)
c291 > 0.8
c411 > 0.8
-> class 5 [0.700]

Rule 12: (cover 3)
c045 > 0
c212 > 0.8
-> class 5 [0.600]

Rule 13: (cover 6)
c291 <= 0.8
c411 > 0.8
-> class 6 [0.875]

Rule 14: (cover 2)
c045 <= 0
c212 > 0.8
-> class 6 [0.750]

Rule 15: (cover 4)
c382 > 0.1
-> class 6 [0.667]

Rule 16: (cover 3)
c382 > 0
c382 <= 0.1
-> class 7 [0.800]

Rule 17: (cover 4)
c100 > 0
c212 <= 0.8
-> class 7 [0.667]

142

Rule 18: (cover 3)
c291 > 0
c291 <= 0.8
c411 <= 0.8
-> class 7 [0.600]

Rule 19: (cover 3)
c100 <= 0
c318 > 0
-> class 8 [0.800]

Rule 20: (cover 4)
c413 > 0
c418 <= 0
-> class 8 [0.667]

Rule 21: (cover 4)
c291 <= 0
c411 > 0
-> class 8 [0.667]

Rule 22: (cover 3)
c156 > 0
c381 <= 0
c382 <= 0
c478 <= 0
-> class 8 [0.600]

Rule 23: (cover 5)
c103 > 0.8
-> class 9 [0.714]

Rule 24: (cover 3)
c435 <= 0
c478 > 0
-> class 9 [0.600]

Rule 25: (cover 2)
c103 > 0
c103 <= 0.8
c213 <= 0
-> class 10 [0.750]

Rule 26: (cover 4)
c207 > 0.7
c353 > 0
-> class 10 [0.667]

Rule 27: (cover 2)
c434 > 0.7
c435 > 0
-> class 11 [0.750]

143
Rule 28: (cover 2)
c207 <= 0.7
c350 <= 0
c353 > 0
-> class 11 [0.750]

Rule 29: (cover 4)
c213 > 0
c214 <= 0
-> class 11 [0.667]

Rule 30: (cover 3)
c214 > 0
c219 > 0
-> class 12 [0.800]

Rule 31: (cover 2)
c207 > 0
c350 > 0
-> class 12 [0.750]

Rule 32: (cover 5)
c434 <= 0.7
c435 > 0
c467 <= 0
-> class 12 [0.571]

Rule 33: (cover 3)
c207 <= 0
c292 > 0
c350 > 0
-> class 13 [0.800]

Rule 34: (cover 2)
c103 > 0
c219 <= 0
-> class 13 [0.750]

Rule 35: (cover 3)
c292 <= 0
c351 > 0.6
-> class 14 [0.800]

Rule 36: (cover 3)
c434 > 0
c467 > 0
-> class 14 [0.800]

Rule 37: (cover 3)
c434 <= 0
c436 > 0
-> class 15 [0.800]

144
Rule 38: (cover 3)
c295 > 0
c350 > 0
c351 <= 0.6
-> class 15 [0.800]

Rule 39: (cover 9)
c099 > 0.8
c103 <= 0
-> class 15 [0.636]

Rule 40: (cover 10)
c099 <= 0.8
c106 > 0.8
c214 > 0
-> class 16 [0.917]

Rule 41: (cover 4)
c436 <= 0
c467 > 0
-> class 16 [0.833]

Rule 42: (cover 2)
c295 <= 0
c350 > 0
-> class 16 [0.750]

Rule 43: (cover 9)
c106 <= 0.8
c214 > 0
-> class 17 [0.909]

Rule 44: (cover 2)
c413 > 0
c418 > 0
-> class 17 [0.500]

Rule 45: (cover 5)
c339 > 0.8
-> class 32 [0.571]

Rule 46: (cover 5)
c339 <= 0.8
c341 > 0.6
-> class 33 [0.571]

Rule 47: (cover 9)
c339 > 0
c339 <= 0.8
c341 <= 0.6
-> class 34 [0.909]

145
Rule 48: (cover 328)
c080 <= 0
c103 <= 0
c156 <= 0
c214 <= 0
c291 <= 0
c318 <= 0
c339 <= 0
c350 <= 0
c353 <= 0
c380 <= 0
c382 <= 0
c387 <= 0
c411 <= 0
c413 <= 0
c435 <= 0
c467 <= 0
-> class 60 [0.994]

Default class: 60

B.4.6 Comparisons and Comments


We will compare the three methods tested along the following dimensions:

Cross-validation Accuracy. This is definitely an important parameter, since we want
our evaluator to generalize the training exemplars into a concept of total ship crisis
level. With an allowed deviation of 0, the boosted decision tree exhibits the best
average accuracy (i.e., score) of 90%, followed by the 470-100-60 ANN with 85%, and
by the 20x20 K-map (66%) [see figure B.13, p.147]. It seems that in our domain
multilayer perceptrons with backpropagation learning would be comparable to decision
trees. Other researchers have reported similar results in different real-world
domains ([25]). In our experiments, Kohonen maps were significantly less accurate.
Whether this is inherent in the domain or rather a matter of parameter tuning is
yet to be investigated.
Learning Time. Learning time is most important for incremental on-line learning, when
Minerva can refine its board-evaluator knowledge after completing a scenario. Certainly,
to be practical, such a mode would require a short learning time. In our experiments,
ANNs were the slowest (around 1.5 hours per training session of 1000 epochs^3), while
C5.0 was very quick (4 seconds per session). K-maps were slightly faster than ANNs
but nowhere close to C5.0. The slowness of backpropagation has been reported in
other works as well ([25]).

146

[Figure B.13 plot, "ANN vs. K-map vs. C5.0": cross-validation accuracy (%) versus
CutOffTime (5-60 min) for boosted C5.0, the 20x20 K-map, and the 479-100-60 ANN.]

Figure B.13: ANNs vs. K-maps vs. C5.0

147
Problem-Solving Time. Problem-solving time (the time to compute the time till kill-
point, ttkp(W), given a ship state W) is crucial if the board evaluator is to be
called many times during the schedule stage. For each proposed action, we need up to
four board evaluations. Given the vast number of actions Minerva typically produces
at every cycle, we sometimes need 100 or more state evaluations per second. The
current implementation of ANNs in Prolog has a board evaluation time of over 1.2
seconds, while the C++ implementation brings this time down to 0.006 seconds. Decision
trees are the quickest, with a problem-solving time under 0.003 seconds (within
C5.0 itself) and 0.125 seconds for our Prolog implementation.
Explanation Convenience. While the rules produced by C5.0 are generally
human-readable and allow for a relatively easy explanation facility, board evaluators
produced by backpropagation or K-maps are much more opaque and harder to explain.
It seems to us that it would be nearly impossible to generate comprehensive
explanations for those two types, especially in real time.

Overall, among the three approaches tested, decision trees/rules seem the most
promising for the task of board evaluation in our domain.
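To illustrate why extracted rules evaluate so cheaply at problem-solving time, here is a minimal C++ sketch of a rule-based board evaluator in the style of the C5.0 rule listing above. The structures and the function are our own illustration, not the thesis implementation: each rule is a conjunction of threshold tests on features, and the highest-confidence firing rule wins, falling back to the default class.

```cpp
#include <map>
#include <string>
#include <vector>

// One C5.0-style rule: a conjunction of threshold tests on features,
// concluding a class with an attached confidence.
struct Test { std::string feature; bool greater; double threshold; };
struct Rule { std::vector<Test> tests; int cls; double confidence; };

// Return the class of the highest-confidence rule whose tests all hold,
// or the default class if no rule fires.
int evaluate(const std::map<std::string, double>& state,
             const std::vector<Rule>& rules, int default_cls) {
    int best_cls = default_cls;
    double best_conf = -1.0;
    for (const Rule& r : rules) {
        bool fires = true;
        for (const Test& t : r.tests) {
            auto it = state.find(t.feature);
            double v = (it == state.end()) ? 0.0 : it->second;
            // A "greater" test requires v > threshold; otherwise v <= threshold.
            if (t.greater ? !(v > t.threshold) : !(v <= t.threshold)) {
                fires = false;
                break;
            }
        }
        if (fires && r.confidence > best_conf) {
            best_conf = r.confidence;
            best_cls = r.cls;
        }
    }
    return best_cls;
}
```

For example, Rule 3 above (c380 > 0.7 -> class 2 [0.750]) becomes a single-test rule in this representation; evaluation is a handful of comparisons per rule, which is consistent with the sub-millisecond timings reported for the tree/rule evaluator.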

B.5 Rule-based Scheduling Layer of Minerva-4


Minerva-4 has a handcoded rule-based scheduling layer shown below.

3
We used our own MS Visual C++ 5.0 code on a Pentium-II 300MHz under MS Windows NT 4.0.

148
%% sconcludes/2 is of form
%% sconcludes(SRuleID,ScoreDelta)

sconcludes(sr1000,1500).
sconcludes(sr1010,400).
sconcludes(sr1012,700).
sconcludes(sr1020,-100).
sconcludes(sr1022,2000).
sconcludes(sr1030,2000).
sconcludes(sr1040,500).

%% scondition/4 is of form
%% scondition(SRuleID,Action,TopGoal,TheFirstEdge)

scondition(sr1000,_,process_hypothesis(_),_).

scondition(sr1010,_,explore_hypothesis(_),_).
scondition(sr1012,_,explore_hypothesis(_),eh1).

%% Those are the only scheduling rules that
%% reference domain knowledge

scondition(sr1020,perform([invest_f,_,_]),_,_).
scondition(sr1022,perform([invest_f,_,Space]),_,_) :- vital_space(Space).
scondition(sr1030,perform([flood,_,_]),_,_).
scondition(sr1040,perform([fight_fire,_,_]),_,_).

:- dynamic ba/2.

%% Returns the best action. Fails if the agenda is empty.
%% Dumps ratings to the trace file.

best(BAction) :-
    setof(A, A^(on_agenda(A), A = perform(_)), Actions),
    retractall(ba(_,_)),
    %% tell('tracefile.tra'),
    forall(
        member(Action, Actions),
        (
            compute_rating(Action, R),
            dump_rating(Action, R),
            adjust_ba(Action, R)
        )
    ),
    ba(BAction, _),
    %% tell(user),
    retractall(ba(_,_)).

%% Replace the recorded best action if this one rates higher.
adjust_ba(Action, R) :-
    ba(_, R0),
    R0 < R,
    retractall(ba(_,_)),
    assert(ba(Action, R)), !.

%% Otherwise keep the current best action.
adjust_ba(_Action, _R) :-
    ba(_, _), !.

%% No best action recorded yet: record this one.
adjust_ba(Action, R) :-
    assert(ba(Action, R)), !.

%% Outputs the rating R of the action Action

dump_rating(Action, R) :-
    cycle(C),
    current_time(Time),
    %% write('(cycle-rating '), write(C), write(' '),
    %% write(Action), write(' '), write(R), write(')'), nl,
    d_write_chain(C, R, Action, '', 'cycle-on-agenda', '', Time).

%% Returns the rating R of the given Action

compute_rating(Action, R) :-
    findall(SR, (sconcludes(SR,_), ssatisfied(SR, Action)), SRules),
    adjust_rat(0, R, SRules), !.

adjust_rat(R, R, []).

adjust_rat(Old, New, [SRule|Rest]) :-
    sconcludes(SRule, Delta),
    Tmp is Old + Delta,
    adjust_rat(Tmp, New, Rest).

%% Holds if SRule is satisfied by Action

ssatisfied(SRule, Action) :-
    forall(
        scondition(SRule, A, Top, Edge),
        (A = Action, compute_goal_edge(Action, Top, Edge))
    ).

150
%% Returns the top-level goal TG and the first edge E for the given Action

compute_goal_edge(Action, TG, E) :-
    top_level(TG),
    led_by(Action, TG, E).

led_by(Action, Goal, E) :-
    aaction(Action, E, Goal).

led_by(Action, Goal, E) :-
    aaction(Action, _, SubGoal),
    led_by(SubGoal, Goal, E).

B.6 Critiquing and Problem-Solving Knowledge


Fact B.6.1 We have used the following functions:
1. M2(A1, A2) (definition 3.4.28, p.45) as our matching function.
2. The degree of closeness c(a, a') (definition 3.4.30, p.45), as shown in table B.11, p.152.
3. The following time-lag function (definition 3.4.24, p.42):

    t_lag = -23.0364 + 8.99091 * n_messages   (sec.)

where n_messages is the interface's traffic (number of messages per minute) at time t.

151
Type of a, a'                      Value of c(a, a')
Fight fire by level                +1 if the arguments match perfectly, otherwise -1
E/M compartment isolation          +1 if the arguments match perfectly, otherwise -1
Repair equipment in space          +1 if the arguments match perfectly, otherwise -1
Patch and plug rupture             +1 if the arguments match perfectly, otherwise -1
Dewater a space                    +1 if the arguments match perfectly, otherwise -1
Set deck and overhead boundaries   +1 if the arguments match perfectly, otherwise -1
Adjust remain/chillwater valves    +1 if the arguments match perfectly, otherwise -1
Flood a magazine                   +1 if the arguments match perfectly, otherwise -1
Start or stop fire pumps           +1 if the arguments match perfectly, otherwise -1
Request permission to switch       +1 if the arguments match perfectly, otherwise -1
  fire pumps on/off
Request permission to flood a      +1 if the arguments match perfectly, otherwise -1
  magazine
Report ship's general status       +1 if the arguments match perfectly, otherwise -1
Query repair party readiness       +1 if the arguments match perfectly, otherwise -1
Report MR&Z achieved               +1 if the arguments match perfectly, otherwise -1
Query MR&Z status                  0
View readiness chart               0
Query repair status progress       0
Fight fire by space                1 - 0.05 abs(d_frame), where d_frame is the
                                   difference in arguments (in frames)
Set bulkhead boundaries            (1 - 0.05 abs(d_frame)) / 6, where d_frame is the
                                   difference in arguments (in frames)
Investigate space                  1 if fbsec_fore <= t_invest <= fbsec_aft,
                                   1 - 0.2 abs(d_frame) otherwise. Here t_invest is
                                   the compartment to investigate, fbsec_fore is the
                                   correct secondary forward fire boundary (frame
                                   number), and fbsec_aft is the correct secondary
                                   aft fire boundary (frame number).

Table B.11: Degree of Closeness Function

152
Appendix C
Minerva Graphical User Interfaces
(GUIs)
C.1 Explanatory GUI
The explanatory GUI allows a user to look into Minerva-5's reasoning. The information is
displayed in several ways (figure C.1, p.154):

Finding window lists all the findings in Minerva format, sorted chronologically;
Hypothesis window lists all the hypotheses in Minerva format, sorted chronologically;
Action window lists all the external domain-level actions in Minerva format, sorted
chronologically;
Strategy chain graphical display shows the strategy network of a selected cycle. Actions
selected for execution are shown in boxes with a shadow. External actions are also
tagged with their total utility ratings;
Natural language (NL) explanation window articulates Minerva's reasoning for a
particular chain in English.

The interface allows the user to:

153
[Figure C.1 screenshot: a toolbar for navigating the graphs and controlling display
options; the strategy chain graphical display; the finding, hypothesis, and action
windows, each listing its items chronologically; and the NL explanation window,
which provides natural-language output for a strategy chain.]

154
Figure C.1: Minerva-5 Explanatory GUI


1. zoom in/zoom out;
2. cycle through a scenario;
3. go to a selected cycle;
4. go to the cycle where a specific action was generated (by clicking on the action);
5. track new actions automatically as Minerva generates them (in this mode all the
windows are updated automatically as well);
6. view NL-form explanations of various lengths for a particular chain by clicking on it;
7. go to the cycle where a particular hypothesis was concluded (by clicking on the
hypothesis);
8. use different graph viewing modes (hierarchical, orthogonal, circular, etc.).

The primary purposes of the interface include:
1. Explanation of Minerva actions when used as a part of the Advisory GUI (sec-
tion C.2, p.155).
2. Apprenticeship tutoring.
3. Minerva debugging.

Technically, the interface is implemented in MS Visual C++ 5.0 using MFC and the
Graphic Layout Toolkit. The data is retrieved from the common knowledge repository
through an ODBC interface.

C.2 Advisory GUI


The advisory GUI serves the task of advising (section 3.3.3, p.27), presenting a human
subject with advice. The interface (figure C.2, p.157) is centered on the top-ranked and
low-ranked Minerva action lists, which correspond to the current "do's" and "don'ts". The lists

155
are dynamically updated as the information changes, and an explanation of a particular
action can be obtained by clicking on the action. Such an explanation consists
of the reasoning behind the action (the explanatory GUI (section C.1, p.153) is
invoked to present it) and the reasoning behind the action's rank. The latter includes
predicted ship states (obtained from the EPN predictor) and their rankings (computed
by the state evaluator). NL generation is provided as well.
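The do/don't partition can be sketched as a simple threshold split of the rated action list. The thresholds and names below are our assumptions for illustration; the actual interface uses Minerva's own rankings:

```cpp
#include <string>
#include <utility>
#include <vector>

// Split rated actions into "do" (top-ranked) and "don't" (low-ranked) lists.
// Ratings at or above do_threshold are recommended; ratings at or below
// dont_threshold are advised against; everything in between is not shown.
void split_advice(const std::vector<std::pair<std::string, double>>& rated,
                  double do_threshold, double dont_threshold,
                  std::vector<std::string>& dos,
                  std::vector<std::string>& donts) {
    for (const auto& a : rated) {
        if (a.second >= do_threshold) dos.push_back(a.first);
        else if (a.second <= dont_threshold) donts.push_back(a.first);
    }
}
```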

C.3 Critiquing GUI


The critiquing GUI is intended to work either on its own or paired with the advisory GUI
(section C.2, p.155). In the former case, the interface works as an instructor's aid,
delivering a critique and a performance measure (as defined in section 3.3.4, p.30). In the
latter case, the two interfaces work together as an artificial instructor, providing the
student with advising and critiquing information. Together with the DCTrain ([28])
immersive multimedia environment, including the semi-automatic scenario generation
facility ([37]), and Minerva itself, the system forms a virtually autonomous training
system with the student being the only human in the loop.
The critiquing GUI is centered on the "Errors of Omission" and "Errors of Commission"
lists shown in figure C.3, p.158. The lists are formed as described in section 3.3.4, p.30.
Clicking on an action invokes the explanatory interface, providing the user with the
reasoning behind the action and its ranking.

156
[Figure C.2 screenshot: a toolbar for navigating the graphs and controlling display
options; the strategy chain graphical display; the finding window (listing all the
findings chronologically); the NL explanation window (providing natural-language
output for a strategy chain); the DO list of recommended actions, e.g., "R3: set FB
370,338,300,254,4,2 on 3-319-0", "R3: investigate 01-300-2", and "R3: fight fire
2-335-2"; and the DON'T list of actions advised against, e.g., "DCCO: shut firemain
valve 1-49-1" and "R5: investigate 3-300-0".]

157

Figure C.2: Minerva-5 Advisory GUI


[Figure C.3 screenshot: the NL critique window (providing natural-language output for
the selected critique); a toolbar for navigating the graphs and controlling display
options; the strategy chain graphical display; the finding window (listing all the
findings chronologically); the action window (listing all the student's actions
chronologically); and the Errors of Commission and Errors of Omission lists.]

158

Figure C.3: Minerva-5 Critiquing GUI


Appendix D
Experimental Data
This appendix presents a number of experimental results.

D.1 Blackboard Statistics


In the following tables we have collected statistics on the following parameters:
1. Cycles (or Pertinent Cycles) is the number of cycles during a scenario with a
non-empty strategy blackboard.
2. Actual Cycles is the total number of cycles for a scenario.
3. Nodes is the number of nodes in the strategy network within a cycle.
4. Top-level is the number of top-level goals within a cycle.
5. Edges is the number of edges in the strategy network within a cycle.
6. On-agenda is the number of lowest level nodes in the strategy network within a
cycle.

159
D.1.1 Minerva-3
Tables D.1, p.161 and D.2, p.162 show blackboard statistics collected on 110 diagnosis
runs.

D.1.2 Minerva-4
Table D.3, p.163 presents the corresponding Minerva-4 blackboard statistics, collected
from 15 damage control scenarios.

D.1.3 Minerva-5
Minerva-5 has the same domain and strategy knowledge as Minerva-4 and therefore has
blackboard statistics somewhat similar to Minerva-4's.

D.1.4 Comparative Chart


Table D.4, p.163 presents a comparative chart for Minerva-3 and Minerva-4/5.

D.2 Damage Control Scenarios


This section provides details on the Minerva-4/5 evaluation experiments described in
section 3.9.2.1, p.66.
Tables D.5, p.165, D.6, p.166, D.7, p.167, D.8, p.168, D.9, p.169, D.10, p.170, D.11,
p.171, D.12, p.172, D.13, p.173, D.14, p.174, and D.15, p.175 show Minerva-4, Minerva-5,
and SWOS student performance measured along the following dimensions:
1. Complete primary damage specifications were logged as descriptions of n blasts. Each
blast was described by its compartment, a 3-value parameter vector (as DCTrain
requires), and the blast time.
2. The outcome of the scenario was defined as follows:

160
Average Maximum
Filename Cycles Nodes Top-level Edges On-agenda Nodes Top-level Edges On-agenda
p1040.tra 68 78.514709 5.455883 110.897057 16.514706 180 9 237 37
p1042.tra 30 25.566668 3.6 24.066668 9.766666 50 6 49 18
p1043.tra 21 19.619047 2.238095 17.952381 8.142858 37 3 36 16
p1044.tra 43 48 3.395349 68.069771 11.767442 143 5 215 30
p1045.tra 17 16.941177 2.117647 16.117647 7.352941 31 3 34 14
p1046.tra 19 17.473684 2.473684 16.526316 7.31579 31 4 34 14
p1047.tra 37 51.486488 3.513514 74.918922 11.810811 143 6 215 30
p1048.tra 21 19.619047 2.238095 17.952381 8.142858 37 3 36 16
p1050.tra 61 83.016396 4.606557 119.049179 17.147541 179 7 236 37
p1051.tra 33 25.939394 4.121212 24.363636 9.484848 52 7 51 18
p1052.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1053.tra 15 17.266666 2.066667 16.6 7.6 31 3 34 14
p1054.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1055.tra 60 59.266666 3.216667 80.01667 14.65 143 8 193 34
p1056.tra 67 60.761192 3.880597 82.343285 14.477612 143 9 193 34
p1057.tra 15 13 1.933333 11.533334 5.533333 20 4 18 9
p1058.tra 58 45.982758 3.603448 61.258621 11.534483 143 7 215 30
p1059.tra 11 11.818182 1.636364 10.454545 5.090909 20 3 18 9
p1061.tra 61 59.786884 3.213115 81.491806 14.606558 143 7 193 34
p1063.tra 22 20.90909 2.772727 19.272728 8.545455 40 4 39 17
p1065.tra 67 78.149254 4.701492 111.253731 16.64179 179 8 236 37
p1066.tra 10 11.5 1.5 10.1 5 20 2 18 9
p1067.tra 18 16.5 2.111111 15.611111 7.111111 31 3 34 14
p1068.tra 11 11.818182 1.636364 10.454545 5.090909 20 3 18 9
p1069.tra 11 12 1.636364 10.454545 5.090909 20 3 18 9
p1070.tra 27 21.777779 2.592592 21.518518 8.888889 39 4 43 16
p1071.tra 70 80.242859 5.771429 111.542854 16.985714 182 9 239 38
p1072.tra 10 11.5 1.5 10.1 5 20 2 18 9
p1073.tra 21 20.571428 2.476191 20.095238 8.142858 39 4 43 16
p1074.tra 27 21.037037 2.407408 19.851852 8.888889 37 3 36 16
p1075.tra 37 51.513512 3.45946 74.702705 11.810811 143 5 215 30
p1076.tra 21 19.619047 2.238095 17.952381 8.142858 37 3 36 16
p1077.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1078.tra 37 51.567566 3.45946 74.810814 11.810811 143 5 215 30
p1079.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1080.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1081.tra 32 21.21875 2.59375 20.59375 8.71875 39 4 43 16
p1083.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1084.tra 21 19.619047 2.238095 17.952381 8.142858 37 3 36 16
p1086.tra 20 17.4 2.55 16.299999 7.25 31 4 34 14
p1087.tra 17 16.941177 2.117647 16.117647 7.352941 31 3 34 14
p1088.tra 27 23.222221 2.777778 22.25926 9.148149 43 4 43 16
p1089.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1090.tra 18 17.111111 2.5 16.222221 7.111111 31 4 34 14
p1091.tra 27 22.518518 2.703704 21.555555 8.925926 43 4 43 16
p1092.tra 11 12 1.636364 10.454545 5.090909 20 3 18 9
p1093.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1094.tra 21 20.571428 2.476191 20.095238 8.142858 39 4 43 16
p1095.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1096.tra 30 23.733334 3.366667 22.633333 8.9 45 5 45 16
p1097.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1099.tra 17 16.470589 2.058824 15.705882 7.176471 31 3 34 14
p1100.tra 18 17.666666 2.444444 16.722221 7.444445 31 4 34 14

Table D.1: Minerva-3 Blackboard Statistics (Part 1)

161
p1101.tra 10 11.5 1.5 10.1 5 20 2 18 9
p1102.tra 21 21.523809 2.619048 20.142857 8.190476 43 4 43 16
p1103.tra 27 22.518518 2.703704 21.555555 8.925926 43 4 43 16
p1104.tra 63 56.761906 3 72.412697 14.984127 143 5 181 34
p1105.tra 23 17.826086 2.521739 16.52174 7.521739 31 3 34 14
p1106.tra 20 18 2.45 16.85 7.65 31 3 34 14
p1107.tra 20 18 2.45 16.85 7.65 31 3 34 14
p1108.tra 22 22.318182 2.681818 20.90909 8.5 43 4 43 16
p1109.tra 12 11.75 1.666667 10.166667 4.916667 20 3 18 9
p1110.tra 43 44.255814 3.418605 62.255814 11.255814 116 5 186 22
p1111.tra 10 11.5 1.5 10.1 5 20 2 18 9
p1112.tra 37 51.513512 3.45946 74.702705 11.810811 143 5 215 30
p1113.tra 21 17 2.523809 15.857142 7.047619 31 4 34 14
p1114.tra 21 20.571428 2.476191 20.095238 8.142858 39 4 43 16
p1115.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1116.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1117.tra 18 16.5 2.111111 15.611111 7.111111 31 3 34 14
p1118.tra 17 16.941177 2.117647 16.117647 7.352941 31 3 34 14
p1119.tra 15 17.266666 2.066667 16.6 7.6 31 3 34 14
p1120.tra 44 50.159092 3.431818 71.409088 12.181818 143 5 215 30
p1122.tra 71 84.887321 6.647887 124.718307 17.507042 199 11 282 39
p1123.tra 37 51.513512 3.45946 74.702705 11.810811 143 5 215 30
p1124.tra 27 22.518518 2.703704 21.555555 8.925926 43 4 43 16
p1125.tra 11 11.272727 1.545455 9.818182 4.818182 20 2 18 9
p1126.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1127.tra 15 16.533333 2.066667 15.866667 7.2 31 3 34 14
p1128.tra 21 17.380953 2.619048 16.285715 7.190476 31 4 34 14
p1129.tra 65 50.323078 3.384615 63.246155 12.846154 143 5 215 30
p1130.tra 15 17.266666 2.066667 16.6 7.6 31 3 34 14
p1131.tra 11 12 1.636364 10.454545 5.090909 20 3 18 9
p1132.tra 17 16.941177 2.117647 16.117647 7.352941 31 3 34 14
p1133.tra 11 11.818182 1.636364 10.454545 5.090909 20 3 18 9
p1134.tra 68 71.264709 5.602941 100.117645 15.632353 146 10 198 35
p1136.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1137.tra 9 11.444445 1.444444 10 5 20 2 18 9
p1138.tra 17 16.941177 2.117647 16.117647 7.352941 31 3 34 14
p1139.tra 20 17.049999 2.45 16.049999 7.1 31 4 34 14
p1140.tra 74 62.513512 4.972973 83.810814 15.054054 146 10 192 35
p1141.tra 28 21.714285 2.464286 20.535715 9.178572 40 4 39 17
p1142.tra 28 22 2.821429 20.821428 9.178572 40 4 39 17
p1143.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1144.tra 29 22.448277 3.206897 22.034483 8.655172 41 5 45 16
p1145.tra 30 23.733334 3.366667 22.633333 8.9 45 5 45 16
p1146.tra 27 22.518518 2.703704 21.555555 8.925926 43 4 43 16
p1147.tra 27 23.222221 2.777778 22.25926 9.148149 43 4 43 16
p1148.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
p1149.tra 28 23.107143 2.75 22.107143 9.142858 43 4 43 16
Avg/Max 27.68 27.18957628 2.69769532 31.37244242 8.8661007 199 11 282 39

Table D.2: Minerva-3 Blackboard Statistics (Part 2)

162
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
121 5.702479 2.206612 3.702479 2.768595 28 14 15 15 2032
24 5.75 2.291667 3.666667 2.916667 36 18 18 19 2879
24 5.5 2.166667 3.541667 2.791667 32 16 16 17 2433
25 5.28 2.08 3.4 2.68 32 16 16 17 2330
48 5.5 2.166667 3.541667 2.791667 32 16 16 17 2173
17 5.705883 2.235294 3.705882 2.941176 30 15 15 16 2392
19 6 2.315789 3.947368 3.105263 28 14 14 15 2598
22 6 2.363636 3.863636 3.045455 32 16 16 17 2056
95 11.936842 4.747368 7.6 5.8 56 20 38 24 2625
94 43.787235 18.74468 26.180851 20.531916 347 152 204 158 3031
74 13.405405 5.337838 8.459459 6.405406 64 24 40 29 2293
40 6.95 2.525 4.75 3.275 34 17 17 18 2094
41 5.634146 2.170732 3.682927 2.780488 30 15 15 16 2620
49.538 9.781 3.950 6.157 4.756 60.077 27.154 33.846 29.077 2427.385

Columns:
C1. number of pertinent cycles
C2. average number of nodes per cycle
C3. average number of top-level nodes per cycle
C4. average number of edges per cycle
C5. on-agenda nodes per cycle (average)
C6. nodes per cycle (max)
C7. top-level nodes per cycle (max)
C8. edges per cycle (max)
C9. on-agenda nodes per cycle (max)
C10. number of actual cycles performed by Minerva

Table D.3: Minerva-4 Blackboard Statistics

Category Minerva-3 Minerva-4 (Minerva-5)


Number of cycles 28 50
Avg. number of nodes per cycle 27 10
Avg. number of top-level nodes per cycle 3 4
Avg. number of edges per cycle 31 6
Avg. number of on-agenda nodes per cycle 9 5
Max number of nodes per cycle 199 60
Max number of top-level nodes per cycle 11 27
Max number of edges per cycle 282 34
Max number of on-agenda nodes per cycle 39 29

Table D.4: Minerva-3,4,5 Blackboard Statistics Comparison

163
(a) "dead" if a kill-point was reached within the rst 25 minutes of the scenario;
(b) "survived" if no kill-point was reached but some res didn't get extinguished
within the rst 25 minutes of the scenario;
(c) "victory" if all the res were extinguished and no kill-point was reached within
the rst 25 minutes.
3. Average cycle time was recorded for Minerva-4 and Minerva-5 as the time spent
per cycle averaged over all cycles starting from the rst damage report and ending
at the last damage-related message.
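The outcome definition above amounts to a small classification function; a sketch (names are ours) evaluated at the 25-minute mark:

```cpp
#include <string>

// Scenario outcome per the definition above, evaluated over the first 25
// minutes: a reached kill-point means "dead"; otherwise, all fires out means
// "victory", and remaining fires mean "survived".
std::string scenario_outcome(bool killpoint_reached, bool all_fires_out) {
    if (killpoint_reached) return "dead";
    if (all_fires_out) return "victory";
    return "survived";
}
```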

164
Compartment Blast parameters Time 1st cyc # 1st cyc time Last cyc # Last cyc time Avg cyc time Status
4-220-0-E S 0 20 8 7 6100 06:53.0 6251 11:50.0 1.967 Survived
4-110-0-E S 0 20 8 6 5173 04:47.0 5648 10:12.0 0.684 Survived
4-174-0-E S 0 20 8 6 6226 05:55.0 6628 11:44.0 0.868 Survived
2-161-1-T S 1 23 10 6 6228 08:11.0 6521 14:49.0 1.358 Survived
2-18-0-Q S 1 21 7 4 5812 04:34.0 6252 19:24.0 2.023 Survived
2-18-0-Q S 1 14 4 5 5777 07:14.0 6194 21:24.0 2.038 Survived
2-161-1-T S 0 13 5 6 5911 07:24.0 5963 09:08.0 2.000 Survived
3-310-2-L S 0 12 6 2 4728 03:24.0 5993 06:49.0 0.162 Survived
2-161-1-T S 1 18 7 6 6194 07:34.0 6503 14:48.0 1.405 Survived
2-18-0-Q S 0 16 6 7 6258 10:05.0 6588 21:23.0 2.055 Survived
2-338-2-L S 0 15 5 7 6255 10:09.0 6324 12:25.0 1.971 Survived
2-161-1-T S 1 11 4 7 6046 09:11.0 6102 11:02.0 1.982 Survived
1-18-0-Q S 1 25 11 6 6179 09:02.0 6405 16:43.0 2.040 Survived
4-126-0-E S 0 36 13 4 5797 04:12.0 6999 25:00.0 1.038 Survived
2-238-1-A S 0 32 13 3 5595 04:12.0 6156 08:05.0 0.415 Survived
3-370-0-E S 0 37 14 5 6022 05:21.0 6166 14:55.0 3.986 Survived
2-300-100-L S062 7 6116 09:17.0 6209 12:21.0 1.978 Survived
4-174-0-E S082 6 5898 06:08.0 6074 11:17.0 1.756 Survived
4-174-0-E S 0 12 5 6 6011 06:09.0 6594 25:00.0 1.940 Survived
3-220-100-A S 0 18 7 5 6063 08:18.0 6122 10:14.0 1.966 Survived
4-174-0-E S 0 11 3 4 6054 04:16.0 6631 09:12.0 0.513 Survived
4-426-0-A S044 5 5922 06:53.0 5998 09:22.0 1.961 Survived
3-410-0-K S058 0 281 00:10.0 816 00:45.0 0.065 Dead

165
3-370-0-E:3-78-0-M S 0 14 6:S 0 7 4 5 14 5732 05:14.0 5901 16:24.0 3.964 Dead
4-126-0-E:4-300-0-E S 0 11 6:S 0 17 5 5 12 5745 05:18.0 6072 22:31.0 3.159 Dead
2-238-1-A S 0 10 5 0 2256 01:35.0 5379 04:55.0 0.064 Dead
4-126-0-E S 0 24 10 6 6366 05:43.0 6719 24:29.0 3.190 Dead
2-238-1-A:3-338-0-E:-3-142-0-C S 0 24 10:S 0 50 18:S 3 100 40 7 15 17 4865 06:01.0 5216 21:04.0 2.573 Dead
3-370-0-E S 0 17 8 3 4871 03:18.0 5801 07:10.0 0.249 Dead
4-126-0-E S 0 15 7 7 5841 04:52.0 6055 18:14.0 3.748 Dead
2-238-1-A S 0 12 7 1 3761 02:25.0 6085 04:53.0 0.064 Dead
4-126-0-E:4-300-0-E S 0 15 7:S 0 12 6 5 14 6118 05:20.0 6494 19:54.0 2.324 Dead
2-238-1-A S 0 20 7 6 6114 06:13.0 6203 09:08.0 1.966 Dead
4-254-0-E S 0 19 5 0 320 00:11.0 5031 07:59.0 0.099 Dead
3-370-0-E:3-78-0-M S 0 15 7:S 0 7 3 4 11 5277 04:14.0 5875 10:14.0 0.602 Dead
4-126-0-E S 0 15 7 3 4656 03:15.0 5789 16:38.0 0.709 Dead
4-254-0-E:1-289-2-Q S 0 20 8:S 0 14 6 66 5927 05:14.0 6205 14:25.0 1.982 Dead
4-126-0-E S 0 16 6 5 6038 05:14.0 6094 08:43.0 3.732 Dead
3-370-0-E:3-78-0-M S 0 13 7:S 0 7 3 5 13 5920 05:15.0 6142 23:08.0 4.833 Dead
2-238-1-A S074 5 6451 07:21.0 6539 10:12.0 1.943 Dead
3-370-0-E S 0 13 5 3 5860 04:08.0 6226 07:02.0 0.475 Dead
4-126-0-E:4-300-0-E S 0 16 7:S 0 12 5 5 15 6027 05:12.0 6280 25:02.0 4.704 Dead
2-238-1-A S 0 11 6 6 6197 08:28.0 6258 10:28.0 1.967 Dead

Table D.5: DC Scenarios run with Minerva-4 (Part 1)


4-174-0-E S 1 40 16 1 1655 01:09.0 5640 05:26.0 0.064 Dead
4-174-0-E S 1 10 8 0 162 00:04.0 5060 05:02.0 0.061 Dead
3-370-0-E S 1 12 8 0 550 00:17.0 6151 20:09.0 0.213 Dead
3-370-0-E S 3 85 16 0 490 00:17.0 5380 17:44.0 0.214 Dead
3-370-0-E S 3 100 40 1 1729 01:16.0 5350 19:14.0 0.298 Dead
4-126-0-E S 1 26 12 0 504 00:17.0 6121 25:02.0 0.264 Dead
4-254-0-E S 1 20 40 2 3253 02:14.0 5482 07:34.0 0.144 Dead
4-254-0-E:1-220-3-L S 1 20 40:S 3 100 40 27 4020 02:31.0 6699 25:20.0 0.511 Victory
4-174-0-E:3-370-0-E S 3 100 40:S 3 98 40 39 4739 03:16.0 5819 22:24.0 1.063 Victory
4-174-0-E S 3 100 40 0 560 00:19.0 5351 04:47.0 0.056 Victory
4-254-0-E S 3 99 39 4 5650 04:10.0 6477 13:32.0 0.680 Victory
4-254-0-E S 3 100 40 0 474 00:16.0 4211 07:54.0 0.123 Victory
4-254-0-E:4-174-0-E S 3 100 40:S 3 100 40 24 3765 02:25.0 5641 11:51.0 0.302 Victory
2-200-2-Q S 0 32 7 2 3464 02:26.0 5106 06:52.0 0.162 Victory
1-206-3-A S 0 28 13 0 346 00:18.0 5651 19:49.0 0.221 Victory
-1-149-2-Q S 0 40 16 0 2293 01:36.0 5440 08:40.0 0.135 Victory
4-126-0-E:3-78-0-M S 0 8 3:S 0 20 8 7 20 6271 08:27.0 6461 20:34.0 3.826 Victory
4-220-0-E:4-220-4-F:-1-158-0- :-1-118-1- S 0 20 8:S 0 20 8:S 0 20 8:S 0 20 8 8 9 12 12 6115 05:10.0 6702 21:54.0 1.710 Victory
4-220-0-E S 0 20 8 6 6041 05:15.0 6186 09:56.0 1.938 Victory
4-110-0-E (a) S 0 20 8 6 6000 05:20.0 6190 11:06.0 1.821 Victory
4-110-0-E (b) S 0 20 8 6 5844 05:18.0 5997 09:58.0 1.830 Victory
4-220-0-E (a) S 0 20 8 6 6118 05:10.0 6274 10:13.0 1.942 Victory
4-110-0-E © S 0 20 8 6 6043 05:17.0 6209 10:24.0 1.849 Victory
4-220-0-E (b) S 0 20 8 6 6298 05:11.0 6459 10:23.0 1.938 Victory

166
4-174-0-E S 0 20 8 9 6280 05:12.0 6459 10:30.0 1.777 Victory
4-110-0-E (d) S 0 20 8 6 6269 05:13.0 6433 10:10.0 1.811 Victory
2-18-0-Q S195 10 6191 06:46.0 6579 19:59.0 2.044 Victory
2-161-1-T S 1 16 6 7 6253 08:15.0 6538 14:20.0 1.281 Victory
2-338-2-L S 0 14 5 0 1689 01:24.0 5205 04:44.0 0.057 Victory
2-18-0-Q S 1 13 5 6 6284 06:20.0 6707 19:58.0 1.934 Victory
2-338-2-L (a) S 0 14 5 0 2159 01:31.0 5199 04:26.0 0.058 Victory
2-18-0-Q S 1 24 6 0 1180 01:00.0 4997 11:10.0 0.160 Victory
2-18-0-Q S 1 11 5 3 6155 05:26.0 6588 19:59.0 2.016 Victory
2-161-1-T S173 0 2089 01:24.0 6572 24:07.0 0.304 Victory
2-338-2-L S 0 14 8 7 6309 07:57.0 6380 10:16.0 1.958 Victory
2-18-0-Q S 1 13 4 7 5958 06:25.0 6146 12:33.0 1.957 Victory
2-161-1-T S 1 11 5 5 6209 07:52.0 6473 13:26.0 1.265 Victory
2-18-0-Q S 1 17 5 4 6014 05:50.0 6390 18:11.0 1.971 Victory
2-161-1-T S 1 14 6 4 6306 04:40.0 6645 12:36.0 1.404 Victory
2-338-2-L S 0 14 3 5 6345 07:06.0 6438 10:08.0 1.957 Victory
2-338-2-L S 0 15 10 0 2184 01:36.0 5696 04:57.0 0.057 Victory
2-18-0-Q S176 0 2677 01:41.0 5640 11:25.0 0.197 Victory
2-161-1-T S 0 10 4 0 1856 01:17.0 5747 22:40.0 0.330 Victory
2-18-0-Q S 1 13 4 0 1448 01:05.0 5250 15:44.0 0.231 Victory

Table D.6: DC Scenarios run with Minerva-4 (Part 2)


2-18-0-Q S 1 12 3 13 6303 07:10.0 6684 19:46.0 1.984 Victory
2-161-1-T S183 6 6111 06:15.0 6171 08:13.0 1.967 Victory
2-338-2-L S 0 11 2 0 1792 01:28.0 5383 04:41.0 0.054 Victory
2-18-0-Q S 1 12 3 3 6151 04:32.0 6647 18:13.0 1.655 Victory
2-161-1-T S 1 6 4 6 6306 07:02.0 6625 14:10.0 1.342 Victory
2-338-2-L S 0 11 6 0 2388 01:36.0 5530 04:51.0 0.062 Victory
2-18-0-Q S 1 11 5 4 5931 05:07.0 6251 16:01.0 2.044 Victory
2-161-1-T S 1 12 5 3 5556 04:05.0 6270 07:18.0 0.270 Victory
2-338-2-L S 0 30 10 3 5189 04:00.0 5931 07:41.0 0.298 Victory
2-161-1-T S 1 27 10 4 6081 06:08.0 6334 11:57.0 1.379 Victory
3-410-0-K S 0 5 8 0 572 00:20.0 5297 23:15.0 0.291 Victory
4-174-0-E S 0 2 2 5 5661 05:17.0 5751 07:46.0 1.656 Victory
4-220-0-E S 0 18 7 2 3227 02:14.0 5580 04:57.0 0.069 Victory
4-174-0-E S 0 11 4 5 6574 05:16.0 6971 17:01.0 1.776 Victory
3-319-0-Q S 0 4 8 19 6102 07:17.0 6217 11:04.0 1.974 Victory
4-220-0-E S 0 12 10 6 6265 05:08.0 6347 07:48.0 1.951 Victory
4-220-0-E S 0 11 4 6 6411 05:14.0 6516 08:35.0 1.914 Victory
2-126-1-C S 0 10 5 8 6100 07:44.0 6177 10:13.0 1.935 Victory
3-97-200-L S 0 13 7 6 6207 07:23.0 6293 10:10.0 1.942 Victory
2-238-1-A S 0 21 7 9 6431 06:22.0 6562 10:41.0 1.977 Victory
1-126-0-C S 0 16 6 4 5986 04:26.0 6384 07:47.0 0.505 Victory
4-220-0-E S 0 12 7 7 6101 05:12.0 6170 07:25.0 1.928 Victory
4-126-0-E S 0 15 6 6 5902 05:12.0 6135 23:59.0 4.837 Victory
4-220-0-E S 0 11 3 7 6318 05:07.0 6429 08:39.0 1.910 Victory

4-110-0-E S 0 20 8 9 6165 05:10.0 6379 11:38.0 1.814 Victory
4-110-0-E S 0 20 8 0 1065 00:36.0 5565 04:57.0 0.058 Victory
4-110-0-E S 0 0 0 0 1060 00:36.0 6414 05:41.0 0.057 Victory
4-174-0-E S 0 20 8 0 650 00:21.0 6068 24:52.0 0.272 Victory
4-174-0-E S 0 20 8 7 6396 05:11.0 7066 14:27.0 0.830 Victory
4-220-0-E S 1 20 8 7 6098 05:12.0 6179 07:50.0 1.951 Victory
4-220-0-E S 0 20 8 11 6028 05:11.0 6173 09:51.0 1.931 Victory
4-220-0-E (a) S 0 20 8 11 6144 05:12.0 6266 09:15.0 1.992 Victory
4-110-0-E S 0 20 8 8 6399 05:12.0 6542 09:27.0 1.783 Victory
4-220-0-E S 0 20 8 9 5867 05:12.0 5997 09:29.0 1.977 Victory
4-110-0-E (e) S 0 20 8 6 5993 05:12.0 6188 11:05.0 1.810 Victory
4-110-0-E (f) S 0 20 8 6 6139 05:18.0 6578 10:17.0 0.681 Victory
4-220-0-E (c) S 0 20 8 6 6024 05:09.0 6638 21:53.0 1.635 Victory
4-174-0-E (a) S 0 20 8 9 6317 05:16.0 6669 10:06.0 0.824 Victory
4-174-0-E (b) S 0 20 8 9 6439 06:06.0 6639 11:58.0 1.760 Victory
4-110-0-E S 0 20 8 7 5879 05:17.0 6021 09:33.0 1.803 Victory
4-110-0-E (g) S 0 20 8 6 6045 05:16.0 6184 09:24.0 1.784 Victory
4-220-0-E (d) S 0 20 8 6 6008 05:09.0 6162 10:09.0 1.948 Victory
4-174-0-E (c) S 0 20 8 9 6196 05:13.0 6379 10:31.0 1.738 Victory
4-174-0 S 0 20 8 10 6022 05:17.0 6230 11:30.0 1.793 Victory

Table D.7: DC Scenarios run with Minerva-4 (Part 3)


4-220-0-E (e) S 0 20 8 6 6208 05:09.0 6376 10:32.0 1.923 Victory
4-110-0-E (h) S 0 20 8 6 6091 05:12.0 6259 10:20.0 1.833 Victory
2-116-2-Q S 1 20 9 5 6280 06:49.0 6767 22:41.0 1.955 Victory
3-164-2-Q S 1 20 8 10 6081 05:17.0 6297 19:58.0 4.079 Victory
1-300-0-C S 1 20 8 7 5991 06:52.0 6080 09:45.0 1.944 Victory
2-200-2-Q S 1 20 8 6 6516 05:26.0 6845 10:03.0 0.842 Victory
2-430-1-Q:2-414-0-Q:3-370-0-E S 1 16 0:S 1 12 7:S 0 11 7 12 15 20 6045 07:32.0 6296 16:50.0 2.223 Victory
3-370-0-E:3-78-0-M S 0 12 4:S 0 11 5 4 16 6365 05:12.0 6527 15:28.0 3.802 Victory
4-126-0-E S 0 11 5 5 5679 05:25.0 5728 08:24.0 3.653 Victory
2-238-1-A S 0 10 4 0 1073 00:58.0 5316 05:13.0 0.060 Victory
2-238-1-A S 0 10 5 6 6275 08:02.0 6359 10:45.0 1.940 Victory
4-126-0-E S 0 12 6 5 5997 05:12.0 6185 17:51.0 4.037 Victory
2-238-1-A S 0 16 6 1 3094 02:03.0 5961 05:09.0 0.065 Victory
2-238-1-A S 0 17 7 1 3512 02:08.0 6313 05:15.0 0.067 Victory
3-370-0-E:3-78-0-M S 0 13 7:S 0 10 6 6 17 6147 06:22.0 6315 17:21.0 3.923 Victory
2-238-1-A S 0 17 7 5 6546 07:01.0 6638 09:56.0 1.902 Victory

3-370-0-E:3-78-0-M S 0 21 8:S 0 11 6 4 17 5810 04:16.0 6446 11:49.0 0.712 Victory
4-126-0-E S 0 17 8 6 6210 05:13.0 6448 24:00.0 4.745 Victory
2-238-1-A S 0 19 8 5 6218 06:49.0 6303 09:34.0 1.941 Victory
3-370-0-E:2-238-1-A S 0 17 9:S 0 17 9 5 15 6340 05:23.0 6642 25:02.0 3.904 Victory
4-126-0-E S 0 20 8 5 5928 05:18.0 6300 25:01.0 3.180 Victory
3-370-0-E:3-78-0-M S 0 23 8:S 0 13 4 12 19 6449 05:18.0 6600 15:14.0 3.947 Victory
4-126-0-E S 0 21 11 6 6274 05:17.0 6620 23:45.0 3.202 Victory
2-238-1-A S 0 10 4 6 6019 07:19.0 6094 09:43.0 1.920 Victory
3-370-0-E:3-78-0-M S 0 23 8:S 0 7 5 8 17 6030 05:14.0 6182 15:04.0 3.882 Victory
2-238-1-A S 0 16 7 5 6304 07:28.0 6357 09:13.0 1.981 Victory
2-238-1-A S 0 8 7 3 5782 05:26.0 5834 07:08.0 1.962 Victory
4-126-0-E S 0 15 7 6 6032 06:15.0 6264 25:01.0 4.853 Victory
3-370-0-E:3-78-0-M S 0 15 6:S 0 7 2 5 13 6328 05:21.0 6487 13:25.0 3.044 Victory

Table D.8: DC Scenarios run with Minerva-4 (Part 4)


Compartment Blast parameters Time 1st cyc # 1st cyc time Last cyc # Last cyc time Avg cyc time Status
4-220-0-E S 0 20 8 7 6012 5:21 6098 9:51 3.244 Survived
4-110-0-E S 0 20 8 6 6760 5:17 6875 10:52 2.913 Survived
4-174-0-E S 0 20 8 6 6710 5:41 6825 11:24 2.983 Survived
2-161-1-T S 1 23 10 6 6713 6:49 6897 13:41 2.239 Survived
2-18-0-Q S 1 21 7 4 6181 4:31 6439 18:38 3.283 Survived
2-18-0-Q S 1 14 4 5 6308 7:55 6527 19:51 3.269 Survived
2-161-1-T S 0 13 5 6 6356 8:22 6533 15:14 2.328 Survived
3-310-2-L S 0 12 6 2 4958 3:09 6644 6:47 0.129 Survived
2-161-1-T S 1 18 7 6 6550 7:33 6604 8:46 1.352 Survived
2-18-0-Q S 0 16 6 7 6437 8:56 6618 17:46 2.928 Survived
2-338-2-L S 0 15 5 7 6561 6:39 6602 8:52 3.244 Survived
2-161-1-T S 1 11 4 7 6359 6:47 6522 12:42 2.178 Survived
1-18-0-Q S 1 25 11 6 6334 6:52 6433 12:08 3.192 Survived
4-126-0-E S 0 36 13 4 6376 5:25 6673 24:50.0 3.923 Survived
2-238-1-A S 0 32 13 3 5302 3:34 6639 17:49 0.639 Survived
3-370-0-E S 0 37 14 5 6498 4:39 6709 8:20 1.047 Survived
2-300-100-L S 0 6 2 7 6533 7:10 6586 10:40 3.962 Survived
4-174-0-E S 0 8 2 6 5987 4:23 6455 8:14 0.494 Survived
4-174-0-E S 0 12 5 6 6440 5:02 7063 19:31 1.395 Survived
3-220-100-A S 0 18 7 5 6632 5:49 6687 8:43 3.164 Survived
4-174-0-E S 0 11 3 4 5568 4:10 6276 9:25 0.445 Survived
4-426-0-A S 0 4 4 5 5365 8:20 5411 10:32 2.870 Survived

3-410-0-K S 0 5 8 0 1360 0:55 1436 1:30 0.461 Dead
3-370-0-E:3-78-0-M S 0 14 6:S 0 7 4 5 14 6124 5:44 6249 19:08 6.432 Dead
4-126-0-E:4-300-0-E S 0 11 6:S 0 17 5 5 12 6387 4:41 6619 12:56 2.134 Dead
2-238-1-A S 0 10 5 0 1995 1:31 5834 5:09 0.057 Dead
4-126-0-E S 0 24 10 6 6112 5:47 6307 23:30 5.451 Dead
2-238-1-A:3-338-0-E:-3-142-0-C S 0 24 10:S 0 50 18:S 3 100 40 7 15 17 6563 8:30 6649 13:05 3.198 Dead
3-370-0-E S 0 17 8 3 4853 2:55 6330 6:38 0.151 Dead
4-126-0-E S 0 15 7 7 5805 4:13 IGNORE IGNORE IGNORE Dead
2-238-1-A S 0 12 7 1 6492 8:16 6535 10:35 3.233 Dead
4-126-0-E:4-300-0-E S 0 15 7:S 0 12 6 5 14 6349 5:24 6448 24:50 11.778 Dead
2-238-1-A S 0 20 7 6 6588 8:09 6645 11:14 3.246 Dead
4-254-0-E S 0 19 5 0 1356 0:51 5065 5:22 0.073 Dead
3-370-0-E:3-78-0-M S 0 15 7:S 0 7 3 4 11 5446 4:46 5617 19:29 5.164 Dead
4-126-0-E S 0 15 7 3 4915 3:06 6149 24:02 1.018 Dead
4-254-0-E:1-289-2-Q S 0 20 8:S 0 14 6 6 6 6630 4:57 6775 13:05 3.366 Dead
4-126-0-E S 0 16 6 5 5664 4:53 5767 22:36 10.320 Dead
3-370-0-E:3-78-0-M S 0 13 7:S 0 7 3 5 13 5830 4:51 5911 13:21 6.296 Dead
2-238-1-A S 0 7 4 5 5978 5:27 6031 8:16 3.189 Dead
3-370-0-E S 0 13 5 3 5487 4:23 5567 14:08 7.313 Dead
4-126-0-E:4-300-0-E S 0 16 7:S 0 12 5 5 15 6352 5:50 6605 12:50.0 1.660 Dead

Table D.9: DC Scenarios run with Minerva-5 (Part 1)


2-238-1-A S 0 11 6 6 6386 8:45 6439 11:36 3.226 Dead
4-174-0-E S 1 40 16 1 3699 2:33 4371 5:45 0.286 Victory
4-174-0-E S 1 10 8 0 1076 0:35 6005 5:00 0.054 Victory
3-370-0-E S 1 12 8 0 1111 0:36 7123 16:55 0.163 Victory
3-370-0-E S 3 85 16 0 4165 8:20 4221 18:15 10.625 Victory
3-370-0-E S 3 100 40 1 1978 1:27 6043 21:00 0.289 Victory
4-126-0-E S 1 26 12 0 928 0:42 5245 18:41 0.250 Victory
4-254-0-E S 1 20 40 2 3345 2:19 5410 25:30 0.674 Victory
4-254-0-E:1-220-3-L S 1 20 40:S 3 100 40 2 7 3517 2:15 6345 21:02 0.399 Victory
4-174-0-E:3-370-0-E S 3 100 40:S 3 98 40 3 9 5497 3:18 7423 25:15 0.684 Victory
4-174-0-E S 3 100 40 0 1235 0:41 6473 5:31 0.055 Victory
4-254-0-E S 3 99 39 4 6209 4:21 6786 10:24 0.629 Victory
4-254-0-E S 3 100 40 0 540 0:19 4044 10:23 0.172 Victory
4-254-0-E:4-174-0-E S 3 100 40:S 3 100 40 2 4 3585 2:15 5816 12:03 0.264 Victory
2-200-2-Q S 0 32 7 2 3627 2:20 5981 5:32 0.082 Victory
1-206-3-A S 0 28 13 0 740 0:30 5831 11:23 0.128 Victory
-1-149-2-Q S 0 40 16 0 2634 1:49 5466 8:50 0.149 Victory
4-126-0-E:3-78-0-M S 0 8 3:S 0 20 8 7 20 6854 5:20 6978 18:27 6.347 Victory
4-220-0-E:4-220-4-F:-1-158-0- :-1-118-1- S 0 20 8:S 0 20 8:S 0 20 8:S 0 20 8 8 9 12 12 6480 5:14 6824 20:45 2.706 Victory
4-220-0-E S 0 20 8 6 6645 5:13 6730 9:42 3.165 Victory
4-110-0-E (a) S 0 20 8 6 6436 5:10 6558 11:07 2.926 Victory
4-110-0-E (b) S 0 20 8 6 6288 5:16 6378 9:37 2.900 Victory
4-220-0-E (a) S 0 20 8 6 6590 5:16 6667 9:20 3.169 Victory

4-110-0-E (c) S 0 20 8 6 6814 5:20 6946 11:48 2.939 Victory
4-220-0-E (b) S 0 20 8 6 6605 5:16 6772 14:04 3.162 Victory
4-174-0-E S 0 20 8 9 6409 5:19 6517 10:28 2.861 Victory
4-110-0-E (d) S 0 20 8 6 6170 5:16 6470 10:21 1.017 Victory
2-18-0-Q S 1 9 5 10 6854 5:20 7170 20:03 2.794 Victory
2-161-1-T S 1 16 6 7 6642 7:39 6813 13:55 2.199 Victory
2-338-2-L S 0 14 5 0 839 0:49 5072 4:37 0.054 Victory
2-18-0-Q S 1 13 5 6 6367 6:52 6610 19:56 3.226 Victory
2-338-2-L (a) S 0 14 5 0 2956 1:55 6211 4:58 0.056 Victory
2-18-0-Q S 1 24 6 0 1439 1:08 5200 15:19 0.226 Victory
2-18-0-Q S 1 11 5 3 6394 4:57 6652 19:08 3.298 Victory
2-161-1-T S 1 7 3 0 1077 0:53 5255 16:13 0.220 Victory
2-338-2-L S 0 14 8 7 6472 7:52 6534 11:08 3.161 Victory
2-18-0-Q S 1 13 4 7 6667 6:16 6960 22:18 3.283 Victory
2-161-1-T S 1 11 5 5 5535 6:38 5701 12:28 2.108 Victory
2-18-0-Q S 1 17 5 4 3992 5:45 4081 10:38 3.292 Victory
2-161-1-T S 1 14 6 4 6003 5:48 6168 11:26 2.048 Victory
2-338-2-L S 0 14 3 5 6344 5:29 6399 8:25 3.200 Victory
2-338-2-L S 0 15 10 0 1961 1:22 5016 4:31 0.061 Victory
2-18-0-Q S 1 7 6 0 3021 2:00 6114 21:23 0.376 Victory

Table D.10: DC Scenarios run with Minerva-5 (Part 2)


2-161-1-T S 0 10 4 0 1623 1:21 5794 10:00 0.124 Victory
2-18-0-Q S 1 13 4 0 1697 1:20 5256 7:11 0.099 Victory
2-18-0-Q S 1 12 3 13 6432 8:16 6690 22:42 3.434 Victory
2-161-1-T S 1 8 3 6 6379 6:23 6577 13:50 6.126 Victory
2-338-2-L S 0 11 2 0 2193 1:28 6239 4:55 0.051 Victory
2-18-0-Q S 1 12 3 3 6627 4:54 6729 10:22 3.216 Victory
2-161-1-T S 1 6 4 6 6218 12:47 6368 17:42 1.967 Victory
2-338-2-L S 0 11 6 0 3292 2:05 6116 5:26 0.071 Victory
2-18-0-Q S 1 11 5 4 6539 5:09 6599 8:19 3.167 Victory
2-161-1-T S 1 12 5 3 5546 3:46 6879 5:44 0.085 Victory
2-338-2-L S 0 30 10 3 5309 3:37 6835 24:18 0.813 Victory
2-161-1-T S 1 27 10 4 5878 7:45 6038 13:17 2.075 Victory
3-410-0-K S 0 5 8 0 970 0:47 4451 24:19 0.406 Victory
4-174-0-E S 0 2 2 5 6584 4:44 6632 7:07 2.979 Victory
4-220-0-E S 0 18 7 2 2422 1:45 5889 5:06 0.057 Victory
4-174-0-E S 0 11 4 5 6632 5:05 6727 8:56 2.432 Victory
3-319-0-Q S 0 4 8 19 6289 5:38 6342 8:27 3.189 Victory
4-220-0-E S 0 12 10 6 6228 8:06 6359 11:56 1.756 Victory
4-220-0-E S 0 11 4 6 5007 4:16 5526 7:35 0.383 Victory
2-126-1-C S 0 10 5 8 5495 7:19 5548 10:43 3.849 Victory
3-97-200-L S 0 13 7 6 5964 7:04 6011 9:58 3.723 Victory
2-238-1-A S 0 21 7 9 5297 6:58 5404 13:51 3.860 Victory
1-126-0-C S 0 16 6 4 4608 3:41 5685 6:51 0.176 Victory

4-220-0-E S 0 12 7 7 6026 5:50 6086 9:33 3.717 Victory
4-126-0-E S 0 15 6 6 5692 7:40 5851 18:47 4.195 Victory
4-220-0-E S 0 11 3 7 6242 7:23 6307 10:50 3.185 Victory
4-110-0-E S 0 20 8 9 5720 10:03 5820 14:58 2.950 Victory
4-110-0-E S 0 20 8 0 1975 1:22 5796 4:59 0.057 Victory
4-110-0-E S 0 0 0 0 1753 1:14 5493 4:37 0.054 Victory
4-174-0-E S 0 20 8 0 1470 1:05 4681 6:02 0.092 Victory
4-174-0-E S 0 20 8 7 6107 7:00 6371 12:46 1.311 Victory
4-220-0-E S 1 20 8 7 5622 8:21 5844 17:17 2.414 Victory
4-220-0-E S 0 20 8 11 6123 11:34 6221 16:42 3.143 Victory
4-220-0-E (a) S 0 20 8 11 5860 10:53 5934 14:49 3.189 Victory
4-110-0-E S 0 20 8 8 6043 8:46 6380 14:24 1.003 Victory
4-220-0-E S 0 20 8 9 6076 5:23 6153 9:33 3.247 Victory
4-110-0-E (e) S 0 20 8 6 6244 6:38 6349 11:45 2.924 Victory
4-110-0-E (f) S 0 20 8 6 6082 5:27 6176 9:54 2.840 Victory
4-220-0-E (c) S 0 20 8 6 6177 6:07 6264 10:40 3.138 Victory
4-174-0-E (a) S 0 20 8 9 6755 9:16 6844 13:24 2.787 Victory
4-174-0-E (b) S 0 20 8 9 6818 14:54 6951 21:05 2.789 Victory
4-110-0-E S 0 20 8 7 6069 6:40 6171 11:39 2.931 Victory
4-110-0-E (g) S 0 20 8 6 6257 5:57 6379 11:55 2.934 Victory

Table D.11: DC Scenarios run with Minerva-5 (Part 3)


4-220-0-E (d) S 0 20 8 6 6565 6:08 6649 10:35 3.179 Victory
4-174-0-E (c) S 0 20 8 9 5731 7:52 5969 13:04 1.311 Victory
4-174-0 S 0 20 8 10 6552 5:52 6647 10:22 2.842 Victory
4-220-0-E (e) S 0 20 8 6 6342 5:11 6433 10:02 3.198 Victory
4-110-0-E (h) S 0 20 8 6 6315 6:19 6592 11:17 1.076 Victory
2-116-2-Q S 1 20 9 5 6899 6:55 7004 12:45 3.333 Victory
3-164-2-Q S 1 20 8 10 6640 5:20 6812 18:25 4.564 Victory
1-300-0-C S 1 20 8 7 3896 8:31 3932 11:10 4.417 Victory
2-200-2-Q S 1 20 8 6 1511 6:14 1943 12:28 0.866 Victory
2-430-1-Q:2-414-0-Q:3-370-0-E S 1 16 0:S 1 12 7:S 0 11 7 12 15 20 6404 6:19 6623 20:57 4.009 Victory
3-370-0-E:3-78-0-M S 0 12 4:S 0 11 5 4 16 4549 4:20 4668 16:54 6.336 Victory
4-126-0-E S 0 11 5 5 5930 5:23 6219 25:00 4.073 Victory
2-238-1-A S 0 10 4 0 1545 1:14 4758 4:43 0.065 Victory
2-238-1-A S 0 10 5 6 6584 7:28 6649 10:52 3.138 Victory
4-126-0-E S 0 12 6 5 6250 5:16 6350 25:00 11.740 Victory
2-238-1-A S 0 16 6 1 2852 2:02 5614 4:51 0.061 Victory
2-238-1-A S 0 17 7 1 3961 2:38 6040 5:12 0.074 Victory
3-370-0-E:3-78-0-M S 0 13 7:S 0 10 6 6 17 9282 13:38 9402 24:02 5.200 Victory

2-238-1-A S 0 17 7 5 6899 7:54 6948 10:26 3.102 Victory
3-370-0-E:3-78-0-M S 0 21 8:S 0 11 6 4 17 6165 4:59 6255 15:59 7.333 Victory
4-126-0-E S 0 17 8 6 6440 6:17 6540 25:00 11.130 Victory
2-238-1-A S 0 19 8 5 6375 7:07 6433 10:11 3.172 Victory
3-370-0-E:2-238-1-A S 0 17 9:S 0 17 9 5 15 6441 5:21 6834 25:15 3.038 Victory
4-126-0-E S 0 20 8 5 5978 5:12 6015 8:56 6.054 Victory
3-370-0-E:3-78-0-M S 0 23 8:S 0 13 4 12 19 6599 5:23 6704 16:30 6.352 Victory
4-126-0-E S 0 21 11 6 6444 5:19 6640 22:28 5.250 Victory
2-238-1-A S 0 10 4 6 6161 8:15 6218 11:15 3.158 Victory
3-370-0-E:3-78-0-M S 0 23 8:S 0 7 5 8 17 6623 5:22 6713 15:35 6.811 Victory
2-238-1-A S 0 16 7 5 6118 7:20 6190 11:07 3.153 Victory
2-238-1-A S 0 8 7 3 6797 4:39 6849 7:27 3.231 Victory
4-126-0-E S 0 15 7 6 6416 5:19 6519 22:09 9.806 Victory
3-370-0-E:3-78-0-M S 0 15 6:S 0 7 2 5 13 6654 5:22 6855 22:04 4.985 Victory

Table D.12: DC Scenarios run with Minerva-5 (Part 4)


Scenario File Blasts Survived/Dead Victory/Non-victory
bak864308569.mdb 4-220-0,S 0 20 8,7 alive at 25 Non-victory
bak864310476.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864312168.mdb 4-220-0,S 0 20 8,6 dead at 25 Non-victory
bak864314079.mdb 4-174-0,S 0 20 8,6 alive at 25 Non-victory
bak864318871.mdb 4-110-0,S 0 20 8,6 dead at 25 Non-victory
bak864320388.mdb 4-110-0,S 0 20 8,6 dead at 25 Non-victory
bak864324150.mdb 4-220-0,S 0 20 8,6 dead at 25 Non-victory
bak864326332.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864328305.mdb 4-220-0,S 0 20 8,6 alive at 25 Non-victory
bak864329980.mdb 4-174-0,S 0 20 8,9 dead at 25 Non-victory
bak864332445.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864179599.mdb 2-161-1,S 1 23 10,6 alive at 25 Non-victory
bak864182054.mdb 2-18-0,S 1 21 7,4 dead at 25 Non-victory
bak864190883.mdb 2-18-0,S 1 14 4,5 dead at 25 Non-victory
bak864194488.mdb 2-18-0,S 1 9 5,10 alive at 25 Non-victory
bak864196058.mdb 2-161-1,S 0 13 5,6 alive at 25 Non-victory
bak864198851.mdb 3-310-2,S 0 12 6,2 alive at 25 Victory
bak864200347.mdb 2-161-1,S 1 18 7,6 alive at 25 Non-victory
bak864202648.mdb 2-18-0,S 0 16 6,7 alive at 25 Non-victory
bak864203873.mdb 2-161-1,S 1 16 6,7 alive at 25 Non-victory
bak864204896.mdb 2-338-2,S 0 14 5,0 alive at 25 Victory
bak864263215.mdb 2-18-0,S 1 13 5,6 dead at 25 Non-victory
bak864264743.mdb 2-161-1,S 1 11 4,7 alive at 25 Non-victory
bak864265604.mdb 2-338-2,S 0 14 5,0 alive at 25 Victory
bak864265787.mdb 2-18-0,S 1 24 6,0 alive at 25 Non-victory
bak864268422.mdb 2-18-0,S 1 11 5,3 dead at 25 Non-victory
bak864270042.mdb 1-18-0,S 1 25 11,6 dead at 25 Non-victory
bak864271843.mdb 2-161-1,S 1 7 3,-2 alive at 25 Non-victory
bak864277196.mdb 2-338-2,S 0 14 8,7 alive at 25 Non-victory
bak864284520.mdb 2-161-1,S 1 11 5,5 alive at 25 Non-victory
bak864285497.mdb 2-18-0,S 1 17 5,4 alive at 25 Victory
bak864287174.mdb 2-161-1,S 1 14 6,4 alive at 25 Non-victory
bak864287883.mdb 2-338-2,S 0 14 3,5 alive at 25 Non-victory
bak864289961.mdb 2-338-2,S 0 15 10,-2 alive at 25 Non-victory
bak864292205.mdb 2-18-0,S 1 7 6,-2 alive at 25 Victory
bak864350010.mdb 2-161-1,S 0 10 4,-1 alive at 25 Victory
bak864350477.mdb 2-18-0,S 1 13 4,0 alive at 25 Non-victory
bak864356981.mdb 2-18-0,S 1 12 3,13 alive at 25 Non-victory
bak864358549.mdb 2-161-1,S 1 8 3,6 alive at 25 Non-victory
bak864359782.mdb 2-338-2,S 0 11 2,-1 alive at 25 Victory
bak864361099.mdb 4-126-0,S 0 36 13,4 dead at 25 Non-victory
bak864361801.mdb 2-238-1,S 0 32 13,3 alive at 25 Non-victory
bak864365581.mdb 3-370-0,S 0 37 14,5 dead at 25 Non-victory
bak864366202.mdb 2-18-0,S 1 12 3,3 alive at 25 Non-victory
bak864367540.mdb 2-161-1,S 1 6 4,6 alive at 25 Non-victory
bak864371860.mdb 2-338-2,S 0 11 6,-1 alive at 25 Victory
bak864372817.mdb 2-18-0,S 1 11 5,4 alive at 25 Victory
bak864376223.mdb 2-161-1,S 1 12 5,3 alive at 25 Non-victory
bak864378634.mdb 2-161-1,S 1 27 10,4 alive at 25 Non-victory
bak864049245.mdb 2-300-100,S 0 6 2,7 dead at 25 Non-victory
bak864051197.mdb 4-174-0,S 0 8 2,6 alive at 25 Non-victory

Table D.13: DC Scenarios run by SWOS Students (Part 1)

bak864061471.mdb 4-174-0,S 0 12 5,6 alive at 25 Non-victory
bak864065034.mdb 3-220-100,S 0 18 7,5 alive at 25 Non-victory
bak864067299.mdb 4-174-0,S 0 11 3,4 alive at 25 Non-victory
bak864069077.mdb 4-426-0,S 0 4 4,5 dead at 25 Non-victory
bak864070868.mdb 3-410-0,S 0 5 8,0 dead at 25 Non-victory
bak864072427.mdb 4-174-0,S 0 2 2,5 alive at 25 Victory
bak864073307.mdb 4-220-0,S 0 18 7,2 alive at 25 Victory
bak864130614.mdb 4-174-0,S 0 11 4,5 alive at 25 Non-victory
bak864133499.mdb 3-319-0,S 0 4 8,19 alive at 25 Non-victory
bak864135218.mdb 4-220-0,S 0 12 10,6 alive at 25 Victory
bak864137154.mdb 4-220-0,S 0 11 4,6 alive at 25 Non-victory
bak864139521.mdb 2-126-1,S 0 10 5,8 alive at 25 Non-victory
bak864143333.mdb 3-97-200,S 0 13 7,6 alive at 25 Non-victory
bak864145612.mdb 2-238-1,S 0 21 7,9 alive at 25 Non-victory
bak864146805.mdb 1-126-0,S 0 16 6,4 alive at 25 Non-victory
bak864147559.mdb 4-220-0,S 0 12 7,7 alive at 25 Victory
bak864150998.mdb 4-126-0,S 0 15 6,6 alive at 25 Non-victory
bak864152650.mdb 4-220-0,S 0 11 3,7 alive at 25 Non-victory
bak864154021.mdb 4-110-0,S 0 20 8,9 alive at 25 Non-victory
bak864154199.mdb 4-110-0,S 0 20 8,-1 alive at 25 Non-victory
bak864156190.mdb 4-174-0,S 0 20 8,7 dead at 25 Non-victory
bak864158375.mdb 4-220-0,S 0 20 8,11 alive at 25 Non-victory
bak864216349.mdb 4-110-0,S 0 20 8,8 alive at 25 Non-victory
bak864218550.mdb 4-220-0,S 0 20 8,9 alive at 25 Non-victory
bak864224172.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864226002.mdb 4-220-0,S 0 20 8,6 alive at 25 Non-victory
bak864232255.mdb 4-174-0,S 0 20 8,9 dead at 25 Non-victory
bak864233985.mdb 4-174-0,S 0 20 8,9 dead at 25 Non-victory
bak864235825.mdb 4-110-0,S 0 20 8,7 alive at 25 Non-victory
bak864240162.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864241974.mdb 4-220-0,S 0 20 8,6 alive at 25 Non-victory
bak864243418.mdb 4-174-0,S 0 20 8,9 dead at 25 Non-victory
bak864245011.mdb 4-174-0,S 0 20 8,10 dead at 25 Non-victory
bak864302407.mdb 4-220-0,S 0 20 8,6 dead at 25 Non-victory
bak864304937.mdb 4-110-0,S 0 20 8,6 alive at 25 Non-victory
bak864159995.mdb 4-220-0,S 0 20 8,8:4-220-4,S 0 20 8,9:-1-158-0,S 0 20 8,12:-1-118-1,S 0 20 8,12 dead at 25 Non-victory
bak864046256.mdb 2-166-2,S 1 20 9,5 alive at 25 Non-victory
bak864047368.mdb 3-164-2,S 1 20 8,10 dead at 25 Non-victory
bak864048177.mdb 1-300-0,S 1 20 8,7 dead at 25 Non-victory
bak864052611.mdb 2-200-2,S 1 20 8,6 alive at 25 Non-victory
bak864055679.mdb 4-10016-0,S 1 24 7,6 alive at 25 Non-victory
bak864058294.mdb 2-430-1,S 1 16 0,12:2-414-0,S 1 12 7,15:3-370-0,S 0 11 7,20 alive at 25 Non-victory
bak864060789.mdb 3-370-0,S 0 12 4,6;3-78-0,S 0 11 5,16 alive at 25 Non-victory
bak864062875.mdb 4-126-0,S 0 11 5,5 alive at 25 Victory
bak864063482.mdb 2-238-1,S 0 10 4,-2 alive at 25 Non-victory
bak864064731.mdb 2-238-1,S 0 10 5,6 alive at 25 Non-victory
bak864069356.mdb 4-126-0,S 0 12 6,5 alive at 25 Non-victory
bak864070334.mdb 2-238-1,S 0 16 6,1 alive at 25 Victory
bak864071545.mdb 3-370-0,S 0 14 6,5:3-78-0,S 0 7 4,14 dead at 25 Victory
bak864075466.mdb 2-238-1,S 0 17 7,1 alive at 25 Victory
bak864133839.mdb 3-370-0,S 0 18 6,2:3-78-0,S 0 16 7,9 dead at 25 Non-victory
bak864135633.mdb 3-370-0,S 0 13 7,6:3-78-0,S 0 10 6,17 alive at 25 Victory

Table D.14: DC Scenarios run by SWOS Students (Part 2)

bak864138705.mdb 2-238-1,S 0 17 7,5 alive at 25 Non-victory
bak864139851.mdb 3-370-0,S 0 21 8,4:3-78-0,S 0 11 6,17 dead at 25 Victory
bak864141432.mdb 4-126-0,S 0 17 8,6 alive at 25 Non-victory
bak864150180.mdb 2-238-1,S 0 19 8,5 alive at 25 Non-victory
bak864154486.mdb 3-370-0,S 0 17 9,5:2-238-1,S 0 17 9,15 alive at 25 Victory
bak864155914.mdb 4-126-0,S 0 20 8,5 alive at 25 Non-victory
bak864157447.mdb 3-370-0,S 0 23 8,12:3-78-0,S 0 13 4,19 dead at 25 Victory
bak864159709.mdb 4-126-0,S 0 21 11,6 alive at 25 Victory
bak864161657.mdb 2-238-1,S 0 10 4,6 alive at 25 Non-victory
bak864219191.mdb 3-370-0,S 0 23 8,8:3-78-0,S 0 7 5,17 dead at 25 Victory
bak864222962.mdb 2-238-1,S 0 16 7,5 alive at 25 Non-victory
bak864229658.mdb 4-126-0,S 0 15 7,6 alive at 25 Non-victory
bak864237528.mdb 4-126-0,S 0 11 6,5:4-300-0,S 0 17 5,12 alive at 25 Victory
bak864240796.mdb 2-238-1,S 0 10 5,0 alive at 25 Victory
bak864241964.mdb 4-126-0,S 0 32 12,6 alive at 25 Victory
bak864246405.mdb 2-238-1,S 0 24 10,7:3-338-0,S 0 50 18,15:-3-142-0,S 3 100 40,17 dead at 25 Non-victory
bak864247571.mdb 3-370-0,S 0 17 8,3 dead at 25 Non-victory
bak864249116.mdb 4-126-0,S 0 15 7,7 alive at 25 Non-victory
bak864306364.mdb 2-238-1,S 0 12 7,1 alive at 25 Victory
bak864307962.mdb 3-370-0,S 0 15 6,5:3-78-0,S 0 7 2,13 alive at 25 Victory
bak864309904.mdb 4-126-0,S 0 15 7,5:4-300-0,S 0 12 6,14 alive at 25 Non-victory
bak864312598.mdb 2-238-1,S 0 20 7,6 alive at 25 Non-victory
bak864313387.mdb 3-370-0,S 0 15 7,4:3-78-0,S 0 7 3,11 dead at 25 Non-victory
bak864316650.mdb 4-254-0,S 0 19 5,-2 alive at 25 Non-victory
bak864322076.mdb 4-254-0,S 0 20 8,6:1-289-2,S 0 14 6,6 alive at 25 Non-victory
bak864322551.mdb 4-126-0,S 0 15 7,3 alive at 25 Non-victory
bak864324268.mdb 4-126-0,S 0 16 6,5 alive at 25 Non-victory
bak864325188.mdb 3-370-0,S 0 13 7,5:3-78-0,S 0 7 3,13 dead at 25 Non-victory
bak864331991.mdb 2-238-1,S 0 7 4,5 alive at 25 Non-victory
bak864333532.mdb 3-370-0,S 0 13 5,3 alive at 25 Non-victory
bak864335119.mdb 4-126-0,S 0 16 7,5:4-300-0,S 0 12 5,15 alive at 25 Non-victory
bak865447643.mdb 2-238-1,S 0 11 6,6 alive at 25 Non-victory
bak863838148.mdb 4-174-0,S 1 40 16,1 alive at 25 Non-victory
bak863838468.mdb 4-174-0,S 1 10 8,-1 alive at 25 Non-victory
bak863838968.mdb 3-370-0,S 1 12 8,-1 alive at 25 Non-victory
bak863839796.mdb 3-370-0,S 3 85 16,-1 alive at 25 Non-victory
bak863847098.mdb 3-370-0,S 3 100 40,1 alive at 25 Non-victory
bak863862392.mdb 4-126-0,S 1 26 12,-2 alive at 25 Non-victory
bak863866243.mdb 4-254-0,S 1 20 40,2 alive at 25 Non-victory
bak863867645.mdb 4-254-0,S 1 20 40,2:1-220-3,S 3 100 40,7 dead at 25 Non-victory
bak863926517.mdb 4-174-0,S 3 100 40,3:3-370-0,S 3 98 40,9 dead at 25 Non-victory
bak863933648.mdb 4-174-0,S 3 100 40,-1 alive at 25 Non-victory
bak863934960.mdb 4-254-0,S 3 99 39,4 dead at 25 Non-victory
bak863936496.mdb 4-254-0,S 3 100 40,-2 dead at 25 Non-victory
bak863990232.mdb 4-254-0,S 3 100 40,2:4-174-0,S 3 100 40,4 dead at 25 Non-victory
bak863996906.mdb 2-200-2,S 0 32 7,2 alive at 25 Non-victory
bak863998797.mdb 1-206-3,S 0 28 13,-3 alive at 25 Non-victory
bak864164285.mdb -1-149-2,S 0 40 16,-1 alive at 25 Non-victory

Table D.15: DC Scenarios run by SWOS Students (Part 3)

Bibliography
[1] J.E.Hopcroft, J.D.Ullman. Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, 1979.
[2] Naval Ships' Technical Manual. S9086-S3-STM-010. Chapter 555 "Shipboard Firefighting". 1988.
[3] USS John Paul Jones (DDG 53) Instruction 3541.1B. Repair Party Manual and Main Space Firefighting Doctrine. 1994.
[4] H. Rogers, Jr. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.
[5] J.L.Peterson. Petri Net Theory and the Modeling of Systems. Prentice-Hall, Inc.,
1981.
[6] P.H.Winston. Artificial Intelligence. Addison-Wesley Publishing Company, 1984.
[7] Z.H.Najem. A Hierarchical Representation of Control Knowledge For A Heuristic Classification Shell. Ph.D. Thesis. UIUC. 1993.
[8] D.L.Tate. Development of a Tactical Decision Aid for Shipboard Damage Control.
NRL/FR/5580-96-9837.
[9] The Damage Control Automation for Reduced Manning (DC-ARM) Program. Call for proposals at http://chemdiv-www.nrl.navy.mil/6180/6180ext/dcarm.html-ssi.

[10] Y.T.Park, S.Donoho, D.C.Wilkins. Recursive Heuristic Classification. International Journal of Expert Systems. 1993.
[11] Y.T.Park, K.W.Tan, D.C.Wilkins. Minerva 3.0: A Knowledge-based Expert System Shell with Declarative Representation and Flexible Control. UIUC, Department of Computer Science. 1991.
[12] D.C.Wilkins, J.A.Sniezek. Multimedia Scenario Generation and Critiquing: The
Damage Control Domain. ONR Proposal. 1995.
[13] B.G.Buchanan, E.H.Shortliffe (eds.). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley. 1984.
[14] W.J.Clancey, R.Letsinger. NEOMYCIN: Reconfiguring a rule-based expert system for application to teaching. In W.J.Clancey, E.H.Shortliffe (eds.) Readings in Medical Artificial Intelligence: The First Decade, pp. 361-381. Addison-Wesley. 1984.
[15] Guardian Project at http://www-ksl.stanford.edu/projects/guardian/index.html.

[16] J.E.Larsson, B.Hayes-Roth, D.Gaba. Guardian: Final Evaluation. Knowledge Systems Lab, Stanford University. TechReport KSL-96-25. 1996.
[17] B.G.Silverman. Critiquing Human Error: A Knowledge Based Human-Computer
Collaborative Approach, London: Academic Press, 1992.
[18] Y.T.Park. Blackboard Scheduler Control Knowledge for Heuristic Classification: Representation and Inference. Ph.D. Thesis, Department of Computer Science, UIUC. Also TechReport UIUCDCS-R-93-1788. 1993.
[19] S.K.Donoho. Similarity-based learning for recursive heuristic classification. M.S. Thesis, Department of Computer Science, UIUC. 1993.

[20] V.Bulitko, D.C.Wilkins. Minerva: A Blackboard Expert System For Real-Time Problem-Solving and Critiquing. KBS Technical Report UIUC-BI-KBS-98-003, University of Illinois at Urbana-Champaign, 1998.
[21] A.Barr, P.R.Cohen, E.A.Feigenbaum (eds.). The Handbook of Artificial Intelligence. Volume IV, Chapter XVI by H.P.Nii. Addison-Wesley. 1989.
[22] S.Haykin. Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, 1994.
[23] E.A.Feigenbaum. The Art of Artificial Intelligence: I. Themes and Case Studies of Knowledge Engineering. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1014-1029. Cambridge, MA. 1977.
[24] S.Ramachandran. Temporal Structures in Expert Critiquing Systems, Tech. Report,
UIUC-BI-KBS-96-008, University of Illinois at Urbana-Champaign, 1996.
[25] J.W.Shavlik, R.J.Mooney, G.G.Towell. Symbolic and Neural Learning Algorithms: An Experimental Comparison. In B.G.Buchanan, D.C.Wilkins (eds.) Readings in Knowledge Acquisition and Learning, Morgan Kaufmann, 1993.
[26] J.R.Quinlan. Induction of Decision Trees. In B.G.Buchanan, D.C.Wilkins (eds.) Readings in Knowledge Acquisition and Learning, Morgan Kaufmann, 1993.
[27] T.G.Dietterich. Limitations on Inductive Learning. In B.G.Buchanan, D.C.Wilkins (eds.) Readings in Knowledge Acquisition and Learning, Morgan Kaufmann, 1993.
[28] D.C.Wilkins, J.A.Sniezek. Intelligent Scenario Generation and Critiquing for Crisis
Decision Making: The Damage Control Domain. A 2-year renewal proposal for ONR
grant. KBS. University of Illinois at Urbana-Champaign. 1997.
[29] See5: An Informal Tutorial. http://www.rulequest.com/see5-win.htm. 1998.
[30] W.F.Clocksin, C.S.Mellish. Programming in Prolog. 3rd ed. Springer-Verlag. 1987.

[31] D.C.Wilkins, J.A.Sniezek. An Approach to Automated Situation Awareness for Ship
Damage Control. KBS Technical Report UIUC-BI-KBS-97-012. University of Illinois
at Urbana-Champaign, 1997.
[32] D.C.Wilkins, J.A.Sniezek. Automated Situation Awareness for Ship Damage Control.
NRL Program Review. KBS. University of Illinois at Urbana-Champaign. 1997.
[33] O.J.Mengshoel, D.C.Wilkins. Recognition and Critiquing of Erroneous Agent Actions. In M.Tambe and P.Gmytrasiewicz (eds.), AAAI-96 Workshop on Agent Modeling, pp. 61-68, AAAI Press, Portland, OR, 1996.
[34] T.H.Cormen, C.E.Leiserson, R.L.Rivest. Introduction to Algorithms. The MIT Press.
1990.
[35] R.Lindsay, B.G.Buchanan, E.A.Feigenbaum, J.Lederberg. Applications of artificial intelligence for organic chemistry: The DENDRAL project. New York: McGraw-Hill, 1980.
[36] L.D.Erman, F.Hayes-Roth, V.R.Lesser, D.R.Reddy. The HEARSAY-II speech understanding system: Integrating knowledge to resolve uncertainty. ACM Computing Surveys 12:213-253, 1980.
[37] E.Grois, W.H.Hsu, M.Voloshin, D.C.Wilkins. Bayesian Network Models for Generation of Crisis Management Training Scenarios. (in press). 1998.

Vita
Vadim V. Bulitko was born on October 6, 1974 in Odessa, Ukraine. He attended
Odessa State University for four years and received a Bachelor of Science degree in
Mathematics with highest honors in July 1995, with minors in Computer Science and
Psychology. He interned at Automated Vision Systems, Inc. in 1994.
In fall 1995 he came to the University of Illinois as an exchange student in Computer
Science. A semester later he joined the Knowledge Based Systems Group at the Beckman
Institute. In fall 1996 he began his graduate studies in Artificial Intelligence with Dr.
David C. Wilkins as his research advisor. His research, presented in this thesis, has
focused on using blackboard architectures for real-time control and critiquing.
Vadim received his Master of Science degree in Computer Science in May 1998. His
further endeavors are unknown.

