1) Use situation calculus (i.e. the result function, and fluents) to
   write down the axioms for the monkey-and-bananas example. The monkey
   needs to get the bananas, which it can do by climbing onto a box after
   moving it into position below the bananas, which are too high to
   reach otherwise. In the initial state the box is in the room but not
   under the bananas, and the monkey does not have the bananas. Actions
   are:
a) grab - gets the bananas if the monkey is on the box and the box is
under the bananas
b) climb - puts the monkey on the box if the monkey is near the box
c) move(item, position) - moves item into position if the item is
a box and the monkey is not on it.
Now, you need to prove that, for the initial situation S where the monkey
is not on the box and does not have the bananas, the action sequence
(move, climb, grab) works. (Note on form: the exercise uses the "holds"
predicate, an alternative not used in the book, so the answer is given in
the book's terms. The book adds one argument to each fluent denoting the
situation, and we will do likewise.) That is, prove:
has-bananas(result(grab, result(climb,
result(move(box, under-bananas), S))))
Answer:
Note - we did not have a predicate for the monkey's location - unless on the box!
Let us assume that if the monkey is not on the box it is near the box - by
no means an obvious (or correct!) assumption.
Initial state:
1. not on-box(S0)
2. not has-bananas(S0)
3. not box-at(under-bananas, S0)
4. box-at(in-room, S0) ; (irrelevant fact)
0. box(box)
Now, add the CNF versions of the above axioms (variables have ? in front):
5. on-box(?S1) or on-box(result(climb, ?S1))
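Before the proof, the effect axioms can be sanity-checked with a small simulation. This Python sketch is our own encoding (the dict-of-fluents representation is an assumption, not part of the exercise): result(a, s) applies action a's effect axiom to situation s, and we run the plan from S0.

```python
# Minimal sketch of the monkey-and-bananas axioms; the dict-of-fluents
# encoding is our assumption, not part of the exercise.

def result(action, s):
    s = dict(s)  # situations are values; result builds a new situation
    if action[0] == "move" and action[1] == "box" and not s["on-box"]:
        # move(item, position): item must be a box and the monkey not on it
        s["box-at"] = action[2]
    elif action == ("climb",) and not s["on-box"]:
        # climb: a monkey near the box (here: not on it) ends up on the box
        s["on-box"] = True
    elif action == ("grab",) and s["on-box"] and s["box-at"] == "under-bananas":
        # grab: on the box with the box under the bananas => has the bananas
        s["has-bananas"] = True
    return s

S0 = {"on-box": False, "has-bananas": False, "box-at": "in-room"}
s = S0
for a in [("move", "box", "under-bananas"), ("climb",), ("grab",)]:
    s = result(a, s)
print(s["has-bananas"])  # → True
```

The plan (move, climb, grab) indeed reaches a situation where has-bananas holds.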
Proof is as follows:
2) Write down the axioms and facts for the example below in FOPC:
   The taxonomic hierarchy for animals contains dogs, which in turn
   contains German shepherds and chihuahuas (which are disjoint). All
   dogs are carnivorous. German shepherds are large. All large dogs
   can be guard dogs, unless they are old. Fido is an old German shepherd.
Property axioms:
forall (X) dog(X) => carnivorous(X)
forall (X) german-shepherd(X) => large(X)
forall (X) (large(X) and dog(X) and (not old(X))) => can-be-guard-dog(X)
Table is:
animal 1.2
can-be-guard-dog 7.4
2) Fido is large.
Resolve b with 6, ?X5=Fido, to get:
9: large(Fido)
5) Fido is carnivorous.
Resolve 8 with 5, ?X4=Fido to get:
10: carnivorous(Fido)
e) Can you prove all the above using only either forward or
backward chaining?
This depends on how the rules are encoded. If the direction of the => is
as in the original, we can get all the above results except 4, for which we
would need german-shepherd(?X) => not chihuahua(?X).
If the direction is not consistent, we may not be able to prove some of
the claims above. Obviously, for claim 3 we still cannot prove anything!
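The chaining behavior described above can be sketched in Python. The encoding below is our own assumption (including the german-shepherd => dog taxonomy link from the hierarchy, and treating "not-old" as a plain token rather than real negation):

```python
# Hedged sketch: naive forward chaining over the exercise's Horn rules.
# Rule bodies are sets of unary predicates applied to the same individual.

rules = [
    ({"german-shepherd"}, "dog"),        # taxonomy: German shepherds are dogs
    ({"german-shepherd"}, "large"),      # German shepherds are large
    ({"dog"}, "carnivorous"),            # dog(X) => carnivorous(X)
    ({"large", "dog", "not-old"}, "can-be-guard-dog"),
]

facts = {("Fido", "german-shepherd"), ("Fido", "old")}

changed = True
while changed:
    changed = False
    for body, head in rules:
        for (x, _) in list(facts):
            if all((x, p) in facts for p in body) and (x, head) not in facts:
                facts.add((x, head))
                changed = True

print(("Fido", "large") in facts)             # → True
print(("Fido", "carnivorous") in facts)       # → True
print(("Fido", "can-be-guard-dog") in facts)  # → False (Fido is old)
```

As in the discussion above, large(Fido) and carnivorous(Fido) follow by pure forward chaining, while the guard-dog rule never fires because its not-old condition is not satisfied.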
variable   parents
-------------------------------
A          none
E          none
B          A
C          A, E
D          A
F          B, C
{A} and all supersets of {A} are cutsets. Also, {B} and {C} are cutsets.
d) Suppose that P(A) = 0.9, P(E) = 0.3, P(B|A) = 0.1, P(B|~A) = 0.8.
Find P(A|E)
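Given the structure above, the only path between A and E (A -> C <- E) is blocked at the collider C, so with C unobserved A and E are marginally independent and P(A|E) = P(A) = 0.9. The P(B|A) values supplied are what Bayes' rule would need for P(A|B) instead; as a cross-check of those numbers (a sketch, using only the stated values):

```python
# A and E are d-separated (collider at C), so P(A|E) = P(A) = 0.9.
# The given P(B|A) values support the Bayes-rule computation of P(A|B):
pA, pB_A, pB_notA = 0.9, 0.1, 0.8
pA_B = pB_A * pA / (pB_A * pA + pB_notA * (1 - pA))
print(round(pA_B, 4))  # → 0.5294
```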
5) You need to construct a 3-input neural network that outputs 1 just
   when exactly one of its inputs is 1, and 0 otherwise (assume inputs
   are in {0, 1}).
a) Can this be done without hidden units?
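For (a): no. "Exactly one of three" is not linearly separable: summing the constraints w_i + b > 0 for the three single-1 inputs gives w1+w2+w3 > -3b, while summing w_i + w_j + b < 0 for the three two-1 inputs gives w1+w2+w3 < -3b/2, which is contradictory when b < 0 as the all-zero input requires. With one hidden layer it is easy; a hedged sketch (the two hidden units and their thresholds are our own construction, not from the exercise):

```python
# "Exactly one" as a two-layer threshold network:
# h1 fires when at least one input is 1, h2 when at least two are 1,
# and the output fires iff h1 and not h2.
from itertools import product

def step(z):
    return 1 if z >= 0 else 0

def exactly_one(x1, x2, x3):
    s = x1 + x2 + x3
    h1 = step(s - 0.5)           # at least one input is 1
    h2 = step(s - 1.5)           # at least two inputs are 1
    return step(h1 - h2 - 0.5)   # 1 iff h1 = 1 and h2 = 0

for x in product([0, 1], repeat=3):
    print(x, exactly_one(*x))
```

Checking all eight inputs confirms the network outputs 1 exactly on (1,0,0), (0,1,0), and (0,0,1).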
6) a) Using the perceptron learning rule (the "delta" rule), show the steps
of learning 3-input function AND with a single perceptron
and a step activation function with threshold 1. Initially all
weights are 1, and the learning rate is 0.4.
Note that we do not need to apply the algorithm, as the order of choices
was already (not necessarily correctly) decided for us. The resulting tree is:
As we can only prune nodes that are immediate parents of leaf nodes, the
only candidates are the nodes denoted by the paths (A=F) and (A=T, B=1).
At either of these nodes, the amount of information gain PER EXAMPLE is
the same, i.e. -(1/3 log (1/3) + 2/3 log (2/3)), but the former node covers
more examples, so its average contribution to information gain for a random
pattern is higher. So, if forced to prune, we should prune the branching at
(A=T, B=1), and in that case just answer by majority and decide "small".
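For reference, the per-example quantity above, -(1/3 log2(1/3) + 2/3 log2(2/3)), is the entropy of a 1-vs-2 class split and evaluates to about 0.918 bits:

```python
# Entropy of a 1/3 vs 2/3 split, the per-example quantity cited above.
from math import log2

h = -(1/3 * log2(1/3) + 2/3 * log2(2/3))
print(round(h, 3))  # → 0.918
```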
The optimal tree has only 3 internal nodes and 4 leaf nodes, branching
on C initially:
C = T: (3 examples)
A = F: (1 example) decide "small"
A = T: (2 examples) decide "large"
C = F: (6 examples)
B = 1: (4 examples) decide "small"
B = 2: (2 examples) decide "medium"