
David Keil

CSCI 300 Artificial Intelligence

Framingham State University

9/13

Study questions on artificial intelligence


Introduction and background concepts
T1: Cognition and computation
T2: State-space search
T3: Knowledge representation and rule-based inference
T4: Uncertainty and probabilistic reasoning
T5: Supervised learning and natural language processing
T6: Reinforcement learning and adaptation
T7: Distributed AI and multi-agent interaction
T8: Future prospects and philosophical considerations
Summary (multiple topics)


Multiple-choice questions on introduction


1. The course material associates AI most closely with (a) logic; (b) perceptions; (c) hard computational problems; (d) knowledge; (e) algorithms
2. The course materials point to a distinction between (a) good and bad AI; (b) elegant and simplistic algorithms; (c) toy and real-world problems; (d) logic and inference; (e) none of these
3. Real-world problems are distinguished from ____ ones (a) imaginary; (b) algorithmic; (c) interactive; (d) toy; (e) none of these
4. AI in the 1980s is best associated with (a) real brains; (b) the physical symbol system hypothesis; (c) expert systems; (d) rational agents; (e) none of these
5. The early history of AI is best associated with (a) real brains; (b) the physical symbol system hypothesis; (c) expert systems; (d) rational agents; (e) none of these
6. AI in the 21st century is best associated with (a) real brains; (b) the physical symbol system hypothesis; (c) expert systems; (d) rational agents; (e) none of these

7. The instructor associates AI most closely with (a) logic; (b) feelings; (c) rational adaptive computational behavior; (d) knowledge; (e) algorithms
8. Intelligence has been described here as (a) very fast
information processing; (b) the firing of neurons;
(c) remembering correct answers; (d) storing information;
(e) making wise choices about what to do next
9. This course will most emphasize (a) easy problems;
(b) expert systems; (c) robotics; (d) interaction; (e) none of
these
10. The behavior that interests us especially is (a) algorithmic;
(b) fast; (c) based on facts; (d) adaptive; (e) human
11. Early AI research explored (a) artificial neurons;
(b) simulation of an entire brain; (c) autonomous agents;
(d) reinforcement learning; (e) multi-agent systems
12. AI after 1969 explored (a) expert systems; (b) simulation of
an entire brain; (c) autonomous agents; (d) reinforcement
learning; (e) multi-agent systems

Multiple-choice questions on mathematics and computer-science background


1. Sets, functions, and logic
1. ___ denotes (a) set membership; (b) union; (c) AND; (d) a set; (e) negation
2. ___ denotes (a) set membership; (b) union; (c) AND; (d) a set; (e) negation
3. ___ denotes (a) set membership; (b) union; (c) AND; (d) a relation between sets; (e) negation
4. When A and B are sets, (A × B) is (a) a set of ordered pairs; (b) an arithmetic expression; (c) a sequence of values; (d) all of these; (e) none of these
5. (B × C) is (a) a pair of sets; (b) a relation; (c) an arithmetic product; (d) a sequence; (e) a concatenation
6. {1,2,3} ___ {2,4,5} = (a) {}; (b) {1,2}; (c) 2; (d) {2}; (e) {1,2,3,4,5}
7. A relation on set A is (a) an element of A; (b) a subset of A; (c) an element of A × A; (d) a subset of A × A; (e) none of these
8. ___ denotes (a) set membership; (b) union; (c) conjunction; (d) a relation between sets; (e) negation
9. ___ denotes (a) set membership; (b) union; (c) AND; (d) a relation between sets; (e) negation
10. ___ denotes (a) set membership; (b) union; (c) AND; (d) a relation between sets; (e) logical negation
11. ___ denotes (a) set membership; (b) union; (c) AND; (d) OR; (e) implication
12. ___ denotes (a) set membership; (b) union; (c) AND; (d) OR; (e) implication
13. A computer program or subprogram may compute a mathematical (a) expression; (b) function; (c) proof; (d) theorem; (e) none of these
14. A function f: {1,2,3} → {0,1} is a set of (a) integers; (b) ordered pairs; (c) sets; (d) relations; (e) none of these
15. An operator often corresponds to a(n) (a) interface; (b) function; (c) user; (d) program; (e) none of these

16. A mathematical function is a(n) (a) subprogram; (b) algorithm; (c) mapping between sets; (d) two-way relationship between ideas; (e) none of these
17. A recursive function definition (a) uses a while loop;
(b) lists all possibilities; (c) contains a call to the function
itself; (d) is impossible; (e) is inefficient
18. A logarithmic function is the inverse of an ___ function
(a) addition; (b) exponential; (c) reciprocal;
(d) multiplication; (e) factorial
19. The inverse of an exponential function is a (a) difference;
(b) reciprocal; (c) division; (d) logarithm; (e) power
20. A NOT gate is a(n) (a) software component; (b) hardware
component; (c) design tool; (d) algorithm; (e) Java operator
21. A NOT gate has how many inputs? (a) 0; (b) 1; (c) 2; (d) 3;
(e) a variable number
22. The OR gate (a) is a peripheral; (b) contains a register;
(c) yields a 0 if both its inputs are 1; (d) yields a 0 unless
both its inputs are 1; (e) produces a 1 if either of its inputs
is 1
23. The AND gate (a) is a peripheral; (b) contains a register;
(c) yields a 0 if both its inputs are 1; (d) yields a 0 unless
both its inputs are 1; (e) yields a 1 if either of its inputs is 1
24. A one-input circuit that outputs a 1 on input of 0 and a 0 on
input of 1 is (a) NOT; (b) OR; (c) AND; (d) MAYBE;
(e) XOR
25. NOT (0 AND 1) = (a) 1; (b) 0; (c) NOT(1); (d) 1 AND 0;
(e) 0 OR 0
26. (1 OR NOT 0) = (a) 1; (b) 0; (c) NOT(1); (d) 1 AND 0;
(e) 0 OR 0
27. (1 OR NOT 1) = (a) 1; (b) 0; (c) NOT(0); (d) 1 AND 1;
(e) 0 OR 1
28. Predicate logic is a(n) (a) algorithm; (b) language of
assertions; (c) language of arithmetic expressions; (d) set of
symbols; (e) set of operations
29. (∀x) x < x + 1 is (a) a numeric expression; (b) false; (c) true; (d) an assignment; (e) none of these

30. (∀x) x = x + 1 is (a) a numeric expression; (b) false; (c) true; (d) an assignment; (e) none of these
31. Predicate calculus extends propositional logic with
(a) inference; (b) negation; (c) implication; (d) variables;
(e) quantifiers
32. A logic is (a) a language; (b) a rule; (c) a set of truth values;
(d) a set of numeric values; (e) none of these
33. Logic manipulates (a) strings; (b) numbers; (c) truth values;
(d) programs; (e) objects
34. If p = false, q = false, and r = true, then which is true? (a) p ___ (q ___ r); (b) p ___ (q ___ r); (c) (p ___ q) ___ r; (d) p ___ (q ___ r); (e) p ___ (q ___ r)
35. An if-then assertion whose first clause is true is (a) never
true; (b) sometimes true; (c) always true; (d) meaningless;
(e) none of these
36. A rigorous demonstration of the validity of an assertion is
called a(n) (a) proof; (b) argument; (c) deduction;
(d) contradiction; (e) induction
37. Induction is a(n) (a) algorithm; (b) program; (c) proof;
(d) proof method; (e) definition
38. Sets A and B are disjoint iff A ∩ B = (a) A; (b) B; (c) U; (d) ∅; (e) none of these
39. If {A1, A2, …} partitions A, then A1, A2, … (a) are the same; (b) are disjoint; (c) are in a subset relation to each other; (d) have a non-null intersection; (e) none of these
40. To maximize a function, find (a) the largest parameter;
(b) the largest value returned; (c) the parameter with which
the function will return the maximum value; (d) a function
that serves to return the largest values; (e) a series of
parameters for which the function has increasingly large
return values
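
The gate-evaluation questions (25-27) can be checked by direct computation; a minimal Python sketch (the 0/1 helper functions are our own, not course notation):

    # 0/1 Boolean evaluation, as in questions 25-27 above.
    def NOT(a): return 1 - a
    def AND(a, b): return a & b
    def OR(a, b): return a | b

    print(NOT(AND(0, 1)))  # NOT (0 AND 1) evaluates to 1
    print(OR(1, NOT(0)))   # (1 OR NOT 0) evaluates to 1
    print(OR(1, NOT(1)))   # (1 OR NOT 1) evaluates to 1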

2. Algorithms and interaction


1. With ______, input may depend partly on previous output (a) interaction; (b) any computation; (c) algorithms; (d) all programs; (e) none of these
2. When a problem is complex, the complexity can often be conquered in the design stage by (a) brute force; (b) documentation; (c) modular decomposition; (d) input/output; (e) logic gates
3. Pseudocode (a) has a precise syntax; (b) is a false solution; (c) is a low-level language; (d) is an informal notation; (e) none of these
4. The branch is a (a) hardware item; (b) control structure; (c) data structure; (d) flowchart rectangle; (e) module
5. Which is not a recommended tool for program design? (a) flowcharts; (b) pseudocode; (c) object-oriented analysis; (d) hierarchy charts; (e) use of reserved words
6. Which is not a control structure? (a) sequence; (b) branch; (c) loop; (d) a file; (e) all are control structures
7. Modular decomposition of processes is most closely associated with which kind of design? (a) web-site formatting; (b) spreadsheet; (c) database; (d) algorithm; (e) none of these
8. Which is not a feature of algorithms? (a) precision; (b) finiteness of time; (c) step-by-step sequencing; (d) limited set of possible inputs; (e) definiteness of result
9. Which of these is a control structure? (a) hyperlink; (b) Excel worksheet; (c) database table; (d) loop; (e) register


10. Algorithms (a) are by definition efficient; (b) take finite time; (c) are languages; (d) are a kind of program; (e) none of these
11. Design tools include (a) output; (b) flowcharts; (c) registers;
(d) queries; (e) none of these
12. Control structures are used in (a) design; (b) output;
(c) input; (d) formatting; (e) none of these
13. The loop is a (a) language; (b) control structure;
(c) data structure; (d) program; (e) none of these
14. A sentinel is normally used in a (a) sequence; (b) branch;
(c) loop; (d) specification; (e) module hierarchy
15. Tracing is a method for (a) coding; (b) analysis;
(c) specification; (d) debugging; (e) modularity
16. A counted loop uses (a) jumps; (b) a sentinel; (c) multiway
branching; (d) floating-point data; (e) an index
17. A short sequence or array is accessed (a) sequentially;
(b) by query; (c) by index; (d) by random guess;
(e) manually

3. Arrangements of data
1. A language is a (a) string; (b) number; (c) set of numbers; (d) sequence of strings; (e) set of strings
2. For array A, |A| is (a) the absolute value of the sum of A's elements; (b) the absolute value of A; (c) the smallest element of A; (d) the number of elements in A; (e) none of these
3. A string is a (a) collection; (b) set; (c) tree; (d) sequence;
(e) list
4. A collection typically consists of (a) many items of different
types; (b) just one item; (c) many objects of the same class;
(d) an array of characters
5. Arrays are structures that are (a) linked; (b) branching;
(c) linear; (d) dynamically allocated; (e) none of these
6. A tree is a kind of (a) list; (b) array; (c) graph; (d) all of
these; (e) none of these
7. The height of a binary tree is (a) the number of nodes it
contains; (b) the maximum path length between two leaf
nodes; (c) the number of leaf nodes; (d) the maximum path
length from the root to a leaf node; (e) infinite
8. A graph is (a) a set of integers; (b) a set of vertices; (c) a set
of vertices and a set of edges; (d) a set of edges; (e) a set of
paths
9. A series of edges that connect two vertices is called
(a) a path; (b) a cycle; (c) a connection; (d) a tree;
(e) a collection
10. To design a communications network that joins all nodes without excessive lines, we must find a (a) set of paths; (b) connectivity number; (c) minimal spanning tree; (d) expression tree; (e) search tree

4. Probability and statistics


1. A possibility tree diagrams (a) the likelihood of one outcome; (b) a series of events, each with n possible outcomes; (c) one event with n outcomes; (d) a linear series of events and outcomes; (e) none of these
2. A series of k events, each with n possible outcomes, has ____ paths through its possibility tree (a) 1; (b) k; (c) n; (d) n^k; (e) k^k

3. A four-character PIN, with 36 possibilities for each character, has ____ possible values (a) 4; (b) 36; (c) 4^36; (d) 36^4; (e) 36!
4. For finite disjoint sets A and B, |A ∪ B| = (a) |A| + |B|; (b) max{|A|, |B|}; (c) |A ∩ B|; (d) |A| × |B|; (e) |A| + |B| − |A ∩ B|
5. The Pigeonhole Principle states that if |A| > |B| then (a) f : A → B is bijective; (b) f : A → B is surjective; (c) f : A → B is injective; (d) f : A → B is not injective; (e) f : A → B is not surjective
6. The assertion that, if |A| > |B|, then no injection from A to B exists is called (a) inconsistency; (b) incompleteness; (c) uncountability; (d) undecidability; (e) the Pigeonhole Principle
7. The possible orderings of elements of a set are (a) truth values; (b) numbers; (c) sets; (d) combinations; (e) permutations
8. The possible unordered selections from a set are (a) truth values; (b) numbers; (c) sets; (d) combinations; (e) permutations

9. Permutations are ___ of a set (a) the elements; (b) the possible orderings of elements; (c) the sizes of subsets; (d) the subsets; (e) ways to describe elements
10. There are ____ permutations for n objects taken k at a time (a) n; (b) n!; (c) (n − k)! / n!; (d) n! / (n − k)!; (e) n! / ((n − k)! k!)
11. ___ are ordered (a) permutations; (b) combinations; (c) sets; (d) subsets; (e) none of these
12. Combinations are ___ of a set (a) the elements; (b) the possible orderings of elements; (c) the sizes of subsets; (d) the subsets; (e) ways to describe elements
13. Combinations are expressed as (a) C(n, k); (b) n^k; (c) n!; (d) n! / k!; (e) k^k
14. There are ____ combinations for n objects taken k at a time (a) n; (b) n!; (c) (n − k)! / n!; (d) n! / (n − k)!; (e) n! / ((n − k)! k!)
15. ___ are unordered (a) permutations; (b) combinations; (c) sequences; (d) hierarchies; (e) none of these
16. C(n, k) is also known as (a) permutations; (b) binomial coefficients; (c) Stirling numbers; (d) factorials; (e) a multiset


Background Terminology (Introduction)


abstraction
action
adaptation
agent
algorithm
control theory
data structure
emergence
exhaustive search

function
generalization
good-old-fashioned AI
heuristic
inference
intelligence
interaction
knowledge
learning

natural language
NP-hard
partially observable
percept
physical symbol system hypothesis
probability
reasoning
satisfice

search
state-space search
symbol
theorem
toy problem
Turing test
voice recognition

Problems to assess background concepts


All these background concepts, covered in
Statistics, Precalculus, or Computer
Science I, are considered core outcomes of
this course
0.1a Explain basic precalculus concepts
1. What is a function?
2. What is the composition of two functions?
3. When is x the maximum value of function f?
4. What is the equation form of a linear function?
5. What is the equation form of a quadratic function?
6. What is the equation form of a polynomial function?
7. What is the equation form of an exponential function?
8. Roughly describe the growth of a family of rabbits over n years, if each pair of rabbits over one year old produces a new pair of rabbits once a year and if rabbits live a long time.
9. Describe the principle of mathematical induction.
10. Explain the factorial function.
11. Describe the binomial theorem, or the binomial coefficient C(n, k).
12. What is a sequence?
13. Distinguish an arithmetic sequence from a geometric sequence.
14. Compare the graph of a logarithmic function to that of a linear function.
15. Compare the graph of an exponential function to that of a linear function.
16. When is x the minimum value of function f?
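
Question 8 is the classic Fibonacci rabbit model; a minimal sketch of the recurrence, under the usual assumptions (pairs mature at one year, none die):

    # Pairs over one year old each produce one new pair per year.
    def pairs(n):
        young, mature = 1, 0  # year 0: one newborn pair
        for _ in range(n):
            young, mature = mature, mature + young
        return young + mature

    print([pairs(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]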

0.1b Write the truth table for a propositional-logic formula or logic circuit
Write the truth tables for the following:
1. (p ___ q) ___ r
2. p ___ (q ___ r)
3. (p ___ q) ___ r
4. p ___ (q ___ r)
5. p ___ (q ___ r)
6. p ___ (q ___ r)
7. p ___ (q ___ r)
8. (p ___ q) ___ r
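
The connectives in formulas 1-8 were lost in this text version (marked ___ above), but the technique is the same for any formula; a sketch using (p ∧ q) → r as a stand-in example:

    from itertools import product

    # Truth table for the stand-in formula (p AND q) -> r;
    # A -> B is equivalent to (not A) or B.
    def formula(p, q, r):
        return (not (p and q)) or r

    print("p q r | value")
    for p, q, r in product([0, 1], repeat=3):
        print(p, q, r, "|", int(formula(p, q, r)))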

0.2 Design a looping algorithm

Write an algorithm, in pseudocode, that has an array of numbers as a parameter and that computes
1. Disjunction (OR), assuming all elements are 0 or 1 (false or true)
2. Conjunction (AND), assuming all elements are 0 or 1 (false or true)
3. Sum of all elements
4. Number of elements, starting with first, that are all the same
5. Number of zeroes
6. Smallest element
7. Subscript of the leftmost 1
8. Subscript of largest element
9. True if all values are the same, otherwise false
10. True if all values are in ascending order, otherwise false
11. Length of longest ascending sequence that starts with first element
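
As one worked illustration (item 6, smallest element), a sketch in Python standing in for the requested pseudocode:

    # Smallest element of a nonempty array, found with one loop.
    def smallest(a):
        best = a[0]
        for x in a[1:]:
            if x < best:
                best = x
        return best

    print(smallest([4, 2, 7, 1, 9]))  # prints 1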


0.3a Find a path in a graph


1. (For students who have taken Data Structures) Describe an algorithm to search a graph for a path or for a shortest path.


(2-3) Find the shortest path from start (S) to finish (F), where shortest means the minimum sum of the weights, and a path is a series of circled vertices.
[The weighted-graph figures for problems 2 and 3 do not survive in this text-only copy.]
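
The graph figures are lost here, but a standard way to answer problems 2-3 is Dijkstra's algorithm; a minimal sketch over a made-up graph (the graph literal is hypothetical, not the original figure):

    import heapq

    # Least-total-weight path cost from start to finish; the graph
    # maps each vertex to a list of (neighbor, weight) pairs.
    def dijkstra(graph, start, finish):
        frontier = [(0, start)]           # (cost so far, vertex)
        visited = set()
        while frontier:
            cost, v = heapq.heappop(frontier)
            if v == finish:
                return cost
            if v in visited:
                continue
            visited.add(v)
            for w, weight in graph.get(v, []):
                heapq.heappush(frontier, (cost + weight, w))
        return None                       # no path exists

    g = {"S": [("A", 2), ("B", 5)], "A": [("B", 1), ("F", 6)],
         "B": [("F", 2)]}
    print(dijkstra(g, "S", "F"))          # 5, via S-A-B-F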

0.3b Explain the relation between the logarithm function and the heights of trees
1. Describe the height of a tree as a function of the number of vertices in the tree.
2. Explain the relation between the logarithm function and the heights of trees.
3. What is the inverse of the power function, and how is this related to the relationship between the size of a tree (in vertices) and its height?
4. Describe the relationship between trees and the logarithm function.
5. Describe the relationship between trees and the power function.
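
A worked check of the relationship these questions probe, assuming the full-binary-tree case: a tree of height h has n = 2^(h+1) − 1 vertices, so h ≈ log2 n:

    import math

    # Height grows as the base-2 logarithm of the number of vertices.
    for h in range(5):
        n = 2 ** (h + 1) - 1
        print("height", h, "vertices", n, "log2 ~", math.floor(math.log2(n)))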

0.4 Explain basic notions of combinatorics
1. What are permutations?
2. What are combinations?
3. How many orderings are there of n items? Explain.
4. What is the multiplication rule in combinatorics?
5. How many ways are there to put n items in order, taken k at a time?
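
The counting formulas behind these questions, checked against Python's standard library (math.perm and math.comb, available since Python 3.8):

    import math

    n, k = 5, 3
    print(math.factorial(n))  # orderings of n items: n! = 120
    print(math.perm(n, k))    # ordered, k at a time: n!/(n-k)! = 60
    print(math.comb(n, k))    # unordered, k at a time: n!/((n-k)! k!) = 10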

Pre-quiz open-ended questions


1. Coming into this course, what do you think are essential features of intelligence?
2. Coming into this course, do you have a definition of intelligence, or of AI?


Multiple-choice questions on Topic 1: Cognition


1. Cognitive science and mind
1. Cognitive science is (a) an independent discipline; (b) interdisciplinary; (c) a subdiscipline of AI; (d) a subdiscipline of psychology; (e) none of these
2. The ___-based approach to cognitive science is concerned with the form of reasoning (a) logic; (b) rule; (c) concept; (d) analogy; (e) connections
3. The ___-based approach to cognitive science is concerned with parallel distributed processing as in the brain (a) logic; (b) rule; (c) concept; (d) analogy; (e) connections
4. The ___-based approach to cognitive science is concerned with condition-action pairs (a) logic; (b) rule; (c) concept; (d) analogy; (e) connections
5. The ___-based approach to cognitive science is concerned with typical entities or situations (a) logic; (b) rule; (c) concept; (d) analogy; (e) connections
6. The ___-based approach to cognitive science is concerned with guidance derived from past situations (a) logic; (b) rule; (c) concept; (d) analogy; (e) connections
7. In rule-based systems, rules represent (a) the cortex; (b) sensations; (c) actions; (d) short-term memory; (e) long-term memory
8. Behavior is (a) human action; (b) any action; (c) any output; (d) goal-driven action; (e) collaboration
9. Emotions enable (a) sensory input; (b) inference; (c) rule application; (d) focus and action; (e) none of these
10. Scientific evidence of consciousness is associated with (a) sensory input; (b) reflex actions; (c) spiritual development; (d) emotions; (e) brain processes
11. The study of thinking via cooperation is (a) distributed cognition; (b) introspection; (c) psychology; (d) ethics; (e) none of these
12. The origin of consciousness is said to be in (a) bacterial organisms; (b) plants' preference for sunlight; (c) animals' focused alertness to danger or opportunity; (d) humans' family life; (e) classroom experiences
13. A theory of consciousness (a) is proven mathematically; (b) is derived clinically; (c) postulates representational structures and operations on them; (d) should ignore planning; (e) does not apply to decision making
14. Intentionality is described as (a) pursuit of goals or understanding of meaning; (b) a logical assertion; (c) presence of a brain state; (d) an algorithm; (e) a set of variable values
15. The cognitivist hypothesis claims that (a) only humans may think; (b) machines think; (c) thought consists of physical computation on symbols; (d) thought is apart from the physical world; (e) programs can obtain high IQ scores

2. The computational-representational understanding of mind

1. The theory of cognitive science presented asserts that thinking can best be understood in terms of (a) data processing; (b) knowledge representation; (c) inference; (d) stimulus-response; (e) representational structures and computational procedures
2. In computational systems, physical states are (a) representations; (b) determined by mental states; (c) random; (d) static; (e) none of these
3. A representation is a structure that (a) has self-evident meaning; (b) has no meaning; (c) stands for something else; (d) acts on other structures; (e) none of these
4. Representations of typical entities or situations are (a) processes; (b) programs; (c) concepts; (d) analogies; (e) inferences
5. Analogic reasoning (a) makes deductive inferences; (b) is rule-based; (c) adapts thinking about familiar situations to new ones; (d) applies definitions; (e) none of these
6. Elements of CRUM include (a) chemistry; (b) biology; (c) logic; (d) calculus; (e) set theory
7. CRUM asserts that (a) the mind represents computations; (b) the mind's computational capacity operates on representations of the world; (c) the mind is a representation of a computer; (d) a computer is a representation of a mind; (e) the mind is a computer
8. Concepts are (a) logical formulas; (b) data variables; (c) representations of objects or situations; (d) unrelated to each other; (e) assertions in predicate logic

3. The rational-agent approach to AI

1. Rationality is associated with (a) interaction; (b) common sense; (c) humans; (d) inference and expected reward; (e) none of these
2. Rationality is associated most closely here with
(a) humanness; (b) maximizing reward; (c) proofs;
(d) creativity; (e) none of these
3. Any agent receives ___ from the environment
(a) knowledge; (b) actions; (c) percepts; (d) instructions;
(e) none of these
4. Any agent performs (a) knowledge; (b) actions; (c) percepts;
(d) instructions; (e) none of these
5. The easiest environment below is (a) stochastic, dynamic,
fully observable; (b) deterministic, static, fully observable;
(c) stochastic, static, partially observable; (d) stochastic,
dynamic, partially observable; (e) none of these
6. The most difficult environment below is (a) stochastic,
dynamic, fully observable; (b) deterministic, static, fully
observable; (c) stochastic, static, partially observable;
(d) stochastic, dynamic, partially observable; (e) none of
these
7. A reflex agent (a) learns from its environment; (b) reasons
based on past percepts; (c) acts only on current percept;
(d) maintains a model of its environment; (e) none of these
8. Rationality maximizes (a) correctness of inference;
(b) immediate reward; (c) information; (d) actual long-term
reward; (e) expected long-term reward
9. A rational agent (a) makes a correct deduction; (b) gathers
maximum information; (c) acts optimally; (d) acts as well as
possible; (e) none of these
10. The most difficult environment below is (a) deterministic
and fully observable; (b) episodic and static; (c) dynamic
and partially observable; (d) discrete and single-agent;
(e) fully observable and stochastic


11. Good Old-Fashioned AI sees intelligence as related to (a) interaction; (b) agent behavior; (c) emergence; (d) a set of logical propositions about the world; (e) data retrieval
12. An autonomous agent is (a) situated in an environment;
(b) given instructions by a master; (c) a physical object;
(d) an algorithm; (e) none of these
13. Newell and Simon hypothesized that a necessary and
sufficient condition for intelligence is (a) emotion;
(b) rationality; (c) adaptation; (d) symbol manipulation;
(e) embodiment
14. The Turing test measures intelligence as (a) a machine's ability to detect a human's presence; (b) a human's ability to detect a machine's presence; (c) a human's inability to detect a machine's presence; (d) use of a Turing machine; (e) an IQ test for machines
15. A well-known way to define machine intelligence is
(a) using computability theory; (b) using predicate logic;
(c) the Turing Test; (d) using complexity theory;
(e) as processing speed comparable to that of the brain

4. Models of computation
1. Computation always involves (a) silicon; (b) deterministic algorithms; (c) processing of symbols; (d) numbers; (e) none of these
2. Connectionism is (a) silicon inspired; (b) parallel distributed processing; (c) symbolic; (d) inference based; (e) abductive
3. Connectionist models of computation are based on (a) the bit; (b) the neuron; (c) transition systems; (d) pseudocode; (e) the Ethernet protocol
4. A transition system without loops is of equivalent computational power to a (a) DFA; (b) Turing machine; (c) pushdown automaton; (d) Java program; (e) logic circuit
5. Look-up tables are equivalent to (a) logic circuits; (b) automata; (c) Java programs; (d) flowcharts; (e) pseudocode
6. Logic circuits are equivalent to (a) look-up tables; (b) automata; (c) Java programs; (d) flowcharts; (e) pseudocode
7. Any predicate on a finite set may be computed by (a) a short program; (b) a small flowchart; (c) a loop; (d) a logic circuit; (e) none of these
8. A transition system is defined by (a) a set of states and a relation on them; (b) a set of points and a mapping among them; (c) a set of symbols and rules for sequencing them; (d) a set of strings; (e) none of these
9. The ____ is a widely used model of computation (a) PC; (b) Macintosh; (c) operating system; (d) transition system; (e) principle of mathematical induction
10. If a finite automaton terminates in an accepting state, then the input string (a) belongs to the FA's language; (b) is non-null; (c) is finite; (d) contains a repetition of symbols; (e) none of these
11. A DFA is a(n) (a) function; (b) transition system; (c) RAM; (d) Turing machine; (e) pushdown automaton
12. A pushdown automaton may have (a) more states than a finite automaton; (b) random-access memory; (c) an infinite alphabet; (d) a stack; (e) faster transitions
13. Algorithms (a) compute functions; (b) provide services; (c) accomplish missions in multi-agent systems; (d) may execute indefinitely; (e) none of these


14. A feature of algorithmic computation is (a) alternation of input and output; (b) processing before input; (c) output before processing; (d) input, then processing, then output; (e) none of these
15. The Turing machine model is said to capture
(a) regular languages; (b) interaction; (c) efficient
computation; (d) algorithmic computation; (e) all of these
16. A Turing machine (a) lacks an alphabet; (b) has tape instead
of states; (c) can compute any mathematical function;
(d) stores data on a tape; (e) none of these
17. The Church-Turing thesis associates the Turing machine
with (a) regular languages; (b) parsing; (c) lexical analysis;
(d) algorithms; (e) interaction
18. The Church-Turing Thesis refers to (a) a formalization of an
intuitive notion; (b) a theorem; (c) provable; (d) disprovable;
(e) a paper written at Harvard
19. A Turing machine has ____ memory (a) random-access;
(b) limited; (c) unbounded; (d) stack; (e) queue
20. A state-transition system with tape is a (a) finite transducer; (b) DFA; (c) NFA; (d) PDA; (e) Turing machine
21. Unlike Turing machines, random-access machines have
(a) tape; (b) stack; (c) queue; (d) addressable storage;
(e) hard disk
22. In the brain, a concept is (a) a logical formula; (b) a reflex;
(c) a synapse; (d) a neuron; (e) a pattern of neuron
activations
23. Interfaces between neurons are called (a) axons;
(b) dendrites; (c) synapses; (d) potentials; (e) receptors
24. A neuron fires when (a) it receives an impulse; (b) it is not
inhibited; (c) it receives impulses beyond a certain threshold;
(d) a receptor molecule binds to a transmitter; (e) none of
these
25. Memory consists of (a) creation of neurons; (b) creation of
synapses; (c) creation of axons; (d) changes in weights of
synapses; (e) storage of electrical potential
26. The cerebral cortex supports (a) vision; (b) reflex;
(c) executive function; (d) fight-flight responses; (e) none of
these
27. The amygdala supports (a) vision; (b) reflex; (c) executive
function; (d) fight-flight responses; (e) none of these
28. Personality is in the (a) amygdala; (b) cerebellum; (c) cortex;
(d) spinal cord; (e) none of these
29. Brain processing uses (a) chemicals; (b) electricity;
(c) neurons; (d) neurons, glial cells, and chemicals;
(e) modus ponens
30. A feature of interactive computation is (a) alternation of
input and output; (b) processing before input; (c) output
before processing; (d) input, then processing, then output;
(e) none of these
31. Interactive systems (a) compute functions;
(b) provide services; (c) accomplish multi-agent missions;
(d) execute only finitely; (e) none of these
32. I/O in interactive systems is (a) static; (b) dynamic;
(c) finite; (d) constrained; (e) none of these
33. Interaction is distinguished from algorithmic computation by
the presence of (a) finite input; (b) persistent state; (c) input;
(d) processing; (e) none of these
34. A service is characteristic of (a) an algorithm;
(b) an interactive process; (c) a multi-agent system;
(d) a parallel system; (e) none of these


Terminology (Topic 1, Cognition and Computation)


algorithm
analogic reasoning
analogy
brain
case-based reasoning
Church-Turing thesis
cognition

cognitive science
computational complexity
computational model
CRUM
consciousness
distributed cognition

dynamic system
emotion
episodic environment
finite automaton
goal-based agent
image
inheritance

intractable
mind
neuron
parallel distributed processing
rational agent
reflex agent

representation
representational structure
social cognition
transition system
Turing machine
uncomputable
undecidable

Problems to assess subtopic outcomes


Topic objective: Explain what cognition is, with reference to biological, computational, and agent models

1.1 Describe some concepts or problems in cognitive science
1. Relate cognition to computation.
2. What ideas does cognitive science draw from fields other than computer science?
3. Contrast the logic-based and rule-based approaches to cognitive science.
4. Contrast the logic-based and image-based approaches to cognitive science.
5. Contrast the logic-based and connectionist approaches to cognitive science.
6. How did human consciousness evolve?
7. Can cognition occur outside a single human brain? Explain.
8. Are emotions part of cognition?
9. What issues should a theory of consciousness address?
10. Distinguish cognition, consciousness, and mind.
11. What is a concept?

1.2 Describe the computational-representational understanding of mind (core)
1. What is the computational-representational understanding of mind?
2. In the mind, what is representation and what is computation?
3. In a computer and in a brain, what is representation and what is computation?
4. What features of software could satisfy the definition of mind under the computational-representational understanding of mind?
5. Is the computational-representational understanding of mind a good model for intelligence? Defend your view, referring to the major features that define CRUM.
6. Describe the computational-representational understanding of mind and any defects or especially useful features you see in it.
7. Under the computational-representational understanding of mind, what does the mind's computational capacity operate on, and how?
8. Give some limitations of the computational-representational understanding of mind.
9. What is a computational system?
10. What is a representation?
11. Describe the cognitivist hypothesis.
12. Can symbols exist in brains? Explain.

1.3a Distinguish classes of agent environments
1. Describe some dimensions of classification of environments.
2. Distinguish fully observable from partially observable environments, saying which is harder. Give examples.
3. Distinguish deterministic from stochastic environments, saying which is harder.
4. Distinguish episodic from sequential environments, saying which is harder.
5. Distinguish static from dynamic environments, saying which is harder.
6. Distinguish discrete from continuous environments, saying which is harder.
7. Describe some classes of environment that are more difficult than static, fully observable, episodic, deterministic ones, and give features agents would need for more difficult environments.

1.3b Describe a reflex agent within the rational-agent model of AI (core)
1. Describe a reflex agent.
2. What is the rational-agent approach to AI, and what is one kind of rational agent?
3. In what environments can a reflex agent operate effectively?
4. What kind of agent can operate in a fully observable, deterministic, static, discrete environment, and why?
5. Is a car today a rational agent, and why or why not? What would a rational agent require to get from your home to FSC? Use the definition provided.
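
As a study aid for 1.3b, a minimal reflex-agent sketch: the action depends only on the current percept, with no memory of past percepts. The thermostat rules are an illustrative assumption, not from the course notes:

    # A reflex agent maps the current percept directly to an action.
    def reflex_agent(percept):
        if percept["temp"] < 18:
            return "heat on"
        if percept["temp"] > 24:
            return "heat off"
        return "do nothing"

    for t in (15, 21, 30):
        print(t, "->", reflex_agent({"temp": t}))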

1.4 Contrast connectionist and automata-based models of computation
1. How does the brain support intelligence?
2. Describe some significant distinctions between a brain and a PC that are noted in AI research.
3. Describe the connectionist model of computation, with examples.
4. Describe the transition-system model of computation, contrasting it to the neural model.
5. What is an algorithm?
6. Distinguish algorithms from interaction.
7. Describe two algorithmic models of computation.
8. Name five significant distinctions between a brain and a PC.
9. Compare the connectionist and automata-based models of computation.
10. Compare a logic circuit; an algorithm; and a neural network.
11. How do neurons contribute to computation?


Multiple-choice questions on Topic 2: State-space search


1. Constraint and optimization problems

1. Constraint-satisfaction problems aim at (a) constraints that rule out all but a few cases; (b) constraints that rule out only a few cases; (c) constraints that involve several variables; (d) optimization of values; (e) none of these
2. Satisfiability is a(n) (a) optimization problem; (b) constraint satisfaction problem; (c) algorithm; (d) heuristic; (e) interactive problem
3. Local search (a) solves any problem; (b) is always effective; (c) is never effective; (d) reduces difficulty of some constraint satisfaction problems; (e) is more thorough than global search
4. Function optimization searches for (a) a function; (b) parameter values; (c) a return value; (d) an algorithm; (e) a time analysis
5. A problem of finding values of several variables such that a certain condition holds is called (a) graph search; (b) tree traversal; (c) constraint satisfaction; (d) sorting; (e) optimization
6. Constraint satisfaction is a problem of (a) finding values of a set of variables such that a certain condition holds; (b) SAT; (c) finding a maximal or minimal value; (d) optimizing a path; (e) none of these
7. Finding the minimum value that satisfies a certain constraint is a(n) _____ problem (a) constraint; (b) optimization; (c) state-space search; (d) behavior-of-program; (e) interactive computation
8. An optimization problem finds a maximum or minimum value that satisfies a certain (a) formula in predicate logic; (b) constraint; (c) time specification; (d) user; (e) protocol
9. A problem of finding a set of values that yields the highest or lowest return value when used as parameters to a function is (a) constraint satisfaction; (b) optimization; (c) maximization; (d) minimization; (e) central tendency
10. A structure that shows possible outcomes of all steps of a computation is a (a) flowchart; (b) module hierarchy; (c) binary tree; (d) decision tree; (e) none of these
11. A problem of finding values of several variables such that a certain condition holds is called (a) graph search; (b) tree traversal; (c) constraint satisfaction; (d) sorting; (e) optimization
12. Constraint satisfaction is a problem of (a) finding values of a set of variables such that a certain condition holds; (b) SAT; (c) finding a maximal or minimal value; (d) optimizing a path; (e) none of these
13. Bounded rationality is associated with (a) optimality; (b) constraint satisfaction; (c) well ordering; (d) satisficing; (e) tractability
14. An optimization problem finds a maximum or minimum value that satisfies a certain (a) formula in predicate logic; (b) constraint; (c) time specification; (d) user; (e) protocol
15. A problem of finding a set of values that yields the highest or lowest return value when used as parameters to a function is (a) constraint satisfaction; (b) optimization; (c) maximization; (d) minimization; (e) central tendency
16. For _______ problems, sometimes global maxima/minima differ from local ones (a) optimization; (b) O(n); (c) BST search; (d) sorting; (e) none of these

2. Goal-driven search

1. In fully accessible environments, current state is identified by (a) actions; (b) percepts; (c) deduction; (d) chance; (e) none of these
2. A goal is a(n) (a) path; (b) action; (c) percept; (d) set of states; (e) number
3. A state space is a set of (a) three-dimensional coordinates; (b) locations in the physical universe; (c) governmental entities; (d) actual arrangements of values; (e) possible arrangements of values
4. A set of possible arrangements of values is a(n) (a) state space; (b) path; (c) combination; (d) random variable; (e) none of these
5. Goal-driven state-space search arrives at (a) goals from facts; (b) facts from goals; (c) rules from facts; (d) search strategies; (e) heuristics
6. Data-driven state-space search arrives at (a) goals from facts; (b) facts from goals; (c) rules from facts; (d) search strategies; (e) heuristics
7. Games and puzzles are simple examples of (a) embodied intelligence; (b) state-space search; (c) inference; (d) agent interaction; (e) adaptation
8. Utility-based agents seek mainly (a) reward; (b) truth; (c) points; (d) to be helpful; (e) knowledge
9. The breadth-first search (a) uses a queue; (b) uses a stack; (c) searches an array; (d) searches a tree; (e) none of these
10. The depth-first search (a) uses a queue; (b) uses a stack; (c) searches an array; (d) searches a tree; (e) none of these
11. Exploration may be useful for environments that are (a) fully observable; (b) partially observable; (c) episodic; (d) one-state; (e) not observable
12. Optimizing search compares (a) costs of paths; (b) costs of information; (c) reward values; (d) costs of algorithm design; (e) none of these
13. One well-known strategy for state-space search is called (a) measure and estimate; (b) generate and test; (c) try and abandon; (d) forward and back; (e) design and revise
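
Questions 9-10 above turn on a single data-structure choice; a sketch showing that one search loop is breadth-first with a queue frontier and depth-first with a stack (the graph literal is hypothetical):

    from collections import deque

    # One search loop; the frontier discipline decides the strategy:
    # queue (FIFO) gives breadth-first, stack (LIFO) gives depth-first.
    def search(graph, start, use_queue):
        frontier, visited, order = deque([start]), {start}, []
        while frontier:
            v = frontier.popleft() if use_queue else frontier.pop()
            order.append(v)
            for w in graph.get(v, []):
                if w not in visited:
                    visited.add(w)
                    frontier.append(w)
        return order

    g = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
    print(search(g, "A", use_queue=True))   # BFS order: A B C D
    print(search(g, "A", use_queue=False))  # DFS order: A C D B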

3. Exhaustive search and intractability


1. Analysis (a) computes a function; (b) separates something into parts; (c) puts components together; (d) writes a program; (e) is the entire problem-solving process
2. Best case for an algorithm (a) takes the same time for all data; (b) assumes the data that the algorithm handles in the greatest time; (c) assumes the data that the algorithm handles in the least time; (d) is the expected time considering all possible input data; (e) none of these
3. Worst case for an algorithm (a) takes the same time for all data; (b) assumes the data that the algorithm handles in the greatest time; (c) assumes the data that the algorithm handles in the least time; (d) is the expected time considering all possible input data; (e) none of these
4. Average case for an algorithm (a) takes the same time for all data; (b) assumes the data that the algorithm handles in the greatest time; (c) assumes the data that the algorithm handles in the least time; (d) is the expected time considering all possible input data; (e) none of these

5. A loop nested to two levels, each with roughly n iterations, has running time (a) O(1); (b) O(n); (c) O(n^2); (d) O(n lg n); (e) O(2^n)
6. A loop nested to n levels has running time (a) O(1); (b) O(n); (c) O(n^2); (d) O(n lg n); (e) O(2^n)
7. The running time function of an algorithm is determined by (a) the number of operations in a sequence structure; (b) the number of branches in a selection structure; (c) the time of the slowest of a series of loops; (d) the data; (e) none of these
8. AI problems tend to involve (a) large numbers; (b) combinatorial explosion of running time; (c) easy choices once understood; (d) straightforward inference; (e) none of these
9. In a game tree, vertices are (a) cities; (b) players; (c) moves; (d) board positions; (e) pieces
10. Combinatorial explosion is (a) sudden increase in difficulty of an environment; (b) a cause of collapse of an agent's effectiveness; (c) exponential size of state space; (d) failure of search; (e) none of these
11. Exponential time is closely associated with (a) tractability; (b) combinatorial explosion; (c) constraint problems; (d) the sorting problem; (e) interaction
12. Hard computational problems are identified mathematically with (a) undecidability; (b) algorithms; (c) functions; (d) intractability and exponential time; (e) none of these
13. Hard computational problems are defined in the theory of computational complexity as ones (a) for which no heuristics exist; (b) for which no algorithms exist; (c) that have no solutions; (d) that humans don't try solving; (e) that are believed to require exponential time
14. Problems for which no polynomial-time solutions are known are called (a) undecidable; (b) intractable; (c) NP; (d) optimization; (e) none of these
15. The set of intractable problems is associated with (a) polynomial time; (b) divide-and-conquer algorithms; (c) greedy algorithms; (d) the O(n^2) problem; (e) NP-completeness and exponential time
16. P is the set of (a) algorithms that execute in O(n) time; (b) problems decidable in O(n^k) time for some constant k; (c) problems not decidable in O(n^k) time; (d) intractable problems; (e) exponential-time problems
17. Intractable problems (a) are undecidable; (b) lack acceptable approximate versions; (c) are decidable but take an unacceptably long time; (d) lack solutions; (e) none of these
18. The Triangle Inequality (a) helps find low-cost paths; (b) helps find maximum-cost paths; (c) compares three quantities; (d) compares triangles; (e) compares sides of a triangle

19. Cost(A, C) ≤ Cost(A, B) + Cost(B, C) is (a) false in all cases; (b) the Triangle Inequality when A, B, C are states; (c) a formula used in manufacturing; (d) a theorem in probability; (e) a hypothesis

4. Heuristics
1. In game theory, a dominant strategy (a) always wins; (b) is better than others regardless of opponent strategy; (c) is a Nash equilibrium; (d) is better than random guessing; (e) none of these
2. Zero-sum games (a) are unwinnable; (b) are for one player; (c) are only win-lose; (d) may involve scores of zero; (e) none of these
3. Heuristics are (a) axioms; (b) inference rules; (c) rules that guide state-space search; (d) results of inference; (e) none of these
4. Heuristics must often be used in (a) logical inference; (b) state-space search; (c) backtracking; (d) robotic sensing; (e) abstraction
5. A rule of thumb that guides state-space search is a(n) (a) axiom; (b) inference rule; (c) heuristic; (d) theorem; (e) adaptation
6. Best-first search uses a(n) (a) inference rule; (b) heuristic; (c) form of knowledge representation; (d) protocol; (e) none of these
7. Minimax is a(n) (a) inference rule; (b) heuristic; (c) form of knowledge representation; (d) protocol; (e) none of these
8. Hill climbing is a(n) (a) problem; (b) heuristic strategy; (c) best-first search; (d) expression of consciousness; (e) form of representation
9. A drawback of hill climbing is (a) very long running time; (b) undecidability; (c) tendency to become stuck at local maxima; (d) the absence of goals; (e) none of these
10. Admissibility, informedness, and monotonicity are features of all (a) algorithms; (b) heuristics; (c) formulas in predicate logic; (d) problems; (e) robots
11. Minimax is a (a) problem; (b) algorithm; (c) game; (d) neural-network design; (e) form of consciousness
12. The assumption that a game opponent will make the best possible move is made in (a) depth-first search; (b) breadth-first search; (c) all two-player games; (d) the minimax algorithm; (e) none of these
13. The principle of rationality states that an agent will select an action if (a) the action is reasonable; (b) deduction points to the action; (c) the action derives from a valid logical inference; (d) the agent has knowledge that the action will lead to a goal; (e) the agent associates the action with the goal
14. The notion of bounded rationality suggests satisficing as an alternative goal to (a) optimality; (b) utility; (c) goal states; (d) constraint satisfaction; (e) inference
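
As a study aid for questions 8-9, a minimal hill-climbing sketch; the objective function and integer neighborhood are illustrative assumptions:

    # Hill climbing over the integers: keep moving to the better
    # neighbor; may halt at a local maximum (question 9 above).
    def f(x):
        return -(x - 3) ** 2 + 10 if x < 6 else -(x - 8) ** 2 + 5

    def hill_climb(x):
        while True:
            best = max((x - 1, x + 1), key=f)
            if f(best) <= f(x):
                return x
            x = best

    print(hill_climb(0))  # reaches 3, the global maximum (f = 10)
    print(hill_climb(9))  # stuck at 8, a local maximum (f = 5)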


Topic 2 Terminology (State-space search)


backtracking
backward chaining
belief state
best-first search
breadth-first search

constraint satisfaction problem
data-driven search
depth-first search
depth-limited search
evaluation function

exploration
forward chaining
goal test
goal test function
goal-driven search
heuristic

hill climbing
local search
minimax
path
state
state transition

state-space search
tree search
triangle inequality
uninformed search

Problems to assess topic 2 outcomes


Topic objective: Explain how heuristics offer ways to pursue goals in exponentially large search spaces

2.1 Explain what constraint and optimization problems are
1. Distinguish optimization problems from constraint-satisfaction problems, giving examples.

(2-12) Explain why the problem is of the constraint or the optimization type. Give a corresponding problem of the opposite type.
2. Finding the pair of (x, y) pairs in a two-dimensional plane that is closest together of all the pairs
3. Finding a set of weighted items in a collection, such that the
weights add up to less than x
4. Searching an array of location objects (x, y) for a location in
which y > b, for some constant b
5. Finding a path on a road map from city a to city b
6. Telling whether a set of (x, y) pairs contains two pairs whose
distance apart is less than six units
7. Telling whether a map can be colored with fewer than four
colors
8. Finding the smallest element of an array
9. Finding the shortest path on a road map from city a to city b
10. Finding the maximum-valued set of weighted items in a
collection, such that the weights add up to less than x
11. Finding the farthest-apart pair of locations in a collection of
ordered pairs
12. Finding the smallest number of colors that can color a map

(8-12) Describe the state space to search, and the goal state, in
8. tic-tac-toe
9. a maze
10. the traveling-salesperson problem
11. the game of chess
12. choosing courses to register for in the next semester

2.2a Explain goal-based state-space search (core)
1. What does a state-space search do?
2. What is a goal-based agent?
3. Distinguish data-driven from goal-driven state-space search.
4. What are two ways to sequence the exhaustive search of a state space? Explain.
5. In a goal-driven AI approach to state-space search, what does the goal consist of? Give an example.
6. Describe a kind of agent that may require exploration of a state space.
7. What is the name for a set of desirable states, and how does an agent operate in an environment defined in this way?

2.2b Perform a goal-driven analysis of a problem with a game tree (core)
(1-3) Consider the following game trees. Player X is at the game state denoted by the root vertex and may choose between move a and move b, denoted by edges, leading to a game state that is a vertex adjacent to the root. Then player Y will move, followed by player X. After X's second move, that player will immediately be in a game state that is a win (W), tie (T), or loss (L). Decide whether player X's better next move is a or b, and explain. What is the maximum size of the state space of paths for a game tree of depth n, where players have a maximum of k choices in making a move? (Adapted from Brookshear.)
[The game-tree figures for problems 1-3 do not survive in this text-only copy.]

(4-6) (a) Convert the maze below, where the player starts at S and tries to reach goal G, to graph form
(b) Perform partial depth-first and breadth-first searches, giving the order in which the graph vertices are to be visited
(c) If n two-way branches are encountered in a maze, then how large is the state space of paths?
[The maze figures for problems 4-6 do not survive in this text-only copy.]

(7-11) (a) Draw a game tree for the following tic-tac-toe position, denoting moves with ___ notation
(b) Use your game tree to choose X's next move and predict the game outcome
[The tic-tac-toe position diagrams for problems 7-11 do not survive in this text-only copy.]

2.3 Apply the definition of intractability to a computational problem
1. Relate intelligence to intractable problems.
2. Is a brain subject to the same limitations as a computer in relation to the notions of tractability and decidability?
3. What does "hard computational problems" mean, formally and in practice?
4. What sorts of running times is intractability associated with?
5. Describe in mathematical terms some kinds of computational problems addressed by AI.

(6-15) Apply the definition of intractability to characterize the problem below, and explain your reasoning, referring to the appropriate definition.
6. searching an array
7. sorting an array
8. satisfiability of propositional-logic formulas
9. evaluation of propositional-logic formulas
10. search of a road map for a path
11. traveling-salesperson problem
12. guessing passwords
13. guessing the order of n items
14. proving a theorem
15. finding the maximum value returned by a function with n arguments

2.4 Explain how heuristics are used to provide adequate solutions to hard search problems (core)
1. Explain the hill-climbing heuristic.
2. Why are there often too many states to exhaustively search in the state space? What are the rules of thumb that can reduce the number of states visited? Give an example.
3. Which heuristic for game play assumes that the opponent will play optimally? Explain.
4. Describe the minimax heuristic.
5. What is a heuristic, and what is its use? Give an example.
6. For what sorts of problems are approximation algorithms, randomization, and heuristics used? What are heuristics and when are they used?
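
For questions 3-4, a compact minimax sketch; the nested-list trees stand in for the lost figures and are hypothetical:

    # Minimax over a game tree given as nested lists; leaves hold the
    # payoff to maximizer X (+1 win, 0 tie, -1 loss). The opponent is
    # assumed to play optimally, so levels alternate max and min.
    def minimax(node, maximizing):
        if not isinstance(node, list):
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Hypothetical subtrees after X's moves a and b; Y moves next.
    move_a = [[-1, -1], [0, 1]]
    move_b = [[0, 1], [1, 1]]
    print(minimax(move_a, False), minimax(move_b, False))  # -1 1: X picks b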


Multiple-choice questions on Topic 3: Knowledge and inference


1. Knowledge, planning, and beliefs
1. A possible component of an agent may be a (a) compiler; (b) processor; (c) state set; (d) reward; (e) knowledge base
2. Knowledge may be commonly represented in the language of (a) mathematics; (b) logic; (c) a country; (d) computers; (e) databases
3. One significant formalism for knowledge representation in AI is (a) the ASCII code; (b) arithmetic; (c) predicate logic; (d) Java; (e) none of these
4. Logic-based agents (a) answer queries; (b) adapt to their environments; (c) operate in multi-agent systems; (d) seek to prove assertions; (e) use training to improve ability to act
5. Knowledge-based agents (a) answer queries; (b) adapt to their environments; (c) operate in multi-agent systems; (d) seek to prove assertions; (e) use training to improve ability to act
6. A knowledge base is expressed in a ______ language (a) knowledge-representation; (b) regular; (c) natural; (d) programming; (e) query
7. Rules for model-based reasoning include (a) network protocols; (b) communication primitives; (c) rules of engagement; (d) diagnostic and causal rules; (e) grammar rules
8. Planning may start with (a) original state only; (b) goal state; (c) either original or goal state; (d) intermediate or goal state; (e) original or intermediate state
9. Planning is part of ____ seeking (a) knowledge; (b) goal; (c) algorithm; (d) belief; (e) victory
10. Belief is a(n) (a) inference; (b) axiom; (c) propositional attitude; (d) grammar rule; (e) instance of knowledge
11. Knowledge is (a) belief; (b) proof; (c) inference; (d) justified true belief; (e) theorems
12. Justified true belief is (a) logic; (b) wisdom; (c) inference; (d) knowledge; (e) none of these
13. A situation is a (a) search path; (b) set of all possible states; (c) state of the environment; (d) belief state; (e) knowledge base
14. A state of the environment that results from a previous state and an action is a(n) (a) model; (b) space; (c) location; (d) situation; (e) inference
15. Creation of a formal representation of a knowledge base of domain-knowledge rules is (a) inference; (b) database design; (c) policy search; (d) knowledge engineering; (e) none of these
16. Knowledge-based agents use percepts and _____ knowledge (a) domain; (b) universal; (c) first-order logic; (d) robotic; (e) scientific
(e) scientific

2. Concepts and instances

1. An ontology is a (a) logic; (b) set of percepts; (c) set of facts about the world chosen to be in a knowledge base; (d) database; (e) none of these
2. Abstract knowledge about concepts in the world, including things, actions, and relationships, comprises a(n) (a) database; (b) ontology; (c) query; (d) knowledge base; (e) theorem
3. Semantic networks include (a) nodes and edges that are concepts and objects; (b) servers and clients; (c) protocols; (d) inferences; (e) none of these
4. The closed-world assumption is (a) part of first-order logic; (b) a form of default reasoning; (c) a theorem; (d) provable; (e) none of these
5. Case frames capture (a) syntax; (b) inference; (c) first-order logic; (d) semantics; (e) real-time data
6. To represent concepts, an alternative to predicate calculus is (a) first-order logic; (b) semantic networks; (c) context-free grammars; (d) heuristics; (e) none of these
7. Case-based reasoning is (a) natural entailment; (b) arithmetic; (c) analogic; (d) abductive; (e) none of these
8. Containment is a ___ relationship (a) kind-of; (b) part-of; (c) inheritance; (d) mutual; (e) exclusive
9. Inheritance is a ___ relationship (a) kind-of; (b) part-of; (c) multi-way; (d) mutual; (e) exclusive
10. Capturing deep semantic aspects of language is the objective of (a) LISP; (b) Prolog; (c) case frames; (d) heuristics; (e) none of these
11. A frame is a (a) fact about an instance of a concept; (b) knowledge base; (c) relationship; (d) scheme to express connections among concepts; (e) none of these
12. Fuzzy logic is related to a theory of (a) computation; (b) possibility; (c) probability; (d) knowledge; (e) none of these
13. Truth is quantified as a real number between 0 and 1 by ____ logic (a) predicate; (b) propositional; (c) fuzzy; (d) modal; (e) temporal
14. Tall-persons is a ____ set (a) fuzzy; (b) null; (c) specific; (d) infinite; (e) discrete
15. Semantic networks represent an approach to (a) inference; (b) adaptation; (c) knowledge representation; (d) predicate calculus; (e) logic programming
16. An object is an instance of a(n) (a) inference; (b) concept; (c) network; (d) belief; (e) knowledge base
17. A concept is a (a) set of instances; (b) rule; (c) fact; (d) belief; (e) knowledge base

3. Logical inference

1. Predicate logic is a(n) (a) algorithm; (b) language of assertions; (c) language of arithmetic expressions; (d) set of symbols; (e) set of operations
2. Application of an inference rule is (a) a greater-knowledge to less-knowledge state transition; (b) a state transition from less knowledge to greater knowledge; (c) a full state-space search; (d) a transition to an initial state; (e) none of these
3. An assertion's value is (a) true; (b) a symbol; (c) a number; (d) true or false; (e) none of these
4. A validity-maintaining procedure for deriving sentences in predicate logic from other sentences is a(n) (a) proof; (b) theorem; (c) algorithm; (d) inference rule; (e) inference chain
5. (∀x) x = x + 1 is (a) a numeric expression; (b) false; (c) true; (d) an assignment; (e) none of these
6. (∃x) x = x + 1 is (a) a numeric expression; (b) false; (c) true; (d) an assignment; (e) none of these
7. An interpretation is (a) an assignment of truth values to variables; (b) the value of an assertion; (c) the meaning of a program; (d) a formula; (e) none of these

8. A formula is satisfiable if it has a(n) ____ under which it is true (a) operation; (b) algorithm; (c) number; (d) interpretation; (e) none of these
9. Quantifiers ____ variables for meaningful use (a) give values to; (b) take values from; (c) bind; (d) assign; (e) declare
10. A sentence that is not true under any interpretation is (a) complete; (b) incomplete; (c) consistent; (d) a contradiction; (e) valid
11. A sentence that is true under any interpretation is (a) complete; (b) incomplete; (c) a contradiction; (d) inconsistent; (e) valid
12. Inference rules maintain (a) completeness; (b) consistency; (c) validity; (d) satisfiability; (e) falsehood
13. An inference rule that never produces contradictions is (a) complete; (b) incomplete; (c) inconsistent; (d) sound; (e) useless
14. (p ∧ (p → q)) → q is (a) false; (b) Modus Ponens; (c) inconsistent; (d) not always true; (e) none of these
15. An interpretation of a set of formulas in predicate logic is (a) a logical inference; (b) a heuristic; (c) an assignment of truth values to symbols; (d) a theorem; (e) a truth value
16. The sentence α ⊨ β (in every interpretation where α is true, β is true) is an instance of (a) entailment; (b) negation; (c) validity; (d) satisfiability; (e) falsehood
17. Predicate calculus extends propositional logic with (a) inference; (b) negation; (c) implication; (d) variables; (e) quantifiers
18. Predicate calculus extends propositional logic with (a) inference; (b) negation; (c) implication; (d) variables; (e) functions
19. A formula in predicate logic is valid if (a) it is true for some interpretation; (b) it is true for all interpretations; (c) it is true for no interpretation; (d) it is an axiom; (e) it is not disproven
20. A formula in predicate logic is satisfiable if (a) it is true for some interpretation; (b) it is true for all interpretations; (c) it is true for no interpretation; (d) it is an axiom; (e) it is not disproven
21. A formula in predicate logic is a contradiction if (a) it is true for some interpretation; (b) it is true for all interpretations; (c) it is true for no interpretation; (d) it is an axiom; (e) it is not disproven
22. Satisfiability is ___ validity (a) weaker than; (b) equivalent to; (c) stronger than; (d) a subset of; (e) none of these
23. Inference rules enable derivation of (a) axioms; (b) other inference rules; (c) new knowledge; (d) percepts; (e) none of these


24. The problem of evaluating a formula in propositional logic is (a) intractable; (b) undecidable; (c) tractable; (d) Θ(2^n); (e) polymorphic
25. Deciding whether a formula in propositional logic is
satisfiable is considered (a) intractable; (b) undecidable;
(c) tractable; (d) decidable; (e) polymorphic
26. SAT is the problem of deciding whether a formula in
propositional logic (a) holds; (b) has a set of variable
assignments that make it true; (c) is not a contradiction;
(d) is syntactically correct; (e) is probably true
27. Modus ponens asserts that (a) p → q; (b) p ∧ q; (c) p → ((p → q) → q); (d) (p ∧ (p → q)) → q; (e) (q ∧ (p → q)) → p
28. An algorithm that determines what substitutions are needed
to make two sentences match is (a) resolution; (b) inference;
(c) unification; (d) contradiction; (e) nonexistent
29. Unification is (a) an algorithm for making substitutions so
that two sentences match; (b) a proof method;
(c) an inference rule; (d) a theorem; (e) a knowledge-representation scheme
30. A way to add to a knowledge base monotonically is
(a) backward chaining; (b) inference; (c) querying;
(d) arithmetic; (e) AND
31. Backward chaining is ____ driven (a) AND; (b) data;
(c) goal; (d) logic; (e) inference
32. Forward chaining is ____ driven (a) AND; (b) data; (c) goal;
(d) logic; (e) inference

4. Expert systems and resolution proof


1. Expert systems (a) find all inferences; (b) try to unify goals with facts; (c) try to summarize facts; (d) try to find contradictions; (e) none of these
2. Separating knowledge from control is a feature of (a) heuristics; (b) expert systems; (c) first-order logic; (d) predicate calculus; (e) reinforcement learning
3. Expert systems are ____ based (a) data; (b) consciousness; (c) rule; (d) proof; (e) none of these
4. Expert systems store knowledge as (a) numbers; (b) database records; (c) inference rules; (d) proofs; (e) none of these
5. Expert systems separate (a) facts from opinions; (b) knowledge from control; (c) code from design; (d) inference from querying; (e) none of these
6. Prolog uses ____ proof (a) resolution; (b) unification; (c) inductive; (d) constructive; (e) none of these
7. Prolog searches for (a) data; (b) high-utility states; (c) proof of goal clauses; (d) refutations; (e) none of these
8. Prolog uses the ____ assumption (a) natural; (b) responsibility; (c) closed-world; (d) best-world; (e) optimal-utility


Topic 3 Terminology (Knowledge and inference)


belief
belief revision
case frame
case-based reasoning
causal rule
closed-world assumption
conjunction
diagnostic rule
disjunction

domain knowledge
entailment
expert system
first-order logic
formula
implication
inference rule
inheritance
interpretation

knowledge
knowledge base
knowledge engineering
knowledge representation
knowledge-based agent
logic program
model-based reasoning
modus ponens

modus tollens
negation
planning
predicate calculus
Prolog
proof procedure
propositional calculus
resolution proof
satisfiability

script
semantic network
situation
soundness
state
truth maintenance
truth table
universal quantifier
validity

Problems to assess topic 3 outcomes


Topic objective:
Describe the representation and use of
knowledge in inference-based problem
solving
3.1 Distinguish knowledge-based from goal-driven agents (core)
1. What are two ways to add knowledge to a knowledge base?
2. How does a knowledge-based agent operate?
3. Distinguish knowledge-based from goal-driven agents.
4. Distinguish state-space search from knowledge-based inference.
5. How would an agent use the expression Dog(fido) to solve a problem?
6. What is the frame problem that knowledge-based agents encounter?
7. What is knowledge?

3.2 Describe methods of representing and using knowledge (core)
1. What do case frames represent? How?
2. What do semantic networks represent? How?
3. How is knowledge given to and received from expert systems?
4. How does inference contribute to a knowledge base?
5. Name relationships that categories may have. How may several category relationships be organized?
6. What is a validity-maintaining procedure for deriving sentences from other sentences in first-order logic?
7. What is unification used for?
8. What is a fuzzy set?
9. In what form is knowledge represented in knowledge-based agents?

3.3a Explain a basic concept of logical inference*
1. What is the relationship between belief and knowledge?
2. Distinguish causal rules from diagnostic rules.
3. What does first-order logic (predicate logic) consist of?
4. What are the quantifiers in predicate logic, and their meanings?
5. Distinguish predicate logic from propositional logic.
6. What is entailment?
7. What is a proof?
8. Distinguish a complete system from a sound one.
9. Distinguish conjunction from disjunction.
10. What is an inference rule used for?

3.3b Use inference in propositional and predicate logic
1. What is an algorithm for determining what substitutions are needed to make two sentences match? Why is this useful?
2. What is a satisfiable formula, and in what language is it usually expressed?
3. In what language is knowledge represented, traditionally, in AI? Why?
4. In what AI application is resolution proof used?
5. Show whether the formula (p ∧ (q ∨ r)) → (p ∧ q) in propositional logic is (a) valid (b) satisfiable
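One way to settle a problem like question 5 mechanically is to enumerate the truth table; the Python sketch below is an illustration, not a required method:

    # Enumerate the truth table of a propositional formula to test
    # validity (true in every row) and satisfiability (true in some row).
    from itertools import product

    def formula(p, q, r):
        # (p and (q or r)) -> (p and q), with -> as material implication
        return (not (p and (q or r))) or (p and q)

    rows = [formula(p, q, r) for p, q, r in product([False, True], repeat=3)]
    print("valid:", all(rows))        # False: p=T, q=F, r=T is a counterexample
    print("satisfiable:", any(rows))  # True: e.g., p=T, q=T, r=T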

(6-13) Consider these axioms and inference rules.
i. false → q for any q (contradiction)
ii. ((p → q) ∧ p) → q for any p, q (modus ponens)
iii. ((p → q) ∧ ¬q) → ¬p for any p, q (modus tollens)
iv. a is true
v. a → b
vi. c → a
vii. e is false
viii. b → d
ix. d → f

Prove the following, citing by roman numeral the axiom(s) used:
6. b → f
7. c → d
8. b ∧ f
9. a ∨ e
10. ¬c ∨ d
11. e → d
12. d
13. f ∧ a
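For example, item 12 (d) follows by two applications of modus ponens (ii): from iv (a) and v (a → b) derive b; then from b and viii (b → d) derive d.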

3.4 Describe how expert systems work
1. What application of AI stores knowledge as inference rules and how does it use the knowledge? What is a common language used for this application?
2. For what AI application is the Prolog language best known, and how does it work?
3. How is expertise represented in an expert system?
4. Explain the main form of proof used by expert systems.
5. Describe expertise and its limits.
6. What rules do expert systems use?
7. Describe resolution theorem proving.
8. If (p ∨ q ∨ r) ∧ (¬p ∨ s ∨ t) is true, then what is true about q, r, s, t?
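Question 8 calls for a resolution step: two clauses containing a complementary pair of literals resolve to the union of the remaining literals. A minimal Python sketch, where the set-of-literals clause representation is an assumption for illustration:

    # Resolution step on propositional clauses, represented as
    # frozensets of literals; "p" is positive, "~p" is negated.
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        # Return all resolvents of the two clauses.
        out = []
        for lit in c1:
            if negate(lit) in c2:
                out.append((c1 - {lit}) | (c2 - {negate(lit)}))
        return out

    print(resolve(frozenset({"p", "q", "r"}), frozenset({"~p", "s", "t"})))
    # [frozenset({'q', 'r', 's', 't'})] -- so q or r or s or t must hold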


Multiple-choice questions on Topic 4: Uncertainty


1. Acting under uncertainty


1. Planning in a partially observable environment faces challenges due to (a) adversity; (b) certainty; (c) uncertainty; (d) impossibility of prediction; (e) none of these
2. Beliefs (a) are held permanently; (b) are proven; (c) are axioms; (d) may be revised; (e) are avoided
3. Truth maintenance may alter (a) inference rules; (b) algorithms; (c) beliefs; (d) facts; (e) none of these
4. A belief state is a (a) set of transitions; (b) set of states; (c) state of the environment; (d) transition; (e) body of knowledge
5. Partial-information games are solved using (a) deduction; (b) belief states; (c) precise utility values; (d) proof; (e) none of these
6. Readiness of the environment for the next action may be verified by (a) belief state; (b) actions; (c) uncertainty; (d) action monitoring; (e) none of these
7. Readiness of the environment for the next action may be verified by (a) belief state; (b) actions; (c) uncertainty; (d) plan monitoring; (e) none of these
8. Knowledge only provides a(n) ____ for diagnosis (a) plan; (b) state; (c) degree of belief; (d) item of data; (e) none of these
9. Probabilities are employed in ____ methods (a) stochastic; (b) logical; (c) adversarial; (d) heuristic; (e) none of these
10. Stochastic methods are often used in (a) theorem proving; (b) knowledge retrieval; (c) logical inference; (d) planning under uncertainty; (e) none of these
11. Logic is ____ in that adding new facts always expands a knowledge base (a) modal; (b) propositional; (c) deontic; (d) monotonic; (e) nonmonotonic
12. Truth maintenance may require (a) inconsistency; (b) soundness; (c) percepts; (d) belief retraction; (e) none of these
13. Truth maintenance systems work with ____ logic (a) propositional; (b) predicate; (c) modal; (d) fuzzy; (e) none of these
14. An approach to algorithm design often used to address intractable problems is (a) divide and conquer; (b) greedy; (c) brute force; (d) dynamic programming; (e) probabilistic
15. One way to find an adequate though inexact solution to an intractable optimization problem may be (a) brute force; (b) approximation; (c) divide and conquer; (d) greedy algorithm; (e) none of these
16. An adequate though inexact solution to an intractable optimization problem may be (a) brute force; (b) probabilistic; (c) divide and conquer; (d) O(2^n); (e) none of these
17. Uncertainty is a property of all environments that are (a) partially observable or stochastic; (b) fully observable or stochastic; (c) fully observable or deterministic; (d) episodic or deterministic; (e) episodic or discrete
18. Diagnostic reasoning may be required when (a) causes and effects are known with certainty; (b) results may be inferred from percepts; (c) cause-effect relationships involve uncertainty; (d) proof rules are ambiguous; (e) no percepts are available

2. Probability theory and belief

1. A set of possible outcomes is a(n) (a) random variable; (b) probability distribution; (c) compound event; (d) sample space; (e) permutation
2. An outcome that is from a set of uncertain possibilities characterizes a (a) random process; (b) sample space; (c) event; (d) sequence; (e) permutation
3. Any probability value is (a) 0 or 1; (b) in the range of 0 to 1; (c) some positive real number; (d) some positive or negative real number; (e) an integer
4. The possible orderings of elements of a set are (a) truth values; (b) numbers; (c) sets; (d) combinations; (e) permutations
5. The possible unordered selections from a set are (a) truth values; (b) numbers; (c) sets; (d) combinations; (e) permutations
6. A sample space is (a) a random variable; (b) a sequence; (c) a number; (d) a set of all possible outcomes; (e) an event
7. For sample space S, Kolmogorov's axiom asserts that P(S) = (a) 0; (b) 0.5; (c) 1; (d) 2; (e) indeterminate
8. For sample space S, Kolmogorov's axiom asserts that P(∅) = (a) 0; (b) 0.5; (c) 1; (d) 2; (e) indeterminate
9. Kolmogorov's axioms are considered useful for rational-agent AI because (a) they predict outcomes in many domains; (b) beliefs that violate the axioms result in poor bets; (c) they help the agent prove theorems; (d) they help the agent make inferences; (e) they are used in expert systems
10. The average of values for equally likely outcomes is a(n) (a) probability; (b) random variable; (c) expected value; (d) combination; (e) permutation
11. Expected value of a die throw is (a) 0; (b) 1; (c) 3.5; (d) 4; (e) 6
12. Expected value of a coin toss is (a) 0; (b) 0.25; (c) 0.5; (d) 1; (e) 2
13. A random variable is a(n) (a) truth value; (b) set; (c) function; (d) relation; (e) number
14. A degree of belief in the absence of helpful information is a(n) (a) prior probability; (b) conditional probability; (c) random variable; (d) axiom; (e) event
15. A degree of belief given some helpful information is a(n) (a) prior probability; (b) conditional probability; (c) random variable; (d) axiom; (e) event

16. For independent events A and B, P(A ∧ B) = (a) P(A) + P(B); (b) P(A) · P(B); (c) P(A) − P(B); (d) P(A) / P(B); (e) 1.0
17. For independent events A and B, P(A ∨ B) = (a) P(A) + P(B) − P(~A) · P(~B); (b) P(A) · P(B); (c) P(A) − P(B); (d) P(A) / P(B); (e) 1.0

3. Bayesian inference
1. Conditional probability may apply if events have a(n) _____ relationship (a) empty; (b) noncausal; (c) independent; (d) dependent; (e) identity
2. Prior probability is (a) belief; (b) certainty; (c) conditional probability; (d) unconditional probability; (e) none of these
3. Probabilities of different event outcomes are a(n) (a) event; (b) probability distribution; (c) expected value; (d) sample space; (e) compound event

4. Bayes' Theorem enables computation of probabilities of causes, given probabilities of (a) effects; (b) other causes; (c) prior world knowledge; (d) inference rules; (e) none of these
5. Evidence, in using Bayes' Theorem, consists of (a) causes; (b) effects; (c) prior world knowledge; (d) inference rules; (e) none of these
6. Bayes' Theorem is used in constructing (a) automata; (b) belief networks; (c) semantic networks; (d) knowledge bases; (e) none of these
7. ___ enables finding probabilities of causes, given effects (a) Minimax; (b) Bayes' Theorem; (c) Gödel's Theorem; (d) fuzzy logic; (e) Prolog
8. Belief networks use (a) Minimax; (b) Bayes' Theorem; (c) Gödel's Theorem; (d) fuzzy logic; (e) Prolog
9. Bayesian reasoning is (a) abductive; (b) deductive; (c) diagnostic; (d) deterministic; (e) concept-based
10. Bayesian belief networks derive (a) evidence from results; (b) effects from causes; (c) causes from effects; (d) policies from utility; (e) probabilities from permutations
11. An example of an application of Bayes' Theorem is (a) Kolmogorov's Axioms; (b) the principles of mathematical induction; (c) certain medical screening tests with false positives may require follow-up tests; (d) determining the monetary value of a poker hand; (e) the advice not to invest heavily in games of chance

4. Markov models
1. A state-transition system with probabilistic transitions is a(n) (a) semantic net; (b) Bayesian net; (c) finite automaton; (d) Turing machine; (e) Markov chain
2. Observations that are probability functions of a current state characterize (a) all Markov models; (b) Bayesian networks; (c) schemas; (d) hidden Markov models; (e) none of these
3. The assumption that the current state of a system depends only on a finite list of previous states is called (a) Bayesian; (b) Markov; (c) closed-world; (d) a predicate-logic axiom; (e) incorrect
4. Hidden Markov models and Bayesian inference are used in (a) subsumption architecture; (b) speech recognition; (c) theorem proving; (d) expert systems; (e) none of these
5. Bayes' Theorem is about (a) entailment; (b) unconditional probability; (c) combinations; (d) inverse probability; (e) permutations

6. Markov decision problems are associated with ____ problems (a) toy; (b) hard; (c) deterministic; (d) stochastic; (e) none of these
7. The Markov assumption is that (a) all outcomes have the same probability; (b) the past does not affect the future; (c) current state depends only on recent states; (d) probability is unconditional; (e) system is deterministic
8. A solution to a Markov process is a(n) (a) path; (b) number in [0..1]; (c) truth value; (d) big-O expression; (e) policy
9. A Markov chain is a (a) series of logical deductions; (b) sequence of probabilistic inferences; (c) transition system with deterministic transitions; (d) transition system with probabilistic transitions; (e) type of linked list

5. Decision theory and expected utility


1. Expected utility is important for ____ agents (a) reflex; (b) rational; (c) knowledge-based; (d) social; (e) travel
2. Rational agents are likely to use (a) propositional logic;
(b) probability theory; (c) decision theory; (d) calculus;
(e) organization
3. Decision-theoretic agents are concerned immediately with
(a) past utility; (b) expected utility; (c) reward;
(d) information; (e) past action
4. The notion of bounded rationality suggests satisficing as an
alternative goal to (a) optimality; (b) utility; (c) goal states;
(d) constraint satisfaction; (e) inference
5. Rational agents are characterized by acting on
(a) input/output rules; (b) expert knowledge; (c) a belief state;
(d) a world state; (e) inference directly from facts
6. Value of information matters in (a) inference; (b) exploration;
(c) belief; (d) knowledge; (e) expected reward
7. Policy iteration updates (a) estimated utilities of states;
(b) goals; (c) a knowledge base; (d) state-action mappings;
(e) none of these
10. Decision theory joins (a) set theory and logic; (b) set theory
and probability theory; (c) probability theory and utility
theory; (d) utility theory and cognitivism; (e) cognitivism and
probability theory
11. A decision-theoretic agent uses mainly (a) a goal state;
(b) an action-state utility function; (c) a reflex mapping;
(d) deduction; (e) Bayes' theorem
12. Utility theory assumes (a) perfect knowledge;
(b) a deterministic environment; (c) a partial ordering of
states by utility; (d) soundness and completeness of logic;
(e) a dynamic environment


Terminology for topic 4 (Uncertainty)


action monitoring
atomic event
Bayes' theorem
Bayesian inference
Bayesian network
belief network
belief state
chain rule

circumscription
closed world assumption
combination
conditional probability
decision theory
decision-theoretic agent
event
expected outcome

hidden Markov model


independent events
Kolmogorov axioms
Markov assumption
Markov chain
Markov process
minimal model
modal logic

model
nonmonotonic reasoning
permutation
plan monitoring
prior probability
probability density function
probability theory

random variable
rational agent
rational decision
resolution proof
sample space
truth maintenance
unconditional probability

Problems to assess topic-4 outcomes


Topic objective:
Apply probability theory to describe
agents operating in uncertain
environments


4.1 Describe ways to operate under uncertain knowledge (core)
1. Why is belief maintenance nonmonotonic?
2. What does a rational agent seek to maximize?
3. What decisions are rational?
4. Distinguish belief maintenance from deduction.
5. In what environments does uncertainty apply?
6. Compare belief to knowledge.
7. How does uncertainty affect planning?
8. What special actions may be required in partially informed search?
9. Distinguish diagnostic reasoning from inference.
10. Distinguish toy problems from real-world ones.

4.2 Apply probability theory
1. Use standard notation to express the probability of rain, given clouds.
2. In what cases is the probability of A, given B, zero?
3. If P(A) = 0.5 and P(B) = 0.3, then under what condition is P(A ∧ B) = 0.15?
4. If P(A) = 0.5 and P(B) = 0.3, and A and B are independent, then what is P(A ∧ B)?
5. What is the expected value of the roll of two dice, and why? Three dice? Four?
6. What is the expected number of heads in two coin tosses? Three? Four? Five?

(7-14) Suppose P(P) = 0.5, P(Q) = 0.3, P(R) = 0.4, P(P ∧ Q) = 0.2, P(Q ∧ R) = 0.25, P(P ∧ R) = 0.3. Showing your work, find:
7. P(P | Q)
8. P(R | Q)
9. P(Q | P)
10. P(R | P)
11. P(P | R)
12. P(P | Q ∧ R)
13. P(Q | R)
14. P(R | P ∧ Q)
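As a check on hand calculations for 7-14, the definition P(X | Y) = P(X ∧ Y) / P(Y) can be applied directly; a small Python sketch with the values given above (pairwise cases only):

    # Conditional probability from given marginal and joint values,
    # using the definition P(X | Y) = P(X and Y) / P(Y).
    P = {"P": 0.5, "Q": 0.3, "R": 0.4,
         ("P", "Q"): 0.2, ("Q", "R"): 0.25, ("P", "R"): 0.3}

    def cond(x, y):
        joint = P[(x, y)] if (x, y) in P else P[(y, x)]
        return joint / P[y]

    print(round(cond("P", "Q"), 3))  # problem 7: 0.2 / 0.3 = 0.667
    print(round(cond("R", "Q"), 3))  # problem 8: 0.25 / 0.3 = 0.833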

4.3 Derive belief from evidence using a belief network
1. What is Bayesian inference used for and how?
2. What are Bayesian belief networks used for?
3. In what sort of environment is Bayesian reasoning used, and why?
4. Name and describe the following graph, including the significance of possible labels on transitions.
5. How can knowledge of P(Orange-barrels | Construction) be used to help explain slow traffic? What other knowledge would be needed?

(6-12) Showing your work, label the network above, given the following a priori values:

Probability of each (Construction, Traffic) combination:
Construction  Traffic   #6    #7    #8    #9    #10   #11   #12
T             T         .3    .3    .2    .25   .1    .1    .05
T             F         .2    .1    .3    .25   .1    .2    .1
F             T         .1    .05   .1    .05   .1    .2    .3
F             F         .4    .55   .4    .45   .7    .5    .55
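Each column above is a joint distribution over (Construction, Traffic). A minimal Python sketch, using problem #6's column as an arbitrary example, shows how such a joint yields a diagnostic probability:

    # Joint distribution P(Construction, Traffic), column #6 above.
    joint = {(True, True): 0.3, (True, False): 0.2,
             (False, True): 0.1, (False, False): 0.4}

    def p_construction_given(traffic):
        # P(C=T | T=traffic) = P(C=T, traffic) / P(traffic), where
        # P(traffic) marginalizes over Construction.
        p_t = sum(p for (c, t), p in joint.items() if t == traffic)
        return joint[(True, traffic)] / p_t

    print(p_construction_given(True))  # 0.3 / 0.4 = 0.75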

4.4a Describe and construct a Markov model

Based on the weather-model data above, and assuming that it is sunny today, and showing your work, give the probability that:
1. The next three days will not all be sunny
2. The next three days will be cloudy
3. The next three days will be sun, clouds, rain
4. The next three days will be rainy
5. The next two days will be rain, then sun
6. Two of the next three days will be rainy
7. The next four days will all be sunny or cloudy
8. It will be cloudy two days from now
9. It will not be sunny two days from now
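The weather-model data is referenced from earlier course material and is not reproduced here, so the transition probabilities in the Python sketch below are hypothetical stand-ins; the two helper functions show the shape of the computations the questions call for:

    # Markov-chain sketch; transition probabilities are hypothetical
    # stand-ins for the weather-model data (rows: today; cols: tomorrow).
    STATES = ["sun", "clouds", "rain"]
    T = {"sun":    {"sun": 0.6, "clouds": 0.3, "rain": 0.1},
         "clouds": {"sun": 0.3, "clouds": 0.4, "rain": 0.3},
         "rain":   {"sun": 0.2, "clouds": 0.4, "rain": 0.4}}

    def sequence_prob(start, days):
        # P(exact sequence of next days | today = start)
        p, cur = 1.0, start
        for nxt in days:
            p *= T[cur][nxt]
            cur = nxt
        return p

    def distribution_after(start, n):
        # Distribution over states n days from now
        dist = {s: 1.0 if s == start else 0.0 for s in STATES}
        for _ in range(n):
            dist = {s: sum(dist[r] * T[r][s] for r in STATES) for s in STATES}
        return dist

    print(sequence_prob("sun", ["sun", "clouds", "rain"]))  # shape of question 3
    print(distribution_after("sun", 2)["clouds"])           # shape of question 8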


4.4b Describe applications of Bayesian reasoning and Markov modeling
1. What is the Markov assumption?
2. What is a Markov process?
3. What do Bayesian belief networks and hidden Markov models have in common?
4. What data do Bayesian belief networks use, and what are they used for?
5. Luger05, pp. 381-382, #2: problem computing confidence in a conclusion, given observations with confidence values.
6. P. 382, #16: Convert Dempster-Shafer tables to BBNs.
7. P. 383, #18: Use OMM to predict weather probabilities.


4.5 Describe and apply decision theory and bounded rationality (core)
1. What is a utility-based agent?
2. Relate utility to decision theory.
3. What has utility? What is utility?
4. Distinguish reward from utility.
5. Distinguish value of information from value of reward.
6. What is a policy and what artifact in AI has one?
7. Dominant strategies, zero-sum, and the prisoner's dilemma are associated with which kind of theory? Define one of the above terms.
8. What is the term for a mapping from perceived states to actions, what environments require it, and how may it be constructed?
9. What does a good policy maximize? Name a way to do this.
10. Distinguish exploration from exploitation. In which kind of AI does this matter?


Multiple-choice questions on Topic 5: Supervised learning


1. Supervised learning

1. All learning (a) creates a knowledge base; (b) improves performance measure; (c) enables prediction of the future; (d) enhances utility; (e) speeds inference
2. Generalization is (a) deduction; (b) learning; (c) invalid; (d) the application of knowledge-base rules; (e) none of these
3. Concept learning is (a) random search; (b) knowledge-base querying; (c) generalization; (d) specialization; (e) deduction
4. Supervised-learning agents (a) answer queries; (b) adapt to their environments; (c) operate in multi-agent systems; (d) seek to prove assertions; (e) use training to improve ability to act
5. Value iteration updates (a) estimated utilities of states; (b) goals; (c) a knowledge base; (d) state-action mapping; (e) none of these
6. Learning is categorized as interactive or (a) logical; (b) reflex; (c) stochastic; (d) dynamic; (e) supervised
7. An agent A learns from experience E, with respect to task T, under performance measure P, if (a) A can perform T; (b) A exceeds P on T; (c) A improves P on T after E; (d) E exceeds P on T; (e) T is done only after E
8. All learning is (a) gaining knowledge by percepts; (b) gaining knowledge by inference; (c) applying knowledge; (d) stochastic; (e) improving performance
9. A concept is (a) a set of numbers; (b) a function c : R → R; (c) a set X of possible instances; (d) a function c : X → {0,1} where X is all possible instances; (e) a function c : X → R where X is all possible instances


2. Symbol-based learning
1. Inductive inference is (a) supervised learning; (b) reinforcement learning; (c) invalid; (d) Bayesian; (e) none of these
2. Ockham's Razor recommends choosing (a) first hypothesis; (b) state-space search; (c) heuristics; (d) simplest valid hypothesis; (e) most-completely-matched hypothesis
3. Abduction is (a) random search; (b) knowledge-base querying; (c) generalization; (d) specialization; (e) deduction
4. Backchaining from observations to hypotheses is (a) deduction; (b) abduction; (c) specialization; (d) knowledge-base querying; (e) random search
5. PAC learnability is an attribute of (a) first-order logic clauses; (b) knowledge bases; (c) concepts; (d) dynamic environments; (e) social environments
6. ((p → q) ∧ q) → p is (a) Modus Tollens; (b) Modus Ponens; (c) Bayes' Theorem; (d) valid; (e) abductive inference
7. Decision-tree learning uses (a) percepts from the environment; (b) policy iteration; (c) a training set; (d) probabilistic transitions; (e) artificial neurons
8. A decision tree is mainly a way to represent (a) domain knowledge; (b) rules of inference; (c) a concept; (d) a policy; (e) an algorithm

3. Connectionist learning

1. Connectionism is (a) silicon inspired; (b) parallel distributed processing; (c) symbolic; (d) inference based; (e) abductive
2. Parallel distributed processing is (a) silicon inspired; (b) connectionism; (c) symbolic; (d) inference based; (e) abductive
3. Neural networks are (a) silicon inspired; (b) parallel distributed processing; (c) symbolic; (d) inference based; (e) abductive
4. A downside of neural nets is (a) difficulty of human understanding of representation; (b) poor learning performance; (c) poor response; (d) low storage capacity; (e) none of these
5. Neurons fire according to (a) satisfaction of a first-order logic clause; (b) an algorithm; (c) hardware conditions; (d) a threshold function; (e) a not assertion
6. A nonsymbolic system is (a) neural nets; (b) first-order logic; (c) knowledge bases; (d) explanation-based learning; (e) context-free grammars
7. Perceptrons are (a) first-order logic sentences; (b) neural nets; (c) social agents; (d) processors; (e) knowledge bases
8. Activity flows in loops in (a) backpropagation nets; (b) perceptrons; (c) recurrent neural nets; (d) all neural nets; (e) none of these
9. Backpropagation is used with (a) expert systems; (b) theorem proving; (c) neural nets; (d) Markov chains; (e) none of these
10. Representation of input data in neural nets is stored as (a) the quantity of output units; (b) the quantity of input units; (c) the weights of output-unit connections; (d) the weights of hidden-unit connections; (e) the number of connections
11. In training a neural net, weights of connections are changed in response to (a) agent judgment; (b) Bayesian formulas; (c) predicate-logic expression values; (d) errors detected in output units; (e) correct outputs
12. Neural nets have
    i. input units
    ii. output units
    iii. processing units
    iv. hidden units
    (a) i and ii; (b) i, ii, and iii; (c) i, iii, and iv; (d) i, ii, and iv; (e) none of these
13. Connectionist learning is (a) inference based; (b) symbol based; (c) non symbolic; (d) Bayesian; (e) none of these
14. Neural networks are associated with (a) Bayesian reasoning; (b) Ockham's Razor; (c) connectionist learning; (d) symbol-based learning; (e) none of these
15. Neurons in hidden layers are those (a) protected from firing; (b) with external inputs and outputs; (c) with external inputs but no external outputs; (d) with external outputs but no external inputs; (e) without external inputs or outputs
16. Neural nets learn by (a) abduction; (b) symbolic methods; (c) Bayesian inference; (d) adjusting weights of synapses; (e) computing rewards
17. Perceptron learning adjusts (a) a knowledge base; (b) inference rules; (c) probability estimates; (d) synapse weights; (e) transitions


4. Evolutionary computation
1. Evolutionary computation uses the technique of maximizing (a) fitness; (b) reward; (c) performance; (d) quantity of output; (e) none of these
2. Evolutionary computation (a) is deterministic; (b) seeks optimal solutions; (c) was developed in the 19th century; (d) is probabilistic; (e) none of these
3. Evolutionary computation is modeled on (a) brute force; (b) divide and conquer; (c) greediness; (d) natural selection; (e) fractals
4. Function optimization searches for (a) a function; (b) parameter values; (c) a return value; (d) an algorithm; (e) a time analysis
5. Fitness measures are (a) parameters to functions; (b) functions to be optimized; (c) return values; (d) algorithms; (e) time functions
6. Genetic algorithms are (a) greedy; (b) brute-force; (c) a way to compute fitness; (d) a form of evolutionary computation; (e) used in the human genome project
7. Ant computing is (a) greedy; (b) brute-force; (c) a way to compute fitness; (d) a form of evolutionary computation; (e) used in the human genome project
8. Evolutionary computation is (a) a brute-force method; (b) state-space search one state at a time; (c) path optimization; (d) population based; (e) DNA computing


5. Natural-language processing
1. A language is a (a) knowledge base; (b) lexicon; (c) grammar; (d) set of strings; (e) none of these
2. Some steps in communication, in order, are (a) synthesis, analysis, intention; (b) generation, perception, disambiguation; (c) analysis, perception, generation; (d) disambiguation, perception, analysis; (e) generation, intention, analysis
3. A lexicon is a set of (a) meanings; (b) sentences; (c) concepts; (d) words; (e) inferences
4. A regular expression (a) is a language; (b) is an alphabet; (c) defines a language; (d) defines an alphabet; (e) none of these
5. The regular expressions that define infinite languages contain (a) +; (b) (); (c) |; (d) *; (e) none of these


6. A regular language corresponds to (a) an alphabet; (b) the set of all strings over an alphabet; (c) a regular expression; (d) a natural language; (e) none of these
7. In regular expressions, the operator * stands for (a) concatenation; (b) selection; (c) repetition; (d) addition; (e) reversal
8. Parsing comes after (a) contextual interpretation; (b) lexical analysis; (c) semantic interpretation; (d) all of these; (e) none of these
9. Semantic interpretation comes before (a) contextual interpretation; (b) lexical analysis; (c) parsing; (d) phonetic analysis; (e) none of these
10. A grammar specifies (a) semantics; (b) pragmatics; (c) phonetics; (d) syntax; (e) world knowledge
11. Semantics refers to (a) meaning; (b) grammar; (c) phonetics; (d) effects on listener; (e) content
12. Production rules occur in (a) a lexicon; (b) grammars; (c) semantic specs; (d) proofs; (e) queries
13. Parsing uses (a) perception; (b) grammars; (c) semantic nets; (d) a knowledge base; (e) first-order logic proof rules
14. Context-free languages are (a) a subset of regular languages; (b) a superset of regular languages; (c) a superset of decidable languages; (d) accepted by DFAs; (e) none of these
15. A context-free language is characterized by a set of ______ rules (a) regular; (b) regular and irregular; (c) production; (d) alphabets as; (e) terminal and nonterminal
16. The most difficult step in extracting a first-order logic sentence from a natural-language sentence may be (a) parsing; (b) lexical analysis; (c) semantic interpretation; (d) inference; (e) search
17. Disambiguation of sentences may require (a) applying grammar rules; (b) applying lexical rules; (c) querying a knowledge base; (d) use of stochastic methods; (e) none of these
18. Creating database entries from natural-language text is (a) inference; (b) querying; (c) learning; (d) information extraction; (e) none of these
19. Hidden state in natural-language processing includes (a) the intention of the speaker; (b) the knowledge of the listener; (c) the grammar of the language; (d) the language's semantics; (e) the state of the world


Terminology for topic 5 (Supervised learning)


abduction
ambiguity
analogical reasoning
artificial neuron
associative memory
attractor
backpropagation
Chomsky hierarchy

computational learning theory
concept
connectionist learning
context-free language
decision tree learning

explanation-based learning
feed-forward net
grammar
Hebbian learning
hidden layer
inductive bias
inductive learning

language
learnability
learning
machine translation
multi-layer net
neural network
nonterminal symbol
PAC learning

parse tree
perceptron
pragmatics
recurrent net
regular language
semantics
supervised learning
syntax

Problems to assess topic-5 outcomes


Topic objective:
Describe ways to supervise agents to
learn and improve their behavior
5.1 Explain what learning is (core)
1. What is the goal of learning? Describe two ways to attain this.
2. Relate a learning agent's experience, E, task set T, and performance measure P. Give an example.
3. Compare rote memorization to the types of learning that we have discussed.
4. What is a concept? How may it be learned?
5. Describe a decision-theoretic agent and its desired actions.
6. What is value iteration?

5.2a Describe methods of symbol-based supervised learning
1. Distinguish abduction from deduction.
2. What is inductive inference? Give an example.
3. Describe some forms of supervision in supervised learning.
4. Describe two tools or techniques of concept learning.
5. Distinguish symbol-based learning from another type.
6. Discuss the meaning and validity of the formula ((p → q) ∧ q) → p.
7. Describe three types of symbol-based learning.
8. What is this? Describe an automated way to construct it.

5.2b Apply the decision tree learning method to sample data
Construct a decision tree for one of the examples shown in the table (over). Challenge: use the Decision-tree-learning algorithm in Russell-Norvig, p. 702 (see also p. 764, #18.6) to generate a decision tree.

5.3a Describe the connectionist approach to AI (core)
1. What is a neural net used for, and how is it prepared for operation?
2. What is a perceptron, and how does it work?
3. Describe perceptron learning.
4. What is the role of weight adjustment in learning?
5. Describe associative memory.
6. Describe backpropagation.
7. What is the main tool in connectionist learning, and how does it work?
8. Describe neurons and how they work together.
9. Explain the linear-separability constraint with empirical data.

5.3b Construct and train a perceptron
(a) Visit the following site: http://www.eee.metu.edu.tr/~alatan/Courses/Demo/AppletPerceptron.html and left-click some points on the grid; right-click some other points in another part of the grid. Press the step button to run the learning algorithm and report the result in a sentence or two. From your knowledge of perceptrons, describe what happened at each learning step when you clicked step.
(b) Using the perceptron learning rule (Russell-Norvig, p. 724), and possibly using the Java code at http://en.literateprograms.org/Perceptron_(Java), construct a two-input, one-output perceptron that implements the predicate designated by a linear function below, numbered 0 to 7. (I will solve #0.) The perceptron is to output 1 if input pair (x1, x2) is below-left of the linear graph of the designated function, otherwise 0. Show your work.
For training data, classify each of 20 or 30 points in the square defined by ((0,0) .. (9,9)) as valued 0 or 1. The resulting triples are your training data. For example, (x, y) is valued 1 if it is below-left of the line defined by the linear function below. Start by drawing the graph of your function; then classify some points.
0. y = 1.1x + 10        4. y = .3x + 6
1. y = 1.2x + 14        5. y = .8x + 10
2. y = 1.8x + 10        6. y = 2x + 12
3. y = .5x + 4          7. y = 3x + 20
10. (Based on example from Luger, 2005, p. 461: addition, (|a − b| < c), subtraction, OR, AND)
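A minimal Python sketch of the perceptron learning rule for part (b); function #3 (y = .5x + 4) is an arbitrary choice here, and the learning rate, epoch count, and bias encoding are conventional assumptions:

    # Threshold perceptron with a bias weight w[0]; outputs 1 when
    # w0 + w1*x1 + w2*x2 > 0, else 0.
    def predict(w, x1, x2):
        return 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0

    def train(data, rate=0.1, epochs=100):
        w = [0.0, 0.0, 0.0]
        for _ in range(epochs):
            for x1, x2, target in data:
                err = target - predict(w, x1, x2)
                # Perceptron rule: w <- w + rate * error * input
                w[0] += rate * err
                w[1] += rate * err * x1
                w[2] += rate * err * x2
        return w

    # Training data: label 1 if (x1, x2) is below-left of y = .5x + 4.
    data = [(x1, x2, 1 if x2 < 0.5 * x1 + 4 else 0)
            for x1 in range(10) for x2 in range(10)]
    w = train(data)
    print(predict(w, 2, 3))  # (2, 3) lies below the line, so expect 1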

5.4 Describe evolutionary computation
1. What evolves in evolutionary computation?
2. How does evolutionary computation address the function-optimization problem?
3. In what kind of AI are fitness functions used, and how?
4. In what kind of AI are genetic algorithms used? How?
5. Describe a population-based way to address function-optimization problems.
6. Solve by genetic algorithm (a) CNF-SAT; (b) TSP (Luger05, pp. 511-513)
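For question 6(a), a genetic-algorithm sketch for CNF-SAT in Python; the clause set, population size, and truncation selection are illustrative assumptions, not prescribed choices:

    import random

    # GA for CNF-SAT: individuals are bit-strings (truth assignments);
    # fitness is the number of satisfied clauses.
    CLAUSES = [(1, -2, 3), (-1, 2), (2, -3)]  # literal k = variable k; -k = its negation
    N = 3  # number of variables

    def fitness(bits):
        return sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in CLAUSES)

    def mutate(bits, p=0.1):
        return [b ^ (random.random() < p) for b in bits]

    def crossover(a, b):
        cut = random.randrange(1, N)
        return a[:cut] + b[cut:]

    pop = [[random.random() < 0.5 for _ in range(N)] for _ in range(20)]
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(CLAUSES):
            break
        keep = pop[:10]  # truncation selection, then refill by variation
        pop = keep + [mutate(crossover(*random.sample(keep, 2))) for _ in range(10)]
    print(pop[0], fitness(pop[0]))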

5.5 Explain concepts of natural-language processing
1. What is the structure of a language called, and how is it defined?
2. What is a language, and how are some languages specified?
3. Distinguish semantics, syntax, and pragmatics.
4. What are the terms for language structure, meaning in language, and the effects of utterances?
5. What are the main aspects of natural language processing?

(6-13) Write a context-free grammar for
6. very simple English sentences
7. assertions in propositional logic
8. arithmetic expressions that may include whole numbers, parentheses, +, −, ×, ÷
9. assertions in predicate logic, consisting of quantifications followed by predicate expressions, where predicate expressions are identifiers as predicate names with parenthesized parameter lists consisting of predicate expressions, identifiers, or numerals
10. statements in Java, where you may assume that expression has already been defined
11. reverse Polish notation expressions (e.g., 2 5 3 + * for 2 * (5 + 3))
12. floating-point numerals, e.g., 12.345
13. the LISP language, defined as parenthesized lists of identifiers, numerals, or lists
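As a model for items 6-13, here is a sketch of a grammar for item 12 (floating-point numerals), in conventional BNF where the dot is a terminal symbol:

    Numeral -> Digits . Digits
    Digits  -> Digit Digits | Digit
    Digit   -> 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

For example, 12.345 derives as Digits(1 2) . Digits(3 4 5).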

Decision-tree learning example data (outcome 5.2b). Below is data for five different problems (x1-x5) involving three attributes each, each with eight example cases.

         x1             x2             x3             x4             x5
   A1 A2 A3  y    A1 A2 A3  y    A1 A2 A3  y    A1 A2 A3  y    A1 A2 A3  y
1.  0  0  1  0     1  0  1  0     0  1  0  0     1  1  1  1     1  1  0  1
2.  1  1  0  0     1  0  1  0     1  0  0  1     0  0  1  1     1  1  1  1
3.  0  1  0  1     0  1  1  0     1  0  1  0     0  1  0  0     1  1  1  1
4.  1  1  0  1     0  0  1  0     1  0  1  0     0  1  0  0     1  1  1  1
5.  0  0  1  0     1  0  1  0     0  1  0  0     1  1  1  1     1  1  0  0
6.  1  0  1  0     0  1  0  0     1  1  1  1     1  1  0  0     1  0  0  1
7.  0  0  1  0     0  1  1  1     1  1  1  0     1  1  0  1     0  1  0  0
8.  1  0  0  1     0  1  0  1     0  0  1  0     0  1  1  1     1  1  1  0


Multiple-choice questions on Topic 6: Reinforcement learning


1. Interaction and intelligent behavior
1. Online search is necessary for (a) inference; (b) learning; (c) exploration of environment; (d) decisions; (e) none of these
2. Algorithms (a) compute functions; (b) provide services; (c) accomplish missions in multi-agent systems; (d) may execute indefinitely; (e) none of these
3. A feature of algorithmic computation is (a) alternation of input and output; (b) processing before input; (c) output before processing; (d) input, then processing, then output; (e) none of these
4. A feature of interactive computation is (a) alternation of input and output; (b) processing before input; (c) output before processing; (d) input, then processing, then output; (e) none of these
5. Interaction is distinguished from algorithmic computation by the presence of (a) finite input; (b) persistent state; (c) input; (d) processing; (e) none of these
6. A mutual causal effect between two agents occurs in all (a) interaction; (b) algorithms; (c) communication; (d) computing; (e) none of these
7. Synchrony entails (a) communication; (b) taking turns; (c) input; (d) autonomy; (e) none of these
8. Adaptation is required in ____ environments (a) static; (b) episodic; (c) dynamic persistent; (d) dynamic episodic; (e) none of these
9. Solutions to problems in interactive environments are often (a) inferential; (b) algorithmic; (c) adaptive; (d) entirely model-driven; (e) none of these
10. What kind of learning is required in dynamic and persistent environments? (a) supervised; (b) reinforcement; (c) connectionist; (d) training-intensive; (e) none of these
11. Dominant strategies and zero-sum are associated with ___ theory (a) game; (b) decision; (c) utility; (d) probability; (e) none of these
12. Hebbian learning uses the fact that (a) the world contains patterns; (b) patterns can be induced from sample data; (c) decision trees give feedback; (d) connections between neurons are strengthened when the neurons fire together; (e) facts are hard to argue with
13. Hebbian learning in neural nets is (a) abductive; (b) adaptive; (c) supervised; (d) symbol-based; (e) inferential
14. Learning is categorized as supervised or (a) dynamic; (b) inferential; (c) stochastic; (d) interactive; (e) set-theoretic
15. In what kind of environment may an agent's percepts depend on its previous actions? (a) episodic; (b) amnesic; (c) persistent; (d) one-state; (e) one-percept
16. What kind of environment may change autonomously with respect to an agent? (a) static; (b) dynamic; (c) one-state; (d) one-percept; (e) none of these
17. An adaptive learning agent learns from (a) training data; (b) supervision; (c) sample (percept/action) pairs; (d) rules; (e) actual percepts


2. POMDPs and reinforcement learning


1. Policy search occurs in (a) a state space; (b) a range of possible belief values; (c) a belief state space; (d) an environment; (e) none of these
2. Policy search is used with which type of problem? (a) deterministic fully observable; (b) stochastic fully observable; (c) stochastic partially observable; (d) all problems; (e) no problems
3. A policy is (a) a mapping from states to actions; (b) a planned action sequence; (c) an algorithm; (d) a mapping of percepts to actions; (e) none of these
4. In interactive environments, an agent requires (a) a reflex mapping; (b) an action sequence; (c) planning under uncertainty; (d) a policy; (e) none of these
5. Utility of a state is (a) reward obtained in that state; (b) expected long-term reward; (c) unmeasurable; (d) independent of reward; (e) none of these
6. Reward is (a) observable in advance; (b) a guide to utility; (c) always obtained in a delayed way; (d) determined by querying a knowledge base; (e) none of these
7. Value of information (a) is reward; (b) is utility; (c) determines a state's utility; (d) is part of the utility of a state-action pair; (e) none of these
8. A good policy maximizes (a) information obtained by an action; (b) reward obtained by an action; (c) utility of an action; (d) knowledge; (e) none of these
9. A reward function maps (a) states to action sequences; (b) states to actions; (c) states to reals; (d) (state, action) pairs to reals; (e) none of these
10. A policy maps (a) states to action sequences; (b) states to actions; (c) states to reals; (d) states to actions; (e) none of these
11. A value function maps (a) states to action sequences; (b) states to actions; (c) states to reals; (d) (state, action) pairs to reals; (e) none of these
12. A policy is a mapping of (a) natural numbers to words; (b) income to premium; (c) states to actions; (d) percepts to outputs; (e) problems to algorithms
13. Reinforcement learning is distinguished from supervised learning in that (a) RL has a teacher; (b) RL's environment is interactive; (c) training is increased; (d) it is connectionist; (e) none of these
14. The Markov property holds for a system if a system will go into a given state (a) deterministically; (b) with a probability that depends on all past history; (c) with a probability that depends on recent history; (d) under all conditions; (e) none of these
15. A set of hard problems in interactive environments are (a) state-space search; (b) one-player game; (c) planning action sequences; (d) POMDPs; (e) none of these
16. Belief state in solving POMDPs is (a) a set of states; (b) a fuzzy assertion; (c) a probability estimate that a state holds; (d) a probability distribution over all states; (e) none of these
17. A POMDP is a (a) stochastic differentiation problem; (b) stochastic decision problem; (c) deterministic process; (d) delayed payoff; (e) none of these
18. Reinforcement-learning agents (a) answer queries; (b) adapt to their environments; (c) operate in multi-agent systems; (d) seek to prove assertions; (e) use training to improve ability to act
19. Reinforcement learning is (a) model free; (b) model driven; (c) goal driven; (d) data driven; (e) none of these


20. Reinforcement is (a) immediate; (b) calculated; (c) utility;


(d) often delayed; (e) none of these
21. Reinforcement learning searches (a) a knowledge base;
(b) for a concept; (c) for a policy; (d) for an action sequence;
(e) none of these
22. Reinforcement learning is appropriate with (a) concept
training; (b) knowledge engineering; (c) inference;
(d) partially observable interactive environments; (e) none of
these
23. Q learning learns (a) reward states; (b) utilities of states;
(c) values of (state, action) pairs; (d) concepts; (e) none of
these

3. Robotics and embodied intelligence
1. The distinguishing feature of robotics is (a) rationality; (b) percepts; (c) actions; (d) physical actions in a physical environment; (e) humanlike behavior
2. Robust methods for robotics include (a) fine-motion planning; (b) stochastic reasoning; (c) expert knowledge; (d) state-space search; (e) supervised learning
3. In what branch of computing are agents often situated and is intelligence embodied? (a) batch processing; (b) user interfaces; (c) robotics; (d) natural-language processing; (e) none of these
4. In intelligent robotics, unlike other areas of AI, intelligence may be (a) rational; (b) Bayesian; (c) complex; (d) situated; (e) none of these
5. In intelligent robotics, unlike other areas of AI, intelligence may be (a) rational; (b) Bayesian; (c) complex; (d) embodied; (e) none of these
6. The environment of a mobile robot may be (a) knowledge based; (b) multi agent; (c) deterministic; (d) fully observable; (e) none of these
7. Percepts of a robot are obtained via (a) wireless internet; (b) queries; (c) sensors; (d) effectors; (e) none of these


8. Actions of a robot are taken via (a) wireless internet; (b) belief maintenance; (c) message passing; (d) effectors; (e) none of these
9. Image processing includes (a) tactile sensing; (b) belief networks; (c) knowledge-base updates; (d) object recognition; (e) none of these
10. Converting sensor input to an internal representation of the environment is (a) perception; (b) information gathering; (c) knowledge base querying; (d) navigation; (e) none of these
11. An alternative to modeling the environment for robots is (a) heuristics; (b) reactive control; (c) fuzzy logic; (d) belief maintenance; (e) none of these
12. A subsumption architecture for robots relies chiefly on (a) inference; (b) probabilistic reasoning; (c) interactions among layers of a system; (d) creating a representation of the world; (e) none of these
13. An architecture that uses interacting layers of a system to organize response is (a) Markov; (b) reflex; (c) goal driven; (d) subsumption; (e) none of these
14. The subsumption architecture (a) challenges the notion of explicitly centralized representation; (b) uses first-order logic; (c) supports supervised learning; (d) provides a heuristic for state-space search; (e) none of these
15. Subsumption architecture sees intelligent behavior as (a) following inference; (b) relying on case frames; (c) emerging from interaction with the environment; (d) dependent on scripts; (e) none of these
16. Robots in a multi-agent system are likely to be (a) situated and autonomous; (b) virtual and under central control; (c) situated and under central control; (d) virtual and autonomous; (e) none of these
17. Robots in 2013 perform at the level of (a) Lego toys; (b) insects; (c) mammals; (d) primates; (e) humans


Terminology for topic 6 (Adaptation)


adaptation
adaptive dynamic programming
amnesic environment
deterministic problem
dominant strategy
dynamic persistent environment
evolutionary computation
expected utility
exploitation
game theory
genetic algorithm
genetic operator

genetic programming
greedy agent
interactive computation
iterated prisoners' dilemma
Markov decision problem
model-free learning
Nash equilibrium
No Free Lunch theorem
online search
ontogenetic learning
partially observable Markov decision process

persistent environment
phylogenetic learning
physical environment
policy iteration
policy search
POMDP
reinforcement learning
reward
sociogenetic learning
static environment
temporal difference learning
value function
value iteration
value of information

Robotics
active sensor
effector
feature extraction
image processing
image segmentation
imaging sensor
information-gathering action
locality
localization
manipulator
mobile robot
object recognition

passive sensor
reactive control
robot
sensor
situatedness
subsumption architecture
tactile sensor
tracking

Problems to assess topic-6 outcomes


Topic objective:
Explain adaptive learning from
the environment
6.1 Identify problems that require interaction or adaptation

1. What special behavior is required of an intelligent agent in a dynamic environment?
2. In what sorts of environments is adaptation required? Why?
3. What are some characteristics of solutions to problems in
interactive environments?
4. Relate intelligence to interaction.
5. In interactive computation, what is the arrangement of
percepts and actions in time?
6. Distinguish interactive problems from algorithmic ones and
state a class of agents that solves each.
7. Why is a reflex agent unable to do well in a persistent
environment?
8. Describe three kinds of adaptive learning.
9. What are ontogenetic, sociogenetic, and phylogenetic
adaptation?
10. In what environments is the evolutionary computation
described in topic 5 (supervised learning) ineffective?
11. Distinguish a supervised learning agent from an adaptive
learning agent.
12. What kinds of neural nets can adapt to dynamic
environments?

6.2a Describe methods of reinforcement learning
1. What kind of learning is required in dynamic and persistent environments? Why?
2. Policy search is used with which type of problem? Explain.
3. What is a POMDP, and what AI approaches are used with POMDPs?
4. Describe the Markov property and the class of environments that have it.
5. Distinguish reinforcement learning from supervised learning.

6. What sort of learning is Q learning? What does it learn?
7. Describe policy search.
8. Distinguish value iteration from policy iteration.
9. What kind of learning are adaptive dynamic programming and temporal difference learning? Distinguish from supervised learning.
10. How is exploration used in reinforcement learning?
11. Are exploration and exploitation mutually reinforcing?
Explain.
12. In what environments is reinforcement learning
recommended?

6.2b Describe policy search methods in a sample environment
See the diagram below. The questions below correspond to increasingly challenging environments. Your agent environment is a two-dimensional grid (see above). The cells of the grid are states. The agent's policy is a mapping from states to actions. Possible actions are to move up, down, left, or right. Some states have positive or negative rewards associated with them.
(See also the following helpful passages in Russell-Norvig, 2010: p. 653 (value iteration); p. 657 (policy iteration); p. 663 (POMDP value iteration); pp. 832ff. (passive reinforcement learning); pp. 834ff. (adaptive dynamic programming); pp. 836ff. (temporal difference learning); pp. 844ff. (Q learning).)
1. Sketch a good policy for the environment above, using arrows to denote actions.
2. Describe a way to compute a good policy, by estimating utilities of states, assuming that rewards are known beforehand.
3. Given an existing policy in an accessible environment, describe a way to try to improve it.


4. Suppose that actions have their desired effects only with a certain probability; describe how policy search is affected by this constraint.
5. Given an environment that is observable only by exploring to obtain information about rewards, describe a method to search for a good policy.
6. Describe how temporal difference learning will operate in such an environment.
7. Describe how Q learning will operate in such an environment.
8. Suppose reward values of states, and accessibility of states, may change dynamically as policy search occurs. How does this shape a good learning strategy?
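For questions 6-7, a tabular Q-learning sketch in Python; the grid size, reward placement, and parameters are hypothetical, while the update Q(s,a) <- Q(s,a) + alpha(r + gamma max Q(s',b) - Q(s,a)) is the standard rule:

    import random

    # Tabular Q-learning on a small grid. States are (row, col);
    # actions move up/down/left/right, clipped at the grid edges.
    ROWS, COLS = 3, 4
    ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    REWARD = {(0, 3): 1.0, (1, 3): -1.0}  # hypothetical goal and pit
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

    Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in ACTIONS}

    def step(s, a):
        dr, dc = ACTIONS[a]
        return (min(max(s[0] + dr, 0), ROWS - 1), min(max(s[1] + dc, 0), COLS - 1))

    for episode in range(500):
        s = (2, 0)
        while s not in REWARD:
            # epsilon-greedy: explore sometimes, otherwise exploit Q
            if random.random() < EPSILON:
                a = random.choice(list(ACTIONS))
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = step(s, a)
            r = REWARD.get(s2, 0.0)
            best_next = 0.0 if s2 in REWARD else max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
              for s in {(r, c) for r in range(ROWS) for c in range(COLS)} - set(REWARD)}
    print(policy[(2, 0)])  # learned action at the start state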

6.3 Explain features of robotic systems (core)
1. In what branch of computing are agents situated and is intelligence embodied? Describe the interaction in this branch of AI.
2. Describe the environment of a robot and describe its ways of
operating.
3. In what kind of AI are sensors and effectors found? Describe
some.
4. In what ways may robotic systems be designed for
robustness?
5. Describe features of a robotic vision system.
6. Describe the subsumption architecture for robots.
7. How could the NAO robot be an adaptive learning agent?
What is perception? Is it disjoint from action?
8. Describe limitations of robotics development.
9. Describe robotic image processing.
10. Describe object recognition.
11. How does a robot know where things are?
12. Describe reactive control as an alternative to modeling of the
environment.


Multiple-choice questions on Topic 7: Distributed AI


1. Multi-agent systems and distributed AI
1. The most difficult environments are (a) dynamic and virtual; (b) persistent and physical; (c) static and physical; (d) dynamic and episodic; (e) none of these
2. Distributed AI is closely associated with (a) Bayesian inference; (b) reinforcement learning; (c) multi-agent systems; (d) natural language processing; (e) none of these
3. Multi-agent systems enable (a) knowledge representation; (b) inference; (c) planning; (d) distributed AI; (e) none of these
4. The most difficult environments are (a) persistent, dynamic, and virtual; (b) episodic, dynamic and physical; (c) episodic, static, and virtual; (d) persistent, dynamic, and physical; (e) none of these
5. Distributed AI often consists of (a) coordination; (b) inference; (c) belief; (d) planning; (e) knowledge
6. Coordination is used in (a) inference; (b) state-space search; (c) probabilistic reasoning; (d) coordination; (e) none of these
7. Sociogenetic adaptation is closely associated with (a) Bayesian inference; (b) reinforcement learning; (c) multi-agent systems; (d) natural language processing; (e) none of these
8. Learning in multi-agent systems is (a) supervised; (b) reinforcement; (c) sociogenetic; (d) connectionist; (e) none of these
9. Behavior is (a) planning; (b) inference; (c) action to obtain information; (d) action to change the environment; (e) none of these
10. Situated agents are often found in (a) expert systems; (b) state-space search; (c) multi-agent systems; (d) supervised learning systems; (e) none of these
11. Autonomous agents are often found in (a) expert systems; (b) state-space search; (c) multi-agent systems; (d) supervised learning systems; (e) none of these
12. In which branch of AI is a concurrent action list used? (a) expert systems; (b) state-space search; (c) multi-agent systems; (d) supervised learning systems; (e) none of these
13. A mission is characteristic of (a) an algorithm; (b) an interactive process; (c) a multi-agent system; (d) a parallel system; (e) none of these
14. The problem solved by a multi-agent system is called a(n) (a) algorithm; (b) function; (c) service; (d) mission; (e) process
15. Sequential-interactive agents offer a(n) (a) algorithm; (b) function; (c) service; (d) mission; (e) process
16. Systems featuring mobility of agents and locality of interaction often also feature (a) direct interaction; (b) no interaction; (c) indirect interaction; (d) semantic networks; (e) none of these
17. Interaction may be sequential or (a) algorithmic; (b) O(n); (c) multi-stream; (d) data-driven; (e) none of these
18. A mission is characteristic of (a) an algorithm; (b) an interactive process; (c) a multi-agent system; (d) a parallel system; (e) none of these
19. Interaction involving more than two entities is (a) algorithmic; (b) sequential; (c) serial; (d) multi-stream; (e) threaded

20. Multi-stream interaction is (a) sequential; (b) algorithmic;


(c) the composition of sequential interaction; (d) more than
the composition of its parts; (e) unknown

2. Decentralized and self-organizing systems
1. Self-organization is observed in (a) English grammar; (b) garden design; (c) social insects; (d) proofs; (e) the minimax algorithm
2. Self-organization is (a) algorithmic; (b) sequential-interactive; (c) decentralized; (d) centralized; (e) none of these
3. Self-organization is associated with ____ behavior (a) deductive; (b) reactive; (c) planning; (d) emergent; (e) none of these
4. Emergent behavior is associated with (a) self-organization; (b) planning; (c) reflex agents; (d) stochastic reasoning; (e) none of these
5. Decentralized intelligence is associated with (a) early AI research; (b) reinforcement learning; (c) distributed AI; (d) expert systems; (e) none of these
6. Distributed AI uses (a) natural language; (b) coordination; (c) heuristics; (d) policy search; (e) none of these
7. Emergent intelligence is (a) processing oriented; (b) representation oriented; (c) without representation; (d) single-agent; (e) none of these
8. Self-organization is (a) algorithmic; (b) sequential-interactive; (c) decentralized; (d) centralized; (e) none of these

3. Stigmergy
1. Indirect interaction is in contrast to (a) deduction; (b) distributed AI; (c) stigmergy; (d) anonymous coordination; (e) none of these
2. Stigmergy uses (a) indirect interaction; (b) language; (c) reasoning; (d) a knowledge base; (e) none of these
3. Indirect interaction features (a) anonymity; (b) synchronization; (c) use of predicate logic; (d) fuzzy reasoning; (e) none of these
4. Indirect interaction features (a) space decoupling; (b) synchronization; (c) use of predicate logic; (d) fuzzy reasoning; (e) none of these
5. Indirect interaction requires (a) mutual causality between entities that do not exchange messages; (b) message passing; (c) synchrony; (d) static I/O; (e) none of these
6. Indirect interaction is characteristic of systems featuring (a) mobility of agents and locality of interaction; (b) global interaction and mobility; (c) algorithmic problems; (d) knowledge-based inference; (e) none of these
7. Ant foraging by use of chemical trails is an example of (a) inference; (b) linguistic processing; (c) stigmergy; (d) evolution; (e) none of these
8. Stigmergy makes use of (a) the environment; (b) first-order logic; (c) adaptive learning; (d) Bayesian reasoning; (e) none of these
9. Coordination via the environment is (a) minimax; (b) stigmergy; (c) centralized; (d) a heuristic; (e) none of these


Terminology for topic 7 (Distributed AI)


anonymity, asynchrony, autonomy, behavior, concurrent action list, configuration space, decentralized system, distributed AI, emergent behavior, emergent intelligence, indirect interaction, joint plan, mobility, model-based approach, multi-agent system, multi-stream interaction, self-organizing system, social biology, sociogenetic adaptation, space decoupling, stigmergy

Problems to assess topic-7 outcomes


Topic objective: Explain the relation between distributed artificial intelligence and emergent behavior

7.1 Describe distributed AI (core)
1. In what branch of AI is coordination used? What is coordinated and how?
2. Describe distributed AI.
3. What is sociogenetic adaptation? Contrast it to other kinds of adaptation.
4. Describe an example of distributed AI or multi-agent systems.
5. What is a multi-agent system?
6. What are features of many multi-agent systems?
7. In which branch of AI is a concurrent action list used? How?
8. Describe an application of multi-agent systems.
9. In what kind of environment do agents in a multi-agent system operate? Explain.
10. In what branch of AI does multi-stream interaction occur? Explain.
11. What are methods and requirements of multi-agent planning?
12. What is an inherent feature of a multi-agent environment?
13. What is synchronization, and why is it a concern in multi-agent systems?
14. Describe concurrent action planning.
15. Contrast mission to service and function.
16. What are features of multi-stream interaction?
17. Describe swarm computing and coevolution.

7.2a Relate intelligence to self-organization and emergent behavior (core)
1. What is self-organization?
2. Is a single neuron intelligent? Explain how the brain implements distributed intelligence.
3. What is emergent behavior?
4. What is decentralized intelligence?
5. In what branch of AI does indirect interaction occur? Explain.
6. What is stigmergy? Relate it to intelligent systems.
7. Contrast indirect interaction to message passing.
8. Contrast network to hierarchical structure.

7.2b Apply multi-agent concepts in a computer-based solution design or simulation
1. See the grid environment pictured and described in Assignment 6. Consider a much bigger version of the problem, in which multiple agents explore a partially observable version of the environment and may communicate among themselves in some way. The agents must not only discover the reward state, but must also carry parts of the reward back to the starting point, a little bit carried at a time by an agent. Agents are autonomous and may not be directed or coordinated from a central place. Describe ways to solve this version of the problem, in which the policy problem consists of giving these autonomous mobile agents each a uniform set of rules of behavior, including actions pick-up and drop for bits of the reward. (A minimal rule-set sketch follows this list.)
2. Download and run the Game of Life. Comment on the complex behavior that results from the parallel execution of simple rules at each cell.
3. Install StarLogo (a downloadable Java app created at MIT) or NetLogo on your computer. Run several of the demo programs. Comment. See writings of M. Resnick, online.
4. Visit wolfram.com to explore. Write up results.
5. Explore robotic coordination as in RoboCup. Write up results.
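One possible shape of the uniform rule set asked for in problem 1 is sketched below, under stated assumptions: a toy 10x10 grid, a single reward cell, and random exploration standing in for pheromone-guided search. Each agent senses only its own cell and follows the same local rules; there is no central controller.

import random

# Toy decentralized foraging (cf. 7.2b, problem 1). The grid size, cell
# positions, and counts are invented for illustration. The two counters
# below are environment state (reward at the source, reward at home),
# not a central coordinator.
SIZE = 10
HOME, REWARD_CELL = (0, 0), (7, 8)
reward_left = 20                      # bits of reward still at the source
delivered = 0                         # bits carried back to HOME

class Agent:
    def __init__(self):
        self.pos = HOME
        self.carrying = False

    def sense_and_act(self):
        """Uniform local rules: pick up at the source, drop at home,
        otherwise move one step."""
        global reward_left, delivered
        if not self.carrying and self.pos == REWARD_CELL and reward_left > 0:
            reward_left -= 1
            self.carrying = True      # rule: pick-up
        elif self.carrying and self.pos == HOME:
            delivered += 1
            self.carrying = False     # rule: drop
        else:
            self.pos = self.move(HOME if self.carrying else None)

    def move(self, target):
        r, c = self.pos
        if target is not None:        # loaded agents head home greedily
            tr, tc = target
            r += (tr > r) - (tr < r)
            c += (tc > c) - (tc < c)
        else:                         # unloaded agents explore randomly
            dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            r = max(0, min(SIZE - 1, r + dr))
            c = max(0, min(SIZE - 1, c + dc))
        return (r, c)

agents = [Agent() for _ in range(8)]
steps = 0
while delivered < 20 and steps < 100000:
    for agent in agents:
        agent.sense_and_act()
    steps += 1
print(f'delivered {delivered} bits in {steps} steps')

Replacing the random walk with a pheromone field that loaded agents deposit on the way home, and that unloaded agents climb, would make the sketch a genuinely stigmergic solution of the kind topic 7 discusses.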


Multiple-choice questions on Topic 8: Philosophical challenges


1. Theories of mind
1. Dualism holds that (a) mind is not a different substance from body; (b) mind and body are separate; (c) what is mental dominates what is material; (d) every question has two valid answers; (e) life's value is in owning things
2. Materialism holds that (a) mind is not a different substance from body; (b) mind and body are separate; (c) what is mental dominates what is material; (d) every question has two valid answers; (e) life's value is in owning things
3. Idealism holds that (a) mind is not a different substance from body; (b) mind and body are separate; (c) what is mental dominates what is material; (d) every question has two valid answers; (e) life's value is in owning things
4. Ontology is the study of (a) the relationship of thought to matter; (b) what is; (c) how we know things; (d) experience; (e) none of these
5. Epistemology is the study of (a) the relationship of thought to matter; (b) what is; (c) how we know things; (d) experience; (e) none of these
6. Theory of mind is the study of (a) the relationship of thought to matter; (b) what is; (c) how we know things; (d) experience; (e) none of these
7. Phenomenology is the study of (a) the relationship of thought to matter; (b) what is; (c) how we know things; (d) experience; (e) none of these
8. Rationalism is a theory of (a) logic; (b) epistemology; (c) mathematics; (d) ethics; (e) ontology
9. Empiricism is a theory of (a) logic; (b) epistemology; (c) mathematics; (d) ethics; (e) ontology
10. What sees knowledge as obtained from mind? (a) empiricism; (b) rationalism; (c) relativism; (d) individualism; (e) communitarianism
11. What sees knowledge as obtained from the senses? (a) empiricism; (b) rationalism; (c) relativism; (d) individualism; (e) communitarianism

2. Weak and strong AI claims
1. The argument against strong AI based on phenomenology asserted that (a) a system of symbols written on paper can't have understanding; (b) a machine can't have the experience of thinking; (c) Gödel's theorem proves machines have limited capacity; (d) machines don't reference actual things in the world; (e) none of these
2. The Chinese Room argument asserted that (a) a system of symbols written on paper can't have understanding; (b) a machine can't have the experience of thinking; (c) Gödel's theorem proves machines have limited capacity; (d) machines don't reference actual things in the world; (e) none of these
3. The argument against strong AI based on intentionality asserted that (a) a system of symbols written on paper can't have understanding; (b) a machine can't have the experience of thinking; (c) Gödel's theorem proves machines have limited capacity; (d) machines don't reference actual things in the world; (e) none of these
4. The argument against strong AI that states that machines can't have the experience of thinking is based on (a) utilitarianism; (b) ontology; (c) ethics; (d) phenomenology; (e) epistemology
5. Weak AI is (a) pre-1980 research; (b) the claim that machines can play games; (c) the claim that machines can simulate intelligence; (d) the claim that machines can be intelligent; (e) none of these
6. Strong AI is (a) post-2000 research; (b) the claim that machines can play games; (c) the claim that machines can simulate intelligence; (d) the claim that machines can be intelligent; (e) none of these
7. The AI Hypothesis is that computers (a) are intelligent; (b) could soon be intelligent; (c) could in principle simulate intelligence; (d) could never be intelligent; (e) could in principle be intelligent
8. The Turing Test measures intelligence as (a) a machine's ability to detect a human's presence; (b) a human's ability to detect a machine's presence; (c) a human's inability to detect a machine's presence; (d) use of a Turing machine; (e) an IQ test for machines

3. Future agent architectures
1. Decision-theoretic metareasoning uses (a) utility theory; (b) theory of the value of information; (c) probability theory; (d) complexity theory; (e) none of these
2. A reflective agent architecture reasons about (a) the environment; (b) the state of the agent; (c) the states of other agents; (d) abstract environments; (e) probabilities
3. The best program achievable to solve a problem of adaptation to an environment has (a) rationality; (b) intelligence; (c) tractability; (d) bounded optimality; (e) none of these
4. Bounded rationality is a feature of (a) proofs; (b) AI programs; (c) problems; (d) learning; (e) none of these
5. Perfect rationality is (a) error-free deduction; (b) ability to prove any true assertion; (c) unconditional maximum utility; (d) maximum utility given computing resources; (e) none of these
6. Bounded optimality is (a) error-free deduction; (b) ability to prove any true assertion; (c) unconditional maximum utility; (d) maximum utility given computing resources; (e) none of these
7. Technological singularity occurs when intelligent machines (a) understand humans; (b) rebel against humans; (c) invent intelligent machines; (d) have civil rights; (e) are simpler and simpler

4. Future ethical issues
1. Joseph Weizenbaum warned that it was dangerous to think that a computer program could some day (a) correct exams; (b) provide effective psychotherapy; (c) be intelligent; (d) never be intelligent; (e) run for President


Terminology for topic 8: Philosophical considerations


anytime algorithm, artificial intelligence, bounded optimality, bounded rationality, consciousness, constructivism, dualism, empiricism, epistemology, experience, intentional state, meaning, metareasoning, mind, monism, perfect rationality, phenomenology, rationalism, reflective architecture, satisficing, strong AI, thinking, weak AI

Problems to assess topic-8 outcomes


Topic objective: Defend a theory of mind, relating it to ethical issues raised by artificial cognitive systems

8.1 Explain two theories of mind (core)
1. Distinguish ontology, epistemology, theory of mind, and phenomenology.
2. Describe your theory of mind and compare it to other theories.
3. Contrast rationalism and empiricism, and explain what approaches to AI would seem most consistent with each.
4. Contrast two views on whether mind and body are of separate substances.
5. Describe views for and against dualism.
6. Describe views for and against monism.
7. Contrast the operationalist view of mind with the dualist view.
8. Contrast the functionalist view of mind with the dualist view.
9. Contrast the views that the self is a physical or spiritual entity with the view that it is an abstraction.
10. Explain two theories of epistemology.

8.2 Evaluate the weak and strong AI theses
1. Distinguish between weak and strong AI.
2. Describe arguments for or against strong AI.
3. Describe arguments for or against weak AI.
4. Argue that intelligence is associated with (a) humanness; (b) computation or symbol manipulation.
5. Argue that human intelligence is (a) artificial; (b) not artificial.
6. Does AI deny the uniqueness of human thought? Defend or refute.
7. What would a computer program require to have consciousness?
8. How have human intelligence and AI been compared?
9. Defend or refute the claim that NAO can simulate intelligence.
(10-20) Defend or refute the claim that:
10. NAO can simulate intelligence.
11. NAO could be intelligent.
12. NAO is intelligent.
13. Siri can simulate intelligence.
14. Siri could be intelligent.
15. Siri is intelligent.
16. a goal-based agent may have or simulate intelligence.
17. a rational agent may have or simulate intelligence.
18. a learning agent may have or simulate intelligence.
19. an adaptive agent may have or simulate intelligence.
20. a multi-agent system may have or simulate intelligence.
21. Describe the AI hypothesis.
22. Describe the Turing test.
23. Describe the Chinese Room argument.
24. Discuss the claim that intelligence is rational adaptive behavior.

8.3 Explain bounded optimality or other proposed architectures
1. Distinguish bounded rationality from bounded optimality.
2. Define and justify bounded optimality.
3. What are some future prospects of AI that have been discussed in the literature?
4. What are the main obstacles to a perfect intelligence that instantly knows exactly what to do next? How are the obstacles overcome, according to some AI researchers?
5. Describe architectures for intelligent agents.
6. Is perfect rationality a realistic and adequate research goal?
7. Is calculative rationality a realistic and adequate research goal?
8. Is bounded rationality a realistic and adequate research goal?
9. Distinguish bounded rationality from bounded optimality.

8.4 Discuss ethical issues raised by future prospects for AI (core)
1. What are some ethical issues related to AI? What are your views on them?
2. Can AI exceed human intelligence? Explain.
3. Is AI tending to catch up with human intelligence? Explain.
(4-10) Describe ethical problems in the use of software to:
4. grade exams
5. select persons for criminal investigation
6. recommend criminal verdicts
7. sentence persons found guilty
8. make recommendations on acceptance for employment
9. make recommendations on acceptance for educational opportunity
10. target suspected combatants
11. What privacy issues are raised by the accumulation and data mining of massive databases?
12. What is the technological singularity and what problems does it raise?
13. May drones be made accountable for damages?
14. Do sentient robots have civil rights?
15. Can machines have ethical obligations?
16. Is bionics compatible with the notion of humanity?
17. Was Joseph Weizenbaum correct in the 1970s?


Study questions on course summary (multiple topics)


1. Describe some stages in the development of AI discussed in this course.
2. How do the different approaches discussed in this course implement intelligent behavior?
3. In the terms discussed in this class, contrast the notion of reasoning and inference with the notion of adaptation.
4. What are some promising areas of application of AI concepts? Give reasons and specifics.
5. What is artificial intelligence?
6. What are misconceptions about artificial intelligence you are aware of, and where do they come from?
7. Does artificial intelligence exist? If not, could it? Explain.
8. Contrast state-space search and policy search.
9. Describe different forms of rationality.
10. Describe a range of environments and the ways that intelligent agents behave in them.
11. Describe two or three ways in which artificial-intelligence research has been inspired by natural phenomena.
12. Describe ways to store, acquire, and maintain knowledge or belief.
13. Contrast different kinds of learning.
