
Outline

What are data, information, and knowledge?
AI vs. KBS
What is artificial intelligence?
Views of AI
History of AI
Applications of AI
Chapter one

AI Introduction
Objective
To understand AI and related concepts
(knowledge-based and intelligent systems)
Understand the components of an AI system
Get a feel for the application areas of AI
Get a feel for how scholars define AI
Briefly discuss the difference between expert
systems and other systems
Data, Information, and Knowledge
• What are data and information? Are they different
from knowledge?
Data: Unorganized and
unprocessed facts; static; a set of
discrete facts about events
Information: Aggregation of
data that makes decision making
easier
Knowledge is derived from
information in the same way
information is derived from data;
it is a person’s range of
information
What is Knowledge
Knowledge includes facts about real-world
entities and the relationships between them
 an understanding gained through experience
 familiarity with the way a task is performed
 an accumulation of facts, procedural rules, or
heuristics

Characteristics of Knowledge:
 It is voluminous in nature and requires proper
structuring.
 It may be incomplete and imprecise.
 It may keep on changing (dynamic).
Types (Categorization) of
Knowledge
Shallow (readily recalled) versus deep (acquired
through years of experience)
 deep knowledge is necessary to make decisions/solve
problems in complex situations
Procedural (repetitive, stepwise) versus
episodic (grouped by episodes or cases)
Explicit (already represented/documented) versus
tacit (embedded in the mind)
 up to 95% of information is preserved as tacit knowledge
Explicit and Tacit Knowledge

Explicit (knowing-that)
knowledge:
 knowledge codified and
digitized in books, documents,
reports, memos, etc.

Tacit (knowing-how)
knowledge:
 knowledge embedded in the
human mind through
experience and jobs
Knowledge base
A knowledge base is used to store facts and
rules.
In order to solve problems, the computer
needs an internal model of the world.
 This model contains, for example, the
description of relevant objects and the relations
between these objects.
 All information must be stored in such a way
that it is readily accessible.
 Various methods have been used for knowledge
representation (KR), such as logic, semantic networks,
frames, scripts, etc.
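As a minimal, hypothetical sketch of this idea, relevant objects and the relations between them can be stored as simple facts; the entities and relations below are invented for illustration, and a real KR scheme (logic, frames, etc.) would be far richer.

# Minimal sketch of a knowledge base of facts stored as triples.
# The example entities and relations are made up for illustration.
facts = {
    ("Abebe", "is_a", "man"),
    ("man", "is_a", "mortal"),
}

def ask(subject, relation, obj):
    """Check whether a fact is stored directly in the knowledge base."""
    return (subject, relation, obj) in facts

print(ask("Abebe", "is_a", "man"))     # True
print(ask("Abebe", "is_a", "mortal"))  # False until an inference step adds it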
Knowledge base systems (KBSs)
Deal with handling knowledge and ideas on a
computer.
 Emphasize the importance of knowledge.
 Use inference to solve problems on a computer.
 Knowledge-based systems are programs that
reason over extensive knowledge bases.
 Have the ability to learn, so that they can
obtain information from outside and use it
appropriately.
 The value of the system lies in its ability to make the
workings of the human mind understandable and
executable on a computer.
Attributes of KBS
Learn
 The more you use knowledge, the smarter they get and the
smarter you get, too
Improve with use
 Knowledge is enhanced rather than depleted when used
and they grow up instead of being used up
Anticipate
 Knowing what you want, KBS recommend what you want
next
Interactive
 There is two-way communication between you and KBS
Remember
 KBS record and recall past actions to develop a profile
Customize
 KBS offer unique configuration to your individual
specifications in real time at no additional cost
AI vs. KBS
A knowledge-based system is part of artificial
intelligence.
AI also requires extensive knowledge of the
subject at hand.
 An AI program should have a knowledge base.
 Knowledge representation is one of the most
important and most active areas in AI.
 AI programs should learn and
update their knowledge accordingly.
Intelligence
Intelligence is the capability of observing, learning,
remembering and reasoning.
Intelligence is a general mental capability that involves
the ability to reason, plan, solve problems, think
abstractly, comprehend ideas and language, and learn.
Intelligence draws on a variety of mental
processes, including memory, learning, perception,
decision-making, thinking, and reasoning.
The memory of an intelligent system is used to store
the knowledge base, which is key to the success of
artificially intelligent systems.
AI attempts to develop intelligent agents.
Characteristics of Intelligent systems
Use vast amounts of knowledge
Learn from experience and adapt to changing
environments
Interact with humans using language and speech
Respond in real time
Tolerate error and ambiguity in communication
Human Intelligence
How do people reason?
 They create categories
 They use specific rules (a small sketch follows below)
– if 'a' then 'b'
– if 'b' then 'c'
– therefore: a → b → c
 They use heuristics – "rules of thumb"
 They use past experience – "CASES"
- similarities between the current and previous cases
- store cases using key attributes
 They use "expectations"
 How does our brain work when we solve a problem?
 Do we think it over and suddenly find an answer?
 What do we do when solving a complicated factorization problem, a
puzzle or a mystery?
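A minimal, hypothetical sketch of the rule-chaining idea above: simple if-then rules are applied repeatedly until no new facts follow (the symbols a, b, c are placeholders).

# Minimal forward chaining over if-then rules (illustrative only).
rules = [("a", "b"), ("b", "c")]   # "if a then b", "if b then c"
facts = {"a"}                      # what we currently know

changed = True
while changed:                     # keep applying rules until nothing new is derived
    changed = False
    for condition, conclusion in rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))               # ['a', 'b', 'c']: from 'a' we conclude 'c'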
Artificial intelligence
AI can have two purposes. One is to use the power of
computers to augment human thinking, just as we use
motors to augment human or horse power.
 Robotics and expert systems are major branches of that.
The other is to use a computer's artificial intelligence to
understand how humans think, in a humanoid way.
– Herb Simon
Definition of AI
Artificial intelligence is the study and
development of intelligent machines and
software that can reason, learn, gather
knowledge, communicate, manipulate, and
perceive objects.
Artificial Intelligence

• AI is the branch of Computer Science that
deals with ways of:
– representing knowledge using symbols rather than
numeric values, and with rule-of-thumb methods of
processing information

• AI is the effort to develop computer-based
systems that behave like humans.
– Such systems should be able to learn natural
language,
– and be able to do text processing and to communicate
in natural language and speech
Views of AI
AI is founded on the premise that:
 workings of human mind can be explained in terms of
computation, and
 computers can do the right thing given correct
premises and reasoning rules.

Views of AI fall into four categories:

  Thinking humanly    Thinking rationally
  Acting humanly      Acting rationally
Views of defining AI
What is AI (Artificial Intelligence)?
 Different scholars define AI differently

  (A) AI as a system           (B) AI as a system
      that thinks humanly          that thinks rationally

  (C) AI as a system           (D) AI as a system
      that acts humanly            that acts rationally

 (A) and (B) are concerned with thought processes
 and reasoning; (C) and (D) are concerned with the
 behavior of agents.
Views of defining AI
 (A) AI as a system that thinks humanly and
 (C) AI as a system that acts humanly
 measure the success of AI in terms of human
 performance.

 (B) AI as a system that thinks rationally and
 (D) AI as a system that acts rationally
 measure the success of AI in terms of an ideal
 concept of intelligence (rationality).
Thinking humanly: The Cognitive
Modeling
Reasons like humans do
 Programs that behave like humans
Requires understanding of the internal activities of
the brain
 see how humans behave in certain situations
and see if you could make computers behave in
that same way.
Example: writing a program that plays chess.
 Instead of making the best possible chess-
playing program, you would make one that plays
chess the way people do.
Acting humanly: The Turing Test
Can machines act like human do? Can machines
behave intelligently?
Turing Test: Operational test for intelligent
behavior
 do experiments on the ability to achieve human-level
performance,
 Acting like humans requires AI programs to interact
with people
Suggested major components of AI: knowledge,
reasoning, language understanding, learning
Acting humanly: Turing Test
 Turing (1950), in his famous paper "Computing Machinery and
Intelligence":
 "Can machines think?" → "Can machines behave intelligently?"
 Operational test for intelligent behavior: the Imitation Game
 Predicted that by 2000, a machine might have a 30% chance
of fooling a person for 5 minutes
 Anticipated all major arguments against AI in the following 50
years
 Active areas of research to achieve this: machine learning,
NLP, computer vision, etc.
Computer expected
capabilities(Turing Test)
Natural language processing:- to enable it
to communicate successfully in English.
 knowledge representation:- to store what
it knows or hears;
 Automated reasoning:- to use the stored
information to answer questions and to draw
new conclusions;
Machine learning to adapt to new
circumstances and to detect and extrapolate
patterns.
Thinking Rationally: The Laws of
Thought
A system is rational if it thinks/does the right thing
through correct reasoning.

Aristotle provided the correct arguments/
thought structures that always give correct
conclusions given correct premises.
 Abebe is a man; all men are mortal; therefore Abebe is
mortal
 These Laws of thought governed the operation of the
mind and initiated the field of Logic.
Acting rationally: The rational agent
Doing the right thing so as to achieve one’s
goal, given one’s beliefs.
AI is the study and construction of rational
agents (an agent that perceives and acts)
Rational action requires the ability to represent
knowledge and reason with it so as to reach
good decisions.
 Learning for better understanding of how the
world works
History of AI
Formally initiated in 1956 and the name AI was
coined by John McCarthy.
The advent of general purpose computers
provided a vehicle for creating artificially intelligent
entities.
Used for solving general-purpose problems

Which one is preferred?
 General-purpose problem-solving systems
 Domain-specific systems
History of AI
Development of knowledge-based systems: the
key to power
 Performance of general-purpose problem solving
methods is weak for many complex domains.
 Use knowledge more suited to make better reasoning
in narrow areas of expertise (like human experts do).
 Early knowledge intensive systems include:
• The Dendral program (1969): solved the problem of
inferring molecular structure (C6H13NO2).
• MYCIN (1976): used for medical diagnosis.
• etc.
History of AI
Shifts from procedural to declarative
programming paradigm.
 Rather than telling the computer how to compute a
solution, a program consists of a knowledge base of
facts and relationships.
 Rather than running a program to obtain a solution,
the user asks a question and the system searches
through the KB to determine the answer.
Simulate human mind and learning behavior
(Neural Network, Belief Network, Hidden Markov
Models, etc. )
Applications of AI and KBS
Solving problems that required thinking by
humans:
Playing games (chess, checker, cards, ...)
Proving theorems (mathematical theorems, laws
of physics, …)
Classification of text (Politics, Economic, sports,
etc,)
Writing story and poems; solving puzzles
Giving advice in medicine, law, … (diagnosing
diseases, consultation, …)
How to make computers act like
humans?
The following sub-fields have emerged:
 Natural Language processing (enable computers
communicate in human language, English, Amharic, ..)
 Knowledge representation (schemes to store
information, both facts and inferences, before and during
interrogation)
 Automated reasoning (use stored information to answer
questions and to draw new conclusions)
 Machine learning (adapt to new circumstances and accumulate
knowledge)
 Computer vision (recognize objects based on patterns
in the same way as the human visual system does)
 Robotics (produce mechanical device capable of controlled
motion; which enable computers to see, hear & take actions)

 Is AI equal to human intelligence?
Programming paradigms
Each programming paradigm consists of two
aspects:
 Methods for organizing data/knowledge,
 Methods for controlling the flow of computation

Traditional paradigm:
 Programs = data structure + algorithm

AI programming paradigm:
 Programs = knowledge structure + inference mechanism
Basic Kinds of Systems
 A system is a set of components that interact
with each other in a logical way to achieve specific
goals.

 There are different types of systems based on
the services, the user type, and the method of
operation.

 Some of these systems include:
– Information Systems
– Database Management System
– Information Retrieval System
– Expert System
Expert system
An expert system, also known as a knowledge-based
system, is
 a computer program that contains some of the subject-
specific (domain-specific) knowledge of one or more
human experts.

a system with two basic components:
• Knowledge base, which models the knowledge of an
expert in the area under consideration
• Inference engine

When used by a non-expert user, it can serve as an
expert that guides the user to make an expert decision
(doctors, engineers, lawyers, etc.).
Expert system
 Examples:
• Dendral, MYCIN, PUFF, ELIZA, BTDS, etc.

Dendral expert system:
 Its primary aim was to aid organic chemists with the
identification of unknown organic molecules by
analyzing information from mass spectrometry graphs
and knowledge of chemistry
Expert system
MYCIN:
• Written in Lisp in the early 1970s and derived from the Dendral
expert system
• was designed to diagnose infectious blood diseases and
recommend antibiotics, with the dosage adjusted for
patient's body weight
• It would query the physician/patient running the
program via a long series of simple yes/no or textual
questions.
• At the end, it provided
• a list of bacteria that could be the cause, ranked from high to low
based on the probability of each diagnosis,
• its confidence in each diagnosis's probability, and
• the reasoning behind each diagnosis
• It had around 600 rules
Expert system
PUFF:
 PUFF can diagnose the presence and severity of lung
disease and produce reports for the patient's file

 Is an expert system that interprets lung function test
data and has become a working tool in the pulmonary
physiology labs of a large number of hospitals
Expert system
ELIZA :
• ELIZA is a very well-known artificial intelligence
program designed to emulate a Rogerian
psychotherapist.
• The basic element of Carl Rogers' approach to therapy
was to have a more personal relationship with the
patient, to help the patient reach a state of realization
that they can help themselves
• ELIZA was showcased for a number of years at the
MIT AI Laboratory.
• ELIZA has no reasoning ability and cannot learn
• ELIZA only appears to understand because "she" uses
canned responses based on keywords, as well as string
substitution
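A rough, hypothetical sketch of the keyword-and-substitution idea; the patterns here are invented, and the real ELIZA used a much richer script.

# Toy ELIZA-style responder: keyword matching plus string substitution.
import re

rules = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),          # fallback canned response
]

def respond(sentence):
    for pattern, template in rules:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I feel sad"))       # Why do you feel sad?
print(respond("I am tired"))       # How long have you been tired?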
THE DIFFERENCE BETWEEN HUMAN BEINGS AND
ARTIFICIALLY INTELLIGENT SYSTEMS

Human Brain
 Natural device
 Self-willed and creative
 Basic unit – neuron
 Storage device – electrochemical
 Low crunching
 Advanced detective reasoning
 Lower speed
 Emotion-driven
 Fuzzy logic

Computers
 Non-natural device
 Limited creativity
 Basic unit – RAM
 Storage device – electromechanical
 Limited volume
 High crunching
What computers can do better
Numeric computations
Information storage
Repetitive operations
How does AI differ from human intelligence?

Successful AI systems are neither artificial
nor intelligent
Successful AI is based on:
 Human expertise
 Knowledge
 Selected reasoning patterns
Problems of AI:
The following points are taken as drawbacks of
Artificially Intelligent systems:
 AI systems do not come up with new and novel
solutions
 Existing AI systems try to reproduce the
expertise of humans, but do not behave like
human experts
 Lack of common sense
 Reasoning: human intelligence performs
better in this respect
Chapter Two
Intelligent Agent
Objective:
Define what an agent is in general
Give an idea of agents, agent functions, agent
programs and architectures, environments, percepts,
sensors, and actuators (effectors)
Give an idea of how an agent should act
How to measure agent success
Rational agents, autonomous agents
Types of environments
Types of agents

44
What is an agent?
 An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon
the environment through effectors
 A human agent has eyes, ears, and other
organs as sensors; and hands, legs, mouth, and
other organs as effectors
 A robotic agent has cameras, sound recorders, and
infrared range finders as sensors; and various
motors as effectors.

45
Agents and environments

The agent function maps from percept histories to
actions:

    f: P* → A

The agent program runs on the physical
architecture to produce f:

    agent = architecture + program

46
Agent program vs agent
function
Notice the difference between the agent
program, which takes the current percept as
input, and the agent function, which takes the
entire percept history.
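A minimal, hypothetical Python sketch of this distinction: the agent program receives one percept at a time and keeps its own history, so that program plus architecture together implement the function f from percept sequences to actions. The rule used here is a trivial placeholder.

# Sketch: agent program vs. agent function (names and rule are illustrative).
percept_history = []          # the program's own record of P*

def agent_program(percept):
    """Takes the CURRENT percept, appends it to the history, returns an action."""
    percept_history.append(percept)
    # f: P* -> A, here reduced to a trivial rule over the latest percept
    return "Suck" if percept_history[-1] == "Dirty" else "Right"

print(agent_program("Clean"))  # Right
print(agent_program("Dirty"))  # Suck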
Ideal Example of Agent
Vacuum-cleaner world

 Percepts: location and contents
   e.g., [A, Dirty]
 Actions: Left, Right, Suck

48
Cont…
Percept sequence                          Action

[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck
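A small, hypothetical Python sketch of how such a percept-sequence table can drive the agent (only the rows above are included); this previews the table-lookup agent discussed later in the chapter.

# Table-driven vacuum agent: look up the whole percept sequence seen so far.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... a complete table would need an entry for every possible sequence
}

percepts = []

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")   # NoOp if the sequence is missing

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("A", "Dirty")))  # Suck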
Performance measure
A performance measure embodies the
criterion for success of an agent's
behavior.
When an agent is plunked down in an
environment, it generates a sequence of
actions according to the percepts it
receives.
If the sequence is desirable, then the
agent has performed well.
Obviously, there is not one fixed measure
suitable for all agents.
 Performance measure (how?)
◦ Subjective measure using the agent itself
 How happy is the agent at the end of the action?
 The agent answers based on its own opinion
 Some agents are unable to answer, some delude
themselves, some overestimate and some underestimate their
success
 Therefore, a subjective measure is not a good approach.
 An objective measure imposed by some authority is an
alternative

51
 Objective Measure
◦ Needs standard to measure success
◦ Provides quantitative value of success measure of agent
◦ Involves factors that affect performance and a weight for each
factor
E.g., performance measure of a vacuum-cleaner agent could
be
 amount of dirt cleaned up,
 amount of time taken,
 amount of electricity consumed,
 amount of noise generated, etc.
The time to measure performance is also important for
success.
It may include knowing starting time, finishing time, duration
of job, etc
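A hypothetical sketch of such an objective measure for the vacuum-cleaner agent, combining the factors above with assumed weights (the numbers are invented for illustration).

# Weighted objective performance measure for a vacuum-cleaner agent.
# Factor weights are assumptions chosen only for illustration.
weights = {"dirt_cleaned": 10.0, "time_taken": -1.0,
           "electricity_used": -2.0, "noise_made": -0.5}

def performance(dirt_cleaned, time_taken, electricity_used, noise_made):
    """Higher is better: reward dirt cleaned, penalize time, energy, and noise."""
    factors = {"dirt_cleaned": dirt_cleaned, "time_taken": time_taken,
               "electricity_used": electricity_used, "noise_made": noise_made}
    return sum(weights[name] * value for name, value in factors.items())

print(performance(dirt_cleaned=5, time_taken=20, electricity_used=3, noise_made=4))  # 22.0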

52
 Omniscience versus rational agents
 An omniscient agent is distinct from a rational agent
 An omniscient agent is an agent that knows the actual
outcome of its actions.
 A rational agent is an agent that tries to achieve the most
success from its decisions
 A rational agent can make a mistake because of
unpredictable factors at the time of making a decision.
 An omniscient agent that acts and thinks rationally never
makes a mistake
 An omniscient agent is an ideal; it is not attainable in the
real world
 Agents can perform actions in order to modify future
percepts so as to obtain useful information
(information gathering, exploration)

53
 Factors to measure rationality of agents
1. The percept sequence perceived so far (do we have the entire
history of how the world evolved or not?)
2. The set of actions that the agent can perform (agents
designed to do the same job with different action sets will
have different performance)
3. The performance measure (is it subjective or objective? What
are the factors and their weights?)
4. The agent's knowledge about the environment (what kind of
sensors does the agent have? Does the agent know everything
about the environment or not?)
 This leads to the concept of Ideal Rational Agent

54
 Ideal rational agent

 For each possible percept sequence, an ideal rational agent
should do whatever action is expected to maximize its
performance measure, on the basis of the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
 Implementing an ideal rational agent requires perfection
 In real situations such an agent is difficult to achieve
 Why do car accidents happen? Because drivers are not
perfect agents

55
 Autonomy
◦ An agent is autonomous if its
behavior is determined by its own
experience (with the ability to learn and
adapt)
◦ An agent lacks autonomy if its
actions are based completely on built-
in knowledge

56
Cont.
 Example: student grade decider agent:
• Knowledge base given: rules for converting
numeric grades to letter grades
• Case 1: the agent always follows the rules (lacks
autonomy)
• Case 2: the agent modifies the rules by learning
exceptions from the knowledge base as well as the
grade distribution (autonomous)
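A hypothetical sketch of the two cases; the grade boundaries and the learned exception are invented for illustration.

# Case 1: built-in rule only (no autonomy).
def grade_by_rule(score):
    return "A" if score >= 90 else "B" if score >= 80 else "C"

# Case 2: the agent learns exceptions from observed cases and prefers them.
learned_exceptions = {}            # score -> letter grade observed in practice

def grade_with_learning(score):
    return learned_exceptions.get(score, grade_by_rule(score))

learned_exceptions[89] = "A"       # e.g., an exception learned from the grade distribution
print(grade_by_rule(89))           # B  (always follows the rule)
print(grade_with_learning(89))     # A  (adapted from experience)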

57
Structure of Intelligent Agent
 The structure of an AI agent refers to the
design of the intelligent agent program
(the function that implements the agent
mapping from percepts to actions) that
will run on some sort of computing
device called the architecture

58
Cont.
The design of an intelligent agent needs prior knowledge of:
the Performance measure or Goal the agent is supposed
to achieve,
what kind of Environment it operates in,
what kind of Actuators it has (what are the
possible Actions),
what kind of Sensors the agent has (what are the
possible Percepts).
Performance measure, Environment, Actuators,
Sensors are abbreviated as PEAS
Percepts, Actions, Goal, Environment are
abbreviated as PAGE
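As a small illustrative sketch, a PEAS description can be written down as a simple record; the vacuum-cleaner values below are assumptions used only for illustration.

# PEAS description as a simple record (illustrative).
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

vacuum = PEAS(
    performance=["dirt cleaned", "time taken", "electricity used"],
    environment=["squares A and B", "dirt"],
    actuators=["Left", "Right", "Suck"],
    sensors=["location sensor", "dirt sensor"],
)
print(vacuum.actuators)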

59
Examples of agents structure and
sample PEAS
Agent: automated taxi driver:
Environment: Roads, traffic, pedestrians,
customers
Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard
Actuators: Steering wheel, accelerator, brake,
signal, horn
Performance measure: Safe, fast, legal,
comfortable trip, maximize profits

60
Cont.
 Agent: Medical diagnosis system
◦ Environment: Patient, hospital, physician, nurses,

◦ Sensors: Keyboard (percept can be symptoms,
findings, patient's answers)
◦ Actuators: Screen display (action can be questions,
tests, diagnoses, treatments, referrals)
◦ Performance measure: Healthy patient, minimize
costs, lawsuits

61
Examples of agents structure and sample PEAS
 Agent: Interactive English tutor
◦ Environment: Set of students
◦ Sensors: Keyboard (typed words)
◦ Actuators: Screen display (exercises, suggestions,
corrections)
◦ Performance measure: Maximize student's score on
test

 Agent: Part picking robot


◦ Environment: Conveyor belt with parts
◦ Sensors: pixels of varying intensity
◦ Actuators: pickup parts and sort into bins
◦ Performance measure: place parts in correct bins

62
Agent programs
An agent is completely specified by the agent
function that maps percept sequences into actions
Aim: find a way to implement the rational agent
function concisely
Skeleton of the Agent
function SKELETON-AGENT(percept) returns action
  static: memory, the agent's memory of the world

  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
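A rough Python rendering of this skeleton; UPDATE-MEMORY and CHOOSE-BEST-ACTION are placeholder stand-ins that a concrete agent would fill in.

# Python sketch of the agent skeleton above (placeholder implementations).
memory = []                                  # the agent's memory of the world

def update_memory(memory, item):
    memory.append(item)                      # placeholder: just record everything
    return memory

def choose_best_action(memory):
    return "NoOp"                            # placeholder decision

def skeleton_agent(percept):
    global memory
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return action

print(skeleton_agent("some percept"))        # NoOp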

63
Table-lookup agent
A table-lookup agent stores all
percept-sequence/action pairs in
a table.
For each percept sequence, this type of agent
searches for the matching entry and
returns the corresponding action.
Table lookup cannot be the right
option for implementing a successful agent.
Why?

64
Cont.
Drawbacks:
Huge table
Take a long time to build the table
No autonomy
Even with learning, need a long
time to learn the table entries

65
Agent types
 Based on the memory of the agent and the
way the agent takes actions, we can divide
agents into five basic types.
 These are (in increasing order
of generality):
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agent

66
Cont.
Notation of model:
 Rectangles: used to represent the
current internal state of the agent
decision process
 Ovals: used to represent the
background information used in
the process
67
Simple reflex agents

68
Simple reflex agent function prototype

function SIMPLE_REFLEX_AGENT(percept) returns action
  static: rules, a set of condition–action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

69
Simple reflex agents example
Consider an artificial robot that stands at the center of Meskel
Square (the environment)
 The agent has a camera and a microphone (sensors)
 If the agent perceives a sound of very high frequency (say,
above 20 kHz), then it flies up into the sky as far as
possible
 If the agent perceives an image that looks like a car, it runs
away in the forward direction
 Otherwise it just turns in a random direction
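A hypothetical Python sketch of this reflex agent, matching the condition-action rules above; the percept format and the 20 kHz threshold handling are assumptions.

# Simple reflex agent for the Meskel Square robot (illustrative).
import random

def interpret_input(percept):
    """Percept is assumed to be a dict like {'sound_khz': 25, 'image': 'car'}."""
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    if state.get("sound_khz", 0) > 20:      # rule 1: very high-frequency sound
        return "fly up"
    if state.get("image") == "car":         # rule 2: car-like image
        return "run forward"
    return random.choice(["turn left", "turn right"])   # default rule

print(simple_reflex_agent({"sound_khz": 25}))   # fly up
print(simple_reflex_agent({"image": "car"}))    # run forward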

70
Model-based reflex agents (also called a
reflex agent with internal state)

71
Model-based reflex agents

function MODEL_BASED_AGENT(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition–action rules

  state ← UPDATE_STATE(state, percept)
  rule ← RULE_MATCH(state, rules)
  action ← RULE_ACTION[rule]
  state ← UPDATE_STATE(state, action)
  return action

72
Goal-based agents

73
Goal-based agents structure
 function GOAL_BASED_AGENT(percept) returns action
   static: state, a description of the current world state
           goal, a description of the goal to achieve, may be in
                 terms of a state

   state ← UPDATE_STATE(state, percept)
   actionSet ← POSSIBLE_ACTIONS(state)
   action ← ACTION_THAT_LEADS_TO_GOAL(actionSet)
   state ← UPDATE_STATE(state, action)
   return action

74
Utility-based agents

75
Utility-based agents structure
 function UTILITY_BASED_AGENT(percept) returns action
   static: state, a description of the current world state
           goal, a description of the goal to achieve, may be in
                 terms of a state

   state ← UPDATE_STATE(state, percept)
   actionSet ← POSSIBLE_ACTIONS(state)
   action ← BEST_ACTION(actionSet)
   state ← UPDATE_STATE(state, action)
   return action
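A hypothetical sketch of the difference this makes: a goal-based agent accepts any action that reaches the goal, while a utility-based agent picks the action with the highest utility. The actions and utility values below are invented.

# Goal-based vs. utility-based action selection (illustrative numbers).
action_set = {"walk": 0.4, "bus": 0.7, "taxi": 0.9}   # action -> utility of outcome

def goal_based_choice(actions, reaches_goal):
    """Return any action that achieves the goal."""
    return next(a for a in actions if reaches_goal(a))

def utility_based_choice(actions):
    """Return the action with the highest expected utility."""
    return max(actions, key=actions.get)

print(goal_based_choice(action_set, reaches_goal=lambda a: True))  # e.g. 'walk'
print(utility_based_choice(action_set))                            # 'taxi'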

76
Learning agents

77
Cont.
Learning agents are not really an
alternative agent type to those described
above.
All of the above types can be learning
agents
 The performance element can be replaced
with any of the four types described above.

78
Cont.
The Learning Element is responsible for
suggesting improvements to any part of
the performance element.
 it could suggest an improved condition-
action rule for a simple reflex agent
 it could suggest a modification to the knowledge of
how the world evolves in a model-based
agent.

79
Cont.
 The input to the learning element comes from
the Critic.
 analyses incoming percepts and decides if
the actions of the agent have been good or
not.
 It will use an external performance standard.
For example, in a chess-playing program, the
Critic will receive a percept and notice that
the opponent has been checkmated.
 It is the performance standard that tells it that
this is a good thing.
80
Cont.
 Problem Generator is responsible for
suggesting actions that will result in new
knowledge about the world being acquired.
 actions may not lead to any goals being
achieved in the short term,
 but they may result in percepts that the learning
element can use to update the performance
element. For example, the taxi-driving system
may suggest testing the brakes in wet
conditions, so that the part of the
performance element that deals with “what
my actions do” can be updated.

81
Types of Environment
 Based on the portion of the environment observable
Fully observable: An agent's sensors give it access to the
complete state of the environment at each point in time.
(chess vs. driving)
Partially observable
Fully unobservable
 Based on the effect of the agent action
 Deterministic : The next state of the environment is
completely determined by the current state and the action
executed by the agent.
 Strategic: If the environment is deterministic except for the
actions of other agents, then the environment is strategic
 Stochastic or probabilistic

82
Types of Environment
 Based on the number of agents involved
Single agent: a single agent operating by itself in an
environment.
Multi-agent: multiple agents are involved in the
environment

 Based on the state, action, and percept space pattern
Discrete: a limited number of distinct, clearly defined states,
percepts, and actions.
Continuous: states, percepts, and actions are continuously
changing variables
Note: one or more of them can be discrete or continuous

83
Types of Environment cont …
 Based on the effect of time
Static: The environment is unchanged while an agent is
deliberating.
Dynamic: The environment can change while an agent is
deliberating.
Semi-dynamic: The environment is semi-dynamic if the
environment itself does not change with the passage of time
but the agent's performance score does.
 Based on loosely dependent sub-objectives
Episodic: The agent's experience is divided into atomic
"episodes" (each episode consists of the agent perceiving and
then performing a single action), and the choice of action in
each episode depends only on the episode itself.
Sequential: The agent's experience is not divided into
episodes; the current decision can affect all future decisions.
84
Environment types example

                    Chess with    Chess without    Taxi driving
                    a clock       a clock
Fully observable    Yes           Yes              No
Deterministic       Strategic     Strategic        No
Episodic            No            No               No
Static              Semi          Yes              No
Discrete            Yes           Yes              No
Single agent        No            No               No

 The environment type largely determines the agent design
 The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, and multi-agent

85
