
UNIT-1

INTRODUCTION TO AI
&
INTELLIGENT AGENTS
WHAT IS AI?
Systems that think like humans
• "The exciting new effort to make computers
think . . . machines with minds, in the full and
literal sense." (Haugeland, 1985)
• "[The automation of] activities that we
associate with human thinking, activities such
as decision-making, problem solving, learning
. . ." (Bellman, 1978)
Systems that act like humans
• "The art of creating machines that perform
functions that require intelligence when
performed by people." (Kurzweil, 1990)
• "The study of how to make computers do
things at which, at the moment, people are
better." (Rich and Knight, 1991)
Systems that think rationally
• "The study of mental faculties through the use
of computational models." (Charniak and
McDermott, 1985)
• "The study of the computations that make it
possible to perceive, reason, and act."
(Winston, 1992)
Systems that act rationally
• "Computational Intelligence is the study of the
design of intelligent agents." (Poole et al.,
1998)
• "AI . . .is concerned with intelligent behaviour
in artifacts." (Nilsson, 1998)
Acting Humanly: The Turing Test
approach
• To pass the test, the computer would need to
possess the following capabilities
– Natural language processing
– Knowledge representation
– Automated reasoning
– Machine learning
– Computer vision
– Robotics
Thinking humanly: The cognitive
modeling approach
• Think like a human, three ways to test
– Introspection-trying to catch our own thoughts as
they go by
– Psychological experiments-observing a person in
action
– Brain imaging-observing the brain in action.
• Cognitive science- brings together computer models
from AI and experimental techniques from psychology
Thinking rationally: The “laws of
thought” approach
• Building programs for intelligent systems on top of
formal logic.
• Taking informal knowledge and stating it in
formal terms, even when that knowledge is uncertain.
• There is a difference between solving a problem "in
principle" and solving it "in practice".
Acting rationally: The rational agent
approach
• Agent-something that acts
• Rational agent- acts so as to achieve the best
outcome (or, under uncertainty, the best expected
outcome).
• Limited rationality- acting appropriately when
there is not enough time to do all the
computations.
AGENTS AND ENVIRONMENT
• An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators.
• Human- eyes, ears and other organs for
sensors and hands, legs, vocal tract for
actuators.
• Robots- cameras and infrared range finders for
sensors and various motors for actuators.
• Percept-input to agent at any given instant.
• Percept sequence- complete history of
everything the agent has ever perceived.
• The action of an agent depends on the history
of percept sequence, not on anything it hasn't
perceived.
• Agent function- maps any given percept sequence
to an action.
• Construct a table by trying out all possible
percept sequences and recording which
actions the agent does in response.
• Agent function for an agent will be
implemented by an agent program.
• Agent function – abstract mathematical
description.
• Agent program – a concrete implementation.
• Consider the vacuum cleaner world.
• World has just two squares A and B.
• Vacuum agent perceives which square it is in
and whether there is dirt in it.
• A partial tabulation of this agent function is
given below.
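• A minimal Python sketch of such a tabulation- the
(location, status) percept representation is an
assumption made here for illustration:

    # Partial tabulation of the vacuum-world agent function.
    # Each percept is a (location, status) pair; the table maps
    # a whole percept sequence to an action.
    table = {
        (('A', 'Clean'),): 'Right',
        (('A', 'Dirty'),): 'Suck',
        (('B', 'Clean'),): 'Left',
        (('B', 'Dirty'),): 'Suck',
        (('A', 'Clean'), ('A', 'Dirty')): 'Suck',
        # ...and so on for longer percept sequences
    }

    print(table[(('A', 'Dirty'),)])   # -> Suck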
The Concept of Rationality
• A rational agent is one that does the right thing.
• The agent generates a sequence of actions according
to the percepts it receives; this sequence of actions
causes the environment to go through a sequence of
states.
• If the sequence is desirable the agent has performed
well.
• Desirability is captured by a performance measure.
• The performance measure is stated in terms of
environment states, not agent states.
• If we measured in agent states, an agent could
achieve "perfect rationality" merely by convincing
itself that its performance was perfect.
• There is no fixed performance measure for all
tasks, so the designer must devise one
appropriate to the scenario.
• For example, the vacuum cleaner agent.
– One proposal is the amount of dirt cleaned up in a
given period of time.
– An alternative is having a clean floor.
– It is better to design performance measures according
to what one actually wants in the environment
rather than according to how one thinks the agent
should behave.
Rationality
• Depends on
– The performance measure that defines the criterion
of success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
• Definition of rational agent:
– For each possible percept sequence, a rational agent
should select an action that is expected to maximize
its performance measure, given the evidence
provided by the percept sequence and whatever built-
in knowledge the agent has.
• Example: (vacuum cleaner agent)
– Performance measure is one point for each clean
square at each time step.
– The geography of the environment is known; Left and
Right move the agent one square, except when this
would take it outside the environment.
– Available actions are Left, Right and Suck.
– The agent correctly perceives its location and whether
that location contains dirt.
• Under these conditions the agent is rational.
• Under other conditions it can be irrational: once all
the dirt is cleaned up, the agent will oscillate
needlessly; it would do better to do nothing once all
squares are clean, checking and re-cleaning
occasionally in case dirt reappears.
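• As a sketch of this measure in Python- the world
history here is an assumed list of environment states,
one per time step:

    # One point for each clean square at each time step.
    def performance(history):
        return sum(status == 'Clean'
                   for state in history
                   for status in state.values())

    history = [{'A': 'Dirty', 'B': 'Dirty'},   # 0 points
               {'A': 'Clean', 'B': 'Dirty'},   # 1 point
               {'A': 'Clean', 'B': 'Clean'}]   # 2 points
    print(performance(history))                # -> 3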
Omniscience, learning and autonomy
• An omniscient agent knows the actual
outcome of its actions and can act accordingly
which is impossible in reality.
• Rationality maximizes expected performance,
while perfection maximizes actual
performance.
• Information gathering- performing actions in
order to modify future percepts; exploration is
one form of it.
• Rational agent must not only gather information but
also learn as much as possible from what it perceives.
• Initial configuration could reflect some prior
knowledge of the environment but as the agent gains
experience this may be modified and augmented.
• We say that an agent lacks autonomy to the extent
that it relies on the prior knowledge of its designer
rather than on its own percepts.
• To be autonomous, an agent should learn in order to
compensate for partial or incorrect prior knowledge.
THE NATURE OF ENVIRONMENTS
Specifying the task environment
• Task environments are the problems to which
rational agents are the solutions.
• To specify a task environment, give the
performance measure, the environment, and the
agent's actuators and sensors (PEAS).
PEAS Description : An automated taxi
• Performance measure:
– Getting to the correct destination
– Minimizing fuel consumption and wear and tear.
– Minimizing trip time or cost
– Minimizing violation of traffic laws and disturbances
to other drivers
– Maximizing safety and passenger comfort
– Maximizing profits.
• Environment:
– Roads, traffic, pedestrians, stray animals, road
works, police cars, potential and actual passengers,
and climatic conditions.
– The more restricted the environment, the easier the
design problem.
• Actuators include-
– Control over the engine through the accelerator,
and control over steering and braking.
– Display screen for output
– Voice synthesizer to talk back
– And some way to communicate with other
vehicles
• Sensors-
– Controllable video cameras.
Properties of task environments
• Fully observable vs partially observable
– If the agent’s sensors give it access to the
complete state of the environment at each point
in time-fully observable
– Agent need not maintain any internal state to
keep track of the world
– Partially observable-vacuum agent, automated
taxi
– Agent with no sensors- unobservable.
• Single agent vs multiagent :
– Agent solving crossword puzzle-single agent
– Agent playing chess –two agent environment
– The main issue is which entities must be viewed as
agents.
– Competitive multiagent environment- A trying to
maximize its performance measure results in
minimizing B's performance measure. Ex: Chess
– Cooperative multiagent environment- maximizing one
agent's performance measure also maximizes the
performance measures of the other agents. Ex: taxi
driving
• Deterministic vs. Stochastic
– If the next state of the environment is completely
determined by the current state and the action
executed by the agent, then it is deterministic;
otherwise stochastic.
– If the environment is partially observable, it could
appear to be stochastic. Ex: taxi driving
– Uncertain environment- not fully observable or not
deterministic.
– Stochastic- outcomes are quantified in terms of
probabilities
– Nondeterministic-actions are characterized by their
possible outcomes.
• Episodic vs. Sequential:
– Agent’s experience is divided into atomic
episodes.
– Next episode does not depend on the actions
taken in previous episodes.
– Episodic-agent that has to spot defective parts.
– Sequential-current decision will effect all future
decisions. Ex-taxi driving.
• Static vs. Dynamic
– If the environment can change while the agent is
deliberating, the environment is dynamic;
otherwise static.
– If the environment itself does not change with the
passage of time but the agent's performance score
does, the environment is semidynamic.
– Taxi driving- dynamic
– Chess (with a clock)- semidynamic
– Crossword puzzles- static
• Discrete vs. Continuous
– The discrete/continuous distinction applies to the
state of the environment, to the way time is
handled, and to the percepts and actions of the
agent.
– Chess has a discrete set of percepts and actions.
– Taxi driving is a continuous-state and continuous-
time problem.
• Known vs. Unknown
– Known environment- the outcomes for all actions
are given.
– Unknown- the agent has to learn how the environment
works in order to make good decisions.
– A known environment can be partially observable-
for example, a card game such as solitaire.
– An unknown environment can be fully observable- a
new video game.
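• Examples of task environments and their properties,
tabulated along the lines of Russell and Norvig:

  Task environment    Observable  Agents  Deterministic  Episodic    Static   Discrete
  Crossword puzzle    Fully       Single  Deterministic  Sequential  Static   Discrete
  Chess with a clock  Fully       Multi   Deterministic  Sequential  Semi     Discrete
  Taxi driving        Partially   Multi   Stochastic     Sequential  Dynamic  Continuous
  Medical diagnosis   Partially   Single  Stochastic     Sequential  Dynamic  Continuous
  Part-picking robot  Partially   Single  Stochastic     Episodic    Dynamic  Continuous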
• Which categories a scenario falls into, as in the
table above, depends on how the task environment is
defined.
• Example: medical diagnosis can be single-agent or
multiagent, sequential or episodic.
• Chess can be sequential or episodic (e.g., if each
game in a series is treated as one episode).
• To design an agent, experiments are carried out
over many environments selected from an
environment class.
• The code repository includes an environment
generator for each environment class that selects
particular environments in which to run the agent.
Structure of Agents
• The job of AI is to design an agent program
that implements the agent function- the mapping
from percepts to actions.
• The program must run on some computing device
with sensors and actuators- the architecture.
Agent Programs
• Take the current percept as input from sensors
and return an action to the actuators.
• The agent function takes the entire percept
sequence as input, whereas the agent program
takes only the current percept.
• Table driven approach-keeps track of the
percept sequence and then uses it to index
into a table of actions to decide what to do.
• Every possible percept sequence needs an entry in
the table, so the tables become astronomically large.
• Instead of constructing such tables, we develop
compact programs that produce rational behavior.
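• A sketch of the table-driven agent program in Python,
using the table format shown earlier:

    def make_table_driven_agent(table):
        percepts = []                          # persistent percept sequence
        def agent(percept):
            percepts.append(percept)           # append percept to the sequence
            return table.get(tuple(percepts))  # look up the whole sequence
        return agent

    agent = make_table_driven_agent({(('A', 'Dirty'),): 'Suck'})
    print(agent(('A', 'Dirty')))               # -> Suck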
• Basic kinds of agent programs
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
Simple reflex agents
• Selects actions on the basis of the current
percept ignoring the history.
• Example- the vacuum agent only asks "does the
current location contain dirt?".
• An agent program for a simple reflex agent in the
two-state vacuum environment is sketched after this
list.
• Simple reflex behaviors occur even in more
complex environments.
• Example in automated taxi-if the car in front
brakes and its brake lights come on then
initiate braking.
– If car-in-front-is-braking then initiate-braking
• We call such a connection a condition-action
rule.
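• The vacuum-world reflex agent program mentioned
above, sketched in Python:

    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == 'Dirty':      # dirt always gets sucked up first
            return 'Suck'
        elif location == 'A':      # otherwise move to the other square
            return 'Right'
        elif location == 'B':
            return 'Left'

    print(reflex_vacuum_agent(('B', 'Clean')))   # -> Left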
Schematic diagram of a simple reflex
agent
• Rectangles- the current internal state of the
agent's decision process.
• Ovals- the background information used in the
process.
• The agent program-
– The Interpret-input function generates an abstracted
description of the current state from the percept.
– The Rule-match function returns the first rule in the
set of rules that matches the description.
– A sketch of this program follows after this list.
• Simple reflex agents are of limited intelligence.
• They work only if the correct decision can be
made on the basis of the current percept alone-
that is, only if the environment is fully observable.
• In the automated car, if braking is triggered by a
single frame showing the front vehicle's lights, the
agent may brake continuously or unnecessarily.
• In a vacuum agent without a location sensor,
randomized actions help it escape the infinite loop
of moving left in square A and right in square B.
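• The general simple reflex agent program, sketched in
Python- representing rules as (condition, action) pairs
is an assumption of this sketch:

    def simple_reflex_agent(percept, rules, interpret_input):
        state = interpret_input(percept)    # abstracted state description
        for condition, action in rules:     # rule-match: first rule that fires
            if condition(state):
                return action

    rules = [(lambda s: s.get('car_in_front_is_braking'),
              'initiate_braking')]
    print(simple_reflex_agent({'car_in_front_is_braking': True},
                              rules, interpret_input=lambda p: p))
    # -> initiate_braking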
Model-based reflex agents
• An effective way of handling partial observability is
for the agent to keep track of the part of the world
it can't see now.
• The agent maintains internal state.
• Updating the internal state includes-
– How the world evolves independently of the agent
– How the agent's actions affect the world
• Knowledge about "how the world works" is a model.
• An agent using this knowledge is a model-based
agent (sketched below).
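• A sketch of a model-based reflex agent in Python- the
update_state function, which stands in for the model of
how the world evolves and how actions affect it, is an
assumption:

    def make_model_based_agent(update_state, rules, initial_state):
        state, last_action = initial_state, None
        def agent(percept):
            nonlocal state, last_action
            # Fold the last action and the new percept into the
            # internal state, using the model of the world.
            state = update_state(state, last_action, percept)
            for condition, action in rules:   # first matching rule wins
                if condition(state):
                    last_action = action
                    return action
        return agent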
Goal based agents
• In addition to knowledge of the current state, the
agent needs goal information to decide what to do.
• For example, at a road junction the taxi can turn
left, turn right, or go straight on; the decision
depends on the goal, i.e. the destination.
• Search and planning are the subfields for finding action
sequences to achieve goals.
• In reflex agents, built-in rules map directly from
percepts to actions.
• A goal-based agent, in contrast, does not act by
direct mapping alone: it considers how its actions
will move it toward the goal (see the sketch below).
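• A minimal illustration in Python- the one-step results
model and the junction example are assumptions, and
real goal-based agents search over whole action
sequences:

    def goal_based_action(state, actions, results, goal_test):
        # Choose an action whose predicted result satisfies the goal.
        for action in actions:
            if goal_test(results(state, action)):
                return action

    # At a junction, turn toward the destination:
    results = lambda s, a: 'destination' if a == 'left' else 'elsewhere'
    print(goal_based_action('junction', ['left', 'right', 'straight'],
                            results, lambda s: s == 'destination'))
    # -> left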
Utility based agents
• Goals alone are not really enough to generate high-
quality behaviour in most environments.
• For example, many action sequences will get the taxi
to its destination- some are quicker, some are
cheaper and some are more reliable.
• The performance measure should allow a comparison
of the different world states that result from the
actions taken.
• "Best" depends on how happy the agent is with the
resulting state.
• Scientifically, we use the term utility in place of
happiness.
• A utility function maps a state (or a sequence
of states) onto a real number, which describes
the associated degree of happiness
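• A sketch of action selection with a utility function-
the routes and utility values here are invented for
illustration, and with stochastic outcomes the agent
would maximize expected utility:

    def utility_based_action(state, actions, results, utility):
        # Pick the action leading to the highest-utility state.
        return max(actions, key=lambda a: utility(results(state, a)))

    utilities = {'fast_route': 0.9, 'cheap_route': 0.7, 'slow_route': 0.2}
    routes = {'highway': 'fast_route', 'back_roads': 'cheap_route',
              'detour': 'slow_route'}
    print(utility_based_action('start', list(routes),
                               results=lambda s, a: routes[a],
                               utility=utilities.get))
    # -> highway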
Problem solving agents
• Problem solving agents are goal based agents
that decide what to do by finding sequences
of actions that lead to desired states.
• This chapter discusses the elements that
constitute a problem and its solutions.
