
AGENTS AND THEIR ENVIRONMENTS

Dr S. Srinivasan, Mr Vivek Jaglan, Dheeraj Kumar


HOD, Computer Applications, PDM, Bhadurgarh, India
dss_dce@yahoo.com
Assistant Professor, PDM, Bhadurgarh, India
jaglanvivek@gmail.com
Assistant Professor, PDM, Bhadurgarh, India
Dheerajkumar47@gmail.com

ABSTRACT
The real world is a very complex place for agents to act in: it is partially observable, stochastic, sequential, dynamic and continuous. Most architectures are designed to deal with only a fraction of the total possible environmental complexity by acting in particular domains. For example, some architectures assume that the world is static and that the only changes in the world are those caused by the agent's own actions. Other architectures may operate in dynamic environments but require that the world be consistent or predictable. Some require a limited number of clearly defined percepts and actions. In some cases, experience is divided into atomic episodes, and the choice of action in each episode depends only on the episode itself. The sections below briefly describe and define some of the environmental considerations made when developing cognitive architectures, mostly without direct reference to specific architectures.

Keywords: test bed, closed world assumption, Agent, Sensor, Actuator, Multi-agent system.

1 INTRODUCTION

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators [2]. A vacuum-cleaner agent is a reactive agent: it responds when it senses dirt on the floor. A taxi-driver agent, by contrast, should behave in a different way, since its environment is entirely different; it is a goal-directed agent [5] and has to keep all the percepts it receives in its knowledge base. The architectures of these two typical examples should certainly differ greatly. Subsumption Architecture, ATLANTIS, Theo, Prodigy, ICARUS, Adaptive Intelligent Systems (AIS), a Meta-reasoning Architecture for 'X' (MAX), Homer, Soar, Teton, RALPH-MEA, and the Entropy Reduction Engine are the twelve architectures we consider as far as environments are concerned [4]. The features of the twelve architectures are given in Table 2.

2 ENVIRONMENTS

In this section, the various environments in which agents have to work are discussed.

2.1.1 Static Environments
A static environment consists of unchanging surroundings in which an agent navigates, manipulates, or perhaps simply problem-solves. The agent, then, does not need to adapt to new situations, nor do its designers need to concern themselves with inconsistencies of the world model within the agent itself. An example of such an environment is a simulated office setting, where the doorways and halls never change. Other static environments include those for simple problem solving, such as crossword puzzles, chess with a clock, poker, backgammon, and the eight-puzzle, in which nothing changes except through the action of the agent.

2.1.2 Dynamic Environments
In a dynamic environment, the state of the environment can change independently of the agent, not only through the actions the agent executes. If a design goal of an architecture is to create an agent that operates in a real-world environment, it has to include mechanisms that allow the agent to operate in a dynamic environment. Certainly there are real-world environments that are not dynamic but static; these, however, are usually controlled situations and thus not representative of the full range of environments in which intelligent agents operate. Furthermore, there may be dynamic simulated environments in which an intelligent agent could be put to good use. Examples include an English tutor, part picking, and a refinery controller.
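The reflex vacuum-cleaner agent contrasted with the goal-directed taxi driver above can be sketched in a few lines. This is an illustrative sketch, not taken from any of the twelve architectures surveyed here; the location names and the (location, status) percept format are assumptions.

```python
# A minimal reflex agent: the current percept maps directly to an action,
# with no internal state. Locations "A"/"B" and the percept format are
# illustrative assumptions, not part of any surveyed architecture.

def reflex_vacuum_agent(percept):
    """Map a (location, status) percept straight to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"  # react to dirt immediately
    return "Right" if location == "A" else "Left"  # otherwise move on

# The agent needs no memory: its choice depends only on the current percept.
print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```

Because such an agent consults nothing but the current percept, it works well only when the environment is fully observable at the point of action, which is why the goal-directed taxi driver needs a knowledge base instead.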
Generally, planning systems have trouble dealing with dynamic environments. In particular, issues such as truth maintenance in the agent's symbolic world model and re-planning in response to changes in the environment must be addressed. These capabilities have to be incorporated into the planning-type architecture itself, but the architecture will not be reactive if it must reconcile complex sensory data with a world model. One approach to this problem is to drop the planning component altogether, as is done in subsumption-type architectures.

2.1.3 Consistent Environments
The environment can be assumed to change much more slowly (if at all) than the speed of the reasoning and learning mechanisms. In this sense, the change is not so much a change of state, as in dynamic environments, but rather a change in the underlying principles that drive the environment. Although this is true of some environments, adopting this hypothesis as part of the framework of the architecture may have serious implications (including complete loss of functionality) in environments whose properties do change at about the same rate as the mechanisms of deliberation.

2.1.4 Simulated Environments
Developing an architecture that has to act in the real world requires agents that are capable of handling the multitude of uncertain events caused by its normally dynamic and unpredictable nature.

When an agent operates in a simulated environment, the architecture is able to avoid dealing with such issues as real-time performance and unreliable sensors. A simulator can exclude uninteresting variables and allow the agent to focus on the critical issues of a task. The simulated environment thus serves as a test bed for higher-level cognitive functions such as planning and learning, without real-world implementation issues getting in the way.

Operating in a simulated environment also offers the advantage that the agent may be exposed to a variety of different tasks and surroundings without an inordinate amount of development time. Thus, agents with such architectures can be applied to tasks involving space exploration and undersea diving without the necessity of developing the hardware to transport the agent to either location.

2.1.5 Real-World Environments
In order to operate in the real world, agents are normally designed to meet different criteria. Agents that operate in the real world require robust perception mechanisms and are often faced with dynamic and unpredictable environments and a higher degree of complexity than they might encounter otherwise. The agent has to face numerous potential challenges in the real world: its sensors and effectors may be imperfect, it may be required to produce new plans based on updated information very rapidly, and it might have to reason about the temporal aspects of its plans.

To avoid such problems, simulators may be used; this helps researchers focus on higher-level cognitive functions such as learning and planning. However, it may be that the solutions to these lower-level problems need to arise from within the architecture rather than from outside of it, which would have a profound impact on the ultimate architecture design. If this is indeed the case, then ignoring these issues is ultimately a disservice to the potential growth of the architecture.

By choosing to address the issues inherent in acting in the real world, it is also possible to gain insight into their interaction with each other.

2.1.6 Complex Environments
Both real and simulated environments can be very complex. Complexity in this case includes both the enormous amount of information that the environment contains and the enormous amount of input the environment can send to an agent. In both cases, the agent must have a way of managing this complexity. Often such considerations lead to the development of sensing strategies and attentional mechanisms, so that the agent may more readily focus its efforts in such rich environments.

Consider building an agent for users of complex data access and management systems for resource and environmental applications. Gathering good examples of this highly specialized and complicated activity is costly and difficult; usually only a small set of such good examples is available to guide the development of an agent. Consequently, agents are trained rather than learned inductively from example sets. One approach is for agents to use planning and plan generalization (learning) as their basic mechanism. Plans for as-yet-unseen combinations of goals are created by merging the plans for individual goals, with a minimum of re-planning. An example would illustrate the merging of existing plans and show a simple practical solution to the mutual goal-clobbering problem. Plans are built from low-granularity agent commands.

2.1.7 Knowledge-Rich Environments
Many real and simulated environments are rich in detail and other information. The ability to incrementally add knowledge without significant slowdown is an important capability for agents in such environments.
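The incremental knowledge addition called for in Section 2.1.7 can be sketched as a knowledge base that indexes facts as they arrive, so that neither addition nor retrieval slows down as knowledge grows. This is an illustrative sketch, not drawn from any surveyed architecture; the (predicate, arguments...) fact format is an assumption.

```python
from collections import defaultdict

# A sketch of adding knowledge incrementally without significant slowdown:
# facts are filed under their predicate on insertion, so both addition and
# lookup stay fast as the knowledge base grows. The fact format is an
# illustrative assumption.

class IncrementalKB:
    def __init__(self):
        self.by_predicate = defaultdict(set)  # predicate -> set of fact tuples

    def add(self, fact):
        # O(1) on average: no reorganization of existing knowledge is needed.
        self.by_predicate[fact[0]].add(fact)

    def lookup(self, predicate):
        # Retrieval touches only the facts filed under this predicate.
        return self.by_predicate[predicate]

kb = IncrementalKB()
kb.add(("door", "room1", "room2"))
kb.add(("in", "agent", "room1"))
kb.add(("door", "room2", "room3"))
print(sorted(kb.lookup("door")))
```

The design choice here is that the cost of organizing knowledge is paid at insertion time, in constant time per fact, rather than at query time.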
The richness and diversity of information can be difficult or impossible to capture during development, so learning is frequently employed to capture domain knowledge as the agent experiences its environment.

2.1.8 Input-Rich Environments
Sometimes a domain presents more perceptual information than an agent can even observe, let alone process intelligently. Additionally, it is important that such a (possibly continual) influx of perceptual data not overwhelm the agent and thus degrade its reactivity. However, the agent must still respond to relevant information; otherwise, it may behave irrationally. Such considerations have driven the development of architectural mechanisms such as selective attention to manage this environmental complexity.

2.2 Environmental Effects on the Agent

2.2.1 Limited Resources
Generally an agent cannot possess infinite resources; there are limitations on memory and processing capability. The limited computational resources available to the agent directly influence the types of processing it can afford to do. Given these limitations, the agent may not be completely rational. However, it is desirable that the agent perform as well as it is able, obeying some bounded rationality constraint.

2.2.2 Complete Knowledge
Sometimes an agent knows all possibly relevant information about its domain. In this case, learning is not required for domain understanding, and the behavior of the system can be pre-coded, dependent on perceptions.

Associated with these environments is the closed world assumption, under which any fact not known to the agent can be taken to be false. This is similar to complete world knowledge, in that the agent knows everything that is true about its domain. This assumption greatly simplifies declarative representation tasks.

2.2.3 Unpredictable Environments
Sometimes dynamic environments may be unpredictable. This means that not only is the world changing, but it changes in ways that the agent cannot (fully) comprehend. This often occurs when an agent's representation of the world is incomplete (or non-existent). Because of this unpredictability, it may be desirable that the agent's processing be interruptible, to handle unexpected and urgent contingencies.

A predictable environment is one for which an agent has an adequate (or perhaps complete) world model. For example, an agent with a sophisticated, first-principles model of Newtonian physics could predict with reasonable accuracy the result of throwing an object of known mass with known force. However, since such models are computationally prohibitive, most agents treat a dynamic world as unpredictable as well.

This does not hold for agents that behave in simulated, dynamic worlds. Since those worlds can generally be predicted exactly, such dynamic environments can be considered predictable.

2.2.4 Synchronous Environments
Events in a domain sometimes occur asynchronously with respect to the agent. In such a situation, if the agent does not constantly perceive its world, events may go unnoticed, leading to seemingly irrational behavior. To avoid this, architectures often shift to a more parallel approach in terms of the sensing strategies used.

2.2.5 Concurrent Environments
Multiple domain events can occur simultaneously. In such cases, it is important that the agent take actions appropriate to all relevant events. If it can pay attention to only some of the concurrent events, its rationality will suffer. Thus, many architectures use parallel methods in their sensing strategies.

2.2.6 Varying Environments
Not all events occurring in the environment demand the same level of attention from the agent. The agent has to allocate its attention according to priority, so as to maintain salient behavior. For example, a taxi-driver agent should attend to the road and the signals, not to everything in its way.

2.2.7 Limited Response Time
An agent rarely has an unbounded amount of time to take actions in response to an environmental event. This limits the amount of processing possible before taking an action, and usually also limits the amount of knowledge brought to bear. As a result, many architectures turn to either interruptible processing or situated action. However, the agent must still act as rationally as possible in the time allowed, according to some bounded rationality constraint.

2.2.8 Multiple Tasks
A domain may require an agent to perform many different types of tasks simultaneously in order to "survive". As a result, many architectures support multiple, simultaneous goals. When multiple tasks and goals may be considered simultaneously, additional concerns include behavioral coherence and saliency.

2.2.9 Supervisors
Humans in the environment may constantly monitor the agent's performance. Often they would like explanations of an agent's decisions, either for verification or for debugging purposes. More commonly, they are able to add new knowledge directly to the agent's database. To facilitate this type of input, architectures often adopt a uniform, declarative style of knowledge representation. At the furthest extreme, both explanation and knowledge addition can take place in natural language. [4]

In Table 1, the rows indicate the environments and the columns indicate the particular architectures. A Y in a cell means that the architecture corresponding to the column addresses the environment represented by the row.
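The closed world assumption of Section 2.2.2 can be sketched very compactly: the agent's database answers false for anything it does not contain. This is an illustrative sketch, not an implementation from any surveyed architecture; the fact tuples are assumptions.

```python
# A sketch of the closed world assumption: any fact absent from the
# agent's database is taken to be false -- there is no "unknown".
# The example facts are illustrative.

class ClosedWorldKB:
    def __init__(self, facts):
        self.facts = set(facts)

    def holds(self, fact):
        # Absence from the database means false, not unknown.
        return fact in self.facts

kb = ClosedWorldKB({("door", "room1", "room2"), ("in", "agent", "room1")})
print(kb.holds(("in", "agent", "room1")))    # -> True
print(kb.holds(("door", "room1", "room3")))  # -> False (not known, so false)
```

This is what makes pre-coded behavior feasible under complete knowledge: the agent never has to reason about a third truth value for facts it has not recorded.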

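The interruptible processing of Section 2.2.7 is often realized as an anytime computation: the agent refines its choice of action for as long as the time budget allows, and can always act on the best answer found so far. The following sketch is illustrative; the candidate list and scoring function stand in for real deliberation and are assumptions.

```python
import time

# A sketch of deliberation under a limited response time: evaluate
# candidate actions until the deadline, then act on the best so far.
# Candidates and the score function are illustrative stand-ins.

def anytime_choose(candidates, score, budget_s=0.01):
    deadline = time.monotonic() + budget_s
    best, best_score = candidates[0], score(candidates[0])
    for c in candidates[1:]:
        if time.monotonic() >= deadline:
            break  # deadline hit: act with the best found so far
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# With a generous budget the true best candidate is found.
print(anytime_choose([1, 5, 3, 9, 2], score=lambda x: x, budget_s=1.0))  # -> 9
```

The key property is graceful degradation: shrinking the budget lowers the quality of the chosen action but never prevents the agent from acting, which is exactly the bounded rationality constraint described above.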
Table-1
Environments addressed by each architecture. Rows (environments): Static Environment, Dynamic Environment, Consistent Environment, Simulated Environment, Real-World Environment, Complex Environment, Knowledge-Rich Environment, Input-Rich Environment, Limited Resources, Complex Knowledge, Unpredictable, Asynchronous, Concurrent, Varying Priority, Limited Response Time, Multiple Tasks, Supervisor. Columns (architectures): Subsumption architecture, ATLANTIS, Theo, Prodigy, ICARUS, Adaptive Intelligent Systems, Meta-reasoning (MAX), Homer, Soar, Teton, RALPH-MEA, Entropy Reduction Engine.

Table-2
Sl No | Architecture | Features
1 | Subsumption | Complicated intelligent behavior is decomposed into simple behaviors; static modules are organized into layers.
2 | THEO | Plan-then-compile; by this it integrates learning, planning and knowledge representation.
3 | ICARUS | Specific representation of long-term memory. It uses three independent asynchronous modules, responsible for: 1. perception; 2. planning; 3. effecting.
4 | PRODIGY | Stores knowledge in a form of first-order predicate logic (FOPL) called the Prodigy Description Language (PDL). Has a modular architecture that stores the knowledge symbolically.
5 | ATLANTIS | Integrates planning and reacting in a heterogeneous asynchronous architecture for mobile agents. It consists of three layers: 1. control layer; 2. sequencing layer; 3. deliberative layer.
6 | Adaptive Intelligent System (AIS) | Reasons about and interacts with other dynamic entities in real time; uses problem-solving techniques; when encountering an unexpected situation, decides whether and how to respond; focuses attention on the most critical aspects of the current situation; operates continuously without rebooting; is able to coordinate with external agents (more or less like a human being).
7 | Meta-Reasoning (MAX) | Many ideas in MAX may be traced to Prodigy. A rule-based forward-chaining engine that operates on productions; designed to support modular agents, which are used to respond to a dynamic environment in a timely manner; modules are categorized into behavior and monitor. Some of the modules are: 1. attention focusing; 2. multiple problem-solving strategies; 3. execution monitoring; 4. goal-directed exploration.
8 | HOMER | Not designed for general intelligence; its underlying philosophy is to synthesize several key areas of AI (such as planning, learning, natural language understanding and robotic navigation) to form one complete system. HOMER answers questions posed by users and carries out instructions given by users. It has a modular structure consisting of: 1. memory; 2. planner; 3. natural language interpreter and generator; 4. reflective processes; 5. plan executer.
9 | SOAR | Originally known as State, Operator And Result. Its main goal is for the full range of capabilities, from highly routine tasks to extremely difficult open problems, to be handled by an intelligent agent. Underlying the SOAR architecture is the view that a symbol system is a necessary and sufficient condition for general intelligence, known as the Physical Symbol System Hypothesis (PSSH). Its ultimate aim is a general intelligent agent. It is based on a production system, i.e. it uses explicit production rules to govern its behaviors.
10 | Teton | A problem solver that uses two memory areas: 1. short-term memory; 2. long-term memory. As with human beings, interruptions are allowed. It has a feature called the Execution Cycle, which always looks for what to do next.
11 | RALPH-MEA | A multiple execution architecture; like a human being, it selects the best option from the environment. RALPH-MEA uses an Execution Architecture (EA) to select the best next state. It uses the following: 1. condition-action; 2. action utility; 3. goal-based; 4. decision-theoretic.
12 | Entropy Reduction Engine (ERE) | Focuses on problems that require planning, scheduling and control. Uses many different problem-solving methods, such as: 1. problem reduction; 2. temporal projection; 3. rule-based execution.

3 CONCLUSION

This paper has tackled the question of how a developer can choose among the many development options when implementing an agent application. One key aspect is to understand that agent technology currently offers many problem-specific solutions that address only certain types of application domains. We argue that one important foundation for making accurate choices is the availability of well-defined and comparable surveys and evaluations of artifacts such as environments and capabilities. We have therefore provided, in tabular form, an evaluation of many kinds of architectures with respect to environments. In future work we want to employ the two tables to study multi-agent system technology. The idea is to integrate state-of-the-art AI techniques into intelligent agent designs, using examples ranging from simple, reactive agents to full knowledge-based agents with natural language capabilities. This leads to the study of multi-agent systems and their applications. The goal of this in-depth analysis of various agent architectures is to build a multi-agent system suitable for our future work on supply chain management.

4 REFERENCES
[1] Newell, A. (1990). Unified Theories of Cognition. Harvard University Press, Cambridge, Massachusetts.
[2] Rich, E., Knight, K. (1991). Artificial Intelligence, 2nd Edition. McGraw-Hill, New York.
[3] Simon, H. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106, 467-482.
[4] Simon, H. (1991). Cognitive architectures in a rational analysis: comment. In K. VanLehn (ed.), Architectures for Intelligence, pp. 25-39, Lawrence Erlbaum Associates, Hillsdale, N.J.
[5] Russell, S. J., Norvig, P. Artificial Intelligence: A Modern Approach. Prentice Hall Series in Artificial Intelligence.
