
The 9th International Conference on Computer Supported Cooperative Work in Design Proceedings

Agent-based Interaction Model for Collaborative Virtual Environments


Xiaohong Mi, Jiaxin Chen
Electronic Information Engineering College, Henan University of Science & Technology, Henan Province, China
cjx@mail.haust.edu.cn

Abstract
Interaction among users in the context of Collaborative Virtual Environments (CVEs) affects the efficiency of collaborative work. In most of the current CSCW application systems, users' interaction is still based on traditional means such as typed chat. In order to enrich the interaction among users, it has been proposed to add 3D avatars to CVEs. However, the poor behavior usually shown by the avatars controlled by users makes it difficult to achieve an acceptable level of immersion for their users. This paper provides a new point of view, proposing an agent-based model for the study of avatar interaction in CVEs through the analysis of the different interaction layers among users, and presenting a semi-autonomous avatar approach. By attaching a semi-autonomous intelligent virtual agent to the avatars, we can enhance the immersion and interaction among users.
Keywords: Collaborative Virtual Environments; Interaction; 3D Avatars; Intelligent Agent; Decision Mechanism

1 Introduction
The concept of Computer Supported Cooperative Work (CSCW) has broken through the traditional application of computers, for it provides users with WYSIWIS (What You See Is What I See) Collaborative Virtual Environments [1]. In addition, CVEs allow users to collaborate in closely coupled and highly synchronized tasks. These tasks require very close coordination between two or more users. However, in most of the current CSCW application systems, users' interaction is still based on traditional means such as typed chat, so it is necessary to provide richer means of interaction among users for better collaboration. To enrich the interaction among users, it has been proposed to add 3D avatars to CVEs. Having avatars as user representations in CVEs stems from the need for an identity that every user feels when entering the environment. Avatars serve several other functions as well: they inform other users of a user's presence, identify and differentiate users, visualize the user's position, orientation and direction of interest, and enable communication among users [2]. However, the poor behavior usually shown by the avatars controlled by users makes it difficult to achieve an acceptable level of immersion for their users. The solution that we propose is to automate these static avatars, trying to make them interact in the same way as their users would do in the real world. We advocate the attachment of a semi-autonomous intelligent virtual agent to the avatars. By using AI techniques, we can build intelligent agents; the user can then choose to take absolute control over some actions of his avatar while delegating the management of the rest to the agent. With more intelligence, avatars are able to perceive the environment and to make their own decisions, and thus can enhance the interaction among users.

Some previous works have also dealt in some way with the partial autonomy of avatars in an interactive environment. One of the most interesting proposals is The CyberCafe, described by Rousseau and Hayes-Roth in [3]. They introduce the concept of synthetic actors. A synthetic actor may be autonomous or a user's avatar. An autonomous actor receives directions from the scenario and other actors, and decides on its own behavior on the virtual stage with respect to those directions [4]. An avatar is largely directed by a user who selects actions to perform, although it also receives directions from the scenario and from the other actors. In fact, the user chooses the actions to be performed by the avatar, but the way to carry them out is chosen by the avatar. These actors are able to improvise their behavior in an interactive environment, and they own a repertoire of actions that are automatically planned to achieve each goal.

The first problem of automating part of the behavior of an avatar is that if the user decides to delegate some functions to her personal agent, she will expect the behavior exhibited by the avatar to be similar to her own behavior in the same situation. She will also expect her avatar to behave in a consistent way. Moreover, she will expect a different behavior of her avatar towards the different avatars that populate the virtual world. To meet these expectations, the intelligent agent attached to our avatars must be able to manage several knowledge dimensions, such as the user dimension, and it also needs a decision mechanism that allows it to select the most appropriate action in every situation.
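A minimal sketch of this division of control is given below (the action types, names and the SimpleAgent stub are illustrative assumptions, not part of the system described in this paper): the user keeps an explicit set of action types under manual control, and everything else is delegated to the attached agent.

# Hypothetical sketch: splitting avatar control between the user and the attached agent.
USER_CONTROLLED = {"speak", "navigate"}        # actions the user keeps absolute control over

class SimpleAgent:
    def select_action(self, action_type):
        # placeholder autonomous behavior for delegated action types
        return f"auto:{action_type}"

def dispatch(action_type, user_command, agent):
    if action_type in USER_CONTROLLED:
        return user_command                    # execute exactly what the user commanded
    return agent.select_action(action_type)    # delegate gestures, gaze, idle motion, etc.

agent = SimpleAgent()
print(dispatch("speak", "hello", agent))       # user-controlled: 'hello'
print(dispatch("gesture", None, agent))        # delegated: 'auto:gesture'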


This paper goes into a description of the different interaction layers among users, and shows the limitations of current CVEs in each of these layers. Then we describe the set of dimensions in the virtual agent's knowledge base that are needed to enhance users' interaction. Afterwards, the application of the agent-based interaction model in a CVE is discussed and some experimental results are presented.

2 Interaction Layers

The design of CVEs enhances the collaboration of users, and the 3D avatar is an important factor in it. In a multi-user collaborative environment, if someone wants to know what others are presently working on, he has to do so by observing the actions of their avatars [5]. Considering that the actors of CVEs are not only the users but also their avatars, we propose a four-layer interaction model: user-user interaction, user-own avatar interaction, user-other's avatar interaction and avatar-avatar interaction, as shown in Figure 1.

Figure 1. Interaction layers among users

2.1. User-user interaction

This is the kind of interaction in which users communicate directly without the intervention of their avatars. Typed and voice chat are the most common tools for this kind of interaction. People can discuss collaboration problems and learn about others' current work. However, it is neither natural nor fast.

2.2. User-own avatar interaction

The communication between a user and his own avatar is one of the most poorly exploited. Most of the current CVEs consider the avatar just as a puppet that receives commands and executes them without doing any intelligent processing or learning, and avatars have no awareness of the others [6]. The interaction in this layer should be improved by adding more intelligent capabilities to the avatars, thus increasing user immersion as well.

2.3. User-other's avatar interaction

Entering most CVEs, we can only find inexpressive and static avatars, because they are merely used as a signal to indicate the presence and location of their users. Once users have met, the communication turns to the traditional user-user layer. If the avatar can provide some information about its owner, such as name, e-mail address, vocation and so on, it can lead to a reinforcement of the interaction among users. This can be accomplished by building a user knowledge database.
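As a rough illustration of such a user knowledge database (the fields and names below are assumptions made for this sketch, not the paper's schema), each avatar could expose a small owner profile that other users' avatars can query instead of falling back to typed chat:

# Hypothetical sketch of a per-avatar owner profile backing the user knowledge database.
from dataclasses import dataclass

@dataclass
class OwnerProfile:
    name: str
    email: str
    vocation: str
    current_task: str = ""    # optional: what the owner is working on right now

profile = OwnerProfile(name="Alice", email="alice@example.org", vocation="designer")
print(f"{profile.name} <{profile.email}>, {profile.vocation}")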

2.4. Avatar-avatar interaction

In current CVEs, since avatars are not aware of anything, they cannot interact intelligently with other avatars without the intervention of their users. With more intelligent avatars, which are able to perceive the environment and to make their own decisions, this interaction layer could be exploited to enhance user-user interaction and to make avatars more useful for their owners.

3 Architecture of an Intelligent Agent

We should analyze, as a starting point, some of the most remarkable ideas of previous works. A good approximation to the architecture of an avatar is The CyberCafe [3]. According to this architecture, a participant has a mind and a body. We have adopted the architecture shown in Figure 2. Each avatar is implemented as an agent, i.e. something that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [7], and consists of two main components: a physical body and an AI engine. The body is the 3D geometric representation of the avatar (together with its position and speed), which provides the AI engine with all necessary sensing and actuator services, whereas the AI engine (mind) supplies all functionality necessary for world representation, goal planning, sensing and acting, and emotions. This feature will allow us to cope with the unexplored interaction layers.


Figure 2. Architecture of an intelligent avatar
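To make the body/mind split concrete, the following sketch (illustrative code with assumed class names, not the implementation used in this paper) separates an avatar into a body that only senses and actuates and a mind that decides:

# Illustrative sketch of the avatar architecture in Figure 2; all names are assumptions.
class Body:
    """3D geometric representation: position, speed, sensing and actuator services."""
    def __init__(self, position=(0.0, 0.0, 0.0), speed=0.0):
        self.position = position
        self.speed = speed

    def sense(self):
        # stub for the vision cone: return whatever is currently visible
        return {"visible_avatars": []}

    def act(self, action):
        # actuator service: play the animation / move the geometry for the chosen action
        print("body performs:", action)

class Mind:
    """AI engine: world representation, goal planning, sensing/acting logic, emotions."""
    def __init__(self):
        self.world_model = {}

    def decide(self, percept):
        self.world_model.update(percept)
        # placeholder policy: greet anyone who becomes visible, otherwise stay idle
        return "greet" if self.world_model.get("visible_avatars") else "idle"

class Avatar:
    """An avatar is a physical body plus an AI engine (mind)."""
    def __init__(self):
        self.body = Body()
        self.mind = Mind()

    def step(self):
        percept = self.body.sense()
        self.body.act(self.mind.decide(percept))

Avatar().step()    # prints "body performs: idle"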

Within this architecture, our aim will be the description of the avatar's mind. The mind will control the actions to be performed by the avatar's body in the virtual world. In order to build this mind we have developed an intelligent agent that can be linked to the avatar. In fact, an intelligent agent is a computer system capable of flexible autonomous action in some environment. The main features of an agent are as follows:
(1) Autonomy: the agent is capable of acting independently and exhibits flexible control over its internal state;
(2) Social Ability: the agent can interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with them. Agents interact with the environment through sensors and effectors, as shown in Figure 3.

Figure 3. Principle of agent

In order to perform the most appropriate actions in every situation, the agent must be equipped with a decision mechanism that relies on a knowledge base covering the following dimensions [8]:
(1) User Dimension: first, the agent must have knowledge about its own user, in order to behave in the same way she would do. It has to learn about her goals, her concentration, her reactions, her personality, her likes and dislikes, etc.
(2) Introspective Dimension: the agent must also manage some knowledge about itself (its mind) and the avatar it is controlling (its body): external appearance, personality, moods, past experiences, location in the CVE, etc.
(3) Social Dimension: a third kind of knowledge to be managed concerns the rest of the avatars inhabiting the CVE: their appearance, personality traits, moods, attitudes, past history of interaction, etc.
(4) Environment Dimension: finally, the agent also has to manage some knowledge about the CVE in which it is located: geometry, objects, exits, utility, etc.
In the interaction model we advocate, the sensors perceive information about the environment and store it in the environment dimension; the agent then decides what to do and how to do it according to this information and its knowledge base; afterwards, the effectors perform the corresponding actions. Of course, this also requires an action database. Thus the model can improve the interaction layers by letting users delegate some functions to their personal agents. The structure of the intelligent agent we advocate is shown in Figure 4.

Figure 4. Structure of intelligent agent
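A minimal sketch of the four knowledge dimensions and the decision mechanism they feed is given below; the data structures and the scoring rule are our own assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of the knowledge base dimensions and the decision mechanism.
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    user: dict = field(default_factory=dict)           # user dimension: goals, personality, preferences
    introspective: dict = field(default_factory=dict)  # the agent/avatar itself: appearance, mood, location
    social: dict = field(default_factory=dict)         # other avatars: traits, attitudes, interaction history
    environment: dict = field(default_factory=dict)    # the CVE: geometry, objects, exits, utility

class DecisionMechanism:
    def __init__(self, kb, action_db):
        self.kb = kb                 # the four-dimension knowledge base
        self.action_db = action_db   # action database: repertoire of actions the avatar can perform

    def select_action(self, percept):
        # fold what the sensors perceived into the environment dimension
        self.kb.environment.update(percept)
        # score every known action against the knowledge base and pick the best one
        return max(self.action_db, key=self._score)

    def _score(self, action):
        # toy scoring rule: prefer actions consistent with the user's stated goals
        return 1.0 if action in self.kb.user.get("goals", []) else 0.0

kb = KnowledgeBase(user={"goals": ["greet"]})
dm = DecisionMechanism(kb, action_db=["idle", "greet", "wave"])
print(dm.select_action({"nearby": ["avatar_42"]}))    # -> 'greet'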

4 Algorithm for Sensors and Effectors


The key elements of an agent are its sensors and effectors. Every time an avatar performs an action, the agent attached to it first senses the environment via a vision cone; if it gains awareness of other avatars in its path, it analyzes this information and sends it to the decision mechanism; finally, the effectors select the proper actions for the avatar to perform. The logic control of the avatar's behavior is shown below.

// Sensing algorithm
image = Body.Sense()
    return VisionCone.GetImage()            // the vision cone captures what the avatar can see

Mind.UpdateWorldModel(image)
    KnowledgeBase.ModifyWorld(image)        // update the knowledge base with the sensed image
    WorldModel.ModifyWorld(image)           // keep the internal world model consistent

Mind.RevisePlan()
    ActionPlanner.Plan()
        KnowledgeBase.GetGoals()
        ExploreSolutions()
            KnowledgeBase.GetObjectInfo()
                WorldModel.GetObjectAttribs()
        CreatePlan()
    lastAction = SelectLastPlannedAction()
    MotionControl.Decompose(lastAction)     // break the planned action into micro-actions

action = Mind.PickAction()
    microA = ActionPlanner.GetMicroAction()
        return MotionControl.GetCurrentAction()
    return microA

return ConvertActionToEvent(action)

// Acting algorithm
DoActing(detailedType)
    switch detailedType
        case STEP
            SetHeading()
            SetVelocity()
            SetPosition()
        case MOVEHAND
            SetHandPosition()
        case NOD
            ...
    Actuator.ExecuteChange()
        AnimationScriptFILE.Write(line)     // record the change in the animation script
    return new SensingEvent                 // acting triggers the next sensing cycle
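A possible per-frame driver for this sensing/acting cycle is sketched below; the StubAvatar interface only mirrors the pseudocode above and is an assumption, not the system's actual code.

# Illustrative per-frame driver for the sensing/acting cycle; names mirror the pseudocode.
class StubAvatar:
    def sense(self):                            # Body.Sense()
        return {"visible": []}
    def update_world_model(self, percept):      # Mind.UpdateWorldModel()
        self.world = percept
    def revise_plan(self):                      # Mind.RevisePlan()
        self.plan = ["idle"]
    def pick_action(self):                      # Mind.PickAction()
        return self.plan[-1]
    def act(self, action):                      # DoActing()
        print("acting:", action)

def run(avatar, frames=3):
    for _ in range(frames):
        percept = avatar.sense()                # sensing: capture the vision cone image
        avatar.update_world_model(percept)      # update world model and knowledge base
        avatar.revise_plan()                    # re-plan towards the current goals
        avatar.act(avatar.pick_action())        # effectors execute the chosen micro-action

run(StubAvatar())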

5 Conclusions and Future Directions


In this paper, we have discussed the attachment of intelligent agents to avatars, and advocated it as a way to overcome the shortage of interaction layers in current CVEs. An agent-based interaction model has been advanced. By using AI techniques, we have built some simple intelligent agents. With more intelligence, avatars are able to perceive the environment and to make their own decisions without overloading the user with too many controls, which has enhanced the interaction among users to some degree. However, the model we have built so far is very simple, and much work remains to be done. Future work will concentrate on the implementation of the knowledge base and the action database to provide more flexible interactions between avatars, so that the user can delegate more action management to the agent.

References
[1] Grudin, J., Computer Supported Cooperative Work: History and Focus, IEEE Computer, May 1994, pp. 19-26.
[2] Roehle, B., Channeling the Data Flood, IEEE Spectrum, March 1997, pp. 32-38.
[3] Rousseau, D., Hayes-Roth, B., Improvisational Synthetic Actors with Flexible Personalities, Report No. KSL 97-10, Knowledge Systems Laboratory, Department of Computer Science, Stanford University, Stanford, California, 1997.
[4] Hayes-Roth, B., Brownston, L., Sincoff, E., Directed Improvisation by Computer Characters, Technical Report KSL-95-04, Knowledge Systems Laboratory, Stanford University, Stanford, California, 1995.
[5] Wenfeng Guo, Yingying Wang, Achievement of the Dynamic Control over Avatar Actions in VRML Worlds, Microcomputer and its Application, 2002(10), pp. 55-57.
[6] Herrero, P., Amusement Project Deliverable 5.1d: Awareness of Interaction and of Other Participants, Amusement Esprit Project 25197, 1998.
[7] Russell, S., Norvig, P., Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
[8] Imbert, R., de Antonio, A., Sanchez-Segura, M. I., Segovia, J., How Can Virtual Agents Improve Communication in Virtual Environments?, In Proceedings of the Second Workshop on Intelligent Virtual Agents, IVA'99, Salford, UK, 1999, pp. 139-142.
[9] Szarowicz, A., Amiguet-Vercher, J., Forte, P., Multi-agent Interaction for Crowd Scene Simulation, American Association for Artificial Intelligence, 2001.