DOI: 10.1109/OPTIM.2010.5510551

READS

64

6 AUTHORS, INCLUDING:
Doru Talaba

Adrian Nedelcu

Universitatea Transilvania Brasov

Universitatea Transilvania Brasov

48 PUBLICATIONS 126 CITATIONS

16 PUBLICATIONS 20 CITATIONS

SEE PROFILE

SEE PROFILE

Mihai Machedon-Pisu
Universitatea Transilvania Brasov
21 PUBLICATIONS 9 CITATIONS
SEE PROFILE

All in-text references underlined in blue are linked to publications on ResearchGate,


letting you access and read them immediately.

Available from: Mihai Machedon-Pisu


Retrieved on: 29 February 2016

2010, 12th International Conference on Optimization of Electrical and Electronic Equipment, OPTIM 2010

A Virtual Reality Based Human-Network Interaction System for 3D Internet Applications
Vlad Cristian STOIANOVICI*, Doru TALABA**, Adrian-Valentin NEDELCU*, Mihai MACHEDON PISU*, Florin
BARBUCEANU**, Adrian STAVAR**
Transilvania University of Brasov, Romania, * Department of Electronics and Computers;
**
Department of Product Design and Robotics
E-mail: stoianovici.vlad@unitbv.ro, talaba@unitbv.ro, adrian.nedelcu@unitbv.ro, mihai.machedon@unitbv.ro,
florin.barbuceanu@unitbv.ro, adrian.stavar@unitbv.ro

Abstract - The field of Ubiquitous Wireless Sensor Networks has recently been the subject of exponential development in various telecommunications areas. One of the open issues in this field is to create an interface that allows the user to interact naturally with the various objects that compose the ubiquitous ambient. An emerging solution to this challenge seems to be the integration of Virtual Reality technologies into these applications. The goal of this research is to investigate a way of integrating a fully immersive, three-dimensional virtual environment with Wireless Sensor Networks and hence to develop a human-network interaction (HNI) conceptual system. The integration into this system of the paradigm of Televirtuality, through the recent concept of 3D Internet, is also studied. To illustrate the concept, a human - intelligent building interaction system is implemented that allows the user to interact naturally with intelligent objects, actuators, or remote equipment connected within the same network.

Keywords: WSN, 3D Environments, Televirtuality, Interaction, Telepresence

I. INTRODUCTION

The need for advanced and improved technology, and the ultra-dynamic nature of today's data and information processing systems, have led us to a point where the human body simply cannot keep up. We are limited by the way we perceive information, time and space. It is thus a matter of technological evolution to come up with ways of compensating for these shortcomings. An area where the amount of processed information has increased dramatically is the field of Ubiquitous Wireless Sensor Networks, which has recently been the subject of massive research. One of the open issues is to create an interface that would allow the user to interact naturally with the various objects that compose the ubiquitous ambient. An emerging solution to this challenge seems to be the integration of Virtual Reality technologies into these applications.
Indeed, the need emerges for a new type of interaction between the human user and the general machine-based informational cloud. The real-time interaction should provide support for the spatial translation of the real-world position of the user to the site of interest, namely the location of the pervasive sensor network or intelligent space, by locally building a 3D virtual environment based on the data collected from the sensors. Thus the user can naturally and comfortably interact in real-time with different intelligent objects, switches, actuators or equipment placed in a remote and possibly hazardous location.

978-1-4244-7020-4/10/$26.00 ©2010 IEEE
The goal of this research is to investigate a way of
assimilating a fully immersive, three-dimensional virtual
environment to Wireless Sensor Networks and hence, to
develop a Human-Network Interaction (HNI) concept.
The aim is to integrate powerful Virtual Reality concepts
such as haptic feedback, tracking and 3D environment
immersive devices (e.g. data glove, CAVE) with
Televirtuality and Future Internet (WEB 3.0) approaches like
3D Internet, Internet of Things or Internet of Services [1], [2],
[3], [4] in order to investigate the added value with respect to
the conventional desktop based solutions.
The latest technological developments in the fields of
Robotics, Data Transmissions, Micro and Nano Technologies
and Consumer Electronics have been considered in order to
fully utilize their features and capabilities in the proposed
concept with wide applicability in domotics and industrial
environments.
The paper is structured as follows: "Related Work" surveys similar research activities and underlines the differences from the considered approach; "Definition of Concepts" gives a short introduction to the concepts that were considered for the integration of a user - informational cloud interaction interface; "Implementation of Concepts" shows the way the paradigms are implemented to form a functional system; "System Interaction Metaphors" defines the way interaction is established between the user and the 3D virtual environment; "The Integrated System" describes the hardware and software architectures; "Experiments and Results" underlines the trials conducted and the results obtained. The last section, "Conclusions and Future Work", summarizes a number of findings and future developments.
II. RELATED WORK
MIT's Media Lab has very interesting research concerning the mixture of Ubiquitous Sensor Portals, 3D virtual reality modeling and future internet applications and services, in programs such as DopplerLab or BodySignature. In [5], the expression "cross-reality implementation" is coined as the term for 3D virtual replicas of the MIT E15 and E14 buildings, in which, based upon the information collected from an extensive WSN, remote users can browse and interact with the physical facility from anywhere in the world. Nevertheless, these projects seek to implement a multiuser-to-single-resource application, while a single-user-to-multiple-resources application is not considered.
Experiments carried out at Pukyong National University focused on high-precision, three-dimensional, received signal strength indication (RSSI)-based location information in an interactive virtual reality on a PDA [6]. The developed system operates by capturing and extracting signal strength information at multiple pre-defined reference nodes to provide information in the area of interest, thus updating the user's location in a 3D indoor virtual map. Although the application is optimized only for mobile and portable 3D interaction, and therefore offers low degrees of immersion and presence, it has great functionality.
The University of Ottawa's School of Information Technology and Engineering is also undertaking similar experiments involving the integration of virtual reality, WSN and context-aware computing. The implemented application allows users to play live experiences captured from the physical world for analysis, evaluation, monitoring and training. The novelty of this system resides in two aspects: it uses an optimized recording technique that saves processing time and storage space, and it records scene-updating commands independently of the 3D player being used [7]. Their implementation is specialized in handling multiple WSNs in emergency-response scenario applications, so it is imperative to have low response latencies and a very sound infrastructure in order to safeguard human life. Their focus is driven exclusively by fail-safe scenarios rather than by analyzing low-cost solutions and affordability, which would be interesting for domestic goods, industry and entertainment applications, to name just a few sectors highly concerned by the new interaction technologies.
In Europe, the Chloe@University project approaches the integration of an indoor, mobile mixed-reality guiding system. With a see-through head-mounted display (HMD) connected to a small wearable computing device, Chloe@University provides users with an efficient way to guide someone in a building. A 3D virtual character in front of the user guides him/her to the required destination [8]. This approach places the interaction on the fine line between Augmented Reality (AR) and Augmented Virtuality (AV) and is oriented towards the user's guidance in the same area where the WSN is implemented. The fully immersive virtual environment is not analyzed, although the voice-command input approach from Chloe@University is a very stimulating idea for future developments.
Apart from these approaches, the present research intends
to explore the added value of a multi-modal Virtual Reality
system integrated as a user interface for ubiquitous wireless
sensor networks applications. The interaction modalities involved are visual 3D and haptics for output, and tracking, voice commands and user input devices for input.

The added value of this approach with respect to conventional desktop-based systems is anticipated to be its intuitiveness. Indeed, conventional systems provide a trivial 2D visual interface, offering the user information in the form of movie clips, pictures, sound and text on monitors or similar display devices, with mouse and/or keyboard interaction. These require a learning process that the user has to go through, developing an ability in using them and adapting to their functional limitations. In contrast, 3D virtual environment systems offer a completely self-explanatory and rich experience, providing the user with very natural interaction inside the environment and sensory feedback.
The goal is for the user to interact in the 3D GUI as he
would in the real world, by manipulating objects, visually
analyzing, inputting voice command information, and so
on. By switching from 2D to 3D, interaction becomes experience and perception; instead of dull, strictly must-be-followed functional metaphors, it becomes a natural and comfortable way of doing things. Everyday-life activities would take place
inside the CAVE: remote control and monitoring of a robot
operated industrial facility, remote maintaining and
monitoring a holiday house (e.g. heating, air-conditioning),
going to clothing stores and trying on outfits, all could
become a matter of several minutes of immersion.
In the future, having a CAVE-room (immersion room) in
our homes will be a trivial thing. Users will use the CAVE for
entertainment, training of any kind (e.g. home-flight
simulator, improving speed of running based on statistical
interpretation of biomechanical data from trackers), a whole
new way of studying and learning, or fast-forwarding their
mundane activities. Long journeys or hospital convalescence could be shortened by allowing a user/patient to perform various actions in the 3D world that would not be harmful to their current condition. Hospitals of the future will no longer require doctors and patients to be in the same location; physical examinations and operations could take place remotely, in very precise, immersive environments.
Second Life-type applications are on the point of
significantly taking off in the virtual world, and this
application is a means of facilitating that end-result (e.g.
IEEE Islands in Second Life [9]).
Scenarios such as the ones from movies like Matrix or
Avatar, where immersion and interaction take place at the
Central Nervous System (CNS) level, and the neural signal is
captured directly from the CNS are not that far off into the
future, from a technological point of view.
III. DEFINITION OF CONCEPTS
A. Wireless Sensor Networks (WSN)
Also known as Pervasive Networks or Ubiquitous Sensor Networks, WSNs are integrated into structures, machinery, and the environment. Coupled with the efficient delivery of sensed information, they can provide tremendous benefits to society, including fewer catastrophic failures, conservation of natural resources, improved manufacturing productivity, improved emergency response, etc. [10].
The ideal wireless sensor is networked and scalable, consumes very little power, is smart and software-programmable, is capable of fast data acquisition, is reliable and accurate over the long term, costs little to purchase and install, and requires no real maintenance. Choosing a sensor is a matter of the desired design considerations: battery life, sensor update rate, and size. Examples of low data rate sensors include temperature, humidity, and peak strain captured passively. Examples of high data rate sensors include strain, acceleration, and vibration. A WSN generally consists of a base station (or "gateway") that can communicate with a number of wireless sensors via a radio link. Data is collected at the wireless sensor node, compressed, and transmitted to the gateway directly or, if required, relayed through other wireless sensor nodes. The transmitted data is then presented to the system by the gateway connection [11].
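For illustration, the collect-compress-forward data flow described above can be sketched as follows. All class, field and node names here are hypothetical stand-ins, not the actual Crossbow firmware API:

```python
# Minimal sketch of the WSN data flow: each node samples, compresses,
# and forwards readings toward the gateway, either directly or through
# a parent node (multi-hop relaying in a mesh topology).
import json
import zlib

class Node:
    def __init__(self, node_id, parent=None):
        self.node_id = node_id
        self.parent = parent          # None => direct radio link to gateway

    def sample(self):
        # Stand-in for a real sensor read (e.g. temperature in degrees C)
        return {"node": self.node_id, "temp_c": 21.5}

    def transmit(self, gateway):
        payload = zlib.compress(json.dumps(self.sample()).encode())
        if self.parent is None:
            gateway.receive(payload)
        else:
            self.parent.forward(payload, gateway)

    def forward(self, payload, gateway):
        # Intermediate hop: relay the payload toward the gateway unchanged
        if self.parent is None:
            gateway.receive(payload)
        else:
            self.parent.forward(payload, gateway)

class Gateway:
    def __init__(self):
        self.readings = []

    def receive(self, payload):
        self.readings.append(json.loads(zlib.decompress(payload)))

gw = Gateway()
relay = Node("WN1")                  # one hop from the gateway
leaf = Node("WN2", parent=relay)     # out of direct radio range
leaf.transmit(gw)
print(gw.readings[0]["node"])        # → WN2
```

The parent links model the mesh topology: a leaf out of radio range reaches the gateway through a peer node, as the dotted indirect connections in Fig. 3 suggest.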
B. Haptic Devices and Immersion
The most natural means of perception of the human body are the senses: sight, taste, hearing, touch and smell. When trying to recreate or emulate a real environment in the 3D virtual world, it is only natural to try to stimulate these senses in a way that is as natural as possible. The area of robotics that deals with these issues is called Virtual Reality (VR), and the devices that stimulate the human body's senses are called VR devices. Among them, haptic devices (or haptic interfaces) are mechanical devices that mediate force-feedback communication between the user and the computer. Haptic devices allow users to touch, feel the mechanical resistance of, and manipulate three-dimensional objects in virtual environments and tele-operated systems. Visual, audio and haptic devices are the most widely employed and the most technologically advanced VR devices. Because 3D Virtual Environment applications do not require the stimulation of taste and smell, but rather rely on stimulating sight, hearing and touch, the concurrent use of these devices within what is usually called a Multi-modal Interface yields a high degree of immersion. Immersion is the physical feeling of being in a virtual space, at a sensorial level, by means of interfaces, and is related to the belief that the perceived virtual world actually exists [12].
The most immersive VR system is a room-sized advanced
visualization solution that combines high-resolution,
stereoscopic projection and 3D computer graphics to create a
complete sense of presence in a virtual environment, i.e. the
CAVE (Cave Automatic Virtual Environment) [13]; lighter
individual solutions like those based on Head Mounted
Displays (HMD) can be used for similar applications.
C. Televirtuality
Teleoperation was the field that determined the
development and design of VR devices, because tools were
needed to perform various operations in remote hostile
environments that typically required human presence. The final goal of teleoperation is a fully automatic system which can enable the mass-production of micro/nano robots or machines in the future [14].
A concept similar to Teleoperation is Televirtuality, or Telepresence (Remote Presence). It involves "communicating at a distance with computers or virtual images, exploiting all the functional possibilities offered by computer graphics techniques: symbolic description of images, interactive modeling and animation, simulation, stereographic visualization, gestural interaction and coupling of the body with the image, immersion in virtual worlds, haptic feedback, etc." [15]. By collecting, processing and utilizing data from a remote WSN, a data model, or virtual world, can be created and presented as a 3D Environment. The user can manipulate apparent objects in the world, more specifically wireless sensor nodes, and in doing so alters the data model [16].
To fully enable this approach in a Future Internet concept,
it is clear that a novel communications support is necessary in
order to implement the concept of Televirtuality.
D. Future Internet
According to [1], the Future Internet will consist of at least
three complementary and slightly overlapping sub-domains,
namely (i) Internet of Services, Software and Virtualisation;
(ii) Internet of Things and Enterprise environments; and (iii)
Networked Media and 3D Internet.
1) The Internet of Services, Software and Virtualization will feature semantically enriched services centered on the user. Services will enable the use of the Internet of Things. The Future Internet will lead to demand for innovative services characterized by on-the-move access, personalized cross-media streaming services, software-as-a-service and resource-as-a-service (RaaS). Virtualization will be used to increase the efficiency of infrastructure use, extend the scalability of platforms and enable the widespread creation of organizations. Services will increasingly bridge real and virtual life (virtual environments). Issues such as interoperability will be treated like services [2].
2) The Internet of Things treats the connection of everyday
objects and devices to large databases and networks through a
simple system of item identification. WSN and Radio-frequency identification (RFID) offer this functionality. Data
collection will benefit from the ability to detect changes in the
physical status of things, using sensor technologies. Advances
in miniaturization and nanotechnology mean that smaller and
smaller things will have the ability to interact and connect.
All those premises lead to the conclusion that the Internet of
Things will connect the world's objects in both a sensory and
an intelligent manner so that industrial products and everyday
objects will take on smart characteristics and capabilities by
benefiting from integrated information processing. Such
developments will turn the merely static objects of today into
newly dynamic ones, embedding intelligence in our
environment [3].


3) 3D Internet is a revolutionary concept that provides a complete virtual environment facilitating services, interaction, and communication over the Internet. The use and intuitiveness of 3D environments are an immediate consequence of the way our brains work, a result of a long evolutionary process ensuring adaptation to our world. Although the 3D Internet is not a solution to all problems, it provides an HCI (human-computer interface) framework that can decrease mental load and open doors to rich, innovative interface designs through spatial relationships. Another important point is the "webplace" metaphor of the 3D Internet, which enables interaction between people in a natural way. In this sense, the 3D Internet can be seen as a natural successor of Web 2.0. 3D Internet is basically a more natural way of navigating, exploring and finding information, and it represents the interaction interface between the user, the Internet of Services and the Internet of Things [4].
IV. IMPLEMENTATION OF CONCEPTS
A. Wireless Sensor Networks
In the authors' approach, one of the key functions of the pervasive sensors is position detection. The optimum way to perform position detection is by employing an algorithm such as modified ML (Maximum Likelihood). A correlation between RSSI (Received Signal Strength Indicator) and distance is needed in order to estimate the distance from the measured RSSI value as closely as possible [17]. An adaptive solution is given by the logarithmic correlation. Based on previous experiments, for a 10 m x 10 m grid, the logarithmic correlation gives a mean error of 1.29% and a maximum error of 3.43%, while the power correlation mean error is 2.29% and the maximum is 5.64%. By combining the logarithmic correlation with the ML-based algorithm for localization, it is possible to determine one node's position starting from the RSSI measurements to its adjacent reference nodes. The real position of the node is known and can be compared with the estimated position. Thus, it is possible to calculate the
position error. By calculating the position error for 121 equally distributed test points in a 10 m x 10 m grid, the position error distribution can be obtained for different power levels. Fig. 1 represents the error distribution in meters, for four power levels: 0 dBm, -7 dBm, -15 dBm and -25 dBm [18].

Fig. 1. Position error distribution for different power levels: 0 dBm, -7 dBm, -15 dBm, -25 dBm
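The kind of computation described in this subsection can be sketched as follows: invert a logarithmic RSSI-distance correlation, then run an ML-style grid search for the position that best explains the measured RSSI values. The path-loss constants and anchor layout below are illustrative assumptions, not the calibrated values from [17], [18]:

```python
import math

# Illustrative log-distance model: rssi = A - 10*n*log10(d)
A, N = -40.0, 2.0          # hypothetical calibration constants

def rssi_to_distance(rssi):
    """Invert the logarithmic RSSI-distance correlation."""
    return 10 ** ((A - rssi) / (10 * N))

def locate(anchors, rssi_readings, grid=10.0, step=0.25):
    """ML-style grid search: pick the grid point whose distances to the
    reference nodes best match the distances inferred from RSSI."""
    dists = [rssi_to_distance(r) for r in rssi_readings]
    best, best_err = None, float("inf")
    y = 0.0
    while y <= grid:
        x = 0.0
        while x <= grid:
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in zip(anchors, dists))
            if err < best_err:
                best, best_err = (x, y), err
            x += step
        y += step
    return best

# Four reference nodes at the corners of a 10 m x 10 m grid
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (4.0, 6.0)
# Noiseless synthetic readings generated from the same model
readings = [A - 10 * N * math.log10(math.hypot(true_pos[0] - ax,
                                               true_pos[1] - ay))
            for ax, ay in anchors]
print(locate(anchors, readings))   # → (4.0, 6.0)
```

With real measurements the readings are noisy, so the estimated position deviates from the true one; the per-point difference is exactly the position error whose distribution Fig. 1 shows.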
B. Future Internet
The Internet of Things approach envisions a global network of interconnected intelligent objects; more precisely, every object is supposed to have its own unique ID on the network and to represent a unique global network node [3]. Currently there are two equally used solutions for the implementation of this unique ID: some researchers prefer a clean-slate approach like Stanford's POMI 2020, while others prefer to use already existing protocols like IPv6 or RFID.
Because of the variety that the Internet of Services will bring, once everything is interconnected, the need emerges for a summary description of each network node's capabilities. This information, along with the unique ID of the network node, could very well be hosted by the implementation of RFID protocols.
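The pairing of a unique node ID with a summary capability description could be illustrated by a simple registry sketch. The identifiers, fields and example IDs below are hypothetical, not part of any of the protocols mentioned above:

```python
# Toy registry mapping a unique node ID (here an IPv6-style string,
# though an RFID tag ID would serve equally) to a capability summary.
from dataclasses import dataclass, field

@dataclass
class NodeDescriptor:
    uid: str                                   # unique global network ID
    capabilities: list = field(default_factory=list)

registry = {}

def register(uid, capabilities):
    """Announce a node and its summary capability description."""
    registry[uid] = NodeDescriptor(uid, capabilities)

def find_by_capability(cap):
    """Discover all nodes offering a given capability."""
    return [d.uid for d in registry.values() if cap in d.capabilities]

register("2001:db8::1", ["temperature", "humidity"])
register("2001:db8::2", ["relay", "door-actuator"])
print(find_by_capability("relay"))   # → ['2001:db8::2']
```

Such capability lookup is what would let an Internet of Services client discover, for example, which nodes in an intelligent building can actuate a door.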
C. Virtual Environments
Based on the information obtained from the process of RSSI position detection, a 3D replica of the real sensed environment can be locally recreated so as to provide the user with a fully immersive, natural way of interacting with the system. Thus the 3D Internet is likely to generalize the concept of 3D Graphical User Interaction for internet browsing and general Internet services, and to offer, at least theoretically, the possibility of recreating a 3D replica of the entire known world by processing the huge amount of information collected from all the existing sensors, provided, of course, that enough resources are available and allocated to this end.
V. SYSTEM INTERACTION METAPHORS
The involvement of a multi-modal interface is a very attractive idea, as it offers a wide range of 3D fully immersive interaction modalities, both for input and for output operations. The combination and integration of these modalities for input (by fusing multiple input modalities) and for output (by fission of the output information) lead to the development of what are usually called interaction metaphors. The focus of this paper falls upon two complementary types of interaction metaphor: one that is based on current desktop technologies, and one that utilizes a specialized 3D interactive system composed of components such as trackers, data gloves, haptic devices, etc.
When it comes to virtual reality, interaction implies three main concepts: navigation, selection and manipulation.
The desktop based approach is adapted to the features
available in any office or home PC, and usually concerns
visualization and interaction devices. The 3D environment is
displayed on an ordinary display monitor, while the user
employs the mouse in order to interact with the environment.
In the case of this metaphor, navigation is preferably done by
mouse (instead of keypad and mouse). The mouse is mainly used to move the camera, thus changing the user's perspective on the virtual environment. Selection of 3D objects in the virtual environment is carried out by mouse right-click, while the manipulation of 3D objects entails selecting an object and moving it along a particular trajectory (e.g. in this application, manipulation is used to open/close doors or windows).
The second interaction metaphor was defined out of the necessity of natural and intuitive interaction in the 3D world, hence extending the functionality of the mouse metaphor. It fully exploits the advances in the field of VR technologies by employing the CAVE as a visualization device. As a selection and manipulation device in the CAVE, one can use a haptic data glove, which allows the user to manipulate objects in a natural manner, using his hands. Stereoscopy is obtained through passive stereo glasses equipped with head-tracking sensors. Sensors on the glove and on the glasses are tracked by special video cameras in the CAVE, positioned so that at any point at least 4 markers are observed, for accurate position detection (Fig. 2). This is called optic tracking. There is also the possibility of electromagnetic tracking, with a device called Flock of Birds (FoB).

Fig. 2. Optic tracking system

Thus the user's perspective on the virtual environment is changed according to the movement of his head. In order to navigate through this environment, one can use voice commands (for teleportation to a certain point), walking compensation devices (i.e. treadmills) or virtual pointers (representing the camera) which can be manipulated (moved) with the data glove.

VI. THE INTEGRATED SYSTEM

It is the goal of this paper to present a functional core of a system which will enable the user to remotely interact, in a very natural way, with an intelligent building: to learn information such as room temperature, methane concentration levels, and equipment or appliance status; to interact with different actuators, so as to turn on lights or equipment, open/close doors, or trigger alarms; or simply to evaluate or log a specific activity and its parameters in time and space. The user interacts with the system by immersion into a virtual environment, built upon information streamed over Internet links and collected by a WSN from an intelligent area.

A. Hardware
The integrated system's hardware architecture is presented in Fig. 3.
The regular lines represent direct connections among peers or to a Gateway, the directional lines describe the way the information moves, and the dotted lines represent indirect connections through peer nodes.
The system's main modules are:
1) A WSN of Crossbow MICAz motes, with: Chipcon CC2420 radio module, 2400 MHz frequency band, 250 kbps maximum bit rate; MTS400 multi-sensor board, with temperature, humidity, barometric pressure and ambient light sensing capabilities; an extension board equipped with relays used to control actuators (closing/opening doors, remote operation of equipment); ATMega 128L microcontroller, 8 bit, 7.7 MHz clock, 128 kB Flash memory, 4 kB SRAM memory.
The tested configuration consists of 6 sensor nodes, each placed in one room of the monitored building. The network has a mesh topology. A gateway node is employed as the interface between the WSN and the WSN server.
2) The Future Internet infrastructure, which is anticipated to evolve into an intelligent communication cloud that can fully employ and exploit the potential of the authors' application, but which for now is considered just as the traditional Internet communication system that connects the WSN Module to the User Interaction Module.
3) In Fig. 3, in the User Interaction Module, there is an OR logic gate that marks a dual interaction approach (in Fig. 3 and Fig. 4, WSN = Wireless Sensor Network; WN = Wireless Node; GW = Gateway; VR = Virtual Reality). The first one consists of interaction through regular, conventional

Fig. 3. Integrated system's hardware architecture


desktop interaction devices (e.g. mouse or keypad) and desktop display devices (e.g. monitor, LCD) particular to 2D GUIs. This approach was necessary in order to experiment with the information flow and functionality of the system. The second module affiliated with the OR gate is one that offers the three "musts" of virtual reality: full immersion, presence and interaction. The CAVE is composed of several VR devices and systems. The combination of these elements' features adds up to full immersion, the feeling of presence and functional interaction. The major systems found in the CAVE are as follows: Optical Tracking system: 8 OptiTrack FLEX:V100 cameras (i.e. integrated image capture and processing units), or, alternatively, an Electromagnetic Tracking system: Flock of Birds from Ascension; Projection system: 6 Hitachi CP-SX 1350 multimedia projectors, paired into 3 projection units, one for each CAVE wall (frontal, left and right), each unit having 2 differently polarized projectors in order to achieve stereoscopy; for the 3 projection units, 3 reflection mirrors of 1.5 meters by 3 meters were necessary; Sound system: a Genius 5.1 surround audio system; Processing system: 2 servers and 6 clients; one server runs the 3D Virtual Environment, the other manages the user input devices, and the 6 clients provide the projectors with the appropriate input to project; Haptic system: a Fakespace haptic data glove and a Space Mouse from 3D Connection; Visualization system: stereoscopic glasses with differently polarized lenses, which in association with the Projection system have the cumulative effect of visual stereoscopy. A calibration stage is necessary before starting to conduct experiments in the CAVE. All the above-mentioned equipment and devices, excluding the stereoscopic glasses, have associated software.
B. Software
Both the desktop devices-based implementation and the haptic devices-based implementation have the same general software architecture, depicted in Fig. 4.
The data from the sensor network is collected by a Gateway, on which middleware is typically implemented in order to preprocess sensor data, i.e. Crossbow's XServe [19], with features such as database logging, file logging, TCP/IP support, unit conversion, etc. The WSN server's software interprets and converts data into information, but also relays the feedback from the user to the specific objects, equipment and switches. The sensor information is then organized into a database, and based on this centralized information the VR server builds the 3D client application.
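The organization of sensor information into a centralized database, as described above, might be sketched as follows. The table and function names are illustrative assumptions; the actual system relies on Crossbow's XServe middleware for logging:

```python
import sqlite3

# In-memory stand-in for the WSN server's sensor database
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings (
    node_id  TEXT,
    quantity TEXT,
    value    REAL,
    ts       DATETIME DEFAULT CURRENT_TIMESTAMP)""")

def log_reading(node_id, quantity, value):
    """Store one preprocessed sensor reading (database logging)."""
    conn.execute(
        "INSERT INTO readings (node_id, quantity, value) VALUES (?, ?, ?)",
        (node_id, quantity, value))
    conn.commit()

def latest(node_id, quantity):
    """Return the most recent value, as the VR server would query it
    when refreshing a room's 3D indicator."""
    row = conn.execute(
        """SELECT value FROM readings
           WHERE node_id = ? AND quantity = ?
           ORDER BY rowid DESC LIMIT 1""",
        (node_id, quantity)).fetchone()
    return row[0] if row else None

log_reading("WN3", "temp_c", 22.1)
log_reading("WN3", "temp_c", 22.4)
print(latest("WN3", "temp_c"))   # → 22.4
```

Centralizing readings this way decouples the WSN server (writer) from the VR server (reader), which only ever needs the latest value per node and quantity.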
The VR programming framework on top of which the 3D client interface has been developed is well known within the VR research community as XVR (eXtreme Virtual Reality) [20], and includes functionalities such as: scene-graph management; import of models from 3DSMax 4.0 or higher (Maya plugin under development); an advanced OpenGL rendering engine; real-time physics using the Tokamak physics engine; positional 3D audio support via direct audio or OpenAL; remote connection support (TCP and UDP management); and input device management via DirectInput. The client interface is embedded in an HTML page and can be accessed remotely using a standard web browser. It offers a simpler and more functional alternative to VRML, but it can also export files in VRML format.
The software developed by the authors has the following flow:
1) First, it initializes parameters such as lighting, scaling and camera positions, but also objects and menus.
2) The application starts rendering the virtual environment based upon the previously compiled data, and starts performing functionalities such as recognizing interaction devices (2D or 3D) and hence interpreting the user's behavior.
3) The 3D client also makes use of a time-dependent interruption system.
4) Further functionality is added on special events (e.g. OnExit).
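The four-step flow above can be sketched as a skeletal event loop. Names such as Client3D and poll_devices are hypothetical; the actual implementation is written in the XVR framework:

```python
import time

class Client3D:
    def __init__(self):
        # 1) Initialize lighting, scaling, cameras, objects and menus
        self.scene = {"lighting": "default", "camera": (0.0, 1.7, 5.0),
                      "objects": []}
        self.handlers = {}                 # 4) special-event callbacks
        self.last_tick = time.monotonic()

    def on(self, event, fn):
        """Register a callback for a special event (e.g. OnExit)."""
        self.handlers[event] = fn

    def poll_devices(self):
        # 2) Stand-in for recognizing 2D/3D input devices each frame
        #    and interpreting the user's behavior as events
        return []

    def frame(self):
        """One iteration of the render loop."""
        for event in self.poll_devices():
            self.handlers.get(event, lambda: None)()
        # 3) Time-dependent interruption: fire when an interval elapses
        now = time.monotonic()
        if now - self.last_tick >= 1.0:
            self.handlers.get("tick", lambda: None)()
            self.last_tick = now

client = Client3D()
fired = []
client.on("OnExit", lambda: fired.append("exit"))
client.handlers["OnExit"]()            # simulate the special event
print(fired)                            # → ['exit']
```

The handler table keeps steps 2-4 decoupled: device events, timer interrupts and special events all dispatch through the same mechanism.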
The typical virtual components present in the considered environment are: a 3D model of a house; 3D indicators associated with the sensor node in each room; alarm indicators; and 3D switches for actuator control.
The objects that constitute the virtual interface are implemented by importing 3D models previously designed with specialized software for high-quality graphics, i.e. 3DStudioMax.
The XVR framework has a very useful feature that solves the problem of connecting the VR server to the WSN server, or even to the user-side VR interaction devices: the client interface is embedded in an HTML page that can be remotely accessed using a standard web browser, so that the connection happens in a care-free, transparent way.
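The VR-server-to-WSN-server link relies on the TCP/UDP support mentioned earlier. A minimal sketch of such a request/reply exchange, using Python's standard socket module, is shown below; the port number, wire format and node name are assumptions made for illustration, not part of the described system.

```python
# Illustrative sketch of the VR server querying the WSN server over TCP.
# Endpoint, message format and sensor values are hypothetical.
import socket
import threading

HOST, PORT = "127.0.0.1", 9910  # assumed WSN-server endpoint

def wsn_server():
    # Minimal stand-in for the WSN server: answers one sensor query.
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64).strip() == b"GET node3":
                conn.sendall(b"temp=21.5;humidity=40;ch4=0.0")

t = threading.Thread(target=wsn_server)
t.start()

with socket.socket() as cli:       # the VR server acting as a client
    cli.connect((HOST, PORT))
    cli.sendall(b"GET node3\n")
    reading = cli.recv(128).decode()
t.join()
print(reading)  # temp=21.5;humidity=40;ch4=0.0
```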

Fig. 4. Integrated system's software architecture
Navigation within the virtual environment is performed either in the CAVE immersive system, using the tracking devices, or in the desktop environment, using the mouse.
1) Navigation by mouse: the mouse is used to rotate and translate the camera in a classic manner, similar to computer games. There are six predefined fixed locations to which users can self-teleport.
Selection of 3D objects is performed by mouse using a collision detection technique. When the mouse cursor is over an indicator button, the corresponding information is displayed (i.e. the parameters measured by the sensor node associated with the indicator). Clicking a switch button changes the appearance of that button, to indicate that the switch's state has changed.
The 3D environment can also be manipulated with the aid of the mouse: the user can open the doors and windows of the house, with the mouse playing the role of the door handle.
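The cursor-over selection described above can be sketched as a simple hit test: each indicator button is approximated by a screen-space bounding box, and the box under the cursor determines which sensor node's data to display. The box coordinates and node names below are illustrative, not taken from the actual application.

```python
# Minimal sketch of cursor-over selection via bounding-box collision
# detection. Boxes approximate the projected 3D indicator buttons.

def hit_test(cursor, boxes):
    """Return the name of the first box containing the cursor, else None."""
    x, y = cursor
    for name, (xmin, ymin, xmax, ymax) in boxes.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None

# Hypothetical screen-space boxes for two indicator buttons.
indicator_boxes = {
    "node_kitchen": (100, 80, 140, 110),
    "node_bedroom": (300, 80, 340, 110),
}

print(hit_test((120, 95), indicator_boxes))  # node_kitchen
print(hit_test((10, 10), indicator_boxes))   # None
```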
2) Fully immersive navigation: moving forward, backward, to the left or right, and looking in any direction is done in a very natural way. By wearing passive stereo glasses equipped with head-tracking sensors in the CAVE, all moves are tracked, perceived and interpreted as interaction with the 3D client: to move forward, one simply walks forward; to look right, one simply turns their head to the right. The haptic data glove allows users to interact with objects and to receive force feedback, improving their perception of the 3D GUI. To navigate inside the CAVE, besides walking to the point of interest, users can select one of the six previously mentioned predefined locations by accessing the 3D menu and touching the desired location with one finger of the glove-wearing hand. The menu is accessed by touching the small sphere-shaped object that always stays at the bottom right of the user's view: touching this object with the glove-wearing hand causes the menu to pop up in front of the user, while touching it again causes the menu to disappear; choosing an option from the menu is followed by the appropriate action. Touching a sensor object causes the specific sensor information to pop up; losing contact with the sensor object causes the information to fold back down. Touching a switch causes its color to change (i.e. the mark of a status change), while touching it again restores the original status and color (Fig. 5).
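The touch-toggle behavior of a virtual switch described above can be summarized as a two-state machine: each touch flips the actuator state and its status color between green and red. The sketch below is an illustrative model of that behavior; the class and color names are assumptions.

```python
# Sketch of the touch-toggle behavior of a virtual switch: each touch
# flips the state and the status color (green <-> red), as described
# for the CAVE interaction metaphor above.

class VirtualSwitch:
    def __init__(self):
        self.on = False        # actuator state
        self.color = "green"   # status color shown in the 3D GUI

    def touch(self):
        # One touch event: toggle state, update color, return new color.
        self.on = not self.on
        self.color = "red" if self.on else "green"
        return self.color

sw = VirtualSwitch()
print(sw.touch())  # red   (first touch: status changed)
print(sw.touch())  # green (second touch: back to the original state)
```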
The images show the application being run with the two interaction metaphors. In the first (Fig. 5), using the mouse interaction metaphor, the user is preparing to touch a switch that initially shows the green status color (the mouse pointer can be seen near the switch in the first quadrant). In the second (Fig. 6), using the CAVE interaction metaphor, the user has just touched the previously mentioned switch, which has changed its status color to red. The doors and windows of the intelligent building can be opened or closed by pushing or pulling on them.

Fig. 6. Application being run using the CAVE navigation metaphor
XVR can also be set up to perform full CAVE-like rendering. Apart from XVR, the set of applications necessary for utilizing the CAVE hardware contains software such as Virtual Reality Server, VFAB Viewer, VFAB Multiuser, Instant Reality, etc.
VII. EXPERIMENTS AND RESULTS
The implemented application was tested in a single-story, six-room, 200 m² building. Each room is equipped with a wireless sensor node, adding up to a 6-node WSN. Each node has the capacity to monitor parameters such as temperature, humidity or methane concentration, and has been equipped with a relay (switch) in order to simulate user control, in line with the functionality of the application.
The first performance metric evaluated was the average
PER (Packet Error Rate) over the WSN. A value of 3.4%
resulted from the experiments.
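One plausible way to obtain such an average PER is to compute the per-node loss ratio from send/receive counts and average it over the network. The sketch below illustrates this; the packet counts are made-up values chosen only so that the mean comes out near the reported 3.4%, not the authors' measured data.

```python
# Illustrative computation of average Packet Error Rate (PER) over a
# 6-node WSN. Per-node send/receive counts below are hypothetical.

def average_per(stats):
    """stats: list of (packets_sent, packets_received) tuples per node."""
    pers = [(sent - recv) / sent for sent, recv in stats]
    return sum(pers) / len(pers)

node_stats = [(1000, 966), (1000, 965), (1000, 967),
              (1000, 966), (1000, 966), (1000, 966)]
print(round(average_per(node_stats) * 100, 1))  # 3.4
```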
The performance of the RSSI localization algorithm used in this implementation, explained above, has also been evaluated: the average localization error was 2.7 meters (Fig. 1 and Section IV.A, Wireless Sensor Networks).
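The average localization error is typically obtained as the mean Euclidean distance between each node's estimated and true position. The sketch below shows that computation; the node coordinates are fabricated for illustration (chosen so the mean matches the reported 2.7 m) and are not the experimental data.

```python
# Illustrative evaluation of RSSI localization error: mean Euclidean
# distance between true and estimated node positions (coordinates in m).
import math

def mean_localization_error(true_pos, est_pos):
    errs = [math.dist(t, e) for t, e in zip(true_pos, est_pos)]
    return sum(errs) / len(errs)

# Hypothetical positions for the six nodes (one per room).
true_positions = [(0, 0), (5, 0), (10, 0), (0, 5), (5, 5), (10, 5)]
estimated      = [(2.7, 0), (5, 2.7), (7.3, 0), (0, 7.7), (2.3, 5), (10, 2.3)]
print(round(mean_localization_error(true_positions, estimated), 1))  # 2.7
```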
The time interval after which virtual reality sickness becomes a high-discomfort factor in the CAVE has been gradually extended to 45-70 minutes, after fine-tuning the CAVE calibration and user adaptation to the virtual environment.
The VR environment developed in the XVR framework is deployable on any PC that supports an internet browser with ActiveX capabilities. As a result, the VR environment is independent of proprietary, expensive software, adding up to a very versatile and user-friendly platform.
Quality of Experience (QoE) is a subjective measure of a user's experience with an application or service. The current QoE level is low because the virtual environment needs further development: interaction, immersion and presence must be significantly improved.

Fig. 5. Application being run using the mouse navigation metaphor

VIII. CONCLUSION AND FUTURE WORK

The role of 3D Internet in the development and implementation of the recently launched concept of the Internet of the future is crucial, as it should provide the user with highly natural interaction within an internet environment that is richer in content and more diverse than ever. This paper described a conceptual application that tries to meet such expectations by assimilating paradigms like Televirtuality, Teleoperation, Internet of Things and Wireless Sensor Networks into a novel approach.
The two utilized interaction metaphors and their functionality were also illustrated: one based on standard 2D visualization devices, the other on immersive 3D systems that fully exploit the advances made in the field of VR.
As for future system developments, the 3D GUI approach will include a functional database that centralizes all the information provided by the WSN. Based on this information and its statistical interpretation, further developments will become possible, such as an adaptive WSN node location reconfiguration process that better accommodates the user's needs in terms of coverage, data accuracy and dynamic data relevance, or that shuts down unused nodes according to network traffic in order to increase power efficiency. Voice commands will also be implemented, in order to facilitate virtual environment navigation.
Further development of the virtual environment is also necessary; so far, the detail and functionality levels are at a minimum, affecting the presence, immersion and interaction parameters and resulting in low QoE (Quality of Experience) levels.
Other approaches are also possible, concerning novel visualization and haptic devices in the virtual environment, such as the Head Mounted Display (HMD) device called the Virtual Cocoon, which reportedly will be able to stimulate all five senses [21]. This would also add to the effort of reducing virtual reality sickness, which occurs after 45-70 minutes of using the CAVE.
Additional functionality could be added to the existing networks by exploiting the paradigm of virtualisation: the neighboring nodes of a defective WSN node compensate for it in a transparent way.
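The transparent-compensation idea can be sketched as a simple failover rule: when a node stops reporting, a reading is synthesized from its live neighbors so the defect stays invisible to the VR client. The function, topology and values below are illustrative assumptions, not part of the implemented system.

```python
# Sketch of transparent compensation for a defective WSN node: if a
# node's reading is missing, average the readings of its live neighbors.

def read_with_failover(node, readings, neighbors):
    """Return the node's reading, or the mean of its live neighbors'."""
    if readings.get(node) is not None:
        return readings[node]
    live = [readings[n] for n in neighbors.get(node, [])
            if readings.get(n) is not None]
    return sum(live) / len(live) if live else None

# Hypothetical temperature readings; n2 is the defective node.
readings  = {"n1": 21.0, "n2": None, "n3": 23.0}
neighbors = {"n2": ["n1", "n3"]}
print(read_with_failover("n2", readings, neighbors))  # 22.0
```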
WSN can also be associated with MEMS or NEMS
(Micro/Nano-Electro-Mechanical Systems) to form new
kinds of micro-networks for a very wide range of applications
(e.g. medical, military, surveillance).
Wireless sensor networks use many types of communication technologies and protocols (e.g. Wi-Fi, Bluetooth, ZigBee), and a WSN should be able to adaptively manage the radio spectrum resource. Sensor networks are also associated with the process of spectrum sensing in adaptive radio spectrum management and Cognitive Radio, as can be seen in one of the authors' previous publications [22].

IX. ACKNOWLEDGMENT
The authors would like to thank professors Marcello Carrozzino, Antonio Frisoli and Franco Tecchia, from Scuola Superiore Sant'Anna, Pisa, Italy, and Florin Sandu and Florin Garbacea, from the Transilvania University of Brasov, Romania, for their academic support.

REFERENCES
[1] M. Campolargo, "The future of the internet - elements of a European approach," EU-Japan Symposium on New Generation Networks and the Future of the Internet, 9-10 June 2008.
[2] European Commission Information Society and Media, "The future of the internet. A compendium of European projects on ICT research supported by the EU 7th framework programme for RTD," ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/ch1-g848-280-futureinternet_en.pdf, ISBN 978-92-79-08008-1.
[3] International Telecommunication Union, "The Internet of Things," www.itu.int/internetofthings, November 2005, accessed 1 February 2010.
[4] T. Alpcan, et al., "Towards 3D Internet: why, what, and how?," IEEE Int. Conf. on Cyberworlds (Cyberworlds 2007), Hannover, Germany, October 2007.
[5] J. Lifton, et al., "Metaphor and manifestation - cross-reality with ubiquitous sensor/actuator networks," IEEE Pervasive Computing, Vol. 8, No. 3 (2009), pp. 24-33.
[6] W. Chung, B. G. Lee, C. S. Yang, "3D virtual viewer on mobile device for wireless sensor network-based RSSI indoor tracking system," Sensors and Actuators B 140 (2009), pp. 35-42.
[7] A. Boukerche, A. R. Lopes, R. B. de Araujo, "A capture and access mechanism for accurate recording and playing of 3D virtual environment simulations," Tenth IEEE International Symp. on Distributed Simulation and Real-Time App., Terremolinos, 2006.
[8] N. Magnenat-Thalmann, "A virtual 3D mobile guide in the Intermedia project," The Visual Computer: Int. Journal of Computer Graphics, Springer, Vol. 24, No. 7-9, pp. 827-836, July 2008.
[9] IEEE.org, "IEEE Islands in Second Life," http://www.ieee.org/societies_communities/technical_activities/secondlife/index.html, visited on 21.02.2010.
[10] F. L. Lewis, "Wireless Sensor Networks," in Smart Environments: Technologies, Protocols, and Applications, ed. D. J. Cook and S. K. Das, John Wiley, New York, 2004.
[11] C. Townsend, S. Arms, "Wireless Sensor Network: Principles and Applications," in Sensor Technology Handbook, ed. J. S. Wilson, Elsevier, 2005, pp. 439-449.
[12] White Paper, Mimic Technologies Inc., 05.05.2003, http://www.hitl.washington.edu/people/tfurness/courses/inde543/readings-03/berkley/white%20paper%20-%20haptic%20devices.pdf, visited on 05.01.2010.
[13] J. Ihren, K. Frisch, "The fully immersive CAVE," Proc. 3rd International Immersive Projection Technology Workshop, pp. 59-63, 1999.
[14] M. Sitti, H. Hashimoto, "Teleoperated nano scale object manipulation," in Recent Advances on Mechatronics, Springer Verlag, ed. O. Kaynak, May 1999.
[15] P. Queau, et al., "Televirtuality: the merging of telecommunication and virtual reality," Computers and Graphics, Vol. 17, No. 6, Nov. 1993, pp. 691-693.
[16] R. Jacobson, "Televirtuality: being there in the 21st century," Silicon Valley Networking Conf., Santa Clara, CA, April 23-25, 1991.
[17] K. Srinivasan, P. Levis, "RSSI is under appreciated," Proc. of the Third Workshop on Embedded Networked Sensors (EmNets 2006), May 2006, pp. 15-20.
[18] M. Machedon-Pisu, A. Nedelcu, "Energy efficient tracking for wireless sensor networks," Proc. IEEE Int. Workshop on Robotic and Sensors Environments (ROSE 2009), Nov. 2009, pp. 163-168.
[19] Crossbow Technology, "XServe Gateway Middleware," http://www.xbow.com/Technology/GatewayMiddleware.aspx, visited on 03.11.2009.
[20] VRmedia Italy, "XVR," http://www.vrmedia.it/Xvr.htm, visited on 15.11.2009.
[21] Physorg.com, "Concept design of a mobile Virtual Cocoon. The first virtual reality technology to let you see, hear, smell, taste and touch," http://www.physorg.com/pdf155397580.pdf, visited on 23.01.2010.
[22] V. Stoianovici, V. Popescu, M. Murroni, "A survey on spectrum sensing techniques for Cognitive Radio," Bulletin of the Transilvania University of Brasov, Vol. 1 (50), 2008, Brasov, ISSN 2065-2119.