
Virtual Reality (2011) 15:311–327

DOI 10.1007/s10055-010-0176-4

SI: CULTURAL TECHNOLOGY

Virtual and augmented reality for cultural computing and heritage: a case study of virtual exploration of underwater archaeological sites (preprint)

Mahmoud Haydar · David Roussel · Madjid Maïdi · Samir Otmane · Malik Mallem

Received: 24 April 2009 / Accepted: 12 October 2010 / Published online: 29 October 2010
© Springer-Verlag London Limited 2010

Abstract  The paper presents different issues dealing with both the preservation of cultural heritage using virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While the VR/AR technologies are mentioned, the attention is paid to the 3D visualization and 3D interaction modalities, illustrated through three different demonstrators: the VR demonstrators (immersive and semi-immersive) and the AR demonstrator including tangible user interfaces. To show the benefits of the VR and AR technologies for studying and preserving cultural heritage, we investigated the visualisation of and interaction with reconstructed underwater archaeological sites. The base idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but drastically differ in the way they present information and exploit interaction modalities. The visualisation and interaction techniques developed through these demonstrators are the results of the ongoing dialogue between the archaeological requirements and the technological solutions developed.

Keywords  Underwater archaeology · Mixed reality · Virtual reality · Augmented reality · Cultural heritage · Cultural computing

M. Haydar (✉) · D. Roussel · M. Maïdi · S. Otmane · M. Mallem
Laboratoire IBISC, Université d'Evry, 40 rue du Pelvoux, 91020 Evry, France
e-mail: mahmoud.haydar@ibisc.fr
D. Roussel
e-mail: david.roussel@ibisc.fr
M. Maïdi
e-mail: madjid.maidi@ibisc.fr
S. Otmane
e-mail: samir.otmane@ibisc.fr
M. Mallem
e-mail: malik.mallem@ibisc.fr

1 Introduction

Cultural computing (CC) implies the application of computer technology in the field of culture, arts, humanities or social sciences. It is an emerging field, and the answer to the computability of culture is not clear, as mentioned by Wang (2009). For Tosa et al. (2009), CC is a method for cultural translation that uses scientific methods to represent the essential aspects of culture. Another definition of CC is as a computer technology that can enhance, extend and transform human creative products and processes (CC 2009). Virtual and augmented reality technologies will provide new means to create and transform culture. On the one hand, VR technology provides us with the possibility of immersion within multimodal interactions (audio, video and haptics) to enhance user presence in digitalised culture (digital theatre, digital dance, digital music, digital heritage, etc.). On the other hand, AR or mixed reality technology provides us with the possibility to extend, transform and combine different cultures in the same mixed environment (for example, combining objects from digital dance with others from digital music over space and time). However, we need to develop new interfaces and new interaction metaphors to allow 3D visualisation of culture and 3D interaction with different objects of such culture. In this paper, we focus our


contribution on the domain of archaeology by developing interfaces and new interactions both for the presentation of existing underwater archaeological sites, including interactions with artefacts in a cultural context, and for developing tools for preserving cultural heritage.

Most of the work mentioned in this paper is related to research and development performed within the VENUS project (Virtual Exploration of Underwater Sites), sponsored by the European Community. The main goal of the VENUS project is to provide scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. Such sites are generally out of reach for divers and require new technologies and tools in order to be surveyed by archaeologists. The first step of the proposed methodology consists in performing a bathymetric and photogrammetric survey of the site with a remotely operated or autonomous underwater vehicle. Bathymetric and photogrammetric data are then used to reconstruct the seabed, while photogrammetric data are processed in the "Arpenteur" photogrammetric tool¹, which measures points on artefacts (in our case amphorae) from several geolocalised points of view in order to reconstruct artefact shapes, dimensions and locations, which are then stored in an archaeological database for further examination by archaeologists. Our role in the project consists in gathering the reconstructed elements in an immersive virtual environment (VE) providing tools for archaeologists to survey such virtual sites in the most natural and easy way possible. We have developed several "demonstrators" in order to assess the benefits of virtual and augmented reality (VR & AR) in the field of underwater archaeology. All the demonstrators are based on the same VR engine described in Sect. 3.2.1, but drastically differ in the input and output modalities. The simplest demonstrator is called the "low end" demonstrator and offers all the functionalities on a simple laptop. The virtual reality demonstrators (respectively the semi-immersive and immersive demonstrators) use a large screen or a head-mounted stereo display to enhance immersion and allow interaction with 3D joysticks. Finally, the augmented reality demonstrator features a real map of the site augmented with virtual elements, where interaction is provided through the use of various tangible tools related to the corresponding tools of the VE.

¹ The "Arpenteur" photogrammetric tool is developed by Pierre Drap's team at LSIS, in Marseille, France (Drap and Grussenmeyer 2000).

2 Related work

Drap and Long (2001) mentioned that for many years Geographic Information Systems (GIS) have been common tools for archaeologists, who see in this technology the alliance between the huge amounts of information collected in the field and the graphical representation which supports the analysis. The GIS graphical representations most often originate from cartography, that is to say merging vectors, images, and symbology in 2D visualization tools. The old culture of chart reading is very useful in the use of GIS and probably one of the obstacles in the way of a truly 3D GIS. As a matter of fact, even without realistic representation, the strength of GIS is linked to the symbolic cartographic representation of the data, offering a synthetic expression of the data analysis. While the 2D representation is sufficient to demonstrate archaeological work at an urban scale or larger, applied to a period for which traces of the elevations do not exist, it is far from being the same when one is studying a building or, in the present case, a ship. The need for a 3D representation is then of first importance, and the global understanding of the study revolves around that kind of representation. For instance, Karma VI was an interface for ESRI's Spatial Database Engine (which more recently produced ArcGIS) that supports powerful visualization, manipulation, and editing of standard GIS data in a VR environment (Germs et al. 1999). However, as mentioned by Vote et al. (2002): "these Immersive Virtual Reality applications weren't developed for archaeological inquiry and therefore don't consider the specific research tasks archaeologists need to perform". Therefore, specially tailored multimedia databases have been developed to connect field measurements and findings to VR visualisation tools, such as the one developed in the 3D Murale project (Cosmas et al. 2001). Although efforts have been made in underwater archaeology to turn underwater photogrammetric surveys into interactive virtual environments, as in Drap and Long (2001), which used VRML output for 3D visualisation purposes, one of the goals of the archaeological demonstrator within the VENUS project is to transport archaeologists within such a reconstructed archaeological site, but most of all to allow them to interact with the gathered data by connecting tools in the VE to an underlying archaeological database.

2.1 Immersive virtual reality

First of all, we need to establish a definition of immersive virtual reality. A commonly accepted definition of virtual reality has been provided by Rheingold (1991) as an experience in which a person is "surrounded by a three dimensional computer generated representation, and is able to move around in the virtual world and see it from different angles, to reach into it, grab it, and reshape it". Cruz-Neira et al. (1993) have proposed a definition more confined to the visual domain: "a VR system is one which provides real time viewer centered head tracking


perspective with a large angle of view, interactive control, and binocular display". However, all definitions of VR agree on three distinctive features: (1) immersion, (2) interaction and (3) real time. Billinghurst et al. (1998) defined three kinds of information presentation paradigms requiring increasingly complex head tracking technologies:

• In "Head stabilised" presentation, information is fixed to the user's viewpoint and does not change as the user changes viewpoint orientation or position. Any VR system that does not provide tracking is then considered "Head stabilised". In the VENUS project, the low-end demonstrator running on a laptop with no tracking devices could be considered a "Head stabilised" system.
• In "Body stabilised" presentation, information is fixed relative to the user's body position and varies as the user changes viewpoint orientation, but not as they change position. The semi-immersive and immersive demonstrators are "Body stabilised" systems, since the head's orientation changes the viewpoint, but motion within the virtual environment is controlled with 3D joysticks rather than the head's position.
• Finally, in "World stabilised" presentation, information is fixed to real world locations and varies as the user changes viewpoint orientation and position. The augmented reality demonstrator is typically a "World stabilised" system, as position and orientation of the viewpoints need to be computed in order to register the virtual environment over the real map. However, the immersive demonstrator could also use body motion in a "World stabilised" way for small motions within the tracking space, whereas "Body stabilised" behaviour applies when travelling through the environment with the 3D joysticks.

2.1.1 Immersion with head-mounted displays

The main difference between the semi-immersive and the immersive demonstrator is the use of a head-mounted display (HMD) as the display device in the immersive demonstrator. We call it "fully" immersive as the HMD is tracked by an optical ART (2009) system, so that the user can look around. Ruddle et al. (1999) found that the ability to look around with an HMD allowed users to be less static in the environment, as they do not have to stop travelling, take a look around and choose a new travel direction, since looking around is allowed during travel: on average, participants who were immersed in the virtual environment using the HMD navigated the environment twelve per cent faster. The decreased time was attributed to the participants utilising the ability to "look around" while they were moving when immersed, as the participants spent eight per cent more time stationary when using a desktop workstation. Bowman et al. (2002) also studied human behaviour and performance between an HMD and a four-sided spatially immersive display (SID or CAVE). In particular, they studied users' preferences for natural orientation (the user turns his head or entire body naturally while making a turn) versus manual orientation (the user uses the joystick to rotate the world about its vertical axis) in the virtual environment. The results indicated that participants have a significant preference for natural rotation with the HMD and for manual rotation in the SID. This suggests that HMDs are an appropriate choice when users perform frequent turns and require spatial orientation. Even though HMD field of view and resolution have drastically increased lately, one cannot consider an HMD's field of view as larger than standing in front of a large screen or several screens (in the case of a CAVE); however, this drawback is easily compensated by the "look around" features provided by tracked HMDs: by tracking head orientation, the user experiences a hemispherical information surround, in effect a "hundred million pixel display" as Reichlen (1993) coined the term, and nowadays even more.

2.2 Virtual reality applied to archaeology

Even though archaeology has benefited from information technology for many years through web publications of research and findings (Hall et al. 1997; Watts and Knoerl 2007), very few attempts have been proposed to immerse users within an archaeological virtual reality such as the one defined above. More specifically, virtual reality has been considered in this field as a visualisation tool until now. For instance, a VR display of ancient Greece for the broad public by the Foundation of the Hellenic World in a museum has received an enthusiastic welcome (Gaitatzes et al. 2001); however, Barceló (2000) stressed that virtual reality in the archaeology field should not be used only as a visualisation tool but also as a way to manipulate the archaeological interpretation. The ARCHAVE system created by Vote et al. (2002), dedicated to the archaeological analysis in VR of the excavated finds from the Great Temple site at Petra, Jordan, provided some interesting outcomes on the use of immersive VR for archaeology. The system used a CAVE for display and was interfaced with an artefact database containing over 250,000 catalogued finds. Concerning visualisation, using an immersive CAVE allowed the data to be examined in the context of a "life size" representation; the immersive VR visualization gave the archaeologists the opportunity to explore a site in a new and dynamic way and, in several cases, enabled them to make discoveries that opened new lines of investigation about the excavation. However (as mentioned by Acevedo et al. 2001), archaeologists "consistently needed an easily accessible overview of the model, much like the experience


they obtain by flying high up over the virtual model, so they could study how the different artefacts were distributed over the entire site". This problem has been addressed by providing access to a "miniature model for a site wide analysis" at any time during exploration.

Van Dam et al. (2002) propose a review of experiments in immersive VR for scientific visualisation, including the above-mentioned ARCHAVE system. Clearly, visualisation by itself will not solve the problem of understanding truly large datasets that would overwhelm both the display and the human vision system. They advocate a human-computer partnership that uses algorithmic culling and feature detection to identify the small fraction of the data that should be visually examined in detail by the human. Immersive VR could then be a potent tool to let humans "see" patterns, trends and anomalies in their data well beyond what they can do with conventional 3D desktop displays. The immersive surrounding context provides a kinesthetic depth perception that helps users better apprehend 3D structures and spatial relationships. It makes size, distance and angle judgments easier, since it is more like being in the real world than looking through the screen of a desktop monitor to the world behind it; the advantages arise from the difference between "looking at" a 2D image of a 3D environment on a conventional display screen and "being in" that 3D environment and basing spatial judgments relative to one's own moving body. Concerning 3D interactions, Hinckley et al. (1997) presented a formal user study of an interactive 3D virtual sphere with multidimensional input devices and found that multidimensional input presented a clear advantage over conventional devices. The study provides clear evidence that test users were able to take advantage of the integrated control of 3D to perform a task more quickly than with 2D input techniques.

2.3 Augmented reality and tangible interfaces

Azuma et al. (2001) define an AR system as a system that "supplements the real world with virtual (computer generated) objects that appear to coexist in the same space as the real world". Moreover, the system should feature characteristics quite similar to the VR features mentioned above, such as:

• combines real and virtual objects in a real environment;
• runs interactively and in real time; and
• registers (aligns) real and virtual objects with each other.

On the one hand, virtual reality technologies immerse the user in a synthetic environment in which he cannot see the real world around him. On the other hand, augmented reality allows the user to see the real environment with superimposed virtual objects. Rather than replacing the real world, the user is immersed in an environment where the virtual and real objects coexist in the same space.

Interaction with the augmented environment is performed with tangible user interfaces (TUIs) that use physical objects as tools to establish an interface with the virtual environment. The TUI provides a natural and intuitive interface: a user can manipulate virtual 3D objects by simply handling physical objects. A considerable amount of research has been done in the domain of TUIs, and new human-computer interaction approaches have been proposed to improve the physical interaction with computational media. Kato et al. (2000) implemented table top AR environments with conventional markers and paddles for object manipulation. They advocate designing the form of physical objects in the interface using established tangible user interface design methods. Some of the tangible design principles include:

• Object affordances should match the physical constraints of the object to the requirements of the task.
• The ability to support parallel activity where multiple objects or interface elements are being manipulated at once.
• Support for physically based interaction techniques (such as using object proximity or spatial relations).
• The form of objects should encourage and support spatial manipulation.
• Support for multihanded interaction.

Thus, in an AR interface, the physical objects can further be enhanced in ways not normally possible, such as providing dynamic information overlay, context sensitive visual appearance and physically based interactions. One of the most obvious benefits of tangible user interfaces pointed out by Kato et al. (2000) is that users do not need to learn any complicated computer interface or command set to use them.

2.4 Augmented reality applied to archaeology

Augmented or mixed reality could be applied to the archaeological field in several ways. The first and most obvious way is an "on site" or outdoor augmentation performed on mobile devices as an "augmented walkthrough". "On site" augmentations generally focus on the correct registration of virtual elements over real ones, and interaction with virtual content depends on the mobile device used, as there is no workplace covered by sensors to support tangible interactions, for instance. In this context, the Archeoguide project (Vlahakis et al. 2002) is a good example of such a paradigm, enhancing the archaeological site of Olympia in Greece with augmented views of reconstructed ruins. The second way to apply AR to the archaeological field is an


"off-site" or indoor augmentation. "Off-site" augmentation generally relies on a tracked workspace where users can then focus on interactions with the virtual content, such as in the VITA project (Benko et al. 2004), where archaeologists can review the findings and layers of an excavation of the acropolis at Monte Polizzo, Western Sicily. Users can then cooperate by combining speech, touch and 3D hand gestures over either a "world in miniature" (WIM) representation or a life-size model of a portion of the excavation. Since archaeologists are used to reporting all findings on a map for off-site review, the "world in miniature" paradigm can then be used to augment such a map, as presented in Sect. 3.3.

3 Virtual and augmented reality for underwater archaeology

Before starting to build virtual environments reflecting the surveys performed on wreck sites, a case study was performed on a typical underwater archaeological survey, and archaeologists were interviewed concerning the requirements of such virtual environments. Archaeologists are mainly interested in the cargo, which leads to determining the period of the wreck, but also in the environment, which could explain the artefacts' layout. A full list of requirements was built concerning visualisation (full view and close range), navigation (free navigation, artefact-based navigation or diver's navigation) and interaction (an artefact's individual data facts, inventory and artefact statistics in terms of types, dimensions, locations and fragment status for broken artefacts). These requirements were later transposed into a list of features implemented in various ways within the different demonstrators.

Although all the proposed technologies have already been used in other fields, and even though computer graphics is a common tool to represent cultural heritage findings and results, our goal is to introduce VR and AR technology as a working tool for archaeologists, allowing them to perform actual archaeological tasks, rather than just as a presentation tool, as there are no previous such systems in underwater archaeology.

Within the framework of the VENUS project, we propose two distinct forms for the archaeological demonstrator, featuring both VR semi-immersive and VR immersive technologies. The immersive surrounding context of VR provides a kinesthetic depth perception that lets users better apprehend 3D structures and spatial relationships. It makes size, distance and angle judgments easier, since it is more like being in the real world than looking through the screen of a desktop monitor to the world behind it; the advantages arise from the difference between "looking at" a 2D image of a 3D environment on a conventional display screen and "being in" that 3D environment and basing spatial judgments relative to one's own moving body. On the other hand, by combining augmented reality techniques with tangible user interface elements, we can create interfaces in which users can interact with spatial data as easily as with real objects. Tangible AR interfaces remove the separation between the real and virtual worlds and so enhance natural interactions. The semi-immersive demonstrator is based on a large stereo display, and 3D navigation and interaction are based on 3D wireless joysticks (also called "flysticks"). The immersive demonstrator is based on the same navigation and interaction devices but uses a tracked HMD to provide complete surroundings to the user. Finally, the augmented reality demonstrator is based on a camera tracking system associated with a see-through HMD registering the user's viewpoint and allowing interaction with tangible interfaces over a map of the site.

3.1 Tracking technologies

The tracking technology is optical tracking from ART (2009) using two infrared cameras (see Fig. 6) to track unique patterns composed of retroreflective balls, which can be used in input devices such as flysticks and handles or head-tracked devices such as LCD glasses or HMDs. Optical tracking allows users to wear or handle wireless devices, as long as they can be equipped with a unique retroreflective ball pattern (as shown in Fig. 1).

The ART system is capable of measuring the position and orientation of several patterns at the same time (up to 64 balls) at a 60 Hz measurement rate (54 Hz if LCD shutter glasses synchronisation is required). The achievable accuracy of the tracking system is 0.4 mm for position and 0.12° for orientation within a 3 × 3 × 3 m volume. However, the average measured precision of the system is actually between 0.5 and 2 mm for position and 0.3° for orientation, due to slow decalibration of the system over several months, which can easily be corrected by a system recalibration from time to time.

3.2 Virtual reality demonstrators

We developed two versions of the VR application that use different device technologies. The first version works with simple input/output devices (mouse, keyboard and monitor) in order to easily run the demonstrator without needing any specific devices that can be difficult to transport.

In the second version, we employed more advanced visualisation and tracking devices to offer semi- or fully immersive navigation and more natural interaction with the environment.
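The accuracy behaviour described in Sect. 3.1 — a nominally 0.4 mm system whose measured precision drifts to 0.5–2 mm between recalibrations — can be monitored with a small check over repeated measurements of a static target. The following is an illustrative sketch, not part of the ART software: the sampling representation and the recalibration threshold are our assumptions; only the volume size and accuracy figures come from the text.

```python
import statistics

CAL_HALF_EXTENT_M = 1.5     # half of the calibrated 3 x 3 x 3 m volume
NOMINAL_ACCURACY_MM = 0.4   # vendor accuracy figure quoted in the text
RECAL_THRESHOLD_MM = 2.0    # assumed drift level at which recalibration is due

def in_calibrated_volume(p):
    """True if a position sample (x, y, z in metres, volume-centred
    coordinates) lies inside the calibrated tracking volume."""
    return all(abs(c) <= CAL_HALF_EXTENT_M for c in p)

def position_spread_mm(samples):
    """Largest per-axis standard deviation over repeated measurements of a
    static target, in millimetres: a rough indicator of the slow
    decalibration described in the text."""
    return 1000.0 * max(
        statistics.pstdev(s[axis] for s in samples) for axis in range(3))

def needs_recalibration(samples):
    return position_spread_mm(samples) > RECAL_THRESHOLD_MM
```

A perfectly repeatable target yields zero spread, while a target whose readings wander by a few millimetres over time trips the (assumed) recalibration threshold.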


Fig. 1 Tracking devices: a ART tracking camera, b flystick, c tracked LCD glasses and d tracked HMD

3.2.1 Virtual reality system description

This section presents the structure and construction of the virtual environment and the corresponding virtual reality system.

3.2.1.1 Virtual environment structure  All virtual environments for the VENUS project are developed around the "OpenSceneGraph" (OSG) open source high performance 3D graphics toolkit for VE modelling and visualization (Burns and Osfield 2004). The choice of OSG was motivated by the need for a high-level API abstracting rendering features for the 3D objects, scene control and camera view management, which is also flexible enough to develop specially tailored visualisation and interaction techniques wherever they are necessary. The main structure of the VE developed for archaeologists contains the various seabeds (large bathymetric seabed and photogrammetric seabed with textures) and the various artefacts (in our case amphorae) lying on the seabed and recorded in the database.

The construction of the VE is divided into 3 principal steps:

• Seabed: seabed meshes are loaded from an XML file containing 3D vertices and texture information.
• Artefacts: an initial request to the database is performed to retrieve artefact parameters such as location, orientation, status and artefact models. Then, registered artefact and marker 3D models are loaded.
• Virtual environment: these elements are placed in the virtual environment, and the navigation and interaction managers are started. When 3D interaction devices are available, a connection to input devices is opened by using a VRPN server (Taylor II et al. 2001). The interaction manager handles inputs and eventually sends queries to the database.

3.2.1.2 Virtual reality system architecture  The architecture of the VR system is composed of a database containing all required data such as photos, artefact parameters, 2D/3D object locations, etc. (see Fig. 2). The archaeological database contains the pictures taken during the survey and the 2D and 3D points of artefacts lying on the seabed measured during the photogrammetry process. When these points are labelled as belonging to a recognised artefact type, an actual

Fig. 2 VR system architecture


Fig. 4 VE devices technology

Fig. 3 Tools in the virtual environment recognized this task as the most common to all virtual
environments. It allows users to explore, investigate and/or
artefact could then be reconstructed in terms of location, operate in a virtual space. They identified two main com-
orientation and size, and all these artefacts’ parameters are ponents for navigation: travel and way finding (Bowman
stored in the database. Therefore, such a database could be et al. 1997), where they classified the different navigation
shared between the photogrammetric reconstruction process techniques into three basic motion tasks: the choice of
and the virtual environments designed to immersively direction or target, the choice of motion speed/acceleration
explore the site. In order for VE users to extract and study and choice of entry conditions (Bowman et al. 2005). For
properties of the cargo (registered artefacts), users interac- 3D interaction, we used 2 flysticks tracked by an ART
tion with artefacts are translated into SQL queries sent to the camera system that allows motion control and hence nav-
database, and results are displayed through selections or igation, and each flystick has 8 buttons and offers important
numeric data display depending on the nature of the results. number of choice to accomplish multiple tasks simulta-
Queries to the database can concern partial or complete neously. Display can be performed by a large screen with
inventory, metrology statistics (average size, similar sets, active stereo visualization or by a tracked head-mounted
etc.) or spatial relationships between artefacts. display (HMD) to increase immersion (see Fig. 4 for
tracked devices details).
3.2.1.3 Virtual reality interface and interactions The VE A new navigation technique has been developed (Haydar
interface is composed of many classical tools: menu bar, et al. 2009) using both hands to determine motion direction
information panel and popup message. The menu bar contains multiple submenus: the first is a selection menu that proposes visualization/selection of any type of artefact registered in the site; the second allows switching between the analysis and exploring navigation modes; the third allows hiding some parts of the VE (terrain) to highlight other parts (artefacts). In addition, several classical menus are provided by the interface, such as a font and colour manager and help and exit menus. The information panel displayed at the bottom of the VE (Fig. 3) shows information about object loading progress, user location or interaction results (e.g. "amphora Id 21 was selected"). A 3D popup message is displayed when the mouse passes over an object (or when the flystick selection ray casts an object), showing the type of the object or other information on selected objects.

3D interactions with a virtual environment can be divided into three principal tasks: navigation, selection and manipulation. Navigation, i.e. the control of the user's viewpoint, is the most important and the most frequently used task in a virtual environment. Bowman et al. (2005) presented several techniques to control motion direction and control speed. A similar technique has been proposed by Mine et al. (1997); it is based on the knowledge of both hands' positions, where speed is computed according to the distance between the two hands. Such a technique is cognitively difficult because the user may have difficulty controlling the motion speed through the gap between his two hands. We used the angle between the hands rather than the distance, as it is easier to control. The motion direction is then given by the axis orthogonal to the segment joining the hands' positions. The viewpoint of the user is defined by two 3D points: the eye point and the at point. Changing the user's viewpoint can be a translation of the viewpoint, a rotation, or both. A viewpoint translation is done by translating both the eye and at points, while a rotation is done by rotating the at point around the eye point. The motion speed is defined by the value of the translation step and the value of the rotation angle at each frame update. Motion speed is computed according to the angle α between the hands. The rotation direction and speed are given by the angle β between the segment joining the hands' positions and the horizontal axis (Fig. 5).
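The bimanual control described here can be condensed into a short 2D sketch. The following Python is our own illustration (the demonstrators themselves were not written in Python, and the two speed constants are placeholder values): `speeds` applies the angle-driven speed rules, and `advance` performs the corresponding eye/at viewpoint update.

```python
import math

ST_MAX = 0.5   # max translation step per frame (placeholder value)
SR_MAX = 2.0   # max rotation step per frame, in degrees (placeholder value)

def speeds(p1, p2, r1, r2):
    """Translation speed St and rotation speed Sr from two hand poses.

    p1, p2: (x, y) flystick positions; r1, r2: hand angles in degrees,
    measured against the horizontal axis.
    """
    alpha = r2 - r1                       # angle between the two hands
    dy = p2[1] - p1[1]
    beta = math.degrees(math.asin(dy / math.hypot(p2[0] - p1[0], dy)))

    # Translation speed: maximal when the hands are parallel, decreasing
    # as the angle between them opens up.
    if abs(alpha) <= 10:
        st = ST_MAX
    elif abs(alpha) >= 140:
        st = 0.0
    else:
        st = (1 - abs(alpha) / 140) * ST_MAX
    if abs(beta) >= 60:
        st = 0.0                          # steeply tilted pair: pure rotation

    # Rotation: the sign of beta picks the turning side; a +/-10 degree
    # dead zone filters out hand shaking.
    sr = 0.0 if abs(beta) <= 10 else (beta / 90) * SR_MAX
    return st, sr

def advance(eye, at, st, sr):
    """Rotate `at` around `eye` by sr degrees, then translate both points
    by st along the viewing direction (2D version of the update)."""
    ex, ey = eye
    c, s = math.cos(math.radians(sr)), math.sin(math.radians(sr))
    ax = ex + (at[0] - ex) * c + (at[1] - ey) * s
    ay = ey + (at[1] - ey) * c - (at[0] - ex) * s
    theta = math.atan2(ax - ex, ay - ey)  # camera heading
    dx, dy = st * math.sin(theta), st * math.cos(theta)
    return (ex + dx, ey + dy), (ax + dx, ay + dy)
```

Holding the flysticks level and parallel, for instance (`speeds((0, 0), (1, 0), 0, 0)`), yields full translation speed and no rotation.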


Fig. 5 Speed and new position computing

We consider C(x_c, y_c, z_c), the centre of [P1P2], as the origin of the coordinate system. All coordinates can be computed in this new coordinate system by a simple translation T(x_c, y_c, z_c). Given the positions of the two points eye(x_eye, y_eye, z_eye) and at(x_at, y_at, z_at) of the current frame view f (the camera position), the positions P1(x_p1, y_p1, z_p1) and P2(x_p2, y_p2, z_p2) of the hands (the flysticks' positions) and the angles r_p1 and r_p2 between the hands and the XX′ axis, we need to compute the two new positions eye′(x_eye′, y_eye′, z_eye′) and at′(x_at′, y_at′, z_at′) of the new frame view f′. We define the coefficients St, Sr, St.max and Sr.max as the translation speed, rotation speed, maximum translation speed and maximum rotation speed, respectively; St.max and Sr.max are predefined according to application needs. The values of α and β are given by:

  α = r_p2 − r_p1
  β = arcsin(Δy / |P1P2|),  where Δy = y_p2 − y_p1          (1)

Then, St and Sr are defined as follows:

  if |α| ≤ 10°  ⇒ St = St.max
  if |α| ≥ 140° ⇒ St = 0
  if |β| ≤ 10°  ⇒ Sr = 0
  if |β| ≥ 60°  ⇒ St = 0

Otherwise,

  St = (1 − α/140) · St.max
  Sr = (β/90) · Sr.max          (2)

The β value is limited to between −90° and 90°, and the rotation direction is defined by the sign of β: the rotation is clockwise when β is negative and anticlockwise when β is positive. To avoid motion noise due to the user's hands shaking, we define a noise angle value for rotation and translation: when β is between −10° and 10°, we consider the motion a pure translation and the rotation speed is null; whenever α is between −10° and 10°, the translation speed is considered maximal. The values of the two new point positions eye′(x_eye′, y_eye′, z_eye′) and at′(x_at′, y_at′, z_at′) are given by the following equations. First, we apply the rotation Sr around the point eye:

  x_at′ = x_eye + (x_at − x_eye) · cos Sr + (y_at − y_eye) · sin Sr
  y_at′ = y_eye + (y_at − y_eye) · cos Sr − (x_at − x_eye) · sin Sr
  z_at′ = z_at          (3)

Then, we apply the translation:

  x_eye′ = x_eye + St · sin θ
  y_eye′ = y_eye + St · cos θ
  z_eye′ = z_eye
  x_at′ = x_at1 + St · sin θ
  y_at′ = y_at1 + St · cos θ
  z_at′ = z_at1          (4)

where θ is the camera rotation around the ZZ′ axis of the viewer's coordinate system (at1 denoting the at point resulting from the rotation of Eq. (3)); rotating by θ makes the camera and the viewer coordinate systems coincide.

3.2.2 Virtual reality demonstrators setup

3.2.2.1 Semi-immersive demonstrator setup The semi-immersive demonstrator (see Fig. 6) allows a human-scale representation of the virtual environment with simultaneous navigation and interaction. It mimics the divers' paradigm and hence recreates, but also enhances, the diving process by allowing user interaction with the data collected on the cargo. Several archaeologists can easily share the same immersion level to collaborate in front of the large screen and benefit from the stereo view. However, only one stereo viewpoint can be modified on the display according to a tracked head position.

The semi-immersive demonstrator uses the Evr@ platform at UEVE, featuring a large screen (3.2 × 2.4 m) that allows a user standing 1.5 m away from the screen to experience a 94° horizontal and a 77° vertical field of view. Navigation and interaction through the virtual environment are controlled by one or two flysticks.

3.2.2.2 Immersive demonstrator setup The immersive demonstrator takes up most of the semi-immersive demonstrator's features, such as life-size stereo viewing and simultaneous navigation and interaction, but adds the look-around capability that has been shown to enhance users' orientation and mobility within the virtual environment (see Ruddle et al. 1999 and Bowman et al. 2002).

The same tracking technologies as in the semi-immersive demonstrator are used, but a head-mounted display (HMD) is used for viewing the virtual environment. The chosen NVisor helmet offers a 50° field of view with a 100% overlap between the eye displays, hence ensuring a


complete stereo display with a double 1,280 × 1,024 resolution (see Fig. 7).

Fig. 6 Semi-immersive demonstrator setup

Fig. 7 Immersive demonstrator setup

From a larger point of view, such an HMD with "see-through" capabilities could also be used in outdoor environments, such as the terrestrial archaeological site reported by Drap et al. (2006). However, in this case, another kind of localisation sensor is required, such as the ones developed for the RAXENV project² (Zendjebil et al. 2008).

² http://raxenv.brgm.fr/.

3.3 Augmented reality demonstrator

Since archaeologists' interest is mainly focused on the nature of the cargo, one of the first feedbacks from archaeologists concerning the VR demonstrators was that immersive navigation did not provide much help for archaeological tasks, in opposition to general public concerns, where immersive navigation provides a deeper experience of the site. This observation led us to propose an augmented map-based navigation paradigm, such as the "World in Miniature" proposed by Stoakley et al. (1995) and later applied to augmented reality (Bell et al. 2002), which provides a much more familiar interface to archaeologists. Indeed, archaeologists find it easier to work with maps, where they can see the real world, than within a totally immersive environment in which it is difficult to stay localised. Moreover, the augmented reality paradigm offers the opportunity to add a tangible interface (Ishii and Ullmer 1997; Poupyrev et al. 2001) to the tools developed in the VR demonstrator for archaeologists. These elements led to the definition of a new demonstrator for archaeologists: AR VENUS.

In AR VENUS, archaeologists use a real map representing the deep underwater site. AR VENUS proposes to enrich this environment and complete the real-world perception by adding synthetic elements to it, rather than immersing the archaeologist in a completely simulated artificial world. AR VENUS provides an easy tool to interact with the real world using a tangible interface (in our case, physical objects equipped with visual targets) to select and manipulate virtual objects, using pose estimation algorithms to display artefact models at the right location on the 2D map. Users need to wear special equipment, such as a "see-through" head-mounted display, to see the map augmented in real time with computer-generated features (see Fig. 8a).

3.3.1 3D map overlay

The first step in AR VENUS is to project the 3D models of the seabed on the real 2D map using a system of visual marker identification and a pose estimation algorithm. For this visual tracking module, we used a simple webcam tracking visual markers made up of printed 60 × 60 mm black and white fiducials. The tracking algorithm computes the real camera position and orientation relative to the physical markers in real time, and also identifies the content of each fiducial as a unique identifier (see Fig. 8b). Some fiducials are stuck on the real map in order to compute the


Fig. 8 The AR VENUS system and pose estimation

pose of the virtual environment over the real map, whereas others are used to interact.

We used the OSGART library to identify targets and overlay the 3D models on the real scene. OSGART has been designed to provide an easy bidirectional transition from VR to AR (Looser et al. 2006) by integrating ARToolKit (Kato et al. 1999) within OpenSceneGraph. The tracking library finds all squares in the binary image; for each square, the pattern inside is captured and matched against pretrained pattern templates. The square size and pattern orientation are used to compute the position of the camera relative to the physical marker; hence, the pose accuracy mostly depends on the marker size. Figure 8b shows the different steps of the pose estimation algorithm (also called registration).

3.3.2 Virtual objects registration

We used single and multiple targets at different scales to improve the tracking stability and accuracy. We started our tests using a single marker. The results obtained with a single marker were not accurate, and we noticed a large shift between the virtual model and the real one represented on the 2D map. The size ratio between the small target and the large map did not provide a correct registration, which led us, after trying a larger target, to consider a multitarget tracking approach, since these targets all lie on the same map plane.

3.3.3 Tangible interface

We saw in the previous section that static fiducials are used to register the virtual environment and artefacts; however, other targets can also be moved around the map and associated with virtual tools, allowing the users to interact with the augmented environment through tangible user interfaces (TUI). Much research work has been done in this field over the last years (Kato et al. 2000), and several interfaces have been developed for the manipulation and exploration of digital information (Gorbet et al. 1998) using physical objects as interaction tools in virtual environments (Ishii and Ullmer 1997). Several moving targets have been associated with virtual tools; these tools are activated whenever the camera identifies their corresponding patterns and discarded when they are no longer visible:

• selection tool: The selection tool represents two sides of the same tangible "pallet" or "tile", as denoted by Billinghurst et al. (2001). The first side is used to trigger a nearby-object search; when an object is selected, we can then flip to the other side for a closer inspection of the selected object (Fig. 9a). Tracking of the selection tile is important in this case, since objects are selected when the tile's target is close to the object of interest. The selection of amphorae is performed using a probe target, and the selected object stays attached to the selection probe for closer investigation. The developed technique consists in computing the distance between the centre of the marker attached to the selection tool and the centres of the amphorae obtained from the archaeological database. For deselecting, another marker is attached to the other side of the selection probe: when this marker is visible, the amphora is deselected and placed back into its original position on the map.
• measuring tool: The measuring tool allows measuring distances (expressed in the VE dimensions) between two specific targets moved around on the real map (see Fig. 9b).
• inventory tool: Whenever the inventory tool target appears to the tracking camera, a virtual site inventory (in terms of artefact types and occurrences) is attached above the target and can be placed on the map or handled for a closer inspection (see Fig. 9c).
• grid tool: The grid tool allows displaying a north-south oriented regular grid on the site (see Fig. 9d). The grid


Fig. 9 AR VENUS tools

target uses the same principle of a two-sided tile as the selection tool, in order to provide two different grid steps. Tracking the targets on the tile is not important in this case, as only target recognition is used to trigger the two possible grids.

4 Evaluation

Some recent work (Dünser et al. 2008; Domingues et al. 2008) addressing the evaluation problems in VR and AR systems confirms the domination of objective and subjective measurements, the two measurement methods used in empirical evaluations; they are found in many evaluations (McMahan and Bowman 2007; Looser et al. 2007; Bowman et al. 2002). Objective measurement studies include objective measures such as completion times and accuracy/error rates, and statistical analyses are generally made on the measured variables. Subjective measurement studies users through questionnaires; they also employ statistical analysis of the results or a descriptive analysis.

4.1 Subjective evaluation

The functionalities offered by the various demonstrators have been evaluated by archaeologists during a workshop. The goal of the workshop was to present the already developed features to the archaeologists, but also to trigger feedback by having the archaeologists actually experiment with the demonstrator by themselves. After a short presentation of the goals and means of the archaeologists' demonstrators and a quick demo of the main features, each of the archaeologists was asked to perform the following tasks:

• Artefacts group selection.
• Individual artefact selection.
• Circular area selection.
• Distance measuring between two artefacts.
• Add an annotation on the site, modify it, then remove the annotation.
• Switch between seabed display modes (regular → grid → topographic).
• Display snapshot locations and images related to an artefact.

The survey was conducted individually with 11 archaeologists on two laptops running the low-end demonstrator. By the end of the test, they were asked to fill in a questionnaire dedicated to helping us point out what is good and bad in the VR demonstrator for archaeologists. The first questionnaire was about participant skills and habits concerning computer usage, and more specifically the use of 3D and VR software, as well as their current state by the end of the test. The second questionnaire was about the usability of the archaeological demonstrator in terms of


Fig. 10 Usability summary

navigation, interaction, accessing the tools and displayed data. The third questionnaire referred to users' satisfaction and allowed users to suggest improvements. The exploitation of these evaluation results is part of the validation of the developed software, as well as part of an iterative-cycle methodology. This workshop was a unique opportunity to get feedback from a large panel of (well-known) archaeologists.

The survey was performed by 7 women and 4 men, aged between 31 and 60 with an average of 43 ± 10 years old, hence providing a rather senior panel of researchers. Most of them use computers on a daily basis. More than half of the users had already used 3D software, and a bit less than half of them had already used virtual reality software.

4.1.1 Usability

Users were asked to rank the difficulty (on a 1–5 scale) of various aspects of the demonstrator, such as navigation, interaction and accessing tools, and also to rank whether the data displays associated with the various selection methods were satisfying. Individual data facts refer to the data sheet associated with a single selected object (as shown in Fig. 11), type data facts refer to the information displayed by selecting all instances of a specific type of artefact (see also Fig. 12) and area data facts refer to the information displayed during area-based selection (see also Fig. 13). Figure 10 presents the assessments of these various topics. The following analysis of the most frequent actions is based on the results of the questionnaires as well as on the archaeologists' comments written down by the two experimenters during the test.

4.1.1.1 Navigation Navigation (whether free or guided by the artefacts) was considered easy (or very easy) by 64% of the users; however, 36% gave marks between 2 and 3, which tends to indicate that they encountered some difficulty in navigating. This might explain the suggestions for new navigation modes (such as a diver's mode) found among the possible improvements.

4.1.1.2 Interaction Interaction marks feature the same distribution as navigation: 64% found it easy and 36% encountered some difficulties. Once again, this might explain the number of improvement requests concerning the "Interaction with artefacts" feature (14 improvement requests out of 27).

4.1.1.3 Tools access Tools are mainly situated in the menu and were considered easy to access by a majority of 73% of the users; however, 27% of them gave a mark of 3 and would have appreciated a clearer separation of the tools and the various options in the menu.

4.1.1.4 Individual data facts Individual data facts, presenting the data sheet of a selected artefact, were considered satisfactory by 55% of the users; however, this also means that 45% of them were not satisfied, or at least had not a good opinion of them. This also shows through the improvement requests, as 5 of them directly concerned "Individual data facts". The comments made on this point focused on the numerical aspect of the information presented to the user (location and statistics), whereas they would have appreciated pieces of information like orientation and tilt, or an object visual reference (as displayed in classical artefact catalogues such as the Dressel


Fig. 11 Individual data facts

Fig. 12 Type data facts, selected artefacts and fracture lines

catalogue), which has been partially fulfilled by a new information panel (see Fig. 11).

4.1.1.5 Type data facts Type data facts present the exact opposite repartition, as only 45% of the users were satisfied by the information displayed when selecting types. Once again, the numerical nature of the information panel (see Fig. 12) could have been enhanced by showing an instance (the 3D model) of the selected type in the information panel. Besides, selecting a type leads to a viewpoint change encompassing all instances of the selected type, as well as a highlight of the selected artefacts. As a matter of fact, this highlighting could easily be confused with the fracture lines currently drawn on broken artefacts (see also Fig. 12), and this might also explain why the improvement request concerning a "better enhancement of selected artefacts" occurred 4 times in the survey.

4.1.1.6 Area data facts 55% of the users were satisfied by the information panel displayed during circular area selection, featuring the population of selected artefacts with artefact types and occurrences (see Fig. 13). But 45% had no opinion or were dissatisfied, once again because of the numerical nature of the information panel, which could have been enhanced with artefact type silhouettes.

This subjective evaluation performed with archaeologists allowed us to improve the core functionalities of the demonstrators and also to add some new features to the virtual environment, such as the "diver's navigation" mode, which allows archaeologists to review the site and the measured artefacts just like divers would during the photogrammetric survey.
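As a side note, the satisfaction splits quoted throughout Sect. 4.1.1 (64%/36%, 73%/27%, 55%/45%) come from simple tallies over the 11 questionnaires. The helper below is our own illustration; treating marks of 4–5 on the 1–5 scale as "easy"/"satisfactory" is an assumption, since the survey does not state the exact cut-off:

```python
def mark_split(marks, threshold=4):
    """Split questionnaire marks (1-5 scale) into the percentage at or
    above `threshold` and the remainder, rounded to whole percents."""
    high = sum(1 for m in marks if m >= threshold)
    pct = round(100 * high / len(marks))
    return pct, 100 - pct
```

With 11 participants, for instance, 7 marks of 4 or 5 reproduce the reported 64%/36% navigation split.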


Fig. 13 Area data facts

Fig. 14 Navigation task completion time under various conditions

4.2 Objective evaluation

In addition to the subjective evaluation mentioned above, we also evaluated the benefits of immersive versus nonimmersive VR by comparing user performances on three distinct forms of the VR demonstrator: in our case, the low-end demonstrator, the semi-immersive demonstrator and the immersive demonstrator.

4.2.1 Evaluation setup

The evaluated platforms are:

• D1: the low-end demonstrator (nonimmersive), featuring only standard input/output devices such as a keyboard, a mouse and a 19″ monitor.
• D2: the semi-immersive demonstrator, featuring a large stereo screen display and two flysticks for navigation and selection tasks. Snowplough is used as the navigation metaphor with the flysticks (as exposed in 3.2.1.3).
• D3: the immersive demonstrator, featuring a tracked VR helmet allowing to look around, and the same two flysticks.

Navigation on demonstrators (D2) and (D3) uses the same technique and the same interface.

4.2.2 Evaluation protocol and experiment

In this experiment, a navigation task was carried out on all three platforms. Fifteen volunteers (six women and nine men) performed this experiment; all of them were right-handed. Each subject was given a pretest along with a short briefing. The subjects were divided into 3 groups, and each group performed the experiment using the demonstrators in the following order: D1, D2 and D3 for group 1; D2, D3 and D1 for group 2; and finally D3, D1 and D2 for group 3. Each group carried out four tests of each task on each demonstrator. The evaluation is based on task completion time for the navigation.

4.2.2.1 Task completion time The navigation task consisted in reaching a sequence of hotspots within the virtual site (progressively appearing and disappearing once they have been reached).

The average completion time for this task was 99 ± 26 s with the immersive demonstrator, 120 ± 43 s with the semi-immersive demonstrator and 187 ± 81 s with the nonimmersive demonstrator, with a significant ANOVA (P = 0.000185 < 0.01). These results (presented in Fig. 14) show that immersive conditions have an influence on navigation task performance, and that the immersive demonstrator provided the best navigation task performances.
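Such a significance test can be reproduced from raw per-subject completion times with a standard one-way ANOVA. The helper below is our own pure-Python sketch of the F statistic (the study's actual analysis tool is not specified, and the numbers in the test case are placeholders, not the experiment's data):

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic for k independent groups of
    completion times."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-group variability, k - 1 degrees of freedom
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # within-group variability, n - k degrees of freedom
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F value is then looked up in the F distribution with (k − 1, n − k) degrees of freedom to obtain the P value (here, P = 0.000185 < 0.01).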


Fig. 15 Illustration of user learning for various conditions in the navigation task

4.2.2.2 User learning Learning is defined here as the improvement of group performance across task repetitions. Figure 15 shows user learning during the navigation task for the various conditions (immersive, semi-immersive and nonimmersive demonstrators). The results show that under the immersive condition, the average completion time was 123 ± 35 s during the first test and 81 ± 20 s during the fourth test. The average completion time under the semi-immersive condition was 147 ± 71 s during the first test and 101 ± 32 s during the fourth test. Similarly, the average completion time under the nonimmersive condition was 242 ± 115 s for the first test and 125 ± 54 s for the last test. These results show a navigation performance improvement of 34.22, 31.09 and 48.24% for the immersive, semi-immersive and nonimmersive conditions, respectively.

In this section, we introduced the first results of the evaluation of the three demonstrators (immersive, semi-immersive and nonimmersive). Two types of evaluation have been accomplished: objective and subjective. We principally compared the influence of the immersion type on navigation and selection task performances. Results show that the performances for the navigation task are clearly better with the immersive demonstrator; however, the performances for the selection task are better with the nonimmersive one. It is necessary to note that the preliminary objective evaluation introduced here is based on task completion time measurements. Other measurements (selection and/or navigation errors) can be considered in the future. In the same way, the use of virtual guides (Rosenberg 1993; Otmane et al. 2000; Prada and Payandeh 2009) as assistance tools for navigation and selection could also contribute to improving 3D interaction task performances. Some other evaluations, concerning the augmented reality (AR) demonstrator, still need to be performed. We will attempt to study the influence of this type of immersion on navigation and selection task performances and to compare it with the three previous demonstrators.

5 Conclusion and perspectives

We have described an attempt to introduce VR and AR technologies in the field of underwater archaeology as a working tool rather than the usual presentation tool. Pursuing this goal, we tried to apply various levels of immersion and interaction facilities to a virtual environment designed to allow archaeologists to review and study underwater wreck sites, possibly unreachable to divers, reconstructed in terms of environment (seabed) and content (cargo) only through bathymetric and photogrammetric surveys. Several demonstrators using VR and AR technologies were built around this environment, allowing us to start evaluating the benefits of these technologies with archaeologists.

A first subjective evaluation with archaeologists allowed us to review and amend the features of the virtual environment and also to introduce new features resulting from this first "handover", entering the iterative cycle of refining the features. In a similar way, preliminary evaluations have been performed to compare the immersive and nonimmersive VR demonstrators in terms of performance gain, in order to assess the benefits of immersive VR. However, deeper evaluation needs to be performed, as well as an evaluation of the AR demonstrator against the other ones.

Using these innovative methods of research and dissemination can capture the imagination of the general public and generate interest not only in the historical aspect of archaeology but also in the work and expertise that go into supporting these archaeological surveys.

Acknowledgments VENUS is partially supported by the European Community under project VENUS (Contract IST-034924) of the "Information Society Technologies (IST) programme of the 6th FP for RTD". The authors are solely responsible for the content of this paper. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.

References

Acevedo D, Vote E, Laidlaw DH, Joukowsky MS (2001) Archaeological data visualization in VR: analysis of lamp finds at the great temple of Petra, a case study. In: Proceedings of the 12th IEEE conference on visualization (VIS '01). IEEE Computer Society, Washington, DC, pp 493–496
ART (2009) Advanced realtime tracking
Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B (2001) Recent advances in augmented reality. IEEE Comput Graph Appl 21(6):34–47
Barceló JA (2000) Visualizing what might be: an introduction to virtual reality techniques in archaeology. Virtual reality in archaeology. British Archaeol Rep S843:9–36
Bell B, Höllerer T, Feiner S (2002) An annotated situation-awareness aid for augmented reality. In: UIST '02: proceedings of the 15th annual ACM symposium on user interface software and technology. ACM, New York, pp 213–216


Benko H, Ishak EW, Feiner S (2004) Collaborative mixed reality visualization of an archaeological excavation. In: ISMAR '04: proceedings of the 3rd IEEE/ACM international symposium on mixed and augmented reality. IEEE Computer Society, Washington, DC, pp 132–140
Billinghurst M, Bowskill J, Dyer N, Morphett J (1998) An evaluation of wearable information spaces. In: VRAIS '98: proceedings of the virtual reality annual international symposium. IEEE Computer Society, Washington, DC, p 20
Billinghurst M, Kato H, Poupyrev I (2001) Collaboration with tangible augmented reality interfaces. In: HCI '2001: international conference on human computer interaction, New Orleans
Bowman DA, Kruijff E, LaViola JJ, Poupyrev I (2005) 3D user interfaces: theory and practice. Addison Wesley Longman Publishing Co., Inc., Redwood City
Bowman DA, Datey A, Ryu YS, Farooq U, Vasnaik O (2002) Empirical comparison of human behavior and performance with different display devices for virtual environments. In: Proceedings of the human factors and ergonomics society annual meeting (HFES '02). Human Factors and Ergonomics Society, pp 2134–2138
Bowman DA, Koller D, Hodges LF (1997) Travel in immersive virtual environments: an evaluation of viewpoint motion control techniques. In: VRAIS '97: proceedings of the 1997 virtual reality annual international symposium. IEEE Computer Society, Washington, DC, p 45
Burns D, Osfield R (2004) Open scene graph a: introduction, b: examples and applications. In: VR '04: proceedings of the IEEE virtual reality 2004. IEEE Computer Society, Washington, DC, p 265
CC (2009) The cultural computing program
Cosmas J, Itagaki T, Green D, Grabczewski E, Weimer F, Van Gool L, Zalesny A, Vanrintel D, Leberl F, Grabner M, Schindler K, Karner K, Gervautz M, Hynst S, Waelkens M, Pollefeys M, DeGeest R, Sablatnig R, Kampel M (2001) 3D MURALE: a multimedia system for archaeology. In: VAST '01: proceedings of the 2001 conference on virtual reality, archeology, and cultural heritage. ACM, New York, pp 297–306
Cruz-Neira C, Sandin DJ, DeFanti TA (1993) Surround-screen projection-based virtual reality: the design and implementation of the CAVE. In: Proceedings of the 20th annual conference on computer graphics and interactive techniques. ACM, New York, pp 135–142
Domingues C, Otmane S, Davesne F, Mallem M (2008) Creating 3D interaction technique empirical evaluation with the use of a knowledge database of interaction experiments. In: Human system interactions, 2008 conference on, pp 170–175
Drap P, Grussenmeyer P (2000) A digital photogrammetric workstation on the WEB. J Photogramm Remote Sensing 55(1):48–58
Drap P, Long L (2001) Towards a digital excavation data management system: the "Grand Ribaud F" Etruscan deep-water wreck. In: VAST '01: proceedings of the 2001 conference on virtual reality, archeology, and cultural heritage. ACM, New York, pp 17–26
Drap P, Nedir M, Seinturier J, Papini O, Chapman P, Boucault F, Viant W, Vannini G, Nuccioti M (2006) Toward a photogrammetry and virtual reality based heritage information system: a case study of Shawbak castle in Jordan. In: Joint event conference of the 37th CIPA international workshop dedicated on e-documentation and standardisation in cultural heritage, 7th VAST international symposium on virtual reality, archaeology and cultural heritage, 4th Eurographics workshop on graphics and cultural heritage and 1st Euro-Med conference on IT in cultural heritage
Dünser A, Grasset R, Billinghurst M (2008) A survey of evaluation techniques used in augmented reality studies. In: International conference on computer graphics and interactive techniques. ACM, New York
Gaitatzes A, Christopoulos D, Roussou M (2001) Reviving the past: cultural heritage meets virtual reality. In: VAST '01: proceedings of the 2001 conference on virtual reality, archeology, and cultural heritage. ACM, New York, pp 103–110
Germs R, Van Maren G, Verbree E, Jansen FW (1999) A multi-view VR interface for 3D GIS. Comput Graph 23(4):497–506
Gorbet MG, Orth M, Ishii H (1998) Triangles: tangible interface for manipulation and exploration of digital information topography. In: CHI '98: proceedings of the SIGCHI conference on human factors in computing systems. ACM Press/Addison-Wesley Publishing Co, New York, pp 49–56
Hall RA, Hall AW, Arnold JB III (1997) Presenting archaeology on the Web: the La Salle Shipwreck Project. Int J Nautical Archaeol 26(3):247–251
Haydar M, Maïdi M, Roussel D, Mallem M (2009) A new navigation method for 3D virtual environment exploration. In: AIP (ed) The 2nd Mediterranean conference on intelligent systems and automation (CISA 2009), vol 1107. AIP, Zarzis (Tunisia), pp 190–195
Hinckley K, Tullio J, Pausch R, Proffitt D, Kassell N (1997) Usability analysis of 3D rotation techniques. In: Proceedings of the 10th annual ACM symposium on user interface software and technology. ACM, New York, pp 1–10
Ishii H, Ullmer B (1997) Tangible bits: towards seamless interfaces between people, bits and atoms. In: CHI '97: proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, pp 234–241
Kato H, Billinghurst M, Blanding B, May R (1999) ARToolKit. Technical report, Hiroshima City University
Kato H, Billinghurst M, Poupyrev I, Imamoto K, Tachibana K (2000) Virtual object manipulation on a table-top AR environment. In: ISAR '00: proceedings of the international symposium on augmented reality, Munich, pp 111–119
Looser J, Billinghurst M, Grasset R, Cockburn A (2007) An evaluation of virtual lenses for object selection in augmented reality. In: Proceedings of the 5th international conference on computer graphics and interactive techniques in Australia and Southeast Asia. ACM, New York, pp 203–210
Looser J, Grasset R, Seichter H, Billinghurst M (2006) OSGART—a pragmatic approach to MR. In: International symposium of mixed and augmented reality (ISMAR 2006), Santa Barbara
McMahan RP, Bowman DA (2007) An empirical comparison of task sequences for immersive virtual environments. In: IEEE symposium on 3D user interfaces
Mine MR, Brooks FP Jr, Sequin CH (1997) Moving objects in space: exploiting proprioception in virtual-environment interaction. In: SIGGRAPH '97: proceedings of the 24th annual conference on computer graphics and interactive techniques, vol 31. ACM Press/Addison-Wesley Publishing Co, pp 19–26
Otmane S, Mallem M, Kheddar A, Chavand F (2000) Active virtual guide as an apparatus for augmented reality based telemanipulation system on the Internet. In: Proceedings of the 33rd annual simulation symposium (SS 2000). IEEE Computer Society
Poupyrev I, Tan DS, Billinghurst M, Kato H, Regenbrecht H, Tetsutani N (2001) Tiles: a mixed reality authoring interface. In: INTERACT 2001 conference on human computer interaction, Tokyo, pp 334–341
Prada R, Payandeh S (2009) On study of design and implementation of virtual fixtures. Virtual Real J 13(2):117–129
Reichlen BA (1993) Sparcchair: a one hundred million pixel display. In: 1993 IEEE virtual reality annual international symposium, pp 300–307
Rheingold H (1991) Virtual reality. Summit Books, London
Rosenberg L (1993) Virtual fixtures: perceptual tools for telerobotic manipulation. In: Proceedings of IEEE virtual reality international symposium, pp 76–82
Ruddle RA, Payne SJ, Jones DM (1999) Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays? Presence: Teleoperators Virtual Environ 8(2):157–168
Stoakley R, Conway MJ, Pausch R (1995) Virtual reality on a WIM: interactive worlds in miniature. In: CHI '95: proceedings of the SIGCHI conference on human factors in computing systems. ACM Press/Addison-Wesley Publishing Co, New York, pp 265–272
Taylor RM II, Hudson TC, Seeger A, Weber H, Juliano J, Helser AT (2001) VRPN: a device-independent, network-transparent VR peripheral system. In: VRST '01: proceedings of the ACM symposium on virtual reality software and technology. ACM, New York, pp 55–61
Tosa N, Matsuoka S, Ellis B, Ueda H, Nakatsu R (2009) Cultural computing with context-aware application: ZENetic computer. Lect Notes Comput Sci 3711:13–23
Van Dam A, Laidlaw DH, Simpson RM (2002) Experiments in immersive virtual reality for scientific visualization. Comput Graph 26(4):535–555
Vlahakis V, Ioannidis N, Karigiannis J, Tsotros M, Gounaris M, Stricker D, Gleue T, Daehne P, Almeida L (2002) Archeoguide: an augmented reality guide for archaeological sites. IEEE Comput Graph Appl 22(5):52–60
Vote E, Feliz DA, Laidlaw DH, Joukowsky MS (2002) Discovering Petra: archaeological analysis in VR. IEEE Comput Graph Appl 22(5):38–50
Wang F-Y (2009) Is culture computable? IEEE Intell Syst 24(2):2–3
Watts GP, Knoerl TK (2007) Out of the blue—public interpretation of maritime cultural resources, chapter entering the virtual world of underwater archaeology. Springer, pp 223–239
Zendjebil IM, Ababsa F, Didier J, Vairon J, Frauciel L, Hachet M, Guitton P, Delmont R (2008) Outdoor augmented reality: state of the art and issues. In: Virtual reality international conference, pp 177–187