
Convegno Nazionale XIV ADM - XXXIII AIAS

Innovazione nella Progettazione Industriale

Bari, 31 Agosto - 2 Settembre 2004


SMART TOOLS FOR VIRTUAL REALITY BASED CAD

M. Fiorentino, G. Monno, A. E. Uva

Politecnico di Bari, m.fiorentino | gmonno | a.uva @poliba.it

Abstract

Usability is a major issue for virtual reality based design tools (VRAD), since their interaction techniques are not yet fully investigated. Human factors such as pointing precision, fatigue, hand vibrations, lack of limb support, and interaction anisotropy should be taken into account in order to obtain an interface that is more effective than the 2D one. This work presents an ongoing study addressing human performance in VR during common CAD tasks: picking, pointing, and line drawing. Tests confirm a performance reduction along the user's head-to-hand direction, mainly due to occlusion and lack of appropriate feedback. Three virtual tools are presented here to overcome the interaction anisotropy: the “Ortho Tool”, the “Smart Transparency”, and the “Smart Object Snap”. The new interface has shown better user performance and improved model understanding. The results achieved in this work contribute not only to VRAD development, but also to other virtual reality applications, because their context can be easily extended.

Keywords: Virtual Reality, 3D interaction, CAD, Precise manipulation

1. Introduction

Rapid developments in computer graphics, position tracking, image recognition, and wireless connections are nowadays disclosing new and interesting features for the next generation of virtual reality based design tools (VRAD).

The main advantages of VRAD against traditional 2D CAD tools can be summarized as follows:

- stereo vision: better shape perception (solids, free form);

- spatial input: 3D interaction advantages from real life skills;

- six degrees of freedom (6DOF): translation and rotation based control;

- head tracking: enhancement of navigation and model insight;

- mixed reality: fusion of digital (virtual) images with real world;

- tangible interfaces: improvement of precision/usability due to real object devices;

- interaction freedom: encouragement of creativity and of the development of new ideas.

Many authors have previously attempted to develop VRAD systems. Unfortunately, the lack of an effective interface and an incomplete understanding of the principles governing modelling in such an innovative environment often limited these efforts to academic prototypes not suitable for valuable industrial use [7].

The ongoing development by the authors of the Spacedesign VRAD application [8] has also given rise to several issues concerning interface design: widget and snap dimensions, tracking filtering, and user intention recognition.

Previous tracking systems suffered from drawbacks in precision, latency, resolution, and repeatability. Therefore, much of the research effort was diverted towards tracking error reduction, filtering, and position prediction. Recent developments in optical devices set new precision standards, which are far below user accuracy, offering new interesting insights [11].

Human factors such as pointing precision, fatigue, hand vibrations, lack of limb support, and interaction anisotropy should be taken into account in a VR-based CAD in order to obtain an interface that is more effective than the 2D one (Figure 1).

The aim of this work is thus twofold: first, to further investigate the human asymmetry while performing basic modelling tasks in VR; second, to propose solutions to improve interface efficiency.


Figure 1. Human issues in VRAD applications.

2. Related Work

Virtual reality is not a novel technology; thirty years of study have concerned two different fields: applied research and human-computer interaction (HCI).

Applied research in VRAD was carried out in the past using various types of input devices (3D mouse, gloves, wand, fly stick, gaze control, etc.), different configurations (immersive VR, semi-immersive VR, desktop VR, etc.), and specific modelling tasks: solid modelling [17], free form [6], conceptual styling [19], assembly [18], virtual prototyping [3], and sculpting [5]. These VRAD implementations are scientifically relevant, but in most cases the related literature lacks systematic performance evaluation and sometimes misses the intent of defining clear development guidelines. At present the VRAD interface is far from being mature, and further research is needed to understand the interaction basics and to define standards and benchmarks, in a way similar to traditional CAD.

Human-computer interaction (HCI) research, on the other hand, follows a more general approach: the guidelines it provides must then be applied to the specific VRAD modelling tasks, and the results achieved can vary from case to case.


The simplest form of interaction in 3D space, pointing, was explored by many authors. Boritz [4] investigated 3D point location using a six degree of freedom input device. Four different visual feedback modes were tested: monoscopic fixed viewpoint, stereoscopic fixed viewpoint, monoscopic head-tracked perspective, and stereoscopic head-tracked perspective. The results indicate that stereoscopic performance is superior to monoscopic performance, and that asymmetries exist both across and within axes.

Zhai et al. [21] presented an empirical evaluation of a three-dimensional interface, decomposing tracking performance into six components (three in translation and three in rotation). Tests revealed that subjects' tracking errors in the depth dimension were about 45% (with no practice) to 35% (with practice) larger than those in the horizontal and vertical dimensions. It was also found that subjects initially had larger tracking errors along the vertical axis than along the horizontal axis, likely due to attention allocation strategy.

Poupyrev et al. [14] developed a test bed which evaluates manipulation tasks in VR in an application-independent way. The framework provided systematic task analysis of immersive manipulation, and suggested a user-centred non-Euclidean reference frame for the measurement of VR spatial relationship.

Grossman et al. [10] investigated 3D pointing using a true volumetric display, where the target size varied in all three spatial dimensions. The effect of the user's physical movement angle on pointing performance was considered. Results show that target acquisition time along the depth direction has a greater impact on performance than along the other two axes. The authors proposed and validated an extended Fitts' law model which accounts for the movement angle.

Mine et al. [12] explored manipulation in immersive virtual environments using the user's body as a reference system. They presented a unified framework for VE interaction based on proprioception, a person's sense of the position and orientation of his/her body and limbs. Tests were carried out on the presented body-relative interaction techniques.

The short survey presented here illustrates how interaction in VR is still a largely unexplored topic, and how at the moment interface usability stands in the way of the development of VRAD applications. Many research studies have pointed out that user interaction performance varies according to the position of the user's limbs in the virtual environment, but at present no VRAD application takes this issue into account in its interface design.

The purpose of this research is to examine human bias, consistency, and individual differences when pointing, picking and line sketching in a virtual environment (VE), in order to provide useful information and solutions for future VRAD improvement.

3. Experiment Design

The aim of this set of tests is to give a qualitative and quantitative evaluation of human performance in a general VRAD application. We selected a set of the most frequent tasks carried out in a CAD system: pointing, picking, and line sketching. These tasks are similar in 2D and 3D CAD systems. Using a semi-immersive, head-tracked stereoscopic display and a 6DOF pointer, the following tests were carried out:

- measurement of the ability of the user to point to a fixed 3D position;

- analysis of the sketched lines traced by the user when following a virtual geometry, in order to discover preferred sketching methods and modalities;


- measurement of the user's ability to pick points in 3D space, in order to evaluate human performance in object selection.

The SpaceXperiment [9] application was used for the tests. Position, orientation and timestamp of the pointer (pen tip) and of the user’s head were recorded for subsequent analysis.

3.1. Participants

Voluntary students from the faculties of mechanical engineering and architecture were recruited. All participants were regular users of a windows interface (mouse and keyboard), but none of the subjects had been in a VR environment before. All the users were given a demonstration of the experiments and were allowed to interact in the virtual workspace for approximately 20 minutes in order to become acquainted with the stereo perception of the virtual space. Moreover, all the users performed a double set of tests: the first set was considered a practice session and the second a data collection session. All subjects were right handed and had normal or corrected-to-normal vision. Informed consent was obtained before the test sessions.

3.2. Apparatus

The experiments were conducted in the VR3lab facility at the Cemec (Politecnico di Bari, Italy) on the VR system, which normally runs the Spacedesign VRAD application.

The Virtual Reality system is composed of a vertical screen of 2.20 m x 1.80 m with two polarized projectors (Figure 2) and an optical 3D tracking system by ART [1]. Horizontal and vertical polarizing filters, in conjunction with the user's glasses, enable the so-called passive stereo vision. The experiment was conducted in a semi-dark room.

The tracking system uses two infrared (IR) cameras and IR-reflective spheres (the markers) to calculate the position and orientation of the user's devices in space by triangulation. The 12 mm diameter markers are attached to the interaction devices following a unique pattern, which allows each device to be univocally identified by the system.

During the test sessions the system records the three-dimensional position of the user’s devices, and stores the results in text data files for subsequent off-line analysis.

The user handles a transparent Plexiglas pen with 3 buttons, which is visualized in VR with a virtual simulacrum. The user is also provided with a virtual palette (a Plexiglas sheet) that can be used to retrieve information and to access the virtual menus and buttons (Figure 3).


Figure 2. LCD projectors and the polarized filters.


Figure 3. A test session at the Wall.

3.3. Software Implementation

The experimental test bed comprises a hardware system and a software application called SpaceXperiment, a program for testing 3D interaction in a virtual reality environment. SpaceXperiment is built upon the Studierstube library [16], which provides the VR interface by means of the “Pen and Tablet Metaphor”: the non-dominant hand holds the transparent palette with virtual menus and buttons, while the other handles the pen for application-related tasks. The incoming data from the tracking system are sent directly over an Ethernet network to the SpaceXperiment application via the OpenTracker library [15], an open software library based on an XML configuration syntax which handles tracking data from different sources and controls their transmission and filtering.

The visualization system is calibrated so that the size of the virtual objects displayed on the screen corresponds to their nominal dimensions.

Because of the similarity between the SpaceXperiment and Spacedesign platforms, test results from the former can be easily transferred to the latter.

3.4. Tracking System Calibration

After following the calibration procedure described by the manufacturer, we performed a series of tests to verify the precision and accuracy of the tracking. We fixed the markers in 10 different positions within the tracked volume and recorded the measurements.

We found that the average precision of the target position measurement is 0.8 mm. This outcome is consistent with the product specifications (an average of 0.4 mm with four cameras), because our system uses only two cameras. In any case, the systematic error is lower than the user's measured precision, which during pointing averages approximately 6 mm (see Figure 4).
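As an illustration of this kind of off-line check, the following Python sketch (our own, not part of SpaceXperiment; the data are synthetic placeholders) estimates the spread of repeated measurements of a fixed marker:

```python
import numpy as np

def position_precision(samples: np.ndarray) -> float:
    """Spread of repeated measurements of one fixed marker.

    samples: (N, 3) recorded x, y, z positions in mm.
    Returns the mean distance of the samples from their centroid.
    """
    centroid = samples.mean(axis=0)
    return float(np.linalg.norm(samples - centroid, axis=1).mean())

# Synthetic placeholder data: 10 fixed positions, 200 noisy samples each.
rng = np.random.default_rng(0)
targets = rng.uniform(-500.0, 500.0, size=(10, 3))                    # nominal positions (mm)
recorded = targets[:, None, :] + rng.normal(scale=0.5, size=(10, 200, 3))

per_target = [position_precision(recorded[i]) for i in range(len(targets))]
print(f"average precision: {np.mean(per_target):.2f} mm")
```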

4. Human Performance Evaluation

Three tests have been carried out in order to evaluate interaction techniques in VR: pointing, picking, and line sketching.

4.1. Pointing

In this first experiment we investigated the user's accuracy in pointing to a fixed target in virtual space. Each participant was asked to place the tip of the virtual pen as close as possible to the midpoint of a crosshair marker. Once the subject had reached the marker in a stable manner, he/she clicked the pen button and kept the pen still for 5 seconds. Each user repeated the experiment 10 times for 3 different points: MDP (Medium Difficulty Point), HDP (High Difficulty Point), and LDP (Low Difficulty Point). Each trial recorded the pen position for 5 seconds (on our system this corresponds to approximately 310 sample points per trial), for a total of 186,000 sampled points. We applied a statistical analysis to the measured data to evaluate the mean, variance, and deviation from the target point.

The error isotropy was verified in the workspace using a world-fixed reference frame, by projecting the error vectors onto three orthogonal reference directions: horizontal, vertical, and perpendicular to the screen (i.e. depth).
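A minimal sketch of this decomposition, assuming a screen-aligned frame with +X horizontal, +Y vertical and +Z perpendicular to the screen (the actual axis convention of the recorded data files is an assumption here):

```python
import numpy as np

# Screen-aligned reference directions (assumed axis convention).
HORIZONTAL = np.array([1.0, 0.0, 0.0])
VERTICAL = np.array([0.0, 1.0, 0.0])
DEPTH = np.array([0.0, 0.0, 1.0])          # perpendicular to the screen

def pointing_error_stats(pen_samples: np.ndarray, target: np.ndarray) -> dict:
    """Decompose pointing errors along the three reference directions.

    pen_samples: (N, 3) recorded pen-tip positions; target: (3,) marker midpoint.
    Returns the mean total deviance and the per-axis error range (max - min).
    """
    errors = pen_samples - target                                 # (N, 3) error vectors
    stats = {"total_deviance": float(np.linalg.norm(errors, axis=1).mean())}
    for name, axis in (("horizontal", HORIZONTAL), ("vertical", VERTICAL), ("depth", DEPTH)):
        component = errors @ axis                                 # signed projection (mm)
        stats[f"{name}_range"] = float(component.max() - component.min())
    return stats
```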



Figure 4. Average errors and ranges in 3D pointing experiment.

We noticed (Table 1) that:

- the error along the depth direction is always greater than the error along the horizontal and vertical directions (Figure 4);

- the magnitudes of the error along the horizontal and vertical directions are comparable and always at least 1.9 times smaller than the error along the depth direction.

Table 1. Error values (mm) for the performed test.

Error   Total deviance (mm)   Horizontal range (mm)   Vertical range (mm)   Depth range (mm)
Max     17.31                 7.28                    9.53                  19.50
Mean    6.21                  4.81                    5.29                  10.12

4.2. Sketching

The aim of this experiment was to evaluate the user's ability to sketch, as precisely as possible, a reference geometry displayed in the 3D environment. This test simulated the typical CAD task of transferring a geometrical idea into an unconstrained 3D sketch.

The user traced a free-hand sketch simply by moving the pen while pressing its button. The subjects repeated the task for different patterns: horizontal line, vertical line, depth line (line perpendicular to the screen), and rectangular frame aligned with the screen plane. The users were required to perform the experiment 5 times for each of the 4 geometries with 5 different modalities: in the most comfortable fashion (user's choice), in the reversed tracing direction, and at low, medium, and high sketching speed. The combinations of the previous modes were counterbalanced across subjects according to a Latin square, and each condition was performed an equal number of times. We collected a total of 2000 sketches. The divergence of the sketch from the displayed geometry represents the error. As the error metric we considered the deviance, i.e. the distance between the pen tip and its closest point on the reference geometry. The range of the deviance error was evaluated along each reference direction: horizontal range, vertical range, and depth range.
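The deviance metric can be illustrated with a short sketch for the simplest case of a straight reference segment (function names are ours; the original analysis scripts are not described in the paper):

```python
import numpy as np

def closest_point_on_segment(p, a, b):
    """Closest point to p on the reference segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def sketch_deviance(samples, a, b, axes):
    """Deviance of each recorded pen-tip sample and its per-axis ranges.

    samples: (N, 3) pen-tip positions; a, b: segment endpoints;
    axes: dict of named unit directions (e.g. horizontal, vertical, depth).
    """
    offsets = np.array([p - closest_point_on_segment(p, a, b) for p in samples])
    deviance = np.linalg.norm(offsets, axis=1)                   # distance to the geometry
    ranges = {name: float((offsets @ axis).max() - (offsets @ axis).min())
              for name, axis in axes.items()}
    return deviance, ranges
```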

The following considerations could be drawn: the higher error value along the axis perpendicular to the screen, already noticed in the previous experiment, was confirmed for all sketching modalities and geometries; moreover, the ratios among the error components along the reference directions were also in accordance with the previous results.

4.3. Picking

Previous experiments have shown a systematic pointing anisotropy related to direct input in a virtual environment. We decided to investigate the picking task, since it is one of the most frequent operations in VRAD applications (selection, control point manipulation, direct sketching, etc.). The aim of this test was to evaluate the user's performance in picking a 3D crosshair target located at a random position within the workspace. The user picked the midpoint of the target using the pen button. Each subject repeated the picking operation for 30 points randomly chosen from 3 different positions: in front of, to the right of, and above the user's head. After each picking, he/she had to return to a 'home' position before picking the next target. Different sounds accompanied each step in order to guide the user through the experiment.

The error vector, computed as the difference between the target and the picked position, was projected onto each of the screen-aligned reference frame directions: depth, horizontal, and vertical.

We used ANOVA to verify the anisotropic behaviour of the interaction. The error values showed a significant effect of the reference direction (F(2,357) = 29.17; p < 0.0001), rejecting the null hypothesis. Multiple Means Comparison (MMC) showed a significantly higher error in the depth direction but no significant difference between the horizontal and vertical axes (Figure 5).
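A hedged sketch of this kind of one-way ANOVA, using scipy and synthetic error magnitudes that merely mimic the reported anisotropy (the real per-trial data are not reproduced here):

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic per-direction error magnitudes (mm), 120 pickings each; the
# spreads are chosen only to resemble the anisotropy described in the text.
rng = np.random.default_rng(1)
depth_err = np.abs(rng.normal(0.0, 3.4, 120))
horizontal_err = np.abs(rng.normal(0.0, 1.7, 120))
vertical_err = np.abs(rng.normal(0.0, 1.9, 120))

f_stat, p_value = f_oneway(depth_err, horizontal_err, vertical_err)
print(f"F(2, {3 * 120 - 3}) = {f_stat:.2f}, p = {p_value:.2g}")
```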


Figure 5. Error box plot for different reference directions.

We then verified whether the screen-aligned frame is the best-fitting reference for evaluating the picking error anisotropy. We fitted an ellipsoid to the error values for each of the 3 picking points: Principal Component Analysis (PCA) applied to the error vectors returned the directions and the axis lengths of the best-fit ellipsoid. The results show that the principal (major) axis always converges towards the user's head (Figure 6).
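The PCA-based ellipsoid fit can be sketched as follows; interpreting the semi-axis lengths as one standard deviation of the error along each direction is our assumption, since the paper does not specify the scaling:

```python
import numpy as np

def error_ellipsoid(errors: np.ndarray):
    """Best-fit error ellipsoid for one picking target via PCA.

    errors: (N, 3) error vectors (picked position minus target, mm).
    Returns (directions, semi_axes): principal directions as rows of a 3x3
    matrix and the corresponding semi-axis lengths, sorted major to minor.
    """
    centred = errors - errors.mean(axis=0)
    cov = np.cov(centred, rowvar=False)          # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # major axis first
    return eigvecs[:, order].T, np.sqrt(eigvals[order])
```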



Figure 6. Principal directions pointing to user head (blue points).

The results suggested that a different reference frame could be proposed for the error decomposition. Therefore, instead of using the depth, horizontal, and vertical directions, we tested a user-centred reference frame whose principal direction V1 is directed from the pointer to the user's head, whose direction V2 is perpendicular to V1 and parallel to the horizontal plane, and whose third direction V3 is perpendicular to both V1 and V2 (Figure 7).
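A minimal construction of this user-centred frame, assuming the world "up" direction is the +Y axis (an assumption of this sketch, not stated in the paper):

```python
import numpy as np

def user_centred_frame(pointer: np.ndarray, head: np.ndarray) -> np.ndarray:
    """Rows of the returned 3x3 matrix are the unit vectors V1, V2, V3.

    V1 points from the pointer to the user's head, V2 is perpendicular to V1
    and parallel to the horizontal plane, V3 completes the right-handed frame.
    Degenerate if the head lies exactly above or below the pointer.
    """
    up = np.array([0.0, 1.0, 0.0])        # assumed world vertical
    v1 = head - pointer
    v1 = v1 / np.linalg.norm(v1)
    v2 = np.cross(up, v1)                 # horizontal and perpendicular to V1
    v2 = v2 / np.linalg.norm(v2)
    v3 = np.cross(v1, v2)                 # perpendicular to both V1 and V2
    return np.vstack((v1, v2, v3))
```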


Figure 7. User-centred reference frame.

In order to verify this new frame of reference, we designed a new set of experiments.

4.4. User-Centred Reference Frame Evaluation Test

We repeated the previous picking test, but with targets located at random positions within the workspace. We collected a total of 390 picking samples. For each point, we projected the error vector onto two different reference frames: the first one, called “SO” (depth, horizontal, vertical), is screen-oriented; the second one, called “UC” (V1, V2, V3), is user-centred.

We compared the contribution of each error component to the total error. Changing the reference frame from SO to UC, the average contribution along V1 increased from 47.6% to 55.8% (F(1,778) = 13.32; p < 0.0003), while the contribution along V2 decreased from 23.0% to 17.2% (F(1,778) = 8.15; p < 0.0045), and the contribution along V3 diminished from 30.4% to 22.4% (F(1,778) = 18.65; p < 0.0001).
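One way the per-axis contributions could be computed is sketched below; defining the contribution of an axis as the average share of the absolute error components is our assumption about the metric actually used:

```python
import numpy as np

def contribution_percentages(errors: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Average contribution (%) of each frame axis to the total picking error.

    errors: (N, 3) error vectors; frame: 3x3 matrix whose rows are unit axes,
    e.g. the screen-oriented (D, H, V) frame or the user-centred (V1, V2, V3) one.
    """
    components = np.abs(errors @ frame.T)                        # (N, 3) per-axis magnitudes
    shares = components / components.sum(axis=1, keepdims=True)
    return 100.0 * shares.mean(axis=0)
```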



Figure 8. Error box plot changing the reference frame.

The ANOVA showed a significant effect of the reference frame change (Figure 8). Table 2 shows how the sigma values (mm) change with the reference frame.

Table 2. Sigma values (mm) changing the reference frame.

Reference Frame               Depth vs V1 (mm)   Horizontal vs V2 (mm)   Vertical vs V3 (mm)
Screen-aligned (D, H, V)      3.366              1.666                   1.878
User-centred (V1, V2, V3)     3.759              1.560                   1.458

These results show that the user-centred reference frame fits the error vectors better than the screen-aligned one: the error component along V1 is greater than the one along the depth direction.

4.5. Discussion

The performed tests demonstrated a systematic anisotropy in the error vector distribution during all the basic modelling tasks: pointing, picking, and line sketching. The following interaction principles can thus be pointed out:

- the error along the depth direction (perpendicular to the screen) is always greater than the error along the horizontal and vertical directions;

- the magnitudes of the error along the horizontal and vertical directions are comparable, and always at least 1.9 times smaller than the error along the depth direction;

- the principal axis of the error direction distribution always converges towards the user's head.

The results of these experiments can be explained mainly in terms of occlusion issues: the user's hand and the pointing device hide the screen, and the stereo effect thus vanishes. This problem can be mitigated by using an offset between the real pen and the virtual pen, a solution previously proven to have no influence on interaction precision for offset values smaller than 20 cm [13].

Yet using an offset is not sufficient, and other interaction tools should be developed to take the anisotropy into account. The following section presents some of the solutions developed by the authors.


5. Smart Tools Development

Transparent physical tools (rulers, French curves, squares, etc.) can be introduced into a virtual environment in order to offer real constraints during modelling, just as real world tools do during drawing and sculpting. For example, the Plexiglas sheet handled by the user during the VRAD session to display the menu can also be used as a planar reference (i.e. sketching on a plane) without interfering with the stereoscopic vision. Observations of practical use have shown the effectiveness of such equipment, and how designers use it within the digital modelling context in a natural and personal fashion.

The virtual aids, on the other hand, are software tools specifically developed to support the user during the interaction. For example, geometrical snapping constrains the user input to given geometries such as planes, lines, or grids, while topological snapping assists in locating topologically meaningful positions.

In HCI interface design, the term “smart tool” denotes a software object which changes its behaviour according to the surrounding context (i.e. the position of the user's limbs, gestures, speed and acceleration of the input devices, previous commands, etc.).

In order to address the user's limitations in depth perception and interaction, as seen in the previous sections, we propose a set of “smart” virtual tools:

5.1. Smart Transparency

During a modelling session in VR, the model itself and its parts can occlude the vision of the working area, thus forcing the user to move his/her head to a non-optimal point of view. The simplest way to solve this problem is to exploit a very helpful feature of computer graphics: transparency [20].

The main idea is to display all the disturbing CAD objects as semi-transparent. Smart Transparency is a daemon of the VRAD application which determines in real time the object of interest from the pen position, calculates the viewing frustum from the user's head location, and then displays the occluding geometries as semi-transparent (Figure 9).
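A simplified sketch of such a daemon, using bounding spheres and a straight head-to-pen ray instead of the full viewing-frustum test described above (all names and the geometry representation are assumptions of this sketch):

```python
import numpy as np

def ray_hits_sphere(origin, direction, centre, radius):
    """True if the ray origin + t * direction (t > 0) intersects the sphere."""
    t = np.dot(centre - origin, direction)        # closest approach along the ray
    if t < 0.0:
        return False
    return np.linalg.norm(centre - (origin + t * direction)) <= radius

def update_transparency(head, pen, objects, alpha=0.3):
    """Make every bounding sphere lying between the head and the pen semi-transparent."""
    direction = pen - head
    distance = np.linalg.norm(direction)
    direction = direction / distance
    for obj in objects:                           # obj: {"centre", "radius", "alpha"}
        occluding = (ray_hits_sphere(head, direction, obj["centre"], obj["radius"])
                     and np.linalg.norm(obj["centre"] - head) < distance)
        obj["alpha"] = alpha if occluding else 1.0
```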

Smart Transparency thus allows a better perception of the 3D world without removing important parts of the model. In our practical experiments, this technique proved to be very effective, especially when modelling large and complex geometries (i.e. assembled parts).


Figure 9. Smart Transparency avoids occlusion.


5.2. 3D Ortho Tool

Drawing straight lines along the world space main axes turned out to be a hard task in a virtual reality environment (see the Line Sketching experiment in section 4.2). The lack of limb support, which is available in 2D table drawing, should be compensated in the VE by means of a specific software tool. We developed the “3D Ortho Tool” in order to constrain the user to sketch lines that are horizontal, vertical, or perpendicular to the screen.

The tool is activated by the user through a virtual push button on the palette. When the user wants to draw a straight line and presses the pen button for the first time, the program stores the first point position. The “Actual Vector”, from the first point to the current point (the instantaneous pen tip position), is computed at every movement of the user. The principal direction (horizontal, vertical, or perpendicular to the screen) passing through the first point and nearest to the “Actual Vector” is taken as the “Actual Snap Direction”. We call “Snap Angle” the angle between the “Actual Vector” and the “Actual Snap Direction”.

If the Snap Angle is smaller than the snap threshold, the current point is projected onto the “Actual Snap Direction” (Figure 10).
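The snapping logic can be summarised in the following sketch, a simplification of the tool described above with screen-aligned unit axes assumed as the principal directions:

```python
import numpy as np

PRINCIPAL_DIRECTIONS = np.eye(3)    # assumed screen-aligned horizontal, vertical, depth axes

def ortho_snap(first_point, current_point, snap_threshold_deg):
    """Return (snapped_point, snap_angle_deg) for the 3D Ortho Tool sketch.

    The current point is projected onto the nearest principal direction through
    the first point whenever the snap angle is below the user-defined threshold.
    """
    actual_vector = current_point - first_point
    length = np.linalg.norm(actual_vector)
    if length == 0.0:
        return current_point, 0.0
    cosines = PRINCIPAL_DIRECTIONS @ (actual_vector / length)
    idx = int(np.argmax(np.abs(cosines)))                         # nearest principal direction
    snap_dir = PRINCIPAL_DIRECTIONS[idx] * np.sign(cosines[idx])
    snap_angle = np.degrees(np.arccos(np.clip(abs(cosines[idx]), 0.0, 1.0)))
    if snap_angle < snap_threshold_deg:
        return first_point + np.dot(actual_vector, snap_dir) * snap_dir, snap_angle
    return current_point, snap_angle
```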

The snap threshold angle is defined by the user by means of a virtual 3D slider on the palette [16].

The Ortho Snap is represented in the virtual workspace as a semi-transparent cone. The Snap Angle is also translated into a colour change: as the user pointer moves away from the principal line, the colour turns from green to bright red, indicating a greater tool intervention (Figure 10).


Figure 10. The 3D Ortho Tool visual feedback.

5.3. 3D Object Snaps

The “3D Object Snaps” are the natural extension to 3D input of the Object Snap tools already available in most 2D CAD systems (i.e. AutoCAD® [2]).

The Object Snaps (Osnaps for short) are drawing aids used in conjunction with the other modelling commands to improve the user's accuracy when using an imprecise input device in a desktop environment (e.g. the mouse). Osnaps allow the user to snap onto a specific object location simply by picking a point near the required position. For example, the user can accurately pick the end point of a line or the centre of a circle.

When the user moves the mouse cursor close enough to an Osnap location, it is highlighted with a marker and the cursor jumps to the snap spot. This feature is known as Magnet. Each Osnap has a different marker, as shown for the Autocad® application in the first two columns of Table 3.

Object Snaps can be easily extended to 6DOF input in a virtual environment, where they are very useful due to tracking error, fatigue, hand vibration, and lack of limb support. Compared to the 2D version of the tool, the 3D Object Snap uses a sensitive volume instead of a flat area, and the marker is displayed as a wireframe 3D geometry (see Table 3 and Figure 11) which varies according to the snapped topology (Endpoint, Midpoint, Perpendicular, Centre, etc.).


Figure 11. The Endpoint Object Snap feedback.

We implemented the sensitive volume of the 3D Osnap tool as an ellipsoid instead of a sphere, in order to take into account the measured interaction asymmetry: the principal axes are oriented along the user-centred reference frame (V1, V2, V3, as explained in Figure 7), while the axis dimensions are derived from the usability tests (see section 4.5).

The ellipsoid region is updated in real time at every movement of the user. The implemented “User-Centred Ellipsoid Snap” has the following default dimensions (a sketch of the corresponding containment test is given after the list):

- semi-axis along V1 direction = 16.9 mm;

- semi-axis along V2 direction = 6.5 mm;

- semi-axis along V3 direction = 7.0 mm.
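A possible containment test, reusing a user-centred frame like the one sketched earlier (function names and the exact formulation are our assumptions):

```python
import numpy as np

def inside_snap_ellipsoid(pen_tip, snap_point, frame, semi_axes=(16.9, 6.5, 7.0)):
    """True if the pen tip falls inside the user-centred snap ellipsoid.

    frame: 3x3 matrix whose rows are V1, V2, V3 (built from the current pointer
    and head positions); semi_axes: ellipsoid semi-axes in mm along V1, V2, V3.
    """
    local = frame @ (np.asarray(pen_tip) - np.asarray(snap_point))   # offset in V1/V2/V3 coords
    return float(np.sum((local / np.asarray(semi_axes)) ** 2)) <= 1.0
```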

This solution increases the pointing efficiency thanks to a better alignment of the snapping zone, without affecting the resolution, because it reduces the snap volume compared to a sphere or a world-aligned ellipsoid.

By adjusting the influence area with the slider and activating the object snaps appropriate to the specific task, the user can model in 3D using previous geometries as a reference, supported in the fundamental task of pointing with enhanced precision inside the virtual space.

Table 3 illustrates the correspondence between the Autocad® Osnaps and their 3D counterparts developed by the authors. Snap tips appear if the virtual cursor hovers over an Osnap location for about a second.


Table 3. 3D Object Snaps (the Autocad® feedback and 3D Osnap feedback columns of the original table contain the corresponding graphical markers).

Object Snap     Use
Centre          Snaps to the centre of a circle or arc.
End point       Snaps to the endpoint of a line, polyline, or arc.
Intersection    Allows you to select the intersection between two items.
Midpoint        Snaps to the midpoint of a line or arc.
Nearest         Locates the point or entity nearest to the cursor position.
Node            Snaps to a point entity.
Perpendicular   Locates a perpendicular point on an adjacent entity.
Quadrant        Locates the 0, 90, 180, or 270 degree position (quadrants) of a circle.
Tangent         Places an entity at the tangent point of an arc or circle.

6. Conclusions and Future Work

This work presents an experimental study addressing human performance evaluation in VR during basic modelling tasks: picking, pointing, and line drawing.

All the performed tests show a systematic anisotropy in the error vector distribution. We evaluated this spatial pattern both qualitatively and quantitatively, and proposed three virtual tools in order to improve the interaction effectiveness: “Smart Transparency”, “3D Ortho Tool”, and “3D Object Snap”.

“Smart Transparency” improves the user’s perception of the model by displaying the occluding geometries as semi-transparent.

The “Ortho Tool” assists the user in the definition of world oriented lines.

The “3D Object Snap” takes advantage of innovative user-oriented ellipsoid-shaped snaps in order to address the anisotropy while preserving high snapping resolution.

Early tests using these tools in a VRAD application showed their effectiveness. Smart Tools offer an interesting contribution for developers of CAD applications in Virtual Reality, but the results achieved so far can easily be extended to other applications, because their context is clearly very wide. We are currently testing the effectiveness of this new snap design in complete VRAD sessions.

References

[1] ART GmbH, www.ar-tracking.de.

[2] Autocad® is a trademark of Autodesk Inc., www.autodesk.com.

[3] Beier K.-P., "Virtual Reality in Automotive Design and Manufacturing", Proceedings of Convergence '94, International Congress on Transportation Electronics, SAE (Society of Automotive Engineers), Dearborn, Michigan, October 1994.

[4] Boritz J., Booth K. S., "A Study of Interactive 3D Point Location in a Computer Simulated Virtual Environment", Proc. of the ACM Symposium on Virtual Reality Software and Technology, Lausanne, Switzerland, 1997, pp. 181-187.

[5] Chen H., Sun H., "Real-time Haptic Sculpting in Virtual Volume Space", Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Hong Kong, China, November 11-13, 2002.

[6] Dani T. H., Wang L., Gadh R., "Free-Form Surface Design in a Virtual Environment", Proceedings of the ASME '99 Design Engineering Technical Conferences, Las Vegas, Nevada, 1999.

[7] Deisinger J., Blach R., Wesche G., Breining R., "Towards Immersive Modelling - Challenges and Recommendations: A Workshop Analysing the Needs of Designers", Eurographics 2000.

[8] Fiorentino M., De Amicis R., Stork A., Monno G., "Spacedesign: Conceptual Styling and Design Review in Augmented Reality", Proc. of ISMAR 2002, IEEE, Darmstadt, Germany, 2002, pp. 86-94.

[9] Fiorentino M., Monno G., Renzulli P. A., Uva A. E., "3D Pointing in Virtual Reality: Experimental Study", XIII ADM - XV INGEGRAF International Conference on Tools and Methods Evolution in Engineering Design, Napoli, June 3-6, 2003.

[10] Grossman T., Balakrishnan R., "Pointing at Trivariate Targets in 3D Environments", Proceedings of the 2004 Conference on Human Factors in Computing Systems, Vienna, Austria, April 24-29, 2004, pp. 447-454.

[11] Hinckley K., Pausch R., Goble J. C., Kassell N. F., "A Survey of Design Issues in Spatial Input", Proc. of the ACM UIST '94 Symposium on User Interface Software & Technology, 1994, pp. 213-222.

[12] Mine M. R., Brooks F. P., Sequin C. H., "Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction", Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, 1997.

[13] Paljic A., Burkhardt J.-M., Coquillart S., "A Study of Distance of Manipulation on the Responsive Workbench", IPT 2002 Symposium (Immersive Projection Technology), Orlando, USA, 2002.

[14] Poupyrev I., Weghorst S., Billinghurst M., Ichikawa T., "A Framework and Testbed for Studying Manipulation Techniques for Immersive VR", Proc. of the ACM Symposium on Virtual Reality Software and Technology, Lausanne, Switzerland, 1997, pp. 21-28.

[15] Reitmayr G., Schmalstieg D., "An Open Software Architecture for Virtual Reality Interaction", Proc. of the ACM Symposium on Virtual Reality Software and Technology 2001 (VRST 2001), Banff, Alberta, Canada, November 15-17, 2001, pp. 47-54.

[16] Schmalstieg D., Fuhrmann A., Szalavari Z., Gervautz M., "Studierstube - An Environment for Collaboration in Augmented Reality", Proc. of the CVE '96 Workshop, Nottingham, GB, 1996, pp. 19-20.

[17] Stork A., De Amicis R., "ARCADE/VT - A Virtual Table-Centric Modelling System", IPT 2000, The Fourth International Immersive Projection Technology Workshop, Iowa State University, Ames, Iowa, USA, June 19-20, 2000.

[18] Sun H., Hujun B., Tong Ngai Man, Wu Lam Fai, "Interactive Task Planning in Virtual Assembly", Proceedings of the ACM Symposium on Virtual Reality Software and Technology, December 1999.

[19] Wesche G., Droske M., "Conceptual Free-Form Styling on the Responsive Workbench", Proceedings of VRST 2000, Seoul, Korea, 2000, pp. 83-91.

[20] Zhai S., Buxton W., Milgram P., "The 'Silk Cursor': Investigating Transparency for 3D Target Acquisition", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence, 1994.

[21] Zhai S., Milgram P., "Anisotropic Human Performance in Six Degree-of-Freedom Tracking: An Evaluation of Three-Dimensional Display and Control Interfaces", IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 27, No. 4, 1997, pp. 518-528.