
1.

From the context of Virtual reality, explain any 5 of the following


a) Virtual databases
b) Real-time image generation

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused
on producing and analyzing images in real time. The term can refer to anything from rendering an
application's graphical user interface (GUI) to real-time image analysis, but is most often used in
reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One
example of this concept is a video game that rapidly renders changing 3D environments to produce
an illusion of motion.
Computers have been capable of generating 2D images such as simple lines, images,
and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a
daunting task for traditional Von Neumann architecture-based systems. An early workaround to this
problem was the use of sprites, 2D images that could imitate 3D graphics.
Today the two main techniques for rendering 3D scenes are ray tracing and rasterization. Using these techniques and advanced hardware, computers can now
render images quickly enough to create the illusion of motion while simultaneously accepting user
input. This means that the user can respond to rendered images in real time, producing an
interactive experience.
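The input–update–draw cycle described above can be sketched as a minimal render loop (the scene dictionary, the stubbed draw step, and the fixed frame count are illustrative placeholders, not a real renderer):

```python
import time

def render_loop(scene, target_fps=60.0, max_frames=3):
    """Minimal real-time rendering skeleton: poll input, update the scene,
    draw, then sleep out the remainder of the per-frame time budget."""
    frame_budget = 1.0 / target_fps
    for frame in range(max_frames):          # a real loop runs until quit
        start = time.perf_counter()
        # 1. accept user input (stubbed here)
        # 2. update the scene state in response to input / elapsed time
        scene["t"] = frame * frame_budget
        # 3. rasterize / draw the scene (stubbed: just count the frame)
        scene["frames_drawn"] = scene.get("frames_drawn", 0) + 1
        # 4. wait out the rest of the ~16.7 ms budget to hold the frame rate
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)
    return scene

scene = render_loop({})
```

Keeping each iteration inside the frame budget is what sustains the illusion of motion while still accepting user input every frame.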

c) Database interaction
One of the main results is the visual query system, which can be defined as a query system
based on the use of visual representations to depict the domain of interest and to express the
related requests (Catarci et al., 1997). Usually, visual query systems provide user-friendly query
interfaces for accessing a DB, and include a language for expressing queries in a graphical form.
For users with limited technical skills, visual query systems have proven to be an
effective instrument to interact with DB systems.

Presenting to the user: several factors should be considered when designing a system to visualise
in a virtual environment information retrieved from a DB.
d) Physical simulation
Physical simulation refers to the equipment used for human immersion in virtual reality. Besides hardware and software,
additional means are used to enhance the immersion effect, for example water spray, the effect of
wind, vibration,[2] physical motion,[3] etc.

Simulators are based on 3-DOF or 6-DOF motion platforms to simulate the motion of VR movies or games.
Classification:

 Interactive
 Non-interactive
Virtual amusements can also be classified by the user's body position during operation: sitting, standing,
suspended, etc.

e) Immersive and Non-immersive VR systems


Non-immersive VR is a type of virtual reality technology that provides users with a computer-
generated environment without a feeling of being immersed in the virtual world. The main
characteristic of a non-immersive VR system is that users keep control over their physical
surroundings and remain aware of what is going on around them: sounds, visuals, and haptics.
Fully immersive virtual reality is a realistic simulation technology that enables users to interact
with a 3D virtual environment with special haptic devices. Unlike non-immersive VR based on
typical displays, fully immersive VR provides computer-generated surroundings via head-
mounted displays (HMDs) that isolate users from the real world. Thus, they become unaware of
physical objects and sounds.
One of the significant differences between fully immersive and non-immersive VR technologies
lies in digital content perception. Unlike fully immersive virtual reality, non-immersive VR
transmits the same image to both of the user's eyes. Therefore, users perceive this image in only two
dimensions, height and width, while fully immersive VR technology provides a digital image
perceived in three dimensions: height, width, and depth.

f) Hybrid VR systems

Mixed reality (MR) is the merging of real and virtual worlds to produce new environments and visualizations,
where physical and digital objects co-exist and interact in real time. Mixed reality does not exclusively take place
in either the physical or virtual world, but is a hybrid of reality and virtual reality, encompassing both augmented
reality and augmented virtuality via immersive technology.[2]
The first immersive mixed reality system that provided enveloping sight, sound, and touch was the Virtual
Fixtures platform, which was developed in 1992 at the Armstrong Laboratories of the United States Air Force.
The project demonstrated that human performance could be significantly amplified, by overlaying spatially
registered virtual objects on top of a person's direct view of a real physical environment.[3]

g) The CAVE
A VR CAVE is a virtual reality space; essentially an empty room in the shape of a cube in
which each of the surfaces – the walls, floor and ceiling – may be used as projection
screens to create a highly immersive virtual environment. 3D CAVE users typically wear
stereoscopic eyewear and interact with visual stimuli via wands, data gloves,
joysticks, or other input devices.
The word CAVE itself is an acronym that stands for Cave Automatic Virtual Environment.
Wearing stereoscopic glasses enables VR CAVE users to see 3D graphics and images that appear to be
floating, suspended in the air. You can walk all the way around these objects, study them closely and get
a full view and a proper understanding of exactly how they would look in reality.
The projection systems used require extremely high resolution in order to maintain the illusion of reality.
Users' movements are tracked by the sensors in the VR CAVE, and the video with which they are
interacting adjusts continuously to adapt to the individual user's perspective.

2. Write a short note on tracking sensors any 5 among


the following,
a) Flock of Birds

 Pulsed DC magnetic technology for six degrees-of-freedom tracking with a universal interface.
 Immunity from distortion by non-magnetic conductive metals. The tracker can be used around
stainless steel (300 series), titanium, and aluminum without measurement errors.
 Freedom from blocking and occlusion errors. There is no need to keep a clear line of sight
between sensors and transmitter.
b) ADL-1:

This device features a head-band which is worn by a user, typically while seated in front of a video display
(the operating volume is only big enough for this device to be used while stationary). The device tracks the
head with 6 DOF and features a 300Hz sample rate with quoted latency of less than 2ms. Latency is
important when producing head-slaved video images since too much delay between head movement and
image update can cause nausea ("simulator sickness") in users

c) Logitech Head tracker


d) Dextrous Hand master
e) Spaceball 2003
f) BioMuse
g) Space ball

3. Explain any 4 mechanisms of the following:


a) Mechanical Trackers: rely on a physical connection between the target and a fixed
reference point. A common example of a mechanical tracking system in the VR
field is the BOOM display. A BOOM display is an HMD mounted on the end of a
mechanical arm that has two points of articulation. The system detects the
position and orientation through the arm. The update rate is very high with
mechanical tracking systems, but the disadvantage is that they limit a user's range
of motion.
b) Magnetic Trackers: measure magnetic fields generated by running an electric current
sequentially through three coiled wires arranged perpendicular to one another. Each small
coil becomes an electromagnet, and the system's sensors measure how its magnetic field
affects the other coils. This measurement tells the system the direction and orientation of
the emitter. A good electromagnetic tracking system is very responsive, with low levels of
latency. One disadvantage of this system is that anything that can generate a magnetic
field can interfere with the signals sent to the sensors.

c) Ultrasonic Trackers: emit and sense ultrasonic sound waves to determine the position
and orientation of a target. Most measure the time it takes for the ultrasonic
sound to reach a sensor. Usually the sensors are stationary in the environment --
the user wears the ultrasonic emitters. The system calculates the position and
orientation of the target based on the time it took for the sound to reach the
sensors. Acoustic tracking systems have many disadvantages. Sound travels
relatively slowly, so the rate of updates on a target's position is similarly slow. The
environment can also adversely affect the system's efficiency because the speed
of sound through air can change depending on the temperature, humidity or
barometric pressure in the environment.
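The time-of-flight principle and its temperature sensitivity can be illustrated with a small sketch (the linear speed-of-sound formula is a standard approximation near room temperature; the numbers are illustrative):

```python
def distance_from_tof(time_of_flight_s, temp_c=20.0):
    """Ultrasonic tracking: distance = speed of sound x time of flight.
    The speed of sound in air rises with temperature, which is why
    environmental changes degrade the tracker's accuracy."""
    speed_m_s = 331.3 + 0.606 * temp_c   # linear approximation, m/s
    return speed_m_s * time_of_flight_s

# The same 10 ms time-of-flight reading yields different distances at
# different air temperatures -- an error source unless compensated.
d20 = distance_from_tof(0.010, 20.0)
d35 = distance_from_tof(0.010, 35.0)
```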
d) Optical Trackers: use light to measure a target's position and orientation. The signal
emitter in an optical device typically consists of a set of infrared LEDs. The sensors
are cameras that can sense the emitted infrared light. The LEDs light up in
sequential pulses. The cameras record the pulsed signals and send information to
the system's processing unit. The unit can then extrapolate the data to determine
the position and orientation of the target. Optical systems have a fast update rate,
meaning latency issues are minimized. The system's main disadvantage is that the
line of sight between a camera and an LED can be obscured.
e) Hybrid Inertial Trackers
4. Explain the working principle of the following Navigation and manipulation Interfaces
a) Cubic mouse:
a new input device that allows users to intuitively specify three-dimensional coordinates in graphics
applications. The device consists of a cube-shaped box with three perpendicular rods passing through
the center and buttons on the top for additional control. The rods represent the X, Y, and Z axes of a
given coordinate system. Pushing and pulling the rods specifies constrained motion along the
corresponding axes. Embedded within the device is a six-degree-of-freedom tracking sensor, which
allows the rods to be continually aligned with a coordinate system located in a virtual world.
We use the Cubic Mouse for navigating around the car model, for positioning cutting planes inside the
model (Figure 2), and for performing chair cuts.
We work with geologists and geophysicists from oil and gas companies to evaluate virtual environment
technology for reservoir discovery
b) Trackballs: 3Dconnexion manufactures a line of human interface devices for manipulating and
navigating computer-generated 3D imagery. A graphical input device that is based on a fixed
spherical ball. It inputs six different values defined by the orientation of the ball and the
pressure, together with the direction in which it is applied. It allows complex objects to be
positioned and rotated in three-dimensional space using a single input device. Internally a
spaceball is normally made from a set of strain gauges.

c) 3D probes
5. Write a short note on any four of the following ( 20 marks)
a) Head mounted Displays
A head-mounted display (HMD) is a type of computer display
device or monitor that, as the name implies, is worn on the
head or is built in as part of a helmet. This type of display is
meant for a total immersion of the user in whatever experience
the display is meant for, as it ensures that no matter where the
user’s head may turn, the display is positioned right in front of
the user's eyes.
The monitors in an HMD are most often Liquid Crystal
Displays (LCD), though you might come across older models that use Cathode
Ray Tube (CRT) displays. LCD monitors are more compact, lightweight, efficient,
and inexpensive than CRT displays.
Applications: aviation
HMDs are increasingly being integrated into the cockpits of modern helicopters and fighter aircraft. These are
usually fully integrated with the pilot's flying helmet and may include protective visors, night vision devices, and
displays of other symbology.
Military, police, and firefighters use HMDs to display tactical information such as maps or thermal imaging data
while viewing a real scene.

b) Hand-supported Displays
These are personal graphics displays that the user holds in one or both hands in order to periodically
view a synthetic scene. This means that the user can go in and out of the simulation as required by the
application. HSDs are similar to HMDs in their use of special optics to project a virtual image in front
of the user. In addition, HSDs incorporate features not present in HMDs, namely push buttons used to
interact with the virtual scene. An example of a hand-supported graphics display is the virtual
binoculars SX shown in Figure 3.7a [NVIS Inc., 2003]. These are constructed to resemble the look and
feel of regular binoculars, to help the realism of the simulation. Internally, however, the virtual
binoculars incorporate two miniature LCOS displays and a tracker, which measures the user's viewing
direction. The computer then updates the graphics based on tracker and pushbutton information,
similar to trackball-mediated interaction.

Floor-supported Displays

Floor-supported displays use an articulated mechanical arm to offload the weight of the graphics
display from the user. More importantly, floor-supported displays integrate sensors directly in the
mechanical support structure holding the display. If six sensors are used, it is possible to determine
the position and orientation of the end of the supporting arm relative to its base. An example is the
Boom3C display produced by Fakespace Labs [Fakespace Labs, 2001]. As illustrated in Figure 3.8a,
raw analog data from the arm sensors are first converted into floating-point angles (based on internal
calibration constants). Then direct-kinematics equations associated with open kinematic links are
used to obtain the 3D position of the end of the arm. These parameters are then sent to the graphics
workstation, which updates the images for the two eyes (assuming stereo graphics).
Each joint of the Boom3C supporting arm shown in Figure 3.8b has built-in optomechanical shaft
encoders with a position resolution of 0.1°. As stated in Chapter 2, the major advantage that
mechanical trackers have is their low latency.
Disadvantage: limited range of motion.

Another example is the WindowVR display.
c) Desk-supported Displays
Excessive display weight becomes an issue for HMDs, HSDs, and floor-supported displays: it causes
user fatigue, neck and back pain, and undesirable inertia during rotation. The solution is
desk-supported displays. These are fixed and designed to be viewed while the user is sitting. Thus
the user's freedom of motion is limited.

d) Large-volume Displays

e) Monitor-based Large-volume Displays

f) Projector-Based Displays

6. What human sensors are responsible for touch? What are the ideal characters of Haptic feedback actuators?
The skin houses four types of tactile sensors (or receptors), namely Meissner corpuscles (the majority),
Merkel disks, Pacinian corpuscles, and Ruffini corpuscles. When excited they produce small electrical
discharges, which are eventually sensed by the brain.

7. Design a cyber Touch glove and tactile mouse, make a drawing and explain.

Tactile mouse

The iFeel tactile mouse incorporates an electrical actuator that can vibrate the mouse outer shell.
The actuator shaft translates up and down in response to a magnetic field produced by its stationary
element. The shaft has a mass attached to it, creating inertial forces of more than 1 N, which are
felt by the user's palm as vibrations. The actuator is oriented perpendicular to the mouse base, such
that the vibrations occur in the Z direction. This minimizes the negative effects vibrations could
have on the X-Y mouse translation and the resulting pointing inaccuracy. The mouse pad needs to be
thicker than usual, and elastic, in order to absorb the reaction forces from the supporting desk.
Furthermore, the iFeel mouse uses optical position measurement rather than the mechanical ball used
by many other mouse designs.
The host software detects contact between the screen arrow controlled by the mouse and haptically
enabled window objects. As a result, haptic commands indicating the onset and type of tactile
feedback are sent to the mouse processor. The processor then converts the high-level commands into
vibration amplitude and frequency and drives the actuator through an actuator interface.

CyberTouch glove

The CyberTouch glove has six vibrotactile actuators. Each actuator consists of a plastic
capsule housing a DC electrical motor. The motor shaft has
an off-centered mass, which produces vibrations when
rotated. Thus, by changing the speed of rotation, it is
possible to change the vibration frequency.
During VR simulations the CyberGlove reads the user's hand
configuration and transmits the data to the host computer
over an RS232 line. These data, together with those of a 3D
tracker attached to the wrist, are used to drive a virtual
hand, which is then displayed by the graphics display.
Whenever the fingers or palm of the virtual hand interact
with virtual objects, the host computer sends commands
necessary to activate the vibrotactile actuators. These
signals are received by the actuator driver unit, which applies the corresponding currents using D/A
converters and operational amplifiers.

8. What is temperature feedback? How it is realized? Make a drawing and explain.

Temperature feedback provides thermal cues that help identify an object's material. Such variables
are surface temperature, thermal conductivity, and diffusivity.
DC current applied to dissimilar materials placed in contact creates a temperature differential. Modern
Peltier pumps consist of solid-state N- and P-type semiconductors sandwiched between ceramic electrical
insulators, as illustrated in Figure 3.29a [Phywe Systeme, 2003]. The ceramic plates serve as thermal
conductors and mechanical supports. One is called a heat source and the other a heat sink. When current
from a DC source is applied to the heat pump, the P and N charges move to the heat sink plate, where they
transfer heat. This results in a drop in temperature of the heat source plate and a corresponding rise in
temperature of the heat sink plate. The larger the current, the larger is the temperature difference between
the two plates.

9. What is the cyberGrasp? Where are its actuators placed and how is force feedback produced in this case?

The CyberGlove interface box transmits the resulting finger position data to the CyberGrasp force control unit (FCU) over
an RS232 line. The same FCU receives wrist position data from a 3D magnetic tracker worn by the user. The resulting
hand 3D positions are sent to the host computer running the simulation over a LAN. The host computer then
performs collision detection and inputs the resulting finger contact forces into the FCU. The contact force targets are
converted into analog currents, which are amplified and sent to one of five electrical actuators located in an actuator housing
unit. The actuator torques are transmitted to the user's fingertips through a system of cables and a mechanical
exoskeleton worn on top of the CyberGlove. The exoskeleton has the dual role of guiding the cables using two cams
for each finger as well as of serving as a mechanical amplifier to increase the forces felt at the fingertip.

10. The house extends from 30 to 54 in z, from 0 to 16 in x, and from 0 to 16 in y. Using these
data, sketch the viewing situation in 3D as well as the outcome of the setup for perspective
projection.

VRP: (0, 0, 54) : origin (WC)
VPN: (0, 0, 1) : z axis (WC)
VUP: (0, 1, 0) : y axis (WC)
PRP: (8, 6, 30) : (VRC)
Window: (-1, 17, -1, 17) : (VRC)
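A small sketch of the projection implied by these parameters. It assumes, as in the classic textbook house example, that the view plane is the VRC z = 0 plane (the world plane z = 54) and that the center of projection in world coordinates is VRP + PRP = (8, 6, 84); the corner coordinates come from the question:

```python
def perspective_project(p, cop, view_plane_z):
    """Project world point p from the center of projection cop onto the
    plane z = view_plane_z (simple similar-triangles construction)."""
    px, py, pz = p
    cx, cy, cz = cop
    t = (view_plane_z - cz) / (pz - cz)   # ray parameter at the view plane
    return (cx + t * (px - cx), cy + t * (py - cy), view_plane_z)

COP = (8.0, 6.0, 84.0)        # VRP + PRP, expressed in world coordinates
VIEW_PLANE_Z = 54.0           # VRC z = 0 plane in world coordinates

front = [(0, 0, 54), (16, 0, 54), (16, 16, 54), (0, 16, 54)]
back  = [(0, 0, 30), (16, 0, 30), (16, 16, 30), (0, 16, 30)]
projected = [perspective_project(p, COP, VIEW_PLANE_Z) for p in front + back]
```

The front face lies on the view plane and projects to itself, while the back face shrinks toward the center of projection, giving the familiar perspective foreshortening of the house.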

11. The house extends from 30 to 54 in z, from 0 to 16 in x, and from 0 to 16 in y. Using these
data, sketch the viewing situation in 3D as well as the outcome of the setup for perspective
projection.
VRP: (0, 0, 0) : origin (WC)
VPN: (0, 0, 1) : z axis (WC)
VUP: (0, 1, 0) : y axis (WC)
PRP: (8, 6, 84) : (VRC)
Window: (-50, 50, -50, 50) : (VRC)
12. Describe the functions for segmenting the display file.
To create a new segment, we open it and then call graphic primitives to add to the segment the lines and
text to be displayed; then we close the segment. The same sequence of operations applied to an existing
segment will cause that segment to be replaced by a new one. To remove a segment from the display file,
we delete it. Thus we need only three basic functions:
OpenSegment(n) — open a display file segment named n
CloseSegment — close the open segment
DeleteSegment(n) — remove from the display file the segment named n
Eg:
InitGraphics; SetWindow(0, 0, 500, 500); OpenSegment(1); MoveTo(100, 100); LineTo(150, 200); LineTo(200, 100); LineTo(100, 100); CloseSegment;
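The three segmentation functions can be sketched as follows (a minimal model, assuming the display file is a mapping from segment names to lists of primitives; the names and primitive tuples are illustrative):

```python
# Display file modeled as a dict of named segments; primitives recorded
# between open and close are collected in a temporary buffer.
display_file = {}
_open = None          # name of the currently open segment
_buffer = []          # primitives collected for it

def open_segment(n):
    global _open, _buffer
    _open, _buffer = n, []

def close_segment():
    global _open
    display_file[_open] = _buffer    # replaces any existing segment n
    _open = None

def delete_segment(n):
    display_file.pop(n, None)

def move_to(x, y):
    _buffer.append(("MoveTo", x, y))

def line_to(x, y):
    _buffer.append(("LineTo", x, y))

# Usage mirroring the triangle example above:
open_segment(1)
move_to(100, 100); line_to(150, 200); line_to(200, 100); line_to(100, 100)
close_segment()
```

Opening an existing name and closing again replaces the old segment, exactly as described in the text.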
13. Why double buffering is required, explain from the context of Display file compilation.
Modern architectures use double buffering, meaning that there is a front buffer and a back buffer. The front
buffer stores the image being displayed while at the same time a new scene is being written into the back
buffer. Then the back buffer is displayed and a new image is rendered into the front buffer, and so on. This is
done to reduce the flicker associated with the display hardware refresh.
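The swap mechanism can be sketched in a few lines (the pixel-list representation of a frame buffer is an illustrative stand-in for real display memory):

```python
# Double buffering: draw into the back buffer, then swap so the finished
# frame is shown while the next one is drawn off-screen.
class DoubleBuffer:
    def __init__(self, size):
        self.front = [0] * size   # currently displayed
        self.back = [0] * size    # being drawn into

    def draw(self, pixels):
        """Write (index, value) pairs into the off-screen back buffer."""
        for i, v in pixels:
            self.back[i] = v

    def swap(self):
        """Exchange the buffers; the completed frame becomes visible."""
        self.front, self.back = self.back, self.front

db = DoubleBuffer(4)
db.draw([(0, 9), (3, 7)])    # render a frame off-screen
db.swap()                    # the frame appears all at once, flicker-free
```

Because the viewer only ever sees a completed buffer, partially drawn frames never reach the screen.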
14. Write a note on “Free Storage Allocation” and “Display file structure”

Free Storage Allocation

The display-file compiler at frequent intervals needs blocks of unused memory in which to construct new
segments. Just as frequently it discards blocks of memory for which it has no further use. A free-storage
allocation system is therefore needed to supply blocks of free memory and to receive blocks that are vacated.

Three requirements are of particular importance:

1. The amount of memory required is unknown at the time of allocation; free storage is needed at the moment
we open a segment, but the amount needed is not known until the segment is closed.

2. Speed of allocation is important since any sizable delay adversely affects the program’s response.

3. Blocks cannot immediately be reused after they become free. Here again, we encounter a concurrency
problem. If we reuse a block immediately, the display processor may still be executing instructions within it, and
if we overwrite these instructions, we may corrupt the display file.

Display File Structure

An essential part of the display file structure is the name table, which allows us to
determine a segment's address in memory from its name. The name table may be
stored in a vector. A third field may be added containing the length of each segment
to assist in garbage collection.

An alternate scheme is to store the segment name at the head of the segment itself,
as shown in Figure 8-8. Space for names is then exhausted only with the exhaustion of free storage itself. We can
locate any given segment by stepping along the linked list of segments. To speed up this search, we should
include in the head of the segment a pointer to the next segment.
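Both schemes can be sketched briefly (the segment names, bodies, and index-based links are illustrative; a real implementation would use memory addresses):

```python
# Scheme 1: a name table mapping segment name -> (address, length).
name_table = {1: (0, 4), 2: (4, 2)}

def segment_address(n):
    addr, length = name_table[n]
    return addr

# Scheme 2: store the name at the head of each segment and link the
# segments together; finding a segment means walking the linked list.
segments = [
    {"name": 1, "body": ["MoveTo", "LineTo"], "next": 1},   # next = index
    {"name": 2, "body": ["Text"], "next": None},
]

def find_segment(name):
    i = 0
    while i is not None:
        if segments[i]["name"] == name:
            return segments[i]
        i = segments[i]["next"]
    return None
```

The table gives constant-time lookup at the cost of a fixed-size structure; the linked scheme only runs out of names when free storage itself is exhausted, but lookup requires stepping along the list.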

15. Write a note on the following


a) Defining symbols by procedures
b) Display procedures
c) Boxing
d) Advantage and disadvantage of display procedures.
16. Write a short note on the following Input techniques
a) Positioning techniques
Positioning, sometimes known as locating, is one of the most basic graphical input techniques. The user
indicates a position on the screen with an input device, and this position is used to insert a symbol or to
define the endpoint of a line. The need for positioning occurs very often in geometric modeling
applications. Positioning involves the user in first moving the cursor or tracking cross to the desired spot
on the screen and then notifying the computer by pressing a button or key. Most graphical input devices
incorporate buttons or pressure-activated switches for this purpose.
A single positioning operation can be used to insert a symbol, and two in succession can define the
endpoints of a line.

Rubber-band techniques

One of the earliest examples of positioning feedback to be demonstrated was the rubber-band line. The user
specifies the line in the normal way by positioning its two endpoints. As he moves from the first endpoint to the
second, the program displays a line from the first endpoint to the cursor position (Figure 12-12); thus he can see the
lie of the line before he finishes positioning it. The effect is of an elastic line stretched between the first endpoint
and the cursor; hence the name for this technique. Rubber-band lines are helpful in applications where lines must be
positioned to pass through or near other points.

b) Pointing and selection


Graphical input devices play a very important role in allowing the user to point to information on the
screen. In many applications pointing, rather than positioning, is the basis for interaction. The user may
have no need to add more information to the picture and may be interested solely in studying and
asking questions about the information already displayed.
Selection techniques
1. The use of selection points. In order to select a graphical unit the user points to a specific spot, such as
the center of a circle or an endpoint of a line. Selection points can be provided for symbols and larger
subpictures. Feedback can be given by highlighting or increased brightness.
2. Defining a bounding rectangle. The user can define two opposite corners of a rectangle and in this
way select an object that lies within the rectangle
3. Multiple keys for selection. When the user has positioned the cursor over the item he wishes to
select, he can press one of several keys according to the type of item.
4. Prefix commands. The type of item to be selected can be determined by the user’s prior choice of
command. The command is given before the selection and may specify which type of item is to be
selected. Thus three different delete commands might be provided: DELETE POINT, DELETE LINE, and
DELETE SYMBOL.
5. Modes. The user may be able to change the selection mechanism by setting different modes of
operation. In one mode the program might allow only line selection and in another just symbol selection.
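Technique 2, the bounding rectangle, can be sketched as a simple point-in-rectangle test (the object names and positions are illustrative):

```python
def select_in_rectangle(objects, corner1, corner2):
    """Return the objects whose position lies inside the rectangle
    defined by two opposite corners, in either drag direction."""
    x_lo, x_hi = sorted((corner1[0], corner2[0]))
    y_lo, y_hi = sorted((corner1[1], corner2[1]))
    return [name for name, (x, y) in objects.items()
            if x_lo <= x <= x_hi and y_lo <= y <= y_hi]

objects = {"circle": (50, 50), "line_end": (120, 80), "symbol": (30, 200)}
picked = select_in_rectangle(objects, (0, 0), (100, 100))
```

Sorting the corner coordinates lets the user drag the rectangle in any direction, not just top-left to bottom-right.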
c) Inking and painting
If we sample the position of a graphical input device at regular intervals and display a dot at each
sampled position, a trail will be displayed of the movement of the device (Figure 12-36). This technique,
which closely simulates the effect of drawing on paper, is called inking.

A raster display incorporating a random-access frame buffer can be treated as a painting surface for
interactive purposes. As the user moves the cursor around, a trace of its path can be left on the screen.
The user can build up freehand drawings of surprisingly good quality. It is possible to provide a range of
tools for painting on a raster display; these tools take the form of brushes that lay down trails of
different thicknesses and colors. For example, instead of depositing a single dot at each sampled input
position, the program can insert a group of dots so as to fill in a square or circle; the result will be a much
thicker trace.
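The brush idea can be sketched by stamping a filled circle of pixels at each sampled cursor position (the tiny 8x8 frame buffer and the cursor path are illustrative):

```python
def stamp_brush(frame_buffer, cx, cy, radius, color):
    """Painting: instead of a single dot, deposit a filled circle of
    pixels at the cursor position to produce a thicker trace."""
    h, w = len(frame_buffer), len(frame_buffer[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                frame_buffer[y][x] = color

fb = [[0] * 8 for _ in range(8)]
for cx, cy in [(2, 2), (3, 3), (4, 4)]:   # sampled cursor path
    stamp_brush(fb, cx, cy, 1, 1)         # radius-1 brush inks a trail
```

Varying the radius (or the stamped shape) gives brushes of different thicknesses, as the text describes.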
d) Online character recognition
In text input, the user positions a character by first typing it and then indicating its position on the screen. We may ask
whether it is always necessary to define explicitly what we wish to draw before defining its geometry.
Could we not provide a few more features of the geometry so that the identity of the object is obvious?
This is the reasoning behind on-line character recognition, one of the most interesting of all interactive
graphical techniques. Using the inking technique described in the previous section, the user draws
several freehand strokes that define the character or symbol he wishes to insert. The computer then
attempts to recognize the character by analyzing these strokes. As it recognizes each character, it erases
the strokes and replaces them by a neatly drawn symbol

17. Discuss appropriate ways of event handling for the following


a) Polling
Two difficulties arise in handling input. First, the user often has more than one device at his disposal, and the program cannot predict which one
he will use next. Second, even if we restrict the user to a single device, we cannot predict when he will
use it. The danger is not so much that the program may have to wait indefinitely but that it may miss the
input data altogether because it is busy with some other task when the user decides to do something.
The solution is polling: we periodically check the status of each device. As we have seen in Chapter 11, input devices are
connected to the computer by means of registers whose contents the computer can read. A keyboard
usually has two registers, one to indicate whether a key has been struck, the other to identify the key by
its character code.
b) Interrupts: these solve the second problem, losing input data when the user acts while the program is busy with another task.

Interrupts exploit a feature of most computers: they possess hardware that makes it easy for the central processor to switch rapidly between two or
more programmed tasks. Tasks are assigned different priorities so that higher-priority tasks may interrupt tasks of
lower priority. Tasks may be associated with specific peripheral devices in such a way that when a signal, or
interrupt, is received from a device, control passes to its task. If the interrupt is received while a lower-priority task is
running, switching is immediate, and control passes back to the lower-priority task when the other task has run to
completion. If the interrupt is of insufficient priority to cause immediate switching, its task is run when all higher-
priority interrupts have been processed.

c)Event queue

The main program may be in the midst of a lengthy computation while the user enters
input, so the input data must be saved until the computation is complete. Sometimes
the user will have time to perform several input actions before the computer can
attend to them. Each such input action must be passed to the main program in the
order of receipt by the polling task. We employ an event queue to pass input data
from the polling task to the main program in the correct order. The queue is a list
of blocks, each representing one user action, or event. The polling task adds event
blocks to the tail of the queue, storing in each block the type of device causing the
event and the contents of the device registers. The main program takes events off
the head of the queue and invokes the appropriate process in response.
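The queue between the polling task and the main program can be sketched directly (the device names and register contents are illustrative):

```python
from collections import deque

# Event queue: the polling task is the producer, the main program the
# consumer; events are delivered strictly in order of receipt.
event_queue = deque()

def polling_task_add(device, registers):
    """Producer: append an event block to the tail of the queue."""
    event_queue.append({"device": device, "registers": registers})

def main_program_next():
    """Consumer: remove the event at the head of the queue, if any."""
    return event_queue.popleft() if event_queue else None

# The user acts twice while the main program is busy computing:
polling_task_add("keyboard", {"code": 65})
polling_task_add("mouse", {"x": 10, "y": 20})
first = main_program_next()    # the keyboard event is handled first
```

The first-in, first-out discipline is what guarantees the main program sees user actions in the order they occurred.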

d) Polling task design


e) Light-pen Interrupts
Two kinds of light-pen interrupts may occur. The user may point the pen at an item on the screen in
order to select it; this results in a selection interrupt. When the user is positioning with the pen, a
tracking pattern is displayed in order to follow the pen’s movement and tracking interrupts are
generated when the pen sees the pattern. The light-pen task must distinguish between selection and
tracking interrupts. This can be achieved by checking the display address register to see whether the
display has been stopped while displaying the tracking pattern. If it has, the polling task proceeds with
the tracking process; if it has not, the interrupt is treated as a selection interrupt and a selection event is
generated.
