Module 4

Visible surface detection

Visible-surface detection determines what is visible in a scene from a given viewpoint, i.e. which surfaces (or parts of surfaces) are obscured by other surfaces. A point on an object is visible if there exists a direct line of sight to that point, unobstructed by any other object.

Issues to weigh when comparing approaches to this problem are:

• Device independence
• Processing time
• Memory requirements
• Ease of implementation in hardware

Classification of visible surface detection

Depending upon whether they deal with object definitions or their projected images, visible-surface detection algorithms are classified into:

Object space method: compares objects and parts of objects to each other within the scene to determine which surfaces to display as visible.

Image space method: visibility is determined point by point at each pixel position on the projection plane.

Back-face detection

The “back” faces of a closed polyhedron are never visible.


• It is an object-space method.
• A simple way to perform hidden-surface removal is to discard all "back-facing" polygons.
• The observation is that if a polygon's normal faces away from the viewer, the polygon is "back facing".
• For solid objects, this means the polygon will not be seen by the viewer.

Let N = (A, B, C) be the normal vector of a polygon surface, and let V be a vector in the viewing direction (pointing from the eye into the scene).

If V · N > 0, the polygon surface is a back face.

When the object description has been converted to projection coordinates and the viewing direction is parallel to the z axis, V = (0, 0, Vz) and

V · N = Vz C

With the viewing direction along the negative z axis, the polygon is a back face if C ≤ 0.


Polygons that are only partially hidden cannot be identified by this test alone; additional visibility tests are required for them.
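A minimal sketch of the back-face test in Python (the function and variable names are illustrative, not from these notes):

```python
def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Return True if a polygon with outward normal N = (A, B, C)
    faces away from a viewer looking along view_dir (V . N >= 0;
    edge-on polygons are culled too, matching the C <= 0 rule)."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return vx * nx + vy * ny + vz * nz >= 0

# With V = (0, 0, -1), V . N reduces to -C:
print(is_back_face((0.0, 0.0, -1.0)))  # True:  C = -1 <= 0, back face
print(is_back_face((0.0, 0.0, 1.0)))   # False: C = 1 > 0, front face
```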

Depth-Buffer method

• It is an image-space method, also known as the Z-buffer method.
• Surface depths are compared at each pixel position on the projection plane.
• The depth of a surface is its distance from the view plane along the z axis.
• The method is usually used for scenes containing only polygon surfaces.
• Initially the object description is converted to projection coordinates; each (x, y, z) position is converted to its orthographic projection position (x, y).
• For each pixel position (x, y), the depths (z values) of all surfaces overlapping that pixel are compared.
• Two buffers are used: a depth buffer and a refresh buffer, each with one entry per pixel position.
• The depth buffer stores, at each pixel position, the depth of the closest surface encountered so far.
• The refresh buffer stores the intensity (colour) of that closest surface.
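A minimal sketch of the method in Python (the buffer sizes, the representation of a surface as pre-rasterized pixel samples, and the convention that a larger z value means closer to the view plane are all assumptions for illustration):

```python
WIDTH, HEIGHT = 640, 480

# Depth buffer initialised to the minimum depth (farthest possible value),
# refresh buffer initialised to the background colour.
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
refresh_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def process_surface(samples):
    """samples: iterable of (x, y, z, colour) pixel samples for one surface.
    At each pixel, keep the sample closest to the view plane seen so far
    (larger z = closer under the assumed convention)."""
    for x, y, z, colour in samples:
        if z > depth_buffer[y][x]:
            depth_buffer[y][x] = z
            refresh_buffer[y][x] = colour
```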


The depth value can be calculated from the plane equation Ax + By + Cz + D = 0 of the surface as

z = (-Ax - By - D) / C ----------------------(1)

Successive pixels along a scan line differ by 1 in x. After all x values for a particular y have been processed, the next scan line is obtained by incrementing y by 1; in this way the whole screen is scanned.

From equation (1), the depth value for the next position (x + 1, y) along the scan line is

z' = (-A(x + 1) - By - D) / C

z' = z - A/C

Moving down an edge of slope m to the scan line below, x changes by 1/m and y by 1, so the depth down the edge is

z' = z + (A/m + B) / C

For a vertical edge, x does not change and

z' = z + B/C
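A small numeric check of the horizontal increment in Python (the plane coefficients are made up for illustration):

```python
A, B, C, D = 2.0, 1.0, 4.0, -20.0      # hypothetical plane Ax + By + Cz + D = 0

def depth(x, y):
    return (-A * x - B * y - D) / C    # equation (1)

z = depth(3, 5)                        # depth at (x, y)
z_next = z - A / C                     # incremental depth at (x + 1, y)
assert abs(z_next - depth(4, 5)) < 1e-9
```

Only one subtraction per pixel is needed instead of re-evaluating the full plane equation.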


A-Buffer method
• An extension of the depth buffer for dealing with anti-aliasing, area-averaging, transparency, and translucency.
• Each pixel can reference a linked list of surfaces: a transparent surface together with the surfaces visible through it.
• Each position in the A-buffer has two fields: (1) a depth field and (2) a surface data field.
• If the depth field is positive, the position holds data for a single surface; a negative depth indicates that the surface data field points to a linked list of surface contributions.
• The surface data field contains:
  o RGB intensity components
  o opacity parameter
  o depth
  o percent of area coverage
  o surface identifier
  o pointer to the next surface
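A sketch of the per-pixel entry as Python data structures (the field names mirror the list above; the concrete representation is an assumption):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SurfaceData:
    rgb: Tuple[float, float, float]        # RGB intensity components
    opacity: float                         # 0 = fully transparent, 1 = opaque
    depth: float                           # depth of this surface fragment
    area_coverage: float                   # percent of the pixel area covered
    surface_id: int                        # surface identifier
    next: Optional["SurfaceData"] = None   # pointer to the next surface

@dataclass
class APixel:
    depth: float                            # depth field (negative: multiple surfaces)
    surfaces: Optional[SurfaceData] = None  # head of the linked surface list
```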

Scan-line algorithm

• An image-space method.
• Polygons are considered scan line by scan line.
• The method maintains a flag for each polygon and an edge table.
• On a given scan line, when an edge of a polygon is crossed, the flag for that polygon is set and the edge is entered in the edge table.
• On encountering the other edge of that polygon, the flag is reset.
• For the interval in between, that polygon is displayed.
• If there is an overlap, i.e. an edge of another polygon is crossed while the flag for one polygon is still set, depth calculations are made and the surface nearer to the view plane is displayed (see the sketch after this list).
• This works only for surfaces that do not cut through each other.
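A simplified sketch of the flag-and-depth logic for one scan line in Python (the crossing-list format and the constant per-polygon depth across an overlap are assumptions made to keep the sketch short):

```python
def scan_line_spans(crossings):
    """crossings: list of (x, polygon_id, depth) edge crossings for one
    scan line, sorted by x. The first crossing of a polygon sets its
    flag, the second resets it; where several flags are set, the polygon
    nearest the view plane (smallest depth) wins."""
    active = {}      # polygon_id -> depth, for polygons whose flag is set
    spans = []       # (x_start, x_end, visible polygon_id)
    prev_x = None
    for x, poly, depth in crossings:
        if active and prev_x is not None and x > prev_x:
            nearest = min(active, key=active.get)
            spans.append((prev_x, x, nearest))
        if poly in active:
            del active[poly]       # second edge: reset the flag
        else:
            active[poly] = depth   # first edge: set the flag
        prev_x = x
    return spans

# Polygon 'a' spans x = 2..8 at depth 2; 'b' spans x = 5..10 at depth 1.
print(scan_line_spans([(2, 'a', 2), (5, 'b', 1), (8, 'a', 2), (10, 'b', 1)]))
# [(2, 5, 'a'), (5, 8, 'b'), (8, 10, 'b')]
```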


BSP-tree method

• Build a data structure (a binary space-partitioning tree) from the polygons in the scene.
• Correctly traversing this tree enumerates objects from back to front.
• Assume that no objects in the space overlap.
• Use planes to recursively split the object space, keeping a tree structure of these recursive splits.
• Choose a splitting plane, dividing the objects into three sets: those on either side of the plane, and those fully contained in the plane.
• When an object is divided by a splitting plane, it is split into two objects, one on each side of the plane.
• When we reach a convex subspace containing exactly zero or one objects, that is a leaf node.


• Once the tree is constructed, every root-to-leaf path describes a single convex
subspace.

• Correctly traversing this tree enumerates objects from back to front.
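A sketch of the back-to-front traversal in Python (the node structure and plane-side test are illustrative assumptions; objects lying in the splitting plane are stored at the node, a common variant):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BSPNode:
    plane: tuple                          # splitting plane (A, B, C, D)
    on_plane: List[object] = field(default_factory=list)
    front: Optional["BSPNode"] = None     # subtree on the plane's front side
    back: Optional["BSPNode"] = None      # subtree on the plane's back side

def side_of(plane, point):
    A, B, C, D = plane
    x, y, z = point
    return A * x + B * y + C * z + D      # > 0 means the front half-space

def back_to_front(node, eye, out):
    """Appends objects to `out` in back-to-front order as seen from `eye`:
    always recurse first into the subtree on the far side of the plane."""
    if node is None:
        return
    if side_of(node.plane, eye) > 0:      # eye in front: back subtree is farther
        back_to_front(node.back, eye, out)
        out.extend(node.on_plane)
        back_to_front(node.front, eye, out)
    else:                                 # eye behind: front subtree is farther
        back_to_front(node.front, eye, out)
        out.extend(node.on_plane)
        back_to_front(node.back, eye, out)
```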

Area subdivision methods

• A divide-and-conquer algorithm, and an image-space method.
• Successively divide the total viewing area into smaller and smaller rectangles until each small area is part of a single surface or the area becomes pixel-sized.
• At each step, tests are applied on the view plane to decide whether the area is part of a single surface or must be subdivided further.
• There are four possible relationships that a surface can have with the boundary of the area under consideration:
1. Surrounding surface: a surface that completely encloses the area.
2. Overlapping surface: a surface that is partially inside and partially outside the area.
3. Inside surface: a surface that is completely inside the area.
4. Outside surface: a surface that is completely outside the considered area.
• No further subdivision of the area is required if one of the following conditions is true (see the skeleton after this list):
1. All surfaces are outside surfaces with respect to the area.


2. Only one inside, overlapping, or surrounding surface is in the area.

3. A surrounding surface obscures all other surfaces within the area boundaries.
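A skeleton of the recursion in Python (the classify, render, and pixel_sized helpers are hypothetical stand-ins supplied by the caller; condition 3 is noted but not implemented):

```python
def split_into_four(area):
    """Split a rectangle (x, y, w, h) into four quadrants."""
    x, y, w, h = area
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, w - hw, hh),
            (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]

def subdivide(area, surfaces, classify, render, pixel_sized):
    """classify(surface, area) -> 'surrounding' | 'overlapping' |
    'inside' | 'outside'; render(area, surfaces) shades the area."""
    relevant = [s for s in surfaces if classify(s, area) != 'outside']
    # Conditions 1 and 2: nothing visible, or a single relevant surface.
    if len(relevant) <= 1 or pixel_sized(area):
        render(area, relevant)
        return
    # Condition 3 (a surrounding surface hiding all others) would be
    # tested here with a depth comparison; omitted in this sketch.
    for quadrant in split_into_four(area):
        subdivide(quadrant, relevant, classify, render, pixel_sized)
```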

Octree methods

• The view volume is represented as an octree.
• Visible surfaces are determined by searching the octree nodes in front-to-back order; front octants can be distinguished from back octants with simple bit operations on the octant numbers.
• The traversal of the octree is hierarchical and recursive.


• Octree nodes are projected onto the viewing surface in front-to-back order.
• Any surfaces toward the rear of the front octants (0, 1, 2, 3) or in the back octants (4, 5, 6, 7) may be hidden by the front surfaces.
• With this numbering, the nodes representing octants 0, 1, 2, 3 of the entire region are visited before the nodes representing octants 4, 5, 6, 7.
• Similarly, the nodes for the front four sub-octants of octant 0 are visited before the nodes for its four back sub-octants.
• When a colour is encountered in an octree node, the corresponding pixel in the frame buffer is painted only if no previous colour has been loaded into the same pixel position.
• In most cases, both a front and a back octant must be considered in determining the correct colour values for a quadrant:
  - If the front octant is homogeneously filled with some colour, the back octant is not processed.
  - If the front octant is empty, only the rear octant needs to be processed.
  - If the front octant has heterogeneous regions, it is subdivided and the sub-octants are handled recursively.
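A much-simplified sketch of the traversal in Python (the node layout is an assumption, and the projection of octants onto quadrants of the viewing surface is omitted; only the front-to-back visiting order and the paint-once rule are shown):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class OctreeNode:
    colour: Optional[Tuple[int, int, int]] = None   # set for homogeneous regions
    children: List[Optional["OctreeNode"]] = field(
        default_factory=lambda: [None] * 8)          # octants 0-3 front, 4-7 back

def traverse_front_to_back(node, paint):
    """paint(colour) writes a pixel only if it is still unpainted, so
    colours from the front octants (visited first) take priority."""
    if node is None:
        return
    if node.colour is not None:        # homogeneous node: paint and stop
        paint(node.colour)
        return
    for child in node.children:        # order 0..7: front octants first
        traverse_front_to_back(child, paint)
```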

Polygon rendering methods.

Flat Shading or constant intensity shading.

• The entire polygon is shaded with one colour.
• The lighting calculation is performed at a single point:
  o one polygon vertex, or
  o the centre of the polygon.
• It provides accurate rendering if the following assumptions are valid:
  o The object is a polyhedron, not a polygonal approximation of an object with curved surfaces.
  o All light sources illuminating the object are far from the object.
  o The viewing position is sufficiently far from the object.
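A minimal sketch in Python (a Lambertian diffuse term plus an ambient term is assumed as the illumination model; all names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def flat_shade(normal, light_dir, base_colour, ambient=0.1):
    """One lighting calculation for the whole polygon: evaluated once,
    then reused for every pixel the polygon covers."""
    n, l = normalize(normal), normalize(light_dir)
    intensity = ambient + max(0.0, dot(n, l))
    return tuple(min(1.0, c * intensity) for c in base_colour)

polygon_colour = flat_shade((0, 0, 1), (0, 1, 1), (0.8, 0.2, 0.2))
```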

Gouraud shading

• Gouraud shading interpolates colours across a polygon from the vertices.
• Lighting calculations are performed only at the vertices.
• Works well for triangles.


• Gouraud shading first interpolates between vertices and assigns values along the triangle edges.
• It then interpolates across each scan line, based on the interpolated edge-crossing values.
• One of the main advantages of Gouraud shading is that it smooths out triangle edges on mesh surfaces, giving objects a more realistic appearance.
• The following calculations are made in this method (the scan-line step is sketched below):
  o Determine the average unit normal vector at each polygon vertex.
  o Apply the illumination model at each vertex.
  o Linearly interpolate the vertex intensities over the surface.
(Refer to the text for details of the intensity interpolation along the edges.)
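A sketch of the scan-line interpolation step in Python (scalar intensities stand in for RGB components; the edge intensities i_left and i_right are assumed to have been interpolated from the vertex intensities already):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_scan_line(x_left, i_left, x_right, i_right):
    """Linearly interpolate the edge intensities across one scan line."""
    span = max(1, x_right - x_left)
    return [(x, lerp(i_left, i_right, (x - x_left) / span))
            for x in range(x_left, x_right + 1)]

print(gouraud_scan_line(10, 0.2, 14, 1.0))
# [(10, 0.2), (11, 0.4), (12, 0.6), (13, 0.8), (14, 1.0)] up to float rounding
```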

Phong shading

• Phong shading interpolates lighting-model parameters, not colours.
• It gives a much better rendition of highlights.
• A normal is specified at each vertex of a polygon.
• Vertex normals are independent of the polygon normal; they should relate to the surface being approximated by the polygon.
• The normal is interpolated across the polygon (using the same interpolation techniques as Gouraud shading).
• At each pixel:
  o interpolate the normal,
  o interpolate the other shading parameters,
  o compute the view and light vectors,
  o evaluate the lighting model.
• The lighting model does not have to be the Phong illumination model.
• Normal interpolation is nominally done by vector addition and renormalization; several fast approximations are possible.
• The view and light vectors may also be interpolated or approximated.
• Problems with Phong shading (see the sketch after this list):
  o Distances change under the perspective transformation.
  o The lighting calculation cannot be performed in projected space: normals do not map through the perspective transformation and are lost after projection.
  o The lighting calculation therefore has to be performed in world space, which requires mapping pixel positions backward through the perspective transformation.
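A per-pixel sketch in Python (a diffuse-plus-ambient term stands in for the full Phong illumination model; all names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def phong_shade_pixel(n0, n1, t, light_dir, base_colour, ambient=0.1):
    """Interpolate the normal between the edge normals n0 and n1
    (t in [0, 1]), renormalize, and only then evaluate the lighting
    model -- once per pixel, unlike Gouraud shading."""
    n = normalize(tuple(a + (b - a) * t for a, b in zip(n0, n1)))
    l = normalize(light_dir)
    intensity = ambient + max(0.0, dot(n, l))
    return tuple(min(1.0, c * intensity) for c in base_colour)
```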
