
Essays

Q: Describe in detail the classification of visible surface detection algorithms. (Repeated)


Visible surface detection

Suppose we have a three-dimensional object represented in the computer system. We have seen the
methods to project points of the 3D object onto a 2D view window. Only some parts of the 3D object
are visible to us: only the parts that face our viewing direction are visible, while the other parts are hidden.
We need to show only the visible parts in the view window, so we have to eliminate the hidden surfaces.
Thus, given a 3D scene, we have to identify those parts of the scene that are visible from a given
viewing direction.
A number of algorithms have been devised for efficient identification of visible parts. These
algorithms are called visible surface detection methods or hidden surface elimination methods.

Classification of visible surface detection algorithms

Visible surface detection algorithms are classified into


1. Object space methods and
2. Image space methods

Object space methods


These methods compare objects and parts of objects in a scene to determine which surfaces can be
labeled as visible. Example: the back-face detection method.

Image space methods


In these methods, visibility is decided point by point, at each pixel position on the projection plane.
Examples: the Z-buffer method and the scan-line method.

Backface detection method (repeated 4-mark question also)

Suppose we are given a 3D object. We can assume that it is divided into a number of planes. We
know that the equation of a plane is Ax + By + Cz + D = 0.

[Figure: plane klm of the object, with a point (x1, y1, z1) inside the object and a point (x2, y2, z2) outside it]
Suppose in the given 3D object, plane klm has the equation Ax + By + Cz + D = 0.
Then a point (x1, y1, z1) is inside the plane if Ax1 + By1 + Cz1 + D < 0, and a point (x2, y2, z2) is
outside the plane if Ax2 + By2 + Cz2 + D > 0. These conditions can be used to check for planes of a
3D object that are not visible. A tiny evaluation of this condition is sketched below.
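A minimal sketch of this plane-side test, assuming the plane coefficients A, B, C, D are already known (the function name is an illustrative choice):

def plane_side(A, B, C, D, x, y, z):
    # Sign of Ax + By + Cz + D: negative means the point lies inside
    # (behind the plane), positive means outside (in front of it).
    return A*x + B*y + C*z + D

For example, for the plane z = 0 with normal (0, 0, 1), plane_side(0, 0, 1, 0, 0, 0, -5) returns -5, so the point (0, 0, -5) is inside.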
We can simplify this test by considering the normal to a plane surface. If a plane
has the equation Ax + By + Cz + D = 0, then the normal vector N to the polygon surface is (A, B, C).

[Figure: polygon surface with its normal vector N = (A, B, C)]

Suppose V is a vector in the viewing direction. Then the polygon lmn is a back face (not
visible) if V · N > 0.

[Figure: polygon lmn with normal N = (A, B, C), viewing vector V and origin o]
Again we can simplify this test. We have assumed that the viewing direction is along the z axis,
so V = (0, 0, Vz).

[Figure: viewing coordinate axes Xv, Yv, Zv with origin o, viewing vector V = (0, 0, Vz) and polygon normal N = (A, B, C)]
Then the polygon lmn is a back face if Vz · C > 0.

Again we can simplify this test.


In a right-handed coordinate system, with the viewing direction along the negative z axis, the polygon is a back
face if C <= 0.

[Figure: right-handed viewing coordinate system Xv, Yv, Zv with normal N = (A, B, C)]
This technique is called the backface detection method. We can use this test to eliminate all the
back faces, but we cannot delete planes that are partially visible.

[Figure: a 3D object with faces labeled a to h and viewing direction V]
In the above 3D object, if we use the back-face detection method, the planes h, a and b will be found
to be back faces, and these planes are eliminated. The plane c is partially visible, so it will not be eliminated.
Thus the back-face detection method can be used to eliminate only the completely invisible planes
of a 3D object. A minimal sketch of the test is given below.
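A minimal sketch of the back-face test, assuming each polygon is given by three of its vertices in counter-clockwise order as seen from outside the object, and that we view along the negative z axis; the function names are illustrative choices:

def surface_normal(p0, p1, p2):
    # Normal N = (A, B, C) of the plane through three vertices, taken in
    # counter-clockwise order as seen from outside the object.
    ax, ay, az = p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]
    bx, by, bz = p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2]
    return (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx)  # cross product

def is_back_face(p0, p1, p2):
    # With viewing direction V = (0, 0, Vz), Vz < 0, the test V . N > 0
    # reduces to C <= 0.
    A, B, C = surface_normal(p0, p1, p2)
    return C <= 0

For example, is_back_face((0,0,0), (1,0,0), (0,1,0)) returns False, since the normal (0, 0, 1) points toward the viewer.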

Depth buffer method

This is also called the Z-buffer method. It is an image space method.


Our assumption is that we are viewing the 3D object along the z axis. We know that a
3D object is composed of a number of surfaces.
In this method we use a depth buffer and a refresh buffer. The depth buffer
stores the z value (depth) at each pixel position (x, y); it is initialized to 0. The refresh buffer
stores the intensity value at each pixel position (x, y).

[Figure: three plane surfaces S1, S2 and S3 in front of the pixel position (x, y), at depths 50, 10 and 3 along the Zv axis]

In the above scene there are three plane surfaces S1, S2 and S3. At the point (x, y) only surface S1 is
visible, because the z value (depth) of S1 (that is, 50) is greater than that of S2 (z value 10)
and of S3 (z value 3); we are viewing in the negative z direction.
Thus here depth(x, y) = 50, and the refresh buffer stores the intensity of surface S1
at (x, y), that is, the intensity at the point (x, y, 50).
Thus the procedure is: initialize all positions in the depth buffer to 0 (the minimum depth), and
initialize the refresh buffer to the background intensity.
Each surface in the surface list is then processed, one scan line at a time, calculating the
depth (z value) at each pixel position (x, y). The calculated depth is compared to the value previously
stored in the depth buffer at that position. If the calculated depth is greater than the stored value,
the new depth value is stored, and the intensity of that surface at that position is determined and placed
in the (x, y) location of the refresh buffer. A short sketch of this loop follows.
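A minimal sketch of the depth-buffer loop, assuming each surface can report its depth and intensity at a pixel; the surface interface, buffer sizes and function names are illustrative assumptions, and the convention (larger z is closer, buffer initialized to 0) follows the description above:

WIDTH, HEIGHT = 640, 480
BACKGROUND = 0.0

depth_buffer   = [[0.0] * WIDTH for _ in range(HEIGHT)]         # minimum depth
refresh_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # intensities

def process_surfaces(surfaces):
    # surfaces: objects with depth_at(x, y) and intensity_at(x, y);
    # depth_at returns None where the surface does not cover the pixel.
    for s in surfaces:
        for y in range(HEIGHT):             # one scan line at a time
            for x in range(WIDTH):
                z = s.depth_at(x, y)
                if z is not None and z > depth_buffer[y][x]:
                    depth_buffer[y][x] = z                     # closer surface wins
                    refresh_buffer[y][x] = s.intensity_at(x, y)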

Projection
Two basic projection methods are
1. Parallel projection and
2. Perspective projection.

Parallel projection (repeated question)


In a parallel projection, coordinate positions are transformed to the view plane along parallel
lines.

[Figure: points P1 and P2 projected along parallel lines to P1' and P2' on the projection plane]

A parallel projection preserves relative proportions of objects.


Accurate views of the various sides of an object are obtained using parallel projection, but it does not
give a realistic representation of the object.

Orthographic parallel projection

When the projection is perpendicular to the view plane, we have an orthographic parallel
projection.

[Figure: lines of projection perpendicular to the view plane]
Orthographic projections are often used to obtain the front, side and top views of an object.
[Figure: orthographic front, side and top views of an object]

Transformation equations for orthographic parallel projection are simple.


Suppose the view plane is perpendicular to the Zv axis and lies along the XvYv plane. Then the
projection of any point (x, y, z) of the object onto the view plane is

xp = x
yp = y

[Figure: point (x, y, z) projected orthographically to (x, y) on the view plane in the XvYv plane]

Oblique parallel projection

When the projection is not perpendicular to the view plane, we have an oblique parallel projection.


An oblique parallel projection is obtained by projecting points along parallel lines that are not
perpendicular to the view plane (the projection plane).

[Figure: oblique projection of the point (x, y, z) to (xp, yp, 0) on the view plane; (x, y, 0) is its orthographic projection, L is the distance from (x, y, 0) to (xp, yp, 0), α is the angle between the oblique projection line and the view plane, and φ is the angle of the line from (x, y, 0) to (xp, yp, 0) with the Xv axis]
Suppose we need an oblique projection of the point (x, y, z) of a 3D object; we have to specify the
angles α and φ shown in the figure.

In this oblique projection, the point (x, y, z) on the object is projected to (xp, yp) on the view plane.
The oblique projection line is the line joining the point (x, y, z) to the point (xp, yp).

From the figure,

cos φ = (xp - x) / L, so xp = x + L cos φ

sin φ = (yp - y) / L, so yp = y + L sin φ

tan α = (z - 0) / L = z / L, so L = z / tan α

Suppose L1 = 1 / tan α. Then L = z L1.
Then the equations

xp = x + L cos φ
yp = y + L sin φ

become

xp = x + z L1 cos φ
yp = y + z L1 sin φ
This can be written in matrix form as

xp       1   0   L1 cos φ   0      x
yp   =   0   1   L1 sin φ   0      y
zp       0   0   0          0      z
1        0   0   0          1      1

Thus the transformation matrix for producing any parallel projection onto a view plane that lies
along the XvYv plane is written as

M_parallel =   1   0   L1 cos φ   0
               0   1   L1 sin φ   0
               0   0   0          0
               0   0   0          1

We get an orthographic projection (α = 90°) when L1 = 0
(that is, 1/tan α = 0, so tan α = ∞ and α = 90°). A small sketch combining these cases is given below.
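A minimal sketch of the general parallel projection above, assuming angles are given in degrees; the function name and parameter names are illustrative. Setting L1 = 0 gives the orthographic case, L1 = 1 the cavalier case and L1 = 0.5 the cabinet case described next:

import math

def parallel_project(x, y, z, alpha_deg=90.0, phi_deg=45.0):
    # General parallel projection onto the XvYv plane:
    #   xp = x + z * L1 * cos(phi),  yp = y + z * L1 * sin(phi)
    # where L1 = 1 / tan(alpha). alpha = 90 degrees (L1 = 0) gives the
    # orthographic projection xp = x, yp = y.
    alpha = math.radians(alpha_deg)
    phi = math.radians(phi_deg)
    L1 = 0.0 if alpha_deg == 90.0 else 1.0 / math.tan(alpha)
    return (x + z * L1 * math.cos(phi), y + z * L1 * math.sin(phi))

# cavalier projection: tan(alpha) = 1, i.e. alpha = 45 degrees (L1 = 1)
print(parallel_project(1, 1, 1, alpha_deg=45.0))
# cabinet projection: tan(alpha) = 2, i.e. alpha ~ 63.4 degrees (L1 = 0.5)
print(parallel_project(1, 1, 1, alpha_deg=math.degrees(math.atan(2))))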

Cavalier parallel projection

When tan α = 1 (α = 45°), the view obtained is called a cavalier projection. In a cavalier projection,
lines perpendicular to the view plane are projected with no change in length (L = z).

 = 45
 = 30

cavalier projection of a cube

Cabinet parallel projection

When tan α = 2 (α ≈ 63.4°), the resulting view is called a cabinet projection. In a cabinet projection,
lines perpendicular to the view plane are projected at half their length (L = z/2).


 = 45  = 30

Cabinet projection of a cube

Because of this foreshortening, cabinet projections are more realistic than cavalier projections.

Perspective projection

[Figure: points P1 and P2 projected along lines converging to the projection reference point, meeting the view plane at P1' and P2']

In perspective projection, positions on an object are transformed to the view plane along
lines that converge to a point called the projection reference point (the center of projection).
Perspective projection produces realistic views.

Suppose the projection reference point is at position Zprp along the Zv axis, and the view
plane is at Zvp.
[Figure: point P(x, y, z) projected to (xp, yp, Zvp) on the view plane at Zvp; the projection reference point lies at Zprp on the Zv axis]

Suppose P(x, y, z) is any point on the three-dimensional object, and (xp, yp, Zvp) is the
projection of this point on the view plane.
Any point (x', y', z') on the perspective projection line can be written as

x' = x - x u
y' = y - y u
z' = z - (z - Zprp) u

where the parameter u varies from 0 to 1. When u = 0 we are at the position P(x, y, z); when u = 1
we are at the projection reference point, that is (0, 0, Zprp).

On the view plane,

z' = Zvp.

At this position,

Zvp = z - (z - Zprp) u

u = (Zvp - z) / (Zprp - z)

Substituting u in x':

x' = x - x u

xp = x - x (Zvp - z) / (Zprp - z)
   = (x Zprp - x z - x Zvp + x z) / (Zprp - z)
   = (x Zprp - x Zvp) / (Zprp - z)
   = x (Zprp - Zvp) / (Zprp - z)

xp = x dp / (Zprp - z), where dp = Zprp - Zvp.
Similarly, substituting u in y':

y' = y - y u

yp = y - y (Zvp - z) / (Zprp - z)
   = (y Zprp - y z - y Zvp + y z) / (Zprp - z)
   = (y Zprp - y Zvp) / (Zprp - z)
   = y (Zprp - Zvp) / (Zprp - z)

yp = y dp / (Zprp - z), where dp = Zprp - Zvp.

zp = Zvp = Zprp - dp

where dp = Zprp - Zvp is the distance of the view plane from the projection reference point.

Thus

xp = x dp / (Zprp - z)

yp = y dp / (Zprp - z)

zp = Zvp

In matrix form this is written as

xh       1   0   0          0                  x
yh   =   0   1   0          0                  y
zh       0   0   -Zvp/dp    Zvp (Zprp/dp)      z
h        0   0   -1/dp      Zprp/dp            1

where h = (Zprp - z) / dp, and

xh = xp h
yh = yp h

(so xp = xh / h and yp = yh / h).

The above is the transformation equation for the perspective projection of a three-dimensional object
when the view plane is perpendicular to the Zv axis and the projection reference point lies on the Zv axis.
A minimal sketch of these equations is given below.
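A minimal sketch of the perspective projection equations above; the function name and the sample values of Zprp and Zvp are illustrative assumptions:

def perspective_project(x, y, z, zprp, zvp):
    # Direct form of the equations above:
    #   xp = x * dp / (zprp - z),  yp = y * dp / (zprp - z),  zp = zvp
    # where dp = zprp - zvp is the distance of the view plane from the
    # projection reference point.
    dp = zprp - zvp
    denom = zprp - z      # assumes the point does not coincide with Zprp
    return (x * dp / denom, y * dp / denom, zvp)

# example: reference point at Zprp = 10, view plane at Zvp = 0 (dp = 10);
# a point at z = -10 is twice as far from the reference point as the view
# plane, so its x and y offsets are halved:
print(perspective_project(2, 2, -10, zprp=10, zvp=0))   # (1.0, 1.0, 0)

Dividing the homogeneous coordinates xh, yh by h = (Zprp - z)/dp in the matrix form above gives the same result.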

Q: Explain the hidden surface elimination methods.

The hidden surface elimination methods are the backface detection method and the depth buffer
(Z-buffer) method, described in detail under the previous question above.
